Abstract: A system for automatically displaying output information on a display surface based on activities in a target environment includes an image capturing device for capturing data of the target environment in real-time, a processing unit for processing the data to determine a location for displaying the output information and a content of the output information, and a video output unit for displaying the output information on the display surface. The video output unit is configured to rotate in a 360° manner, and an angle of rotation of the video output unit is controlled by the processing unit. The image capturing device further captures the output information displayed on the display surface in real time, and provides feedback data to the processing unit in real-time, so as to enable the processing unit to control a focus, a direction, and the output information displayed on the video output unit.
SYSTEM AND METHOD FOR DISPLAYING VIDEO DATA IN A TARGET
ENVIRONMENT
TECHNICAL FIELD
[0001] The present disclosure relates generally to a system and method for displaying video data in a target environment, and more particularly to displaying video data based on a closed-loop process driven by Artificial Intelligence (AI), using real-time interactive visual and audio prompts.
BACKGROUND
[0002] Companies typically spend large amounts of money advertising newly launched products, by advertising on display screens in public places such as shopping centers, train/bus stations, etc. To maximise their profits and return on investment, companies want a maximum number of customers to buy the new product. However, existing display screens used for advertisements only display pre-programmed data. In particular, existing display screens do not display advertisements based on the customers being targeted.
[0003] Further, in Self Check-Out (SCO) stores, a cashier is not physically present at the billing counter to handle billing for a customer. This has led to an increase in theft rates. Thus, there is a need in SCO stores for a system to provide an audio/video alarm based on an observation of a theft. Similarly, in the event a customer is unable to use the SCO system properly, there is a need for a system to alert a store supervisor that the customer may require attention and assistance.
[0004] Furthermore, in the event of a natural disaster (e.g., an earthquake, fire, or tsunami), an alarm may be needed to alert people in public places about the disaster and provide them with directions to a place of greater safety. In emergency situations such as an attack on an army/naval base or a terrorist attack at a public place, current safety procedures do not help to locate the attacker or otherwise provide video/audio instructions to help officials catch the attacker.
[0005] Furthermore, current systems for coaching/training students fail to observe the environment of the students and adapt the coaching/training accordingly. For example, current training systems execute pre-established procedures for training medical or veterinary students and fail to take into account the environment of the students.
[0006] Hence, in view of the above, there exists a need for a system that takes into account an environment of one or more target users, and provides automated audio/video outputs accordingly.
SUMMARY
[0007] In an aspect of the present disclosure, there is provided a system for automatically displaying output information on a display surface based on one or more activities in a target environment. The system includes an image capturing device configured to capture image and video data of the target environment in real-time for recognizing one or more activities. The system may include a processing unit configured to process the image and video data to determine a location for displaying the output information, and determine a content of the output information. The system may further include a video output unit configured to display the output information on the display surface, wherein the video output unit is configured to rotate in a 360° manner, and wherein an angle of rotation of the video output unit is controlled by the processing unit. The image capturing device is further configured to capture the output information displayed on the display surface in real time, and provide feedback data to the processing unit in real-time, so as to enable the processing unit to control a focus, a direction, and the output information displayed on the video output unit.
[0008] In another aspect of the present disclosure, there is provided a method for automatically displaying output information on a display surface based on one or more activities in a target environment. The method includes capturing image and video data of the target environment in real-time for recognizing one or more activities. The method may further include processing the image and video data to determine a location for displaying the output information, and determine a content of the output information. The method may further include displaying the output information on the display surface by a video output unit, wherein the video output unit is configured to rotate in a 360° manner. The method may further include controlling an angle of rotation of the video output unit based on the captured image and video data. The method
may further include capturing the output information displayed on the display surface in real time, and generating feedback data to control a focus, a direction, and the output information displayed on the video output unit.
[0009] In yet another aspect of the present disclosure, there is provided a computer programmable product for automatically displaying output information on a display surface based on one or more activities in a target environment. The computer programmable product comprises a set of instructions, the set of instructions when executed by a processor causes the processor to: capture image and video data of the target environment in real-time for recognizing one or more activities, process the image and video data to determine a location for displaying the output information and determine a content of the output information, generate the output information based on the processed data, and display the output information on the display surface by a video output unit. The video output unit is configured to rotate in a 360° manner, and an angle of rotation of the video output unit is controlled based on the captured image and video data. Further, the output information displayed on the display surface in real time is captured, and feedback data is generated to control a focus, a direction, and the output information displayed on the video output unit.
[0010] Various embodiments of the present disclosure provide a system that captures human behaviour and interacts and communicates with humans to instruct or inform them in a fashion appropriate to the process and environment being observed. The system can take in visual, audio, and other sensor inputs and create visual and audio outputs to form a closed loop interaction governed by the software intelligence operating in the background. The system may further act as an intelligent instructor/coach/supervisor, allowing for automated assurance of optimum performance to standards and prescribed processes. The system creates the opportunity for two-way communication with the user, using the camera for input and the projector for output, controlled by the AI software.
[0011] The system is useful in a scenario where a user needs to be coached or trained. The camera may observe the environmental process and, using the projector, direct the user to act according to the desired outcome and keep a record of the training. Another example would be to train medical or veterinary students to perform specific procedures. The closed loop feedback ensures that the AI software is in real-time control and can alter or correct processes and activities as they occur. This closed-loop, AI-driven process control uses real-time interactive visual and audio prompts/nudges to coach, control, and/or assure optimum process or behavioural outcomes.
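By way of illustration only, the closed loop interaction described above may be summarized procedurally. The following Python sketch is a minimal, non-limiting example; the object and method names (capture_frame, recognize_activities, plan_output, evaluate_projection, and so on) are hypothetical placeholders and do not form part of the disclosure.

```python
# A minimal sketch of the closed-loop capture/process/display cycle.
# The collaborating objects (camera, projector, model) are hypothetical
# placeholders standing in for the image capturing device, the 360-degree
# rotatable video output unit, and the AI platform on the processing unit.
import time


class ClosedLoopDisplay:
    def __init__(self, camera, projector, model):
        self.camera = camera
        self.projector = projector
        self.model = model

    def step(self):
        # 1. Capture the target environment in real time.
        frame = self.camera.capture_frame()

        # 2. Recognize activities, then decide what to display and where.
        activities = self.model.recognize_activities(frame)
        location, content = self.model.plan_output(activities)

        # 3. Rotate the video output unit toward the chosen location
        #    and display the output information.
        self.projector.rotate_to(location.angle)
        self.projector.display(content)

        # 4. Capture the projected output itself and feed it back so the
        #    processing unit can correct focus, direction, and content.
        feedback = self.camera.capture_frame()
        correction = self.model.evaluate_projection(feedback, content, location)
        self.projector.adjust(focus=correction.focus,
                              direction=correction.direction)

    def run(self, period_s=0.1):
        # Repeat the cycle so prompts track activities as they occur.
        while True:
            self.step()
            time.sleep(period_s)
```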
[0012] It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
[0014] FIG. 1 is a diagram of an example system for displaying video data based on activities in a target environment, in accordance with an embodiment of the present disclosure;
[0015] FIG. 2 is a diagram of an example operation for processing an image frame captured by the image capturing device, in accordance with an embodiment of the present disclosure;
[0016] FIG. 3A is a diagram of an example video output unit for projecting pre-defined image/video data generated by a processing unit on a display surface, in accordance with an embodiment of the present disclosure;
[0017] FIG. 3B is a diagram of an example mechanism for rotating the motorized mirror around vertical and horizontal axes in a mirror plane, in accordance with an embodiment of the present disclosure;
[0018] FIG. 3C illustrates an example motor including two electrically controlled levers, in accordance with an embodiment of the present disclosure; and
[0019] FIG. 4 is an example flowchart illustrating a method for automatically displaying video data on a display surface based on one or more activities in a target environment.
[0020] In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0021] The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although the best mode of carrying out the present disclosure has been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
[0022] FIG. 1 is a diagram of an example system 100 for displaying video data based on activities in a target environment 101, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, the target environment 101 may pertain to a self-checkout (SCO) store, and although not shown, may include entities such as products, conveyors, and industrial robots, and activities such as an operator entering or exiting the scene; picking, dropping, moving, weighing, or scanning items; operating a touchscreen display; and paying through cash, mobile electronic transactions, or a credit card. However, it would be apparent to one of ordinary skill in the art that the target environment 101 may pertain to general industrial areas, military/naval bases, training halls, security screening areas, shopping centers, and restaurants. In addition to the target environment 101 specified above, the system 100 may be useful in retail automation, customer up-selling, employee coaching, employee training, logistical automation (goods inwards/outwards), medical direction (e.g., surgical training, or surgical expert training with visual cues), and emergency instructions in the event of a fire, earthquake, or shooting attack.
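As a purely illustrative example of how recognized SCO activities might be mapped to output actions by the processing unit, consider the following sketch. The activity labels, content identifiers, and projection targets are hypothetical and are not an exhaustive or prescribed set.

```python
# Hypothetical mapping from recognized activities in a self-checkout (SCO)
# store to the content and projection target of the output information.
SCO_RESPONSES = {
    "item_scanned":        ("running_total",        "screen_zone"),
    "item_not_scanned":    ("please_scan_prompt",   "item_zone"),
    "suspected_theft":     ("audio_video_alarm",    "operator_zone"),
    "customer_struggling": ("supervisor_alert",     "supervisor_desk"),
    "payment_started":     ("payment_instructions", "terminal_zone"),
}


def respond(activity: str):
    """Return (content, target) for a recognized activity, or (None, None)
    for activities this sketch does not handle."""
    return SCO_RESPONSES.get(activity, (None, None))
```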
CLAIMS:
1. A system for automatically displaying output information on a display surface based on one or more activities in a target environment, the system comprising:
an image capturing device configured to capture image and video data of the target environment in real-time for recognizing one or more activities;
a processing unit configured to process the image and video data to determine a location for displaying the output information, and determine a content of the output information; and
a video output unit configured to display the output information on the display surface, wherein the video output unit is configured to rotate in a 360° manner, and wherein an angle of rotation of the video output unit is controlled by the processing unit,
wherein the image capturing device is further configured to capture the output information displayed on the display surface in real time, and provide feedback data to the processing unit in real-time, so as to enable the processing unit to control a focus, a direction, and the content of the output information displayed on the video output unit.
2. The system of claim 1 further comprising an audio recording device configured to record audio data of the target environment and transmit the recorded audio data to the processing unit, and wherein the audio recording device is configured to record audio data of the output information as feedback data, and provide the feedback data to the processing unit.
3. The system of claim 1, wherein the processing unit includes an Artificial Intelligence (AI) platform that is configured to direct visual information to one or more predefined locations in the target environment.
4. The system of claim 1, wherein the processing unit includes a graphical processing unit (GPU) for processing video/image data.
5. The system of claim 1, wherein the output information comprises pre-defined video and audio data including at least one of: alarms, notifications, advertisements, instructions, and training videos.
6. The system of claim 1, wherein the video output unit comprises a projector, and the display surface comprises at least one of: a white projection screen, a grey projection screen, and a white wall for displaying a projected image.
7. The system of claim 1, wherein the processing unit is configured to receive a circular input image frame from the image capturing device, create a flattened representation of the circular input image frame, and generate a grid view image of the flattened representation, wherein the grid view image provides information regarding position of one or more entities in the circular input image frame, and wherein the processing unit is configured to generate and display the output information based on the grid view image.
8. The system of claim 1 further comprising a sensor unit configured to detect one or more events and changes in the target environment, wherein the sensor unit includes at least one of: a radar, an x-ray, a scanner, a motion sensor, a temperature sensor, a gas sensor, and a fire sensor.
9. The system of claim 1, wherein the video output unit comprises:
a light source;
a lens;
a motorized mirror configured to be moved in horizontal and vertical directions to project a predefined image onto one or more positions of the display surface, wherein a movement of the motorized mirror is controlled by one or more electrically controlled levers operated by the processing unit; and
a motorized focus system configured to guide a light from the light source reflected by the lens towards the motorized mirror.
10. A method for automatically displaying output information on a display surface based on one or more activities in a target environment, the method comprising:
capturing image and video data of the target environment in real-time for recognizing one or more activities;
processing the image and video data to determine a location for displaying the output information, and determining a content of the output information;
displaying the output information on the display surface by a video output unit, wherein the video output unit is configured to rotate in a 360° manner;
controlling an angle of rotation of the video output unit based on the captured image and video data; and
capturing the output information displayed on the display surface in real time, and generating feedback data to control a focus, a direction, and the content of the output information displayed on the video output unit.
11. The method of claim 10 further comprising:
recording audio data of the target environment and transmitting the recorded audio data; and
recording audio data of the output information as feedback data, and providing the feedback data.
12. The method of claim 10 further comprising directing visual information to one or more predefined locations in the target environment using an AI platform.
13. The method of claim 10 further comprising processing the video/image data using a graphical processing unit (GPU).
14. The method of claim 10, wherein the output information comprises pre-defined video and audio data including at least one of: alarms, notifications, advertisements, instructions, and training videos.
15. The method of claim 10, wherein the video output unit comprises a projector, and the display surface comprises at least one of: a white projection screen, a grey projection screen, and a white wall for displaying a projected image.
16. The method of claim 10 further comprising receiving a circular input image frame, creating a flattened representation of the circular input image frame, and generating a grid view image of the flattened representation, wherein the grid view image provides information regarding position of one or more entities in the circular input image frame, and generating and displaying the output information based on the grid view image.
17. The method of claim 10 further comprising detecting one or more events and changes in the target environment using a sensor selected from a group consisting of: a radar, an x-ray, a scanner, a motion sensor, a temperature sensor, a gas sensor, and a fire sensor.
18. The method of claim 10, wherein the video output unit comprises:
a light source;
a lens;
a motorized mirror configured to be moved in horizontal and vertical directions to project a predefined image onto one or more positions of the display surface, wherein a movement of the motorized mirror is controlled by one or more electrically controlled levers; and
a motorized focus system configured to guide a light from the light source reflected by the lens towards the motorized mirror.
19. A computer programmable product for automatically displaying output information on a display surface based on one or more activities in a target environment, the computer programmable product comprising a set of instructions, the set of instructions when executed by a processor causes the processor to:
capture image and video data of the target environment in real-time for recognizing one or more activities;
process the image and video data to determine a location for displaying the output information, and determine a content of the output information;
display the output information on the display surface by a video output unit, wherein the video output unit is configured to rotate in a 360° manner;
control an angle of rotation of the video output unit based on the captured image and video data; and
capture the output information displayed on the display surface in real time, and generate feedback data to control a focus, a direction, and the content of the output information displayed on the video output unit.
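For implementers, a minimal sketch of the circular-to-grid processing recited in claims 7 and 16 follows. It assumes OpenCV's warpPolar as one possible way to create the flattened representation; the claims do not prescribe any particular unwrapping method, and the grid dimensions chosen here are arbitrary.

```python
# Sketch, assuming OpenCV: unwrap a circular (e.g. fisheye) input frame into
# a flattened panorama, overlay a grid, and report entity positions as cells.
import cv2
import numpy as np


def flatten_circular_frame(frame: np.ndarray) -> np.ndarray:
    """Create a flattened (rectangular) representation of a circular frame."""
    h, w = frame.shape[:2]
    center = (w / 2, h / 2)
    max_radius = min(center)
    # warpPolar unwraps the disc: destination rows sweep the angle (0-360 deg)
    # and columns sweep the radius outward from the center.
    flat = cv2.warpPolar(frame, (w, h), center, max_radius,
                         cv2.WARP_POLAR_LINEAR | cv2.INTER_LINEAR)
    # Rotate so the angular sweep runs left-to-right like a panorama.
    return cv2.rotate(flat, cv2.ROTATE_90_COUNTERCLOCKWISE)


def to_grid_view(flat: np.ndarray, rows: int = 4, cols: int = 8) -> np.ndarray:
    """Overlay a rows x cols grid on the flattened view."""
    grid = flat.copy()
    h, w = grid.shape[:2]
    for r in range(1, rows):
        cv2.line(grid, (0, r * h // rows), (w, r * h // rows), (0, 255, 0), 1)
    for c in range(1, cols):
        cv2.line(grid, (c * w // cols, 0), (c * w // cols, h), (0, 255, 0), 1)
    return grid


def cell_of(x: int, y: int, shape, rows: int = 4, cols: int = 8):
    """Report the (row, column) grid cell of an entity detected at (x, y)."""
    h, w = shape[:2]
    return (min(y * rows // h, rows - 1), min(x * cols // w, cols - 1))
```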