ABSTRACT
METHOD AND SYSTEM FOR CONTROLLING DISPLAY INTERFACE OF VEHICLE
The present disclosure describes a system (100) and a method (200) for controlling a display interface of a vehicle. The system (100) comprises at least one image capturing device (104) attached to a wearable headgear of a user and a processing unit (106). The at least one image capturing device (104) is configured to capture a plurality of images of a field of vision of the user. The processing unit (106) is configured to receive the plurality of images, identify the display interface (102) in a set of images of the plurality of images, and control the display interface (102) in response to identification of the display interface in the set of images.
Figure 1
METHOD AND SYSTEM FOR CONTROLLING DISPLAY INTERFACE OF VEHICLE
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority from Indian Provisional Patent Application No. 202221060356 filed on 21/10/2022, the entirety of which is incorporated herein by reference.
TECHNICAL FIELD
The present disclosure generally relates to a display interface of a vehicle. Particularly, the present disclosure relates to a system and a method for controlling a display interface of a vehicle.
BACKGROUND
With the rapid development of the automobile industry and rising living standards, the multimedia display screen has become one of the important accessories in an automobile, and its diversified functions and visual touch operation meet the requirements of most users.

The display screens used in vehicles have grown larger and brighter. Such a screen has a strong visual impact on the user (i.e., the driver of the vehicle). The driver is easily distracted by the intense light of the display screen and is unable to pay full attention to driving. Such distraction is a serious potential safety hazard and may lead to fatal accidents on the roads.

To avoid such distraction, some vehicles include a controlling switch, such as a panel button or a touch screen, to control the display screen. The driver manually operates the controlling switch to control the display screen of the vehicle. However, this manual operation diverts the driver's attention from driving to operating the display. Operating the display while driving therefore fails to ensure the driver's undivided attention on the road.
Therefore, there exists a need for a mechanism to control the display interface without manual operation, thereby overcoming one or more of the problems set forth above.
SUMMARY
An object of the present disclosure is to provide a system for controlling display interface of a vehicle.
Another object of the present disclosure is to provide a method for controlling display interface of a vehicle.
In accordance with the first aspect of the present disclosure, there is provided a system for controlling a display interface of a vehicle. The system comprises at least one image capturing device attached to a wearable headgear of a user and a processing unit. The at least one image capturing device is configured to capture a plurality of images of a field of vision of the user. The processing unit is configured to receive the plurality of images, identify the display interface in a set of images of the plurality of images, and control the display interface in response to identification of the display interface in the set of images.
The present disclosure provides a system for controlling a display interface of a vehicle. Beneficially, the system automatically controls the display interface of the vehicle based on the requirement of the user. The disclosed system beneficially eliminates the user's manual operation of the display interface. Moreover, the disclosed system ensures that the driver's attention remains on driving the vehicle rather than on operating the display interface. Moreover, the disclosed system beneficially turns off the display interface when the user does not require any information from it, and the display interface is turned on automatically when the driver requires some information from it. Beneficially, the system is capable of identifying the situation in which the driver of the vehicle requires information from the display interface.
In accordance with the second aspect of the present disclosure, there is provided a method for controlling a display interface of a vehicle. The method comprises capturing a plurality of images of a field of vision of a user, identifying the display interface in a set of images of the plurality of images, and controlling the display interface in response to identification of the display interface in the set of images.
Additional aspects, advantages, features, and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments constructed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 illustrates a block diagram of a system for controlling a display interface of a vehicle, in accordance with an aspect of the present disclosure.
FIG. 2 illustrates a flow chart of a method for controlling a display interface of a vehicle, in accordance with another aspect of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
The description set forth below in connection with the appended drawings is intended as a description of certain embodiments of a system for controlling a display interface of a vehicle and is not intended to represent the only forms that may be developed or utilized. The description sets forth the various structures and/or functions in connection with the illustrated embodiments; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprise”, “comprises”, “comprising”, “include(s)”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, or system that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system. In other words, one or more elements in a system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings which are shown by way of illustration-specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
The present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
As used herein, the terms “display interface” and “display unit” are used interchangeably and refer to a digital display, analog display, or a combination thereof capable of displaying various information related to the vehicle. The display interface also allows the driver to interact with the vehicle's information and entertainment system. The display interface may display information about at least one of: vehicle speed, RPM of the powertrain, fuel level, odometer reading, navigation maps, audio and climate control settings, warning messages, and so forth. The display interface may comprise an input mechanism such as a touchscreen. The display interface may be capable of presenting information including text, two-dimensional visual images, and/or three-dimensional visual images. Additionally, the display interface may present information in the form of audio and haptics. The display interface may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. Alternatively, the display interface may utilize other display technologies.
As used herein, the terms “image capturing device”, “imaging device”, and “imaging unit” are used interchangeably and refer to a device that is capable of capturing still or moving images. The image capturing device comprises a lens, an image sensor, and an image processor. The lens focuses light onto the image sensor, which converts the light into an electrical signal. The image processor then converts the electrical signal into a digital image.
As used herein, the terms “processing unit”, “data processing unit”, and “processor” are used interchangeably and refer to a computational element that is operable to respond to and process image signals and generate responsive commands to control other sub-systems in a system. Optionally, the processing unit includes, but is not limited to, a microprocessor, a microcontroller, an image signal processor, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a combination thereof. Furthermore, the term “processor” may refer to one or more individual processors, processing devices, and various elements associated with a processing device that may be shared by other processing devices. Furthermore, the processing unit may comprise ARM Cortex-M series processors, such as the Cortex-M4 or Cortex-M7, or any similar processor designed to handle real-time tasks with high performance and low power consumption. Furthermore, the processing unit may comprise custom and/or proprietary processors.
As used herein, the terms “communication unit” and “communication module” are used interchangeably and refer to an arrangement of interconnected programmable and/or non-programmable components that are configured to facilitate data communication between the system and the display interface of the vehicle. The communication unit may utilize Wi-Fi, Bluetooth, Zigbee, or a combination thereof to communicate between the system and the display interface of the vehicle. Additionally, the communication module utilizes wired or wireless communication that can be carried out via any number of known protocols, including, but not limited to, Internet Protocol (IP), Wireless Access Protocol (WAP), Frame Relay, or Asynchronous Transfer Mode (ATM). Moreover, although the communication module described herein is implemented with TCP/IP communications protocols, the communication module may also be implemented using IPX, AppleTalk, IPv6, NetBIOS, OSI, any tunneling protocol (e.g., IPsec, SSH), or any number of existing or future protocols.
As used herein, the terms “IMU sensor”, “sensor arrangement”, and “sensors” are used interchangeably and refer to an inertial measurement unit sensor that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. It is to be understood that accelerometers measure the acceleration of the body in three dimensions, gyroscopes measure the angular rate of the body in three dimensions, and magnetometers measure the direction of the magnetic field.
As used herein, the term “user” refers to a person operating the vehicle.
As used herein, the term “memory unit” refers to components or storage devices that are integrated into the system to store reference images. The memory unit plays a crucial role in recording and retaining important information associated with the control events of the display interface for various purposes. Furthermore, the memory unit performs historical data logging to enable the analysis of the user behavior pattern. Furthermore, the memory unit may store firmware and software updates of the system. This allows for remote updates and upgrades, ensuring that the system operates with the latest features, security patches, and performance improvements.
As used herein, the terms “wearable headgear”, “headgear”, “safety headgear”, and “helmet” are used interchangeably and refer to a type of helmet that is specifically designed to protect the head of a rider of a vehicle. The wearable headgear comprises a hard outer shell and a soft inner liner. The hard outer shell is designed to distribute the impact of a crash, while the soft inner liner is designed to absorb energy and protect the user’s head from injury.
As used herein, the term “field of vision” refers to the angular extent of the scene that can be captured by the image capturing device. The field of vision is an area in front of the image capturing device and it is measured in degrees. The field of vision can be measured horizontally, vertically, or diagonally.
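By way of a worked illustration only (standard camera optics, not specific to the present disclosure), the field of vision follows from the sensor dimension d and the lens focal length f as FOV = 2·arctan(d / (2f)). For example, for an assumed 1/3-inch sensor of width d = 4.8 mm and an assumed focal length f = 2.8 mm, the horizontal field of vision is 2·arctan(4.8/5.6) ≈ 81 degrees; the vertical and diagonal fields of vision follow from the corresponding sensor dimensions.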
As used herein, the term “communicably coupled” refers to a bi-directional connection between the various components of the system and entities outside the system. The bi-directional connection between the various components of the system enables the exchange of data between two or more components of the system. Similarly, the bi-directional connection between the system and other elements/modules enables the exchange of data between the system and the other elements/modules.
Figure 1 describes, in accordance with an embodiment, a system 100 for controlling a display interface 102 of a vehicle. The system 100 comprises at least one image capturing device 104 attached to a wearable headgear of a user and a processing unit 106. The at least one image capturing device 104 is configured to capture a plurality of images of the field of vision of the user. The processing unit 106 is configured to receive the plurality of images, identify the display interface 102 in a set of images of the plurality of images, and control the display interface 102 in response to identification of the display interface 102 in the set of images.
The present disclosure provides a system 100 for controlling a display interface 102 of a vehicle. Beneficially, the system 100 automatically controls the display interface 102 of the vehicle based on the requirement of the user. The system 100 beneficially eliminates the user's manual operation of the display interface 102. Moreover, the system 100 ensures that the driver's attention remains on driving the vehicle rather than on operating the display interface 102. Moreover, the system beneficially turns off the display interface 102 when the user does not require any information from it, and the display interface 102 is turned on automatically when the driver requires some information from it. Beneficially, the system 100 is capable of identifying the situation in which the driver of the vehicle requires information from the display interface 102.
It is to be understood that the image capturing device 104 attached to the wearable headgear of the user is selected from a group of still image cameras, motion cameras, infrared cameras, action cameras, and so forth. Furthermore, it is to be understood that the image capturing device 104 continuously captures the plurality of images in the field of vision of the user.
In an embodiment, the system 100 comprises a memory unit 108 configured to store a plurality of reference images comprising reference positions of the display interface 102 in the field of vision of the user. Beneficially, the system 100 is configured to update the plurality of reference images according to different users of the vehicle. More beneficially, the memory unit 108 is configured to store historical instances of controlling the display interface 102 to enable identification and analysis of patterns of control of the display interface 102.
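For illustrative purposes only, one non-limiting realization of such a memory unit is sketched below in Python; the class name, the on-disk layout (PNG reference images and a JSON-lines event log), and the file naming scheme are assumptions of this sketch rather than features of the disclosure.

```python
import json
import time
from pathlib import Path

class MemoryUnit:
    """Stores per-user reference images of the display interface 102 and a
    historical log of control events, as described for memory unit 108."""

    def __init__(self, root="memory"):
        self.root = Path(root)
        (self.root / "references").mkdir(parents=True, exist_ok=True)
        self.log_path = self.root / "events.jsonl"

    def reference_paths(self, user_id):
        # Reference positions of the display interface for this user, so
        # the stored set can be updated per user of the vehicle.
        return sorted((self.root / "references").glob(f"{user_id}_*.png"))

    def record_event(self, command):
        # Append a historical instance of controlling the display interface,
        # enabling later analysis of the user's control patterns.
        with self.log_path.open("a") as fh:
            fh.write(json.dumps({"time": time.time(), "command": command}) + "\n")
```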
In an embodiment, the processing unit 106 is configured to compare the received plurality of images with the plurality of reference images to identify the display interface 102 in the set of images. Beneficially, the processing unit 106 employs computer vision algorithms to compare the received plurality of images with the plurality of reference images to identify the display interface 102 in the set of images. Beneficially, the processing unit 106 is capable of quickly identifying the display interface 102 in the set of images.
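As a non-limiting illustration of such a comparison, the sketch below uses normalized template matching, one plausible computer-vision technique for matching captured frames against stored reference images; the OpenCV calls are standard, but the function names and the 0.8 score threshold are assumptions of this sketch.

```python
import cv2  # OpenCV; an assumed choice of computer-vision library

MATCH_THRESHOLD = 0.8  # assumed normalized-correlation threshold, tunable

def display_in_frame(frame_gray, reference_gray):
    # Slide the reference view of the display interface 102 over the
    # captured frame and take the best normalized correlation score.
    scores = cv2.matchTemplate(frame_gray, reference_gray, cv2.TM_CCOEFF_NORMED)
    _, best, _, _ = cv2.minMaxLoc(scores)
    return best >= MATCH_THRESHOLD

def identify_display(frames, references):
    # The display interface is identified in the set of images if any
    # captured frame matches any stored reference position.
    return any(display_in_frame(f, r) for f in frames for r in references)
```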
In an embodiment, the system 100 comprises a communication unit 110 configured to communicably couple the processing unit 106 and the display interface 102 of the vehicle. Beneficially, the communication unit 110 establishes a secured communication between the processing unit 106 and the display interface 102 of the vehicle.
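A minimal sketch of forwarding a command signal over TCP/IP, one of the protocol families described above for the communication unit 110, is given below; the host address, port number, and JSON framing are illustrative assumptions.

```python
import json
import socket

def send_command(command, host="192.168.4.1", port=5555):
    # Forward a command signal from the processing unit 106 to the
    # display interface 102 over a TCP connection. The address, port,
    # and newline-delimited JSON framing are assumptions of this sketch.
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(json.dumps({"cmd": command}).encode("utf-8") + b"\n")
```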
In an embodiment, the processing unit 106 is configured to generate a command signal to control the display interface 102 in response to identification of the display interface 102 in the set of images and send the command signal to the display interface 102. It is to be understood that the command signal is communicated to the display interface 102 by the communication unit 110. Beneficially, the command signal comprises an instruction to turn on the display interface 102. It is to be understood that the command signal may comprise an instruction to turn off the display interface 102 when the display interface 102 is not identified in the set of images for a time period greater than a threshold time period.
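The on/off command logic described above may be sketched as follows. The command payload strings are assumptions of this sketch; the three-second threshold reflects the example value given later in the exemplary embodiment, and the `send` callable could be the `send_command` sketch above.

```python
import time

TURN_ON = "DISPLAY_ON"    # assumed payloads; the disclosure requires only an
TURN_OFF = "DISPLAY_OFF"  # instruction to turn the display on or off

THRESHOLD_SECONDS = 3.0   # threshold time period; three seconds per the example

class CommandGenerator:
    """Tracks when the display was last identified and emits on/off commands."""

    def __init__(self, send=print):
        self.send = send          # callable that forwards the command signal
        self.last_seen = None     # monotonic time the display was last identified
        self.display_on = False

    def update(self, display_identified, now=None):
        now = time.monotonic() if now is None else now
        if display_identified:
            self.last_seen = now
            if not self.display_on:       # identified: turn the display on
                self.send(TURN_ON)
                self.display_on = True
        elif (self.display_on and self.last_seen is not None
              and now - self.last_seen > THRESHOLD_SECONDS):
            self.send(TURN_OFF)           # unseen beyond threshold: turn off
            self.display_on = False
```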
In an embodiment, the system 100 comprises at least one inertial measurement unit (IMU) sensor 112 attached to each of the safety headgear and the vehicle, wherein the processing unit 106 is configured to determine traveling terrain of the vehicle based on data received from the IMU sensors 112. The traveling terrain of the vehicle may comprise smooth traveling terrain such as highways and paved roads or bumpy/rough traveling terrain such as off-road and unpaved roads. Beneficially, the data received from the IMU sensor 112 enables identification of the traveling terrain of the vehicle to improve the accuracy of the system 100.
In an embodiment, the processing unit 106 is configured to collate the data received from the IMU sensors 112 to determine traveling terrain of the vehicle. Beneficially, the processing unit 106 collates the data received from the IMU sensors 112 attached to each of the safety headgear and the vehicle, to increase the accuracy of the determination of the traveling terrain of the vehicle.
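One plausible, non-limiting collation rule is sketched below: the variance of vertical acceleration from the headgear IMU and the vehicle IMU is averaged and compared against a calibration threshold. Both the averaging rule and the threshold value are assumptions of this sketch; the disclosure does not fix a particular fusion method.

```python
import numpy as np

ROUGH_VARIANCE = 2.0  # assumed threshold in (m/s^2)^2, to be calibrated

def traveling_terrain(helmet_accel_z, vehicle_accel_z):
    # Collate short windows of vertical acceleration from the IMU sensor
    # on the safety headgear and the IMU sensor on the vehicle: high
    # combined variance suggests bumpy/rough terrain, low variance suggests
    # smooth terrain such as highways and paved roads.
    combined = (np.var(helmet_accel_z) + np.var(vehicle_accel_z)) / 2.0
    return "rough" if combined > ROUGH_VARIANCE else "smooth"
```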
In an embodiment, the processing unit 106 is configured to determine head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear of the user. Beneficially, the head movement of the user is determined to accurately determine whether the user wishes to look at the display interface 102 so that the display interface 102 can be turned on by the system.
In an embodiment, the processing unit 106 is configured to determine an intended head movement from the head movements based on a user behavior pattern. Beneficially, the processing unit 106 is configured to determine whether the head movement of the user is intentional or due to the traveling terrain of the vehicle.
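A minimal sketch of one way to separate intended head movements from terrain-induced ones is given below: rotation common to the helmet and the vehicle is discounted, and only the residual helmet rotation counts as deliberate. The subtraction rule and the rate threshold are assumptions of this sketch; in the disclosed system the threshold could equally be derived from the logged user behavior pattern.

```python
import numpy as np

INTENT_YAW_RATE = 0.5  # rad/s; assumed threshold for a deliberate head turn

def intended_head_movement(helmet_yaw_rate, vehicle_yaw_rate):
    # Rotation common to the helmet and the vehicle is attributed to the
    # traveling terrain or the vehicle turning; only the residual helmet
    # rotation is treated as a deliberate head movement by the user.
    residual = np.asarray(helmet_yaw_rate) - np.asarray(vehicle_yaw_rate)
    return float(np.max(np.abs(residual))) > INTENT_YAW_RATE
```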
In an embodiment, the processing unit 106 is configured to control the display interface 102 in response to the determined intended head movement and identification of the display interface 102 in the set of images. Beneficially, the processing unit 106 generates a command signal when an intentional head movement of the user to look at the display interface 102 is identified together with identification of the display interface 102 in the set of images. Beneficially, the plurality of images of the field of vision of the user and the intended head movements of the user are collated together to control the display interface 102 more accurately.
In an embodiment, the system 100 comprises at least one image capturing device 104 attached to a wearable headgear of a user and a processing unit 106. The at least one image capturing device 104 is configured to capture a plurality of images of field of vision of the user. The processing unit 106 is configured to receive the plurality of images, identify the display interface 102 in a set of images of the plurality of images, and control the display interface 102 in response to identification of the display interface in the set of images. Furthermore, the system 100 comprises a memory unit 108 configured to store a plurality of reference images comprising reference positions of the display interface 102 in the field of vision of the user. Furthermore, the processing unit 106 is configured to compare the received plurality of images with the plurality of reference images to identify the display interface 102 in the set of images. Furthermore, the system 100 comprises a communication unit 110 configured to communicably couple the processing unit 106 and the display interface 102 of the vehicle. Furthermore, the processing unit 106 is configured to generate a command signal to control the display interface 102 in response to identification of the display interface 102 in the set of images and send the command signal to the display interface 102. Furthermore, the system 100 comprises at least one inertial measurement unit (IMU) sensor 112 attached to each of the safety headgear and the vehicle, wherein the processing unit 106 is configured to determine traveling terrain of the vehicle based on data received from the IMU sensors 112. Furthermore, the processing unit 106 is configured to collate the data received from the IMU sensors 112 to determine traveling terrain of the vehicle. Furthermore, the processing unit 106 is configured to determine head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear of the user. Furthermore, the processing unit 106 is configured to determine an intended head movement from the head movements based on a user behavior pattern. Furthermore, the processing unit 106 is configured to control the display interface 102 in response to determined intended head movement and identification of the display interface 102 in the set of images.
In an exemplary embodiment, when the user is driving the vehicle, the at least one image capturing device 104 attached to the wearable headgear of the user captures the plurality of images of the field of vision of the user. The plurality of images are received by the processing unit 106 to identify the display interface 102 in a set of images of the plurality of images. The head movement of the user is also identified to determine whether the user intends to look at the display interface 102. Once the processing unit 106 determines that the user is looking at the display interface 102, the processing unit 106 generates a command signal to turn on the display interface 102. After the display interface 102 is turned on, the at least one image capturing device 104 continues to capture the plurality of images of the field of vision of the user, and the processing unit 106 continues to identify the display interface 102 in a set of images of the plurality of images. When the display interface 102 is not identified in the set of images for a time period greater than a threshold time period, the processing unit 106 generates a command signal to turn off the display interface 102. In an example, the threshold time period is three seconds.
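Tying the preceding sketches together, a minimal end-to-end control loop for this exemplary embodiment might read as follows. It reuses identify_display, intended_head_movement, and CommandGenerator from the sketches above; capture_frame, imu_helmet_yaw, and imu_vehicle_yaw are hypothetical callables standing in for the headgear-mounted camera and the two IMU sensors 112.

```python
def control_loop(capture_frame, references, imu_helmet_yaw, imu_vehicle_yaw,
                 commander):
    # commander is a CommandGenerator instance; references are grayscale
    # reference images of the display interface 102 from the memory unit 108.
    while True:
        frame = capture_frame()                       # headgear camera frame
        seen = identify_display([frame], references)  # display in view?
        if seen and not intended_head_movement(imu_helmet_yaw(),
                                               imu_vehicle_yaw()):
            seen = False   # glance attributed to rough terrain, not intent
        commander.update(seen)  # on when seen; off after 3 s unseen
```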
Figure 2 describes a method 200 for controlling a display interface 102 of a vehicle. The method 200 starts at step 202 and finishes at step 206. At step 202, the method 200 comprises capturing a plurality of images of a field of vision of a user. At step 204, the method 200 comprises identifying the display interface 102 in a set of images of the plurality of images. At step 206, the method 200 comprises controlling the display interface 102 in response to identification of the display interface 102 in the set of images.
In an embodiment, the method 200 comprises storing a plurality of reference images comprising reference positions of the display interface 102 in the field of vision of the user.
In an embodiment, the method 200 comprises comparing the received plurality of images with the plurality of reference images to identify the display interface 102 in the set of images.
In an embodiment, the method 200 comprises generating a command signal to control the display interface 102 in response to identification of the display interface 102 in the set of images and sending the command signal to the display interface 102.
In an embodiment, the method 200 comprises collating data received from IMU sensors 112 to determine traveling terrain of the vehicle.
In an embodiment, the method 200 comprises determining head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear of the user.
In an embodiment, the method 200 comprises determining an intended head movement from the head movements based on a user behavior pattern.
In an embodiment, the method 200 comprises controlling the display interface in response to determined intended head movement and identification of the display interface 102 in the set of images.
In an embodiment, the method 200 comprises capturing a plurality of images of a field of vision of a user, identifying the display interface 102 in a set of images of the plurality of images, and controlling the display interface 102 in response to identification of the display interface 102 in the set of images. Furthermore, the method 200 comprises storing a plurality of reference images comprising reference positions of the display interface 102 in the field of vision of the user. Furthermore, the method 200 comprises comparing the received plurality of images with the plurality of reference images to identify the display interface 102 in the set of images. Furthermore, the method 200 comprises generating a command signal to control the display interface 102 in response to identification of the display interface 102 in the set of images and sending the command signal to the display interface 102. Furthermore, the method 200 comprises collating data received from IMU sensors 112 to determine traveling terrain of the vehicle. Furthermore, the method 200 comprises determining head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear of the user. Furthermore, the method 200 comprises determining an intended head movement from the head movements based on a user behavior pattern. Furthermore, the method 200 comprises controlling the display interface in response to determined intended head movement and identification of the display interface 102 in the set of images.
It would be appreciated that all the explanations and embodiments of the system 100 also apply mutatis mutandis to the method 200.
In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms “disposed”, “mounted,” and “connected” are to be construed broadly, and may for example be fixedly connected, detachably connected, or integrally connected, either mechanically or electrically. They may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Modifications to embodiments and combinations of different embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, and “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings, and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
CLAIMS
WE CLAIM:
1. A system (100) for controlling a display interface (102) of a vehicle, the system (100) comprises:
- at least one image capturing device (104) attached to a wearable headgear of a user and configured to capture a plurality of images of field of vision of the user; and
- a processing unit (106) configured to:
- receive the plurality of images;
- identify the display interface (102) in a set of images of the plurality of images; and
- control the display interface (102) in response to identification of the display interface in the set of images.
2. The system (100) as claimed in claim 1, wherein the system (100) comprises a memory unit (108) configured to store a plurality of reference images comprising reference positions of the display interface (102) in the field of vision of the user.
3. The system (100) as claimed in claim 2, wherein the processing unit (106) is configured to compare the received plurality of images with the plurality of reference images to identify the display interface (102) in the set of images.
4. The system (100) as claimed in claim 1, wherein the system (100) comprises a communication unit (110) configured to communicably couple the processing unit (106) and the display interface (102) of the vehicle.
5. The system (100) as claimed in claim 4, wherein the processing unit (106) is configured to generate a command signal to control the display interface (102) in response to identification of the display interface (102) in the set of images and send the command signal to the display interface (102).
6. The system (100) as claimed in claim 1, wherein the system (100) comprises at least one IMU sensor (112) attached to each of the safety headgear and the vehicle, wherein the processing unit (106) is configured to determine traveling terrain of the vehicle based on data received from the IMU sensors (112).
7. The system (100) as claimed in claim 6, wherein the processing unit (106) is configured to collate the data received from the IMU sensors (112) to determine traveling terrain of the vehicle.
8. The system (100) as claimed in claim 6, wherein the processing unit (106) is configured to determine head movements of the user based on the data received from the IMU sensor (112) attached to the safety headgear of the user.
9. The system (100) as claimed in claim 8, wherein the processing unit (106) is configured to determine an intended head movement from the head movements based on a user behavior pattern.
10. The system (100) as claimed in claims 1 and 9, wherein the processing unit (106) is configured to control the display interface (102) in response to determined intended head movement and identification of the display interface (102) in the set of images.
11. A method (200) for controlling a display interface (102) of a vehicle, the method (200) comprising:
- capturing a plurality of images of field of vision of a user;
- identifying the display interface (102) in a set of images of the plurality of images; and
- controlling the display interface (102) in response to identification of the display interface (102) in the set of images.
12. The method (200) as claimed in claim 11, wherein the method (200) comprises storing a plurality of reference images comprising reference positions of the display interface (102) in the field of vision of the user.
13. The method (200) as claimed in claim 11, wherein the method (200) comprises comparing the received plurality of images with the plurality of reference images to identify the display interface (102) in the set of images.
14. The method (200) as claimed in claim 11, wherein the method (200) comprises generating a command signal to control the display interface (102) in response to identification of the display interface (102) in the set of images and sending the command signal to the display interface (102).
15. The method (200) as claimed in claim 11, wherein the method (200) comprises collating data received from IMU sensors (112) to determine traveling terrain of the vehicle.
16. The method (200) as claimed in claim 11, wherein the method (200) comprises determining head movements of the user based on the data received from the IMU sensor (112) attached to the safety headgear of the user.
17. The method (200) as claimed in claim 11, wherein the method (200) comprises determining an intended head movement from the head movements based on a user behavior pattern.
18. The method (200) as claimed in claim 11, wherein the method (200) comprises controlling the display interface in response to determined intended head movement and identification of the display interface (102) in the set of images.