DESC:SYSTEM AND METHOD FOR DETECTING SAFETY HEADGEAR OF USER OPERATING A VEHICLE
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority from Indian Provisional Patent Application No. 202221060355 filed on 21/10/2022, the entirety of which is incorporated herein by reference.
TECHNICAL FIELD
The present disclosure generally relates to the detection of safety headgear of a user of a vehicle. Particularly, the present disclosure relates to a system and a method for monitoring position and orientation of a safety headgear of a user operating a vehicle.
BACKGROUND
Recently, transportation has developed rapidly owing to population growth and advances in personal mobility solutions. However, with the increase in personal mobility, the probability of road accidents has also increased for a number of reasons. Such accidents may sometimes lead to the loss of life of the user of the vehicle.
Typically, the safety of the rider of a personal mobility vehicle, such as a motorcycle or a scooter, is compromised owing to the high exposure of the rider's body. Rider safety on such vehicles is a major concern throughout the world. To enhance the safety of riders of such vehicles, safety headgear has been introduced to protect the head of the rider and minimize the risk of fatal injury in case of an accident. However, safety headgear is effective only if the rider wears it properly at all times while riding the vehicle. It has been observed that many riders fail to wear safety headgear properly while riding, leading to serious injuries and even fatalities during accidents. Thus, to address the problem of riders not wearing safety headgear, many systems have been developed to detect the presence of safety headgear near the vehicle and the rider.
However, existing safety headgear detection systems are complex and fail to detect the position of the safety headgear, and thus cannot determine whether the rider is wearing the safety headgear properly or merely carrying it along without wearing it. Moreover, such systems also fail to continuously monitor whether the rider wears the safety headgear properly throughout the journey.
Therefore, there exists a need for a mechanism to detect and monitor the safety headgear of a user of a vehicle that overcomes one or more of the problems set forth above.
SUMMARY
An object of the present disclosure is to provide a system for monitoring position and orientation of a safety headgear of a user operating a vehicle.
Another object of the present disclosure is to provide a method for monitoring position and orientation of a safety headgear of a user operating a vehicle.
In accordance with the first aspect of the present disclosure, there is provided a system for monitoring position and orientation of a safety headgear of a user operating a vehicle. The system comprises at least one vision sensor attached to the safety headgear of the user and a processing unit. The at least one vision sensor is configured to capture a plurality of images of field of vision of the vision sensor. The processing unit is configured to receive the plurality of images, identify at least one reference point present in the plurality of images, determine the position and the orientation of the safety headgear of the user based on the identified at least one reference point, and continuously monitor the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear of the user.
The present disclosure provides a system for detecting safety headgear of a user operating a vehicle. The system, as disclosed in the present disclosure, is advantageous in that it automatically detects the safety headgear of a user operating a vehicle, which ensures the riding safety of the user. Further, the method and the system are capable of detecting the safety headgear of the user operating the vehicle with a less complex structure. Furthermore, the disclosed system detects the safety headgear of the user operating the vehicle with greater accuracy than conventional helmet detection systems. Advantageously, the system continuously monitors the presence, position, and orientation of the safety headgear of the user throughout the duration of the operation of the vehicle.
In accordance with the second aspect of the present disclosure, there is provided a method for monitoring position and orientation of a safety headgear of a user operating a vehicle. The method comprises capturing a plurality of images of field of vision of a vision sensor, identifying at least one reference point present in the plurality of images, determining the position and the orientation of the safety headgear of the user based on the identified at least one reference point, and continuously monitoring the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear of the user.
Additional aspects, advantages, features, and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments constructed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 illustrates a block diagram of a system for monitoring position and orientation of a safety headgear of a user operating a vehicle, in accordance with an aspect of the present disclosure.
FIG. 2 illustrates a flow chart of a method for monitoring position and orientation of a safety headgear of a user operating a vehicle, in accordance with another aspect of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
The description set forth below in connection with the appended drawings is intended as a description of certain embodiments of a system for monitoring position and orientation of a safety headgear of a user operating a vehicle and is not intended to represent the only forms that may be developed or utilized. The description sets forth the various structures and/or functions in connection with the illustrated embodiments; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprise”, “comprises”, “comprising”, “include(s)”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, or system that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system. In other words, one or more elements in a system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings, which show, by way of illustration, specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
The present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
As used herein, the term “vehicle” refers to an automobile that requires the user to wear headgear while operating the vehicle. The vehicle may include, but is not limited to, a two-wheel vehicle, a three-wheel vehicle, a four-wheel vehicle, and so forth. Similarly, the vehicle may include an automobile capable of operating on a road surface, on a water surface, and/or in the air, depending on the type of vehicle.
As used herein, the term “propulsion system” refers to the group of components of the vehicle that generates and delivers power for the movement of the vehicle.
As used herein, the terms “instrument cluster”, “display interface”, and “display unit” are used interchangeably and refer to a digital display, an analog display, or a combination thereof capable of displaying various information related to the vehicle. The display interface also allows the driver to interact with the vehicle's information and entertainment system. The display interface may display information about at least one of: vehicle speed, RPM of the powertrain, fuel level, odometer, navigation maps, audio and climate control settings, warning messages, and so forth. The display interface may comprise an input mechanism such as a touchscreen. The display interface may be capable of presenting information including text, two-dimensional visual images, and/or three-dimensional visual images. Additionally, the display interface may present information in the form of audio and haptics. The display interface may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. Alternatively, the display interface may utilize other display technologies.
As used herein, the terms “vision sensor”, “imaging sensor”, and “vision unit” are used interchangeably and refer to a device that is capable of capturing still or moving images. Such a device comprises a lens, an image sensor, and an image processor. The lens focuses light onto the image sensor, which converts the light into an electrical signal. The image processor then converts the electrical signal into a digital image.
As used herein, the terms “processing unit”, “data processing unit”, and “processor” are used interchangeably and refer to a computational element that is operable to respond to and process image signals and generate responsive commands to control other sub-systems in a system. Optionally, the processing unit includes, but is not limited to, a microprocessor, a microcontroller, an image signal processor, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a combination thereof. Furthermore, the term “processor” may refer to one or more individual processors, processing devices, and various elements associated with a processing device that may be shared by other processing devices. Furthermore, the processing unit may comprise ARM Cortex-M series processors, such as the Cortex-M4 or Cortex-M7, or any similar processor designed to handle real-time tasks with high performance and low power consumption. Furthermore, the processing unit may comprise custom and/or proprietary processors.
As used herein, the terms “communication unit”, and “communication module” are used interchangeably and refer to an arrangement of interconnected programmable and/or non-programmable components that are configured to facilitate data communication between the system and the display interface of the vehicle. The communication unit may utilize Wi-Fi, Bluetooth, Zigbee, or a combination thereof to communicate between the system and the display interface of the vehicle. Additionally, the communication module utilizes wired or wireless communication that can be carried out via any number of known protocols, including, but not limited to, Internet Protocol (IP), Wireless Access Protocol (WAP), Frame Relay, or Asynchronous Transfer Mode (ATM). Moreover, although the communication module described herein is being implemented with TCP/IP communications protocols, the communication module may also be implemented using IPX, Appletalk, IP-6, NetBIOS, OSI, any tunneling protocol (e.g., IPsec, SSH), or any number of existing or future protocols.
As used herein, the terms “Inertial Measurement Unit sensor”, “IMU sensor”, “sensor arrangement”, and “sensors” are used interchangeably and refer to an inertial measurement unit sensor that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. It is to be understood that accelerometers measure the acceleration of the body in three dimensions, gyroscopes measure the angular rate of the body in three dimensions, and magnetometers measure the direction of the magnetic field.
As used herein, the term “user” refers to a person operating the vehicle.
As used herein, the term “memory unit” refers to components or storage devices that are integrated into the system to store reference images. The memory unit records and retains important information associated with headgear detection events for various purposes. Furthermore, the memory unit performs historical data logging to enable the analysis of user behavior patterns. Furthermore, the memory unit may store firmware and software updates of the system. This allows for remote updates and upgrades, ensuring that the system operates with the latest features, security patches, and performance improvements.
As used herein, the terms “safety headgear”, “headgear”, “wearable headgear”, and “helmet” are used interchangeably and refer to a type of helmet that is specifically designed to protect the head of a rider of a vehicle. The wearable headgear comprises a hard outer shell and a soft inner liner. The hard outer shell is designed to distribute the impact of a crash, while the soft inner liner is designed to absorb energy and protect the user’s head from injury.
As used herein, the term “reference point” refers to a distinct, identifiable object present in the images captured by the vision sensor. The reference point helps in determining the direction in which the user is looking, by identifying the reference points present in the plurality of images of the field of vision of the vision sensor.
As used herein, the term “field of vision” refers to the angular extent of the scene that can be captured by the vision sensor. The field of vision is an area in front of the vision sensor and it is measured in degrees. The field of vision can be measured horizontally, vertically, or diagonally.
As used herein, the term “communicably coupled” refers to a bi-directional connection between the various components of the system and entities outside the system. The bi-directional connection between the various components of the system enables the exchange of data between two or more components of the system. Similarly, the bi-directional connection between the system and other elements/modules enables the exchange of data between the system and the other elements/modules.
Figure 1, in accordance with an embodiment, describes a system 100 for monitoring position and orientation of a safety headgear 102 of a user operating a vehicle. The system 100 comprises at least one vision sensor 104 attached to the safety headgear 102 of the user and a processing unit 106. The at least one vision sensor 104 is configured to capture a plurality of images of field of vision of the vision sensor 104. The processing unit 106 is configured to receive the plurality of images, identify at least one reference point present in the plurality of images, determine the position and the orientation of the safety headgear 102 of the user based on the identified at least one reference point, and continuously monitor the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear 102 of the user.
The present disclosure provides a system 100 for detecting safety headgear 102 of a user operating a vehicle. The system 100 is advantageous in that it automatically detects the safety headgear 102 of a user operating a vehicle, which ensures the safety of the user. Advantageously, the system 100, while capable of detecting the safety headgear 102 of the user operating the vehicle, is less complex. Advantageously, the system 100 continuously monitors the presence, position, and orientation of the safety headgear 102 of the user throughout the duration of the operation of the vehicle. Advantageously, the system 100 is capable of restricting the operation of the vehicle if the safety headgear 102 is not properly positioned.
It is to be understood that the vision sensor 104 attached to the wearable headgear of the user is selected from a group of still image cameras, motion cameras, infrared cameras, action cameras, and so forth. Furthermore, it is to be understood that the vision sensor 104 continuously captures the plurality of images in the field of vision of the user.
In an embodiment, the system 100 comprises a memory unit 108 configured to store a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104. Beneficially, the system 100 is configured to update the plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104, according to different users of the vehicle. More beneficially, the memory unit 108 is configured to store historical instances of safety headgear 102 detection failure to enable identification and analysis of patterns of user behavior.
In an embodiment, the processing unit 106 is configured to compare the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images. Beneficially, the processing unit 106 employs computer vision algorithms to compare the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images. Beneficially, the processing unit 106 is capable of quickly identifying the at least one reference point in the plurality of images.
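The disclosure does not mandate a particular computer vision algorithm for this comparison. Purely as a non-limiting sketch, the following Python function matches a stored reference-image patch (template) against a captured frame using normalized cross-correlation; the function name, the grayscale 2-D array representation, and the 0.8 acceptance threshold are illustrative assumptions, not features recited by the disclosure.

```python
import numpy as np

def match_reference_point(frame: np.ndarray, template: np.ndarray,
                          threshold: float = 0.8):
    """Slide the reference-point template over a grayscale frame and return
    the (x, y) location of the best match if its normalized cross-correlation
    score meets the threshold; return None if no acceptable match is found."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_loc = -1.0, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch carries no matchable structure
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_loc = score, (x, y)
    return best_loc if best_score >= threshold else None
```

In practice, an optimized library routine (for example, a template-matching or feature-matching function from a computer vision library) would replace this brute-force scan on the embedded processing unit 106.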
In an embodiment, the processing unit 106 is configured to update the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104, when the user wears the safety headgear 102 for the first time. Beneficially, such an update of the stored plurality of reference images recalibrates the system 100 for different users according to their fields of vision.
In an embodiment, the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking and a steering mechanism of the vehicle. Beneficially, the at least one reference point may include any suitable reference point present in the field of vision of the user.
In an embodiment, the system 100 comprises at least one inertial measurement unit (IMU) sensor 112 attached to each of the safety headgear 102 of the user and the vehicle, wherein the processing unit 106 is configured to determine the traveling terrain of the vehicle based on data received from the IMU sensors 112. The traveling terrain of the vehicle may comprise a smooth traveling terrain, such as highways and paved roads, or a bumpy/rough traveling terrain, such as off-road trails and unpaved roads. Furthermore, the traveling terrain may comprise road, air, or water, according to the type of the vehicle. Beneficially, the data received from the IMU sensors 112 enables identification of the traveling terrain of the vehicle to improve the accuracy of the system 100.
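The disclosure does not detail how the traveling terrain is derived from the IMU data. One minimal, non-limiting sketch assumes that rough terrain manifests as a high standard deviation of the vertical acceleration reported by the vehicle-mounted IMU sensor 112 over a sampling window; the function name and the 0.5 m/s² threshold are illustrative placeholders, not values specified by the disclosure.

```python
import statistics

def classify_terrain(vertical_accel_samples, smooth_threshold=0.5):
    """Classify terrain as 'smooth' or 'rough' from the standard deviation
    of vertical acceleration samples (m/s^2) collected over a window.
    A nearly constant reading (close to gravity alone) indicates a paved
    road; large fluctuations indicate a bumpy, unpaved surface."""
    sigma = statistics.pstdev(vertical_accel_samples)
    return "smooth" if sigma < smooth_threshold else "rough"
```

A production system would likely filter the signal and fuse the helmet-side and vehicle-side IMU streams, but the variance-based idea is the same.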
In an embodiment, the processing unit 106 is configured to determine head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear 102 of the user. Beneficially, the head movement of the user is determined to accurately establish whether the user wishes to look away from the reference points due to a driving requirement, leading to improved accuracy of the system 100.
In an embodiment, the system 100 comprises a communication unit 110 configured to communicably couple the system 100 and the vehicle. Beneficially, the communication unit 110 establishes a secured communication between the processing unit 106 and the vehicle.
In an embodiment, the processing unit 106 is configured to generate a command signal, when the at least one reference point is not present in the plurality of images, and to send the command signal to the vehicle. It is to be understood that the command signal is communicated to the vehicle by the communication unit 110. Beneficially, the command signal may comprise a warning to display on the vehicle instrument cluster. It is to be understood that the command signal is generated when the at least one reference point is not present in the plurality of images for a time period greater than a threshold time period.
In an embodiment, the vehicle receives the command signal and displays a warning on a display of the vehicle. Beneficially, the warning may remind and encourage the user to properly wear the safety headgear 102.
In another embodiment, the vehicle receives the command signal and turns off a propulsion system of the vehicle. It is to be understood that the command signal with the instruction to turn off the propulsion system of the vehicle would be generated after a related warning is displayed on the display of the vehicle.
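The threshold time period and the escalation from an on-display warning to a propulsion shut-off, as described above, can be sketched as a simple state tracker. This is a non-limiting illustration; the class name and the particular threshold values (5 seconds before a warning, 60 seconds before shut-off) are assumptions not specified by the disclosure.

```python
class HeadgearMonitor:
    """Tracks how long the reference point has been absent from the captured
    images and escalates the response: first a dashboard warning, then a
    propulsion-shutdown command signal."""

    def __init__(self, warn_after_s: float = 5.0, shutdown_after_s: float = 60.0):
        self.warn_after_s = warn_after_s
        self.shutdown_after_s = shutdown_after_s
        self._missing_since = None  # timestamp when the point first vanished

    def update(self, reference_point_visible: bool, now_s: float) -> str:
        """Call once per processed frame; returns the action to take."""
        if reference_point_visible:
            self._missing_since = None  # headgear properly positioned again
            return "ok"
        if self._missing_since is None:
            self._missing_since = now_s
        elapsed = now_s - self._missing_since
        if elapsed >= self.shutdown_after_s:
            return "shutdown_propulsion"
        if elapsed >= self.warn_after_s:
            return "display_warning"
        return "ok"
```

Resetting the timer whenever the reference point reappears ensures that a brief shoulder-check or head turn does not, by itself, trigger the warning.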
In an embodiment, the system 100 comprises at least one vision sensor 104 attached to the safety headgear 102 of the user and a processing unit 106. The at least one vision sensor 104 is configured to capture a plurality of images of field of vision of the vision sensor 104. The processing unit 106 is configured to receive the plurality of images, identify at least one reference point present in the plurality of images, determine the position and the orientation of the safety headgear 102 of the user based on the identified at least one reference point, and continuously monitor the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear 102 of the user. Furthermore, the system 100 comprises a memory unit 108 configured to store a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104. Furthermore, the processing unit 106 is configured to compare the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images. Furthermore, the processing unit 106 is configured to update the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104, when the user wears the safety headgear 102 for the first time. Furthermore, the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking, and a steering mechanism of the vehicle. Furthermore, the system 100 comprises at least one IMU sensor 112 attached to each of the safety headgear 102 of the user and the vehicle, wherein the processing unit 106 is configured to determine the traveling terrain of the vehicle based on data received from the IMU sensors 112.
Furthermore, the processing unit 106 is configured to determine head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear 102 of the user. Furthermore, the system 100 comprises a communication unit 110 configured to communicably couple the system 100 and the vehicle. Furthermore, the processing unit 106 is configured to generate a command signal, when the at least one reference point is not present in the plurality of images and send the command signal to the vehicle. Furthermore, the vehicle receives the command signal and displays a warning on a display of the vehicle. Furthermore, the vehicle receives the command signal and turns off a propulsion system of the vehicle.
In an exemplary embodiment, when the user of the vehicle is driving the vehicle, the at least one vision sensor 104 attached to the wearable headgear of the user captures the plurality of images of field of vision of the vision sensor 104. The plurality of images are received by the processing unit 106 to identify at least one reference point present in the plurality of images. The head movement of the user is also identified to determine whether the user intends to look away from the at least one reference point. Once the processing unit 106 determines the position and the orientation of the safety headgear 102 of the user based on the identified at least one reference point, the same is continuously monitored. If the processing unit 106 fails to identify the presence of at least one reference point for a time period longer than a threshold time period, a command signal is generated to display a warning on the instrument cluster of the vehicle. If the user fails to resolve the warning by properly wearing the safety headgear 102 within a few minutes, a further command signal is generated to turn off the propulsion system of the vehicle.
Figure 2 describes a method 200 for monitoring position and orientation of a safety headgear 102 of a user operating a vehicle. The method 200 starts at step 202 and finishes at step 208. At step 202, the method 200 comprises capturing a plurality of images of field of vision of a vision sensor 104. At step 204, the method 200 comprises identifying at least one reference point present in the plurality of images. At step 206, the method 200 comprises determining the position and the orientation of the safety headgear 102 of the user based on the identified at least one reference point. At step 208, the method 200 comprises continuously monitoring the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear 102 of the user.
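By way of a non-limiting illustration, steps 202 to 208 can be sketched as a monitoring loop. The callables `detect_reference_point` and `on_missing` are hypothetical placeholders standing in for the identification logic of step 204 and the vehicle-side response, respectively; neither name is taken from the disclosure.

```python
def monitor_headgear(frames, detect_reference_point, on_missing):
    """Mirror of steps 202-208: iterate over captured frames (202),
    attempt to identify a reference point in each (204), and treat a
    successful identification as confirmation of headgear position and
    orientation (206), monitoring continuously (208).  The on_missing
    callback is invoked for every frame with no identifiable reference
    point, e.g. to forward a command signal to the vehicle."""
    statuses = []
    for frame in frames:
        found = detect_reference_point(frame)  # step 204
        if found is None:
            on_missing(frame)          # headgear position/orientation suspect
            statuses.append("missing")
        else:
            statuses.append("tracked")  # steps 206-208 satisfied for this frame
    return statuses
```

In a real deployment the loop would run on live camera frames rather than a finite sequence, and the missing-frame handling would be debounced against the threshold time period described for the system 100.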
In an embodiment, the method 200 comprises storing a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104.
In an embodiment, the method 200 comprises comparing the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images.
In an embodiment, the method 200 comprises updating the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104, when the user wears the safety headgear 102 for the first time.
In an embodiment, the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking and a steering mechanism of the vehicle.
In an embodiment, the method 200 comprises determining traveling terrain of the vehicle based on data received from IMU sensors 112.
In an embodiment, the method 200 comprises determining head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear 102 of the user.
In an embodiment, the method 200 comprises generating a command signal, when the at least one reference point is not present in the plurality of images, and sending the command signal to the vehicle.
In an embodiment, the method 200 comprises receiving the command signal for displaying a warning on a display of the vehicle or receiving the command signal for turning off a propulsion system of the vehicle.
In an embodiment, the method 200 comprises capturing a plurality of images of field of vision of a vision sensor 104, identifying at least one reference point present in the plurality of images, determining the position and the orientation of the safety headgear 102 of the user based on the identified at least one reference point, and continuously monitoring the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear 102 of the user. Furthermore, the method 200 comprises storing a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104. Furthermore, the method 200 comprises comparing the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images. Furthermore, the method 200 comprises updating the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor 104, when the user wears the safety headgear 102 for the first time. Furthermore, the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking, and a steering mechanism of the vehicle. Furthermore, the method 200 comprises determining the traveling terrain of the vehicle based on data received from the IMU sensors 112. Furthermore, the method 200 comprises determining head movements of the user based on the data received from the IMU sensor 112 attached to the safety headgear 102 of the user. Furthermore, the method 200 comprises generating a command signal, when the at least one reference point is not present in the plurality of images, and sending the command signal to the vehicle.
Furthermore, the method 200 comprises receiving the command signal for displaying a warning on a display of the vehicle or receiving the command signal for turning off a propulsion system of the vehicle.
It would be appreciated that all the explanations and embodiments of the system 100 also apply mutatis-mutandis to the method 200.
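By way of a non-limiting illustration only, the reference-point monitoring described above can be sketched as a template-matching loop over captured frames. The function names, array-based image model, and the 0.9 correlation threshold below are hypothetical choices for illustration and are not part of the claimed embodiment; an actual implementation could equally use any other image-comparison technique.

```python
import numpy as np

# Illustrative sketch: frames and stored reference images are modeled as
# small grayscale numpy arrays. A reference point (e.g. the instrument
# cluster) is deemed "present" when a stored template correlates with
# some patch of the captured frame above a chosen threshold.

def template_present(frame, template, threshold=0.9):
    """Return True if `template` matches any patch of `frame` by
    normalized cross-correlation at or above `threshold`."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    if t_norm == 0:
        return False
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            p_norm = np.linalg.norm(p)
            if p_norm == 0:
                continue  # flat patch carries no correlation signal
            if float((t * p).sum() / (t_norm * p_norm)) >= threshold:
                return True
    return False

def monitor(frames, reference_templates):
    """For each captured frame, check whether at least one reference
    point is visible; a miss yields a command signal for the vehicle
    (e.g. display a warning or cut propulsion)."""
    commands = []
    for frame in frames:
        visible = any(template_present(frame, t) for t in reference_templates)
        commands.append(None if visible else "WARN_OR_CUT_PROPULSION")
    return commands
```

In this sketch, a continuously repeated call to `monitor` over the incoming frame stream corresponds to the continuous monitoring step, and the returned command values correspond to the command signal sent to the vehicle.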
In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, and may, for example, denote a fixed connection, a detachable connection, or an integral connection, whether mechanical or electrical. The elements may be connected directly, or indirectly through an intervening medium, or by an internal connection between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
Modifications to embodiments and combinations of different embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, and “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.
Although the present disclosure has been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings, and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
CLAIMS
WE CLAIM:
1. A system (100) for monitoring position and orientation of a safety headgear (102) of a user operating a vehicle, the system (100) comprising:
- at least one vision sensor (104) attached to the safety headgear (102) of the user and configured to capture a plurality of images of a field of vision of the vision sensor (104); and
- a processing unit (106) configured to:
- receive the plurality of images;
- identify at least one reference point present in the plurality of images;
- determine the position and the orientation of the safety headgear (102) of the user based on the identified at least one reference point; and
- continuously monitor the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear (102) of the user.
2. The system (100) as claimed in claim 1, wherein the system (100) comprises a memory unit (108) configured to store a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor (104).
3. The system (100) as claimed in claim 2, wherein the processing unit (106) is configured to compare the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images.
4. The system (100) as claimed in claim 2, wherein the processing unit (106) is configured to update the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor (104), when the user wears the safety headgear (102) for the first time.
5. The system (100) as claimed in claim 1, wherein the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking, and a steering mechanism of the vehicle.
6. The system (100) as claimed in claim 1, wherein the system (100) comprises at least one IMU sensor (112) attached to each of the safety headgear (102) of the user and the vehicle, wherein the processing unit (106) is configured to determine a traveling terrain of the vehicle based on data received from the IMU sensors (112).
7. The system (100) as claimed in claim 6, wherein the processing unit (106) is configured to determine head movements of the user based on the data received from the IMU sensor (112) attached to the safety headgear (102) of the user.
8. The system (100) as claimed in claim 1, wherein the system (100) comprises a communication unit (110) configured to communicably couple the system (100) and the vehicle.
9. The system (100) as claimed in claim 1, wherein the processing unit (106) is configured to generate a command signal when the at least one reference point is not present in the plurality of images, and send the command signal to the vehicle.
10. The system (100) as claimed in claim 9, wherein the vehicle receives the command signal and displays a warning on a display of the vehicle.
11. The system (100) as claimed in claim 9, wherein the vehicle receives the command signal and turns off a propulsion system of the vehicle.
12. A method (200) for monitoring position and orientation of a safety headgear (102) of a user operating a vehicle, the method (200) comprising:
- capturing a plurality of images of a field of vision of a vision sensor (104);
- identifying at least one reference point present in the plurality of images;
- determining the position and the orientation of the safety headgear (102) of the user based on the identified at least one reference point; and
- continuously monitoring the identified at least one reference point in the plurality of images to monitor the position and the orientation of the safety headgear (102) of the user.
13. The method (200) as claimed in claim 12, wherein the method (200) comprises storing a plurality of reference images comprising predefined reference points in the field of vision of the vision sensor (104).
14. The method (200) as claimed in claim 13, wherein the method (200) comprises comparing the received plurality of images with the plurality of reference images to identify at least one reference point present in the plurality of images.
15. The method (200) as claimed in claim 13, wherein the method (200) comprises updating the stored plurality of reference images comprising predefined reference points in the field of vision of the vision sensor (104), when the user wears the safety headgear (102) for the first time.
16. The method (200) as claimed in claim 12, wherein the at least one reference point comprises an image of at least one of: an instrument cluster of the vehicle, a rear-view mirror, a road marking, and a steering mechanism of the vehicle.
17. The method (200) as claimed in claim 12, wherein the method (200) comprises determining a traveling terrain of the vehicle based on data received from IMU sensors (112).
18. The method (200) as claimed in claim 12, wherein the method (200) comprises determining head movements of the user based on the data received from the IMU sensor (112) attached to the safety headgear (102) of the user.
19. The method (200) as claimed in claim 12, wherein the method (200) comprises generating a command signal when the at least one reference point is not present in the plurality of images, and sending the command signal to the vehicle.
20. The method (200) as claimed in claim 19, wherein the method (200) comprises receiving the command signal and displaying a warning on a display of the vehicle, or receiving the command signal and turning off a propulsion system of the vehicle.