Abstract: The present invention relates to a system (100) and method (200) for monitoring a user in a vehicle. The system (100) has one or more image sensors (110) for capturing real time images of one or more users. The system (100) has a processing unit (120) configured to receive the real time images of the one or more users riding the vehicle from the one or more image sensors (110). The processing unit (120) further has one or more processing modules configured to determine one or more user parameters based on the real time images. The system (100) further has a feedback module (130) configured to receive an input from the processing unit (120) if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters. The feedback module (130) is then configured to generate an output command. Reference Figure 1
Description:
FIELD OF THE INVENTION
[001] The present invention relates to monitoring of a user in a vehicle. More particularly, the present invention relates to a system and a method for monitoring the user in the vehicle.
BACKGROUND OF THE INVENTION
[002] With the advancement in vehicle technologies, there is a greater focus on enhancing user activity and user safety systems, and on improving the overall riding experience of the user. In existing designs, user activity monitoring systems have been explored only to a very limited extent, leading to several problems concerning the safety of the user and to vehicle accidents. Conventionally, it is not possible to monitor the user activity and to detect the riding patterns and riding postures of the user in real time, and therefore it becomes difficult to evaluate the overall performance of the user. Absent or insufficient monitoring of users may lead to safety concerns including, but not limited to, drowsy riding, distracted riding, impaired riding, medical emergencies, aggressive riding, and the like.
[003] There is a growing need to implement a user monitoring system based on the user activity and the user parameters for ensuring the safety of the user and reducing the risk of vehicle accidents. However, the existing solutions do not focus on monitoring the user based on real-time visual data of the user, as implementing advanced user monitoring systems in vehicles requires significant investment in research, development, and integration. There is a need for a user monitoring system that is cost-friendly and affordable to the user. Also, visual monitoring of the user in a saddle type vehicle faces technical challenges; for example, the face of the user is covered with a helmet, and the saddle type vehicle is more susceptible to vibrations, insufficient illumination and movements due to road conditions, which impacts the accuracy and reliability of the sensors and camera. Similarly, the saddle type vehicle is more susceptible to noise, which may impact the overall efficiency of the user monitoring system. Further, it is difficult to monitor the body posture of the user in saddle type vehicles, and saddle type vehicles also have space constraints and hence cannot accommodate a bulky user monitoring system.
[004] Therefore, a compact design is required that integrates user activity detection such as fatigue detection, drowsiness detection, distraction detection, abnormal riding recognition, bad pose detection, and the like. Users of saddle type vehicles now require their vehicle to be more customised to their requirements and expect the vehicle to monitor their riding style and to generate an alert in emergency situations.
[005] The automation also needs to allow the vehicle to make informed decisions on the safety of the user, in addition to an overall improvement in the user experience. In existing systems, the user monitoring system is not present due to various constraints such as cost, technical challenges, space, and the like. Further, the existing systems do not allow the vehicle to send an alert to the user in the middle of a trip, which severely restricts the user experience and raises significant safety concerns.
[006] Thus, there is a need in the art for a system and method for monitoring a user in a vehicle, which addresses at least the aforementioned problems.
SUMMARY OF THE INVENTION
[007] In one aspect, the present invention relates to a system for monitoring a user in a vehicle. The system has one or more image sensors for capturing real time images of one or more users riding the vehicle. The system further has a processing unit configured to receive the real time images of the one or more users riding the vehicle from the one or more image sensors. The processing unit further has one or more processing modules configured to determine one or more user parameters based on the real time images of the one or more users. The system further has a feedback module configured to receive an input from the processing unit if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters. The feedback module is then configured to generate an output command.
[008] In an embodiment of the invention, the plurality of processing modules include at least a first processing module, a second processing module and a third processing module, the first processing module is configured to determine distraction of the one or more users based on the real time images of the one or more users, the second processing module is configured to determine drowsiness of the one or more users based on the real time images of the one or more users, and the third processing module is configured to determine unsafe riding posture of the one or more users based on the real time images of the one or more users.
[009] In a further embodiment of the invention, the first processing module is configured to detect one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images, determine if any one of the yaw, the pitch and the roll of the head of the one or more users exceed a first predetermined value, and the feedback module receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceed the first predetermined value.
[010] In a further embodiment of the invention, the first processing module is configured to estimate a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value, and determine if the estimated gaze angle of the one or more users exceed a second predetermined value, and the feedback module receives an input if the estimated gaze angle of the one or more users exceed the second predetermined value.
[011] In a further embodiment of the invention, the second processing module is configured to detect whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images, and detect whether the one or more users are yawning based on the real time images, and the feedback module receives an input if at least one of, the eyes of the one or more users are closed for a time period exceeding the third predetermined time period and the one or more users are yawning.
[012] In a further embodiment of the invention, the third processing module is configured to detect whether the one or more users are riding with one hand, detect a lean angle of the one or more users based on the real time images, determine whether the lean angle of the one or more users exceed a fourth predetermined value, and detect whether the one or more users are riding in an unsafe riding posture, and the feedback module receives an input if at least one of, the one or more users are riding the vehicle with one hand and the lean angle of the one or more users exceed the fourth predetermined value and the one or more users are riding in the unsafe riding posture.
[013] In a further embodiment of the invention, the output command generated by the feedback module includes at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users.
[014] In a further embodiment of the invention, the system has an illumination sensor unit, the illumination sensor unit is in communication with the processing unit and is configured to detect a level of ambient light around the vehicle, and the processing unit is configured to switch on a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.
[015] In a further embodiment of the invention, the system has an auxiliary sensor unit, the auxiliary sensor unit is in communication with the processing unit and is configured to detect one or more vehicle parameters. The processing unit is configured to determine whether the one or more vehicle parameters are below a first predetermined threshold. The processing unit is further configured to switch off the one or more image sensors or the illumination sensor unit or the system if the one or more vehicle parameters are below the first predetermined threshold.
[016] In another aspect, the present invention relates to a method for monitoring a user in a vehicle. The method has the steps of capturing, by one or more image sensors, real-time images of the one or more users riding the vehicle; receiving, by a processing unit, the real-time images of the one or more users riding the vehicle captured by the one or more image sensors; determining, by one or more processing modules of the processing unit, one or more user parameters based on the real time images of the one or more users; receiving, by a feedback module, an input from the processing unit if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters; and generating, by the feedback module, an output command.
[017] In an embodiment of the invention, the method further has the steps of determining, by a first processing module of the processing unit, distraction of the one or more users based on the real time images of the one or more users; determining, by a second processing module of the processing unit, drowsiness of the one or more users based on the real time images of the one or more users; and determining, by a third processing module of the processing unit, unsafe riding posture of the one or more users based on the real time images of the one or more users.
[018] In a further embodiment of the invention, the method further has the steps of detecting, by the first processing module, one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images; and determining, by the first processing module, if any one of the yaw, the pitch and the roll of the head of the one or more users exceed a first predetermined value. Further, the feedback module receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceed the first predetermined value.
[019] In a further embodiment of the invention, the method further has the steps of estimating, by the first processing module, a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value; and determining, by the first processing module, if the estimated gaze angle of the one or more users exceed a second predetermined value. The feedback module receives an input if the estimated gaze angle of the one or more users exceed the second predetermined value.
[020] In a further embodiment of the invention, the method further has the step of detecting, by the second processing module, whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images; and detecting, by the second processing module, whether the one or more users are yawning based on the real time images. The feedback module receives an input if at least one of, the eyes of the one or more users are closed for a time period exceeding the third predetermined time period and the one or more users are yawning.
[021] In a further embodiment of the invention, the method further has the steps of detecting, by the third processing module, whether the one or more users are riding with one hand; detecting, by the third processing module, a lean angle of the one or more users based on the real time images; determining, by the third processing module, whether the lean angle of the one or more users exceed a fourth predetermined value; and detecting, by the third processing module, whether the one or more users are riding in an unsafe riding posture. Further, the feedback module receives an input if at least one of, the one or more users are riding the vehicle with one hand and the lean angle of the one or more users exceed the fourth predetermined value and the one or more users are riding in the unsafe riding posture.
[022] In a further embodiment of the invention, the output command generated by the feedback module has at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users.
[023] In a further embodiment of the invention, the method further has the steps of detecting, by an illumination sensor unit, a level of ambient light around the vehicle; and switching on, by the processing unit, a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.
[024] In a further embodiment of the invention, the method further has the steps of detecting, by an auxiliary sensor unit, one or more vehicle parameters; determining, by the processing unit, whether the one or more vehicle parameters are below a first predetermined threshold; and switching off, by the processing unit, the one or more image sensors or the illumination sensor unit or the system if the one or more vehicle parameters are below the first predetermined threshold.
BRIEF DESCRIPTION OF THE DRAWINGS
[025] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that this is not intended to limit the scope of the invention to these particular embodiments.
Figure 1 illustrates a system for monitoring a user in a vehicle, in accordance with an embodiment of the present invention.
Figure 2 illustrates the steps involved in the method for monitoring a user in a vehicle, in accordance with an embodiment of the invention.
Figure 3 illustrates a method flow diagram for monitoring the user in the vehicle, in accordance with an embodiment of the present invention.
Figure 4 illustrates the method flow diagram for monitoring the user in the vehicle, in accordance with an alternative embodiment of the present invention.
Figure 5 illustrates the user parameters detection, in accordance with an embodiment of the invention.
Figure 6 illustrates the user parameters detection, in accordance with an alternative embodiment of the invention.
Figure 7 illustrates a software architecture for the system and method for monitoring the user in the vehicle, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[026] The present invention relates to monitoring of a user in a vehicle. More particularly, the present invention relates to a system and a method for monitoring a user in a vehicle. The system and method of the present invention are typically used in a vehicle such as a two wheeled vehicle, or a three wheeled vehicle including trikes, or a four wheeled vehicle, or other multi-wheeled vehicles as required.
[027] Figure 1 illustrates a system 100 for monitoring a user in a vehicle, in accordance with an embodiment of the present invention. The monitoring of the user is based on the analysis of the user activity/parameters in real time during the riding condition of the vehicle. These activities/parameters are the static and dynamic characteristics of the user that define the behaviour or performance of the user when the vehicle is on the road and the user is riding the vehicle. The user activities/parameters include, but are not limited to, the following:
1. Distraction of the User: The user may be distracted while riding the vehicle, which may lead to road accidents. For example, users often engage in activities which lead to a lack of focus while riding the vehicle, such as looking around at surrounding activities, fatigue, speaking to the pillion rider, texting, browsing the internet, calling, and the like. Therefore, it becomes important to monitor the user in such distracted situations. Specifically, the distraction of the user may be determined based on a head pose estimation and a gaze estimation. As is generally known, the head of the user can rotate in three degrees of freedom, namely a yaw, a pitch and a roll. In order to accurately determine the head pose, it is important to take all three factors into consideration. The estimated angles should be within a predetermined range for a predetermined time period to establish that the user is not distracted. Trained modules are required to estimate the yaw, the pitch and the roll of the head of the user. Similarly, it is also required to determine the gaze of the user and the use of a mobile device by the user during the riding condition of the vehicle.
2. Drowsiness of the User: The user is monitored based on the fatigue and the drowsiness of the user. For example, the user may fall asleep, may yawn, may ride the vehicle with closed eyes, or may experience decreased alertness, leading to an increased risk of accident. Therefore, it becomes important to monitor the user in such situations.
3. Riding Posture of the User: The riding posture of the user may also pose a significant risk to the safety of the user. Therefore, it is important to analyse and detect the riding posture of the user, such as the user riding the vehicle with one hand, the user riding while standing, the lean angle of the user being beyond a predetermined range, the user riding with a mobile device in one hand, or the user riding in an unsafe posture. Therefore, it becomes important to monitor the user in such erratic situations. User monitoring systems are essential for evaluating the performance and behaviour of the user. They can provide valuable data on aspects such as user pose, behaviour, and actions. This information can be utilized for training the user, identifying areas of improvement, and promoting safe riding habits in the user.
[028] The user activities/parameters are detected and analysed dynamically by the system 100 in real time during the course of riding the vehicle by the user. As illustrated, the system 100 has one or more image sensors 110 configured to capture real-time images of the user riding the vehicle. In an embodiment, the one or more image sensors 110 capture the one or more users in real time and generate a live feed of the one or more users in the form of images and videos for further processing. In an embodiment, the one or more image sensors 110 comprise one or more of, but not limited to, a camera, a Red-Green-Blue camera, a Red-Green-Blue + Infrared camera, an Infrared camera, a monochrome camera, a thermal camera, Radio Detection and Ranging (RADAR), Light Detection and Ranging (LIDAR), a Time of Flight (TOF) camera, and the like. In an example, once the vehicle is powered ON by the user, the camera is configured to capture the images and the videos of the one or more users in real time.
[029] Further, the system 100 has a processing unit 120 that is configured to receive the real-time images of the one or more users riding the vehicle from the one or more image sensors 110. The processing unit 120 has one or more processing modules such as a first processing module 122, a second processing module 124 and a third processing module 126. The one or more processing modules are configured to determine one or more user parameters based on the real-time images received from the one or more image sensors 110. In an embodiment, the processing unit 120 includes one or more modules, the one or more modules being configured to determine the one or more user activities during riding condition of the vehicle based on the real time images. In an embodiment, the first processing module 122 is configured to determine the distraction of the one or more users based on the real time images of the one or more users. The second processing module 124 is configured to determine the drowsiness of the one or more users based on the real time images of the one or more users, and the third processing module 126 is configured to determine the unsafe riding posture of the one or more users based on the real time images of the one or more users.
[030] In a further embodiment, the one or more modules comprise a plurality of Artificial Intelligence based models having machine learning and deep machine learning capabilities. In this regard, the one or more user parameters/activities correspond to one or more of: the distraction of the user, the drowsiness of the user, and the body posture of the user.
[031] The system 100 has a feedback module 130. The feedback module 130 is in communication with the processing unit 120. The feedback module 130 is configured to receive an input from the processing unit 120 if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters. The feedback module 130 is configured to generate an output command if the one or more user parameters exceed a predetermined threshold. In an embodiment, the output command generated by the feedback module 130 includes at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users. For example, the feedback module 130 generates an output command to the user in the form of a display/indicator on the cluster or a sound with varying intensity or haptic alert to the rider. Further, the indication to the user comprises one or more of voice indication, video indication, haptic indication, display indication, and user activity report.
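By way of a non-limiting illustration, a minimal sketch of this capture-analyse-alert flow is given below. The function and parameter names (monitor_frame, alert, and the parameter keys) and the threshold values are hypothetical placeholders used only to restate the flow described above; the sketch is not a definitive implementation of the claimed system.

```python
from typing import Callable, Dict, List

def monitor_frame(frame,
                  modules: List[Callable[[object], Dict[str, float]]],
                  thresholds: Dict[str, float],
                  alert: Callable[[str, float], None]) -> None:
    """Run every processing module on one real-time frame and alert on any breach."""
    parameters: Dict[str, float] = {}
    for module in modules:                 # first / second / third processing modules
        parameters.update(module(frame))   # e.g. {"gaze_angle_deg": 25.0}
    for name, value in parameters.items():
        limit = thresholds.get(name)       # predetermined threshold per parameter
        if limit is not None and value > limit:
            alert(name, value)             # input to the feedback module (130)

# Example usage with dummy modules and a print-based alert:
if __name__ == "__main__":
    dummy_modules = [lambda f: {"gaze_angle_deg": 25.0},
                     lambda f: {"eyes_closed_s": 2.0}]
    monitor_frame(frame=None,
                  modules=dummy_modules,
                  thresholds={"gaze_angle_deg": 20.0, "eyes_closed_s": 12.0},
                  alert=lambda n, v: print(f"ALERT: {n} = {v}"))
```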
[032] Further, as depicted in Figure 1, the processing unit 120 has the first processing module 122, which is configured to detect one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images. The first processing module 122 is further configured to determine if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds a first predetermined value. In an embodiment, the first predetermined value of the yaw and the pitch ranges between -20 degrees and 20 degrees with respect to a camera mounting plane. Further, the first predetermined value of the roll ranges between -15 degrees and 15 degrees with respect to the camera mounting plane. The feedback module 130 receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds the first predetermined value.
[033] In an embodiment, the first processing module 122 is further configured to estimate a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value. The first processing module 122 is then configured to determine if the estimated gaze angle of the one or more users exceeds a second predetermined value. In an embodiment, the second predetermined value of the gaze angle ranges between -20 degrees and 20 degrees with respect to a camera mounting plane. The feedback module 130 receives an input if the estimated gaze angle of the one or more users exceeds the second predetermined value.
[034] For example, the user ‘A’ is riding the vehicle and the yaw, the pitch and the roll of the head of the user are each 12 degrees. The processing unit 120 determines that the yaw, the pitch and the roll are within the prescribed value. Then, the gaze angle of the user ‘A’ is estimated. The gaze angle of the user ‘A’ is 25 degrees. In this scenario, the estimated gaze angle is above the prescribed value and therefore, the feedback module sends an alert to the user ‘A’ accordingly regarding the distraction of the user ‘A’.
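The distraction check described in this embodiment may be sketched as follows. The angle limits follow the ranges mentioned above, while the function name and signature are hypothetical and serve only to illustrate the two-step head-pose-then-gaze decision.

```python
def is_distracted(yaw_deg: float, pitch_deg: float, roll_deg: float,
                  gaze_angle_deg: float,
                  yaw_pitch_limit: float = 20.0,
                  roll_limit: float = 15.0,
                  gaze_limit: float = 20.0) -> bool:
    """Return True when the first processing module should notify the feedback module."""
    # Step 1: head pose. Any of yaw / pitch / roll outside the first
    # predetermined value is treated as distraction straight away.
    if (abs(yaw_deg) > yaw_pitch_limit or abs(pitch_deg) > yaw_pitch_limit
            or abs(roll_deg) > roll_limit):
        return True
    # Step 2: head pose is safe, so the gaze angle is estimated and compared
    # against the second predetermined value.
    return abs(gaze_angle_deg) > gaze_limit

# Worked example from the description: head pose 12 degrees (safe),
# gaze angle 25 degrees (unsafe) -> an alert is raised for user 'A'.
assert is_distracted(12.0, 12.0, 12.0, 25.0) is True
```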
[035] In an embodiment, as depicted in Figure 1, the processing unit 120 has the second processing module 124, which is configured to detect whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images. In an embodiment, the third predetermined time period ranges between 5 seconds and 12 seconds. The second processing module 124 is further configured to detect whether the one or more users are yawning based on the real time images. The feedback module 130 is configured to receive an input if at least one of, the eyes of the one or more users are closed for a time period exceeding the third predetermined time period and the one or more users are yawning.
[036] For example, the user ‘B’ is riding the vehicle and the eyes of the user ‘B’ are closed for 15 seconds. The processing unit 120 determines that the time period is outside the prescribed value. In this scenario, the feedback module sends an alert to the user ‘B’ accordingly regarding the drowsiness of the user ‘B’.
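A corresponding sketch of the drowsiness logic is given below. It assumes a third predetermined time period at the lower end of the 5-12 second range stated above; the function name is hypothetical.

```python
def is_drowsy(eyes_closed_seconds: float,
              is_yawning: bool,
              closed_eye_limit_s: float = 5.0) -> bool:
    """Second processing module: flag drowsiness on prolonged eye closure or yawning."""
    return eyes_closed_seconds > closed_eye_limit_s or is_yawning

# Worked example from the description: eyes closed for 15 seconds exceeds the
# prescribed value, so the feedback module alerts user 'B'.
assert is_drowsy(eyes_closed_seconds=15.0, is_yawning=False) is True
```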
[037] In an embodiment, as depicted in Figure 1, the processing unit 120 has the third processing module 126, the third processing module 126 is configured to detect whether the one or more users are riding with one hand; detect a lean angle of the one or more users based on the real time images. The third processing module 126 is configured to determine whether the lean angle of the one or more users exceeds a fourth predetermined value. In an embodiment, for detecting the lean angle of the rider, the Artificial Intelligence (AI) system is configured to identify the anomalous lean of the body of the rider indicating the fatigue or the bad body posture of the rider. The third processing module 126 is configured to detect whether the one or more users are riding in an unsafe riding posture. The feedback module 130 is configured to receive an input if at least one of, the one or more users are riding the vehicle with one hand and the lean angle of the one or more users exceed the fourth predetermined value and the one or more users are riding in the unsafe riding posture.
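The posture checks performed by the third processing module may be sketched as follows. The numeric lean-angle limit is a hypothetical stand-in for the fourth predetermined value, which is not specified numerically in this embodiment, and the input flags are assumed to be produced by the body key-point detection described above.

```python
def has_bad_posture(riding_one_handed: bool,
                    lean_angle_deg: float,
                    unsafe_pose_detected: bool,
                    lean_limit_deg: float = 30.0) -> bool:  # hypothetical fourth predetermined value
    """Return True if any posture condition should trigger the feedback module."""
    return (riding_one_handed
            or abs(lean_angle_deg) > lean_limit_deg
            or unsafe_pose_detected)

# Example: both hands on the handlebar, but an excessive lean triggers an alert.
assert has_bad_posture(False, lean_angle_deg=42.0, unsafe_pose_detected=False)
```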
[038] The system 100 further includes an illumination sensor unit 140. The illumination sensor unit 140 is in communication with the processing unit 120. In an embodiment, the illumination sensor unit 140 is activated if the illumination around the vehicle ranges between 2 lux and 500 lux. The illumination sensor unit 140 is configured to detect a level of ambient light around the vehicle. Further, the processing unit 120 is configured to switch on a vehicle lighting system or an illuminator unit if the ambient light is below a predetermined threshold value of ambient light. For example, the brightness of the instrument cluster is increased if the detected ambient light is below a predetermined threshold. The system 100 further includes an auxiliary sensor unit 150. The auxiliary sensor unit 150 is in communication with the processing unit 120 and is configured to detect one or more vehicle parameters. In an embodiment, the auxiliary sensor unit 150 includes a speed sensor, a light dependent resistor (LDR), and the like. The processing unit 120 is configured to determine whether the one or more vehicle parameters are below a first predetermined threshold. In an embodiment, the one or more vehicle parameters comprise a state of charge of a battery of the vehicle, and the first predetermined threshold is a state of charge of the battery ranging between 15-20%. The processing unit 120 is further configured to switch off the one or more image sensors 110 and the illumination sensor unit 140 and the system 100 if the one or more vehicle parameters are below the first predetermined threshold. In an embodiment, the vehicle parameters include, but are not limited to, the state of charge of the battery of the vehicle. Thus, the provision of the auxiliary sensor unit 150 prevents deep discharge of the battery.
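The housekeeping decisions around the illumination sensor unit 140 and the auxiliary sensor unit 150 may be sketched as follows. The state-of-charge threshold follows the 15-20% band mentioned above; the ambient-light threshold and all names are hypothetical assumptions.

```python
def manage_power_and_lighting(ambient_lux: float,
                              battery_soc_percent: float,
                              lux_threshold: float = 10.0,   # hypothetical ambient-light threshold
                              soc_threshold: float = 20.0) -> dict:
    """Decide whether to switch the lighting system on and the monitoring system off."""
    actions = {"lighting_on": False, "monitoring_off": False}
    # Switch on the vehicle lighting system when ambient light is too low.
    if ambient_lux < lux_threshold:
        actions["lighting_on"] = True
    # Switch off image sensors / illumination sensor unit / system to prevent
    # deep discharge when the state of charge falls below the threshold.
    if battery_soc_percent < soc_threshold:
        actions["monitoring_off"] = True
    return actions

# Example: dark surroundings and a healthy battery.
print(manage_power_and_lighting(ambient_lux=3.0, battery_soc_percent=80.0))
```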
[039] In another aspect as depicted in Figure 2, the present invention relates to a method 200 for monitoring a user in a vehicle. The steps involved in the method 200 for monitoring the user in the vehicle are illustrated in Figure 2. As illustrated, at step 202, one or more image sensors 110 are activated. In an embodiment, the one or more image sensors 110 are activated when the vehicle is switched ON by the user. This activates all the sensors of the vehicle for monitoring the user during the riding condition in real-time.
[040] At step 204, the one or more image sensors 110 capture real-time images of the one or more users riding the vehicle. Therefore, once the one or more image sensors 110 are activated, they capture the images and the videos of the one or more users in real time and send them to the processing unit 120 for further processing. The processing unit 120 receives the real-time images of the one or more users riding the vehicle captured by the one or more image sensors 110.
[041] At step 206, the processing unit 120 determines one or more user parameters based on the real-time images of the one or more users received from the one or more image sensors 110 and processed by one or more processing modules of the processing unit 120. In an embodiment, the processing unit 120 includes one or more modules, the one or more modules being configured to determine the one or more user parameters. In an embodiment, the one or more modules of the processing unit 120 are configured to receive the real time images of the user for determining the one or more user parameters during the riding condition of the vehicle. In this regard, the one or more user parameters correspond to one or more of: distraction of the user, drowsiness of the user, and bad body posture of the user.
[042] At step 208, the processing unit 120 determines whether the one or more user parameters exceed a predetermined threshold of the one or more user parameters. In an embodiment, the processing unit 120 includes a first processing module 122, a second processing module 124, and a third processing module 126. The first processing module 122 of the processing unit 120 is configured to determine the distraction of the one or more users based on the real time images of the one or more users. The second processing module 124 of the processing unit 120 is configured to determine drowsiness of the one or more users based on the real time images of the one or more users. The third processing module 126 of the processing unit 120 is configured to determine the unsafe riding posture of the one or more users based on the real time images of the one or more users.
[043] At step 210, the feedback module 130 generates an output command if the one or more user parameters exceed a predetermined threshold of the one or more user parameters. In an embodiment, the output command generated by the feedback module 130 includes at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users.
[044] In an embodiment, the method further includes detecting, by the first processing module 122, one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images. The method then includes determining, by the first processing module 122, if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds a first predetermined value. In an embodiment, the first predetermined value of the yaw and the pitch ranges between -20 degrees and 20 degrees with respect to a camera mounting plane. Further, the first predetermined value of the roll ranges between -15 degrees and 15 degrees with respect to the camera mounting plane. The feedback module 130 receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds the first predetermined value. Further, the method includes estimating, by the first processing module 122, a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value; and determining, by the first processing module 122, if the estimated gaze angle of the one or more users exceeds a second predetermined value. In an embodiment, the second predetermined value of the gaze angle ranges between -20 degrees and 20 degrees with respect to a camera mounting plane. The feedback module 130 receives an input if the estimated gaze angle of the one or more users exceeds the second predetermined value.
[045] In a further embodiment, the method further includes detecting, by the second processing module 124, whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images. In an embodiment, the third predetermined time period ranges between 5 seconds and 12 seconds. The method then includes detecting, by the second processing module 124, whether the one or more users are yawning based on the real time images. Then, the feedback module 130 is configured to receive an input if at least one of, the eyes of the one or more users are closed for a time period exceeding the third predetermined time period and the one or more users are yawning.
[046] In an embodiment, the method further includes detecting, by the third processing module 126, whether the one or more users are riding with one hand; and detecting, by the third processing module 126, a lean angle of the one or more users based on the real time images. The method further includes determining, by the third processing module 126, whether the lean angle of the one or more users exceed a fourth predetermined value; and detecting, by the third processing module 126, whether the one or more users are riding in an unsafe riding posture. Then, the feedback module 130 receives an input if at least one of, the one or more users are riding the vehicle with one hand and the lean angle of the one or more users exceed the fourth predetermined value and the one or more users are riding in the unsafe riding posture.
[047] For example, once the vehicle is powered ON, the camera captures the videos and the images of the user and sends the live feed to the vision processing unit. The vision processing unit is configured to determine the distraction, the drowsiness and the bad body posture of the user, if the helmet is detected by the processing unit. The first processing module 122 of the processing unit 120 is configured to determine the distraction of the user by estimating the head pose of the user. The head pose is estimated by the yaw-pitch-roll angles of the user. The first processing module 122 determines with respect to these angles whether the head pose of the user is within the desired safe range. If the estimated head pose is within the desired limit, then the first processing module 122 estimates the gaze angle of the user and processes the same image input to check if the user is looking towards the correct direction; otherwise, an indication is sent to the user. If the gaze angle of the user is outside the correct range, then the indication is sent to the user. Further, the second processing module 124 of the processing unit 120 is configured to determine the drowsiness of the user. The second processing module 124 detects if the eyes of the user are closed for a defined duration and if the user is yawning. These two measures indicate the drowsiness of the user. The second processing module 124 detects the facial landmarks to detect whether the eyes of the user are closed for more than a defined duration and whether the user is yawning. The indication is sent to the user if the eyes of the user are closed beyond the defined time duration or the user is yawning. Further, the third processing module 126 detects the bad body posture of the user. The third processing module 126 detects the body key-points and posture of the user. The key-points are used to detect if the user is riding with one hand, if the user is riding with an unsafe lean angle, or if the user has an anomalous riding posture. The third processing module 126 is configured to detect each of these three conditions. If any of these scenarios is detected, then the indication is sent to the user.
[048] As further illustrated in Figure 2, the method includes detecting, by an illumination sensor unit 140, a level of ambient light around the vehicle. In an embodiment, the illumination sensor unit 140 is activated if the illumination around the vehicle ranges between 2 lux and 500 lux. The method then includes switching on, by the processing unit 120, a vehicle lighting system or an illuminator unit if the ambient light is below a predetermined threshold value of the ambient light. This ensures a safe riding condition for the user in real time. In an embodiment, the method includes detecting, by an auxiliary sensor unit 150, one or more vehicle parameters; determining, by the processing unit 120, whether the one or more vehicle parameters are below a first predetermined threshold; and switching off, by the processing unit 120, the one or more image sensors 110 and the illumination sensor unit 140 and the system 100 if the one or more vehicle parameters are below the first predetermined threshold. In an embodiment, the auxiliary sensor unit 150 includes a speed sensor, a light dependent resistor (LDR), and the like. In an embodiment, the vehicle parameters include, but are not limited to, an aggressiveness factor, which is indicative of variation in throttle input and other rider control inputs such as braking, clutch actuation, and the like, lean angle data indicative of the lean of the vehicle by the user, and the illumination around the vehicle. In an embodiment, the one or more vehicle parameters comprise a state of charge of the battery of the vehicle, and the first predetermined threshold is a state of charge of the battery ranging between 15-20%.
[049] Figure 3 illustrates a method flow diagram for monitoring the user in the vehicle, in accordance with an embodiment of the present invention. As depicted in Figure 3, at step 302, the one or more image sensors 110 capture the real time images of the user. At step 304, the processing modules detect the helmet of the user. If the helmet is not detected, then an output command is generated at step 306. If the helmet is detected, the processing modules then determine the distraction of the user, the drowsiness of the user and the bad body posture of the user.
[050] At step 308, the first processing module detects the yaw, the pitch and the roll of the head of the user to determine the distraction of the user. The first processing module then determines whether the yaw, the pitch, and the roll of the user are safe, as shown at step 310. If the yaw, the pitch, and the roll of the user are unsafe, then the output command is generated to the user in the form of the alert at step 312. If the yaw, the pitch, and the roll of the user are safe, then the gaze angle of the user is detected at step 314. At step 316, the first processing module determines whether the gaze angle of the user is safe. If the gaze angle of the user is unsafe, then the output command is generated to the user in the form of the alert at step 318. If the gaze angle of the user is safe, the entire process is repeated to determine the distraction of the user.
[051] At step 320, the second processing module detects the facial key points of the user to determine the drowsiness of the user. The second processing module then detects the eyes of the user at step 322. At step 324, the second processing module determines whether the eyes of the user are closed. If the eyes of the user are closed, then the time period of the same is calculated at step 326. If the time period of the closed eyes exceeds the predetermined threshold at step 328, then the output command is generated to the user in the form of the alert at step 330. The second processing module also determines the yawning of the user at step 332. At step 334, the second processing module determines whether the user is yawning. At step 336, the output command is generated to the user in the form of the alert if the user is yawning.
[052] At step 338, the third processing module detects the body key points of the user to determine the bad body posture of the user. The third processing module detects whether the user is riding the vehicle with one hand, as shown at step 340. If the user is riding the vehicle with one hand, as shown at step 342, then the output command is generated to the user in the form of the alert at step 344. At step 346, the third processing module detects the lean angle of the user. If the lean angle of the user exceeds the predetermined threshold, as shown at step 348, then the output command is generated to the user in the form of the alert at step 350. At step 352, the third processing module detects the riding posture of the user. If the user is riding the vehicle in an unsafe riding posture, as shown at step 354, then the output command is generated to the user in the form of the alert at step 356.
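The overall Figure 3 flow (helmet gate followed by the three branches) may be condensed into the following hypothetical orchestration, which reuses the illustrative helper functions sketched earlier (is_distracted, is_drowsy, has_bad_posture); the dictionary keys are assumed names for the per-frame measurements.

```python
def process_frame(measurements: dict, alert) -> None:
    """Helmet gate first, then the distraction, drowsiness and posture branches."""
    m = measurements
    if not m["helmet_detected"]:                                        # steps 304/306
        alert("no_helmet")
        return
    if is_distracted(m["yaw"], m["pitch"], m["roll"], m["gaze"]):       # steps 308-318
        alert("distraction")
    if is_drowsy(m["eyes_closed_s"], m["yawning"]):                     # steps 320-336
        alert("drowsiness")
    if has_bad_posture(m["one_hand"], m["lean_deg"], m["unsafe_pose"]): # steps 338-356
        alert("bad_posture")

# Example frame: helmet worn, safe head pose, prolonged eye closure -> "drowsiness".
process_frame({"helmet_detected": True, "yaw": 5.0, "pitch": 3.0, "roll": 2.0,
               "gaze": 10.0, "eyes_closed_s": 15.0, "yawning": False,
               "one_hand": False, "lean_deg": 12.0, "unsafe_pose": False},
              alert=print)
```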
[053] Figure 4 illustrates the method flow diagram for monitoring the user in the vehicle, in accordance with an alternative embodiment of the present invention. In this alternative embodiment, the distraction of the user is determined only on the basis of the yaw, pitch and roll of the head of the user and not on the basis of the gaze estimation. As depicted in Figure 4, at step 402, the one or more image sensors 110 capture the real time images of the user. At step 404, the processing modules detect the helmet of the user. If the helmet is not detected, then an output command is generated at step 406. If the helmet is detected, the processing modules then determine the distraction of the user, the drowsiness of the user and the bad body posture of the user.
[054] At step 408, the first processing module detects the yaw, the pitch and the roll of the head of the user to determine the distraction of the user. The first processing module then determines whether the yaw, the pitch, and the roll of the user are safe, as shown at step 410. If the yaw, the pitch, and the roll of the user are unsafe, then the output command is generated to the user in the form of the alert at step 412. Steps 420-456 of Figure 4 correspond to steps 320-356 as explained with reference to Figure 3 for determining the drowsiness and the bad body posture of the user.
[055] Figure 5 illustrates the user parameters detection, in accordance with an embodiment of the invention. In operation, for example, the one or more image sensors 110 capture the real time images of the user as a video input (VI), which is then converted to a video stream (VS), and the video stream (VS) is then video encoded (VE) for further processing. Thereafter, the processing unit 120 detects the one or more user parameters based on the Artificial Intelligence model. In an embodiment, the processing unit 120 detects the one or more user parameters based on a frame grabber (FG) and an image pre-processing (IPP) with the help of the one or more modules. The processing unit 120 receives input from the image sensors 110 in relation to one or more of the following: a head pose estimation (HPE), yawn detection (YD), lip movement detection, seating & standing pose detection, hand pose estimation, mobile detection (MOD), gaze estimation (GE) and eye blink detection (ED). Thereafter, the first processing module 122 of the processing unit 120 performs a helmet detection (HD), a yaw-pitch-roll estimation (YPRE) of the head of the user, a gaze estimation (GE), and a mobile detection (MOD). Based on these inputs, the first processing module 122 of the processing unit 120 determines the distraction of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report. Further, the second processing module 124 of the processing unit 120 detects a facial keypoint (FKD) of the user, the closed eyes of the user (CED), and the yawning (YD) of the user. Based on these inputs, the second processing module 124 of the processing unit 120 determines the drowsiness of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report. Lastly, the third processing module 126 of the processing unit 120 detects a body key-point of the user (BKD), the posture of the user (SD), a one hand riding posture (OHRD) of the user, riding in standing position (RSPD) by the user, an unsafe lean angle (ULAD) of the user and an unsafe riding posture (URPD) of the user. Based on these inputs, the third processing module 126 of the processing unit 120 determines the bad body posture of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report.
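A purely illustrative sketch of the image path described above (video input, video stream/encoding, frame grabber, image pre-processing, and hand-off to the detection models) is given below. OpenCV is used only as an assumed capture and pre-processing backend and the frame size is arbitrary; neither is a requirement of the system.

```python
import cv2  # assumed third-party dependency for this sketch only

def grab_and_preprocess(source: int = 0, size=(224, 224)):
    """Yield pre-processed frames ready for the detection models (HPE, GE, FKD, BKD, ...)."""
    capture = cv2.VideoCapture(source)            # video input (VI) / video stream (VS)
    try:
        while True:
            ok, frame = capture.read()            # frame grabber (FG)
            if not ok:
                break
            frame = cv2.resize(frame, size)       # image pre-processing (IPP)
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            yield frame                           # handed to the processing modules
    finally:
        capture.release()
```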
[056] Figure 6 illustrates the user parameters detection, in accordance with an alternative embodiment of the invention. In this alternative embodiment, the distraction of the user is determined only on the basis of the yaw, pitch and roll of the head of the user and not on the basis of the gaze estimation. In operation, for example, the one or more image sensors 110 capture the real time images of the user as a video input (VI), which is then converted to a video stream (VS), and the video stream (VS) is then video encoded (VE) for further processing. Thereafter, the processing unit 120 detects the one or more user parameters based on the Artificial Intelligence model. In an embodiment, the processing unit 120 detects the one or more user parameters based on a frame grabber (FG) and an image pre-processing (IPP) with the help of the one or more modules. The processing unit 120 receives input from the image sensors 110 in relation to one or more of the following: a helmet detection (HD), yawn detection (YD), lip movement detection, seating & standing pose detection, hand pose estimation, mobile detection (MOD), and eye blink detection (ED). Thereafter, the first processing module 122 of the processing unit 120 performs a helmet detection (HD), a yaw-pitch-roll estimation (YPRE) of the head of the user, and a mobile detection (MOD). Based on these inputs, the first processing module 122 of the processing unit 120 determines the distraction of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report. Further, the second processing module 124 of the processing unit 120 detects a facial keypoint (FKD) of the user, the closed eyes of the user (CED), and the yawning (YD) of the user. Based on these inputs, the second processing module 124 of the processing unit 120 determines the drowsiness of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report. Lastly, the third processing module 126 of the processing unit 120 detects a body key-point of the user (BKD), the posture of the user (SD), a one hand riding posture (OHRD) of the user, riding in standing position (RSPD) by the user, an unsafe lean angle (ULAD) of the user and an unsafe riding posture (URPD) of the user. Based on these inputs, the third processing module 126 of the processing unit 120 determines the bad body posture of the user. The feedback module 130 generates indications for the user using one or more of a voice alert functionality, a haptic alert functionality, a display alert functionality, or a user activity report.
[057] Figure 7 illustrates the software architecture in relation to the present invention. As illustrated in Figure 7, the software architecture has a first processing module 122, a second processing module 124 and a third processing module 126. Further, the software architecture has a vision processing unit 170. The vision processing unit 170 is operatively coupled to a plurality of microservices 172. Herein, the microservices are predefined libraries for supporting and enabling the functioning of the software architecture. The microservices 172 help in capturing the real time images using a hardware 178 such as the one or more image sensors 110. In operation, the microservices 172 receive the real time images from the hardware 178 through a hardware abstraction layer 176 and an operating system 174 and communicate the real time images to the vision processing unit 170. The first processing module 122, the second processing module 124 and the third processing module 126, coupled with the vision processing unit 170, determine whether the user is distracted, drowsy or riding the vehicle in an unsafe body posture. Similarly, microservice 2 relates to the detection of ambient light, wherein microservice 2 receives input from the relevant hardware 178, namely the illumination sensor unit 140, through the operating system 174 and the hardware abstraction layer 176, to be sent to the vision processing unit 170 for detection of ambient light. Based on the detection of the ambient light, the processing unit 120 determines whether to switch on the vehicle lighting system. Similarly, microservice 3 relates to the detection of the state of charge of the battery, wherein microservice 3 receives input from the relevant hardware 178 through the hardware abstraction layer 176 and the operating system 174, to be sent to the vision processing unit 170 for detection of the state of charge of the battery. Based on the detection of the state of charge of the battery, the processing unit 120 determines whether to switch off the system or the illumination sensor unit 140. Further, a debug module 180 is provided for debugging the software as per requirement.
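The layering of Figure 7 may be summarised by the following highly simplified, assumed sketch, in which each microservice pulls data from a hardware abstraction layer and hands it to the vision processing unit; all class names, method names and sample values are hypothetical.

```python
class HardwareAbstractionLayer:
    """Stands in for the hardware abstraction layer (176) over the hardware (178)."""
    def read(self, channel: str):
        # Dummy samples standing in for camera frames, ambient light and battery data.
        samples = {"camera": "frame-bytes", "ambient_light_lux": 120.0, "battery_soc": 78.0}
        return samples[channel]

class Microservice:
    """One of the predefined libraries (172) feeding the vision processing unit (170)."""
    def __init__(self, hal: HardwareAbstractionLayer, channel: str):
        self.hal, self.channel = hal, channel
    def poll(self):
        return self.channel, self.hal.read(self.channel)

hal = HardwareAbstractionLayer()
services = [Microservice(hal, "camera"),            # microservice 1: real time images
            Microservice(hal, "ambient_light_lux"), # microservice 2: ambient light
            Microservice(hal, "battery_soc")]       # microservice 3: state of charge
vision_processing_input = dict(service.poll() for service in services)  # consumed by 170
print(vision_processing_input)
```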
[058] Advantageously, the present invention provides a system and a method for monitoring a user in a vehicle, wherein the system monitors a physical state of the user using one or more modules in real time, which enhances the overall user experience and the safety of the user. Further, the present invention allows for providing an accurate, efficient, and reliable system for monitoring the user according to the riding style of the user. The present invention monitors the user activities/parameters in real time to prevent accidents based on the distraction, the drowsiness and the bad body posture of the user.
[059] Furthermore, the present invention generates an indication to be sent to the user and therefore enhances the safety of the user in real time. The indication is generated based on the physical characteristics of the user, such as drowsiness detection, distraction detection, bad pose detection, and the like. The present invention allows for providing comfort to the user based on the physical characteristics of the user. The present system is customised to generate the indication in real time without the intervention of the user, and therefore increases the performance, handling and market attractiveness of the vehicle. Further, the present invention allows the vehicle to intervene in the middle of a trip, which allows the vehicle to make better informed and correct decisions, thus enhancing the safety and monitoring of the user.
[060] In addition, the implementation of the system and method of the present invention is done in real time based on the activities of the user and the vehicle parameters, thus ensuring better safety and monitoring of the user in the vehicle. Further, the present system is cost-effective and reliable and hence, the system can be integrated with the vehicle for the safety of the user. The present invention allows for understanding the overall riding behaviour of the user and therefore, the user safety is enhanced by personalizing the vehicle of the user based on the requirements of the user.
[061] In light of the abovementioned advantages and the technical advancements provided by the disclosed method and system, the claimed steps as discussed above are not routine, conventional, or well understood in the art, as the claimed steps enable solutions to the existing problems in conventional technologies. Further, the claimed steps clearly bring an improvement in the functioning of the device itself, as the claimed steps provide a technical solution to a technical problem.
[062] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[063] While the present invention has been described with respect to certain embodiments, it will be apparent to those skilled in the art that various changes and modification may be made without departing from the scope of the invention as defined in the following claims.
List of Reference Numerals
100: System for monitoring a user in a vehicle
110: One or More Image Sensors
120: Processing Unit
122: First Processing Module
124: Second Processing Module
126: Third Processing Module
130: Feedback Module
140: Illumination Sensor Unit
150: Auxiliary Sensor Unit
170: Vision Processing Unit
172: Microservices
174: Operating System
176: Hardware Abstraction Layer
178: Hardware
180: Debug Module
200: Method for monitoring a user in a vehicle
Claims:
1. A system (100) for monitoring a user in a vehicle, the system (100) comprising:
one or more image sensors (110), the one or more image sensors (110) being configured to capture real time images of one or more users riding the vehicle;
a processing unit (120), the processing unit (120) being configured to receive the real time images of the one or more users riding the vehicle from the one or more image sensors (110), and the processing unit (120) having one or more processing modules, wherein the one or more processing modules being configured to determine one or more user parameters based on the real time images of the one or more users; and
a feedback module (130), the feedback module (130) being configured to receive an input from the processing unit (120) if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters, and the feedback module (130) being configured to generate an output command.
2. The system (100) as claimed in claim 1, wherein the plurality of processing modules comprise at least a first processing module (122), a second processing module (124) and a third processing module (126), wherein the first processing module (122) being configured to determine distraction of the one or more users based on the real time images of the one or more users, the second processing module (124) being configured to determine drowsiness of the one or more users based on the real time images of the one or more users, and the third processing module (126) being configured to determine unsafe riding posture of the one or more users based on the real time images of the one or more users.
3. The system (100) as claimed in claim 2, wherein the first processing module (122) is configured to:
detect one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images; and
determine if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds a first predetermined value, wherein the feedback module (130) receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds the first predetermined value.
4. The system (100) as claimed in claim 3, wherein the first processing module (122) is configured to:
estimate a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value; and
determine if the estimated gaze angle of the one or more users exceeds a second predetermined value, wherein the feedback module (130) receives an input if the estimated gaze angle of the one or more users exceeds the second predetermined value.
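By way of illustration only, the following sketch shows one possible decision flow for the distraction checks of claims 3 and 4. The pose and gaze estimators and the numeric limits are hypothetical placeholders and do not form part of the claims.

```python
# Illustrative sketch only: distraction logic in the spirit of claims 3 and 4.
# Estimators and threshold values are hypothetical placeholders.

FIRST_PREDETERMINED_VALUE_DEG = 30.0   # assumed head-pose limit
SECOND_PREDETERMINED_VALUE_DEG = 25.0  # assumed gaze-angle limit

def estimate_head_pose(frame):
    """Placeholder: returns (yaw, pitch, roll) of the rider's head in degrees."""
    return 10.0, 5.0, 2.0

def estimate_gaze_angle(frame) -> float:
    """Placeholder: returns the rider's gaze angle in degrees."""
    return 8.0

def is_distracted(frame) -> bool:
    yaw, pitch, roll = estimate_head_pose(frame)
    # Claim 3: any one of yaw, pitch or roll exceeding the first predetermined value.
    if any(abs(angle) > FIRST_PREDETERMINED_VALUE_DEG for angle in (yaw, pitch, roll)):
        return True
    # Claim 4: gaze angle is evaluated only when the head pose is within limits.
    return abs(estimate_gaze_angle(frame)) > SECOND_PREDETERMINED_VALUE_DEG
```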
5. The system (100) as claimed in claim 2, wherein the second processing module (124) is configured to:
detect whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images; and
detect whether the one or more users are yawning based on the real time images, wherein the feedback module (130) receives an input if at least one of the following is detected: the eyes of the one or more users are closed for a time period exceeding the third predetermined time period, or the one or more users are yawning.
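By way of illustration only, the sketch below shows how the drowsiness conditions of claim 5 might be combined, assuming hypothetical eye-state and yawn detectors and frame timestamps from which the continuous eye-closure duration can be accumulated.

```python
# Illustrative sketch only: drowsiness check in the spirit of claim 5.
# The detectors feeding eyes_closed / yawning and the time limit are hypothetical.

THIRD_PREDETERMINED_TIME_S = 1.5  # assumed maximum continuous eye-closure time

class DrowsinessMonitor:
    def __init__(self) -> None:
        self._eyes_closed_since = None  # timestamp when eyes were first seen closed

    def update(self, eyes_closed: bool, yawning: bool, timestamp_s: float) -> bool:
        """Returns True when the feedback module (130) should receive an input."""
        if eyes_closed:
            if self._eyes_closed_since is None:
                self._eyes_closed_since = timestamp_s
            closed_for = timestamp_s - self._eyes_closed_since
        else:
            self._eyes_closed_since = None
            closed_for = 0.0
        # Alert if eyes stay closed beyond the third predetermined time period,
        # or if the rider is yawning.
        return closed_for > THIRD_PREDETERMINED_TIME_S or yawning
```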
6. The system (100) as claimed in claim 2, wherein the third processing module (126) is configured to:
detect whether the one or more users are riding with one hand;
detect a lean angle of the one or more users based on the real time images;
determine whether the lean angle of the one or more users exceeds a fourth predetermined value; and
detect whether the one or more users are riding in an unsafe riding posture, wherein the feedback module (130) receives an input if at least one of the following is detected: the one or more users are riding the vehicle with one hand, the lean angle of the one or more users exceeds the fourth predetermined value, or the one or more users are riding in the unsafe riding posture.
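By way of illustration only, the following sketch shows one way the posture conditions of claim 6 might be combined. The individual detectors and the fourth predetermined value are hypothetical placeholders.

```python
# Illustrative sketch only: posture logic in the spirit of claim 6.
# Detectors and the lean-angle limit are hypothetical placeholders.

FOURTH_PREDETERMINED_VALUE_DEG = 40.0  # assumed rider lean-angle limit

def detect_one_hand_riding(frame) -> bool:
    """Placeholder: True if only one hand is detected on the handlebar."""
    return False

def estimate_lean_angle(frame) -> float:
    """Placeholder: rider lean angle in degrees from the real-time image."""
    return 15.0

def detect_unsafe_posture(frame) -> bool:
    """Placeholder: True for other unsafe riding postures."""
    return False

def posture_alert_required(frame) -> bool:
    # Feedback is triggered when at least one of the claimed conditions is met.
    return (
        detect_one_hand_riding(frame)
        or estimate_lean_angle(frame) > FOURTH_PREDETERMINED_VALUE_DEG
        or detect_unsafe_posture(frame)
    )
```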
7. The system (100) as claimed in claim 1, wherein the output command generated by the feedback module (130) comprises at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users.
8. The system (100) as claimed in claim 1, comprising an illumination sensor unit (140), the illumination sensor unit (140) being in communication with the processing unit (120) and being configured to detect a level of ambient light around the vehicle, and the processing unit (120) being configured to switch on a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.
9. The system (100) as claimed in claim 1, comprising an auxiliary sensor unit (150), the auxiliary sensor unit (150) being in communication with the processing unit (120) and being configured to detect one or more vehicle parameters; and
the processing unit (120) being configured to:
determine whether the one or more vehicle parameters are below a first predetermined threshold; and
switch off the one or more image sensors (110), the illumination sensor unit (140), or the system (100), if the one or more vehicle parameters are below the first predetermined threshold.
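By way of illustration only, the sketch below shows simplified housekeeping logic in the spirit of claims 8 and 9. The ambient-light and vehicle-parameter thresholds, and the sensor and actuator interfaces, are hypothetical placeholders.

```python
# Illustrative sketch only: lighting and power management in the spirit of claims 8 and 9.
# Threshold values and interfaces are hypothetical placeholders.

AMBIENT_LIGHT_THRESHOLD_LUX = 50.0   # assumed low-light threshold for claim 8
FIRST_VEHICLE_PARAM_THRESHOLD = 5.0  # assumed threshold for claim 9, e.g. speed in km/h

def manage_lighting(ambient_light_lux: float, lighting_on: bool) -> bool:
    """Claim 8: switch on the vehicle lighting system when ambient light is low."""
    if ambient_light_lux < AMBIENT_LIGHT_THRESHOLD_LUX:
        return True
    return lighting_on

def manage_power(vehicle_parameter: float, system_on: bool) -> bool:
    """Claim 9: switch off the image sensors / illumination sensor unit / system
    when the monitored vehicle parameter falls below the first predetermined threshold."""
    if vehicle_parameter < FIRST_VEHICLE_PARAM_THRESHOLD:
        return False
    return system_on
```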
10. A method (200) for monitoring a user in a vehicle, the method (200) comprising the steps of:
capturing, by one or more image sensors (110), real-time images of one or more users riding the vehicle;
receiving, by a processing unit (120), the real-time images of the one or more users riding the vehicle captured by the one or more image sensors (110);
determining, by one or more processing modules of the processing unit (120), one or more user parameters based on the real time images of the one or more users;
receiving, by a feedback module (130), an input from the processing unit (120) if any of the one or more user parameters exceed a predetermined threshold of the one or more user parameters; and
generating, by the feedback module (130), an output command.
11. The method (200) as claimed in claim 10, the method (200) comprising the steps of:
determining, by a first processing module (122) of the processing unit (120), distraction of the one or more users based on the real time images of the one or more users;
determining, by a second processing module (124) of the processing unit (120), drowsiness of the one or more users based on the real time images of the one or more users; and
determining, by a third processing module (126) of the processing unit (120), unsafe riding posture of the one or more users based on the real time images of the one or more users.
12. The method (200) as claimed in claim 11, the method (200) comprising the steps of:
detecting, by the first processing module (122), one or more of a yaw, a pitch and a roll of a head of the one or more users based on the real time images; and
determining, by the first processing module (122), if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds a first predetermined value, wherein the feedback module (130) receives an input if any one of the yaw, the pitch and the roll of the head of the one or more users exceeds the first predetermined value.
13. The method (200) as claimed in claim 12, the method (200) comprising the steps of:
estimating, by the first processing module (122), a gaze angle of the one or more users based on the real time images, if the detected yaw, pitch and roll are within the first predetermined value; and
determining, by the first processing module (122), if the estimated gaze angle of the one or more users exceeds a second predetermined value, wherein the feedback module (130) receives an input if the estimated gaze angle of the one or more users exceeds the second predetermined value.
14. The method (200) as claimed in claim 11, the method (200) comprising the steps of:
detecting, by the second processing module (124), whether eyes of the one or more users are closed for a time period exceeding a third predetermined time period based on the real time images; and
detecting, by the second processing module (124), whether the one or more users are yawning based on the real time images, wherein the feedback module (130) receives an input if at least one of the following is detected: the eyes of the one or more users are closed for a time period exceeding the third predetermined time period, or the one or more users are yawning.
15. The method (200) as claimed in claim 11, the method (200) comprising the steps of:
detecting, by the third processing module (126), whether the one or more users are riding with one hand;
detecting, by the third processing module (126), a lean angle of the one or more users based on the real time images;
determining, by the third processing module (126), whether the lean angle of the one or more users exceeds a fourth predetermined value; and
detecting, by the third processing module (126), whether the one or more users are riding in an unsafe riding posture, wherein the feedback module (130) receives an input if at least one of the following is detected: the one or more users are riding the vehicle with one hand, the lean angle of the one or more users exceeds the fourth predetermined value, or the one or more users are riding in the unsafe riding posture.
16. The method (200) as claimed in claim 10, wherein the output command generated by the feedback module (130) comprises at least one of: providing a visual alert to the one or more users, providing a haptic alert to the one or more users, and providing an audio alert to the one or more users.
17. The method (200) as claimed in claim 10, the method (200) comprising the steps of:
detecting, by an illumination sensor unit (140), a level of ambient light around the vehicle; and
switching on, by the processing unit (120), a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.
18. The method (200) as claimed in claim 10, the method (200) comprising the steps of:
detecting, by an auxiliary sensor unit (150), one or more vehicle parameters;
determining, by the processing unit (120), whether the one or more vehicle parameters are below a first predetermined threshold; and
switching off, by the processing unit (120), the one or more image sensors (110), the illumination sensor unit (140), or the system (100), if the one or more vehicle parameters are below the first predetermined threshold.