Abstract: DROWSINESS DETECTION AND ALERT SYSTEM. A drowsiness detection and alert system (100) is disclosed. The system (100) comprises an image capturing unit (102) and an image processing unit (104). A controller unit (106) is configured to: receive the facial landmarks marked on the face identified in the frame from the image processing unit (104); calculate an Eye Aspect Ratio (EAR) and a Mouth Aspect Ratio (MAR) from the received facial landmarks using a PERCLOS algorithm; compare the calculated Eye Aspect Ratio (EAR) with a first threshold value, and the Mouth Aspect Ratio (MAR) with a second threshold value; determine a condition of drowsiness when the Eye Aspect Ratio (EAR) is less than the first threshold value and the Mouth Aspect Ratio (MAR) is greater than the second threshold value; and generate an alert upon detecting the drowsiness condition. The system (100) eliminates physical sensors attached to the user by relying on camera-based monitoring. Claims: 10, Figures: 5
Description: BACKGROUND
Field of Invention
Embodiments of the present invention generally relate to an automobile driving accessory and particularly to a drowsiness detection and alert system.
Description of Related Art
Driver fatigue is a critical factor contributing to road accidents worldwide, resulting in severe injuries and fatalities. To address this issue, numerous systems have been developed to detect and alert drivers when signs of drowsiness are identified. These initiatives stem from the growing number of accidents linked to drowsy driving, often caused by extended hours on the road, particularly among professional drivers and long-distance travelers.
Historically, fatigue detection relied on intermittent checks or self-reporting mechanisms, which lack real-time accuracy and depend heavily on subjective input or non-continuous monitoring. Early commercial solutions, such as dashboard warning lights and auditory alarms, typically responded to vehicle behavior like sudden lane deviations or erratic steering. However, these systems often failed to detect early signs of driver drowsiness, activating only after a decline in driving performance became apparent.
Recent technological advancements have led to more sophisticated solutions, including systems that monitor physiological indicators such as eye movement, head position, and facial expressions. While these systems offer improved detection accuracy, they frequently require additional hardware like external sensors and cameras, which increases implementation costs and complexity due to their integration with existing vehicle systems.
Despite these advancements, current drowsiness detection systems face notable limitations. False positives remain a challenge, with routine behaviors like mirror checks or brief eye closures being misinterpreted as signs of fatigue. At the same time, genuine signs of drowsiness are overlooked, resulting in delayed or missed alerts. Additionally, the lack of continuous monitoring and adaptability to varying environmental conditions restricts the effectiveness of these systems in preventing accidents.
There is thus a need for an improved and advanced drowsiness detection and alert system that can address the aforementioned limitations in a more efficient manner.
SUMMARY
Embodiments in accordance with the present invention provide a drowsiness detection and alert system. The system comprises an image capturing unit, arranged in a visual proximity of a user, and adapted to capture a real-time video of the user. The system further comprises an image processing unit, adapted to receive the captured real-time video of the user. The image processing unit is configured to: identify a presence of a face in a frame of the received real-time video, wherein the presence of the face is identified using a Haar Cascade algorithm; and mark facial landmarks on the face identified in the frame. A controller unit is communicatively connected to the image processing unit. The controller unit is configured to: receive the facial landmarks marked on the face identified in the frame from the image processing unit; calculate an Eye Aspect Ratio (EAR) and a Mouth Aspect Ratio (MAR) from the received facial landmarks using a PERCLOS algorithm; compare the calculated Eye Aspect Ratio (EAR) with a first threshold value, and the Mouth Aspect Ratio (MAR) with a second threshold value; determine a condition of drowsiness when the Eye Aspect Ratio (EAR) is less than the first threshold value and the Mouth Aspect Ratio (MAR) is greater than the second threshold value; and generate an alert upon detecting the drowsiness condition.
Embodiments in accordance with the present invention further provide a method for detecting drowsiness and alerting a user using a drowsiness detection and alert system. The method comprises the steps of: receiving facial landmarks marked on a face identified in a frame from an image processing unit; calculating an Eye Aspect Ratio (EAR) and a Mouth Aspect Ratio (MAR) from the received facial landmarks using a PERCLOS algorithm; comparing the calculated Eye Aspect Ratio (EAR) with a first threshold value, and the Mouth Aspect Ratio (MAR) with a second threshold value; determining a condition of drowsiness when the Eye Aspect Ratio (EAR) is less than the first threshold value and the Mouth Aspect Ratio (MAR) is greater than the second threshold value; and generating an alert upon detecting the drowsiness condition.
Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a drowsiness detection and alert system.
Next, embodiments of the present application may provide a drowsiness detection system that continuously monitors the driver's alertness, providing more accurate and timely detection of drowsiness.
Next, embodiments of the present application may provide a drowsiness detection system that uses AI-driven deep learning algorithms to analyze eye movements, facial expressions, and head positioning, eliminating the need for physical sensors attached to the driver, making it more comfortable and user-friendly.
Next, embodiments of the present application may provide a drowsiness detection system that uses advanced algorithms to assess multiple indicators of drowsiness such as Eye Aspect Ratio, Mouth Opening Ratio, Head Position, and so forth, the system reduces false positives and negatives, offering a higher level of detection precision.
Next, embodiments of the present application may provide a drowsiness detection system that dynamically adjusts its sensitivity based on the individual driver’s fatigue profile and external conditions like the time of day or driving duration, providing personalized monitoring.
Next, embodiments of the present application may provide a drowsiness detection system that stores and analyzes data in a cloud storage, allowing continuous learning and improvements in the system’s predictive accuracy. This further enables fleet operators to track and analyze fatigue trends across multiple drivers over time.
Next, embodiments of the present application may provide a drowsiness detection system that eliminates physical sensors attached to the driver and, by relying on camera-based monitoring, lowers hardware costs and simplifies installation, making it easier to integrate into a variety of vehicles without major modifications.
Next, embodiments of the present application may provide a drowsiness detection system that can provide suggestions for nearby rest stops if fatigue levels become critical, further enhancing driver safety.
Next, embodiments of the present application may provide a drowsiness detection system that can factor in environmental conditions like lighting and weather, adjusting its fatigue detection models for more accurate assessment in varying driving scenarios.
Next, embodiments of the present application may provide a drowsiness detection system that delivers alerts through auditory, visual, and haptic feedback, ensuring that the driver is effectively informed of their fatigue status and reducing the risk of accidents.
Next, embodiments of the present application may provide a drowsiness detection system that is highly scalable and can be improved over time with updates to the software, without requiring significant hardware changes.
These and other advantages will be apparent from the present application of the embodiments described herein.
The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
FIG. 1A illustrates a drowsiness detection and alert system, according to an embodiment of the present invention;
FIG. 1B illustrates a calculation of an Eye Aspect Ratio (EAR), according to an embodiment of the present invention;
FIG. 1C illustrates a calculation of a Mouth Aspect Ratio (MAR), according to an embodiment of the present invention;
FIG. 2 illustrates a block diagram of a controller unit of the drowsiness detection and alert system, according to an embodiment of the present invention; and
FIG. 3 depicts a flowchart of a method for detecting drowsiness and alerting a user using the drowsiness detection and alert system, according to an embodiment of the present invention.
The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
FIG. 1A illustrates a drowsiness detection and alert system 100 (hereinafter referred to as the system 100), according to an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be installed and/or retrofitted in a vehicle. Further, the system 100 may be adapted to monitor actions and facial expressions of a user driving the corresponding vehicle, in an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be adapted to alert the user when indications of drowsiness, sleepiness, tiredness, and so forth may be detected from the monitored actions and the facial expressions.
The vehicle may be, but not limited to, a passenger vehicle, a private vehicle, a freight carrier, a locomotive, an aerial vehicle, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the vehicle in which the system 100 may be installed and/or retrofitted, including known, related art, and/or later developed technologies.
The system 100 may comprise an image capturing unit 102, an image processing unit 104, a controller unit 106, and an alert unit 108.
In an embodiment of the present invention, the image capturing unit 102 may be installed in a driver cabin of the vehicle. The image capturing unit 102 may be installed in such a location, angle, and orientation that the user may be in an unobstructed field of view of the image capturing unit 102, in an embodiment of the present invention. The location of the installation of the image capturing unit 102 may be, but not limited to, a handle, a steering wheel, a steering column, and so forth. Embodiments of the present invention are intended to include or otherwise cover any location of the installation of the image capturing unit 102, including known, related art, and/or later developed technologies.
In an embodiment of the present invention, the image capturing unit 102 may be configured to capture a real-time video of the user. In an embodiment of the present invention, the image capturing unit 102 may capture the real-time video of the user in high resolution. The high resolution of the real-time video may ensure a clear and high-definition video of the user. Further, as the video may be captured in high definition, the photographic details such as, but not limited to, brightness, contrast, white balance, temperature, and so forth may be captured accurately.
In an embodiment of the present invention, the image capturing unit 102 may further comprise infrared emitters (not shown). The infrared emitters (not shown) may enable a capture of clear real-time videos in low light scenarios.
In an embodiment of the present invention, the image capturing unit 102 may further comprise a stabilization hardware (not shown), such as, but not limited to, an Optical Image Stabilizer (OIS), an Electronic Image Stabilizer (EIS), a sensor shifter, and so forth. The stabilization hardware may prevent blurry and jittery real-time video of the user that may result from vibrations of the vehicle.
In another embodiment of the present invention, the image capturing unit 102 may be adapted to capture a plurality of images of the user. The images may be captured with a pre-defined time delay between successive images, in an embodiment of the present invention.
In an embodiment of the present invention, the real-time video and/or the plurality of images captured by the image capturing unit 102 may undergo a process of nomenclature. The process of nomenclature may provide a unique name to each of the real-time video and/or the plurality of images. Further, the real-time video and/or the plurality of images with the assigned unique name may be saved into a repository (not shown). The repository may be, but not limited to, a memory unit, a database, a dataset, a spreadsheet, a text document, and so forth. Embodiments of the present invention are intended to include or otherwise cover any repository for storage of the real-time video and/or the plurality of images, including known, related art, and/or later developed technologies.
The image capturing unit 102 may be, but not limited to, a still camera, a video camera, a color balancer camera, a thermal camera, an infrared camera, a telephoto camera, a wide-angle camera, a macro camera, a Close-Circuit Television (CCTV) camera, a web camera, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the image capturing unit 102, including known, related art, and/or later developed technologies.
In an embodiment of the present invention, the captured real-time video of the user may be transmitted to a central control room (not shown) of a vehicle service provider (not shown). The captured real-time video of the user transmitted to the central control room may be manually monitored for drowsiness detection. In another embodiment of the present invention, the captured real-time video of the user may be transmitted to the image processing unit 104.
In an embodiment of the present invention, the image processing unit 104 may be a physical peripheral that may be physically installed with the image capturing unit 102 and may be configured to communicate with the controller unit 106. In another embodiment of the present invention, the image processing unit 104 may be remotely installed and virtually configured on a cloud based server (not shown). The virtual configuration of the image processing unit 104 may be achieved using means such as, but not limited to, an Oracle VMWare, a Sandbox, a VMware Horizon Client, and so forth. Embodiments of the present invention are intended to include or otherwise cover any means for achieving the virtual configuration of the image processing unit 104 over a cloud based server.
In an embodiment of the present invention, the image processing unit 104 may be configured to receive the captured real-time video of the user. The image processing unit 104 may further be configured to disintegrate the received real-time video to access frames in the received real-time video. Further, upon disintegration, the image processing unit 104 may identify a presence of a face in one or more of the frames in the disintegrated real-time video. The presence of the face in one or more of the frames may be identified using algorithms such as, but not limited to, an OpenCV Haarcascade, an OpenCV Deep Neural Network (DNN), a Dlib Algorithm, a Multi-Task Cascaded Convolutional Neural Network (MTCNN), a FaceNet Algorithm (FNA), and so forth. In a preferred embodiment of the present invention, the presence of the face in one or more of the frames may be identified using a Haar Cascade algorithm. In another preferred embodiment of the present invention, the presence of the face in one or more of the frames may be identified using a Python Computer Vision (CV) algorithm. Embodiments of the present invention are intended to include or otherwise cover any algorithm for identification of the presence of the face in one or more of the frames of the real-time video, including known, related art, and/or later developed technologies.
Further, upon identification of the presence of the face in one or more of the frames of the real-time video, the image processing unit 104 may be configured to mark facial landmarks on the face identified in one or more of the frames, in an embodiment of the present invention. In an embodiment of the present invention, the facial landmarks may be marked on a predefined location of the face identified in one or more of the frames. The location on the face for marking of the facial landmarks may be, but not limited to, eyes, lips, a mouth, a nose, a head, and so forth. Embodiments of the present invention are intended to include or otherwise cover any location on the face, identified in one or more of the frames, for marking of the facial landmarks, including known, related art, and/or later developed technologies. The facial landmarks may be marked on the predefined location of the face using algorithms such as, but not limited to, a FacemarkLBF model, a Dlib library, a MediaPipe model, and so forth. In a preferred embodiment of the present invention, the facial landmarks may be marked on the predefined location of the face using a Python Computer Vision (CV) algorithm. Embodiments of the present invention are intended to include or otherwise cover any algorithm for marking of the facial landmarks on the predefined location of the face, including known, related art, and/or later developed technologies.
Further, after marking of the facial landmarks on the face identified in one or more of the frames, the image processing unit 104 may transmit the marked facial landmarks on the face to the controller unit 106.
In an embodiment of the present invention, the controller unit 106 may be communicatively connected to the image processing unit 104. The controller unit 106 may be configured to compare the facial landmarks marked on the face identified in one or more of the frames with the real-time video and/or the plurality of images with the assigned unique name saved into the repository. If the face identified in one or more of the frames matches the face captured in the real-time video and/or the plurality of images, then the controller unit 106 may be configured to execute computer-executable instructions to generate an output relating to the system 100. Else, the controller unit 106 may be configured to store the face identified in one or more of the frames in the repository.
The controller unit 106 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the controller unit 106 including known, related art, and/or later developed technologies. In an embodiment of the present invention, the controller unit 106 may further be explained in conjunction with FIG. 2.
In an embodiment of the present invention, the alert unit 108 may be installed in the driver cabin of the vehicle. The alert unit 108 may be installed in an audio-visual proximity of the user. The alert unit 108 may be installed in association with the seating arrangement of the user. The alert unit 108 may be adapted to alert the user of the corresponding vehicle. The alert unit 108 may alert the user when the user may be exhibiting indications of a drowsiness condition, a sleepiness condition, a tiredness condition, a drooping of eyelids, a change in facial expressions, and so forth. Embodiments of the present invention are intended to include or otherwise cover any abnormal behavior that may be exhibited by the user, including known, related art, and/or later developed technologies.
In an embodiment of the present invention, the alert delivered by the alert unit 108 may be in a form such as, but not limited to, an alarm, periodically synchronized vibrations, flashing lights, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of alert that may be delivered by the alert unit 108, including known, related art, and/or later developed technologies.
Further, the alert delivered by the alert unit 108 may correspond to the abnormal behavior that may be exhibited by the user. In an exemplary scenario, the alert unit 108 may be adapted to deliver the alarm, flash the lights, and initiate the vibration when the user may be exhibiting the drowsiness condition. However, the alert unit 108 may be adapted to only deliver the alarm when the user may be in an awake condition but may not be wearing a seat belt or may not be paying attention to the road. Moreover, the alert unit 108 may be adapted to only deliver the alarm and initiate the vibration when the user may be in an awake condition and paying attention to the road but may not be physically in contact with a handle and/or a steering wheel of the vehicle.
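The exemplary scenario above maps observed conditions to alert modalities. A minimal sketch of that mapping follows; the function and argument names are illustrative assumptions, not taken from the description:

```python
def select_alerts(drowsy, seatbelt_on, attentive, hands_on_wheel):
    """Choose alert modalities per the exemplary scenario.

    Drowsiness triggers alarm + flashing lights + vibration; an awake
    user without a seat belt or attention gets only the alarm; an
    awake, attentive user with hands off the wheel gets alarm +
    vibration. All argument names are hypothetical.
    """
    if drowsy:
        return {"alarm", "lights", "vibration"}
    if not seatbelt_on or not attentive:
        return {"alarm"}
    if not hands_on_wheel:
        return {"alarm", "vibration"}
    return set()  # no abnormal behavior detected

# A drowsy user receives all three modalities at once.
modalities = select_alerts(True, True, True, True)
```

In practice such a mapping would be driven by the data determination module 206 rather than by boolean flags, but the branch structure mirrors the scenario described.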
The alert unit 108 may be, but not limited to, a Light Emitting Diode (LED), a buzzer, a speaker, pneumatically activated vibrators, a display unit, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the alert unit 108, including known, related art, and/or later developed technologies.
Further, the alert unit 108 may comprise a reset button (not shown). The reset button may be adapted to reset and/or deactivate the alert unit 108 after the alert unit 108 may have been activated. The reset button may be pressed by the user after the user may have recovered from drowsiness and may be paying attention to driving the vehicle.
FIG. 1B illustrates a calculation of an Eye Aspect Ratio (EAR), according to an embodiment of the present invention. In an embodiment of the present invention, the Eye Aspect Ratio (EAR) may be a measure of a ratio of a height to a width of the eyes. The calculation of the Eye Aspect Ratio (EAR) may enable a detection of an eye blinking or eye closing action by the user. The Eye Aspect Ratio (EAR) may be mathematically represented using equation 1:
Eye Aspect Ratio (EAR)=height of eyes(h):width of eyes(w) --- 1
Further, the Eye Aspect Ratio (EAR) may be calculated using an equation 2:
Eye Aspect Ratio (EAR)=(Height of eyes(h))/(Width of eyes(w)) ---2
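Equation 2 can be sketched in code as follows. This is one common realization of the height-to-width ratio, assuming the six-point eye landmark ordering used by libraries such as dlib (corner, upper pair, corner, lower pair), with the two vertical distances averaged to give the height; the exact landmark scheme is an assumption, not specified in the description:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """EAR per equation 2: eye height divided by eye width.

    `eye` is a list of six (x, y) landmarks p1..p6, where p1/p4 are
    the horizontal corners and (p2, p6), (p3, p5) are the vertical
    pairs; the two vertical distances are averaged as the height.
    """
    height = (euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])) / 2.0
    width = euclidean(eye[0], eye[3])
    return height / width

# An open eye yields a noticeably larger EAR than a nearly closed one.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
```

With these illustrative coordinates the open eye gives an EAR of 1.0 and the nearly closed eye about 0.1, which would fall below the preferred first threshold of 0.25.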
FIG. 1C illustrates a calculation of a Mouth Aspect Ratio (MAR), according to an embodiment of the present invention. In an embodiment of the present invention, the Mouth Aspect Ratio (MAR) may be a measure of a ratio of a height to a width of the mouth. The calculation of the Mouth Aspect Ratio (MAR) may enable a detection of a yawning action by the user. The Mouth Aspect Ratio (MAR) may be mathematically represented using equation 3:
Mouth Aspect Ratio (MAR)=height of mouth(h):width of mouth(w) --- 3
Further, the Mouth Aspect Ratio (MAR) may be calculated using an equation 4:
Mouth Aspect Ratio (MAR)=(Height of mouth (h))/(Width of mouth (w)) ---4
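Equation 4 admits a similarly small sketch. The four landmarks used here (upper/lower lip midpoints and the two mouth corners) are an illustrative assumption about which mouth landmarks define the height and width:

```python
import math

def mouth_aspect_ratio(top, bottom, left, right):
    """MAR per equation 4: mouth height divided by mouth width.

    `top`/`bottom` are assumed to be the upper and lower lip midpoint
    landmarks, and `left`/`right` the two mouth corners.
    """
    height = math.hypot(top[0] - bottom[0], top[1] - bottom[1])
    width = math.hypot(left[0] - right[0], left[1] - right[1])
    return height / width

# A yawn opens the mouth, raising MAR well above a closed-mouth value.
yawning = mouth_aspect_ratio((2, 2), (2, -2), (0, 0), (4, 0))
closed = mouth_aspect_ratio((2, 0.4), (2, -0.4), (0, 0), (4, 0))
```

With these illustrative coordinates the yawning mouth gives a MAR of 1.0, above the preferred second threshold of 0.75, while the closed mouth gives about 0.2.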
FIG. 2 illustrates a block diagram of the controller unit 106 of the system 100, according to an embodiment of the present invention. The controller unit 106 may comprise the computer-executable instructions in form of programming modules such as a data receiving module 200, a data calculation module 202, a data comparison module 204, a data determination module 206, and an alert module 208.
In an embodiment of the present invention, the data receiving module 200 may be configured to receive the facial landmarks marked on the face identified in the frame from the image processing unit 104. Further, upon receipt of the facial landmarks marked on the face, the data receiving module 200 may be configured to transmit the received facial landmarks to the data calculation module 202.
The data calculation module 202 may be activated upon receipt of the facial landmarks from the data receiving module 200. In an embodiment of the present invention, the data calculation module 202 may be configured to calculate the Eye Aspect Ratio (EAR) from the received facial landmarks. In an embodiment of the present invention, the data calculation module 202 may further be configured to calculate the Mouth Aspect Ratio (MAR) from the received facial landmarks. In a preferred embodiment of the present invention, the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) may be calculated using a PERCLOS (Percentage of Eye Closure) algorithm. Embodiments of the present invention are intended to include or otherwise cover any algorithm for calculation of the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR), including known, related art, and/or later developed technologies.
The data calculation module 202 may further be configured to transmit the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) to the data comparison module 204.
The data comparison module 204 may be activated upon receipt of the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) from the data calculation module 202. The data comparison module 204 may be configured to compare the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) with a first threshold value and a second threshold value respectively, in an embodiment of the present invention.
In an embodiment of the present invention, the data comparison module 204 may be configured to calibrate the first threshold value and the second threshold value by accessing a set of 'n' frames from the real-time video of the user captured by the image capturing unit 102. The set of 'n' frames may be assessed using advanced machine learning techniques. Upon assessment of the 'n' frames, the first threshold value and the second threshold value may be derived. In an embodiment of the present invention, 'n' may be any natural number.
However, if the first threshold value and the second threshold value may be unable to detect and flag the drowsiness condition in the user, then the data comparison module 204 may be configured to recalibrate the first threshold value and the second threshold value by accessing more than 300 frames from the real-time video of the user captured by the image capturing unit 102. The data comparison module 204 may be configured to continue calibrating the first threshold value and the second threshold value, using progressively more frames from the real-time video of the user captured by the image capturing unit 102, until a first threshold value and a second threshold value may be derived that are able to detect and flag the drowsiness condition in the user.
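The description does not specify the calibration formula, so the sketch below makes a loud assumption: the threshold is set a fixed number of standard deviations below the mean of the per-frame measurements from an alert user, so that normal blinking stays above it. Function and variable names are hypothetical:

```python
from statistics import mean, pstdev

def calibrate_threshold(samples, k=1.0):
    """Derive a threshold from per-frame measurements of an alert user.

    ASSUMPTION: the threshold is k population standard deviations
    below the sample mean; the actual calibration technique is left
    open in the description ('advanced machine learning techniques').
    """
    return mean(samples) - k * pstdev(samples)

# Illustrative EAR samples from 'n' frames of an alert user.
ear_samples = [0.30, 0.33, 0.31, 0.34, 0.32, 0.30, 0.33, 0.31]
first_threshold = calibrate_threshold(ear_samples)
```

If the derived threshold fails to flag drowsiness, the same function would simply be re-run over a larger window of frames, matching the progressive recalibration described above.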
Further, the derived first threshold value and the second threshold value may be normalized. The normalization of the first threshold value and the second threshold value may enable the system 100 to work for any user exhibiting any kind of facial features and facial construction.
In an embodiment of the present invention, the first threshold value may be in a range from 0.10 to 0.40. In a preferred embodiment of the present invention, the first threshold value may be 0.25. Embodiments of the present invention are intended to include or otherwise cover any first threshold value.
In an embodiment of the present invention, the second threshold value may be in a range from 0.50 to 1.00. In a preferred embodiment of the present invention, the second threshold value may be 0.75. Embodiments of the present invention are intended to include or otherwise cover any second threshold value.
In an embodiment of the present invention, the data comparison module 204 may be configured to compare the calculated Eye Aspect Ratio (EAR) with the first threshold value. Upon comparison, if the calculated Eye Aspect Ratio (EAR) may be greater than the first threshold value, then the data comparison module 204 may be configured to reactivate the data receiving module 200 to continue receiving the facial landmarks marked on the face identified in the frame from the image processing unit 104.
However, if the calculated Eye Aspect Ratio (EAR) may be less than the first threshold value, then the data comparison module 204 may be configured to compare the Mouth Aspect Ratio (MAR) to the second threshold value. Upon comparison, if the calculated Mouth Aspect Ratio (MAR) may be less than the second threshold value, then data comparison module 204 may be configured to reactivate the data receiving module 200 to continue receiving the facial landmarks marked on the face identified in the frame from the image processing unit 104.
However, if the calculated Mouth Aspect Ratio (MAR) may be greater than the second threshold value, then the data comparison module 204 may be configured to transmit an activation signal to the data determination module 206.
The data determination module 206 may be activated upon receipt of the activation signal from the data comparison module 204. The data determination module 206 may be configured to detect an orientation of the head of the user. The data determination module 206 may be configured to detect an orientation of the head from the facial landmarks received from the data receiving module 200. Further, the data determination module 206 may be configured to calculate a percentage of time eyelids of the user remain closed. Moreover, if the calculated percentage of the time exceeds a threshold magnitude, then the data determination module 206 may be configured to flag the drowsiness condition in the user.
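The percentage-of-time-eyelids-closed check described above is the core of a PERCLOS-style measure and can be sketched directly. The per-frame boolean representation and the 70% flag threshold are illustrative assumptions; the description leaves the threshold magnitude open:

```python
def perclos(closed_flags, threshold_pct=70.0):
    """Percentage of frames in a window where the eyes were closed.

    `closed_flags` holds one boolean per frame (True = eyes closed).
    Returns the percentage and whether it exceeds the threshold,
    i.e. whether the drowsiness condition should be flagged.
    ASSUMPTION: 70% is an illustrative threshold magnitude.
    """
    pct = 100.0 * sum(closed_flags) / len(closed_flags)
    return pct, pct > threshold_pct

# Eyes closed in 8 of the last 10 frames -> 80% -> drowsiness flagged.
pct, drowsy = perclos([True] * 8 + [False] * 2)
```

A production implementation would feed this from the per-frame EAR comparison (eyes counted as closed when EAR falls below the first threshold) over a sliding window of recent frames.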
Upon flagging of the drowsiness condition in the user, the data determination module 206 may be configured to generate and transmit an alert signal to the alert module 208.
The alert module 208 may be activated upon receipt of the alert signal from the data determination module 206. In an embodiment of the present invention, the alert module 208 may be configured to generate the alert. The alert generated by the alert module 208 may be parsed through the alert unit 108, which may in turn alert the user of the corresponding vehicle, in an embodiment of the present invention.
FIG. 3 depicts a flowchart of a method 300 for detecting drowsiness and alerting the user using the system 100, according to an embodiment of the present invention.
At step 302, the system 100 may receive the facial landmarks marked on the face identified in the frame from the image processing unit 104.
At step 304, the system 100 may calculate the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) from the received facial landmarks.
At step 306, the system 100 may compare the calculated Eye Aspect Ratio (EAR) with the first threshold value. Upon comparison, if the calculated Eye Aspect Ratio (EAR) is less than the first threshold value, then the method 300 may proceed to a step 308. Else, the method 300 may revert to the step 302.
At step 308, the system 100 may compare the calculated Mouth Aspect Ratio (MAR) with the second threshold value. Upon comparison, if the calculated Mouth Aspect Ratio (MAR) is greater than the second threshold value, then the method 300 may proceed to a step 310. Else, the method 300 may revert to the step 302.
At step 310, the system 100 may determine the drowsiness condition of the driver.
At step 312, the system 100 may generate the alert.
At step 314, the system 100 may transmit the generated alert to the alert unit 108.
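The EAR calculation at step 304 is not given an explicit formula in this description. A commonly used formulation (Soukupová and Čech, 2016) divides the sum of the two vertical eye-landmark distances by twice the horizontal distance; it is sketched below for illustration only, with the six-landmark ordering p1..p6 around the eye contour as an assumption.

```python
import math

def eye_aspect_ratio(p):
    """Illustrative EAR from six (x, y) eye landmarks p1..p6,
    ordered around the eye contour. Low values indicate a
    closing or closed eye. This formulation is an assumption;
    the specification does not disclose an explicit formula."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p[1], p[5]) + dist(p[2], p[4])  # two vertical spans
    horizontal = dist(p[0], p[3])                   # eye-corner span
    return vertical / (2.0 * horizontal)
```

With landmarks for a wide-open eye the ratio is high; as the eyelids close, the vertical distances shrink while the horizontal span stays roughly constant, so the ratio falls toward zero, which is what the comparison against the first threshold at step 306 exploits.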
While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

CLAIMS
I/We Claim:
1. A drowsiness detection and alert system (100), the system (100) comprising:
an image capturing unit (102), arranged in a visual proximity of a user, and adapted to capture a real-time video of the user;
an image processing unit (104), adapted to receive the captured real-time video of the user, and configured to:
identify a presence of a face in a frame of the received real-time video, wherein the presence of the face is identified using a Haar Cascade algorithm; and
mark facial landmarks on the face identified in the frame; and
a controller unit (106) communicatively connected to the image processing unit (104), characterized in that the controller unit (106) is configured to:
receive the facial landmarks marked on the face identified in the frame from the image processing unit (104);
calculate an Eye Aspect Ratio (EAR) and a Mouth Aspect Ratio (MAR), from the received facial landmarks using a PERCLOS algorithm;
compare the calculated Eye Aspect Ratio (EAR) with a first threshold value, and the Mouth Aspect Ratio (MAR) with a second threshold value;
determine a condition of drowsiness when the Eye Aspect Ratio (EAR) is less than the first threshold value, and the Mouth Aspect Ratio (MAR) is greater than the second threshold value; and
generate an alert, upon detecting the drowsiness condition.
2. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to calculate a percentage of time eyelids of the user remain closed.
3. The system (100) as claimed in claim 1, wherein the image capturing unit (102) is installed in a driver cabin of a vehicle.
4. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to transmit the generated alert to an alert unit (108).
5. The system (100) as claimed in claim 1, wherein the controller unit (106) is configured to detect an orientation of a head of the user to detect drowsiness of the user.
6. The system (100) as claimed in claim 1, wherein an alert unit (108) is installed in a driver cabin in an audio-visual proximity of the user.
7. A method (300) for detecting drowsiness and alerting a user using a drowsiness detection and alert system (100), the method (300) is characterized by steps of:
receiving facial landmarks marked on a face identified in a frame from an image processing unit (104);
calculating an Eye Aspect Ratio (EAR) and a Mouth Aspect Ratio (MAR), from the received facial landmarks using a PERCLOS algorithm;
comparing the calculated Eye Aspect Ratio (EAR) with a first threshold value, and the Mouth Aspect Ratio (MAR) with a second threshold value;
determining a condition of drowsiness when the Eye Aspect Ratio (EAR) is less than the first threshold value, and the Mouth Aspect Ratio (MAR) is greater than the second threshold value; and
generating an alert, upon detecting the drowsiness condition.
8. The method (300) as claimed in claim 7, comprising a step of transmitting the generated alert to an alert unit (108).
9. The method (300) as claimed in claim 7, wherein an alert unit (108) is installed in a driver cabin in an audio-visual proximity of the user.
10. The method (300) as claimed in claim 7, wherein an image capturing unit (102) is installed in a driver cabin of a vehicle.
Date: January 06, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant