Abstract: Disclosed is an apparatus (100) for monitoring driver alertness. The apparatus comprises a camera (102) configured to be mounted within a vehicle to capture a live video feed of the driver's face, and a Raspberry Pi (103) that receives the images and processes them to detect a human face using face detection algorithms, identify key facial landmarks including the eyes, eyebrows, nose, and mouth with landmark detection algorithms, monitor the driver's eye movements by tracking these landmarks, and analyse the eye-movement data in real time using pattern recognition algorithms to assess alertness levels. A server (104) fetches the data, and alerts are generated to notify the driver of potential risks, such as signs of drowsiness. A communication interface (106) integrates the apparatus (100) with vehicle safety systems, enabling interventions based on the alerts. Additionally, a feedback mechanism (108) collects the driver's responses for adaptive monitoring. Fig. 1
Description:
APPARATUS FOR MONITORING DRIVER ALERTNESS
Field of the Invention
The present disclosure generally relates to vehicle safety systems. Further, the present disclosure particularly relates to an apparatus for monitoring driver alertness.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
The consistent advancement in vehicle technology has necessitated parallel improvements in driver safety mechanisms. Among these, monitoring driver alertness has become a critical aspect due to the increase in accidents attributed to driver fatigue and distraction. Traditionally, efforts to enhance driver alertness were concentrated on educating drivers and enforcing regulations related to driving hours and breaks. Although these measures are beneficial, they rely heavily on self-assessment and compliance, which are not infallible.
With the advent of intelligent vehicle systems, the incorporation of automated monitoring tools that can detect signs of driver inattention has been explored. Early systems relied on simplistic measures such as steering wheel movement patterns or the vehicle’s trajectory within the lane to infer driver alertness. However, these systems often failed to discern between intentional driver actions and those caused by reduced alertness.
Further developments introduced more direct measures, utilizing in-vehicle cameras to observe the driver. Initial attempts to assess alertness via visual analysis were hampered by challenges in accurate face and eye detection, particularly under varying lighting conditions and driver postures. These systems were rudimentary, focusing on crude measures such as head position, without delving into the nuanced indicators of alertness reflected in facial expressions and micro-movements.
More advanced systems employed face detection algorithms, which marked an improvement but were often limited in their scope. They tended to focus solely on the detection of a face within the vehicle, neglecting the detailed analysis necessary to accurately gauge alertness levels. Moreover, the reliability of these systems under diverse conditions, such as low-light scenarios, was frequently questionable.
Sophisticated systems subsequently introduced the concept of tracking facial landmarks. By identifying specific points on the face—such as the eyes, eyebrows, nose, and mouth—these systems provided a framework for more nuanced analysis. For example, the tracking of eye movements enabled the detection of blink frequency and duration, both of which are indicators of driver fatigue. However, the algorithms used for landmark detection and subsequent behavioral analysis often lacked the necessary precision and real-time processing capabilities to function effectively in dynamic driving environments.
In the context of such developmental trajectory, it is evident that a need persists for an apparatus that not only detects and monitors driver alertness accurately but also integrates seamlessly with vehicle safety systems to provide proactive interventions. There is a demand for a solution that can interpret a comprehensive array of behavioral indicators in real-time, adapt its sensitivity based on feedback, and trigger appropriate safety measures, ultimately contributing to road safety and preventing accidents due to driver inattention.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and techniques for monitoring driver alertness.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
The present disclosure aims to provide an apparatus for monitoring driver alertness, comprising a camera mounted within a vehicle to capture a live video feed of a driver's face, and a server in communication with the camera. The server is programmed to detect a human face within the video feed using face detection algorithms, identify key facial landmarks on the detected face, monitor movement patterns of the driver's eyes by tracking the position and movement of the landmarks corresponding to the eyes, analyze the collected data on eye movements in real-time, and generate alerts based on the analysis results. The apparatus also includes a communication interface for integrating with existing vehicle safety systems, and a feedback mechanism to collect feedback from the driver's responses to the alerts, enabling adaptive adjustments to the monitoring parameters.
In an embodiment, the camera of the apparatus is further configured to operate under various lighting conditions, ensuring reliable data acquisition regardless of time of day or weather conditions. The server is further programmed with pattern recognition algorithms that learn from collected data over time, improving the accuracy of alert generation based on historical patterns of the driver's eye behaviors. Additionally, the server can generate alerts that include audio and visual signals, selectively activated based on the severity of detected risk and the driver's previous responses. The sensitivity of the eye movement analysis can be adjusted based on external factors, including vehicle speed and driving conditions, tailoring the monitoring process to real-time driving scenarios.
In an embodiment, the communication interface of the apparatus is configured to transmit data related to the driver's alertness status to a remote server for further analysis, enabling fleet management applications to monitor driver safety across multiple vehicles. The apparatus further comprises an ambient light sensor coupled to the camera, adjusting camera settings based on ambient light levels inside the vehicle to enhance the quality of the captured video feed. The feedback mechanism includes a user interface through which the driver can provide manual feedback on the accuracy of the alerts, facilitating system calibration based on driver input. Moreover, the apparatus is integrated with the vehicle's navigation system, enabling generated alerts to include recommendations for nearby rest stops when signs of drowsiness are detected, promoting safe driving practices.
In another aspect, the present disclosure provides a method for monitoring driver alertness in a vehicle, involving capturing a live video feed of a driver's face by a camera mounted within the vehicle, detecting a human face within the captured video feed using face detection algorithms, identifying key facial landmarks on the detected face, monitoring movement patterns of the driver's eyes by tracking the position and movement of the landmarks, analyzing the collected data on eye movements in real-time, generating alerts based on the analysis results to notify the driver of potential risks, integrating the method with existing vehicle safety systems, and collecting feedback from the driver's responses to the alerts. This method ensures continuous monitoring and feedback of the driver's eye movements throughout the journey, enabling adaptive adjustments to the monitoring parameters.
Brief Description of the Drawings
The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates an apparatus for monitoring driver alertness, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a method for monitoring driver alertness in a vehicle, in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates an apparatus (100) for monitoring driver alertness, in accordance with the embodiments of the present disclosure. The apparatus (100) comprises a camera (102), a server (104), a communication interface (106), a feedback mechanism (108) and other known components of a drowsiness detection system/application.
The term “apparatus” as used throughout the present disclosure relates to an apparatus (100) for monitoring driver alertness. The apparatus (100) is designed for use within a vehicle to monitor the driver's alertness level, thereby enhancing road safety. The apparatus (100) comprises several key components that work in conjunction to achieve this objective, including a camera (102) for visual monitoring, a server (104) for data analysis and alert generation, a communication interface (106) for system integration, and a feedback mechanism (108) for adaptive monitoring.
The term “camera” as used throughout the present disclosure relates to a camera (102) configured to be mounted within a vehicle. The camera (102) is adapted to capture a live video feed of a driver's face within the vehicle, ensuring that the driver’s facial features and movements can be accurately monitored. The camera (102) serves as the primary data collection tool within the apparatus (100), facilitating the detection and analysis of the driver’s alertness state. Optionally, the camera (102) may feature advanced imaging technologies to enhance the clarity and reliability of the captured video feed under various lighting conditions. A working example of the camera (102) includes its use in dimly lit environments, where it employs night vision capabilities to ensure continuous monitoring.
In an embodiment, the server (104) is in communication with said camera (102), said server (104) being programmed to detect a human face within the captured video feed by employing face detection algorithms, thereby ensuring that subsequent analysis is focused on facial features. The server (104) represents the analytical core of the apparatus (100), processing the video feed to identify key facial landmarks such as eyes, eyebrows, nose, and mouth. By using face landmark detection algorithms, the server (104) tracks movement patterns of the driver's eyes, including blinking frequency, eye closure duration, and gaze direction. This analysis enables the server (104) to determine the driver's level of alertness in real-time, comparing observed eye movements against predefined patterns associated with alertness, drowsiness, distraction, or impairment. Optionally, the server (104) may incorporate machine learning techniques to enhance the accuracy of its analysis over time. A working example of the server (104) involves it triggering alerts when patterns indicative of drowsiness are detected, prompting the driver to take preventive actions.
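By way of illustration only, and not forming part of the claimed subject matter, the eye-closure tracking described above may be sketched as follows. One common proxy for eye closure is the eye aspect ratio (EAR) computed from six eye landmarks; the landmark ordering and the threshold value below are illustrative assumptions rather than requirements of the disclosure.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    `eye` is ordered [outer corner, upper-left, upper-right,
    inner corner, lower-right, lower-left]; the EAR falls toward
    zero as the eyelid closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def is_eye_closed(eye, threshold=0.2):
    """Flag a closed eye when the EAR drops below an assumed threshold."""
    return eye_aspect_ratio(eye) < threshold
```

In practice, the per-frame closure flags produced this way would feed the real-time pattern analysis that distinguishes normal blinks from sustained drowsiness-related closures.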
A communication interface (106) facilitates the integration of the apparatus (100) with existing vehicle safety systems. The integration includes triggering interventions such as activating seat vibrations or adjusting cabin temperature based on the generated alerts. The communication interface (106) thus enhances the functionality of the apparatus (100), ensuring that alerts not only notify the driver but also initiate preventive measures to mitigate potential risks. Optionally, the communication interface (106) may support wireless communication protocols to enable seamless connectivity with various vehicle systems. A working example of the communication interface (106) includes its activation of a cooling system within the vehicle when signs of drowsiness are detected, thereby helping to alert and refresh the driver.
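Purely as an illustrative sketch, one possible policy by which the communication interface (106) might map alert severity to vehicle interventions is shown below; the severity names and intervention identifiers are assumptions introduced for the example, not elements of the disclosure.

```python
def select_interventions(alert_level):
    """Map an alert severity to a list of vehicle interventions.

    One possible integration policy; the level names and the
    intervention identifiers are illustrative only.
    """
    interventions = {
        "mild": ["visual_warning"],
        "moderate": ["visual_warning", "audio_warning", "seat_vibration"],
        "severe": ["audio_warning", "seat_vibration", "cabin_cooling"],
    }
    return interventions.get(alert_level, [])
```

An unrecognised severity yields no intervention, leaving the decision to the vehicle's own safety logic.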
Lastly, the apparatus (100) incorporates a feedback mechanism (108) configured to collect feedback from the driver's responses to the alerts. This feedback allows for adaptive adjustments to the monitoring parameters, ensuring the apparatus (100) remains effective over different driving conditions and individual driver characteristics. The feedback mechanism (108) is crucial for the continuous improvement of the apparatus (100), tailoring its operations to better suit the monitored driver’s needs. Optionally, the feedback mechanism (108) may use driver input to refine alert sensitivity and reduce false positives. A working example of the feedback mechanism (108) includes adjusting the threshold for drowsiness alerts based on driver feedback indicating too frequent or infrequent alerts.
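The threshold adjustment performed by the feedback mechanism (108) may be sketched, for illustration only, as a bounded nudge of the alert threshold in response to driver feedback; the step size and clamping range below are assumed values.

```python
def adjust_alert_threshold(threshold, feedback, step=0.02,
                           lo=0.10, hi=0.35):
    """Nudge the drowsiness-alert threshold based on driver feedback.

    feedback == "too_frequent"   -> lower threshold (fewer alerts)
    feedback == "too_infrequent" -> raise threshold (more alerts)
    The result is clamped to an assumed safe operating range.
    """
    if feedback == "too_frequent":
        threshold -= step
    elif feedback == "too_infrequent":
        threshold += step
    return max(lo, min(hi, threshold))
```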
In an embodiment, reliable data acquisition by the camera (102) is facilitated under various lighting conditions. Said camera (102) is further configured to operate effectively regardless of the time of day or weather conditions. The enhancement of the camera (102) to function in diverse lighting scenarios is achieved through advanced imaging technologies and settings adjustments. The ability to capture clear video feeds during both day and night times is crucial for continuous monitoring of the driver's facial features and movements. By employing technologies such as infrared illumination for low light conditions and dynamic range adjustment for scenarios of direct sunlight, the camera (102) maintains its capability to provide high-quality video feeds. This adaptation ensures that the apparatus (100) remains effective in monitoring driver alertness across different driving environments, thus contributing to the overall safety and reliability of the system.
In another embodiment, the server (104) is further programmed with pattern recognition algorithms that are configured to learn from collected data over time. Through the continuous analysis of the driver's eye behaviors and the incorporation of machine learning techniques, said server (104) enhances its capability to generate accurate alerts. The adaptation of pattern recognition algorithms allows for the refinement of alert generation based on historical patterns, leading to a reduction in false positives and an increase in the system's reliability. The learning process involves comparing current observations with stored data to identify trends and anomalies in eye movement behavior. Such refinement in the analysis process by the server (104) contributes to the development of a more personalized and effective monitoring system, thereby improving road safety by accurately detecting signs of drowsiness or distraction.
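As a hedged, stdlib-only sketch of the learning behaviour described above, the server (104) might maintain a rolling baseline of the driver's own blink rates and flag departures from it; the window size, warm-up count, and deviation factor are illustrative assumptions.

```python
from collections import deque

class BlinkBaseline:
    """Maintain a rolling baseline of per-minute blink rates and flag
    departures from the driver's own history.

    Window size, warm-up count, and the deviation factor are
    illustrative assumptions for this sketch.
    """

    def __init__(self, window=30, factor=1.5):
        self.history = deque(maxlen=window)
        self.factor = factor

    def update(self, blink_rate):
        self.history.append(blink_rate)

    def is_anomalous(self, blink_rate):
        if len(self.history) < 5:   # insufficient history yet
            return False
        mean = sum(self.history) / len(self.history)
        return blink_rate > self.factor * mean
```

Comparing each new observation against the driver's personal baseline, rather than a fixed population norm, is what reduces false positives over time.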
In yet another embodiment, the server (104) is further programmed to generate alerts that include audio and visual signals. Said alerts are configured to be selectively activated based on the severity of the detected risk and the driver's previous responses to similar alerts. The differentiation in alert types allows for a tailored response strategy, wherein more intrusive alerts can be reserved for situations of higher risk. The implementation of both audio and visual alerts ensures that the driver is effectively notified of potential dangers, thereby increasing the chance of timely corrective action. Such a strategy enhances the apparatus (100)'s utility by ensuring that drivers receive appropriate warnings that can aid in preventing accidents due to reduced alertness.
In a further embodiment, the sensitivity of the eye movement analysis conducted by the server (104) is adjusted based on external factors. Such factors include vehicle speed and driving conditions, which are crucial determinants of the required level of driver alertness. By tailoring the monitoring process to real-time driving scenarios, the system enhances its relevance and effectiveness. The adaptation of sensitivity ensures that the apparatus (100) provides meaningful alerts that reflect the current driving context, thereby aiding drivers in maintaining optimal alertness levels under varying conditions. This feature of the server (104) contributes significantly to the apparatus (100)'s role in promoting road safety by adjusting its operation in response to dynamic driving environments.
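One possible form of the speed-dependent sensitivity adjustment, given solely as an illustrative sketch, shortens the tolerated eye-closure duration as vehicle speed rises; the speed breakpoints and scaling factors are assumed values.

```python
def closure_duration_limit(speed_kmh, base_limit=1.0):
    """Return the maximum tolerated eye-closure duration (seconds)
    before an alert, shortened as vehicle speed rises.

    The breakpoints and factors are illustrative assumptions.
    """
    if speed_kmh >= 100:
        return base_limit * 0.5
    if speed_kmh >= 60:
        return base_limit * 0.75
    return base_limit
```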
In a further embodiment, the communication interface (106) is configured to transmit data related to the driver's alertness status to a remote server. Such transmission enables fleet management applications to monitor driver safety across multiple vehicles, thereby enhancing the management of driver alertness on a broader scale. The capability of the communication interface (106) to support data transmission to external systems allows for the aggregation and analysis of alertness data, facilitating the identification of patterns and trends that may indicate systemic issues or opportunities for safety interventions. This integration supports proactive safety management practices within fleet operations, highlighting the apparatus (100)'s versatility and its contribution to improving road safety.
In another embodiment, an ambient light sensor (110) is coupled to the camera (102) to adjust the camera settings based on the ambient light level inside the vehicle. Such coupling enhances the quality of the captured video feed under varying light conditions. The ambient light sensor (110) automatically adjusts settings such as exposure and aperture to optimize video quality, ensuring that the camera (102) continues to capture detailed and clear images regardless of changes in the internal or external lighting environment. This adaptation is vital for the apparatus (100) to maintain its effectiveness in monitoring driver alertness by ensuring the consistent quality of the video feed, which is fundamental to accurate analysis and alert generation.
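For illustration only, the coupling between the ambient light sensor (110) and the camera (102) might select exposure settings from a light reading as sketched below; the lux thresholds, gain values, and the infrared toggle are assumptions, and a real camera would expose such controls through its own driver API.

```python
def exposure_settings(ambient_lux):
    """Choose illustrative camera exposure settings from an ambient
    light reading.

    Thresholds, gains, and the IR toggle are assumed values for
    this sketch.
    """
    if ambient_lux < 10:           # night: enable IR illumination
        return {"ir_led": True, "gain": 8.0, "exposure_ms": 33}
    if ambient_lux < 500:          # dusk or an overcast cabin
        return {"ir_led": False, "gain": 4.0, "exposure_ms": 16}
    return {"ir_led": False, "gain": 1.0, "exposure_ms": 4}
```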
In an additional embodiment, the feedback mechanism (108) includes a user interface (112) through which the driver can provide manual feedback regarding the accuracy of the alerts. Said user interface (112) is configured to facilitate the calibration of the system based on driver input, enabling adjustments to be made to improve system performance. The inclusion of a mechanism for receiving direct feedback from users allows the apparatus (100) to adapt its operations in response to user experiences, enhancing the system's accuracy and reliability. The capability for drivers to interact with the apparatus (100) and influence its operation ensures that the monitoring and alert system remains aligned with the drivers' needs and preferences, thereby increasing its effectiveness in promoting driver alertness and safety.
In an embodiment, the apparatus (100) is integrated with the vehicle's navigation system (114), enhancing driver safety by providing recommendations for nearby rest stops when signs of drowsiness are detected. This integration facilitates a proactive approach to safe driving practices, offering not only alerts regarding potential risks but also actionable suggestions to mitigate these risks. Leveraging the navigation system (114) ensures that recommendations are both timely and relevant, utilizing real-time data and advanced mapping technologies to identify suitable locations for rest within a reasonable distance from the vehicle's current trajectory. Such functionality is crucial during long drives or nocturnal journeys when fatigue risks increase significantly. The system dynamically tailors its recommendations based on the severity of detected drowsiness, varying from suggestions for short breaks to advisories for more extended rest periods, depending on the immediate needs of the driver. This adaptive response mechanism enhances the personalized aspect of the alert system, thereby improving its effectiveness in real-time scenarios. Additionally, the seamless integration with the navigation system (114) allows for the automatic adjustment of the vehicle's route to incorporate the recommended rest stop, prioritizing ease of use for the driver. This comprehensive approach not only alerts drivers to imminent dangers but also provides practical solutions to avert potential accidents, highlighting the apparatus (100)'s role in promoting road safety. By combining technological innovation with practical driving assistance, the system represents a significant advancement in vehicle safety measures, aiming to reduce road accidents and improve overall driver well-being through informed and timely interventions.
FIG. 2 illustrates a method (200) for monitoring driver alertness in a vehicle, in accordance with the embodiments of the present disclosure. Capturing (202) involves a camera (102) mounted within the vehicle, which captures a live video feed of the driver's face. This step is crucial for obtaining real-time visual data, which forms the basis for subsequent analysis aimed at monitoring the driver's alertness levels. Detecting (204) is performed by a server (104) in communication with the camera (102), where a human face within the captured video feed is identified using face detection algorithms. This step ensures that the analysis focuses specifically on the driver's face for accurate monitoring. Identifying (206) involves the server (104) using face landmark detection algorithms to identify key facial landmarks on the detected face, such as eyes, eyebrows, nose, and mouth. These landmarks serve as reference points for further detailed analysis of facial expressions and movements. Monitoring (208) is carried out by the server (104), which tracks the position and movement of the landmarks corresponding to the eyes. This monitoring helps determine various eye behaviors, including blinking frequency, eye closure duration, and gaze direction, which are indicative of the driver's alertness. Analyzing (210) entails the server (104) using pattern recognition algorithms to analyze the collected data on eye movements in real-time. This analysis compares observed eye movements against predefined patterns associated with alertness, drowsiness, distraction, or impairment, enabling the assessment of the driver's state. Generating (212) involves the server (104) producing alerts based on the analysis results to notify the driver of potential risks. This step is critical for providing timely warnings that can prompt the driver to take necessary actions to mitigate any detected risks. 
Integrating (214) is performed by a communication interface (106), which integrates the method with existing vehicle safety systems. This integration enhances functionality by triggering interventions such as seat vibrations or audio alerts based on the generated alerts to further ensure the driver's alertness. Collecting (216) is carried out by a feedback mechanism (108), which gathers feedback from the driver's responses to the alerts. This feedback allows for adaptive adjustments to the monitoring parameters, ensuring the system's continuous effectiveness in monitoring and improving the driver's alertness throughout the journey.
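The monitoring and alerting loop of the method (200) can be summarised, as an illustrative sketch only, by a per-frame counter of consecutive closed-eye frames: the frame rate, the closure-run limit, and the boolean per-frame input are assumptions standing in for the full landmark analysis of steps (204) to (210).

```python
def process_frame(eyes_closed, state, closed_frames_limit=15):
    """One illustrative pass of the monitoring loop: count consecutive
    closed-eye frames and raise a drowsiness alert once the run
    reaches an assumed limit (~0.5 s at 30 fps)."""
    state = state + 1 if eyes_closed else 0
    alert = state >= closed_frames_limit
    return state, alert

def run_pipeline(frames, closed_frames_limit=15):
    """Drive process_frame over a sequence of per-frame closure flags,
    collecting the indices of frames that triggered alerts."""
    state, alerts = 0, []
    for i, closed in enumerate(frames):
        state, alert = process_frame(closed, state, closed_frames_limit)
        if alert:
            alerts.append(i)
    return alerts
```

A brief blink resets the counter and never alerts, whereas a sustained closure triggers an alert on every subsequent frame until the eyes reopen.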
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We claim:
1. An apparatus (100) for monitoring driver alertness, comprising:
a camera (102) configured to be mounted within a vehicle, said camera (102) being adapted to capture a live video feed of a driver's face within the vehicle; and
a server (104) in communication with said camera (102), said server (104) being programmed to:
detect a human face within the captured video feed by employing face detection algorithms, thereby ensuring that subsequent analysis is focused on facial features;
identify key facial landmarks on the detected face, selected from eyes, eyebrows, nose, and mouth, using face landmark detection algorithms, said landmarks serving as reference points for analyzing facial expressions and movements;
monitor movement patterns of the driver's eyes by tracking the position and movement of the landmarks corresponding to the eyes, thereby determining various eye behaviors including blinking frequency, eye closure duration, and gaze direction;
analyze, in real-time, the collected data on eye movements using pattern recognition algorithms to compare the observed eye movements against predefined patterns associated with alertness, drowsiness, distraction, or impairment; and
generate alerts based on the analysis results, wherein said alerts are configured to notify the driver of potential risks, including but not limited to signs of drowsiness, thereby prompting the driver to take preventive actions;
a communication interface (106) for integrating the apparatus (100) with existing vehicle safety systems to enhance functionality, wherein said integration includes triggering interventions such as activating seat vibrations or adjusting cabin temperature based on the generated alerts; and
a feedback mechanism (108) configured to collect feedback from the driver's responses to the alerts, allowing for adaptive adjustments to the monitoring parameters, thereby ensuring continuous monitoring and feedback of the driver's eye movements throughout the journey.
2. The apparatus (100) of claim 1, wherein said camera (102) is further configured to operate under various lighting conditions, thereby ensuring reliable data acquisition regardless of the time of day or weather conditions.
3. The apparatus (100) of claim 1, wherein said server (104) is further programmed with pattern recognition algorithms that are configured to learn from the collected data over time, thereby improving the accuracy of alert generation based on historical patterns of the driver's eye behaviors.
4. The apparatus (100) of claim 1, wherein said server (104) is further programmed to generate alerts that include audio and visual alerts, configured to be selectively activated based on the severity of the detected risk and the driver's previous responses to similar alerts.
5. The apparatus (100) of claim 1, wherein said server (104) is further programmed to adjust the sensitivity of the eye movement analysis based on external factors, including vehicle speed and driving conditions, thereby tailoring the monitoring process to real-time driving scenarios.
6. The apparatus (100) of claim 1, wherein said communication interface (106) is further configured to transmit data related to the driver's alertness status to a remote server for further analysis, thereby enabling fleet management applications to monitor driver safety across multiple vehicles.
7. The apparatus (100) of claim 1, further comprising an ambient light sensor (110) coupled to said camera (102), wherein said ambient light sensor (110) is configured to adjust settings of said camera (102) based on the ambient light level inside the vehicle, thereby enhancing the quality of the captured video feed under varying light conditions.
8. The apparatus (100) of claim 1, wherein said feedback mechanism (108) includes a user interface (112) through which the driver can provide manual feedback regarding the accuracy of the alerts, said user interface (112) being configured to facilitate the calibration of the system based on driver input.
9. The apparatus (100) of claim 1, wherein said apparatus (100) is further integrated with the vehicle's navigation system (114), such that the generated alerts can include recommendations for nearby rest stops when signs of drowsiness are detected, thereby promoting safe driving practices.
10. A method (200) for monitoring driver alertness in a vehicle, comprising the steps of:
capturing (202), by a camera (102) mounted within the vehicle, a live video feed of a driver's face;
detecting (204), by a server (104) in communication with said camera (102), a human face within the captured video feed using face detection algorithms;
identifying (206), by said server (104), key facial landmarks on the detected face, including eyes, eyebrows, nose, and mouth, using face landmark detection algorithms;
monitoring (208), by said server (104), movement patterns of the driver's eyes by tracking the position and movement of the landmarks corresponding to the eyes to determine various eye behaviors including blinking frequency, eye closure duration, and gaze direction;
analyzing (210), in real-time by said server (104), the collected data on eye movements using pattern recognition algorithms to compare the observed eye movements against predefined patterns associated with alertness, drowsiness, distraction, or impairment;
generating (212), by said server (104), alerts based on the analysis results to notify the driver of potential risks;
integrating (214), by a communication interface (106), the method with existing vehicle safety systems to enhance functionality, including triggering interventions based on the generated alerts; and
collecting (216), by a feedback mechanism (108), feedback from the driver's responses to the alerts to allow for adaptive adjustments to the monitoring parameters, thereby ensuring continuous monitoring and feedback of the driver's eye movements throughout the journey.
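The eye-monitoring steps recited in the claims (tracking eye landmarks, measuring blinking frequency and eye closure duration, and comparing against predefined drowsiness patterns) can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: it assumes six (x, y) landmarks per eye, as in the widely used 68-point facial landmark convention, and the eye aspect ratio (EAR) threshold and frame count are illustrative values not taken from the specification.

```python
import math

def ear(eye):
    """Eye aspect ratio from six (x, y) landmarks ordered around the eye:
    horizontal corner points at indices 0 and 3, and vertical pairs at
    indices (1, 5) and (2, 4). A low ratio indicates a closing eyelid."""
    vertical = math.dist(eye[1], eye[5]) + math.dist(eye[2], eye[4])
    horizontal = math.dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

class BlinkMonitor:
    """Flags prolonged eye closure: returns True once the EAR has stayed
    below `threshold` for `consec_frames` consecutive video frames."""
    def __init__(self, threshold=0.21, consec_frames=15):
        self.threshold = threshold          # illustrative EAR cut-off
        self.consec_frames = consec_frames  # e.g. ~0.5 s at 30 fps
        self.closed = 0

    def update(self, ear_value):
        if ear_value < self.threshold:
            self.closed += 1
        else:
            self.closed = 0                 # any open-eye frame resets the count
        return self.closed >= self.consec_frames
```

In a full system, the per-frame EAR would come from a face landmark detector running on the captured video feed, and a True result from `update` would feed the alert-generation step.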
APPARATUS FOR MONITORING DRIVER ALERTNESS
Disclosed is an apparatus (100) for monitoring driver alertness. The apparatus comprises a camera (102) configured to be mounted within a vehicle to capture a live video feed of the driver's face, and a Raspberry Pi (103) that receives and processes the captured images to detect a human face using face detection algorithms, identify key facial landmarks including eyes, eyebrows, nose, and mouth with landmark detection algorithms, monitor the driver's eye movements by tracking these landmarks, and analyse the eye-movement data in real time using pattern recognition algorithms to assess alertness levels. A server (104) then fetches the processed data and generates alerts to notify the driver of potential risks, such as signs of drowsiness. A communication interface (106) integrates the apparatus (100) with vehicle safety systems, enabling interventions based on the alerts. Additionally, a feedback mechanism (108) collects the driver's responses for adaptive monitoring.
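The real-time alertness assessment and graded alert generation summarized in the abstract can be illustrated with a PERCLOS-style measure (the fraction of recent frames in which the eyes are closed). This is a hypothetical sketch: the window size, the PERCLOS cut-offs, and the tier names are illustrative assumptions, not values from the disclosure.

```python
from collections import deque

class Perclos:
    """Sliding-window PERCLOS: fraction of the most recent frames
    in which the driver's eyes were judged closed."""
    def __init__(self, window=900):      # e.g. a 30 s window at 30 fps
        self.frames = deque(maxlen=window)

    def update(self, eye_closed: bool) -> float:
        self.frames.append(1 if eye_closed else 0)
        return sum(self.frames) / len(self.frames)

def alert_level(perclos: float) -> str:
    """Map PERCLOS to an alert tier (hypothetical cut-offs)."""
    if perclos >= 0.40:
        return "critical"   # e.g. trigger a seat-vibration intervention
    if perclos >= 0.15:
        return "warning"    # e.g. audio/visual alert to the driver
    return "ok"
```

A graded mapping like this is one way the severity-based, selectively activated alerts described in the claims could be realized.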
Fig. 1
Drawings
FIG. 1
FIG. 2
| # | Name | Date |
|---|---|---|
| 1 | 202421033170-OTHERS [26-04-2024(online)].pdf | 2024-04-26 |
| 2 | 202421033170-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 3 | 202421033170-FORM 1 [26-04-2024(online)].pdf | 2024-04-26 |
| 4 | 202421033170-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf | 2024-04-26 |
| 5 | 202421033170-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf | 2024-04-26 |
| 6 | 202421033170-DRAWINGS [26-04-2024(online)].pdf | 2024-04-26 |
| 7 | 202421033170-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf | 2024-04-26 |
| 8 | 202421033170-COMPLETE SPECIFICATION [26-04-2024(online)].pdf | 2024-04-26 |
| 9 | 202421033170-FORM-9 [07-05-2024(online)].pdf | 2024-05-07 |
| 10 | 202421033170-FORM 18 [08-05-2024(online)].pdf | 2024-05-08 |
| 11 | 202421033170-FORM-26 [12-05-2024(online)].pdf | 2024-05-12 |
| 12 | 202421033170-FORM 3 [13-06-2024(online)].pdf | 2024-06-13 |
| 13 | 202421033170-RELEVANT DOCUMENTS [17-04-2025(online)].pdf | 2025-04-17 |
| 14 | 202421033170-POA [17-04-2025(online)].pdf | 2025-04-17 |
| 15 | 202421033170-FORM 13 [17-04-2025(online)].pdf | 2025-04-17 |