
A Real Time Drowsiness Detection System

Abstract: "A REAL-TIME DROWSINESS DETECTION SYSTEM". A system equipped with dual cameras configured to capture video data simultaneously, with one camera facing the user and the other facing the road, utilizes state-of-the-art machine learning and deep learning based face and facial landmark detection models to calculate the eye aspect ratio (EAR), which is further used to derive the real-time mean open eye aspect ratio (MOE) that determines the user's eye openness and attentiveness. MOE threshold values are carefully selected based on an EAR calibration threshold for varying eye sizes, and alerts are given to the user when the real-time MOE falls below the MOE threshold level, enabling the user to take preventive measures to enhance driver safety and optimize driving experiences.
Figure 1: A Hardware Device (1), A Road Facing Camera (2), A User Facing Camera (3), ML and DL Based Obstacle Detection Model (4), ML and DL Based Face and Facial Landmark Detection Model for EAR Calculation (5), MOE Calculation Module (6), Alarm Device (7)


Patent Information

Application #
202421037843
Filing Date
14 May 2024
Publication Number
24/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2025-05-30

Applicants

Nervanik AI Labs Pvt. Ltd.
A – 1111, World Trade Tower, Off. S G Road, B/H Skoda Showroom, Makarba, Ahmedabad – 380051 Gujarat, INDIA.

Inventors

1. Pandya Nisarg Deveshbhai
B 129, New Shreejinagar Sector - 5, Nirnaynagar, Ahmedabad - 382481, Gujarat, INDIA.
2. Hemant Chaudahry
Sr. No. 36/2/2, Plot No.4, Snehdeep, Road No. 5, Tingrenagar, Pune-411 015 , Maharashtra, INDIA
3. G. Gnaneswara Rao
Flat No. 402, Sri Sai Parvathi Residency, Plot no-71, HPCL lay out P.M Palem, Madhurwada-530 041, Visakapatnam, INDIA.

Specification

Description: FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION: “A REAL-TIME DROWSINESS
DETECTION SYSTEM”
2. APPLICANTS:
(a) Name : Nervanik AI Labs Pvt. Ltd.
(b) Nationality : Indian
(c) Address : A – 1111, World Trade Tower, Off. S G Road, B/H
Skoda Showroom, Makarba, Ahmedabad – 380051
Gujarat, INDIA.
PROVISIONAL: The following specification describes the invention.
[X] COMPLETE: The following specification particularly describes the invention and the manner in which it is to be performed.
Field of invention
The present invention relates to a system for drowsiness detection that uses machine learning and deep learning based face and facial landmark detection models to calculate the Eye Aspect Ratio (EAR) and derive the Mean Open Eye Aspect Ratio (MOE), which is an essential indicator for identifying drowsy events in a user.
Background of invention
Driving is an essential part of the daily routine of humans. The transport industry is highly dependent on roadways for on-land freight movement. Apart from industrial use, there is a plethora of uses in daily life, from public transport to private commute. The livelihood of many depends on driving vehicles.
Road commute also involves the risk of road accidents for a variety of reasons; drowsy driving is one of them. Even with the increasing presence of automation, the human operator continues to play an important role in critical task execution. According to global studies, 21% of all fatal accidents happen due to drowsy driving. The National Safety Council (NSC) reports that approximately 100,000 accidents are reported every year with drowsy driving as the cause, among which approximately 71,000 injuries and approximately 6,400 fatal casualties are included. The financial loss due to these accidents accounts for close to $100 billion according to the National Highway Traffic Safety Administration (NHTSA), considering the USA alone. Driving a vehicle after 20 hours without sleep is comparable to driving with a blood-alcohol level of 0.08%, which is the legal limit for driving in the USA. As per current studies, data is only available for certain parts of the world, but such cases of fatigued driving are observed in every region of the world.
Drowsiness in a driver can be detected through different methods. Primarily, the state of drowsiness is detected by monitoring three types of measurements: physiological-based (using EEG, ECG, EOG and EMG signals), vehicle-operation-based (focusing on deviation from lane position, movement of the steering wheel and pressure on the acceleration pedal) and behavioral-based measurements (including yawning, eye closure, head pose, etc.).
The patent application number KR10206808481 discloses a method of detecting fatigue by capturing a user's face for a predetermined period of time and observing the blink time of the user's eyes; it makes a prediction of the fatigue state based on frequent blinking. The invention falls short of making an accurate prediction, as blinking duration alone is not a sufficient parameter for detecting drowsiness in a user; other factors, such as behavior while the eyes are open, are essential to detect drowsiness beforehand.
The patent application number 202111024738 discloses a system to prevent drowsy driving that utilizes an eye closure ratio measured using blinking sensors to arrive at the conclusion. It lacks consideration of other parameters associated with drowsiness, for example face orientation, physiological parameters, the user's behavior while awake, and early signs of drowsiness such as heavy eyelids leading to unconscious staring and loss of focus.
Currently available systems lack the ability to predict the driver's tendency to fall asleep, as they are not personalized for different users. Already available systems use data involving closed-eye and related behaviors, which is not sufficient to make a prediction. Available solutions use a face detection model to calculate the EAR of closed eyes or the blinking duration of the eyes to arrive at a conclusion of drowsy driving. When starting a drive, drivers will not be feeling drowsy and hence will be alert, but they will gradually start feeling sleepy and drowsy, which may lead to falling asleep at the steering wheel while driving. The attention span during this period plays a crucial role, so the closed-eye behavior of measuring the EAR while the person is about to sleep might not be sufficient to arrive at a reliable conclusion.
Hence, the present invention uses the Eye Aspect Ratio (EAR) while the user's eyes are open to provide a reliable system for drowsiness detection in the user.
Object of Invention
The main object of "A REAL-TIME DROWSINESS DETECTION SYSTEM" is to identify signs of drowsiness in a user with machine learning and deep learning modules that detect the user's face and facial landmarks in real time.
Furthermore, the device can also employ machine learning to recognize specific patterns in the driver's behavior and adapt its warnings based on individual driving style and preferences, making the system more personalized and effective.
Yet another object of the present invention is to monitor the driver's state by tracking eye movements and head orientation to gauge the driver's attention to the road and driving conditions.
Another object of the present invention is to identify landmarks such as vehicles, road signs and pedestrian crossings using the road facing camera, enabling the system to provide timely navigation assistance and safety alerts.
Another object of the present invention is to continuously adapt and improve its drowsiness detection algorithms by learning from a continuously evolving database of drowsy driving scenarios and driver responses; the system can refine its accuracy and sensitivity over time by utilizing its machine learning and deep learning capabilities.
These and other objects will be apparent based on the disclosure
herein.
Summary of invention
The present invention pertains to a system and method for drowsiness detection in a user that employs machine learning and deep learning models to detect a user's face with facial landmarks and calculate the mean open eye aspect ratio (MOE) from values of the Eye Aspect Ratio (EAR). In view of the foregoing, an embodiment herein provides a system equipped with dual cameras configured to capture video data simultaneously, with one camera facing the user and the other facing the road, together with a face and facial landmark detection model combined with an MOE calculation module, and an alarm device. The road facing camera captures input and the system processes the visuals for object detection, keeping the user informed of upcoming obstacles. The user facing camera captures input and the system processes the visuals of the user while driving the vehicle using a face and facial landmark detection model comprising machine learning and deep learning algorithms used for calculating the eye aspect ratio (EAR), and in particular derives a Mean Open Eye Aspect Ratio (MOE) using an MOE module that produces a personalized MOE depending on the user's behavior and eye size. An alarm device provides alerts in case of an obstacle on the road or detection of drowsiness in the user.
Brief description of drawings
Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the present embodiment when taken in conjunction with the accompanying drawings.
Fig. 1 illustrates a block diagram of the various components of the system.
Fig. 2 illustrates the general system flow that is initiated by input data from the capturing devices and provides output.
Fig. 3 illustrates the MOE Calculation Module and its flow, denoting the conditions and requirements of the MOE calculation process.
Fig. 4 illustrates face detection.
Fig. 5 illustrates face detection with landmarks.
Fig. 6 illustrates the eye landmark points considered for calculating EAR.
Fig. 7 illustrates a graph of MOE values vs. time (in minutes) showing a drowsiness event.
Fig. 8 illustrates a graph of MOE values vs. time (for smaller eyes) showing a drowsiness event.
Fig. 9 illustrates a graph of MOE values vs. time (in minutes) showing no drowsiness event.
Fig. 10 illustrates a graph of MOE values vs. time (in minutes) showing no drowsiness event.
Detailed Description of Invention
Before explaining the present invention in detail, it is to be understood that the invention is not limited in its application to the details of the construction and arrangement of parts illustrated in the accompanying drawings. The invention is capable of other embodiments, as depicted in the different figures described above, and of being practiced or carried out in a variety of ways. It is to be understood that the phraseology and terminology employed herein are for the purpose of description and not of limitation.
It is also to be understood that the term "comprises" and grammatical equivalents thereof are used herein to mean that other components, ingredients, steps, etc. are optionally present. For example, an article "comprising" (or "which comprises") components A, B, and C can consist of (i.e., contain only) components A, B, and C, or can contain not only components A, B, and C but also one or more other components.
A hardware device equipped with dual cameras, one facing the road and the other directed towards the driver's side, utilizes state-of-the-art machine learning and deep learning technology to enhance driver safety and optimize driving experiences. The primary focus of the driver side camera is to monitor and analyze the driver's behavior and facial expressions to ensure the driver's attentiveness and emotional state during the journey.
The driver side camera, powered by machine learning and deep learning modules, accurately detects and tracks the driver's face in real-time [as shown in fig. 1]. By continually analyzing facial landmarks and expressions [as shown in figure 2], the system can identify signs of drowsiness, fatigue, distraction, or even potential signs of impairment. If the driver displays signs of drowsiness or distraction, the device can issue timely alerts, such as audio warnings, prompting the driver to regain focus or take a break to prevent accidents. Furthermore, the device can also employ machine learning to recognize specific patterns in the driver's behavior and adapt its warnings to individual driving styles and preferences, making the system more personalized and effective.
Apart from monitoring the driver's state, the driver side camera can
assist in enhancing driving performance. It can track eye movements and
head orientation to gauge the driver's attention to the road and driving
conditions. Additionally, the camera can analyze driving habits and provide
feedback on factors like following distance, lane-keeping, and reaction times,
promoting safer and more responsible driving behavior.
For instance, drooping eyelids, frequent blinking, or a lack of eye
movement may signal that the driver is becoming drowsy and potentially at
risk of falling asleep behind the wheel. When the system detects these
drowsiness-related cues, it promptly issues warnings and these warnings can
be in the form of audio alerts, or even haptic feedback to grab the driver's
attention. The device may suggest taking a break, stopping at a nearby rest
area, or switching drivers if applicable.
Moreover, machine learning capabilities of the system enable it to
continuously adapt and improve its drowsiness detection algorithm. By
learning from a vast dataset of drowsy driving scenarios and driver
responses, the system can refine its accuracy and sensitivity over time.
Facial Landmark detection model and implementation:
Deep-learning-based facial landmark detection systems have made impressive strides in recent years. Drowsiness is characterized by yawning, heavy eyelids, daydreaming, eye rubbing, an inability to concentrate, and lack of attention. The detection of drowsiness in drivers is facilitated by using facial landmarks and computer vision-based techniques. The system, comprising machine learning and deep learning algorithms, is equipped with face and landmark detection models that have been specifically trained on a comprehensive driver dataset, ensuring their accuracy and reliability in identifying critical facial features and movements associated with drowsiness. After predicting the landmarks, the eye region landmarks are extracted from the full set of facial landmarks to calculate eye aspect ratio (EAR) values.
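The specification does not prescribe a particular landmark detection library. As a non-limiting illustration only, a minimal sketch using dlib's 68-point facial landmark predictor (an assumption, not the claimed implementation) could extract the eye region landmarks as follows:

import cv2
import dlib

# Illustrative sketch: dlib's 68-point model places the right eye at indices
# 36-41 and the left eye at indices 42-47. The model file path is assumed.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_eye_landmarks(frame):
    """Return (left_eye, right_eye) lists of (x, y) points, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    right_eye = points[36:42]  # ordered roughly as P1..P6
    left_eye = points[42:48]   # ordered roughly as P1..P6
    return left_eye, right_eye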
Method for calculating Eye Aspect Ratio (EAR):
Eye Aspect Ratio (EAR) is a metric commonly used in computer vision and facial recognition applications to quantify and analyze eye-related movements and behaviors. It is particularly useful in tasks like eye tracking, gaze estimation, and drowsiness detection. EAR is calculated based on the positions of specific facial landmarks, especially those of the eyes, obtained through facial landmark detection algorithms.
To understand EAR, the relevant facial landmarks need to be defined. These landmarks consist of points representing the corners of the eyes, on both the horizontal and vertical edges. Typically, six landmarks are used for each eye: two horizontal points (denoted as P1 and P4) [as shown in fig. 3] and four vertical points in pairs of two (denoted as P2-P6 and P3-P5) [as shown in fig. 3]. The positions of these landmarks can be accurately tracked in a video or an image.
EAR is calculated using the following formula:
EAR = (||P2 - P6|| + ||P3 - P5||) / (2 * ||P1 - P4||)
Here, ||Pn - Pm|| represents the Euclidean distance between two points Pn
and Pm on the face.
The Euclidean distances are calculated separately and simultaneously for both the left eye and the right eye.
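Expressed as code, the formula above can be evaluated per eye and, as described, computed for the left and right eye in each frame. The following is a minimal sketch; the six points are assumed to be supplied in the order P1 to P6, and whether and how the two per-eye values are combined (for example, by averaging) is a design choice not specified here:

import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points ordered P1..P6."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)   # ||P2-P6|| + ||P3-P5||
    horizontal = math.dist(p1, p4)                     # ||P1-P4||
    return vertical / (2.0 * horizontal)

def frame_ear(left_eye, right_eye):
    """EAR is computed separately and simultaneously for both eyes."""
    return eye_aspect_ratio(left_eye), eye_aspect_ratio(right_eye)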
The EAR value is a representation of the ratio between the vertical distance (the distance between the top and bottom eyelid landmarks) and the horizontal distance (the distance between the inner and outer corner landmarks) of the eye. In other words, it reflects how open or closed the eye is. When the eye is fully open, the EAR value will be higher, and as the eye starts closing or blinking, the EAR value decreases. For example, when a person blinks or has heavy eyelids due to drowsiness, the EAR value will decrease momentarily as the vertical distance between the eyelids becomes smaller. Conversely, when the eye is open wide, such as during moments of alertness, the EAR value will be higher.
In drowsiness detection systems, a certain threshold for EAR is set based on the behavior of the eye during wakeful and drowsy states. When the EAR falls below the EAR calibrated threshold level, it indicates closed or partially closed eyes, which suggests that the person may be experiencing drowsiness.
As the eyes open or close, the Eye Aspect Ratio (EAR), a numerical measure, exhibits distinct changes. As the eyes blink, the EAR value either decreases significantly or increases rapidly.
Method of determining EAR Calibrated Threshold:
The system will calculate the open eye aspect ratio of the driver for a duration of two minutes, comprising 1200 frames. This data will then be used to determine the median value of the open eye aspect ratio (EAR) over this specified duration. Based on this median value, which corresponds to the size of the driver's eyes, the system will dynamically set the threshold to either 0.16 or 0.18.
An EAR threshold of 0.16 serves as the minimum value for smaller eyes, indicating when they may be considered closed or partially closed. After thorough analysis of multiple data sets, it has been determined that for relatively larger eyes, a threshold of 0.18 is the minimum value required to reliably discern whether an individual with normally sized eyes has closed them or not. This approach allows for tailored thresholds based on eye size variations, ensuring accurate detection across a range of eye sizes and shapes.
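This calibration step can be sketched as follows. The specification fixes the two candidate thresholds (0.16 and 0.18) but does not state the exact median value that separates smaller from larger eyes, so the MEDIAN_CUTOFF below is an assumed placeholder rather than a figure from the specification:

import statistics

SMALL_EYE_THRESHOLD = 0.16   # per the specification, for smaller eyes
LARGE_EYE_THRESHOLD = 0.18   # per the specification, for larger eyes
MEDIAN_CUTOFF = 0.22         # assumed placeholder; not given in the specification

def calibrate_ear_threshold(open_eye_ears):
    """open_eye_ears: EAR samples collected over the two-minute (1200-frame) window."""
    median_ear = statistics.median(open_eye_ears)
    return SMALL_EYE_THRESHOLD if median_ear < MEDIAN_CUTOFF else LARGE_EYE_THRESHOLD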
Mean Open Eye Aspect Ratio (MOE):
After calculating EAR values and determining an EAR calibrated
threshold to differentiate between open and closed eyes, the mean open eye
aspect ratio (MOE) is calculated. MOE represents the average EAR values
observed for open eyes over a specific time, providing valuable insights into a
person's eye behavior. By continuously monitoring and computing the EAR
values while the eyes are open, the MOE reflects the baseline level of eye
openness for an individual.
Method for MOE Calculation:
The process described involves collecting EAR data for a duration of two minutes, comprising 1200 frames of video. During this period, the system ensures the quality of the data by considering certain conditions. The first condition is that the face must be straight for the EAR data to be considered. This helps eliminate potential inaccuracies caused by varying head orientations during data collection.
The second condition involves using the EAR calibrated threshold, which is set at 0.16 or 0.18 (depending on eye size), to differentiate between open and closed eyes. Only EAR values greater than this threshold are taken into consideration. This step ensures that outliers or inaccurate data points that fall below the calibrated threshold (which might indicate closed eyes or other anomalies) are not included in the analysis.
When a frame meets both criteria (the face is straight and the EAR value is greater than the threshold), the EAR value is added to the data collection. However, if either of the conditions is not met (the face is not straight or the EAR value is below the threshold), the system inserts a placeholder value of (-1) instead of the EAR value. This ensures that the ongoing data collection is not disrupted and maintains a consistent duration of 1200 frames.
After the two-minute data collection period, the system performs data analysis by calculating the mean of the EAR values collected. However, before calculating the mean, the (-1) values are removed from the data to exclude the placeholder values. This step ensures that the calculated mean is representative of the actual EAR values during the two-minute period and is not influenced by the placeholder values.
By following this data preprocessing and analysis approach, the system can obtain a reliable and consistent mean EAR value for the two-minute duration, which serves as the Mean Open Eye Aspect Ratio (MOE). The MOE provides valuable insights into the driver's attentiveness and helps predict the likelihood of drowsiness during extended periods of driving. This information is crucial for timely interventions and ensuring road safety by addressing drowsiness before it escalates into a potential hazard on the road.
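A minimal sketch of this two-minute collection and averaging procedure is given below. It assumes that, for each frame, a per-frame EAR value and a boolean flag indicating whether the face is straight are already available from the landmark detection model; the placeholder value of -1 is used exactly as described above:

FRAMES_PER_WINDOW = 1200   # two minutes of video, as described
PLACEHOLDER = -1.0

def collect_window(frames, ear_threshold):
    """frames: iterable of (ear_value, face_is_straight) tuples for one window."""
    window = []
    for ear, face_is_straight in frames:
        if face_is_straight and ear > ear_threshold:
            window.append(ear)
        else:
            # Either condition failed: store the placeholder so the window
            # still spans a consistent 1200-frame duration.
            window.append(PLACEHOLDER)
    return window

def mean_open_eye_aspect_ratio(window):
    """Remove placeholder values, then average the remaining open-eye EAR values."""
    valid = [v for v in window if v != PLACEHOLDER]
    if not valid:
        return None   # no usable frames in this window (handling here is an assumption)
    return sum(valid) / len(valid)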
Method for implementing Mean Open Eye Aspect Ratio (MOE):
Drowsiness is indeed a gradual process that progresses through various stages before reaching complete eye closure. Recognizing this progression and predicting drowsiness beforehand is crucial for preventing accidents caused by inattentiveness on the road. The development of the Mean Open Eye Aspect Ratio (MOE) considers the critical stages in this drowsy cycle, with a focus on the mid-stage marked by heavy eyelids and frequent eye closure.
At the initial stage of drowsiness, drivers may experience yawning, which is often an early sign of fatigue. As drowsiness sets in further, heavy eyelids become noticeable, and drivers may find it increasingly challenging to keep their eyes open consistently. Frequent eye closures start to occur, where the eyes involuntarily shut momentarily, compromising the driver's alertness and reaction times. It is during these mid-stages of drowsiness that MOE plays a vital role. Because MOE is based on data from these critical mid-stages, it becomes a robust predictor of a driver's attentiveness. A higher MOE suggests that the person is generally more attentive, while a lower MOE indicates a higher propensity for drowsiness.
MOE serves as a critical indicator for predicting a person's behavioral state, especially in scenarios like drowsiness detection or attentiveness monitoring. A higher MOE suggests that the person tends to keep their eyes open more often, indicating alertness and attentiveness. On the other hand, a lower MOE signifies that the person's eyes are frequently closing or blinking, which might indicate drowsiness or reduced attentiveness.
By comparing real-time MOE values with the established MOE threshold, the system can make predictions about the person's current behavioral state. If the MOE value falls below the threshold established by the system, it could signal a closed or partially closed eye, potentially indicating drowsiness or fatigue. Overall, MOE plays a crucial role in improving safety and enhancing performance by providing valuable information about a person's eye behavior state and enabling proactive interventions when necessary.
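As a sketch of how this comparison could drive the alarm device (7), the loop below evaluates each completed two-minute window against the established MOE threshold; trigger_alarm is a hypothetical callback standing in for the alarm device and is not an element named in the specification:

def monitor(windows, moe_threshold, trigger_alarm):
    """windows: iterable of per-window MOE values computed as sketched above."""
    for moe in windows:
        if moe is None:
            continue   # no valid data in this window
        if moe < moe_threshold:
            # The real-time MOE has dropped below the personalized threshold:
            # treat this as a drowsiness event and alert the user.
            trigger_alarm("Drowsiness detected: please take a break.")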
MOE Result analysis using graphical representation:
One hour of video data was taken, that data was inferred using the face detection models, EAR values were extracted, and MOE was calculated using the above-mentioned methodology. After calculating MOE, graphs were plotted describing the MOE data for different devices and different users.
1. The graph of real-time MOE values vs. time given in figure 4 shows the MOE levels of Person A over a period of time. When an EAR calibrated threshold of 0.18 was selected for calculating MOE, it was observed that the MOE values fell below the threshold line of 0.25 when the driver felt drowsy.
2. The graph of real-time MOE values vs. time given in figure 5 shows the MOE levels of Person B, who has smaller eyes, over a period of time. When an EAR calibrated threshold of 0.16 was selected for calculating MOE, it was observed that the MOE values fell below the threshold line of 0.22 when the driver felt drowsy.
3. The graph of real-time MOE values vs. time given in figure 6 shows the MOE levels of Person C over a period of time. When an EAR calibrated threshold of 0.18 was selected for calculating MOE, it was observed that the MOE values did not fall below the threshold line of 0.25 and stayed above the threshold line for the whole duration, indicating that no drowsiness event was detected throughout the period.
4. The graph of real-time MOE values vs. time given in figure 6 shows the MOE levels of Person C over a period of time. When an EAR calibrated threshold of 0.18 was selected for calculating MOE, the MOE values did not fall below the threshold line of 0.25 and stayed above the threshold line for the whole duration, indicating that no drowsiness event was detected throughout the period.
MOE Threshold determination based on observations and analysis:
After calculating the Mean Open Eye Aspect Ratio (MOE) from continuous video data for drivers exhibiting drowsy behavior, the MOE values are critical for identifying patterns that could serve as indicators of an impending drowsy event. Remarkably, a distinct drop in MOE values before a valid drowsy event was consistently observed in more than 20 datasets collected from different devices and users.
To set a reliable threshold for identifying drowsy behavior based on MOE values, it was further analyzed that individuals with an EAR threshold of 0.16 must have an arithmetic MOE greater than 0.20 to be categorised as not sleepy. If their arithmetic MOE falls below 0.20, any such value will be considered indicative of a sleepy stage. Similarly, for those with an EAR threshold of 0.18, their arithmetic MOE must remain above 0.23 to be classified as not sleepy. If their arithmetic MOE is less than 0.23, it will indicate a sleepy stage.
By plotting and examining the MOE values for numerous occurrences of drowsy behavior, it is established that these thresholds strike a balance between sensitivity and specificity in drowsiness detection. A higher MOE threshold ensures effective capturing of instances where the EAR values decrease significantly, indicating a higher likelihood of drowsiness. However, the threshold is carefully chosen to avoid false positives and unnecessary alerts. Using these MOE thresholds, this drowsiness detection system can accurately predict the onset of drowsy behavior in real-time.
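The pairing of EAR calibrated thresholds with MOE thresholds described above (0.16 with 0.20, and 0.18 with 0.23) can be captured in a small lookup table; the sketch below simply restates that mapping as a window-level classification:

# MOE thresholds paired with the EAR calibrated thresholds, as stated above.
MOE_THRESHOLDS = {0.16: 0.20, 0.18: 0.23}

def classify_window(moe, ear_threshold):
    """Classify a window-level MOE value as 'sleepy' or 'not sleepy'."""
    moe_threshold = MOE_THRESHOLDS[ear_threshold]
    return "sleepy" if moe < moe_threshold else "not sleepy"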
The invention has been explained in relation to specific embodiments. It is to be understood that the foregoing description is only illustrative of the present invention and it is not intended that the invention be limited or restricted thereto. Many other specific embodiments of the present invention will be apparent to one skilled in the art from the foregoing disclosure.
All substitutions, alterations and modifications of the present invention to which the present invention is readily susceptible, without departing from the invention, and which come within the scope of the following claims, are intended to be covered. The scope of the invention should therefore be determined not with reference to the above description but with reference to the appended claims along with the full scope of equivalents to which such claims are entitled.
List of Reference Numerals
1 Hardware Device
2 Road Facing Camera
3 User Facing Camera
4 Machine Learning and Deep Learning Based Obstacle Detection Model
5 Machine Learning and Deep Learning Based Face and Facial Landmark Detection Model
6 Mean Open Eye Aspect Ratio (MOE) Calculation Module
7 Alarm Device

Claims:

We Claim:
1. A real-time drowsiness detection system comprising:
a hardware device (1) equipped with dual cameras configured to capture video data simultaneously, with one road facing camera (2) and the other a user facing camera (3);
a machine learning and deep learning based obstacle detection model (4) configured to detect objects on the road;
a machine learning and deep learning based face and facial landmarks detection model (5) for identifying and real-time monitoring of facial landmarks of the user to extract data for calculating the eye aspect ratio (EAR) using eye region landmarks;
a Mean Open Eye Aspect Ratio (MOE) calculation module (6) configured to calculate the real-time MOE of the user from the extracted values of EAR; and
an alarm device (7) configured to provide appropriate notifications and alerts based on triggers caused by either road side obstacle detection or user side drowsiness detection, or both simultaneously.
2. The system as claimed in claim 1, wherein the MOE calculation module (6) calculates the real-time MOE of the user while the user's eyes are open, and the EAR value used to determine the openness state of an eye is referred to herein as the EAR calibration threshold.
3. A method of calculating a personalized MOE of a user, comprising:
an input received from the user facing camera (3);
a machine learning and deep learning based face and facial landmarks detection model (5);
a Mean Open Eye Aspect Ratio (MOE) calculation module (6); and
an alarm device (7);
wherein the MOE calculation module (6) takes data of 120 seconds comprising 1200 frames, with the preconditions that the user has a straight face and that the concurrently captured EAR value from the machine learning and deep learning based face and facial landmarks detection model (5) is above the EAR calibrated threshold; failing to satisfy either or both of these two conditions flags a placeholder value (-1) to be considered in its place to ensure consistent data collection.
4. The method as claimed in claim 3, wherein EAR values are continually collected and fed to the MOE calculation module (6) for calculating the real-time mean open eye aspect ratio (MOE).
5. The method as claimed in claim 3, wherein the EAR calibrated threshold value is determined based on the median value of the open eye aspect ratio (EAR), and based on the calculated median value of EAR, the system will dynamically set the threshold to either 0.16 or 0.18.
6. The method as claimed in claim 3, wherein the real-time MOE is calculated based on the values of EAR that are above the EAR calibrated threshold of 0.16 or 0.18.
7. The method as claimed in claim 3, wherein the MOE threshold is selected based on the EAR calibrated threshold; a MOE threshold of 0.20 is selected when the EAR calibrated threshold is 0.16, and a MOE threshold of 0.23 when the EAR calibrated threshold is 0.18, and so on.
8. The method as claimed in claim 3, wherein the alarm device provides timely alerts in the form of an audio-visual signal for instances when the real-time MOE falls below the MOE threshold, indicating the drowsiness event detected in the user.
Dated this 14th day of May, 2024.

Documents

Application Documents

# Name Date
1 202421037843-STATEMENT OF UNDERTAKING (FORM 3) [14-05-2024(online)].pdf 2024-05-14
2 202421037843-PROOF OF RIGHT [14-05-2024(online)].pdf 2024-05-14
3 202421037843-POWER OF AUTHORITY [14-05-2024(online)].pdf 2024-05-14
4 202421037843-FORM FOR STARTUP [14-05-2024(online)].pdf 2024-05-14
5 202421037843-FORM FOR SMALL ENTITY(FORM-28) [14-05-2024(online)].pdf 2024-05-14
6 202421037843-FORM 1 [14-05-2024(online)].pdf 2024-05-14
7 202421037843-FIGURE OF ABSTRACT [14-05-2024(online)].pdf 2024-05-14
8 202421037843-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-05-2024(online)].pdf 2024-05-14
9 202421037843-EVIDENCE FOR REGISTRATION UNDER SSI [14-05-2024(online)].pdf 2024-05-14
10 202421037843-DRAWINGS [14-05-2024(online)].pdf 2024-05-14
11 202421037843-DECLARATION OF INVENTORSHIP (FORM 5) [14-05-2024(online)].pdf 2024-05-14
12 202421037843-COMPLETE SPECIFICATION [14-05-2024(online)].pdf 2024-05-14
13 202421037843-STARTUP [15-05-2024(online)].pdf 2024-05-15
14 202421037843-FORM28 [15-05-2024(online)].pdf 2024-05-15
15 202421037843-FORM-9 [15-05-2024(online)].pdf 2024-05-15
16 202421037843-FORM 18A [15-05-2024(online)].pdf 2024-05-15
17 Abstract.jpg 2024-06-10
18 202421037843-FER.pdf 2024-07-10
19 202421037843-FER_SER_REPLY [17-12-2024(online)].pdf 2024-12-17
20 202421037843-US(14)-HearingNotice-(HearingDate-11-03-2025).pdf 2025-02-14
21 202421037843-Correspondence to notify the Controller [07-03-2025(online)].pdf 2025-03-07
22 202421037843-Written submissions and relevant documents [25-03-2025(online)].pdf 2025-03-25
23 202421037843-PatentCertificate30-05-2025.pdf 2025-05-30
24 202421037843-IntimationOfGrant30-05-2025.pdf 2025-05-30
25 202421037843-Request Letter-Correspondence [02-06-2025(online)].pdf 2025-06-02
26 202421037843-Power of Attorney [02-06-2025(online)].pdf 2025-06-02
27 202421037843-FORM28 [02-06-2025(online)].pdf 2025-06-02
28 202421037843-Covering Letter [02-06-2025(online)].pdf 2025-06-02

Search Strategy

1 ssrn-3356401E_26-06-2024.pdf
2 SEARCH_STRATEGY1E_26-06-2024.pdf
