
A System And A Method For Detecting The State Of A Driver

Abstract: The present disclosure discloses a system (100) and a method (200) for detecting the state of a driver. The system (100) comprises a controlling device (102) that includes: a prompting module (104) to prompt a driver and initialize said controlling device (102) to identify the state of the driver; an input module (106) to receive an input command from the driver; an alert generation module (108) to generate an alert notification for the driver within a predefined time after receipt of the input command, and to prompt the driver to perform a hand gesture in front of a plurality of sensors (108a) within a stipulated time after generation of the alert notification; a pre-processing module (110) to receive analogous data based on the detection of the hand gesture of the driver, and to convert said analogous data into digital data; and a feedback module (112) to process said digital data to generate a feedback signal.


Patent Information

Application #
Filing Date
20 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application

Applicants

STARKENN TECHNOLOGIES PRIVATE LIMITED
Bunglow-59-U, SN-90, 65 & 69, Vasant Vihar-IV, Baner – 411045, Pune, Maharashtra India

Inventors

1. SWASTID SHRIKANT BADVE
Bunglow-59-U SN-90, 65 & 69, Vasant Vihar-IV, Baner, Pune-411045, Maharashtra, India
2. KOUSTUBH VIDYADHAR TILAK
Flat No. 301, Shrividya Apartment, Plot No. 64, S.No.98, Right Bhusari Colony, Kothrud, Pune-411038, Maharashtra, India
3. NUPUR SANDEEP JHAVERI
Flat No-8, Venus Apartments, 87 Railway Lines, Solapur-413001, Maharashtra, India
4. ANAGHA VIJAY RAMANE
33, Shivprasad Society, Ganeshmala, Sinhgad Road, Pune-411030, Maharashtra, India

Specification

FIELD OF INVENTION
The present disclosure generally relates to the field of drowsiness detection systems. More particularly, the present disclosure relates to a system and a method for detecting the state of a driver.
DEFINITION
As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
state of a driver: The term “state of a driver” hereinafter refers to a driver’s attentive state or driver’s inattentive state in real-time while driving a vehicle.
combination switch: The term “combination switch” hereinafter refers to a lever-style switch that controls multiple functions, including the headlights, turn signals, and windshield wipers of the vehicle.
set of processing rules: The term “set of processing rules” hereinafter refers to a set of rules for converting analogous data into digital data.
set of feedback rules: The term “set of feedback rules” hereinafter refers to rules for generating a positive feedback signal and/or a negative feedback signal to indicate the driver’s attentive state and/or inattentive state, respectively.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Driver drowsiness detection is a critical component in enhancing road safety by preventing accidents caused by drivers falling asleep or losing focus while driving. Traditionally, these systems often utilize camera-based technologies to monitor signs of fatigue through facial expressions and eye movements. However, these camera-based systems come with several technical and practical limitations.
The existing systems are inherently expensive due to the high cost of optical components and the technology required to process image data. Additionally, the installation and maintenance of such systems add further to the cost, making them less accessible for budget vehicles. Further, the effectiveness of camera-based detection systems heavily depends on lighting conditions. Poor lighting, such as during night-time driving or under bright sunlight, can significantly impair the system's ability to accurately detect the driver’s facial features and eye movements, leading to unreliable performance.
A drowsiness detection system that addresses the limitations of camera-based systems would offer a cost-effective, privacy-respecting, and adaptable solution capable of enhancing driver safety without reliance on visual monitoring.
Therefore, there is felt a need for a system and a method for detecting the state of a driver that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system for detecting the state of a driver.
Another object of the present disclosure is to provide a system with active intervention for driver monitoring.
Still another object of the present disclosure is to provide a system that detects the state of the driver using camera-less sensors.
Yet another object of the present disclosure is to provide a system that monitors the driver’s behaviour and driving pattern, and detects instances of distraction or drowsiness.
Still another object of the present disclosure is to provide a system that recognizes driver’s hand gestures.
Yet another object of the present disclosure is to provide a system that sends positive feedback to the system indicating the driver is alert.
Still another object of the present disclosure is to provide a method for detecting the state of a driver.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a system and method for detecting the state of a driver.
The system comprises a controlling device.
The controlling device includes a prompting module, an input module, an alert generation module, a pre-processing module, and a feedback module.
The prompting module of a controlling device is configured to prompt a driver and initialize the controlling device to identify the state of the driver in real-time.
The input module of a controlling device is configured to receive an input command from the driver for activation of the controlling device.
The alert generation module of the controlling device is configured to generate an alert notification for the driver within a predefined time after receipt of the input command, and is further configured to prompt the driver to perform a hand gesture in front of a plurality of sensors mounted on the combination switch of the vehicle within a stipulated time after generation of the alert notification by the alert generation module.
The pre-processing module of the controlling device is configured to receive the analogous data from the plurality of sensors, based on the detection of the hand gesture of the driver in front of the plurality of sensors, and further configured to convert the analogous data into digital data by means of a set of processing rules.
The feedback module of the controlling device is configured to receive the digital data and process the digital data by means of a set of feedback rules to generate a feedback signal, wherein the feedback signal includes a positive feedback signal to indicate the driver’s attentive state or else generate a negative feedback signal to indicate the driver’s inattentive state.
In an aspect, the feedback module is configured to utilize a machine learning model to:
• analyze the activity signals received from the activity module, wherein the machine learning model is configured to learn from accumulated activity data and feedback signals to refine the feedback rules continuously;
• optimize its predictions based on historical data regarding the driver's responses to alert notifications; and
• adapt the generation of positive or negative feedback signals through this continuous learning process, thereby enhancing the accuracy and responsiveness of the system in detecting the driver's alertness state.
In an aspect, the preprocessing module employs digital signal processing (DSP) techniques to:
• enhance the accuracy and reliability of converting analogous data to digital data by using noise filtering, signal amplification, and data compression; and
• ensure that the digital data relayed to subsequent modules is of high fidelity, enabling more precise detection of the driver's alertness state and more effective generation of alert notifications based on this data.
In an aspect, the alarm signal is generated at uniform time intervals to detect the alertness of the driver.
In an aspect, the centralized server's analytical module uses predictive analytics to anticipate potential driver inattention based on patterns detected in the vehicle feedback data.
In an aspect, the plurality of sensors is a combination of infrared, capacitive, and pressure sensors to provide a comprehensive data set for the inputting module.
The present disclosure also envisages a method for detecting the state of a driver. The method comprises the following steps:
• prompting, by a prompting module of a controlling device, a driver and initializing the controlling device to identify the state of the driver in real-time;
• receiving, by an input module of a controlling device, an input command from the driver for activation of the controlling device;
• generating, by an alert generation module of the controlling device, an alert notification of the driver within a predefined time after receipt of the input command;
• prompting, by the alert generation module, the driver to perform the hand gesture in front of a plurality of sensors mounted on the combination switch of the vehicle within a stipulated time after the generation of the alert notification by the alert generation module;
• receiving, by a pre-processing module of the controlling device, analogous data from the plurality of sensors, based on the detection of the hand gesture of the driver in front of the plurality of sensors;
• pre-processing, by the pre-processing module, the analogous data for converting the analogous data into digital data by means of a set of processing rules;
• receiving, by a feedback module of the controlling device, the digital data; and
• processing, by the feedback module, the digital data by means of a set of feedback rules to generate a feedback signal, wherein the feedback signal includes a positive feedback signal to indicate the driver’s attentive state or a negative feedback signal to indicate the driver’s inattentive state.
In an aspect, the method further comprises the steps:
• receiving, by a vehicle operating module of the controlling device, the negative feedback signal from the feedback module; and
• deactivating, by the vehicle operating module, the propulsion mechanism of the vehicle to restrict the movement of the vehicle.
In an aspect, the plurality of sensors is a camera-less sensor selected from a group of sensors consisting of steering angle sensors, hand gesture sensors, motion detection sensors, pressure sensors, ultrasonic sensors, proximity sensors, and a combination thereof.
In an aspect, the analogous data includes a hand movement gesture of a driver in real-time, captured as at least one of infrared, capacitive, and pressure data.
In an aspect, the step of processing includes applying, by the feedback module, digital signal processing techniques (DSP) on the converted digital data.
In an aspect, the step of generating includes adjusting, by the alert generation module, the frequency of the alert notification based on real-time analysis of driving conditions or time of day.
In an aspect, the step of prompting includes prompting, by the alert generation module, the driver for hand gesture performance by visual and/or auditory signals.
In an aspect, the step of processing includes applying, by the feedback module, a machine learning model to refine the set of feedback rules over time based on accumulated data.
In an aspect, the step of deactivating includes gradually reducing, by the vehicle operating module, the vehicle speed before completely cutting down the acceleration of the vehicle.
In an aspect, the method further comprises the steps:
• establishing, by the feedback module, a communication between the controlling device and a remote cloud server over a wireless communication network; and
• transmitting, by the feedback module, the feedback signal indicating the driver’s attentive or inattentive state from the controlling device to the remote cloud server for further cloud based predictive analysis.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system and method for detecting the state of a driver of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates a system for detecting the state of a driver in accordance with an embodiment of the present disclosure;
Figure 2A and Figure 2B illustrate a flow chart depicting steps involved in the method for detecting the state of a driver in accordance with an embodiment of the present disclosure;
Figures 3A to 3C illustrate an exemplary pseudo-code depicting the implementation of the method for detecting the state of a driver;
Figures 4A to 4C illustrate an exemplary pseudo-code depicting the implementation of a feedback module to utilize a machine learning model; and
Figures 5A and 5B illustrate an exemplary pseudo-code depicting the implementation of a preprocessing module that employs digital signal processing (DSP) techniques.
LIST OF REFERENCE NUMERALS
100 - System
102 - Controlling Device
104 - Prompting Module
106 - Input Module
108 - Alert Generation Module
108a - Plurality Of Sensors
110 - Pre-Processing Module
112 - Feedback Module
114 - Vehicle Operating Module
116 - Remote Cloud Server
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a,” "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms “including,” and “having,” are open ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being “engaged to,” "connected to," or "coupled to" another element, it may be directly engaged, connected, or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
Driver drowsiness detection is a critical component in enhancing road safety by preventing accidents caused by drivers falling asleep or losing focus while driving. Traditionally, these systems often utilize camera-based technologies to monitor signs of fatigue through facial expressions and eye movements. However, these camera-based systems come with several technical and practical limitations.
The existing systems are inherently expensive due to the high cost of optical components and the technology required to process image data. Additionally, the installation and maintenance of such systems add further to the cost, making them less accessible for budget vehicles. Further, the effectiveness of camera-based detection systems heavily depends on lighting conditions. Poor lighting, such as during night-time driving or under bright sunlight, can significantly impair the system's ability to accurately detect the driver’s facial features and eye movements, leading to unreliable performance.
To address the issues of the existing systems and methods, the present disclosure envisages a system for detecting the state of a driver (hereinafter referred to as “system 100”) and a method for detecting the state of a driver (hereinafter referred to as “method 200”). The system 100 will now be described with reference to Figure 1, and the method 200 will be described with reference to Figure 2A and Figure 2B, together with Figures 3A to 5B.
Referring to Figure 1, the system 100 comprises a controlling device 102 of a vehicle.
The controlling device 102 includes a prompting module 104, an input module 106, an alert generation module 108, a pre-processing module 110, and a feedback module 112.
The prompting module 104 of a controlling device 102 is configured to prompt a driver and initialize the controlling device 102 to identify the state of the driver in real-time.
The input module 106 of the controlling device 102 is configured to receive an input command from the driver for activation of the controlling device 102.
The alert generation module 108 of the controlling device 102 is configured to generate an alert notification of the driver within a predefined time after receipt of the input command.
The alert generation module 108 is further configured to prompt the driver to perform the hand gesture in front of a plurality of sensors 108a mounted on the combination switch of the vehicle within a stipulated time after the generation of the alert notification by the alert generation module 108.
The pre-processing module 110 of the controlling device 102 is configured to receive the analogous data from the plurality of sensors 108a, based on the detection of the hand gesture of the driver in front of the plurality of sensors 108a.
The pre-processing module 110 of the controlling device 102 is further configured to convert the analogous data into digital data by means of a set of processing rules.
The feedback module 112 of the controlling device 102 is configured to receive the digital data and process the digital data by means of a set of feedback rules to generate a feedback signal, wherein the feedback signal includes a positive feedback signal to indicate the driver’s attentive state or else generate a negative feedback signal to indicate the driver’s inattentive state.
The feedback module 112 is configured to utilize a machine learning model to:
• analyze the activity signals received from the activity module, wherein the machine learning model is configured to learn from accumulated activity data and feedback signals to refine the feedback rules continuously;
• optimize its predictions based on historical data regarding the driver's responses to alert notifications; and
• adapt the generation of positive or negative feedback signals through this continuous learning process, thereby enhancing the accuracy and responsiveness of the system in detecting the driver's alertness state.
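By way of a non-limiting illustration of the above aspect, the following Python sketch shows one possible way such continuous learning could be realized. The feature inputs (response delay and gesture confidence), the choice of an incrementally trained linear classifier, and all class labels are assumptions for illustration only and do not form part of the claimed implementation of the feedback module 112.

# Illustrative sketch only; feature choices and the classifier are assumptions,
# not the claimed implementation of the feedback module (112).
import numpy as np
from sklearn.linear_model import SGDClassifier

class FeedbackModel:
    """Incrementally refines feedback rules from accumulated activity data."""

    def __init__(self):
        self.clf = SGDClassifier()              # online learner, updated continuously
        self.classes = np.array([0, 1])         # 0 = inattentive, 1 = attentive

    def update(self, response_delay_s, gesture_confidence, attentive):
        # Learn from each alert/response pair (historical driver responses).
        x = np.array([[response_delay_s, gesture_confidence]])
        y = np.array([1 if attentive else 0])
        self.clf.partial_fit(x, y, classes=self.classes)

    def feedback_signal(self, response_delay_s, gesture_confidence):
        # Returns "positive" (attentive) or "negative" (inattentive).
        x = np.array([[response_delay_s, gesture_confidence]])
        return "positive" if self.clf.predict(x)[0] == 1 else "negative"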
In an aspect, the preprocessing module 110 employs digital signal processing (DSP) techniques to:
• enhance the accuracy and reliability of converting analogous data to digital data by using noise filtering, signal amplification, and data compression; and
• ensure that the digital data relayed to subsequent modules is of high fidelity, enabling more precise detection of the driver's alertness.
In an aspect, the digital signal processing (DSP) techniques are configured to perform:
• Data Acquisition: Collect analog data from the sensors;
• Noise Filtering: Apply filtering techniques to remove noise and irrelevant information;
• Signal Amplification: Amplify the relevant signal to make it more distinguishable for processing;
• Data Compression: Compress the data to optimize storage and transmission without losing critical information;
• Data Conversion: Convert the processed analog signal into digital data; and
• Data Validation: Ensure the integrity and fidelity of the digital data for subsequent use.
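A minimal numpy-based sketch of the pipeline enumerated above is given below for illustration. The window size, gain, bit depth, and decimation factor are assumptions and are not prescribed by the present disclosure.

# Illustrative DSP pipeline; all numeric parameters are assumptions.
import numpy as np

def preprocess(analog_samples, window=5, gain=2.0, bits=10):
    x = np.asarray(analog_samples, dtype=float)             # Data Acquisition
    kernel = np.ones(window) / window
    x = np.convolve(x, kernel, mode="same")                  # Noise Filtering (moving average)
    x = gain * x                                              # Signal Amplification
    x = x[::2]                                                # Data Compression (decimation by 2)
    levels = 2 ** bits - 1
    digital = np.clip(np.round(x), 0, levels).astype(int)    # Data Conversion (quantization)
    if digital.min() < 0 or digital.max() > levels:           # Data Validation
        raise ValueError("digital data out of range")
    return digital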
In an aspect, the alarm signal is generated at uniform time intervals to detect the alertness of the driver.
In an aspect, an analytical module of the remote cloud server 116 uses predictive analytics to anticipate potential driver inattention based on patterns detected in the vehicle feedback data.
In an aspect, the plurality of sensors 108a is a camera-less sensor selected from a group of sensors consisting of steering angle sensors, hand gesture sensors, motion detection sensors, pressure sensors, ultrasonic sensors, proximity sensors, and a combination thereof.
In an aspect, the plurality of sensors 108a is a combination of infrared, capacitive, and pressure sensors to provide a comprehensive data set for the inputting module.
In an exemplary embodiment, the system comprises a proximity sensor 108a mounted on the combination switch for recognizing hand gestures, and the recognized hand gesture is used as an indication of the driver’s alertness. The proximity sensor 108a for hand gesture recognition is activated only when an alarm is generated. The driver has to gesture his/her hand in front of the proximity sensor mounted on the combination switch without taking the hand off the steering wheel. The alarm is generated at uniform, configurable time intervals to detect the alertness of the driver, and the mechanism can be activated for specific times of the day, particularly after lunch and at night. The hand gesture is to be performed within a stipulated time period; if the driver fails to perform the hand gesture within the stipulated time, the sensor transmits feedback to the system indicating that the driver is drowsy and not alert. In response to this feedback, the system cuts off the accelerator and transmits the feedback to the cloud analytics running on a remote server.
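Purely as a non-limiting illustration of this exemplary embodiment, the following Python sketch outlines the alarm-and-gesture loop. The helper callables (generate_alarm, gesture_detected, cut_accelerator, send_to_cloud) are hypothetical placeholders for vehicle-specific interfaces, and the interval and timeout values are assumptions.

# Illustrative control loop; all helper functions are hypothetical placeholders.
import time

ALARM_INTERVAL_S = 300     # configurable uniform interval between alarms (assumption)
GESTURE_TIMEOUT_S = 10     # stipulated time to perform the hand gesture (assumption)

def monitoring_loop(generate_alarm, gesture_detected, cut_accelerator, send_to_cloud):
    while True:
        time.sleep(ALARM_INTERVAL_S)
        generate_alarm()                         # alert notification to the driver
        deadline = time.time() + GESTURE_TIMEOUT_S
        attentive = False
        while time.time() < deadline:
            if gesture_detected():               # proximity sensor on the combination switch
                attentive = True
                break
            time.sleep(0.1)
        if attentive:
            send_to_cloud("positive")            # driver is alert
        else:
            cut_accelerator()                    # restrict vehicle movement
            send_to_cloud("negative")            # driver is drowsy / not alert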
Figure 2A and Figure 2B illustrate a flow chart depicting steps involved in method 200 for detecting the state of a driver in accordance with an embodiment of the present disclosure. The order in which method 200 is described is not intended to be construed as a limitation, and any number of the described method steps may be combined in any order to implement method 200, or an alternative method. Furthermore, method 200 may be implemented by processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable medium/instructions, or a combination thereof. The method 200 comprises the following steps:
At step 202, the method 200 includes prompting, by a prompting module 104 of a controlling device 102, a driver and initializing the controlling device 102 to identify the state of the driver in real-time.
At step 204, the method 200 includes receiving, by an input module 106 of a controlling device 102, an input command from the driver for activation of the controlling device 102.
At step 206, the method 200 includes generating, by an alert generation module 108 of the controlling device 102, an alert notification of the driver within a predefined time after receipt of the input command.
At step 208, the method 200 includes prompting, by the alert generation module 108, the driver to perform the hand gesture in front of a plurality of sensors 108a mounted on the combination switch of the vehicle within a stipulated time after the generation of the alert notification by the alert generation module 108.
At step 210, the method 200 includes receiving, by a pre-processing module 110 of the controlling device 102, analogous data from the plurality of sensors 108a, based on the detection of the hand gesture of the driver in front of the plurality of sensors 108a.
At step 212, the method 200 includes pre-processing, by the pre-processing module 110, the analogous data for converting the analogous data into digital data by means of a set of processing rules.
At step 214, the method 200 includes receiving, by a feedback module 112 of the controlling device 102, the digital data.
At step 216, the method 200 includes processing, by the feedback module 112, the digital data by means of a set of feedback rules to generate a feedback signal, wherein the feedback signal includes a positive feedback signal to indicate the driver’s attentive state or a negative feedback signal to indicate the driver’s inattentive state.
In an aspect, the method 200 further comprises:
• receiving, by a vehicle operating module 114 of the controlling device 102, the negative feedback signal from the feedback module 112; and
• deactivating, by the vehicle operating module 114, the propulsion mechanism of the vehicle to restrict the movement of the vehicle.
In an aspect, the analogous data includes a hand movement gesture of a driver in real-time, captured as at least one of infrared, capacitive, and pressure data.
In an aspect, the step of processing includes applying, by the feedback module 112, digital signal processing techniques (DSP) on the converted digital data.
In an aspect, the step of generating includes adjusting, by the alert generation module 108, the frequency of the alert notification based on real-time analysis of driving conditions or time of day.
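One simple, purely illustrative way to realize such adjustment is a schedule that shortens the alert interval during higher-risk periods; the specific hours and interval values below are assumptions and not part of the present disclosure.

# Illustrative only; the hours and interval values are assumptions.
def alert_interval_seconds(hour_of_day, highway_driving):
    # Shorter intervals after lunch and at night, when drowsiness risk is higher.
    if 13 <= hour_of_day <= 15 or hour_of_day >= 22 or hour_of_day < 6:
        interval = 180
    else:
        interval = 420
    # Monotonous highway driving further shortens the interval.
    return interval - 60 if highway_driving else interval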
In an aspect, the step of prompting includes prompting, by the alert generation module 108, the driver for hand gesture performance by visual and/or auditory signals.
In an aspect, the step of processing includes applying, by the feedback module 112, a machine learning model to refine the set of feedback rules over time based on accumulated data.
In an aspect, the step of deactivating includes gradually reducing, by the vehicle operating module 114, the vehicle speed before completely cutting down the acceleration of the vehicle.
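A minimal sketch of such a ramp-down is given below; set_throttle is a hypothetical vehicle interface, and the step size and delay are assumptions.

# Illustrative ramp-down; set_throttle() is a hypothetical vehicle interface.
import time

def deactivate_propulsion(set_throttle, current_throttle, step=0.1, delay_s=1.0):
    # Gradually reduce throttle before cutting acceleration completely.
    throttle = current_throttle
    while throttle > 0.0:
        throttle = max(0.0, throttle - step)
        set_throttle(throttle)
        time.sleep(delay_s)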
In an aspect, the method 200 further comprises:
• establishing, by the feedback module 112, a communication between the controlling device 102 and a remote cloud server 116 over a wireless communication network; and
• transmitting, by the feedback module 112, the feedback signal indicating the driver’s attentive or inattentive state from the controlling device 102 to the remote cloud server 116 for further cloud based predictive analysis.
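By way of illustration of the above aspect, such transmission could be a simple HTTPS request to a cloud endpoint; the endpoint URL, payload fields, and use of the requests library are assumptions and not part of the present disclosure.

# Illustrative transmission; the endpoint URL and payload schema are assumptions.
import time
import requests

def transmit_feedback(vehicle_id, feedback_signal,
                      endpoint="https://example-cloud-server/api/feedback"):
    payload = {
        "vehicle_id": vehicle_id,
        "feedback": feedback_signal,      # "positive" (attentive) or "negative" (inattentive)
        "timestamp": time.time(),
    }
    response = requests.post(endpoint, json=payload, timeout=5)
    response.raise_for_status()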
Figures 3A to 3C illustrate an exemplary pseudo-code depicting the implementation of the method for detecting the state of a driver.
Figures 4A to 4C illustrate an exemplary pseudo-code depicting the implementation of a feedback module to utilize a machine learning model.
Figures 5A and 5B illustrate an exemplary pseudo-code depicting the implementation of a preprocessing module that employs digital signal processing (DSP) techniques.
In an operative configuration, the system 100 comprises a prompting module 104 of a controlling device 102 configured to prompt a driver and initialize the controlling device 102 to identify the state of the driver in real-time. The input module 106 is integral to the initiation of the alert system. It is specifically designed to receive an input command from the driver, which activates the controlling device 102. This activation is essential for commencing the sequence that monitors the driver's attentiveness. The alert generation module 108 is configured to receive an activation command from the input module 106 and to generate an alert notification for the driver. The notification is issued within a predefined timeframe to ensure immediate response capabilities.
Subsequently, it prompts the driver to perform a specific hand gesture. This gesture must be executed in front of a series of sensors 108a that are strategically mounted on the combination switch of the vehicle. The driver is required to complete this gesture within a stipulated time, set from the moment the alert is generated, to verify their attentiveness.
The pre-processing module 110 is configured to receive the analogous data from the plurality of sensors 108a which capture the driver's hand gesture. The primary function of this module is to accurately detect and interpret these gestures. Following detection, the pre-processing module 110 is responsible for converting the analog data into digital data. This conversion is conducted according to a predefined set of processing rules, which are designed to ensure the data's integrity and suitability for further analysis. The feedback module 112 is configured to receive the converted digital data and processes this information based on a specific set of feedback rules.
The feedback module 112 is further configured to generate a feedback signal that directly corresponds to the driver’s state of attentiveness. If the processed data indicates that the driver is attentive and has responded appropriately to the alert and gesture prompt, a positive feedback signal is generated. Conversely, if the data suggests a lack of response or an inappropriate response indicating potential inattentiveness, a negative feedback signal is produced.
Advantageously, the system 100 provides for detecting the state of a driver. The system 100 provides a mechanism to detect the attentive state and inattentive state of the driver in real-time without a camera. The system 100 provides a mechanism to detect patterns indicative of drowsiness, such as reduced steering corrections, consistent pedal pressure, or changes in heart rate variability. The system 100 provides a scalable solution that can be deployed in both new and older vehicle models without significant modifications. The system 100 provides auditory alerts or vibration cues to re-engage the driver once signs of drowsiness are detected. This immediate feedback is crucial in preventing potential accidents caused by fatigue.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for detecting the state of a driver that:
• provide active intervention for driver monitoring;
• monitor the driver’s behaviour and driving pattern;
• detect the driver’s distraction or drowsiness;
• accurately recognize the driver’s hand gestures; and
• provide feedback for generating alerts in real-time.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
CLAIMS
WE CLAIM:
1. A method (200) for detecting the state of a driver, said method (200) comprising the following steps:
• prompting, by a prompting module (104) of a controlling device (102), a driver and initializing said controlling device (102) to identify the state of the driver in real-time;
• receiving, by an input module (106) of said controlling device (102), an input command from the driver for activation of the controlling device (102);
• generating, by an alert generation module (108) of said controlling device (102), an alert notification of the driver within a predefined time after receipt of said input command;
• prompting, by said alert generation module (108), the driver to perform the hand gesture in front of a plurality of sensors (108a) mounted on a combination switch of the vehicle within a stipulated time after the generation of said alert notification by said alert generation module (108);
• receiving, by a pre-processing module (110) of the controlling device (102), analogous data from the plurality of sensors (108a), based on the detection of the hand gesture of the driver in front of said plurality of sensors (108a);
• pre-processing, by said pre-processing module (110), the analogous data for converting said analogous data into digital data by means of a set of processing rules;
• receiving, by a feedback module (112) of said controlling device (102), said digital data; and
• processing, by said feedback module (112), said digital data by means of a set of feedback rules to generate a feedback signal, wherein said feedback signal includes a positive feedback signal to indicate the driver’s attentive state or else generate a negative feedback signal to indicate the driver’s inattentive state.
2. The method as claimed in claim 1, further comprises:
• establishing, by said feedback module (112), a communication between the controlling device (102) and a remote cloud server (116) over a wireless communication network; and
• transmitting, by said feedback module (112), the feedback signal indicating the driver’s attentive or inattentive state from said controlling device (102) to said remote cloud server (116) for further cloud based predictive analysis.
3. The method as claimed in claim 1, further comprises:
• receiving, by a vehicle operating module (114) of the controlling device (102), said negative feedback signal from said feedback module (112); and
• deactivating, by said vehicle operating module (114), the propulsion mechanism of the vehicle to restrict the movement of the vehicle.
4. The method as claimed in claim 1, wherein said plurality of sensors (108a) is a camera-less sensor selected from a group of sensors consisting of steering angle sensors, hand gesture sensors, motion detection sensors, pressure sensors, ultrasonic sensors, proximity sensors, and a combination thereof.
5. The method as claimed in claim 1, wherein said analogous data includes a hand movement gesture of a driver in real-time, at least one of infrared, capacitive, and pressure data.
6. The method as claimed in claim 1, wherein said step of processing includes applying, by the feedback module (112), digital signal processing techniques (DSP) on said converted digital data.
7. The method as claimed in claim 1, wherein said step of generating includes adjusting, by the alert generation module (108), the frequency of the alert notification based on real-time analysis of driving conditions or time of day.
8. The method as claimed in claim 1, wherein said step of prompting includes prompting, by said alert generation module (108), the driver for hand gesture performance by visual and/or auditory signals.
9. The method as claimed in claim 1, wherein said step of processing includes applying, by said feedback module (112), a machine learning model to refine the set of feedback rules over time based on accumulated data.
10. The method as claimed in claim 3, wherein said step of deactivating includes gradually reducing, by said vehicle operating module (114), the vehicle speed before completely cutting down the acceleration of the vehicle.
11. A system (100) for detecting the state of a driver, said system (100) comprising:
• a controlling device (102) comprises:
i. a prompting module (104) of a controlling device (102) configured to prompt a driver to initialize said controlling device (102) to identify the state of the driver in real-time;
ii. an input module (106) of said controlling device (102) configured to receive an input command from the driver for activation of the controlling device (102);
iii. an alert generation module (108) of said controlling device (102) configured to generate an alert notification of the driver within a predefined time after receipt of the input command, and further configured to prompt the driver to perform the hand gesture in front of a plurality of sensors (108a) mounted on combination switch of the vehicle within a stipulated time after generation of the alert notification by said alert generation module (108);
iv. a pre-processing module (110) of said controlling device (102) configured to receive said analogous data from said plurality of sensors (108a), based on the detection of the hand gesture of the driver in front of the plurality of sensors (108a), and further configured to convert said analogous data into digital data by means of a set of processing rules; and
v. a feedback module (112) of said controlling device (102) configured to receive said digital data, and process said digital data by means of a set of feedback rules to generate a feedback signal, wherein said feedback signal includes a positive feedback signal to indicate the driver’s attentive state or else generate a negative feedback signal to indicate the driver’s inattentive state.
12. The system as claimed in claim 11, wherein said feedback module (112) is configured to utilize a machine learning model to:
• analyze the activity signals received from the activity module, wherein said machine learning model is configured to learn from accumulated activity data and feedback signals to refine the feedback rules continuously;
• optimize its predictions based on historical data regarding the driver's responses to alert notifications; and
• adapt the generation of positive or negative feedback signals through this continuous learning process, thereby enhancing the accuracy and responsiveness of said system in detecting the driver's alertness state.
13. The system as claimed in claim 11, wherein said preprocessing module (110) employs digital signal processing (DSP) techniques to:
• enhance the accuracy and reliability of converting analogous data to digital data by using noise filtering, signal amplification, and data compression; and
• ensures that the digital data relayed to subsequent modules is of high fidelity, enabling more precise detection of the driver's alertness state and more effective generation of alert notifications based on this data.
14. The system as claimed in claim 11, wherein said alarm signal is generated at uniform time intervals to detect the alertness of the driver.
15. The system as claimed in claim 11, wherein said remote cloud server (116) analytical module uses predictive analytics to anticipate potential driver inattention based on patterns detected in the vehicle feedback data.
16. The system as claimed in claim 11, wherein said plurality of sensors (108a) is a combination of infrared, capacitive, and pressure sensors to provide a comprehensive data set for the inputting module.
Dated this 29th day of May, 2024

_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA – 25
of R.K.DEWAN & CO.
Authorized Agent of Applicant

TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, MUMBAI

Documents

Application Documents

# Name Date
1 202321049056-STATEMENT OF UNDERTAKING (FORM 3) [20-07-2023(online)].pdf 2023-07-20
2 202321049056-PROVISIONAL SPECIFICATION [20-07-2023(online)].pdf 2023-07-20
3 202321049056-PROOF OF RIGHT [20-07-2023(online)].pdf 2023-07-20
4 202321049056-FORM 1 [20-07-2023(online)].pdf 2023-07-20
5 202321049056-DRAWINGS [20-07-2023(online)].pdf 2023-07-20
6 202321049056-DECLARATION OF INVENTORSHIP (FORM 5) [20-07-2023(online)].pdf 2023-07-20
7 202321049056-FORM-26 [21-07-2023(online)].pdf 2023-07-21
8 202321049056-Proof of Right [03-08-2023(online)].pdf 2023-08-03
9 202321049056-ENDORSEMENT BY INVENTORS [29-05-2024(online)].pdf 2024-05-29
10 202321049056-DRAWING [29-05-2024(online)].pdf 2024-05-29
11 202321049056-COMPLETE SPECIFICATION [29-05-2024(online)].pdf 2024-05-29
12 202321049056-FORM 18 [12-06-2024(online)].pdf 2024-06-12
13 Abstract1.jpg 2024-06-26
14 202321049056-REQUEST FOR CERTIFIED COPY [06-02-2025(online)].pdf 2025-02-06