Abstract: A SYSTEM AND METHOD FOR DETECTION OF SMOKING CONDUCT BY DRIVER. The present invention relates to a system and method for detection of smoking conduct by a driver. It includes a hardware device equipped with a dual-camera setup to monitor and analyse driver smoking behaviour in real-time through a video stream or live stream. One camera faces the road to capture traffic conditions and the other camera is directed at the driver to analyse smoking behaviour in the vehicle. A trained face detection model tracks the driver's face in real-time under conditions such as variable lighting or partial occlusion. A landmark detection model tracks the driver's facial features, namely the eyes, nose, mouth and ears, with particular focus on the mouth. Said model extracts MAR coordinates to monitor mouth position and movements for detecting smoking-related behaviour. Simultaneously, video frames pass through a YOLOv10 object detection model to identify cigarettes or smoke. The outputs of the MAR analysis and the cigarette detection are combined and post-processed in a proximity analysis model to detect a cigarette or smoke associated with the driver's mouth. Real-time alerts/notifications are generated to the driver. FIG 1
Description:FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION: A SYSTEM AND METHOD FOR DETECTION OF SMOKING CONDUCT BY DRIVER.
2. APPLICANT:
(a) NAME : Nervanik AI Labs Pvt. Ltd.
(b) NATIONALITY : Indian
(c) ADDRESS : A – 1111, World Trade Tower,
Off. S G Road, B/H Skoda Showroom,
Makarba, Ahmedabad – 380051
Gujarat, INDIA.
3. PREAMBLE TO THE DESCRIPTION
PROVISIONAL
The following specification describes the invention.
☑ COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
Field of the invention
The present invention relates to a system and method for detection of smoking conduct by a driver. More particularly, the present invention relates to real-time detection of cigarettes/smoke based on an object detection model, through a video stream or a live stream from an installed camera, while driving.
Background of the invention
In today's world, smoking has become a public health problem that is difficult to solve worldwide; smoking is well known to cause many diseases, directly or indirectly, and can even be life threatening. Smoking while driving significantly increases multiple hazards that could compromise road safety and endanger the lives of both the driver and passengers. Driver smoking is a significant distraction and health hazard in vehicle cabins and continues to pose risks to road safety and driver wellbeing. It diverts the driver's attention, and cigarette smoke can build up inside the vehicle cabin, creating visual obstruction and increasing discomfort for all occupants. There is also a fire risk within the vehicle cabin if ash or a lit cigarette is accidentally dropped. These risks are particularly concerning in vehicles used for public transport or fleet services, or when children are present.
Early studies on smoking behaviour recognition have produced a variety of detection methods. Researchers have carried out a great deal of work on smoking behaviour recognition, generally including methods that detect smoke, methods that detect smoking actions, and the like. With the rapid development of computer vision and hardware technology, smoking behaviour detection from video images is currently the mainstream of research. Existing results show many different image-based methods using deep learning algorithms; however, gesture-based recognition suffers from complex smoking gestures, varied skin tones, varied camera angles and the like, so the recognised gestures differ and the misjudgment rate is high. Detection of cigarette smoke is also difficult: because the smoke concentration is low, the smoke diffuses easily, blends with light indoor backgrounds and has indistinct edges, it is hard to distinguish and accuracy is hard to improve. Methods that detect cigarettes or identify smoking actions from human body keypoints achieve higher accuracy on large targets, but video monitoring images inevitably face scale problems: the size of the target relative to the whole image varies greatly between images, and this scale difference severely limits the overall performance of existing detectors.
In addition, traditional methods often lack the precision and reliability needed for real-time detection under varying cabin conditions. Modern vehicles may employ air quality sensors or smoke detectors, but these cannot specifically detect smoking, distinguish cigarette smoke from dust or vapour, or identify the smoker. They also react only after smoke is present, rather than preventing the act. One of the biggest technical challenges is that smoking gestures often look similar to other everyday actions: gestures such as scratching, drinking or holding a pen near the face can visually resemble smoking, which increases the likelihood of false detections. This makes it very difficult for basic motion or gesture detection systems to reliably identify smoking without generating false alarms. Another challenge is the changing lighting inside a vehicle, such as sunlight, shadows and nighttime driving, which can affect the accuracy of visual detection systems.
Even though there is widespread awareness of these dangers, and despite existing regulations in many jurisdictions, the practice of smoking in vehicles persists, especially in those used for commercial or family purposes, and such regulations are often hard to enforce. Drivers may ignore such measures, and without real-time monitoring smoking often goes unnoticed. To address this critical issue, there is an urgent need for an automated and intelligent solution that can effectively detect and discourage smoking behaviour while driving, in real-time.
Therefore, there exists a need for a robust, intelligent, real-time system and method capable of accurately detecting smoking conduct inside vehicle cabins. Such a system and method must overcome the deficiencies of conventional object detection systems and integrate a multi-layered approach involving facial analysis, gesture recognition and temporal validation. Therefore, the present invention provides a system and method for detection of smoking conduct by a driver based on an optimized YOLOv10 object detection model.
The present invention employs a multifaceted approach enabling a more reliable and precise determination of when a driver is smoking while driving. It serves as a valuable tool for improving driver behaviour monitoring, thereby making a significant contribution to road safety and driver health, and reducing smoking-related distractions in vehicles.
Therefore, the present invention addresses these needs with a robust system and method capable of detecting cigarette smoking during driving in real-time, under varying cabin conditions. The present invention analyses real-time video footage from an in-vehicle camera setup and identifies instances where the driver is actively smoking while driving. By integrating multiple verification layers, the present invention provides an intelligent approach to smoking detection in vehicles, offering substantial improvements over existing technologies.
Object of the invention
The main object of the present invention is to disclose a system and method for detection of smoking conduct by a driver.
Another object of the present invention is to provide an object detection model based on YOLOv10 which identifies smoking-related elements, such as cigarettes and smoke, under varying cabin conditions.
Another object of the present invention is to provide a dual-camera device installed in the vehicle, dedicated to capturing video, and a more specialised and effective approach to accurately detect and track the driver's face and hand movements in real-time.
A further object of the present invention is to provide a device setup equipped with a stored database and an advanced machine learning module which monitors both the external driving environment and the internal environment of the vehicle to detect cigarette-to-mouth movements over multiple frames, ensuring that only actual smoking behaviour is identified.
Another object of the present invention is to generate real-time alerts only when active smoking behaviour is confirmed, thereby reducing false alarms and improving system reliability.
A further object of the present invention is to be optimised for high accuracy, real-time processing and minimal false positives, ensuring reliable deployment in commercial and fleet management scenarios.
Another object of the present invention is to provide a robust and reliable smoking detection solution that identifies cigarettes in various orientations and smoke patterns, even under challenging lighting or cabin conditions.
Still another object of the present invention is to provide a multifaceted approach combining a YOLOv10 object detection model, facial landmark tracking, temporal movement analysis and a dual-camera setup; its higher accuracy, real-time processing and robust false positive reduction make it a practical and scalable solution for real-world deployment in fleet management, commercial fleets and driver monitoring systems.
Summary of the Invention
The present invention relates to a system and method for detection of smoking conduct by a driver. It includes a hardware device equipped with a dual-camera setup to monitor and analyse driver smoking behaviour in real-time through a video stream or a live stream. One camera faces the road to capture traffic conditions and the other camera is directed at the driver to analyse smoking conduct in the vehicle. A trained face detection model tracks the driver's face in real-time under conditions such as variable lighting or partial occlusion. A landmark detection model tracks the driver's facial features, namely the eyes, nose, mouth and ears, with particular focus on the mouth. Said model extracts MAR coordinates to monitor mouth position and movements for detecting smoking-related behaviour. Simultaneously, video frames pass through an object detection model based on YOLOv10 to identify cigarettes or smoke in certain frames. The MAR analysis and cigarette/smoke detection outputs are combined and post-processed in a proximity analysis model to detect a cigarette or smoke associated with the driver's mouth, and real-time alerts/notifications are generated to the driver.
Brief Description of the Drawings
Fig. 1 illustrates a flowchart of a system and method for detection of smoking conduct by a driver in a vehicle according to the present invention.
Detailed description of the Invention
Before explaining the present invention in detail, it is to be understood that the invention is not limited in its application. The nature of invention and the manner in which it is performed is clearly described in the specification. The invention has various components and they are clearly described in the following pages of the complete specification. It is to be understood that the phraseology and terminology employed herein is for the purpose of description and not of limitation.
The present disclosure relates to face detection. Face detection is used to find and identify human faces in digital images and video.
Another aspect of the present invention is a landmark detection model that represents an individual's facial features in facial images as multi-dimensional vectors and stores the data of the face images.
A further aspect of the present invention is an object detection model optimized for real-time, high-accuracy detection of small objects.
As used herein, the term “MAR” refers to Mouth Aspect Ratio, a metric that measures the openness of the mouth.
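For illustration, a MAR value can be computed from four mouth landmarks as the ratio of vertical opening to horizontal width. This is a minimal sketch; the point names and the choice of exactly four landmarks are assumptions for illustration and are not specified by the invention, which uses a 98-point landmark scheme:

```python
import math

def _dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_aspect_ratio(top, bottom, left, right):
    """MAR = vertical mouth opening divided by horizontal mouth width.
    A larger value indicates a more open mouth (e.g. while puffing)."""
    return _dist(top, bottom) / _dist(left, right)

# Example: a mouth 2 px open and 4 px wide gives MAR = 0.5
mar = mouth_aspect_ratio((0, 0), (0, 2), (-2, 1), (2, 1))
```

Tracking this ratio frame by frame yields the mouth-movement signal that the later proximity and temporal stages consume.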
As used herein, the term “model” refers to a unique and addressable component of software implemented in hardware, which can be developed and modified independently without disturbing (or affecting only minimally) other modules of the software implemented in hardware.
As used herein, the term “device” refers to a unit of hardware, outside or inside the case or housing, that is capable of providing input, receiving output, or both.
As used herein, the term "database" refers to either a body of data, a relational database management system (RDBMS), or both. The database includes any collection of data, including hierarchical databases, relational databases, flat file databases, object-relational databases, object-oriented databases, and any other structured collection of records or data that is stored in a computer system. The above examples are examples only, and thus are not intended to limit in any way the definition and/or meaning of the term database.
As used herein, the phrase “trained face detection model” refers to a model that has been developed using annotated training data specific to the system's operational environment.
The present invention is a system and method for detection of smoking conduct by a driver, comprising a hardware device equipped with a dual-camera setup to monitor and analyse driver smoking behaviour in real-time through a video stream or a live stream, as shown in Fig. 1. One camera faces the road to capture traffic conditions and the other camera is directed towards the driver to analyse smoking behaviour in the vehicle. Said dual-camera setup is configured with a stored database and a machine learning model. Said machine learning model monitors and analyses the driver's smoking behaviour and facial expressions or mouth movements even under challenging conditions, such as variable lighting or partial occlusion, while driving.
Now, Fig. 1 illustrates a flowchart of a system and method for detection of smoking conduct by a driver. The driver-side camera device accurately detects the driver's face in the real-time video feed through a trained face detection model. Said trained face detection model locates and tracks with precision the driver's face while the driver is engaged in cigarette smoking. The trained face detection model tracks the driver's face in real-time under conditions such as variable lighting, partial occlusion or challenging angles. Further, according to the present invention, the process includes a landmark detection model that identifies and extracts 98 key facial landmarks in real-time, including the locations of the eyes, nose, mouth and ears, from the driver's face detected by the trained face detection model. The landmark detection model specifically detects the mouth region, tracking mouth coordinates and movement continuously.
The landmark detection model extracts MAR (Mouth Aspect Ratio) coordinates to monitor mouth position and movements for detecting smoking conduct. Specifically, it focuses on the mouth area to detect the cigarette or smoking-related gestures, such as puffing or inhaling. The MAR coordinate outputs pass to the next step for post-processing. Simultaneously, to detect the smoking objects, the process employs an object detection model based on YOLOv10. The video feed passes through the YOLOv10 object detection model to identify cigarettes or smoke in the driver's area. Said model is trained on a diverse dataset of cigarette and smoke appearances under various conditions to ensure reliability and accuracy. The performance of the present invention was rigorously evaluated through extensive testing across a diverse dataset encompassing 150 drivers in varying conditions. The results demonstrate impressive accuracy rates, achieving 92% accuracy in cigarette detection and 90% accuracy in smoke pattern recognition. When these detection mechanisms are combined with landmark analysis and proximity detection, the process reaches an overall accuracy of 94% in identifying smoking conduct. Notably, the present invention maintains a false positive rate below 2% while operating in real-time at 30 frames per second, making it highly reliable for practical applications such as driver monitoring systems.
Therefore, the object detection model is optimized for real-time, high-accuracy detection of small objects such as cigarettes and smoke. Said model, trained on the diverse custom dataset, identifies cigarettes in various orientations (e.g., held in the hand or near the mouth) and smoke patterns in different lighting conditions and densities. The object detection model's multi-object detection capability allows it to identify both cigarettes and smoke within the same frame, ensuring a reliable detection pipeline. If cigarettes or smoke are detected near the driver area, the results pass to the next model for post-processing.
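The hand-off described above implies filtering the detector's raw outputs before the proximity stage. The following is a hedged sketch of such post-filtering; the dictionary keys, class names and confidence threshold are illustrative assumptions, not details taken from the specification:

```python
def filter_smoking_detections(detections, conf_thresh=0.5,
                              classes=("cigarette", "smoke")):
    """Keep only confident cigarette/smoke boxes from raw detector output.
    Each detection is assumed to be a dict:
    {"label": str, "conf": float, "box": (x1, y1, x2, y2)}."""
    return [d for d in detections
            if d["label"] in classes and d["conf"] >= conf_thresh]

raw = [
    {"label": "cigarette", "conf": 0.91, "box": (80, 90, 110, 105)},
    {"label": "phone",     "conf": 0.88, "box": (10, 10, 40, 60)},
    {"label": "smoke",     "conf": 0.35, "box": (70, 40, 160, 120)},
]
kept = filter_smoking_detections(raw)  # only the confident cigarette survives
```

Filtering out low-confidence and irrelevant classes at this point keeps everyday objects (phones, pens) from ever reaching the proximity analysis, which helps suppress false positives.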
According to the present invention, the results of the MAR coordinates and the YOLOv10 object detection model are combined in a proximity analysis model to verify proximity and confirm smoking behaviour. Said model determines whether a cigarette or smoke is associated with the driver's mouth, confirming that the driver is smoking. Said model establishes a spatial correlation between detected cigarettes and the driver's mouth landmarks, and also performs temporal analysis to detect consistent cigarette-to-mouth movements over time. Additionally, the presence of smoke near the mouth region strengthens the classification, avoiding false positives and ensuring that smoking conduct is identified only when these criteria are met over a specific duration.
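A minimal sketch of the spatial-plus-temporal check described above follows. The pixel radius, window length and frame-count threshold are all illustrative assumptions, not values taken from the specification:

```python
from collections import deque

def near_mouth(box, mouth, radius_px=60):
    """True if the centre of a detection box (x1, y1, x2, y2) lies within
    radius_px of the mouth landmark point (x, y)."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return abs(cx - mouth[0]) <= radius_px and abs(cy - mouth[1]) <= radius_px

class TemporalVerifier:
    """Confirms smoking only when enough of the recent frames were positive,
    suppressing one-off spurious detections."""
    def __init__(self, window=30, required=15):
        self.history = deque(maxlen=window)
        self.required = required

    def update(self, positive):
        self.history.append(bool(positive))
        return sum(self.history) >= self.required

verifier = TemporalVerifier(window=5, required=3)
states = [verifier.update(near_mouth((90, 90, 110, 110), (100, 100)))
          for _ in range(5)]  # becomes True from the third positive frame
```

Requiring several positive frames within a sliding window is one simple way to realise the "specific duration" criterion, so a cigarette briefly passing near the face does not trigger an alert.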
Therefore, according to the present invention, if cigarette or smoke behaviour is detected near the mouth over a threshold number of frames, the process generates real-time alerts or notifications. If no evidence of smoking behaviour is found, no smoking is detected. The process generates live/real-time alerts to the driver, fleet operators or safety teams for record keeping and action.
The process for detection of smoking conduct by driver comprises:
S11: receiving real-time video footage of the driver's face captured by the camera device while driving the vehicle;
S12: performing face detection on each frame of the video to accurately identify the face area through the face detection model;
S13: detecting the driver's eyes, nose, mouth and other features through the landmark detection model;
S14: extracting MAR (mouth aspect ratio) coordinates of the mouth area from the landmark detection model;
S15: detecting a cigarette or smoke in real-time through the object detection model based on YOLOv10 in certain frames;
S16: combining the MAR results and the object detection results (cigarette/smoke);
S17: analysing the proximity of the cigarette or smoke to the mouth by integrating both results through the proximity analysis model;
S20: generating live/real-time alerts to the driver, fleet operators or safety teams for record keeping and action on the detected cigarette or smoke.
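The steps above can be sketched as a single per-frame processing loop. The model callables, their signatures and all thresholds below are hypothetical stand-ins for the trained models of the invention, shown only to make the data flow concrete:

```python
from collections import deque

def process_stream(frames, detect_face, mouth_point, detect_cigarette,
                   radius_px=60, window=30, required=15):
    """Skeleton of steps S11-S20: per-frame face detection, mouth-landmark
    lookup, cigarette detection, proximity check, temporal confirmation."""
    history = deque(maxlen=window)
    alert_frames = []
    for i, frame in enumerate(frames):          # S11: frame stream
        face = detect_face(frame)               # S12: face detection
        box = detect_cigarette(frame)           # S15: cigarette/smoke box
        near = False
        if face is not None and box is not None:
            mx, my = mouth_point(face)          # S13-S14: mouth coordinates
            cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
            near = (abs(cx - mx) <= radius_px and
                    abs(cy - my) <= radius_px)  # S16-S17: proximity
        history.append(near)
        if sum(history) >= required:            # temporal frame threshold
            alert_frames.append(i)              # S20: raise an alert
    return alert_frames

# Toy run with stub models: every frame shows a cigarette at the mouth,
# so alerts begin once 3 of the last 5 frames are positive.
alerts = process_stream(range(5),
                        detect_face=lambda f: f,
                        mouth_point=lambda face: (100, 100),
                        detect_cigarette=lambda f: (90, 90, 110, 110),
                        window=5, required=3)
```

In a deployment, the stubs would be replaced by the trained face detection, landmark detection and YOLOv10 models, with the loop driven by the live camera feed.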
Finally, the system and method of the present invention classify the behaviour as smoking or non-smoking based on the combined analysis of all inputs. If smoking is detected, real-time alerts are generated to notify the driver and fleet operators. The robustness, real-time operation and high precision of the process make it a reliable solution for reducing distracted driving risks and enhancing road safety. Furthermore, the YOLOv10 model plays a pivotal role in detecting the key smoking-related objects (cigarettes and smoke) in real time. Its ability to operate efficiently in challenging environments, combined with its integration into the overall smoking detection pipeline, ensures accurate and actionable insights for promoting safer driving practices. This multi-layered architecture ensures a reliable and actionable solution for detecting and mitigating smoking behaviour, ultimately contributing to safer driving environments and enhanced road safety.
While various elements of the present invention have been described in detail, it is apparent that modification and adaptation of those elements will occur to those skilled in the art. It is expressly understood, however, that such modifications and adaptations are within the spirit and scope of the present invention as set forth in the following claims.
Claims:
We Claim:
1. A method for detection of smoking conduct by a driver, comprising:
receiving real-time video footage of the driver's face captured by a camera device while driving the vehicle;
performing face detection on each frame of the video to accurately identify the driver's face area through a trained face detection model;
detecting the driver's eyes, nose, mouth and other features through a landmark detection model;
extracting MAR (mouth aspect ratio) coordinates of the mouth area from the landmark detection model;
wherein
detecting a cigarette or smoke in real-time through an object detection model based on YOLOv10 at 30 frames per second;
combining the MAR results and the object detection results (cigarette/smoke);
analysing the proximity of the cigarette or smoke to the mouth by integrating the above extracted results through a proximity analysis model;
generating live/real-time alerts to the driver, fleet operators or safety teams for record keeping and action on the detected cigarette or smoke.
2. The method for detection of smoking conduct by a driver as claimed in claim 1, wherein the landmark detection model extracts MAR (Mouth Aspect Ratio) coordinates to monitor mouth position and movements for detecting smoking conduct by the driver.
3. The method for detection of smoking conduct by a driver as claimed in claim 1, wherein the YOLOv10 object detection model detects smoking objects and is trained on a diverse dataset of cigarette and smoke appearances under various conditions to ensure reliability and accuracy.
4. The method for detection of smoking conduct by a driver as claimed in claim 1, wherein the proximity analysis model establishes a spatial correlation between detected cigarettes and the driver's mouth landmarks to confirm smoking behaviour of the driver.
5. A system for detection of smoking conduct by a driver, comprising:
a camera device (10) that captures image/video footage of the driver's face in real-time;
a trained face detection model that locates and tracks the driver's face from the video footage;
a landmark detection model that detects the driver's eyes, nose, mouth and other facial features, and specifically extracts MAR (mouth aspect ratio) mouth-area coordinates;
an object detection model based on YOLOv10 to identify cigarettes or smoke in the driver's area in real-time;
a proximity analysis model to detect a cigarette or smoke near the mouth by combining the results of both the MAR coordinates and the object detection model.
Dated this 14th day of May 2025
| # | Name | Date |
|---|---|---|
| 1 | 202521046413-STATEMENT OF UNDERTAKING (FORM 3) [14-05-2025(online)].pdf | 2025-05-14 |
| 2 | 202521046413-PROOF OF RIGHT [14-05-2025(online)].pdf | 2025-05-14 |
| 3 | 202521046413-POWER OF AUTHORITY [14-05-2025(online)].pdf | 2025-05-14 |
| 4 | 202521046413-FORM FOR STARTUP [14-05-2025(online)].pdf | 2025-05-14 |
| 5 | 202521046413-FORM FOR SMALL ENTITY(FORM-28) [14-05-2025(online)].pdf | 2025-05-14 |
| 6 | 202521046413-FORM 1 [14-05-2025(online)].pdf | 2025-05-14 |
| 7 | 202521046413-FIGURE OF ABSTRACT [14-05-2025(online)].pdf | 2025-05-14 |
| 8 | 202521046413-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-05-2025(online)].pdf | 2025-05-14 |
| 9 | 202521046413-EVIDENCE FOR REGISTRATION UNDER SSI [14-05-2025(online)].pdf | 2025-05-14 |
| 10 | 202521046413-DRAWINGS [14-05-2025(online)].pdf | 2025-05-14 |
| 11 | 202521046413-DECLARATION OF INVENTORSHIP (FORM 5) [14-05-2025(online)].pdf | 2025-05-14 |
| 12 | 202521046413-COMPLETE SPECIFICATION [14-05-2025(online)].pdf | 2025-05-14 |
| 13 | 202521046413-STARTUP [15-05-2025(online)].pdf | 2025-05-15 |
| 14 | 202521046413-FORM28 [15-05-2025(online)].pdf | 2025-05-15 |
| 15 | 202521046413-FORM-9 [15-05-2025(online)].pdf | 2025-05-15 |
| 16 | 202521046413-FORM 18A [15-05-2025(online)].pdf | 2025-05-15 |
| 17 | Abstract.jpg | 2025-05-29 |
| 18 | 202521046413-FER.pdf | 2025-07-15 |
| 19 | 202521046413-FORM 3 [17-07-2025(online)].pdf | 2025-07-17 |
| 20 | 202521046413-FER_SER_REPLY [01-11-2025(online)].pdf | 2025-11-01 |
| 1 | 202521046413_SearchStrategyNew_E_SearchHistory(2)E_02-07-2025.pdf | |