Abstract: A driver monitoring system (DMS) (102) for a vehicle (100) that accurately identifies an inattentive state of a driver of the vehicle (100) is provided. The DMS (102) includes a plurality of lighting elements (116A-N), an imaging sensor (104) that captures a facial image of the driver, and a lighting elements controller (114). The DMS (102) generates a feature vector corresponding to a face of the driver detected from the captured facial image. The DMS (102) then identifies a specific number of the lighting elements (116A-N) to be operated in the active state for identifying if the driver is in an inattentive state. Further, the DMS (102) configures one or more associated systems (118 and 120) to perform one or more of alerting the driver and automatically controlling selected operations of the vehicle (100) upon determining the driver to be in the inattentive state.
Description: DRIVER ASSISTANCE SYSTEM AND AN ASSOCIATED METHOD
RELATED ART
[0001] Embodiments of the present specification relate generally to an advanced driver assistance system, and more particularly to a driver monitoring system and an associated method for monitoring an inattentive or a drowsy state of a driver of a vehicle.
[0002] Distracted driving remains one of the principal causes of road accidents and has historically been responsible for 20-30% of all road deaths. Such distracted driving often arises from commercial vehicle drivers and other drivers driving vehicles for prolonged durations without proper rest periods, leading to fatigue, inattention, and sleepiness. Drivers may also become drowsy or inattentive when driving the vehicle under the influence of alcohol or drugs, texting or talking on a phone, eating, or suffering from a medical condition such as a stroke or a heart attack while driving the vehicle. Driver drowsiness, fatigue, and/or inattention may thus lead to serious accidents that endanger the lives and safety of people and damage public infrastructure.
[0003] Accordingly, certain present-day vehicles include a driver monitoring system that monitors driver behavior in real-time and generates audio, visual or vibratory alarms when the driver is identified to be inattentive. For example, certain present driver monitoring systems may use a near-infrared (NIR) camera that is capable of capturing clear images in low light conditions for monitoring driver behavior. For example, US Patent No. 10853675B2 describes a driver monitoring system that uses an NIR camera to detect if a driver exhibits an abnormal state. However, the NIR camera is capable of capturing features of a face of the driver clearly only when a distance between the NIR camera deployed within the vehicle and the driver of the vehicle is less than a particular threshold, for example, 1.5 meters. This is because the NIR camera is generally not efficient enough to capture facial images of the driver sufficiently clearly at larger distances, thereby failing to promptly identify driver inattention and provide necessary alerts, which may endanger the lives and safety of people seated within or present in the surroundings of the vehicle.
[0004] Accordingly, there remains a need for an improved driver monitoring system that accurately identifies a level of alertness of drivers of different types of vehicles in all lighting conditions irrespective of a distance between the drivers and cameras that are used to capture images of the drivers.
BRIEF DESCRIPTION
[0005] It is an objective of the present disclosure to provide a driver monitoring system for a vehicle. The driver monitoring system includes a plurality of lighting elements disposed within the vehicle, an imaging sensor adapted to capture a facial image of a driver of the vehicle, and a lighting elements controller coupled to the lighting elements and adapted to switch one or more of the lighting elements between an inactive state and an active state. The driver monitoring system generates a feature vector corresponding to a face of the driver detected from the captured facial image.
[0006] Further, the driver monitoring system identifies a specific number of the lighting elements to be operated in the active state for detecting one or more facial features of the driver with a predetermined clarity by matching the generated feature vector with one of a set of reference feature vectors corresponding to the face of the driver detected from a set of reference facial images captured by the imaging sensor during an initial calibration of the driver monitoring system. Each of the set of reference feature vectors is mapped to a corresponding minimum number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver of the vehicle during the initial calibration. Furthermore, the driver monitoring system configures the imaging sensor to capture one or more facial images of the driver during operation of the vehicle while operating only the identified number of the lighting elements in the active state, and processes the captured facial images to identify if the driver is in an inattentive state. Moreover, the driver monitoring system configures one or more associated systems in the vehicle to perform one or more of alerting the driver and automatically controlling one or more selected operations of the vehicle upon determining the driver to be in the inattentive state.
[0007] The plurality of lighting elements corresponds to a plurality of light emitting diodes. The imaging sensor corresponds to a near infrared camera. The lighting elements controller corresponds to a light emitting diode controller. The light emitting diode controller includes one of a voltage-switch driver, a constant-current driver, and a flash LED driver. The driver monitoring system includes an ambient monitoring system that is coupled to the imaging sensor to identify a prevailing weather condition from one or more images of the surroundings of the vehicle captured by the imaging sensor. The ambient monitoring system includes one or more of a global positioning system that identifies a current location of the vehicle and a digital clock that identifies a particular time during a day when the imaging sensor captures the facial image of the driver. The one or more associated systems includes a driver alert unit. The driver alert unit includes one or more of a siren that generates an audio alert, an infotainment system that generates a visual alert, and a vibration sensor. The vibration sensor generates a haptic feedback on one or more of a steering wheel, a seat, and a selected surface of the vehicle that is in contact with the driver when the driver is determined to be in the inattentive state.
[0008] The one or more associated systems includes a vehicle control unit. The vehicle control unit corresponds to one or more electronic control units deployed in the vehicle. The vehicle control unit automatically controls one or more of a throttle, a brake, and a steering wheel of the vehicle when the driver is identified to be in the inattentive state to perform one or more of automatically reducing a speed of the vehicle, navigating the vehicle, and stopping the vehicle in a safe area. The driver monitoring system corresponds to one or more of an adaptive front lighting system and a vehicle anti-theft system. The driver monitoring system includes one or more of a calibration database and a vehicle cloud database that store the set of reference feature vectors generated during the initial calibration of the driver monitoring system, and a learning subsystem. The learning subsystem is communicatively coupled to one or more of the calibration database and the vehicle cloud database. The learning subsystem is iteratively trained to interpolate patterns from the set of reference feature vectors stored in one or more of the calibration database and the vehicle cloud database for mapping different minimum numbers of the lighting elements to different reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor in different types of vehicles.
[0009] It is another objective of the present disclosure to provide a method for monitoring a state of a driver of a vehicle. The method includes capturing a facial image of the driver of the vehicle using an imaging sensor when a predefined number of lighting elements selected from a plurality of lighting elements disposed within the vehicle are operated in an active state. Further, the method includes determining a bounding box enclosing a face of the driver in the captured facial image, determining a histogram of intensity levels of pixels in the determined bounding box, and generating a feature vector that includes values indicating the determined histogram of intensity levels of pixels in the determined bounding box. Furthermore, the method includes identifying a specific number of the lighting elements to be operated in the active state for detecting one or more facial features of the driver with a predetermined clarity by matching the generated feature vector with one of a set of reference feature vectors. Each of the set of reference feature vectors includes values indicating a corresponding reference histogram determined from a reference facial image captured by the imaging sensor during an initial calibration of a driver monitoring system.
[0010] Each of the set of reference feature vectors is mapped to a corresponding minimum number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver of the vehicle during the initial calibration. Moreover, the method includes switching the identified number of the lighting elements from an inactive state to the active state by a lighting elements controller. In addition, the method includes capturing one or more facial images of the driver by the imaging sensor during operation of the vehicle while operating only the identified number of the lighting elements in the active state, and processing the captured facial images to identify if the driver of the vehicle is in an inattentive state. The method further includes performing one or more of alerting the driver and automatically controlling one or more selected operations of the vehicle upon determining the driver to be in the inattentive state.
[0011] The initial calibration of the driver monitoring system includes capturing one or more reference facial images of a plurality of drivers operating the vehicle in different ambient lighting conditions and when seated at different distances from the imaging sensor. The imaging sensor captures each of the reference facial images when the predefined number of lighting elements selected from the plurality of lighting elements disposed within the vehicle are operated in the active state. The method further includes determining a corresponding size of each corresponding bounding box suitable for enclosing a corresponding face of each of the plurality of drivers in the captured facial images, generating the corresponding bounding box of the corresponding size to enclose the corresponding face of each of the plurality of drivers in the captured facial images, and determining a corresponding reference histogram of intensity levels of pixels in each corresponding bounding box. Furthermore, the method includes determining a corresponding distance between the imaging sensor and each of the plurality of drivers operating the vehicle when the imaging sensor captures each of the reference facial images. Moreover, the method includes identifying a corresponding time during a day, a corresponding location of the vehicle, and a corresponding prevailing weather condition when the imaging sensor captures each of the reference facial images.
[0012] In addition, the method includes generating a corresponding reference feature vector for each of the reference facial images based on the corresponding size of the corresponding bounding box in that reference facial image, the corresponding reference histogram, the corresponding distance between the imaging sensor and the driver, the corresponding time during the day, the corresponding location of the vehicle, and the corresponding prevailing weather condition. The method further includes mapping the corresponding reference feature vector to the minimum number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver of the vehicle with the predetermined clarity.
[0013] Mapping the corresponding reference feature vector to the minimum number of the lighting elements includes iteratively switching a designated number of the plurality of lighting elements operating in the active state to the inactive state after generating the corresponding reference feature vector for a reference facial image. Further, the method includes recapturing the reference facial image of the driver after each iteration of switching the designated number of the lighting elements to the inactive state, and determining if the one or more of the facial features of the driver are detectable with the predetermined clarity from the reference facial image recaptured during each iteration. Furthermore, the method includes identifying a number of the lighting elements that are operated in the active state in a particular iteration during which the one or more of the facial features of the driver are determined to be detectable with the predetermined clarity as the minimum number of the lighting elements that are to be operated in the active state and mapping the identified minimum number of the lighting elements to the corresponding reference feature vector.
[0014] Identifying the specific number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver of the vehicle with the predetermined clarity during real-time operation of the vehicle includes determining a size of the bounding box suitable for enclosing the face of the driver. Further, the method includes determining a distance between the imaging sensor and the driver, and identifying a particular time during a day, a current location of the vehicle, and a prevailing weather condition by an ambient monitoring system in the vehicle when the imaging sensor captures the facial image of the driver. Furthermore, the method includes updating the feature vector based on the determined size of the bounding box, the determined distance between the imaging sensor and the driver of the vehicle, the identified time during the day, the identified current location of the vehicle, and the identified prevailing weather condition. In addition, the method includes identifying a matching feature vector from the set of reference feature vectors that matches the updated feature vector. The method further includes identifying the minimum number of the lighting elements that is mapped to the matching reference feature vector as the specific number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver with the predetermined clarity.
[0015] Determining if the one or more of the facial features of the driver are detectable with the predetermined clarity includes recapturing the facial image of the driver while operating the predefined number of lighting elements selected from the plurality of lighting elements in the active state when a clarity of the one or more facial features of the driver detected from the facial image is determined to be lesser than the predetermined clarity. Further, the method includes generating a new feature vector for the recaptured facial image. The new feature vector includes one or more of a size of a bounding box enclosing the face of the driver in the recaptured facial image, and a histogram of intensity levels of pixels determined from the bounding box in the recaptured facial image. Further, the new feature vector includes a distance determined between the driver and the imaging sensor when the imaging sensor recaptures the facial image of the driver, a particular time during the day when the imaging sensor recaptures the facial image, and a current location of the vehicle when the imaging sensor recaptures the facial image. In addition, the new feature vector includes a prevailing weather condition when the imaging sensor recaptures the facial image. Furthermore, the method includes identifying a matching feature vector from the set of reference feature vectors stored in the reference table that matches the generated new feature vector. Moreover, the method includes identifying the minimum number of the lighting elements that is mapped to the matching reference feature vector in the reference table as a new specific number of the lighting elements to be operated in the active state for detecting the one or more facial features of the driver with the predetermined clarity.
[0016] The initial calibration of the driver monitoring system includes iteratively training a learning subsystem communicatively coupled to the driver monitoring system to interpolate patterns from the set of reference feature vectors stored in one or more of a calibration database and a vehicle cloud database for mapping different minimum numbers of the lighting elements to different reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor in different types of vehicles. Capturing the one or more facial images of the driver during operation of the vehicle includes periodically updating a number of the lighting elements operating in the active state upon detecting a change in one or more of the determined size of the bounding box enclosing the face of the driver, the determined histogram, the determined distance between the imaging sensor and the driver of the vehicle, the identified time during the day, the identified current location of the vehicle, and the identified prevailing weather condition.
BRIEF DESCRIPTION OF DRAWINGS
[0017] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0018] FIG. 1 illustrates a block diagram depicting an exemplary driver monitoring system that monitors driver behavior, in accordance with aspects of the present disclosure;
[0019] FIGS. 2A-B illustrate a flow diagram depicting an exemplary method for calibrating the driver monitoring system of FIG. 1 for efficiently controlling operation of associated lighting elements for accurately monitoring driver behavior, in accordance with aspects of the present disclosure;
[0020] FIG. 3 illustrates an exemplary graphical representation depicting a reference histogram of intensity levels of pixels determined during an exemplary calibration of the driver monitoring system of FIG. 1, in accordance with aspects of the present disclosure;
[0021] FIGS. 4A-B illustrate a flow diagram depicting an exemplary method for identifying a minimum number of lighting elements to be operated in an active state for clearly capturing facial images during calibration of the driver monitoring system of FIG. 1, in accordance with aspects of the present disclosure; and
[0022] FIGS. 5A-C illustrate a flow diagram depicting an exemplary method for monitoring driver behavior in real-time using the driver monitoring system of FIG. 1, in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0023] The following description presents an exemplary advanced driver assistance system for monitoring driver behavior. Particularly, embodiments described herein disclose a driver monitoring system that efficiently controls operation of associated lighting elements for accurately monitoring a level of attention and alertness of a driver of a vehicle. As noted previously, a driver of a vehicle may lose attention, for example, when the driver is fatigued, drives the vehicle under the influence of alcohol or drugs, texts or talks on a phone while driving the vehicle, and/or suffers from a medical condition such as a stroke and a heart attack while driving the vehicle. The present driver monitoring system monitors such driver behavior, and upon detecting the driver behavior to be undesirable, provides necessary alerts to return the driver to an attentive state and/or automatically controls operations of the vehicle to prevent collisions and other mishaps.
[0024] As noted previously, conventional driver monitoring systems use NIR cameras for capturing facial images of the drivers for monitoring driver behavior. However, as previously noted, conventional NIR cameras are incapable of capturing facial images with sufficient clarity at large distances despite employing a plurality of lighting elements that are operated in the active state to illuminate the drivers and vehicle interiors. Further, use of conventional NIR cameras, especially in larger vehicles such as trucks, commercial vehicles, trains, and off-road vehicles including vehicles used for farming, mining, and construction industries, leads to inaccurate identification of driver distraction when a distance between the drivers and the NIR cameras mounted within the vehicles typically exceeds 1.5 meters.
[0025] Moreover, it may be noted that approaches that continuously illuminate the face of the driver by operating all of the lighting elements in the active state to capture better-quality facial images, in an attempt to address the aforementioned issues noted with conventional NIR cameras, create additional safety issues. In particular, continuously illuminating the face of the driver with all of the lighting elements often creates a glare in the eyes of the driver that causes the driver to lose depth perception and peripheral vision, while dilating his or her pupils. This glare leads to blurriness or temporary blindness, often causing accidents. Further, operating all of the lighting elements in the active state leads to increased power consumption, quick draining of batteries supplying power to the lighting elements, wear and tear of the lighting elements, and an increased cost towards maintenance of the lighting elements and batteries.
[0026] In order to overcome the aforementioned issues, the present driver monitoring system includes an NIR camera and a plurality of lighting elements deployed within a cabin space of a vehicle. Particularly, one or more of the lighting elements are selectively configured to illuminate a face of a driver to allow the NIR camera to capture facial images and facial features of the driver with sufficient clarity irrespective of the distance between the driver and the NIR camera mounted within the vehicle. Additionally, unlike conventional systems that deploy all the lighting elements in active state during imaging, the present driver monitoring system identifies a specific number of lighting elements to be operated in the active state suited for different prevailing imaging and ambient lighting conditions. The specific number of lighting elements to be operated in the active state for capturing the facial features of the driver with sufficient clarity in each of the different lighting conditions without creating undesirable glare is identified via an initial calibration of the driver monitoring system. Illuminating only the specific number of the lighting elements, as described herein, reduces power consumption, reduces wear and tear of the lighting elements, increases life of the batteries supplying power to the lighting elements, and reduces cost incurred towards maintenance of the lighting elements and batteries.
[0027] It may be noted that different embodiments of the present driver monitoring system may be used in different application areas. For example, the driver monitoring system may be used in an adaptive front lighting system in a vehicle to dynamically control a specific number of lighting elements operated in the active state such that a driver of the vehicle perceives the road clearly, while a driver of an oncoming vehicle does not experience a glare. In another example, the driver monitoring system may be used in a vehicle anti-theft system that is capable of clearly capturing facial images and facial features of a driver starting a vehicle in all ambient lighting conditions to identify if the driver is a previously registered driver, and take corrective action such as safely disabling the vehicle upon failing to identify the driver. For clarity, an embodiment of the present driver monitoring system is described herein in greater detail with reference to a driver assistance system configured for monitoring an attention level of a driver of a vehicle and initiating one or more corrective actions, as and when needed.
[0028] FIG. 1 illustrates a block diagram depicting a vehicle (100) that includes an exemplary driver monitoring system (102) for monitoring behavior of a driver of the vehicle (100). To that end, the driver monitoring system (102) includes an ambient monitoring system (108) that includes an imaging sensor (104), a global positioning system (GPS) (110), and a digital clock (112). In certain embodiments, the imaging sensor (104) may be an independent unit that is located within a cabin space of the vehicle (100) and is operatively coupled to the ambient monitoring system (108). In certain embodiments, the driver monitoring system (102) further includes a lighting elements controller (114), a plurality of lighting elements (116A-N), a driver alert unit (118), and a vehicle control unit (120) that are communicatively coupled to one or more of the other components in the driver monitoring system (102) via a communications link (122). Examples of the communications link (122) include an Ethernet network, a camera serial interface, a controller area network, and a FlexRay network.
[0029] In one embodiment, the driver monitoring system (102) may be deployed as part of one or more electronic control units (ECUs) such as an advanced driver assistance system (ADAS) ECU and/or a cockpit ECU of the vehicle (100). Additionally, the driver monitoring system (DMS) (102), for example, may also include one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field programmable gate arrays, integrated circuits, systems on chips, and/or other suitable computing devices to monitor driver behavior and initiate a designated corrective action. Accordingly, certain operations of the driver monitoring system (102) may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer.
[0030] As previously noted, the present driver monitoring system (102) uses the associated components to accurately and cost-efficiently monitor driver behavior in different imaging conditions by selectively deploying only a subset of the lighting elements (116A-N) in the active state. In particular, the driver monitoring system (102) identifies the prevailing imaging conditions, for example, by detecting a level of ambient lighting and a distance between the driver and the imaging sensor (104). To that end, the driver monitoring system (102) configures the imaging sensor (104) that is deployed within a cabin space of the vehicle (100) to identify a distance between the driver seated in a driver seat of the vehicle (100) and the imaging sensor (104) when the driver starts the vehicle (100). Though the present driver monitoring system (102) uses the imaging sensor (104) to identify the distance between the driver and the imaging sensor (104), it is to be understood that the driver monitoring system (102) may use a different ranging sensor for identifying the distance between the driver and the imaging sensor (104). Examples of such a ranging sensor include an ultrasonic sensor, an infrared distance sensor, a light detection and ranging (LIDAR) sensor, a time-of-flight distance sensor, and a radio detection and ranging (RADAR) sensor.
[0031] Further, the GPS (110) determines a current location of the vehicle (100), and the digital clock (112) identifies a particular time of the day when the driver starts the vehicle (100). In addition, the driver monitoring system (102) identifies a weather condition prevailing in the surroundings of the vehicle (100) when the driver starts the vehicle (100), for example, using one or more images captured by the imaging sensor (104). In addition, the lighting elements controller (114) operates a specific number of the lighting elements (116A-N) in the vehicle (100) in an active state to sufficiently illuminate the cabin space and a face of the driver when the driver starts the vehicle (100). Examples of the lighting elements (116A-N) include a plurality of light emitting diodes (LEDs) (116A-N) and a cabin light deployed within a cabin space of the vehicle (100). For simplicity, the lighting elements (116A-N) are subsequently described as corresponding to LEDs (116A-N). Further, examples of the lighting elements controller (114) that controls operations of the LEDs (116A-N) by switching the LEDs (116A-N) between an active state and an inactive state include a voltage-switch driver, a constant-current driver, and a flash LED driver.
[0032] In certain embodiments, the lighting elements controller (114) operates either a predefined default number of LEDs (116A-N) or all of the LEDs (116A-N) when the vehicle (100) is first initialized. Subsequently, the driver monitoring system (102) is adapted to identify the specific number of the LEDs (116A-N) to be operated in the active state to capture facial features of the driver with predetermined clarity without creating a glare based on an initial calibration of the driver monitoring system (102), as described in detail with reference to FIGS. 5A-C.
[0033] Particularly, the lighting elements controller (114) controls operations of the LEDs (116A-N) deployed within the cabin space of the vehicle (100) to activate only the specific number of the LEDs (116A-N) for capturing one or more facial images and facial features of the driver with the predetermined clarity using the imaging sensor (104). Further, the driver monitoring system (102) processes the captured facial images to monitor an attention level of the driver of the vehicle (100), as described in detail with reference to FIGS. 5A-C. For example, the driver monitoring system (102) processes the captured facial images to identify signs of driver distraction such as itchy eyes, neck stiffness, back pain, yawning, frequent position changes, and lack of appropriate attention to the environment such as road signs and pedestrians. In certain embodiments, the driver monitoring system (102) may also monitor vehicle signals to identify and/or further confirm lack of driver attention by identifying abnormal vehicle velocity changes, steering wheel motion, lateral position, or lane changes.
[0034] In one embodiment, the driver monitoring system (102) configures the driver alert unit (118) to generate one or more of an audio, visual, and haptic alert to the driver in order to revert the driver back to an attentive state upon identifying the driver to be distracted. For example, the driver alert unit (118) includes a siren deployed within the vehicle (100) that generates and provides an audio alert when the driver is identified to be in the inattentive state. In another example, the driver alert unit (118) includes an infotainment system that displays a visual alert on an associated display device, and/or a vibration sensor that is integrated within a seat and/or a steering wheel of the vehicle (100). The vibration sensor generates and provides a haptic alert when the driver monitoring system (102) identifies that the driver is in the inattentive state.
[0035] In certain embodiments, the driver monitoring system (102) configures the vehicle control unit (120) to automatically control one or more operations of the vehicle (100) when the driver does not revert to an attentive state even after providing one or more of the audio, visual, and haptic alerts to the driver. Specifically, the vehicle control unit (120) controls operations of the vehicle (100) by controlling an associated throttle, brake, and/or steering wheel when the driver is identified to be in the inattentive state. For example, the vehicle control unit (120) automatically reduces a speed of the vehicle (100) and navigates and stops the vehicle (100) in a safe area such as at a side of a road. Automatically initiating such corrective actions by the driver monitoring system (102) helps in preventing collision of the vehicle (100) with pedestrians, other vehicles and surrounding objects in the path of the vehicle (100).
[0036] The driver monitoring system (102), thus, ensures the safety and health of the driver, passengers, and surrounding infrastructure by accurately identifying driver inattentiveness and timely activating the driver alert unit (118) and the vehicle control unit (120). In particular, the driver monitoring system (102) accurately identifies driver inattentiveness irrespective of the varying imaging conditions encountered by the imaging sensor (104) during operation of the vehicle (100) in different driving conditions by virtue of a robust initial calibration. An exemplary method for calibrating the driver monitoring system (102) for accurately identifying driver inattentiveness irrespective of the varying imaging conditions is described in detail with reference to FIGS. 2A-B.
[0037] FIGS. 2A-B illustrate a flow diagram (200) depicting an exemplary method for calibrating the driver monitoring system (102) for identifying driver behavior during actual operation of the vehicle (100). The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the claimed scope of the subject matter described herein.
[0038] According to aspects of the present disclosure, the driver monitoring system (102) is calibrated prior to use during actual operation of the vehicle (100) in the real world. For example, each instance of the driver monitoring system (102) may be calibrated independently post installation in the vehicle (100) during manufacturing of the vehicle (100) in a factory. Alternatively, multiple instances of the driver monitoring system (102) to be installed in multiple vehicles of the same model may be initialized with similar calibration parameters based on a single instance of calibration performed in a single vehicle. In certain further embodiments, each instance of the driver monitoring system (102) installed in each vehicle (100) may be calibrated independently in a service center to suit and address common imaging conditions prevalent in surrounding locations pre- or post-delivery of the vehicle (100) to a customer.
[0039] In certain embodiments, calibration of the driver monitoring system (102) entails determining a minimum number of the LEDs (116A-N) to be operated in the active state for capturing facial features of a driver of the vehicle (100) with a predetermined clarity without creating a glare in the eyes of the driver. The minimum number of the LEDs (116A-N) to be operated in the active state at any particular instant of time varies based on different parameters such as ambient lighting conditions, a distance between the driver and the imaging sensor (104), a particular time of the day, a current location of the vehicle (100), and/or a prevailing weather condition. During a calibration stage, the driver monitoring system (102) determines the minimum number of the LEDs (116A-N) to be operated in the active state for different combinations of these parameters.
[0040] Accordingly, the method begins at step (202), where the imaging sensor (104) in the vehicle (100) captures one or more reference facial images of different drivers operating the vehicle (100) in different ambient lighting conditions and at different seating distances from the imaging sensor (104) deployed within the vehicle (100). In one embodiment, the drivers operating the vehicle (100) belong to different demographic groups such as different age groups, genders, body habitus, and ethnicities. Further, the different ambient lighting conditions include lighting conditions prevalent during daytime, noontime, evening-time, nighttime, during rains, and/or during navigation of the vehicle (100) via tunnels, under bridges, within parking structures, and nearby trees.
[0041] At step (204), the driver monitoring system (102) detects faces of the drivers from the reference facial images. At step (206), the driver monitoring system (102) determines sizes of bounding boxes suitable for enclosing the faces of the drivers detected from the reference facial images. At step (208), the driver monitoring system (102) generates the bounding boxes of the determined sizes around the faces of the drivers detected from the reference facial images. In one embodiment, the driver monitoring system (102) determines sizes of the bounding boxes and generates the bounding boxes using one or more techniques including a histogram of oriented gradients (HOG) feature extraction technique, a Haar feature extraction technique, a support vector machine (SVM) technique, and a convolutional neural network (CNN) technique.
[0042] For example, the driver monitoring system (102) determines sizes of first and second bounding boxes enclosing the faces of first and second drivers captured in first and second reference facial images as 10 by 10 pixels and 10 by 20 pixels, respectively. In the previously noted example, the driver monitoring system (102) generates the first and second bounding boxes of the determined sizes in order to infer the lighting conditions prevailing in the surroundings of the vehicle (100) based on intensity levels of different pixels in the first and second bounding boxes. For instance, the intensity levels of pixels in the first bounding box generated around the face of the first driver captured in a daytime lighting condition will be different from the intensity levels of pixels in the second bounding box generated around the face of the second driver captured in a nighttime lighting condition. The driver monitoring system (102) determines the intensity levels of pixels in the first and second bounding boxes, and correlates the determined pixel intensities to ambient lighting conditions in order to determine the minimum number of the LEDs (116A-N) to be operated in the active state in those ambient lighting conditions, as described in detail with reference to FIGS. 4A-B.
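By way of a non-limiting illustration, the face detection and bounding-box determination of steps (204) to (208) may be sketched as follows, assuming an OpenCV Haar cascade is used as the face detector (any of the HOG, SVM, or CNN techniques noted above may be substituted); the function name and detector parameters are illustrative assumptions rather than prescribed values.

```python
import cv2

# Haar cascade face detector bundled with OpenCV (illustrative choice; HOG,
# SVM, or CNN based detectors mentioned above may be used instead).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_bounding_box(facial_image_bgr):
    """Return (x, y, width, height) of the largest detected face, or None."""
    gray = cv2.cvtColor(facial_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # The largest box is assumed to enclose the driver's face; its width and
    # height give the bounding-box size used later in the feature vector.
    return tuple(max(faces, key=lambda box: box[2] * box[3]))
```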
[0043] At step (210), the driver monitoring system (102) determines a corresponding reference histogram of intensity levels of pixels within each generated bounding box in each of the reference facial images. To that end, the driver monitoring system (102) groups the pixels within each bounding box, based on their associated pixel intensities, into a specified number of pixel intensity ranges. For example, in one embodiment, the driver monitoring system (102) groups the pixels within a specific bounding box of 10 by 10 pixels enclosing the face in a reference facial image into a first pixel intensity range, a second pixel intensity range, a third pixel intensity range, and a fourth pixel intensity range corresponding to 0-64, 65-128, 129-198, and 199-255, respectively. In one embodiment, the values associated with the first, second, third, and fourth pixel intensity ranges are predetermined based on intensity values of pixels that generally vary between 0 and 255. Further, the driver monitoring system (102) groups the pixels into one or more of the four different groups that are predetermined to indicate a change in ambient lighting conditions to the driver monitoring system (102).
[0044] For instance, when the bounding box corresponds to 10 by 10 pixels, that is, includes 100 pixels, the driver monitoring system (102) identifies that 50 pixels have pixel intensities within the first pixel intensity range (302) of 0-64, and 30 pixels have pixel intensities within the second pixel intensity range (304) of 65-128. Further, the driver monitoring system (102) identifies that 15 pixels have corresponding pixel intensities within the third pixel intensity range (306) of 129-198, and the remaining 5 pixels have pixel intensities within the fourth pixel intensity range (308) of 199-255.
[0045] In this example, the driver monitoring system (102) determines the reference histogram of intensity levels of pixels in the specific bounding box as being indicative of 50, 30, 15, and 5 pixels, as depicted in FIG. 3. In one embodiment, an x-axis (310) of the determined reference histogram represents the first, second, third, and fourth pixel intensity ranges (302, 304, 306, and 308). Similarly, a y-axis (312) of the determined reference histogram represents a number of pixels grouped into each of the first, second, third, and fourth pixel intensity ranges (302, 304, 306, and 308). It is to be understood that the driver monitoring system (102) similarly determines the corresponding reference histogram of intensity levels of pixels for each of the other bounding boxes in the reference facial images captured in different ambient lighting conditions using the imaging sensor (104).
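By way of a non-limiting illustration, the histogram computation of step (210) may be sketched as follows, assuming the bounding box has already been located in a grayscale image; the bin edges reproduce the four pixel intensity ranges of the example above, and the function name is an illustrative assumption.

```python
import numpy as np

# Bin edges reproducing the four pixel intensity ranges 0-64, 65-128,
# 129-198, and 199-255 described above.
INTENSITY_BIN_EDGES = [0, 65, 129, 199, 256]

def reference_histogram(gray_image, box):
    """Count the pixels of the bounding box falling into each intensity range."""
    x, y, w, h = box
    face_pixels = gray_image[y:y + h, x:x + w].ravel()
    counts, _ = np.histogram(face_pixels, bins=INTENSITY_BIN_EDGES)
    return counts  # e.g. array([50, 30, 15, 5]) for the 10 by 10 pixel example
```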
[0046] In certain embodiments, the determined reference histogram is indicative of an ambient lighting condition prevailing in the surroundings of the vehicle (100). For example, when a reference facial image is captured in an ambient lighting condition that is dark or corresponds to a nighttime lighting condition, a majority of pixels in the associated bounding box may include pixel intensities within the range of 0-64. However, when the reference facial image is captured in a different ambient lighting condition that is bright or corresponds to a daytime lighting condition, a majority of pixels in the associated bounding box may include pixel intensities within the range of 199-255. Thus, the driver monitoring system (102) determines the reference histogram of intensity values of pixels in the bounding box to identify the ambient lighting condition prevailing in the surroundings of the vehicle (100). Further, in certain embodiments, the driver monitoring system (102) determines the minimum number of the LEDs (116A-N) to be operated in the active state based on the identified ambient lighting condition such as a lighting condition prevalent during daytime, noontime, evening-time, or nighttime.
[0047] At step (212), the imaging sensor (104) identifies corresponding distances between the imaging sensor (104) and the drivers of the vehicle (100) when the imaging sensor (104) captures the reference facial images. In certain embodiments, the minimum number of the LEDs (116A-N) to be operated in the active state varies with a distance between the imaging sensor (104) and a driver of the vehicle (100). For example, the minimum number of the LEDs (116A-N) to be operated in the active state for capturing facial features of the driver with a predetermined clarity may be 6 when the driver is seated in proximity to the imaging sensor (104) at a distance of 0.5 meter. However, when the driver is seated away from the imaging sensor (104) at a distance of 1 meter, a comparatively higher number of the LEDs (116A-N) is to be operated in the active state for sufficiently illuminating the driver and for clearly capturing facial features of the driver with the predetermined clarity. The imaging sensor (104) identifies the corresponding distances between the imaging sensor (104) and the drivers of the vehicle (100) in order to enable the driver monitoring system (102) to determine the minimum number of the LEDs (116A-N) to be operated in the active state based on the corresponding distances identified by the imaging sensor (104).
[0048] Similarly, at step (214), the ambient monitoring system (108) determines a particular time of the day, a current location of the vehicle (100), and a prevailing weather condition when the imaging sensor (104) captures each of the reference facial images. In certain embodiments, the lighting condition prevailing in the surroundings of the vehicle (100) varies with one or more parameters including the particular time of the day, the current location of the vehicle (100), and the prevailing weather condition. For example, a first lighting condition prevailing at 10 AM in India when it is raining will be different from a second lighting condition prevailing at 4 PM in India in bright sunlight. Accordingly, the minimum number of the LEDs (116A-N) to be operated in the active state in these two different lighting conditions varies. The ambient monitoring system (108) determines these parameters and provides them as inputs to the driver monitoring system (102) in order to determine the minimum number of the LEDs (116A-N) to be operated in the active state based on the determined particular time of the day, current location of the vehicle (100), and prevailing weather condition.
[0049] At step (216), the driver monitoring system (102) generates a corresponding feature vector for each of the reference facial images based on a size of a corresponding bounding box, a corresponding reference histogram, a corresponding distance between the imaging sensor (104) and a driver of the vehicle (100), and a corresponding particular time of the day, current location of the vehicle (100) and prevailing weather condition. At step (218), the driver monitoring system (102) stores the corresponding feature vector generated for each of the reference facial images as a stored reference correlation in a reference table (130) in an associated calibration database (124). An example of the reference table (130) is presented herein with reference to Table 1.
[0050] Table 1 – Reference table (130) storing exemplary feature vectors generated for different reference facial images
Parameter        | Driver 1      | Driver 2       | Driver 3     | Driver N
RFI              | 1             | 2              | 3            | N
FV               | 1             | 2              | 3            | N
BBS              | 10*10         | 10*20          | 10*10        | 10*10
RHILP            | 50, 30, 15, 5 | 10, 30, 62, 98 | 70, 20, 5, 5 | 45, 25, 20, 10
DIS              | 2 meters      | 3 meters       | 2.5 meters   | 1.2 meters
PTOD             | 10 AM         | 10 AM          | 10 PM        | 6 PM
Current location | 12° N, 77° E  | 13° N, 80° E   | 8° N, 77° E  | 20° N, 72.8° E
PWC              | Raining       | Bright Sun     | Snowing      | Cloudy
Min no of LEDs   | 12            | 2              | 10           | 8
[0051] In Table 1, ‘RFI’ corresponds to the reference facial images, ‘FV’ corresponds to feature vectors generated for corresponding reference facial images, ‘BBS’ corresponds to sizes of the bounding boxes in the reference facial images, and ‘RHILP’ corresponds to a reference histogram of intensity level of pixels determined from each of the bounding boxes. Further, ‘DIS’ corresponds to a distance between the imaging sensor (104) and a particular driver when the imaging sensor (104) captures each of the reference facial images, and ‘PTOD’ corresponds to a particular time of the day when the imaging sensor (104) captures each of the reference facial images. Moreover, ‘PWC’ corresponds to the prevailing weather condition, and ‘min no of LEDs’ corresponds to a minimum number of the LEDs (116A-N) to be operated in the active state.
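By way of a non-limiting illustration, each column of Table 1 may be stored as a record of the following form in the reference table (130); the class and field names are illustrative assumptions mirroring the abbreviations explained above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ReferenceRecord:
    """One entry of the reference table (130) produced during calibration."""
    bounding_box_size: Tuple[int, int]    # BBS, e.g. (10, 10)
    reference_histogram: Tuple[int, ...]  # RHILP, e.g. (50, 30, 15, 5)
    driver_distance_m: float              # DIS, in meters
    time_of_day: str                      # PTOD, e.g. "10 AM"
    location: Tuple[float, float]         # current location (latitude, longitude)
    weather_condition: str                # PWC, e.g. "Raining"
    min_active_leds: int                  # minimum number of LEDs in the active state

# Example record corresponding to the first column of Table 1.
record_1 = ReferenceRecord((10, 10), (50, 30, 15, 5), 2.0, "10 AM",
                           (12.0, 77.0), "Raining", 12)
```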
[0052] At step (220), the driver monitoring system (102) maps the corresponding feature vector generated for each of the reference facial images to a minimum number of the LEDs (116A-N) to be operated in the active state for clearly capturing facial features of the drivers of the vehicle (100) without creating a glare in the eyes of the drivers. An exemplary method by which the driver monitoring system (102) maps a particular feature vector to the minimum number of the LEDs (116A-N) is described in detail with reference to FIGS. 4A-B.
[0053] FIGS. 4A-B illustrate a flow diagram (400) depicting an exemplary method for identifying a minimum number of the LEDs (116A-N) to be operated in an active state for clearly capturing facial features of a driver during calibration of the driver monitoring system (102) of FIG. 1. At step (402), the lighting elements controller (114) activates a predefined number of the LEDs (116A-N) deployed within a cabin space of the vehicle (100) for initiating the calibration process. In certain embodiments, the predefined number of the LEDs (116A-N) corresponds to all of, for example, 16 LEDs (116A-N) in the vehicle (100). Accordingly, the lighting elements controller (114) activates all 16 LEDs in the vehicle (100) when the driver starts the vehicle (100). At step (404), the imaging sensor (104) captures a reference facial image of the driver seated at a particular distance from the imaging sensor (104) operating the vehicle (100) in a particular ambient lighting condition prevailing in the surroundings of the vehicle (100) with all the LEDs (116A-N) operating in the active state.
[0054] At step (406), the driver monitoring system (102) identifies if facial features of interest of the driver are detectable with predetermined clarity from the reference facial image. In certain embodiments, the driver monitoring system (102) uses one or more techniques such as Manhattan distance matching technique and a Euclidean distance matching technique to identify if the facial features of interest are detectable with the predetermined clarity from the reference facial image. Examples of such facial features of interest include one or more of an extent of eye-closure, eye gazing direction, eye blinking movement, head position, and head orientation of the driver. As used herein, the term “predetermined clarity” refers to a predetermined minimum image quality value determined for a reference facial image based on one or more selected image quality attributes having values within corresponding defined ranges. These image quality attributes, for example, may include one or more of resolution, sharpness, noise, dynamic range, contrast, distortion, artefacts, uniformity, color accuracy, chromatic aberration, occlusion, and blur. The driver monitoring system (102), for example, identifies that the facial features of interest of the driver are detectable with predetermined clarity when values of the selected image quality attributes such as image resolution, sharpness, occlusion and blur in the reference facial image are determined to be within their corresponding defined range of values. In one embodiment, the specific ranges for different image quality attributes used for determining clarity, for example, are selected during calibration to allow the driver monitoring system (102) to quantitatively measure an extent of eye-closure, eye gazing direction, eye blinking movement, head position, and head orientation of the driver from the reference facial image. The driver monitoring system (102) subsequently uses the identified facial features of interest having predetermined clarity for quantitatively measuring one or more parameters that indicate an alertness level of the driver from the reference facial image. Alternatively, when the driver monitoring system (102) identifies that the facial features of interest are not detectable from the reference facial image with the predetermined clarity, the driver alert unit (118) notifies the driver of the vehicle (100) that the driver monitoring system (102) may not accurately monitor an inattentive state of the driver and the processing reverts to step (404).
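By way of a non-limiting illustration, one such clarity check may be sketched as follows, scoring only two of the listed image quality attributes, namely sharpness (estimated here by the variance of the Laplacian of the face crop) and resolution, against assumed threshold values; a full implementation would evaluate whichever attributes and ranges are selected during calibration.

```python
import cv2

# Illustrative thresholds; the actual attribute ranges defining the
# predetermined clarity are selected during calibration.
MIN_SHARPNESS = 100.0     # minimum variance of the Laplacian of the face crop
MIN_FACE_WIDTH_PX = 64    # minimum bounding-box width, in pixels

def features_detectable_with_clarity(gray_image, box):
    """Rough check that the face crop meets the predetermined clarity."""
    x, y, w, h = box
    face_crop = gray_image[y:y + h, x:x + w]
    sharpness = cv2.Laplacian(face_crop, cv2.CV_64F).var()
    return sharpness >= MIN_SHARPNESS and w >= MIN_FACE_WIDTH_PX
```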
[0055] Further, at step (408), the driver monitoring system (102) determines a size of a bounding box suitable for enclosing the face of the driver in the reference facial image. In one embodiment, the driver monitoring system (102) determines the size of the bounding box suitable for enclosing the face, for example, by using the Manhattan distance matching technique and/or Euclidean distance matching technique, in order to identify one or more of the extent of eye-closure, eye gazing direction, eye blinking movement, head position, and head orientation of the driver. The driver monitoring system (102) further computes a reference histogram of intensity levels of pixels in the bounding box enclosing the face of the driver, as noted previously with reference to FIGS. 2A-B and 3. Further, at step (408), the imaging sensor (104) identifies a distance between the imaging sensor (104) and the driver of the vehicle (100) when the imaging sensor (104) captures the reference facial image. Moreover, the ambient monitoring system (108) determines the particular time of the day, the current location of the vehicle (100), and the prevailing weather condition when the imaging sensor (104) captures the reference facial image.
[0056] Subsequently, at step (410), the driver monitoring system (102) generates a reference feature vector for the reference facial image based on the determined size of the bounding box, the computed reference histogram of intensity levels of pixels in the bounding box, the identified distance between the imaging sensor (104) and the driver, and the determined particular time of the day, current location of the vehicle (100), and prevailing weather condition.
[0057] At step (412), the lighting elements controller (114) iteratively switches a designated number of the LEDs (116A-N) operating in the active state to the inactive state after generating the reference feature vector. Subsequently, the imaging sensor (104) recaptures the reference facial image of the driver after each iteration of switching the designated number of the LEDs (116A-N) to the inactive state. For example, the lighting elements controller (114) switches 2 of the 16 LEDs (116A-N) operating in the active state to the inactive state such that only 14 out of 16 LEDs (116A-N) are operated in the active state. Subsequently, the imaging sensor (104) recaptures the reference facial image of the driver when only 14 out of 16 LEDs (116A-N) are operated in the active state. The lighting elements controller (114) further switches 2 of the LEDs (116A-N) operating in the active state to the inactive state such that only 12 out of 16 LEDs (116A-N) are operated in the active state. The imaging sensor (104) then recaptures the reference facial image of the driver with only 12 out of 16 LEDs (116A-N) being operated in the active state. The lighting elements controller (114) further switches 2 of the LEDs (116A-N) to the inactive state such that only 10 out of 16 LEDs (116A-N) are operated in the active state. The imaging sensor (104) further recaptures the reference facial image of the driver with only 10 out of 16 LEDs (116A-N) being operated in the active state.
[0058] At step (414), the driver monitoring system (102) determines if one or more of the facial features of interest of the driver are detectable with a predetermined clarity from the reference facial image recaptured during each iteration. For example, the driver monitoring system (102) determines if one or more of the facial features of interest are detectable from a first reference facial image recaptured when only 14 out of 16 LEDs (116A-N) are operated in the active state. Similarly, the driver monitoring system (102) determines if one or more of the facial features of interest are detectable from a second reference facial image recaptured when only 12 out of 16 LEDs (116A-N) are operated in the active state. Likewise, the driver monitoring system (102) determines if one or more of the facial features are detectable from a third reference facial image recaptured when only 10 out of 16 LEDs (116A-N) are operated in the active state.
[0059] At step (416), the driver monitoring system (102) identifies a minimum number of the LEDs (116A-N) operated in the active state in a particular iteration during which one or more of the facial features of interest are determined to be still detectable with the predetermined clarity as the minimum number of the LEDs (116A-N) to be operated in the active state. For instance, in the previously noted examples, the driver monitoring system (102) identifies that the facial features of interest of the driver are detectable with the predetermined clarity from the first and second reference facial images recaptured in the first and second iterations when 14 and 12 out of 16 LEDs (116A-N), respectively, are operated in the active state. However, the driver monitoring system (102) identifies that the facial features of interest are not detectable with the predetermined clarity from the third reference facial image recaptured in a third iteration when 10 out of 16 LEDs (116A-N) are operated in the active state. In the previously noted example, the driver monitoring system (102) identifies the 12 LEDs (116A-N) that are operated in the active state in the second iteration as the minimum number of the LEDs (116A-N) to be operated in the active state for detecting the facial features of interest of the driver without creating a glare in the eyes of the driver. Subsequently, at step (418), the driver monitoring system (102) maps the identified minimum number of the LEDs (116A-N) to the reference feature vector.
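By way of a non-limiting illustration, the iteration of steps (412) to (416) may be sketched as follows; set_active_led_count, capture_reference_image, and features_detectable are hypothetical helpers standing in for the lighting elements controller (114), the imaging sensor (104), and the clarity check of step (414), respectively.

```python
def find_minimum_active_leds(total_leds=16, step=2):
    """Iteratively deactivate LEDs and return the smallest active count at which
    the facial features of interest remain detectable with the predetermined
    clarity, as in steps (412) to (416)."""
    minimum = total_leds
    active = total_leds
    while active - step > 0:
        active -= step
        set_active_led_count(active)        # hypothetical call to the lighting elements controller (114)
        image = capture_reference_image()   # hypothetical call to the imaging sensor (104)
        if not features_detectable(image):  # hypothetical clarity check of step (414)
            break                           # clarity lost; the previous count is the minimum
        minimum = active
    set_active_led_count(minimum)
    return minimum

# In the example above, clarity is retained at 14 and 12 active LEDs but lost
# at 10, so the function returns 12.
```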
[0060] As described herein above, FIGS. 4A-B depict the method for mapping the reference feature vector of a particular reference facial image to the minimum number of the LEDs (116A-N) to be operated in the active state. It is to be understood that the driver monitoring system (102) similarly maps reference feature vectors of other reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor (104) to a corresponding minimum number of the LEDs (116A-N). Alternatively, in certain embodiments, the driver monitoring system (102) includes a learning subsystem (126), such as a neural network or artificial intelligence-based subsystem, that is iteratively trained to interpolate patterns from a set of predetermined feature vectors stored locally or in an associated vehicle cloud database (128) and to generate feature vectors for mapping different minimum numbers of the LEDs (116A-N) to different reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor (104) in different types of vehicles.
[0061] Further, the driver monitoring system (102) stores the reference feature vectors mapped to the corresponding minimum number of the LEDs (116A-N) as stored correlations in the reference table (130) in the associated calibration database (124). After calibrating the driver monitoring system (102) and generating the reference table (130), the driver monitoring system (102) uses the reference table (130) for accurately monitoring an inattentive state of the driver of the vehicle (100) irrespective of a distance between the driver and the imaging sensor (104) deployed within the vehicle (100), as described in detail with reference to FIGS. 5A-C.
[0062] FIGS. 5A-C illustrate a flow diagram (500) depicting an exemplary method for monitoring driver behavior in real-time using the driver monitoring system of FIG. 1. At step (502), the driver monitoring system (102) configures the lighting elements controller (114) to activate a predefined number of the LEDs (116A-N) when the ignition of the vehicle (100) is activated by the driver. In one embodiment, the predefined number of the LEDs (116A-N) corresponds to all the LEDs (116A-N) in the vehicle (100) or only a subset of the LEDs (116A-N) in the vehicle (100). In another embodiment, rather than configuring the lighting elements controller (114) to activate the predefined number of the LEDs (116A-N), the driver monitoring system (102) determines a number of the LEDs (116A-N) to be operated in the active state when the ignition of the vehicle (100) is activated by the driver. In such an embodiment, the driver monitoring system (102) determines the number of the LEDs (116A-N) to be operated in the active state based on one or more imaging conditions prevailing in the surroundings of the vehicle (100), including, for example, the ambient lighting conditions, time of the day, weather, and location of the vehicle (100).
[0063] At step (504), the imaging sensor (104) captures a facial image of the driver while operating the predefined number of the LEDs (116A-N) in the active state. At step (506), the driver monitoring system (102) identifies if facial features of interest of the driver are detectable from the facial image of the driver with a predetermined clarity. As noted previously, examples of the facial features of interest include one or more of the face of the driver, an extent of eye-closure, eye gazing direction, eye movement, eye blinking movement, head position, head orientation, neck position and neck orientation of the driver. When the driver monitoring system (102) identifies that the facial features of interest are not detectable with the predetermined clarity, the driver alert unit (118) notifies the driver of the vehicle (100) that the driver monitoring system (102) may not accurately monitor an inattentive state of the driver, and the processing reverts to step (504). Alternatively, at step (508), the lighting elements controller (114) deactivates the predefined number of the LEDs (116A-N) when the driver monitoring system (102) identifies that the facial features of interest are detectable with the predetermined clarity from the facial image of the driver. At step (510), the driver monitoring system (102) determines a size of a bounding box suitable for enclosing the face of the driver in the captured facial image. At step (512), the driver monitoring system (102) generates the bounding box of the determined size around the face of the driver detected from the captured facial image and further identifies a histogram of intensity levels of pixels in the bounding box, as described previously with reference to FIGS. 2A-B and 3. Further, at step (514), the imaging sensor (104) determines a distance between the driver and the imaging sensor (104), and the ambient monitoring system (108) identifies a particular time of the day, a current location of the vehicle (100), and a prevailing weather condition when the imaging sensor (104) captures the facial image of the driver. In one embodiment, the ambient monitoring system (108) provides the identified particular time of the day, current location of the vehicle (100), and prevailing weather condition as inputs to the driver monitoring system (102).
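Steps (510) to (512) may be illustrated with a short sketch that crops the bounding box from a grayscale facial image and bins its pixel intensities into four ranges (0-64, 65-128, 129-198, and 199-255, matching the ranges noted below with reference to equation (3)). The frame and bounding box used here are synthetic placeholders rather than output of the imaging sensor (104).

```python
import numpy as np

def intensity_histogram(frame: np.ndarray, box: tuple) -> tuple:
    """Crop the face bounding box and count its pixels in the four intensity
    ranges 0-64, 65-128, 129-198 and 199-255 (the 'HILP' values of equation (1))."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    counts, _ = np.histogram(roi, bins=[0, 65, 129, 199, 256])
    return tuple(int(c) for c in counts)

# Synthetic 8-bit grayscale frame and a 10x10 bounding box, as in equation (1).
frame = np.random.default_rng(0).integers(0, 256, size=(480, 640), dtype=np.uint8)
hilp = intensity_histogram(frame, box=(300, 150, 10, 10))
print(hilp)   # four counts summing to 100 pixels
```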
[0064] At step (516), the driver monitoring system (102) generates a feature vector based on the determined size of the bounding box, the determined histogram of intensity levels of pixels in the bounding box, the determined distance between the driver and the imaging sensor (104), and the identified particular time of the day, current location of the vehicle (100), and prevailing weather condition. An exemplary feature vector that is generated by the driver monitoring system (102) is represented herein using equation (1).
GFV = [BBS = 10×10, HILP = (50, 30, 15, 5), DIS = 2 m, PTOD = 10 AM, CL = 12° N, 77° E, PWC = Raining] (1)
[0065] In equation (1), ‘GFV’ corresponds to the generated feature vector, ‘BBS’ corresponds to the size of the bounding box, and ‘HILP’ corresponds to the determined histogram of intensity levels of pixels in the bounding box. Further, ‘DIS’ corresponds to the determined distance between the driver and the imaging sensor (104), ‘PTOD’ corresponds to the particular time of the day, ‘CL’ corresponds to the particular location of the vehicle (100), and ‘PWC’ corresponds to the prevailing weather condition.
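As a non-authoritative illustration, the generated feature vector of equation (1) may be represented as a simple record; the field names below mirror the abbreviations of paragraph [0065] and are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FeatureVector:
    bbs: tuple            # bounding box size, e.g. (10, 10)
    hilp: tuple           # pixel counts in the four intensity ranges
    dis_m: float          # distance between the driver and the imaging sensor (104), in meters
    ptod: str             # particular time of the day
    cl: tuple             # current location of the vehicle (100), as (latitude, longitude)
    pwc: str              # prevailing weather condition

# The GFV of equation (1):
gfv = FeatureVector(bbs=(10, 10), hilp=(50, 30, 15, 5), dis_m=2.0,
                    ptod="10 AM", cl=(12.0, 77.0), pwc="Raining")
```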
[0066] Subsequently, at step (518), the driver monitoring system (102) identifies a matching feature vector from a plurality of feature vectors stored in the reference table (130) that matches the generated feature vector. For example, the matching feature vector stored in the reference table (130) that matches the generated feature vector is represented herein using equation (2).
RFV = [BBS = 10×10, RHILP = (50, 30, 15, 5), DIS = 2 m, PTOD = 10 AM, CL = 12° N, 77° E, PWC = Raining] (2)
[0067] In equation (2), ‘RFV’ corresponds to the reference feature vector generated during the initial calibration of the driver monitoring system (102) that matches the generated feature vector, and ‘RHILP’ corresponds to the reference histogram of intensity levels of pixels determined for that reference facial image during the initial calibration.
[0068] In certain embodiments, the generated feature vector may not match exactly with any of the reference feature vectors stored in the reference table (130). In such scenarios, the driver monitoring system (102) identifies the matching feature vector from the reference feature vectors, for example, using a Manhattan distance equation that is represented herein using equation (3).
MD = [|Pixels in RFV - Pixels in GFV| + |Pixels in FTR in RFV - Pixels in FTR in GFV| + |Pixels in STR in RFV - Pixels in STR in GFV| + |Pixels in TTR in RFV - Pixels in TTR in GFV| + |Pixels in FOTR in RFV - Pixels in FOTR in GFV|] (3)
[0069] In equation (3), ‘MD’ corresponds to a Manhattan distance between a reference feature vector and the generated feature vector, ‘RFV’ corresponds to the reference feature vector stored in the reference table (130), and ‘GFV’ corresponds to the generated feature vector. Further, ‘FTR’ corresponds to the first pixel intensity range of 0-64, ‘STR’ corresponds to the second pixel intensity range of 65-128, ‘TTR’ corresponds to the third pixel intensity range of 129-198, and ‘FOTR’ corresponds to the fourth pixel intensity range of 199-255.
[0070] For example, when a total number of pixels in each of the RFV and the GFV is 100, a number of pixels in the first pixel intensity range in the RFV and in the GFV is 50 and 48, a number of pixels in the second pixel intensity range in the RFV and in the GFV is 30 and 25, a number of pixels in the third pixel intensity range in the RFV and in the GFV is 15 and 22, and a number of pixels in the fourth pixel intensity range in the RFV and in the GFV is 5 and 5, respectively, the driver monitoring system (102) determines the Manhattan distance between the reference feature vector stored first in the reference table (130) and the generated feature vector as ‘14’ (that is, 0 + 2 + 5 + 7 + 0) using equation (3).
[0071] In one embodiment, the driver monitoring system (102) identifies the reference feature vector stored in the reference table (130) as the matching feature vector when the determined Manhattan distance is lesser than an exemplary designated distance threshold of 60. Alternatively, when the determined Manhattan distance is greater than the designated distance threshold of 60, the driver monitoring system (102) identifies a different reference feature vector stored in the reference table (130) whose associated Manhattan distance with reference to the generated feature vector is lesser than 60 as the matching feature vector.
[0072] Subsequently, at step (520), the driver monitoring system (102) identifies a minimum number of the LEDs (116A-N) mapped to the reference feature vector stored in the reference table (130) that matches the generated feature vector. For example, the driver monitoring system (102) identifies that the minimum number of the LEDs (116A-N) mapped to the matching reference feature vector stored in the reference table (130) corresponds to 12. Subsequently, the driver monitoring system (102) determines the identified minimum number of the LEDs (116A-N) as the specific number of the LEDs (116A-N) to be operated in the active state for detecting the facial features of interest of the driver with the predetermined clarity without creating a glare in the eyes of the driver.
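The matching of paragraphs [0068] to [0071] and the lookup of paragraph [0072] can be sketched as follows. The reference table below is a synthetic stand-in for the reference table (130), and the function names are illustrative assumptions; only the histogram part of the feature vectors is compared, as in equation (3).

```python
def manhattan_distance(rfv_hilp, gfv_hilp):
    """Equation (3): absolute difference of the total pixel counts plus absolute
    differences of the pixel counts in the four intensity ranges."""
    total = abs(sum(rfv_hilp) - sum(gfv_hilp))
    return total + sum(abs(r - g) for r, g in zip(rfv_hilp, gfv_hilp))

# Synthetic stand-in for the stored correlations in the reference table (130).
reference_table = [
    {"hilp": (50, 30, 15, 5), "min_leds": 12},
    {"hilp": (20, 40, 30, 10), "min_leds": 8},
]

def match_min_leds(gfv_hilp, table, threshold=60):
    """Return the minimum LED count mapped to the closest reference entry, or
    None when no entry lies within the designated distance threshold."""
    best = min(table, key=lambda row: manhattan_distance(row["hilp"], gfv_hilp))
    if manhattan_distance(best["hilp"], gfv_hilp) < threshold:
        return best["min_leds"]
    return None

# Worked example of paragraph [0070]: 0 + 2 + 5 + 7 + 0 = 14, which is below 60.
assert manhattan_distance((50, 30, 15, 5), (48, 25, 22, 5)) == 14
print(match_min_leds((48, 25, 22, 5), reference_table))   # -> 12
```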
[0073] Further, at step (522), the lighting elements controller (114) switches the identified number of the LEDs (116A-N) from the inactive state to the active state to illuminate the driver and a cabin space of the vehicle (100) for capturing facial features of the driver in real-time with the predetermined clarity. With respect to the previously noted example, the lighting elements controller (114) switches 12 of the total 16 LEDs (116A-N) from the inactive state to the active state to illuminate the driver and the cabin space of the vehicle (100) for capturing the facial features in real-time with predetermined clarity.
[0074] Subsequently, at step (524), the imaging sensor (104) continuously captures one or more facial images of the driver with only the identified number of the LEDs (116A-N) operating in the active state. At step (526), the driver monitoring system (102) processes the captured facial images to monitor if the driver of the vehicle (100) is in an inattentive state. For example, the driver monitoring system (102) identifies that the driver is in the inattentive state when the processed images indicate that the driver’s eyes are closed continuously for a designated duration, the driver has not gazed at the vehicle’s front windscreen for a particular duration, and/or the head of the driver leans forward or sideward and is not in an upright position.
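Step (526) can be sketched with a simple per-frame rule. The thresholds and the per-frame attributes used below (eyes_closed, gazing_at_windscreen, head_upright) are illustrative assumptions, since the description characterizes the criteria only qualitatively.

```python
def is_inattentive(frames, fps=30, eye_closure_s=2.0, gaze_away_s=3.0):
    """Flag the driver as inattentive when the eyes stay closed for a designated
    duration, the gaze stays off the front windscreen, or the head is not upright."""
    eyes_closed_run = gaze_away_run = 0
    for f in frames:                          # processed results of the captured facial images
        eyes_closed_run = eyes_closed_run + 1 if f["eyes_closed"] else 0
        gaze_away_run = gaze_away_run + 1 if not f["gazing_at_windscreen"] else 0
        if (eyes_closed_run >= eye_closure_s * fps
                or gaze_away_run >= gaze_away_s * fps
                or not f["head_upright"]):    # head leaning forward or sideward
            return True
    return False

# Hypothetical stream of 90 frames (about 3 seconds at 30 fps) with closed eyes.
frames = [{"eyes_closed": True, "gazing_at_windscreen": True, "head_upright": True}] * 90
print(is_inattentive(frames))   # -> True
```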
[0075] At step (528), the driver alert unit (118) generates one or more of an audio, visual, and haptic alert in order to revert the driver back to an attentive state upon identifying the driver to be in the inattentive state. At step (530), the vehicle control unit (120) automatically controls one or more operations of the vehicle (100) when the driver does not revert to the attentive state even after providing one or more of the audio, visual, and haptic alerts to the driver. For example, the vehicle control unit (120) automatically reduces a speed of the vehicle (100) and navigates and stops the vehicle (100) in a safe area such as at a side of a road by controlling an associated throttle, brake, and/or steering wheel when the driver is identified to be in the inattentive state. Automatically initiating such a corrective action helps in preventing collision of the vehicle (100) with pedestrians, other vehicles and surrounding objects in the path of the vehicle (100).
[0076] In certain embodiments, the ambient lighting conditions prevailing in the surroundings of the vehicle (100) vary from time to time when the driver drives the vehicle (100). Therefore, the minimum number of the LEDs (116A-N) determined by the driver monitoring system (102) as suitable for capturing facial features with the predetermined clarity when operating the vehicle (100) in one ambient lighting condition might not be suitable for a different ambient lighting condition. For example, the driver monitoring system (102) may initially operate 6 LEDs (116A-N) in the active state when the prevailing lighting condition corresponds to well-lit conditions when starting the vehicle (100) parked in an open home parking spot. After a while, as the vehicle (100) navigates out of the parking spot and through different paths, such as via a tunnel, the prevailing lighting condition will vary. In such a scenario, the driver monitoring system (102) may not appropriately identify one or more facial features of the driver with the predetermined clarity with only 6 LEDs (116A-N) being operated in the active state. When the driver monitoring system (102) fails to identify the facial features of the driver with the predetermined clarity using the facial images that are continuously captured by the imaging sensor (104), the driver monitoring system (102) re-determines a new specific number of the LEDs (116A-N) to be operated in the active state for the new lighting condition by repeating steps (504) to (530) described previously with reference to FIGS. 5A-C. The present driver monitoring system (102), thus, efficiently adapts to different imaging and ambient lighting conditions to accurately monitor driver behavior, thereby ensuring the life and safety of the people and infrastructure within and surrounding the vehicle (100).
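The adaptation described in paragraph [0076] can be summarized as a small control loop: monitor with the current number of active LEDs and, whenever the facial features can no longer be detected with the predetermined clarity, repeat the determination of steps (504) to (530). The callables below are hypothetical stand-ins rather than the actual subsystems.

```python
def monitoring_loop(frames, clarity_ok, determine_led_count, set_active_leds, check_inattentive):
    """Illustrative loop that re-determines the active LED count whenever the
    prevailing lighting condition makes the facial features unclear."""
    active_leds = determine_led_count()          # steps (504) to (520)
    set_active_leds(active_leds)                 # step (522)
    for frame in frames:                         # step (524): continuously captured images
        if not clarity_ok(frame):                # lighting changed, e.g. entering a tunnel
            active_leds = determine_led_count()  # repeat the determination for the new condition
            set_active_leds(active_leds)
            continue
        check_inattentive(frame)                 # steps (526) to (530)
```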
[0077] As noted previously, conventional driver monitoring systems using an NIR camera cannot accurately monitor an inattentive state of a driver when the driver is seated at a large distance, such as greater than 1.5 meters, from a driver-monitoring camera. In real-world scenarios, the distance between the driver and the driver-monitoring camera varies from one vehicle to another based on the vehicle’s type and dimensions. Especially in larger vehicles such as commercial trucks, vans, buses, railroad trains, and larger-sized cars, the driver-monitoring camera may be deployed at a distance greater than 1.5 meters from the driver’s seat. In such vehicles, conventional driver monitoring systems may not be able to accurately monitor driver behavior in different imaging and ambient lighting conditions.
[0078] In contrast, the present driver monitoring system (102) accurately monitors driver behavior in all ambient lighting conditions, even when the drivers are seated at a distance greater than 1.5 meters from a driver-monitoring camera. As a result, the driver monitoring system (102) can be used for monitoring the drivers of all types of vehicles, irrespective of their types and dimensions. Further, the driver monitoring system (102) allows for cost-efficient, yet accurate, monitoring of the inattentive state of the driver by determining and operating the minimum number of the LEDs (116A-N) in the active state for capturing good quality facial images irrespective of different lighting conditions, seating positions, driver demographics, and vehicle types, via a robust initial calibration of the driver monitoring system (102). Illuminating only the minimum number of the LEDs (116A-N) reduces wear and tear of the LEDs (116A-N), power consumption, and cost of maintaining the LEDs (116A-N) and batteries, while increasing the life of the batteries supplying power to the LEDs (116A-N).
[0079] Additionally, the driver monitoring system (102) allows for adaptive control of the imaging sensor (104) position and orientation to ensure capturing of good quality facial images for accurate identification of driver distraction. Furthermore, the driver monitoring system (102) also enables interpolation of the data patterns to continually learn and identify the minimum number of the LEDs (116A-N) to be operated in the active state to cost-efficiently and accurately monitor an attention level of different drivers in different imaging conditions, thus enabling the driver monitoring system (102) to be quickly deployed and initialized for use in different types of vehicles. The driver monitoring system (102) also enables initiating automatic corrective actions, such as reducing the speed of the vehicle (100), and navigating and stopping the vehicle (100) in a safe spot, upon identifying abnormal driver behavior. Embodiments of the present driver monitoring system (102), thus, prevent undesirable collisions and mishaps commonly arising from driver distraction, thereby ensuring the life and safety of the driver and passengers in the vehicle (100), and of the people and infrastructure in the surroundings of the vehicle (100).
[0080] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0081] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes.
LIST OF NUMERAL REFERENCES:
100 Vehicle
102 Driver monitoring system
104 Imaging sensor
108 Ambient monitoring system
110 GPS
112 Digital clock
114 Lighting elements controller
116A-N Lighting elements
118 Driver alert unit
120 Vehicle control unit
122 Communications link
124 Calibration database
126 Learning subsystem
128 Vehicle cloud database
130 Reference table
200-220 Flow diagram depicting steps of an exemplary method for calibrating a driver monitoring system
302, 304, 306, 308 Pixel intensity ranges
310 X-axis of a reference histogram
312 Y-axis of a reference histogram
400-418 Flow diagram depicting steps of an exemplary method for mapping a feature vector to a minimum number of lighting elements to be operated in an active state
500-530 Steps of a method for monitoring an inattentive state of a vehicle driver
Claims:
We claim:
1. A driver monitoring system (102) for a vehicle (100), comprising:
a plurality of lighting elements (116A-N) disposed within the vehicle (100);
an imaging sensor (104) adapted to capture a facial image of a driver of the vehicle (100);
a lighting elements controller (114) coupled to the lighting elements (116A-N) and adapted to switch one or more of the lighting elements (116A-N) between an inactive state and an active state, wherein the driver monitoring system (102):
generates a feature vector corresponding to a face of the driver detected from the captured facial image;
identifies a specific number of the lighting elements (116A-N) to be operated in the active state for detecting one or more facial features of the driver with a predetermined clarity by matching the generated feature vector with one of a set of reference feature vectors corresponding to the face of the driver detected from a set of reference facial images captured by the imaging sensor (104) during an initial calibration of the driver monitoring system (102), and wherein each of the set of reference feature vectors is mapped to a corresponding minimum number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver of the vehicle (100) during the initial calibration;
configures the imaging sensor (104) to capture one or more facial images of the driver during operation of the vehicle (100) while operating only the identified number of the lighting elements (116A-N) in the active state;
processes the captured facial images to identify if the driver is in an inattentive state; and
configures one or more associated systems (118 and 120) in the vehicle (100) to perform one or more of alerting the driver and automatically controlling one or more selected operations of the vehicle (100) upon determining the driver to be in the inattentive state.
2. The driver monitoring system (102) as claimed in claim 1, wherein the plurality of lighting elements (116A-N) corresponds to a plurality of light emitting diodes (116A-N), wherein the imaging sensor (104) corresponds to a near infrared camera (104), and wherein the lighting elements controller (114) corresponds to a light emitting diode controller (114), wherein the light emitting diode controller (114) comprises one of a voltage-switch driver, a constant-current driver, and a flash LED driver.
3. The driver monitoring system (102) as claimed in claim 1, wherein the driver monitoring system (102) comprises an ambient monitoring system (108) that is coupled to the imaging sensor (104) to identify a prevailing weather condition from one or more images of the surroundings of the vehicle (100) captured by the imaging sensor (104), wherein the ambient monitoring system (108) comprises one or more of a global positioning system (110) that identifies a current location of the vehicle (100) and a digital clock (112) that identifies a particular time during a day when the imaging sensor (104) captures the facial image of the driver.
4. The driver monitoring system (102) as claimed in claim 1, wherein the one or more associated systems (118 and 120) comprises a driver alert unit (118), wherein the driver alert unit (118) comprises one or more of a siren that generates an audio alert, an infotainment system that generates a visual alert, and a vibration sensor that generates a haptic feedback on one or more of a steering wheel, a seat, and a selected surface of the vehicle (100) that is in contact with the driver when the driver is determined to be in the inattentive state.
5. The driver monitoring system (102) as claimed in claim 1, wherein the one or more associated systems (118 and 120) comprises a vehicle control unit (120), wherein the vehicle control unit (120) corresponds to one or more electronic control units deployed in the vehicle (100) that automatically control one or more of a throttle, a brake, and a steering wheel of the vehicle (100) when the driver is identified to be in the inattentive state to perform one or more of automatically reducing a speed of the vehicle (100), navigating the vehicle (100), and stopping the vehicle (100) in a safe area.
6. The driver monitoring system (102) as claimed in claim 1, wherein the driver monitoring system (102) corresponds to one or more of an adaptive front lighting system and a vehicle anti-theft system.
7. The driver monitoring system (102) as claimed in claim 1, wherein the driver monitoring system (102) comprises:
one or more of a calibration database (124) and a vehicle cloud database (128) that store the set of reference feature vectors generated during the initial calibration of the driver monitoring system (102);
a learning subsystem (126) communicatively coupled to one or more of the calibration database (124) and the vehicle cloud database (128), wherein the learning subsystem (126) is iteratively trained to interpolate patterns from the set of reference feature vectors stored in one or more of the calibration database (124) and in the vehicle cloud database (128) for mapping different minimum number of the lighting elements (116A-N) to different reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor (104) in different types of vehicles.
8. A method for monitoring state of a driver of a vehicle (100), comprising:
capturing a facial image of the driver of the vehicle (100) using an imaging sensor (104) when a predefined number of lighting elements selected from a plurality of lighting elements (116A-N) disposed within the vehicle (100) are operated in an active state;
determining a bounding box enclosing a face of the driver in the captured facial image;
determining a histogram of intensity levels of pixels in the determined bounding box;
generating a feature vector that comprises values indicating the determined histogram of intensity levels of pixels in the determined bounding box;
identifying a specific number of the lighting elements (116A-N) to be operated in the active state for detecting one or more facial features of the driver with a predetermined clarity by matching the generated feature vector with one of a set of reference feature vectors, wherein each of the set of reference feature vectors comprises values indicating a corresponding reference histogram determined from a reference facial image captured by the imaging sensor (104) during an initial calibration of a driver monitoring system (102), and wherein each of the set of reference feature vectors is mapped to a corresponding minimum number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver of the vehicle (100) during the initial calibration;
switching the identified number of the lighting elements (116A-N) from an inactive state to the active state by a lighting elements controller (114);
capturing one or more facial images of the driver by the imaging sensor (104) during operation of the vehicle (100) while operating only the identified number of the lighting elements (116A-N) in the active state;
processing the captured facial images to identify if the driver of the vehicle (100) is in an inattentive state; and
performing one or more of alerting the driver and automatically controlling one or more selected operations of the vehicle (100) upon determining the driver to be in the inattentive state.
9. The method as claimed in claim 8, wherein the initial calibration of the driver monitoring system (102) comprises:
capturing one or more reference facial images of a plurality of drivers operating the vehicle (100) in different ambient lighting conditions and when seated at different distances from the imaging sensor (104), wherein the imaging sensor (104) captures each of the reference facial images when the predefined number of lighting elements selected from the plurality of lighting elements (116A-N) disposed within the vehicle (100) are operated in the active state;
determining a corresponding size of each corresponding bounding box suitable for enclosing a corresponding face of each of the plurality of drivers in the captured facial images;
generating the corresponding bounding box of the corresponding size to enclose the corresponding face of each of the plurality of drivers in the captured facial images;
determining a corresponding reference histogram of intensity levels of pixels in each of the corresponding bounding boxes;
determining a corresponding distance between the imaging sensor (104) and each of the plurality of drivers operating the vehicle (100) when the imaging sensor (104) captures each of the reference facial images;
identifying a corresponding time during a day, a corresponding location of the vehicle (100), and a corresponding prevailing weather condition when the imaging sensor (104) captures each of the reference facial images;
generating a corresponding reference feature vector for each of the reference facial images based on the corresponding size of the corresponding bounding box in that reference facial image, the corresponding reference histogram, the corresponding distance between the imaging sensor (104) and the driver, the corresponding time during the day, the corresponding location of the vehicle (100), and the corresponding prevailing weather condition; and
mapping the corresponding reference feature vector to the minimum number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver of the vehicle (100) with the predetermined clarity.
10. The method as claimed in claim 9, wherein mapping the corresponding reference feature vector to the minimum number of the lighting elements (116A-N) comprises:
iteratively switching a designated number of the plurality of lighting elements (116A-N) operating in the active state to the inactive state after generating the corresponding reference feature vector for a reference facial image;
recapturing the reference facial image of the driver after each iteration of switching the designated number of the lighting elements (116A-N) to the inactive state;
determining if the one or more of the facial features of the driver are detectable with the predetermined clarity from the reference facial image recaptured during each iteration; and
identifying a number of the lighting elements (116A-N) that are operated in the active state in a particular iteration during which the one or more of the facial features of the driver are determined to be detectable with the predetermined clarity as the minimum number of the lighting elements (116A-N) that are to be operated in the active state and mapping the identified minimum number of the lighting elements (116A-N) to the corresponding reference feature vector.
11. The method as claimed in claim 9, wherein identifying the specific number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver of the vehicle (100) with the predetermined clarity while operating the vehicle (100) during real-time comprises:
determining a size of the bounding box suitable for enclosing the face of the driver;
determining a distance between the imaging sensor (104) and the driver when the imaging sensor (104) captures the facial image of the driver;
identifying a particular time during a day, a current location of the vehicle (100), and a prevailing weather condition by an ambient monitoring system (108) in the vehicle (100) when the imaging sensor (104) captures the facial image of the driver;
updating the feature vector based on the determined size of the bounding box, the determined distance between the imaging sensor (104) and the driver of the vehicle (100), the identified time during the day, the identified current location of the vehicle (100), and the identified prevailing weather condition;
identifying a matching feature vector from the set of reference feature vectors that matches the updated feature vector; and
identifying the minimum number of the lighting elements (116A-N) that is mapped to the matching reference feature vector as the specific number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver with the predetermined clarity.
12. The method as claimed in claim 8, wherein determining if the one or more of the facial features of the driver are detectable with the predetermined clarity comprises:
recapturing the facial image of the driver while operating the predefined number of lighting elements selected from the plurality of lighting elements (116A-N) in the active state when a clarity of the one or more facial features of the driver detected from the facial image is determined to be lesser than the predetermined clarity;
generating a new feature vector for the recaptured facial image, wherein the new feature vector comprises one or more of a size of a bounding box enclosing the face of the driver in the recaptured facial image, a histogram of intensity levels of pixels determined from the bounding box in the recaptured facial image, a distance determined between the driver and the imaging sensor (104) when the imaging sensor (104) recaptures the facial image of the driver, a particular time during the day when the imaging sensor (104) recaptures the facial image, a current location of the vehicle (100) when the imaging sensor (104) recaptures the facial image, and a prevailing weather condition when the imaging sensor (104) recaptures the facial image;
identifying a matching feature vector from the set of reference feature vectors stored in a reference table (130) that matches the generated new feature vector; and
identifying the minimum number of the lighting elements (116A-N) that is mapped to the matching reference feature vector in the reference table (130) as a new specific number of the lighting elements (116A-N) to be operated in the active state for detecting the one or more facial features of the driver with the predetermined clarity.
13. The method as claimed in claim 8, wherein the initial calibration of the driver monitoring system (102) comprises iteratively training a learning subsystem (126) communicatively coupled to the driver monitoring system (102) to interpolate patterns from the set of reference feature vectors stored in one or more of a calibration database (124) and a vehicle cloud database (128) for mapping different minimum number of the lighting elements (116A-N) to different reference facial images captured in different ambient lighting conditions for different drivers seated at different distances from the imaging sensor (104) in different types of vehicles.
14. The method as claimed in claim 11, wherein capturing the one or more facial images of the driver during operation of the vehicle (100) comprises periodically updating a number of the lighting elements (116A-N) operating in the active state upon detecting a change in one or more of the determined size of the bounding box enclosing the face of the driver, the determined histogram, the determined distance between the imaging sensor (104) and the driver of the vehicle (100), the identified time during the day, the identified current location of the vehicle (100), and the identified prevailing weather condition.