
Real Time Vehicle Threat Alert System And Related Method Thereof

Abstract: The present invention relates to a real-time vehicle threat alert system (200). The system (200) includes a plurality of cameras (214), (215) … (200n) configured to capture a first predetermined data indicative of facial-recognition data, and a predetermined ambient data corresponding to a vehicle surrounding condition. Further, the system (200) includes a plurality of sensors (216), (218), (225) and a global positioning system (GPS) (219). Predefined signals generated from the plurality of sensors (216), (218), (225) and the GPS (219), respectively, provide cumulative signals indicative of a second predetermined data. The system (200) also includes a navigation system (217) configured to generate a predefined navigation system data, and a control module (202). An Artificial Intelligence (AI) module (201) communicatively connected to the control module (202) is configured for computing the first predetermined data, the second predetermined data and the predetermined ambient data to determine a driver load data based on a plurality of driver load indices. Figure: 02


Patent Information

Application #
Filing Date
27 January 2022
Publication Number
30/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

TATA MOTORS PASSENGER VEHICLES LIMITED
Floor 3, 4, Plot-18, Nanavati Mahalaya, Mudhana Shetty Marg, BSE, Fort, Mumbai, Mumbai City, Maharashtra

Inventors

1. Prasad Roy
Floor 3, 4, Plot-18, Nanavati Mahalaya, Mudhana Shetty Marg, BSE, Fort, Mumbai City, Maharashtra, 400001
2. Navneet Pise
Floor 3, 4, Plot-18, Nanavati Mahalaya, Mudhana Shetty Marg, BSE, Fort, Mumbai City, Maharashtra, 400001
3. Baburao Rane
18 Grosvenor Place, London, SW1X 7HS, United Kingdom

Specification

FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
AND
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION [See section 10; rule 13]
TITLE OF THE INVENTION
REAL-TIME VEHICLE THREAT ALERT SYSTEM AND RELATED
METHOD THEREOF
APPLICANT(s)
TATA MOTORS PASSENGER VEHICLES LIMITED
an Indian company having its registered office at Floor 3, 4, Plot-18,
Nanavati Mahalaya, Mudhana Shetty Marg, BSE,
Fort, Mumbai City, Mumbai – 400 001 Maharashtra, INDIA.
PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD OF THE INVENTION
Present disclosure, in general, relates to a field of automobiles. Particularly, but not exclusively, the present disclosure relates to real-time vehicle threat alert system and related method thereof.
BACKGROUND OF THE INVENTION
The majority of road accidents are caused by human error. Due to a lack of strict traffic rules and long driving hours for commercial vehicles, drivers tend to drive more than 7-8 hours at a stretch, resulting in fatigue and drowsiness. Drunk driving is another major concern, which results in a lack of control and focus by the driver while driving; thus, a slight distraction or misjudgment results in an accident. Drunk driving not only poses a grave danger to the vehicle driver, but also threatens the lives and property of others. Furthermore, there is also the possibility that another road user makes a mistake that puts the driver of a vehicle in a situation where a collision cannot be avoided.
Recent road data analysis shows that most vehicle accidents involving front-to-rear collisions are attributed to a lack of avoidance behavior, poor perception of danger, and misjudgment. Therefore, various types of safe driving systems have been developed to alert vehicle drivers of potential accidents or threats.
In an existing vehicle threat alert system, a Driver Monitoring System (DMS) and the related technologies of a particular vehicle are configured to alert the vehicle driver via an audio chime or steering wheel vibrations. However, with such an alert system, the vehicles surrounding the particular vehicle and travelling at a predefined distance from it are not configured to receive warning signals that may correspond to an abnormal driving behavior of the particular vehicle. Moreover, when vehicles travel at very high speed on the highway, the reaction time is very limited, and a sudden reactive action by the vehicle driver in response to said alert may cause more harm, as the driver may lose control of the vehicle. Thus, the drivers of the surrounding vehicles are unable, or have limited ability, to assess threats originating from a particular vehicle so as to avoid possible collisions with other vehicles or objects.
The present disclosure is directed to overcoming one or more of the limitations stated above or any other limitations associated with the known arts.
SUMMARY OF THE INVENTION
One or more shortcomings of the prior art are overcome by a system as claimed and additional advantages are provided through the device and a system as claimed in the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
In one non-limiting embodiment of the disclosure, a real-time vehicle threat alert system is configured for monitoring a plurality of vehicle conditions via cellular networks. The system includes a plurality of cameras. At least one of the plurality of cameras is mounted in a vehicle cabin and configured to be inclined at a predetermined angle so as to capture a first predetermined data. The first predetermined data corresponds to facial recognition data from a user of the vehicle. At least one of the plurality of cameras is integrally installed with a Driver Monitoring System (DMS) in the vehicle interior. The at least one camera of the plurality of cameras faces towards the driver of the particular vehicle and captures images corresponding to facial recognition of the driver.
In an embodiment, the captured images are indicative of facial recognition data and are stored as the first predetermined data in the DMS.
In an embodiment, the facial recognition data is fetched to the control module via a communication protocol such as Ethernet.

In an embodiment, the cumulative signals from a plurality of sensors and the GPS are referred to as a second predetermined data. The plurality of sensors includes a gyroscope sensor configured to detect an inclination of the vehicle relative to gravitational acceleration, an accelerometer that detects gravitational acceleration in addition to the actual vehicle acceleration, and a vehicle speed sensor. Further, data from the global positioning system (GPS) satellites provides accurate position, location, velocity, and time information for the vehicle.
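As an illustration of the "second predetermined data" described above, the sketch below bundles the individual sensor signals and the GPS fix into one cumulative record. All class, field, and function names are assumptions for illustration, not identifiers from the specification.

```python
from dataclasses import dataclass

@dataclass
class GpsFix:
    """GPS satellite data: position, velocity and time (illustrative)."""
    latitude: float
    longitude: float
    velocity_mps: float
    timestamp_s: float

@dataclass
class SecondPredeterminedData:
    """Cumulative signal from the gyroscope, accelerometer, speed sensor and GPS."""
    inclination_deg: float    # gyroscope: vehicle tilt relative to gravity
    acceleration_mps2: float  # accelerometer: gravity plus actual vehicle acceleration
    speed_kmph: float         # vehicle speed sensor
    gps: GpsFix

def aggregate(inclination_deg: float, acceleration_mps2: float,
              speed_kmph: float, gps: GpsFix) -> SecondPredeterminedData:
    """Combine the individual sensor signals into one cumulative record."""
    return SecondPredeterminedData(inclination_deg, acceleration_mps2, speed_kmph, gps)
```

In this sketch the record is what would be forwarded to the control module in one transmission cycle.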
In an embodiment, an Artificial Intelligence (AI) module integrally built with an emergency alert logic is configured for overlaying the second predetermined data with the first predetermined data to determine a driver load data by analyzing a plurality of driver load indices.
In an embodiment, upon indication of a signal corresponding to the driver load indices, such as the WARN index, ALERT index, or ACT index, the AI module determines abnormality signals corresponding to the vehicle running condition and hence generates warning or emergency signals.
In an embodiment, a predetermined ambient data corresponds to data collected from the vehicle ambient through at least one camera of the plurality of cameras mounted on the vehicle exterior. The predetermined ambient data includes weather data, vehicle navigation and map data. The AI module, integrally built with the emergency alert logic, is configured to compute the predetermined ambient data to determine the emergency alert signal.
In an embodiment, the AI module rates a predetermined ambient condition within a range of values. Based on the value determined for the predetermined ambient condition, the AI module dispatches the predetermined emergency alert message.
In an embodiment, the emergency alert signals generated from the particular vehicle, upon computation by the AI module, are transmitted to the surrounding vehicles through V2V cellular communication and to at least one infrastructure, such as a police station or hospital, via V2X cellular communication, thereby warning the surrounding vehicles of the possible collision for preventing accidents and hazardous situations.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The novel features and characteristics of the disclosure are set forth in the appended claims. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying figures. One or more embodiments are now described, by way of example only, with reference to the accompanying figures, wherein like reference numerals represent like elements and in which:
Figure 1 is a vehicle-environment diagram illustrating a vehicle to vehicle (V2V)
and a vehicle to everything (V2X) architecture through a plurality of
communication protocols, in accordance with an embodiment of the present
disclosure.
Figure 2 illustrates a real-time alert system for a vehicle, in accordance with an
embodiment of the present disclosure.
Figure 3. shows a flow-chart illustrating a method of generating real-time threat
alert signals for the vehicle based on a predefined data from the DMS, data based
on a plurality of vehicle operating conditions, the GPS data, Navigation system
data, in accordance with an embodiment of the present disclosure.

Figure 4. shows a flow-chart illustrating a method of generating real-time threat alert signals for the vehicle based on processing of the predetermined ambient data by the AI module, in accordance with an embodiment of the present disclosure.
Figure 5. shows an application processor illustrating at least one hardware and at least one software platform for integration of the Driver Monitoring and Cellular stack layers, in accordance with the present disclosure.
Figure 6. shows a data flow model illustrating the flow of the data from the application processor into the software blocks of the emergency alert logic, which runs in the application processor of the AI module, in accordance with the present disclosure.
Figure 7. shows a stack model illustrating the emergency alert logic of the AI module, in accordance with present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the system and method illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The foregoing has broadly outlined the features and technical advantages of the present disclosure in order that the detailed description of the disclosure that follows may be better understood. Additional features and advantages of the disclosure, which form the subject of the claims of the disclosure, will be described hereinafter. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying other devices, systems, assemblies and mechanisms for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the scope of the disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, as to its device or system, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a system or a device that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such a setup or device. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other or additional elements in the system or apparatus.
Reference will now be made to the exemplary embodiments of the disclosure, as illustrated in the accompanying drawings. Wherever possible, the same numerals have been used to refer to the same or like parts. The following paragraphs describe the present disclosure with reference to Figs. 1-7. It is to be noted that the system may be employed in any vehicle, including but not limited to a passenger vehicle, a utility vehicle, commercial vehicles, and any other vehicle with an exhaust system. For the sake of clarity, a vehicle is not shown.
Figure 1 is a vehicle-environment diagram (100) illustrating a vehicle-to-vehicle (V2V) and a vehicle-to-everything (V2X) architecture through a plurality of communication protocols, in accordance with an embodiment of the present disclosure. In an illustrated embodiment, the plurality of communication protocols includes the cellular network communication (103), cloud network, CAN, Ethernet, and the like. In an illustrated embodiment, a predetermined emergency-alert signal is generated based on computation of a predefined data by the Artificial Intelligence (AI) module (201) (illustrated in Figure 2) from a particular vehicle (102). The predetermined emergency-alert signal may be in the form of visual display messages or audio messages such as alarms or chimes. Upon computation, the predetermined emergency-alert signal is transmitted to surrounding vehicles (101), (104) travelling at the predetermined distance from the particular vehicle (102). In an illustrated embodiment, the emergency alert signal generated by the particular vehicle (102) is transmitted to the surrounding vehicles (101), (104) travelling at a distance A through a communication protocol such as a cloud network or CAN. Further, the emergency alert signal generated from the particular vehicle (102) is transmitted to the surrounding vehicle (104) travelling at a distance A’ through a communication protocol such as the cellular network (103). In an embodiment, the Artificial Intelligence (AI) module (201) is provided to process the recorded data, warn the driver about the vehicle running condition, and, in an emergency condition, raise a threat alert to nearby vehicles through the vehicle-to-vehicle (V2V) cellular network and to nearby infrastructure, such as a patrolling agency, emergency and rescue services, an insurance agency, a police station or a hospital, as well as an intimation to predefined contacts, through vehicle-to-infrastructure (V2X) cellular communication. Thus, the system (100) utilizes cellular V2X technologies to transmit the emergency alert signal from the particular vehicle. Particularly, the emergency alert signal data is computed based on the data fetched from beyond the line of sight, and from the front and rear of the particular vehicle (102).
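The distance-dependent routing described for Figure 1 (cloud/CAN at distance A, cellular at distance A’) can be sketched as a simple channel selector. This is an illustrative sketch only; the threshold value and function name are assumptions, not values from the specification.

```python
# Assumed boundary between the near range (distance A) and the far
# range (distance A'); the specification does not give a numeric value.
NEAR_RANGE_M = 300.0

def select_channel(distance_m: float) -> str:
    """Pick a transmission path for the emergency alert based on the
    receiver's distance from the particular vehicle."""
    if distance_m <= NEAR_RANGE_M:
        return "cloud/CAN"       # nearby surrounding vehicles
    return "cellular-V2X"        # farther vehicles, beyond line of sight
```

In practice such a selector would sit in front of the transmission stack and could consult signal quality as well as distance; the distance-only rule here mirrors the two cases the figure distinguishes.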
Figure 2 illustrates a real-time alert system (200) for the vehicle (102), in accordance with an embodiment of the present disclosure. The disclosed system is configured for monitoring a plurality of vehicle conditions via cellular networks. In an illustrated embodiment, the system includes a plurality of cameras, including camera 1 (214), camera 2 (215)… camera n (200n). In an illustrated embodiment, at least one of the plurality of cameras (214), (215)… (200n) may be mounted in a vehicle cabin and configured to be inclined at a predetermined angle so as to capture a first predetermined data. The first predetermined data corresponds to facial recognition data from a user of the vehicle. In particular, at least one of the plurality of cameras (214), (215)… (200n) is integrally installed with a Driver Monitoring System (DMS) (220) in the vehicle interior and located on a vehicle A-pillar. The at least one camera of the plurality of cameras (214), (215)… (200n) faces towards the driver of the particular vehicle (102) and captures images corresponding to facial recognition of the driver. The captured images are indicative of facial recognition data and are stored as the first predetermined data in the DMS (220). In an illustrated embodiment, the facial recognition data is fetched to the control module (202) via a communication protocol such as Ethernet. Also, at least another camera of the plurality of cameras (214), (215)… (200n) may be mounted on the vehicle exterior, such as on at least a portion of the vehicle exterior. The at least another camera may be configured to fetch a predetermined ambient data. Further, in an illustrated embodiment, the system (200) also includes a plurality of sensors (216), (218), (225), a navigation system (217), and a global positioning system (GPS) (219). The plurality of sensors (216), (218), (225) includes a gyroscope sensor (216) configured to detect an inclination of the vehicle relative to gravitational acceleration, an accelerometer (225) that detects gravitational acceleration in addition to the actual vehicle acceleration, and a vehicle speed sensor (218). Further, data from the global positioning system (GPS) satellites provides extremely accurate position, location, velocity, and time information for the vehicle.
In an illustrated embodiment, the cumulative signals from the plurality of sensors (216), (218), (225) and the GPS (219) are referred to as a second predetermined data. Also, as per an embodiment, the second predetermined data is also referred to as vehicle running data. Thus, the second predetermined data, corresponding to signals from the plurality of sensors (216), (218), (225) and containing information indicative of the GPS location, vehicle speed, real-time traffic, road surface, the gyroscope sensor (216) and the accelerometer (225), is transmitted to the control module (202) through the communication protocol. The fetched first predetermined data and second predetermined data are transmitted to the AI module (201), which is communicatively connected to the control module (202). Further, the predetermined ambient data corresponding to a vehicle surrounding condition is fetched to the control module (202) and transmitted to the AI module (201).
In an illustrated embodiment, the AI module (201) is configured to compute the first predetermined data based on pre-stored data points corresponding to a predefined facial data. The AI module (201) further overlays the second predetermined data with the first predetermined data to determine a driver load data by analyzing a plurality of driver load indices. The driver load indices are determined in terms of the first predetermined data, corresponding to the facial recognition of a driver of the vehicle, and the second predetermined data, corresponding to the vehicle operating condition data. The driver load indices include a NORMAL index indicative of a normal driving condition, a DRIFT index indicative of a vehicle drifting mode, a WARN index indicative of warning signal generation, an ALARM index indicative of alarm signal generation, and an ACT index indicative of a required action signal. With these indices, the driving manner of the vehicle driver of a particular vehicle is determined. During a NORMAL index indication, the AI module (201) determines the normal running condition of the particular vehicle as well as the normal driving behavior of the vehicle driver. For a DRIFT mode indication, the AI module (201) determines a vehicle drifting condition and the normal driving manner of the vehicle driver. With indication of a signal corresponding to the WARN index, the AI module (201) determines abnormality signals corresponding to the vehicle running condition and hence generates warning or emergency signals. This activates the ALERT index and generates therefrom a condition in the form of an alert signal. Upon actuation of the ALERT index or the WARN index or both, the AI module (201) computes the data corresponding to the ALERT and WARN signals and generates an emergency alert signal, which is transmitted to surrounding vehicles (101), (104) travelling within a predetermined range from the particular vehicle (102) through the cellular vehicle-to-vehicle (V2V) network. Also, the system is configured to transmit the emergency alert signal to a plurality of infrastructure, such as hospitals, police stations, etc., through a cellular V2I communication network.
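The driver-load-index decision described above can be sketched as a small enumeration and a broadcast rule. The numeric values follow the look-up table the specification describes (1 = NORMAL through 5 = ACT); the function name and the exact broadcast condition are illustrative assumptions.

```python
from enum import IntEnum

class DriverLoadIndex(IntEnum):
    """Driver load indices per the specification's look-up table."""
    NORMAL = 1   # normal driving condition
    DRIFT = 2    # vehicle drifting mode
    WARN = 3     # warning signal generation
    ALARM = 4    # alarm signal generation
    ACT = 5      # required action signal

def should_broadcast_alert(index: DriverLoadIndex) -> bool:
    """Assumed rule: the emergency alert is broadcast over V2V/V2I when
    the index reaches WARN or above (WARN, ALARM or ACT)."""
    return index >= DriverLoadIndex.WARN
```

The `IntEnum` keeps the symbolic names aligned with the numeric look-up-table values, so the same type serves both the table lookup and readable comparisons.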
The system as per the present disclosure is configured for transmitting in-vehicle data, when corresponding to an alert or emergency, to vehicle-to-everything (V2X) infrastructure through a cellular network (103).
Thus, the disclosed system (200) is configured to alert vehicle operators of the surrounding vehicles (101), (104) travelling at the predetermined distance from the particular vehicle (102) of real-time hazardous conditions, thereby facilitating an assessment of the probability of collision between vehicles and accident avoidance. Particularly, the emergency alert signals and warning messages are transmitted through a Human Machine Interface (HMI) (221) and displayed on a vehicle HMI display module (224). At the same time, an alert module (222) in the form of a speaker (223) or warning lamp is also actuated to warn the surrounding vehicles (101), (104) in the vicinity of the particular vehicle (102). Further, an ACT index sends a message for the required actions to be taken from vehicles to infrastructure through cellular V2I (vehicle-to-infrastructure) communication. Particularly, the ACT data index indicates to the surrounding vehicles (101), (104) in the vicinity to take pro-active action and stay away from this potentially dangerous vehicle, which can cause a major catastrophe on the road.
In an illustrated embodiment, the AI module (201) is integrally built with an emergency alert logic that is configured to compute a predetermined amount of received data from the control module (202) through a stacked layer model. The predetermined amount of received data herein refers to the first predetermined data, the second predetermined data and the predetermined ambient data. The layers of the stacked layer model include an Application Layer (226) sub-divided into an emergency alert layer (229) and a bridge layer (203). The emergency alert layer (229) is configured to handle the emergency alert signal through an Android framework for transmitting it to the surrounding vehicles (101), (104) or infrastructure. Further, the bridge layer (203) establishes a connecting link with a middleware (227) using the Android framework. The middleware (227) is substantially divided into a driver monitoring application (204) and vehicle network services (206).
The bridge layer (203) further includes sub-layers, including a Packet Data Convergence Protocol (PDCP) layer (209) that controls data, herein referred to as V2X messages (208), for efficient transmission over a wireless bandwidth that is small when transmitting an IP packet such as IPv4 or IPv6. Further, in an illustrated embodiment, a MAC layer (211) is connected to an upper RLC layer (210) by a logical channel; the logical channel includes a control channel for transmitting control-plane information according to the type of information to be transmitted, and is further divided into a traffic channel that transmits user-plane information.
The Radio Link Control (RLC) layer (210) is configured to control the data size, by segmenting the data received from the upper layer, so that the lower layer can transmit it to the radio section. Further, the V2X message (208) corresponding to an emergency alert signal or warning signal is transmitted through a plurality of communication means, such as an antenna device installed on the particular vehicle (102) and wireless communication such as cellular V2X and V2V communication.
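The RLC-style segmentation described above can be sketched in a few lines: a V2X message received from the upper layer is split into chunks small enough for the radio section. The 512-byte segment size is an assumption for illustration; real RLC segment sizes depend on the radio grant.

```python
# Assumed maximum segment size for the radio section (illustrative).
SEGMENT_SIZE = 512

def segment(payload: bytes, size: int = SEGMENT_SIZE) -> list[bytes]:
    """Split an upper-layer PDU into radio-sized segments, preserving order
    so the receiver can reassemble by simple concatenation."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]
```

Reassembly at the receiving RLC entity is then just `b"".join(segments)`, which is why keeping the segments in order matters.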
The driver monitoring application (204) computes a predetermined driver monitoring data of the first predetermined data for processing. The vehicle network services refer to a vehicle internet protocol and are configured to transmit the generated emergency alert signals for display on the vehicle HMI module (224). Further, the stacked layer model is divided into a hardware abstraction layer (228) including the Automotive HAL layer and the CAN layer. The hardware abstraction layer (228) is configured for transmitting data received from the vehicle-to-infrastructure communication network.
Advantageously, as per an embodiment of the present disclosure, the disclosed system forms an active safety system that facilitates preventive safety, ranging from drowsiness detection to determining driving behavior, in particular, by assisting in detecting and handling potential hazards from surrounding vehicles or objects, thereby reducing the number of accidents and their consequences.
Figure 3. shows a flow-chart (300) illustrating a method of generating real-time threat alert signals for the vehicle based on predefined data from the DMS (220), data based on a plurality of vehicle operating conditions, the GPS (219) data, and the navigation system (217) data, in accordance with an embodiment of the present disclosure. In an illustrated embodiment, the predefined data from the DMS (220), the data based on a plurality of vehicle operating conditions, the GPS (219) data, and the navigation system (217) data correspond to the first predetermined data. In an illustrated embodiment, the first predetermined data corresponding to the facial recognition data is captured by at least one of the plurality of cameras (214), (215)… (200n) integrally built with the DMS (220). The second predetermined data, based on the plurality of vehicle operating parameters determined by the GPS (219) and the plurality of sensors (216), (218), (225), is transmitted to the control module. The AI module (201) (as illustrated in Figure 2), integrally built with the emergency alert logic, is configured to compute the first predetermined data and the second predetermined data to determine the emergency alert signal.
With reference to an illustration, the disclosed method begins at step 301. At step 302, the first predetermined data transmitted to the control module by the plurality of cameras is fetched to the AI module (201). The first predetermined data corresponds to cumulative data collected from the at least one camera mounted in the interior of the vehicle and the at least another camera mounted on the exterior of the vehicle. At step 303, the second predetermined data transmitted to the control module (202) by the plurality of sensors (216), (218), (225) and the GPS (219) is fetched to the AI module (201) within a predefined time period of 500 msec. The second predetermined data corresponds to data indicative of the vehicle operating parameters determined by the vehicle location, vehicle inclination angle, GPS (219) data, vehicle speed, vehicle acceleration, and vehicle drifting or running condition. At step 304, the AI module (201) processes the first predetermined data to detect whether the facial recognition data points correspond to the pre-stored facial data points in the AI module (201), within a predefined time period of 2 sec. The pre-stored facial data points correspond to data points which define the driver condition during driving. The driver condition may relate to drowsiness or a normal condition. At step 305, the AI module (201) is configured for overlaying the fetched first predetermined data with the second predetermined data for a predefined time period of 5 sec. Based on the overlaid data, the AI module detects a vehicle predetermined load condition based on the plurality of driver load data indices, at step 306. In an example, the driver load data indices may be determined in the form of a look-up table with values of the driver load index from “1” to “5”. That is, a value of “1” refers to the NORMAL driver load index, a value of “2” refers to the DRIFT driver load index, a value of “3” refers to the WARN driver load index, a value of “4” refers to the ALARM driver load index, and a value of “5” refers to the ACT driver load index. Based on the driver load index value detected at step 306, the AI module computes a vehicle predetermined condition data. If the driver load index is the WARN driver load index or the ALARM driver load index or both, the method moves to step 307. At step 307, the AI module computes the emergency alert signal through the emergency alert logic. Further, at step 308, the AI module dispatches the predetermined emergency alert message to the surrounding vehicles (101), (104) (shown in Figure 1) through V2V cellular communication and to at least one infrastructure, such as a police station or hospital, via V2X cellular communication, thereby warning the surrounding vehicles of the possible collision for avoiding the accident. The method ends at step 309.
Figure 4. shows a flow-chart (400) illustrating a method of generating real-time threat alert signals for the vehicle based on processing of the predetermined ambient data by the AI module (201), in accordance with an embodiment of the present disclosure. In an illustrated embodiment, the predetermined ambient data corresponds to data collected from the vehicle ambient through at least one camera of the plurality of cameras (214), (215)… (200n) mounted on the vehicle exterior. The predetermined ambient data includes weather data, vehicle navigation and map data. The AI module (201) (as illustrated in Figure 2), integrally built with the emergency alert logic, is configured to compute the predetermined ambient data to determine the emergency alert signal.
With reference to an illustration, the disclosed method begins at step 401. At step 402, the predetermined ambient data transmitted to the control module (202) by the plurality of cameras is fetched to the AI module (201) within a predefined time period of 10 sec. The predetermined ambient data corresponds to cumulative data collected from the at least one camera mounted on the exterior of the vehicle (102). At step 403, a predetermined location data, for a predefined distance of a 50-meter radius from the GPS location, is transmitted by the GPS (219) to the control module (202) and fetched to the AI module (201). At step 404, the AI module (201) is configured for overlaying the predetermined ambient data and the predetermined location data. Based on the overlaid data, the AI module updates the V2V and V2I data within a predefined time period of 500 msec, at step 405.
At step 406, based on the updated V2V and V2I message indicative of the data cumulative of the vehicle location data and the driving condition of the vehicle, the AI module rates a predetermined ambient condition in a range of values “1” to “4”. A value of “1” refers to a ‘Smooth’ outside condition, a value of “2” is determined as ‘Navigable’, a value of “3” is referred to as a ‘Tough’ road condition, and a value of “4” refers to a ‘Worse’ ambient condition. Based on the detected value, the AI module (201) computes the emergency alert signal through the emergency alert logic, at step 407. Further, at step 408, the AI module (201) dispatches the predetermined emergency alert message to the surrounding vehicles (101), (104) (shown in Figure 1) through V2V cellular communication and to the at least one infrastructure, such as a police station or hospital, via V2X cellular communication, thereby warning the surrounding vehicles of the possible collision for preventing the accident and hazardous situation. The method ends at step 409.
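The rating of steps 406-408 can be sketched as a small mapping plus a dispatch rule. The 1-4 scale and its labels come from the specification; the dispatch threshold (rating 3 'Tough' or above triggers the alert) and the function name are assumptions consistent with the emergency-alert behaviour described above.

```python
# Ambient-condition ratings per the specification's 1-4 scale.
AMBIENT_RATING = {1: "Smooth", 2: "Navigable", 3: "Tough", 4: "Worse"}

def dispatch_needed(rating: int) -> bool:
    """Assumed rule: dispatch the emergency alert for 'Tough' or 'Worse'
    ambient conditions (rating >= 3); smoother conditions need no alert."""
    if rating not in AMBIENT_RATING:
        raise ValueError("ambient rating must be in the range 1-4")
    return rating >= 3
```

A real implementation would feed the rating from the overlaid ambient and location data (steps 404-405) rather than accepting it directly.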
Figure 5 shows an application processor (500) illustrating at least one hardware platform and at least one software platform for integration of the Driver Monitoring and Cellular stack layers, in accordance with the present disclosure. In an illustrated embodiment, vehicle-to-vehicle communication and driver monitoring technologies, augmented with real-time road conditions, can be used to alert the vehicles in the vicinity and also on the entire stretch of the highway (beyond the line of sight) to warn of a potentially unsafe driver who may be drunk or drowsy and can pose a major risk to all.

Further, as illustrated in Figure 5, the NIP board (502) connects with a plurality of vehicle interfaces (505). The NIP board (502) comprises the microcontroller (503), a vehicle connector (501), and a plurality of modules (506) including connectivity modules for CAN, CAN-FD, LIN, RS-485 and Ethernet. In an embodiment, the power module is configured to power the MCU ADC supply in order to reduce ripple. Further, as illustrated in an embodiment, the Control module (202) supports external analog and digital inputs. In an illustrated embodiment, the control module (202) has ports such as an Ethernet Switch IC with other Ethernet ports. The Ethernet switch facilitates the communication between the NIP and the SoC. Further, the control module (202) supports a plurality of on-board LED outputs via an n-MOS LED driving circuit (511). Further, the NIP board (502) includes an Audio Codec. The Control module (202) is configured to use an external crystal from Micro Crystal for the MCU.
In an illustrated embodiment, the SoC board (505) comprises a Telematics Interface Controller Board (508). The SoC board (505) is configured to provide Connectivity, Artificial Intelligence / Machine Learning, Security, and Gateway applications (512), (513). The SoC board (505) further includes a PMIC (507), a power management integrated circuit designed for high-performance applications. The SoC requires two crystals for its internal clocking operations; an RTC crystal is used, as in the NXP i.MX 8QM MEK. In an illustrated embodiment, an SPI flash implemented in the control module (202) is used as the fast-boot flash by the SoC. The LPDDR4 memory is implemented in the Control module (202). Further, a Micro SD Card Connector (509), J5, has been provided on the control module (202) for interfacing Micro SD cards to the SoC. An HDMI-out connector has been provided on the control module (202) for interfacing an external HDMI display to the i.MX 8QM SoC. Further, in an illustrated embodiment, an accelerometer and gyroscope sensor / Inertial Measurement Unit (IMU) and a USB 2.0 Micro Type-AB connector have been provided on the control module (202) SoC board. A two-port USB 3.0 hub from TI has been provided on the control module (202) SoC board to expand the SoC's single USB 3.0 port. Further, the 5G capability in the Control module (202) is provided by a single-SIM 5G module with C-V2X capability. The AG15 by Quectel is a vehicular-grade C-V2X module; it provides C-V2X capability to the Control module (202). The AG15 module also supports GNSS with a Dead-Reckoning (DR) function using an IMU (Inertial Measurement Unit) sensor. The NB-IoT capability of the Control module (202) is also implemented. The Wi-Fi and Bluetooth combo module is configured to provide Wi-Fi and BT functionality to the Control module (202).
Figure 6 shows a data flow model (600) illustrating the flow of data from the application processor (500) to a plurality of software blocks (601), (602), (603), (604), (605), (606), (607) of the emergency alert logic (701), which runs in the application processor of the AI module (201), in accordance with the present disclosure. The illustrated embodiment of Figure 6 is described in Figure 5 of the present disclosure. In an illustrated embodiment, block 601 indicates driver facial attribute detection, eye tracking, and head movement detection based on images captured by at least one camera of the plurality of cameras (214), (215)… (200n). In an illustrated embodiment, the at least one camera of the plurality of cameras (214), (215)… (200n) faces towards a driver of the particular vehicle and captures images corresponding to a facial recognition of the driver. At block 602, the driver load indices are computed and the outside/ambient condition is rated by the AI module (201). At blocks 603 and 604, the AI module (201), integrally built with the emergency alert logic (701), computes the context-aware driver load index and the driver load indices, respectively, and generates the emergency alert signals if the driver load index corresponds to the WARN index, the ALERT index or the ACT index. The emergency alert signal is processed through a vehicle-to-everything (V2X) software stack based on the LINUX network protocol stack (703) (shown in Figure 7). The emergency alert signal is transmitted through cloud-based services, as indicated at block 607, to vehicle-to-everything (V2X) and vehicle-to-infrastructure (V2I) endpoints, as indicated at block 606.
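The index-gated alert generation of blocks 603 and 604 can be sketched as follows. The index names come from the disclosure; the enum values and the payload format are hypothetical, added only for illustration:

```python
from enum import Enum

class LoadIndex(Enum):
    """Driver load indices named in the disclosure (ordering assumed)."""
    NORMAL = 0
    DRIFT = 1
    WARN = 2
    ALERT = 3
    ACT = 4

# Only these indices trigger an emergency alert signal.
ALERTING = {LoadIndex.WARN, LoadIndex.ALERT, LoadIndex.ACT}

def generate_emergency_signal(index):
    """Blocks 603-604 (sketch): emit an emergency alert signal only for
    the WARN, ALERT and ACT indices; route it through cloud services to
    V2X/V2I endpoints (hypothetical payload)."""
    if index in ALERTING:
        return {"index": index.name, "route": ["cloud", "V2X", "V2I"]}
    return None
```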

Figure 7 shows a stack model (700) illustrating the emergency alert logic (701) of the AI module (201) (as illustrated in Figure 2), in accordance with the present disclosure. The AI module (201), integrally built with the emergency alert logic (701), computes the first predetermined data, the second predetermined data and the predetermined ambient data, analyses the driver load indices, rates the outside/ambient condition, and generates the emergency alert signals if the driver load index corresponds to the WARN index, the ALERT index or the ACT index. The emergency alert signal is processed through a vehicle-to-everything (V2X) software stack based on the LINUX network protocol stack (703). Also, a predetermined data corresponding to driver monitoring system (DMS) data is transmitted to the control module (202) (illustrated in Figure 2) and processed through a DMS stack based on an Android application (702). Also, in an illustrated embodiment, the emergency alert logic includes software resources including a XEN hypervisor (704) that is configured to run the software stacks on the AI module (201).
Equivalents:
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Reference Numerals:

Reference Number Description
100 Vehicle-environment diagram
101, 104 Surrounding vehicles
102 Particular vehicle
103 Cellular network communication
200 Real-time vehicle threat alert system
201 AI module
202 Control Module
203 Bridge Module
204 Driver Monitoring Application
205 Vehicle Network Services
206 Automotive HAL layer
207 CAN Physical layer
208 V2X Messages
209 PDCP
210 RLC
211 MAC Layer
212 Physical Layer
213 Wireless Channel
214, 215, 200n Plurality of cameras

216 Gyroscope sensor
217 Navigation system
218 Vehicle speed sensor
219 GPS
220 DMS
221 HMI interface
222 Alert Module
223 Speaker
224 Display Module
225 Accelerometer
226 Application Layer
227 Middleware
228 Hardware abstraction layer
229 Emergency Alert Layer
300 A flow-chart illustrating a method of generating real-time threat alert signals for the vehicle based on a predefined data from the DMS, data based on a plurality of vehicle operating conditions, the GPS data, and Navigation system data
301-309 Method 300 flow steps
400 A flow-chart illustrating a method of generating real-time threat alert signals for the vehicle based on processing of the predetermined ambient data
401-409 Method 400 flow steps
500 Application processor
600 Data flow model
700 Stack model

701 Emergency Alert Logic
702 DMS stack based on Android application
703 Vehicle to everything (V2X) software stack based on LINUX network protocol stack
704 XEN Hypervisor

We Claim:
1. A real-time vehicle threat alert system (200), the system (200) comprising:
a plurality of cameras (214), (215)… (200n), the plurality of the cameras (214), (215) … (200n) configured to capture a first predetermined data indicative of a facial-recognition data and a predetermined ambient data corresponding to a vehicle surrounding condition;
a plurality of sensors (216), (218), (225);
a global positioning system (GPS) (219);
predefined signals generated from the plurality of sensors (216), (218), (225) and the GPS (219), respectively, providing cumulative signals indicative of a second predetermined data;
a navigation system (217) configured to generate a predefined navigation system data;
a Control module (202), the Control Module (202) configured to receive the first predetermined data, the predetermined ambient data, the second predetermined data and the predefined navigation system data via a communication protocol; and
an Artificial Intelligence (AI) module (201) communicatively connected to the control module (202), the Artificial Intelligence (AI) module configured for computing the first predetermined data, the second predetermined data and the predetermined ambient data to determine a driver load data based on a plurality of driver load indices.

2. The real-time vehicle alert system as claimed in claim 1 wherein the plurality of the cameras (214), (215)… (200n) is integrally installed with a Driver Monitoring system (DMS) (220) in a vehicle interior and located on a vehicle A-pillar.
3. The real-time vehicle alert system as claimed in claim 1 wherein the plurality of cameras (214), (215)… (200n) is mounted in at least a portion of a vehicle exterior.
4. The real-time vehicle alert system as claimed in claim 1 wherein, the plurality of sensors (216), (218), (225) includes a gyroscope sensor (216) configured to detect an inclination of the vehicle relative to a gravitational acceleration, an accelerometer that detects gravitational acceleration in addition to an actual vehicle acceleration.
5. The real-time vehicle alert system as claimed in claim 1 wherein, the global positioning system (GPS) satellites corresponding data provides extremely accurate position, location, velocity, and time information for vehicle.
6. The real-time vehicle alert system as claimed in claim 1 wherein, the second predetermined data is indicative of a vehicle running data.
7. The real-time vehicle alert system as claimed in claim 1 wherein, the second predetermined data corresponds to the cumulative signals generated from the plurality of sensors, contains information indicative of GPS location, vehicle speed, real-time traffic and road surface from the gyroscope sensor (216) and the accelerometer (225), and is transmitted to the control module (202) through the communication protocol; and wherein the fetched first predetermined data and second predetermined data are transmitted to the AI module (201) communicatively connected to the control module (202).

8. The real-time vehicle alert system as claimed in claim 1 wherein, the driver load indices includes a NORMAL index indicative of a normal driving condition, a DRIFT index indicative of a vehicle drifting mode, a WARN index indicative of warning signal generation, an ALARM index indicative of an alarm signal generation and an ACT index indicative of required action signal.
9. The real-time vehicle alert system as claimed in claim 1 wherein, during a NORMAL index indication, the AI module (201) determines the normal running condition of a particular vehicle and the normal driving behavior of a vehicle driver.
10. The real-time vehicle alert system as claimed in claim 1 wherein, during a DRIFT mode indication, the AI module (201) determines a vehicle drifting condition and the normal driving manner of the vehicle driver.
11. The real-time vehicle alert system as claimed in claim 1 wherein, at an indication of a signal corresponding to the WARN index, the AI module (201) determines abnormality signals corresponding to the vehicle running condition and generates warning or emergency signals.
12. The real-time vehicle alert system as claimed in claim 1 wherein, at an indication of the ALERT index or the WARN index or both, the AI module (201) computes the data corresponding to the ALERT and WARN signals and generates an emergency alert signal which is transmitted to surrounding vehicles (101), (104) travelling at a predetermined range from the particular vehicle (102) through the cellular vehicle to vehicle (V2V) network.
13. The real-time vehicle alert system as claimed in claim 1 wherein, the emergency alert signal is transmitted through a Human Machine Interface (HMI) (221) and is displayed on a vehicle HMI display module (224).

14. The real-time vehicle alert system as claimed in claim 1 wherein, an alert module (222) in the form of a speaker (223) or warning lamp is actuated to warn surrounding vehicles (101), (104) in vicinity to the particular vehicle (102).
15. The real-time vehicle alert system as claimed in claim 1 wherein, at an indication of the ACT index, the AI module (201) sends message for required actions to be taken from vehicles to infrastructure through a cellular V2I (vehicle to infrastructure) communication.
16. The real-time vehicle alert system as claimed in claim 1 wherein, the AI module (201) is configured to compute the first predetermined data, the second predetermined data and the predetermined ambient data through a stacked layer model.
17. The real-time vehicle alert system as claimed in claim 1 wherein, the stacked layer model includes an Application Layer (226) sub-divided into an emergency alert layer (229) and a bridge layer (203).
18. The real-time vehicle alert system as claimed in claim 1 wherein, the emergency alert layer (229) is configured to handle the emergency alert signal through an Android framework for transmitting it to the surrounding vehicles (101), (104) or infrastructure.
19. The real-time vehicle alert system as claimed in claim 1 wherein, the bridge layer (203) sets a connecting link with a middleware (227) using the Android framework; and wherein the middleware (227) is substantially divided into the driver monitoring application (204) and vehicle network services (205).
20. A method of generating real-time threat alert signals by a real-time vehicle threat alert system (200), the method comprising:

capturing, by a plurality of cameras (214), (215)… (200n), a first predetermined data indicative of a facial-recognition data and a predetermined ambient data corresponding to a vehicle surrounding condition;
providing cumulative signals indicative of a second predetermined data from predefined signals generated from a plurality of sensors (216), (218), (225) and a GPS (219), respectively;
generating, by a navigation system (217), a predefined navigation system data;
receiving, by a control module (202) via a communication protocol, the first predetermined data, the predetermined ambient data, the second predetermined data and the predefined navigation system data; and
computing, by an Artificial Intelligence (AI) module, the first predetermined data, the second predetermined data and the predetermined ambient data to determine a driver load data based on a plurality of driver load indices.

Documents

Application Documents

# Name Date
1 202221004532-STATEMENT OF UNDERTAKING (FORM 3) [27-01-2022(online)].pdf 2022-01-27
2 202221004532-PROVISIONAL SPECIFICATION [27-01-2022(online)].pdf 2022-01-27
3 202221004532-FORM 1 [27-01-2022(online)].pdf 2022-01-27
4 202221004532-DRAWINGS [27-01-2022(online)].pdf 2022-01-27
5 202221004532-FORM 3 [27-01-2023(online)].pdf 2023-01-27
6 202221004532-ENDORSEMENT BY INVENTORS [27-01-2023(online)].pdf 2023-01-27
7 202221004532-DRAWING [27-01-2023(online)].pdf 2023-01-27
8 202221004532-CORRESPONDENCE-OTHERS [27-01-2023(online)].pdf 2023-01-27
9 202221004532-COMPLETE SPECIFICATION [27-01-2023(online)].pdf 2023-01-27
10 202221004532-FORM 18 [02-02-2023(online)].pdf 2023-02-02
11 Abstract1.jpg 2023-02-10
12 202221004532-FORM-26 [23-06-2023(online)].pdf 2023-06-23
13 202221004532-ORIGINAL U-R 6(1A) FORM 26-060723.pdf 2023-09-12
14 202221004532-FER.pdf 2025-02-28
15 202221004532-RELEVANT DOCUMENTS [28-08-2025(online)].pdf 2025-08-28
16 202221004532-Proof of Right [28-08-2025(online)].pdf 2025-08-28
17 202221004532-PETITION UNDER RULE 137 [28-08-2025(online)].pdf 2025-08-28
18 202221004532-OTHERS [28-08-2025(online)].pdf 2025-08-28
19 202221004532-FER_SER_REPLY [28-08-2025(online)].pdf 2025-08-28

Search Strategy

1 202221004532E_07-03-2024.pdf