
Method And System For Crash Detection Of Electric Vehicle(s)

Abstract: The present disclosure describes a system (100) for detecting a crash of a vehicle. The system (100) comprises at least one Inertial Measurement Unit (IMU) sensor (102) configured to sense multi-dimensional inertial data of the vehicle, a vehicle communication interface (104) configured to retrieve vehicle operational data from a Controller Area Network (CAN), a data processing unit (106) communicably coupled to the IMU sensor (102) and the vehicle communication interface (104), and an output interface (108) communicably coupled to the data processing unit (106). Further, the data processing unit (106) is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete behavioral condition of the vehicle. FIG. 1


Patent Information

Application #
Filing Date
13 August 2024
Publication Number
27/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Matter Motor Works Private Limited
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010

Inventors

1. KUMAR PRASAD TELIKEPALLI
"IP Department MATTER, DCT, C/O Container Corporations of India Ltd., Domestic Container Terminal Gate No. 4, Shed No 1, Khodiyar, Gujarat 382421"
2. SATISH THIMMALAPURA
"IP Department MATTER, DCT, C/O Container Corporations of India Ltd., Domestic Container Terminal Gate No. 4, Shed No 1, Khodiyar, Gujarat 382421"
3. JATIN PRAKASH
"IP Department MATTER, DCT, C/O Container Corporations of India Ltd., Domestic Container Terminal Gate No. 4, Shed No 1, Khodiyar, Gujarat 382421"
4. ROHAN R. LODAYA
"IP Department MATTER, DCT, C/O Container Corporations of India Ltd., Domestic Container Terminal Gate No. 4, Shed No 1, Khodiyar, Gujarat 382421"
5. Mohak Vyas
"IP Department MATTER, DCT, C/O Container Corporations of India Ltd., Domestic Container Terminal Gate No. 4, Shed No 1, Khodiyar, Gujarat 382421"

Specification

DESCRIPTION
METHOD AND SYSTEM FOR CRASH DETECTION OF ELECTRIC VEHICLE(S)
CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority from Indian Provisional Patent Application No. 202421061348 filed on 13/08/2024, the entirety of which is incorporated herein by reference.
TECHNICAL FIELD
Generally, the present disclosure relates to the field of accident detection and alerting systems. Particularly, the present disclosure relates to a system and method for accident detection and alerting for a vehicle.
BACKGROUND
With the rise of autonomous vehicles and traffic intensity, reliable crash detection systems are essential for the safe operation and navigation of the vehicles. Advanced crash detection systems provide dependable crash detection mechanisms and increase consumer confidence in new automotive technologies.
Conventionally, crash or fall detection systems in vehicles primarily rely on threshold-based mechanisms. The above-mentioned systems monitor sensor parameters such as acceleration, gyroscopic motion, and speed, and trigger alerts when a measured value crosses a predefined threshold. For instance, a sudden drop in speed or a spike in acceleration beyond a fixed limit is interpreted as a potential crash event. The systems are often configured with static logic coded into the vehicle's electronic control units (ECUs), relying heavily on Inertial Measurement Units (IMUs) or airbag deployment signals. In some cases, signal fusion is performed between IMU data and basic vehicle signals such as brake or throttle activity, but the logic remains linear and threshold-dependent.
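By way of a non-limiting illustration only, the conventional threshold-based logic described above may be sketched as follows; the 8 g acceleration limit, the 30 km/h speed-drop limit, and the sample values are assumptions chosen for illustration, not values from any disclosed system:

```python
# Hypothetical sketch of conventional threshold-based crash detection.
# The fixed limits below are illustrative assumptions.
def threshold_crash_detect(accel_g, speed_drop_kmh,
                           accel_limit=8.0, speed_drop_limit=30.0):
    """Flag a crash when either measured value crosses its fixed limit."""
    return abs(accel_g) > accel_limit or speed_drop_kmh > speed_drop_limit

# A sharp pothole spike is misclassified as a crash, exactly the
# false-positive failure mode the passage describes:
print(threshold_crash_detect(accel_g=9.2, speed_drop_kmh=2.0))  # True
```

The example makes the limitation concrete: the fixed limit cannot distinguish a momentary pothole impact from a genuine collision, since both can exceed the same acceleration threshold.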
However, there are certain underlying problems associated with the above-mentioned existing crash detection mechanisms. For instance, fixed thresholds fail to account for the complex and varied dynamics of real-world driving, leading to a high rate of false positives, such as misclassifying a pothole impact or harsh braking as a crash. Moreover, the conventional systems lack contextual awareness, failing to distinguish between aggressive but controlled manoeuvres and actual collision or fall scenarios. Further, the conventional systems do not adapt to evolving driving behavior or different vehicle types, limiting their robustness. Therefore, the lack of learning ability and absence of historical pattern recognition restrict performance in diverse environments such as off-road terrain, high-density urban traffic, or two-wheeler motion characteristics.
Therefore, there exists a need for a mechanism for detecting the crash and/or fall of a vehicle that is efficient and overcomes one or more problems as mentioned above.
SUMMARY
An object of the present disclosure is to provide a system for detecting a crash of a vehicle.
Another object of the present disclosure is to provide a method of detecting a crash of a vehicle.
Yet another object of the present disclosure is to provide a system and method capable of accurately detecting the occurrence of a crash of a vehicle.
In accordance with a first aspect of the present disclosure, there is provided a system for detecting a crash of a vehicle, the system comprises:
- at least one Inertial Measurement Unit (IMU) sensor configured to sense multi-dimensional inertial data of the vehicle;
- a vehicle communication interface configured to retrieve vehicle operational data from a Controller Area Network (CAN);
- a data processing unit communicably coupled to the IMU sensor and the vehicle communication interface; and
- an output interface communicably coupled to the data processing unit,
wherein the data processing unit is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete operational pattern of the vehicle.
The system and method for detecting a crash of a vehicle, as described in the present disclosure, are advantageous in terms of providing a system with enhanced safety and efficiency for detecting crash and/or fall events of a vehicle. Advantageously, model training via a machine learning algorithm enables the vehicle to make instantaneous adjustments in vehicle parameters such as (but not limited to) speed, steering, and braking in the event of the possibility of a crash and/or fall, thereby enhancing the overall safety and performance of the vehicle. Further, the invention addresses the shortcomings of conventional threshold-based detection systems by leveraging artificial intelligence to learn complex state patterns, improve detection reliability, and reduce false alarms. The system is capable of adapting to different vehicle types, road environments, and driving behaviors without requiring manual reconfiguration.
In accordance with another aspect of the present disclosure, there is provided a method of detecting a crash of a vehicle, the method comprising:
- segmenting multi-dimensional inertial data and vehicle operational data into a plurality of clusters, via a clustering module;
- generating a set of association rules from the clustered data, via a rule derivation module;
- initiating model training of a neural network by utilizing the set of association rules based on a machine learning algorithm, via a GNN engine;
- generating a plurality of nodes and edges for the trained neural network, via the GNN engine; and
- detecting a future operating state of the vehicle based on the trained neural network, via the GNN engine.
Additional aspects, advantages, features, and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments constructed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
Figures 1 and 2 illustrate block diagrams of a system for detecting a crash of a vehicle, in accordance with different embodiments of the present disclosure.
Figure 3 illustrates a flow chart of a method of detecting a crash of a vehicle, in accordance with another embodiment of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
As used herein, the terms “crash”, “collision”, and “accident” are used interchangeably and refer to a vehicle collision with another object, vehicle, or surface. The crash may occur due to (but not limited to) loss of vehicle control, mechanical failure, adverse conditions, or human error. Further, the crash also refers to a fall of the vehicle in an anomalous event such as a rollover, tipping event, skidding-induced overturn, or loss of vertical stability resulting from abrupt manoeuvres, terrain irregularities, or structural imbalance during motion or stationary conditions. The severity of the crash primarily depends on the speed, weight, and angle of collision of the vehicle. The crash of the vehicle causes a sudden deceleration in the vehicle, leading to damage and potential injuries.
As used herein, the terms “Inertial Measurement Unit sensor” and “IMU sensor” are used interchangeably and refer to a compact electronic device that quantitatively measures the vehicle’s specific force, angular rate, and magnetic field using a combination of accelerometers, gyroscopes, and optionally magnetometers. Further, the IMU sensors generate multi-dimensional inertial data along three orthogonal axes (X, Y, Z), capturing acceleration, angular velocity, orientation, and tilt. The sensor functions as a core data source for determining the motion characteristics and dynamic state of the vehicle in real-time. Furthermore, the IMU sensors are embedded within the vehicle and provide high-frequency, timestamped motion data critical for analyzing driving behavior, detecting anomalies, and enabling advanced event classification. The types of IMU sensors include 6-axis and 9-axis units. Specifically, the 6-axis IMU integrates a tri-axial accelerometer and a tri-axial gyroscope, and a 9-axis IMU additionally incorporates a tri-axial magnetometer to determine orientation with respect to the Earth's magnetic field. Moreover, for vehicle operating state detection, IMU sensor data undergoes preprocessing to remove noise and align the time series. The processed data is fused with vehicle operational signals to derive meaningful patterns. Further, a clustering technique is applied to segment the motion data into behaviourally relevant operating states. The states are then used to generate state transition rules or graph-based representations that support real-time classification of current or predicted vehicle behavior.
As used herein, the term “multi-dimensional inertial data” refers to a structured sensor output collected along multiple axes of motion, typically including linear acceleration and angular velocity across three spatial dimensions (X, Y, Z). Specifically, the data originates from an Inertial Measurement Unit (IMU) and represents dynamic motion characteristics such as, but not limited to, vehicle acceleration, rotational motion, orientation, and tilt. Further, each dimension corresponds to an independent inertial component, enabling precise reconstruction of the vehicle’s physical movement in space. The data is time-synchronized and sampled at high frequency to ensure fine-grained resolution of transient motion events. Furthermore, the types of multi-dimensional inertial data include linear acceleration vectors, angular velocity measurements, orientation angles (roll, pitch, yaw), and tilt metrics. Furthermore, the data acquisition involves recording continuous signals from accelerometers and gyroscopes embedded in the IMU. The technique of processing begins with temporal alignment and noise filtering, followed by correlation with vehicle network data such as brake actuation or throttle input. The clustering algorithms segment the inertial data into discrete motion patterns that represent specific behavioral states. The clustered data points contribute to rule extraction or graph construction processes used for state classification and behavioral prediction in vehicle systems.
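As a non-limiting sketch of the noise-filtering step mentioned above (the moving-average window and the sample values are illustrative assumptions, not the disclosed technique):

```python
from collections import deque

# Illustrative sketch only: a simple per-axis moving-average filter of the
# kind commonly used to smooth raw IMU samples before clustering.
def moving_average(samples, window=3):
    """Smooth a single-axis signal with a sliding-window mean."""
    buf, out = deque(maxlen=window), []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

ax = [0.0, 0.1, 4.8, 0.2, 0.1]   # raw X-axis acceleration with a noise spike
print(moving_average(ax))         # the 4.8 spike is attenuated
```

In practice, filtered axes would then be time-aligned against CAN timestamps before the fusion step the passage describes.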
As used herein, the terms “vehicle communication interface” and “communication interface” are used interchangeably and refer to a hardware and software subsystem configured to retrieve operational data from internal vehicle networks, including the Controller Area Network (CAN) bus. The interface facilitates structured access to digital signals exchanged between electronic control units (ECUs) that govern vehicle subsystems such as engine control, braking, transmission, and steering. Further, the communication interface is designed for real-time, bidirectional communication. The interface operates on standardized automotive protocols to extract status parameters including speed, throttle position, braking force, gear position, and clutch activity. The retrieved signals are time-aligned and mapped to their functional identifiers (message IDs) based on the CAN database (DBC) definitions. Furthermore, the types of vehicle communication interfaces include On-Board Diagnostics II (OBD-II) adapters, dedicated CAN transceivers, and embedded gateways integrated within telematics control units. The interface interacts with the IMU sensor data by synchronizing timestamps and facilitating signal-level correlation for multi-sensor fusion. The technique for utilizing the interface involves initializing message filters, continuously polling or subscribing to target CAN frames, and decoding the payloads into interpretable vehicle parameters.
As used herein, the term “vehicle operational data” refers to a set of time-stamped signals generated by various subsystems within a vehicle that describe the functional and dynamic status during operation. The data comprises measurable parameters such as, but not limited to, vehicle speed, brake activation, throttle position, steering angle, gear selection, clutch engagement, and engine revolutions per minute (RPM). The operational data are sourced primarily from electronic control units (ECUs) over the Controller Area Network (CAN) bus, and the data reflects real-time driver inputs, mechanical responses, and vehicle control actions. Further, each parameter is encoded within specific CAN messages and represents a component of the vehicle’s operating behavior in both normal and abnormal driving conditions. The types of vehicle operational data are typically categorized based on subsystem origin, such as powertrain, chassis, braking, and transmission systems. The technique for utilizing vehicle operational data in state detection involves acquiring and decoding relevant CAN signals through a vehicle communication interface, aligning the operational data temporally with inertial data, and correlating signal patterns with motion dynamics. Subsequently, the combined dataset is subjected to unsupervised clustering or graph-based modeling to identify distinct behavioral states. The states are further used to derive transition logic and classify vehicle activities, including acceleration events, gear shifts, braking behavior, or anomalies such as potential crash incidents.
As used herein, the terms “Controller Area Network” and “CAN” are used interchangeably and refer to a robust, real-time communication protocol designed for reliable data exchange between electronic control units (ECUs) within a vehicle. The CAN enables decentralized communication without a host computer by allowing multiple ECUs, such as engine control, braking, steering, and transmission systems, to transmit and receive messages over a shared two-wire bus. Further, each message on the CAN bus contains a unique identifier, priority level, and data payload, enabling precise and collision-free communication under high-speed conditions. Furthermore, the protocol supports error detection, retransmission, and message arbitration, ensuring high reliability in safety-critical applications. The types of CAN include Classical CAN (with data frames up to 8 bytes) and CAN FD (Flexible Data-rate), which allows higher throughput and larger payloads up to 64 bytes. The utilization of CAN in vehicles involves the use of a CAN transceiver and controller to read and decode messages associated with operational parameters such as speed, brake status, throttle input, and gear position. Moreover, the signals are time-synchronized and passed to higher-level software modules or fusion algorithms, which correlate the signals with inertial data for behavior modeling. In state detection of the vehicle, CAN data contributes to generating context-aware representations of vehicle operation for clustering, rule derivation, or machine learning-based classification.
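By way of illustration only, decoding a CAN payload into vehicle parameters may resemble the following sketch; the message ID 0x1A0 and the byte layout are hypothetical, since real layouts are defined by the vehicle's DBC file, as the passage notes:

```python
import struct

# Hypothetical frame layout (NOT from any real DBC): bytes 0-1 carry speed
# in 0.01 km/h units, big-endian; byte 2 carries brake status (0/1).
def decode_speed_frame(can_id, payload):
    """Decode one assumed 'vehicle speed' frame into named parameters."""
    if can_id != 0x1A0:               # message filter: ignore other IDs
        return None
    raw_speed, brake = struct.unpack_from(">HB", payload)
    return {"speed_kmh": raw_speed * 0.01, "brake_active": bool(brake)}

frame = bytes([0x13, 0x88, 0x01, 0, 0, 0, 0, 0])   # 0x1388 = 5000 -> 50.0 km/h
print(decode_speed_frame(0x1A0, frame))
```

The filter-then-decode shape mirrors the interface operation described earlier: initialize message filters, poll target frames, and decode payloads into interpretable parameters.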
As used herein, the terms “data processing unit” and “processing unit” are used interchangeably and refer to a computing subsystem configured to receive, process, and analyze multi-source sensor data within a vehicle system. Specifically, the data processing unit executes specialized algorithms for real-time clustering, correlation, rule extraction, and machine learning-based inference. Further, in a vehicle state detection architecture, the unit interfaces directly with the inertial measurement unit (IMU) and vehicle communication interface to acquire synchronized motion and operational data. Architecturally, the unit may include processing cores, memory modules, and dedicated accelerators to support tasks such as, but not limited to, unsupervised learning, signal transformation, and graph-based modeling. The types of data processing units include embedded microprocessors, digital signal processors (DSPs), system-on-chip (SoC) platforms, and neural network accelerators, depending on computational demand. The operation of the processing unit involves initial filtering and synchronization of input signals, followed by multi-dimensional clustering of correlated inertial and CAN data into discrete operating states. Subsequently, the states form the basis for extracting temporal or associative rules, which are used to construct graph neural network (GNN) structures comprising nodes and transition edges. The unit infers current or future operating states by traversing these structures in real time, enabling detection of behavioral anomalies or significant vehicle events such as falls or crashes.
As used herein, the terms “output interface” and “interface” are used interchangeably and refer to a communication subsystem within a vehicle state detection system that transmits processed results, alerts, or classified states from the data processing unit to external arrangements or internal vehicle components. Further, the output interface enables downstream actions such as, but not limited to, triggering safety mechanisms, logging diagnostic information, or updating driver-assistance displays. The interface maintains data integrity and timing accuracy, ensuring that the output reflects the current operational state or detected anomalies of the vehicle without latency-induced distortion. The types of output interfaces include wired protocols, such as, but not limited to, CAN, Ethernet, and UART, as well as wireless protocols such as Bluetooth, Wi-Fi, or cellular interfaces, depending on the system architecture. Operationally, the output interface receives the classified behavioral state or anomaly detection result from the data processing unit and encodes the information into a predefined message format. The message may include state identifiers, confidence scores, timestamps, and severity levels. Subsequently, the formatted output is transmitted to actuators, storage modules, or remote servers for further action, visualization, or post-analysis.
As used herein, the terms “operating states” and “states” are used interchangeably and refer to a discrete, data-defined behavioral condition of a vehicle determined by analyzing correlated inertial and operational signals over time. Each operating state represents a unique combination of motion dynamics (such as acceleration, rotation, or tilt) and system activities (such as braking, throttle input, or gear changes), reflecting the vehicle’s functional context during a specific period of operation. Further, the states are derived through clustering or rule-based segmentation of multi-dimensional sensor data, enabling classification of normal driving patterns, transitional manoeuvres, or anomalous events. The types of operating states include, but are not limited to, steady acceleration, deceleration, idle, sharp turning, gear shifting, braking, coasting, and critical events such as crash, fall, or instability. Defining and identifying operating states involves preprocessing synchronized IMU and vehicle CAN data, applying clustering algorithms to group similar patterns, and labelling each cluster as a distinct state. The states are further used to build a graph structure with nodes representing operating states and edges representing transitions, allowing the system to infer real-time vehicle behavior and detect deviations from expected operation.
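A minimal, non-limiting sketch of the node-and-edge structure described above, assuming illustrative state labels and a sequence that has already been clustered into states:

```python
from collections import defaultdict

# Illustrative sketch: nodes are operating states, edges carry the number of
# observed transitions between consecutive states. The labels are assumed.
def build_transition_graph(state_sequence):
    graph = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(state_sequence, state_sequence[1:]):
        graph[cur][nxt] += 1           # edge weight = observed transition count
    return {state: dict(edges) for state, edges in graph.items()}

seq = ["idle", "acceleration", "steady", "braking", "idle", "acceleration"]
print(build_transition_graph(seq))
```

A rare or never-seen transition (e.g. "steady" directly to "fall") would stand out against such a graph, which is the deviation-detection idea the passage describes.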
As used herein, the terms “behavioral condition” and “condition” are used interchangeably and refer to a categorized representation of vehicle behavior derived from sensor-based analysis of motion and operational signals. Further, each condition signifies a specific driving pattern or vehicle state defined by temporal and spatial variations in parameters such as, but not limited to, acceleration, angular velocity, orientation, braking activity, throttle response, and gear transition. The behavioral conditions serve as higher-level abstractions built upon raw sensor data, capturing the contextual intent or mechanical outcome of vehicle operation. Further, the conditions reflect transitions between stable, transitional, or anomalous driving phases, including states such as, but not limited to, acceleration, deceleration, sharp turn, idling, gear shifting, coasting, and impact response. The types of behavioral conditions include steady motion, abrupt deceleration, high-speed cornering, braking under load, instability due to uneven terrain, rollover tendency, and fall or crash detection. Deriving behavioral conditions involves acquiring time-aligned inertial and vehicle operational data, followed by feature extraction and unsupervised clustering. The clusters are labeled based on signal characteristics and used to derive rule sets that capture temporal associations between conditions.
As used herein, the term “angular velocity” refers to the rate and direction of rotation of the vehicle around one or more principal axes, and is measured in radians or degrees per second. The angular velocity quantifies the changes in vehicle orientation by capturing rotational motion dynamics through gyroscopic sensors embedded in the inertial measurement unit (IMU). Further, the angular velocity reflects critical aspects of manoeuvring, such as turning, tilting, spinning, or sudden rotational shifts, providing essential information for understanding vehicle stability and dynamic behavior. The types of angular velocity include roll rate (rotation around the vehicle’s longitudinal axis), pitch rate (rotation around the lateral axis), and yaw rate (rotation around the vertical axis). The angular velocity is determined by continuous sensing via gyroscopes that output time-series rotational velocity data. Subsequently, the data is processed and synchronized with other vehicle signals to characterize driving manoeuvres, detect unusual rotational events, and support behavioral condition classification within state detection models.
As used herein, the term “vehicle orientation” refers to the spatial alignment of the vehicle relative to a fixed reference frame, typically expressed through angular measurements such as, but not limited to, roll, pitch, and yaw angles or quaternion representations. The vehicle orientation represents the vehicle’s attitude in three-dimensional space, indicating the tilt, lean, and heading direction at any given moment. The orientation is derived by integrating and fusing sensor data from accelerometers, gyroscopes, and magnetometers within the inertial measurement unit (IMU), providing a continuous and precise estimate of the vehicle’s pose during motion. The types of vehicle orientation include roll (rotation about the vehicle’s longitudinal axis), pitch (rotation about the lateral axis), and yaw (rotation about the vertical axis). The vehicle orientation is obtained via sensor fusion algorithms such as Kalman filtering or complementary filtering that combine angular velocity and acceleration measurements to compensate for sensor noise and drift. Advantageously, the accurate orientation data enables classification of driving manoeuvres, detection of unusual tilts or skids, and contributes to the identification of operating states and behavioral conditions in vehicle monitoring systems.
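As a non-limiting example of the complementary filtering mentioned above (the blending constant alpha = 0.98, the axis convention, and the sample readings are assumptions, not disclosed values):

```python
import math

# Illustrative one-axis complementary filter: blend the drift-prone but
# responsive gyro integration with the noisy but drift-free accelerometer
# roll estimate. alpha = 0.98 is an assumed tuning constant.
def complementary_roll(prev_roll_deg, gyro_x_dps, ay, az, dt, alpha=0.98):
    """Return a fused roll-angle estimate in degrees after one time step."""
    accel_roll = math.degrees(math.atan2(ay, az))   # gravity-based estimate
    gyro_roll = prev_roll_deg + gyro_x_dps * dt     # integrated rate estimate
    return alpha * gyro_roll + (1 - alpha) * accel_roll

# One 10 ms step: stationary vehicle leaning slightly to one side.
roll = complementary_roll(prev_roll_deg=0.0, gyro_x_dps=0.0,
                          ay=0.17, az=9.81, dt=0.01)
print(roll)   # small positive roll, slowly pulled toward the accel estimate
```

Over repeated steps the estimate converges to the accelerometer's gravity-based angle while the gyro term suppresses short-term noise, which is the drift/noise trade-off the passage attributes to complementary filtering.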
As used herein, the term “tilt angle” refers to the angular deviation of the vehicle’s body from a defined reference position, typically the horizontal plane, measured relative to the direction of gravity. The tilt angle quantifies the degree of the vehicle leaning or inclining laterally (side-to-side) or longitudinally (front-to-back), providing information about stability, road conditions, and potential rollover risk. Further, the tilt angle is derived primarily from accelerometer data within the inertial measurement unit (IMU) by isolating the gravity vector component and calculating the orientation with respect to the vehicle’s frame of reference. The types of tilt angle include lateral tilt (side lean) and longitudinal tilt (forward or backward pitch). Determining the tilt angle involves continuous measurement of acceleration along multiple axes, followed by trigonometric calculations or sensor fusion techniques to separate gravitational acceleration from dynamic motion. Consequently, the accurate tilt angle measurements enable detection of uneven terrain, sharp turns, inclines, and abnormal vehicle postures, serving as a key parameter for defining operating states and behavioral conditions in vehicle monitoring and safety systems.
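The trigonometric calculation mentioned above may be sketched, for illustration only, as follows (axis convention assumed: X longitudinal, Y lateral, Z vertical; inputs in m/s² with the vehicle near-stationary so the measured vector is dominated by gravity):

```python
import math

# Illustrative gravity-vector tilt computation; axis convention is assumed.
def tilt_angles(ax, ay, az):
    """Return (lateral, longitudinal) tilt in degrees from one accel sample."""
    lateral = math.degrees(math.atan2(ay, az))                        # side lean
    longitudinal = math.degrees(math.atan2(ax, math.hypot(ay, az)))   # pitch
    return lateral, longitudinal

lat, lon = tilt_angles(0.0, 0.0, 9.81)   # level vehicle: both angles are zero
print(lat, lon)
```

During dynamic motion the accelerometer also measures vehicle acceleration, which is why the passage pairs this calculation with fusion techniques that separate gravity from dynamic components.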
As used herein, the term “clustering module” refers to a computational component within a data processing system that groups multi-dimensional input data into distinct clusters based on similarity metrics. The module processes correlated sensor signals, such as inertial measurements and vehicle operational parameters, to identify patterns or states that share common characteristics. Further, by partitioning data into clusters, the module enables the abstraction of raw sensor inputs into representative operating states or behavioral conditions. Furthermore, the types of clustering techniques employed by the module include k-means clustering, hierarchical clustering, density-based clustering (DBSCAN), and model-based clustering, selected based on data characteristics and performance requirements. Furthermore, the technique of operation involves preprocessing synchronized sensor data, applying a chosen clustering algorithm to segment the data into groups, and optimizing cluster parameters, such as, but not limited to, the cluster count, using evaluation methods such as, but not limited to, the elbow method or silhouette analysis. The resulting clusters form the basis for rule derivation and state representation in subsequent analytical models.
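For illustration only, a minimal k-means procedure of the kind named above may be sketched as follows (the two-dimensional features, the sample values, and k = 2 are assumptions, not disclosed parameters):

```python
import math
import random

# Illustrative k-means over 2-D feature vectors, e.g. (acceleration, speed).
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, k)]   # random initial centers
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        centers = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Two well-separated behaviors: near-idle vs. high-speed cruising.
data = [(0.1, 5), (0.2, 6), (0.1, 4), (9.5, 60), (9.8, 62), (9.7, 61)]
print(sorted(kmeans(data, k=2)))
```

Choosing k itself would use the elbow or silhouette evaluation the passage mentions; a fixed k = 2 is used here only to keep the sketch short.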
As used herein, the term “rule derivation module” refers to a specialized processing unit designed to extract meaningful patterns and relationships from clustered multi-dimensional sensor data by generating a set of logical association rules. The module analyzes grouped data points obtained from the clustering process to identify frequent co-occurrences and dependencies among variables, enabling the formalization of behavioral patterns or operating state transitions. The association rules represent conditional relationships that describe how certain sensor readings or states predict or influence others within the vehicle’s operational context. The module employs data mining techniques such as the Apriori algorithm or FP-Growth to systematically scan the clustered data and identify item sets that meet predefined thresholds of support and confidence. Further, the method includes preprocessing clustered data into transaction-like formats, pruning insignificant or redundant rules, and ranking rules based on statistical significance and relevance to vehicle behavior. The derived rules serve as an interpretable knowledge base for training advanced models, such as, but not limited to, graph neural networks, facilitating accurate state classification and anomaly detection within the vehicle monitoring system.
As used herein, the term “clusters” refers to a representative group of data points within a multi-dimensional dataset that exhibit similar characteristics or patterns, identified through unsupervised learning techniques. Specifically, in the vehicle sensor data, clusters correspond to sets of correlated inertial and operational signals that collectively describe distinct operating conditions or behavioral modes of the vehicle. Each cluster abstracts raw, high-dimensional sensor measurements into a meaningful category, facilitating efficient analysis and interpretation of complex driving patterns. The types of clustering include centroid-based methods (such as k-means), hierarchical clustering, density-based clustering (DBSCAN), and model-based clustering approaches, each suited to different data distributions and application needs. The technique for forming clusters involves preprocessing synchronized sensor data, selecting an appropriate similarity metric (e.g., Euclidean distance), applying the clustering algorithm, and validating cluster quality using metrics like the silhouette score or the elbow method. The resulting clusters serve as foundational elements for subsequent rule derivation and state classification processes in vehicle behavior monitoring systems.
As used herein, the term “association rules” refers to a conditional relationship between sets of items or events within a dataset, indicating that the occurrence of one set implies the presence of another with measurable statistical significance. In vehicle sensor data analysis, the association rules represent patterns linking specific combinations of inertial and operational signals to subsequent vehicle behaviors or state transitions. Further, the rules quantify the strength and frequency of co-occurrences, facilitating the detection of regularities and anomalies in vehicle operation. Furthermore, the types of association rules include frequent rules that satisfy minimum thresholds for support, closed rules that capture maximal item sets without redundancy, and rare rules that identify infrequent but potentially significant patterns. The way to derive association rules involves converting clustered sensor data into transaction-like formats, applying algorithms such as Apriori or FP-Growth to identify frequent item sets, and calculating metrics including support, which measures the proportion of data containing the itemset, and confidence, which assesses the reliability of the rule.
As used herein, the term “predefined frequency threshold” refers to a quantifiable minimum occurrence rate used to determine the significance of patterns or events within a dataset. In the context of association rule mining, the frequency threshold defines the least proportion or count of data instances in which a particular itemset or event must appear to be considered relevant for further analysis. Specifically, establishing such a threshold ensures that only commonly recurring and statistically meaningful patterns contribute to rule formation, thereby filtering out noise and rare occurrences that do not provide reliable insights into vehicle behavior. The types of predefined frequency thresholds include absolute counts, which specify a fixed number of occurrences, and relative support values, which define the threshold as a fraction or percentage of the total dataset. The technique to apply the predefined frequency threshold involves scanning the clustered sensor data to tally occurrences of candidate item sets, comparing the counts against the threshold, and retaining only those item sets meeting or exceeding the threshold.
As used herein, the term “associative rule mining techniques” refers to data analysis methods that identify meaningful relationships and patterns between variables within large datasets by learning frequent item sets and generating rules. The above-mentioned techniques analyze co-occurrences of data elements to reveal correlations, dependencies, or sequences that characterize behaviors or events. In the vehicle sensor data, the associative rule mining uncovers consistent patterns between inertial measurements and operational signals, enabling the extraction of rules that describe vehicle states and transitions. The types of associative rule mining techniques include Apriori, which iteratively identifies frequent item sets by pruning infrequent candidates, FP-Growth, which uses a compact tree structure (frequent pattern tree) to efficiently mine frequent patterns without candidate generation, and Eclat, which applies depth-first search and vertical data formats for pattern discovery. The technique further involves transforming clustered sensor data into transactional form, scanning for frequent item sets that meet predefined support and confidence thresholds, and generating association rules based on the item sets.
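Purely as an illustrative sketch of the support/confidence mining described above (not the claimed implementation), the following Python routine treats each ride window as a transaction of cluster labels and emits pairwise rules. The label names ("hard_brake", "pitch_forward", and so on) are hypothetical:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Tiny Apriori-style miner: frequent item pairs -> rules A => B
    whose support and confidence meet the given thresholds."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        sup = support({a, b})
        if sup < min_support:
            continue  # prune infrequent item sets (the Apriori step)
        for ante, cons in ((a, b), (b, a)):
            conf = sup / support({ante})
            if conf >= min_confidence:
                rules.append((ante, cons, round(sup, 2), round(conf, 2)))
    return rules

# Cluster labels observed per ride window, treated as transactions.
windows = [
    {"hard_brake", "pitch_forward"},
    {"hard_brake", "pitch_forward"},
    {"hard_brake", "pitch_forward", "skid"},
    {"cruise"},
]
rules = mine_rules(windows, min_support=0.5, min_confidence=0.9)
```

Here the co-occurrence of hard braking and forward pitch survives the thresholds, while the rare "skid" pairing is pruned, mirroring the predefined frequency threshold described above.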
As used herein, the term “neural network” refers to a computational model inspired by the structure and functioning of biological neural systems, designed to process complex data by simulating interconnected layers of nodes, or neurons. Each neuron receives input signals, applies a weighted transformation, and passes the output through an activation function to subsequent neurons, enabling the network to learn patterns, relationships, and representations from input data. The neural networks consist of an input layer, one or more hidden layers, and an output layer, facilitating hierarchical feature extraction and decision-making based on training data. The types of neural networks include feedforward neural networks, in which information flows unidirectionally from input to output; Convolutional Neural Networks (CNNs), specialized for spatial data like images; Recurrent Neural Networks (RNNs), designed to handle sequential data through feedback loops; and Graph Neural Networks (GNNs), which operate on graph-structured data to capture relationships among nodes and edges. The technique of training a neural network involves feeding labeled or unlabeled input data, adjusting connection weights using optimization algorithms such as backpropagation and gradient descent, and iteratively minimizing prediction errors to improve performance on tasks such as classification, regression, or anomaly detection.
As used herein, the term “machine learning” refers to a systematic computational procedure that enables a system to learn patterns, relationships, or representations from data without explicit programming for specific tasks. Specifically, the algorithms analyze input data to build models that make predictions, classify information, or identify anomalies by recognizing underlying structures and adapting based on experience. The machine learning algorithms facilitate automation and improvement in decision-making processes through iterative training on examples. The types of machine learning algorithms include supervised learning, which uses labeled data to train models for classification or regression; unsupervised learning, which discovers hidden patterns or groupings within unlabeled data; semi-supervised learning, combining labeled and unlabeled data for enhanced training; and reinforcement learning, in which an agent learns optimal actions through trial-and-error interactions with an environment. The technique of training involves selecting appropriate features from data, choosing an algorithm suited to the problem, training the model on datasets, validating performance using test data, and tuning parameters to optimize accuracy and generalization.
As used herein, the term “nodes” refers to fundamental units or entities within a graph or network that represent discrete data points, states, or objects. In the context of a Graph Neural Network (GNN) for vehicle state detection, each node corresponds to a specific operating state or condition of the vehicle, encapsulating relevant features derived from sensor and operational data. Further, the nodes serve as connection points through which information flows and relationships with other nodes are established via edges. The types of nodes include simple data points with static attributes, dynamic nodes that update their states over time, and composite nodes that aggregate information from multiple sources. The technique of utilizing nodes involves assigning each node a feature vector representing the vehicle’s behavioral characteristics, connecting nodes based on transitions or interactions, and processing node information through network layers to enable pattern recognition, state classification, or anomaly detection within the vehicle’s operational context.
As used herein, the term “edges” refers to the connections or links between nodes in a graph or network that represent relationships, interactions, or transitions between the entities represented by the nodes. In a Graph Neural Network (GNN) for vehicle state detection, the edges symbolize the transitions or sequences between different operating states of the vehicle, capturing one state's evolution into another based on sensor and operational data. The types of edges include directed edges, which indicate a one-way relationship or transition from a source node to a target node; undirected edges, representing mutual or bidirectional relationships; weighted edges, which assign a value indicating the strength or frequency of the connection; and temporal edges, encoding time-dependent transitions. The way of employing edges involves defining connections between nodes based on observed signal sequences, assigning weights or attributes reflecting transition probabilities or frequencies, and utilizing the edges in network computations to model vehicle behavior dynamics and predict future states.
In accordance with a first aspect of the present disclosure, there is provided a system for detecting a crash of a vehicle, the system comprises:
- at least one Inertial Measurement Unit (IMU) sensor configured to sense multi-dimensional inertial data of the vehicle;
- a vehicle communication interface configured to retrieve vehicle operational data from a Controller Area Network (CAN);
- a data processing unit communicably coupled to the IMU sensor and the vehicle communication interface; and
- an output interface communicably coupled to the data processing unit,
wherein the data processing unit is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete behavioral condition of the vehicle.
Referring to figure 1, in accordance with an embodiment, there is described a system 100 for detecting a crash of a vehicle. The system 100 comprises at least one Inertial Measurement Unit (IMU) sensor 102 configured to sense multi-dimensional inertial data of the vehicle, a vehicle communication interface 104 configured to retrieve vehicle operational data from a Controller Area Network (CAN), a data processing unit 106 communicably coupled to the IMU sensor 102 and the vehicle communication interface 104, and an output interface 108 communicably coupled to the data processing unit 106. Further, the data processing unit 106 is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete behavioral condition of the vehicle.
The system 100 for detecting the crash of a vehicle operates by continuously acquiring multi-dimensional inertial data from at least one Inertial Measurement Unit (IMU) sensor 102. The IMU sensor 102 delivers real-time data streams representing acceleration, angular velocity, orientation, and tilt angle of the vehicle. Simultaneously, a vehicle communication interface 104 retrieves operational data such as speed, brake engagement, throttle position, and clutch status from the Controller Area Network (CAN) bus. The two data streams are routed to a data processing unit 106 responsible for real-time analysis and interpretation of the vehicle's physical and operational behavior. Further, the data processing unit 106 applies clustering algorithms to the combined IMU 102 and CAN data to identify and segment patterns in vehicle behavior. Specifically, each cluster corresponds to a unique operating state, representing a specific behavioral condition of the vehicle, such as cruising, turning, braking, skidding, falling, or crashing. Subsequently, association rule mining is performed on the clusters to extract frequently occurring correlations in the data, resulting in the derivation of interpretable rules. The rules serve as input to a Graph Neural Network (GNN) engine, which models ride dynamics through nodes and edges, with nodes representing learned ride states and edges representing valid transitions between them, enabling the creation of a robust behavioral graph of vehicle operation. Beneficially, the integration of clustering, association rule mining, and GNN-based modeling allows for highly accurate detection and prediction of crash and fall events. The system 100 ensures timely and context-aware detection by continuously analyzing state transitions and forecasting future conditions based on historical patterns. The above-mentioned approach enables real-time alerts, improves rider safety, and supports automated decision-making in safety-critical environments. 
Further advantages include reduced false positives, improved interpretability of behavioral transitions, and scalable deployment across various vehicle types and operating environments.
In an embodiment, the multi-dimensional inertial data of the vehicle comprises at least one of acceleration, angular velocity, vehicle orientation, and tilt angle, and wherein the vehicle operational data comprises at least one of speed, brake activation, throttle position, and clutch status. Specifically, the acceleration reflects changes in linear motion across multiple axes, enabling identification of sudden forces such as impacts or abrupt halts. Further, the angular velocity captures rotational movement around the vehicle’s pitch, roll, and yaw axes, essential for detecting skids, rolls, or spins. Furthermore, the orientation provides an absolute reference of the vehicle’s position relative to the Earth's frame, ensuring precise monitoring of the spatial posture. Furthermore, the tilt angle measures inclination from the ground plane, which is vital for identifying unstable positions or tipping conditions prior to a fall or crash. Structurally, the data processing unit receives continuous input from the IMU sensor and applies filtering and normalization to remove noise and standardize measurements. Further, the temporal segmentation techniques group time-series data into meaningful intervals, while feature extraction algorithms identify trends and anomalies across multiple axes. The correlation mapping links specific inertial signatures to predefined motion categories, such as, but not limited to, sharp turns, loss of balance, or collision scenarios. The processed data elements are fed into the clustering module, which assigns each event to an operating state based on statistical similarity and behavioral relevance. Advantageously, the utilization of multi-dimensional inertial data enables highly granular and context-aware motion analysis. Further, the real-time interpretation of orientation and tilt angle allows early detection of instability, as acceleration and angular velocity provide strong indicators of impact severity and directional forces.
Subsequently, the combined analysis reduces false positives by distinguishing between aggressive driving and true crash or fall conditions. Further, the speed reflects the longitudinal motion profile and helps to differentiate between stationary, low-speed, and high-speed conditions. Furthermore, the brake activation identifies deceleration events and assists in understanding braking intensity and timing. Furthermore, the throttle position indicates engine power demand, providing insight into acceleration intent or load conditions. Furthermore, the clutch status reveals gear engagement or disengagement, which is particularly relevant during rapid deceleration or manoeuvre transitions. The data processing unit receives real-time operational data through a vehicle communication interface connected to the Controller Area Network (CAN). The statistical methods quantify correlations between operational inputs and inertial responses, while dynamic time warping or windowed analysis techniques align asynchronous data streams. The processed signals are fed into a clustering module, which classifies data segments into distinct operational states based on recurring control patterns and mechanical responses during normal or abnormal behavior. Advantageously, the incorporation of vehicle operational data improves behavioral state classification accuracy by contextualizing inertial patterns with mechanical inputs. For instance, the brake activation combined with a drop in speed and a sudden pitch shift strengthens the evidence of an emergency stop or potential collision. The throttle and clutch data enhance interpretability during gear shifts or recovery actions. Subsequently, the combined analysis supports earlier detection of anomalies, reduces misclassification during aggressive manoeuvres, and enables system adaptability across vehicle platforms. 
Further advantages of the above-mentioned data include deeper insight into driver behavior, improved robustness of crash and fall detection models, and enhanced predictive capability under varying driving conditions.
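The normalization step mentioned above, in which channels with different units are brought into a consistent range before clustering, can be illustrated with a minimal min-max scaling sketch in Python. This is an illustrative example only; the sample acceleration and speed values are hypothetical:

```python
def minmax_scale(series, lo=0.0, hi=1.0):
    """Scale a signal into [lo, hi] so channels with different units
    (m/s^2, deg/s, km/h) become comparable before clustering."""
    mn, mx = min(series), max(series)
    if mx == mn:
        return [lo for _ in series]  # constant channel: map to the lower bound
    return [lo + (hi - lo) * (x - mn) / (mx - mn) for x in series]

accel = [-4.2, -0.3, 0.0, 0.2, 1.1]      # longitudinal acceleration, m/s^2
speed = [12.0, 35.0, 48.0, 60.0, 61.5]   # vehicle speed, km/h
scaled = [minmax_scale(accel), minmax_scale(speed)]
```

After scaling, a Euclidean distance metric no longer lets the numerically larger speed channel dominate the acceleration channel, which is why this step precedes the clustering module.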
Referring to figure 2, in accordance with an embodiment, there is described a system 100 for detecting a crash and/or fall event of a vehicle. The system 100 comprises at least one Inertial Measurement Unit (IMU) sensor 102 configured to sense multi-dimensional inertial data of the vehicle, a vehicle communication interface 104 configured to retrieve vehicle operational data from a Controller Area Network (CAN), a data processing unit 106 communicably coupled to the IMU sensor 102 and the vehicle communication interface 104, and an output interface 108 communicably coupled to the data processing unit 106. Further, the data processing unit 106 is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete behavioral condition of the vehicle. The processing unit 106 comprises a clustering module 110, a rule derivation module 112, and a Graph Neural Network (GNN) engine 114, and integrates the three modules to enable structured analysis and prediction of the vehicle behavior. Specifically, the clustering module 110 receives multi-dimensional inertial and operational data and applies unsupervised learning algorithms to segment the input into discrete behavioral patterns. Furthermore, the algorithms such as DBSCAN, K-Means, or Gaussian mixture models analyze feature vectors derived from time-series signals, grouping similar events into clusters that represent ride states, such as, but not limited to, acceleration, deceleration, turning, instability, or crash. Furthermore, the rule derivation module 112 processes the clustered data to extract association rules using techniques such as Apriori or FP-Growth.
Furthermore, each rule represents a frequent co-occurrence between specific features or cluster transitions, filtered by thresholds for support, confidence, and lift. The rules form a knowledge base that reflects real-world vehicle dynamics and are used to establish deterministic relationships among ride states. The module encodes the relationships as edge definitions for the GNN engine 114, allowing the creation of a directed graph structure in which behavioral dependencies are explicitly defined. The GNN engine 114 uses the rule-derived graph structure to perform node and edge-level learning through message-passing algorithms. Furthermore, each node represents a ride state, and each edge encodes a transition based on historical data. The training involves updating node embeddings using rule-informed edges to reflect temporal and contextual dependencies. The configuration enables future state prediction by analyzing the evolving graph of vehicle behavior. Advantageously, the combination of the above-mentioned modules provides high accuracy in modeling temporal sequences, enhanced interpretability through rule-based graph formation, and scalable deployment across various datasets without requiring manual labelling.
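The encoding of rules as edge definitions can be sketched, purely for illustration, as a small Python routine that converts (antecedent, consequent, confidence) triples into a directed adjacency map. The rule triples and state names below are hypothetical examples, not values from the disclosure:

```python
def rules_to_graph(rules):
    """Encode association rules (antecedent, consequent, confidence)
    as a directed adjacency map; confidence becomes the edge weight."""
    graph = {}
    for ante, cons, conf in rules:
        graph.setdefault(ante, {})[cons] = conf
        graph.setdefault(cons, {})  # ensure every state appears as a node
    return graph

# Hypothetical mined rules: state A => state B with a confidence score.
rules = [
    ("hard_brake", "pitch_forward", 0.95),
    ("pitch_forward", "fall", 0.40),
    ("cruise", "hard_brake", 0.20),
]
graph = rules_to_graph(rules)
```

The resulting adjacency map is the directed graph structure referred to above: every ride state becomes a node, and every rule becomes a weighted, directed edge.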
In an embodiment, the clustering module 110 is configured to segment the multi-dimensional inertial data and the vehicle operational data into a plurality of clusters based on ride pattern characteristics. Specifically, the clustering module 110 receives pre-processed multi-dimensional inertial data and vehicle operational data to identify recurring ride patterns. The feature vectors are generated from time-series inputs using statistical descriptors, frequency-domain transforms, and signal morphology indicators. The vectors are normalized and fed into clustering algorithms such as, but not limited to, DBSCAN for density-based grouping, K-Means for centroid-based segmentation, or Spectral Clustering for capturing non-linear relationships. The clustering module 110 groups data into clusters, with each cluster representing a distinct ride pattern with shared dynamic and operational characteristics. The segmentation is performed in temporal windows to preserve the time-ordering of events, enabling detection of short-duration anomalies or transitions between behavioral states. Further, the cluster labeling is guided by heuristic thresholds or data-driven rules, distinguishing conditions such as cruising, braking, cornering, tipping, or impact. Furthermore, each cluster acts as a marker of a consistent behavioral signature, forming the foundation for rule derivation and GNN modeling. Advantageously, the clustering based on ride pattern characteristics enables unsupervised recognition of meaningful behavior without reliance on predefined labels. Further, the real-time segmentation supports early detection of transitions, making the system responsive to sudden deviations. The benefits of clustering include automatic discovery of behavior modes, improved resilience to sensor noise through pattern generalization, and adaptability to diverse driving styles and terrains. 
Further, the clustering ensures interpretability of vehicle behavior through compact state representation, improving both downstream decision-making and prediction accuracy.
In an embodiment, the rule derivation module 112 is configured to receive the plurality of clusters and generate a set of association rules from the clustered data, wherein the rules satisfy a predefined frequency threshold and are determined using associative rule mining. The rule derivation module 112 receives the plurality of clusters generated from multi-dimensional inertial and vehicle operational data and applies associative rule mining techniques to extract logical relationships between clustered patterns. Specifically, each cluster, representing a discrete ride state, is treated as a transaction containing multiple features or event labels. Further, the module utilizes algorithms such as Apriori, FP-Growth, or ECLAT to identify frequent item sets, where each item set signifies co-occurring behavioral traits or state transitions. Further, the association rules are constructed by analyzing conditional dependencies between the item sets. Furthermore, the rules are filtered based on predefined thresholds for support, confidence, and lift to ensure statistical relevance and practical applicability. The support quantifies a rule's appearance in the dataset, the confidence measures the probability of the consequence given the antecedent, and lift evaluates the rule's significance relative to random occurrence. Furthermore, the high-confidence rules linking specific ride states with crash-indicative patterns or instability transitions are prioritized. Consequently, the final rule set captures both frequent and context-sensitive behavioral sequences, forming a structured representation of vehicle dynamics. The rule derivation process provides an interpretable, data-driven foundation for understanding vehicle behavior. Furthermore, the association rules reveal high-impact feature combinations and critical transitions, enhancing transparency in system decision-making.
The advantages of the association rules include low computational overhead during inference, adaptability to evolving datasets, and compatibility with graph-based learning architectures. Further, the rule-based modeling improves system robustness by embedding explainable logic into behavioral state prediction, enabling accurate, traceable, and context-aware detection of crash and fall events.
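The lift metric used for filtering can be made concrete with a short illustrative calculation. The percentages below are hypothetical figures chosen only to show how lift compares a rule's confidence against chance co-occurrence:

```python
def lift(sup_ab, sup_a, sup_b):
    """Lift of rule A => B: confidence(A => B) divided by support(B).
    Values above 1 indicate A and B co-occur more often than chance."""
    confidence = sup_ab / sup_a
    return confidence / sup_b

# Hypothetical figures: hard braking appears in 40% of ride windows,
# fall events in 5%, and the two together in 4% of windows.
value = lift(0.04, 0.40, 0.05)
```

Here the rule's confidence is 0.04 / 0.40 = 0.10, double the 0.05 base rate of falls, giving a lift of 2. A lift well above 1 is what marks the rule as significant rather than coincidental.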
In an embodiment, the GNN engine 114 is configured to receive the set of association rules and initiate model training of a neural network by utilizing the set of association rules, via a machine learning algorithm. The GNN engine 114 receives the association rules derived from clustered data and transforms the rules into a graph structure for training the Graph Neural Network (GNN) model. Specifically, each rule defines a directed relationship between antecedent and consequent behavioral states, represented as edges between nodes in the graph. The nodes correspond to ride states identified from the clustering process, and edges capture probabilistic or deterministic transitions guided by rule confidence and support. The graph acts as the structural backbone for message propagation during the training phase. Furthermore, the training of the Graph Neural Network proceeds through iterative message-passing algorithms, with each node's embeddings updated based on information aggregated from neighbouring nodes and associated edge weights. The algorithms, such as GraphSAGE, GCN (Graph Convolutional Network), or GAT (Graph Attention Network), are applied to learn low-dimensional vector representations for each ride state, preserving both local and global context. Further, the node features include statistical descriptors of the corresponding cluster, and the edge features encode the strength and frequency of the transition rule. The integration of association-rule-informed graphs into GNN training enables behavior prediction with high interpretability.
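One round of the message passing described above can be sketched, as an illustration only, as a weighted blend of each node's embedding with the mean of its in-neighbors' embeddings. The state names, two-dimensional embeddings, and the blending factor `alpha` are all hypothetical simplifications of what a production GraphSAGE/GCN/GAT layer would learn:

```python
def message_pass(features, edges, alpha=0.5):
    """One round of weighted message passing: each node's embedding is
    blended with the weighted mean of its in-neighbors' embeddings."""
    out = {}
    for node, feat in features.items():
        msgs = [(w, features[src]) for src, dst, w in edges if dst == node]
        if not msgs:
            out[node] = feat  # no in-edges: embedding is unchanged
            continue
        total = sum(w for w, _ in msgs)
        agg = [sum(w * f[i] for w, f in msgs) / total for i in range(len(feat))]
        out[node] = [alpha * a + (1 - alpha) * b for a, b in zip(feat, agg)]
    return out

# Hypothetical ride-state nodes with 2-D embeddings and rule-derived edges.
features = {"cruise": [1.0, 0.0], "brake": [0.0, 1.0], "fall": [0.0, 0.0]}
edges = [("cruise", "brake", 1.0), ("brake", "fall", 0.8)]
updated = message_pass(features, edges)
```

After one round, the "fall" node's embedding has absorbed information from the "brake" node that precedes it, which is the mechanism by which transition context propagates through the graph.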
In an embodiment, the GNN engine 114 is configured to generate a plurality of nodes for the trained neural network, with each node representing a ride state determined based on the derived rules. Specifically, the GNN engine 114 generates a plurality of nodes in the trained neural network, with each node representing a distinct ride state derived from the previously formed association rules. The ride states are obtained from clustered data, with each cluster encapsulating a specific behavioral signature such as braking, turning, instability, or fall transition. Further, the association rules link the clusters by identifying frequent sequential or conditional occurrences. Each valid rule establishes the existence of one or more ride states, which are instantiated as nodes during graph construction. Furthermore, each node is initialized with feature embeddings that summarize the statistical and temporal characteristics of the corresponding ride state. The features may include, but are not limited to, average acceleration, angular velocity, frequency of occurrence, and contextual indicators from operational inputs. Furthermore, the embeddings evolve through training by exchanging information with neighbouring nodes across connected edges. The message-passing iterations refine node representations to capture nuanced relationships between different states, enabling precise encoding of vehicle dynamics and behavioral evolution. Furthermore, the node-based modeling enables structured learning with each behavior state maintaining a distinct and interpretable identity. The graph-based learning enhances generalization across various ride conditions by preserving relational structure. Advantages of trained neural networks include modular representation of vehicle behavior, improved traceability through state-specific embeddings, and high-resolution modeling of complex transitions.
Moreover, the node generation rooted in derived rules ensures alignment with real-world driving patterns, reducing model ambiguity and strengthening inference performance in crash or fall detection tasks.
In an embodiment, the GNN engine 114 is configured to generate a plurality of edges for the trained neural network, with each edge representing a transition between ride states based on the plurality of clusters. The GNN engine 114 generates a plurality of edges for the trained neural network, with each edge representing a transition between ride states identified from the clustering process. Further, the transitions are extracted from temporal sequences of clustered data and reinforced using association rules that define the co-occurrence and progression of behavior patterns. Furthermore, each edge links a pair of nodes corresponding to source and destination ride states, and the directionality captures the temporal flow of vehicle dynamics. Specifically, the edges are embedded with features that encode transition-specific properties such as frequency, duration, and statistical confidence derived from rule mining. The above-mentioned attributes influence the message-passing process during training, enabling context-aware updates of node embeddings. Furthermore, the overall graph structure evolves into a temporal knowledge graph of vehicle behavior, where edge connectivity reflects both data-driven evidence and rule-based logic. Beneficially, the edge-based modeling enables precise representation of dynamic state evolution, capturing how one behavior leads to another under real-world conditions. Advantages of generating the edges include improved learning of progression patterns critical for crash and fall detection, reduced reliance on dense labelling, and increased transparency through interpretable edge weights. Further, the edges derived from clusters and rules ensure consistency with actual operational sequences, resulting in a robust and scalable framework for behavioral state analysis.
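The extraction of transition edges from temporal sequences of cluster labels can be sketched, for illustration only, by counting consecutive state pairs and normalizing the counts into per-source transition probabilities. The example state sequence is hypothetical:

```python
from collections import Counter

def transition_edges(state_sequence):
    """Count consecutive state pairs and normalize into per-source
    transition probabilities, which serve as directed edge weights."""
    counts = Counter(zip(state_sequence, state_sequence[1:]))
    totals = Counter()
    for (src, _), c in counts.items():
        totals[src] += c
    return {(src, dst): c / totals[src] for (src, dst), c in counts.items()}

# Hypothetical sequence of cluster labels over consecutive time windows.
seq = ["cruise", "cruise", "brake", "cruise", "brake", "fall"]
edges = transition_edges(seq)
```

Each resulting weight estimates how often one ride state is followed by another, so that edge connectivity reflects the observed temporal flow of vehicle behavior.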
In an embodiment, the GNN engine 114 is configured to detect a future operating state of the vehicle based on the trained neural network and the corresponding transition edges. The GNN engine 114 detects a future operating state of the vehicle by performing inference over the trained graph structure, with each node representing a current ride state and each edge encoding a possible transition. Further, the input data from real-time inertial and operational sources is mapped to the most relevant node using similarity metrics derived from feature embeddings. Subsequently, as the current state is identified, the engine 114 traverses connected edges to estimate the probable subsequent state based on learned transition probabilities and context-dependent edge weights. Furthermore, the prediction utilizes message-passing mechanisms and temporal graph traversal techniques. The node embeddings are propagated through the graph using edge-conditioned filters that account for both structural topology and dynamic context. Specifically, the probabilistic reasoning is applied to evaluate each possible transition path, incorporating both direct and multi-hop relations. The highest-ranking node along a valid transition path is selected as the future operating state, providing predictive insight into the vehicle's next behavioral condition. Beneficially, the future state prediction enables proactive identification of high-risk scenarios, such as, but not limited to, transitions leading to loss of balance, instability, or collision. Further, the anticipation of the states supports early-warning systems and enhances real-time safety interventions. Advantages of the state prediction include low-latency prediction, contextual accuracy through graph-based representation, and adaptability to new behavior patterns without extensive reconfiguration.
Moreover, the predictive inference grounded in a trained neural graph ensures data-driven decision-making with structural transparency and high interpretability.
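The simplest form of the inference step above, selecting the highest-ranking successor along outgoing transition edges, can be illustrated with the following sketch. It is a one-hop simplification of the multi-hop probabilistic traversal described in the disclosure, and the edge weights are hypothetical:

```python
def predict_next(current, edges):
    """Return the most probable successor of `current`, given weighted
    transition edges {(src, dst): probability}; None if no edge leaves it."""
    candidates = {dst: p for (src, dst), p in edges.items() if src == current}
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical learned transition probabilities between ride states.
edges = {
    ("cruise", "brake"): 0.6, ("cruise", "cruise"): 0.4,
    ("brake", "fall"): 0.7, ("brake", "cruise"): 0.3,
}
```

For example, once the current state is identified as "brake", the highest-weight outgoing edge points to "fall", which is the kind of high-risk forecast that can trigger an early-warning intervention.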
In accordance with a second aspect, there is described a method of detecting a crash of a vehicle, the method comprises:
- segmenting multi-dimensional inertial data and vehicle operational data into a plurality of clusters, via a clustering module;
- generating a set of association rules from the clustered data, via a rule derivation module;
- initiating model training of a neural network by utilizing the set of association rules based on a machine learning algorithm, via a GNN engine;
- generating a plurality of nodes and edges for the trained neural network, via the GNN engine; and
- detecting a future operating state of the vehicle based on the trained neural network, via the GNN engine.
Figure 3 describes a method of detecting a crash of a vehicle. The method 200 starts at a step 202. At the step 202, the method comprises segmenting multi-dimensional inertial data and vehicle operational data into a plurality of clusters, via a clustering module 110. At a step 204, the method comprises generating a set of association rules from the clustered data, via a rule derivation module 112. At a step 206, the method comprises initiating model training of a neural network by utilizing the set of association rules based on a machine learning algorithm, via a GNN engine 114. At a step 208, the method comprises generating a plurality of nodes and edges for the trained neural network, via the GNN engine 114. At a step 210, the method comprises detecting a future operating state of the vehicle based on the trained neural network, via the GNN engine 114. The method 200 ends at the step 210.
In an embodiment, the method 200 comprises scaling the received input data into a consistent range, via the data processing unit 106.
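The scaling step can be illustrated with a minimal per-feature min-max normalizer. The function name and the [0, 1] target range are assumptions; the embodiment only requires that the input data be scaled into a consistent range:

```python
def min_max_scale(samples):
    """Scale each feature column of a list of samples into [0, 1]."""
    cols = list(zip(*samples))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [
            # Guard against constant columns to avoid division by zero.
            (v - lo) / (hi - lo) if hi != lo else 0.0
            for v, lo, hi in zip(row, mins, maxs)
        ]
        for row in samples
    ]
```

Such normalization keeps heterogeneous signals (e.g. acceleration in m/s² and speed in km/h) comparable before clustering.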
In an embodiment, the method 200 comprises clustering correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete behavioral condition of the vehicle.
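A minimal sketch of how correlated sensor samples could be grouped into discrete operating states follows, here using plain k-means. The algorithm choice, feature layout, and cluster count are assumptions not fixed by the disclosure:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over lists of feature vectors; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each sample to its nearest centroid (squared Euclidean distance).
        labels = [
            min(range(k),
                key=lambda j: sum((p - c) ** 2
                                  for p, c in zip(pt, centroids[j])))
            for pt in points
        ]
        # Recompute each centroid as the mean of its assigned members.
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, labels
```

Each resulting cluster would then correspond to one candidate operating state (e.g. steady cruising versus abrupt deceleration).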
In an embodiment, the method 200 comprises segmenting multi-dimensional inertial data and vehicle operational data into a plurality of clusters, via a clustering module 110. Furthermore, the method 200 comprises generating a set of association rules from the clustered data, via a rule derivation module 112. Furthermore, the method 200 comprises initiating model training of a neural network by utilizing the set of association rules based on a machine learning algorithm, via a GNN engine 114. Furthermore, the method 200 comprises generating a plurality of nodes and edges for the trained neural network, via the GNN engine 114. Furthermore, the method 200 comprises detecting a future operating state of the vehicle based on the trained neural network, via the GNN engine 114.
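The rule-derivation step can be illustrated with a minimal support/confidence miner over clustered event labels. The transaction format, the thresholds, and the restriction to one-antecedent rules are simplifying assumptions; the embodiment refers generally to associative rule mining with a predefined frequency threshold:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Derive simple one-antecedent association rules (A -> B).

    Returns tuples of (antecedent, consequent, support, confidence).
    """
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter(
        pair for t in transactions
        for pair in combinations(sorted(set(t)), 2)
    )
    rules = []
    for (a, b), cnt in pair_counts.items():
        support = cnt / n
        if support < min_support:
            continue  # pair does not meet the frequency threshold
        for ante, cons in ((a, b), (b, a)):
            confidence = cnt / item_counts[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules
```

A rule such as "tilt event implies hard braking" with high confidence could then seed a node or edge in the ride-state graph.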
Based on the above-mentioned embodiments, the present disclosure provides significant advantages such as (but not limited to) enhanced efficiency in detecting a crash and/or fall event of a vehicle, and real-time adjustment of vehicle parameters based on model training, thereby ensuring overall safety and performance of the vehicle.
It would be appreciated that all the explanations and embodiments of the system 100 also apply mutatis mutandis to the method 200.
In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms “disposed,” “mounted,” and “connected” are to be construed broadly, and may for example be fixedly connected, detachably connected, or integrally connected, either mechanically or electrically. They may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Modifications to embodiments and combinations of different embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, and “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings, and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
CLAIMS:
WE CLAIM:
1. A system (100) for detecting a crash of a vehicle, the system (100) comprises:
- at least one Inertial Measurement Unit (IMU) sensor (102) configured to sense multi-dimensional inertial data of the vehicle;
- a vehicle communication interface (104) configured to retrieve vehicle operational data from a Controller Area Network (CAN);
- a data processing unit (106) communicably coupled to the IMU sensor (102) and the vehicle communication interface (104); and
- an output interface (108) communicably coupled to the data processing unit (106),
wherein the data processing unit (106) is configured to cluster correlated multi-dimensional inertial data and vehicle operational data into a plurality of operating states, with each operating state representing a discrete operational pattern of the vehicle.

2. The system (100) as claimed in claim 1, wherein the multi-dimensional inertial data of the vehicle comprises at least one of acceleration, angular velocity, vehicle orientation, and tilt angle, and wherein the vehicle operational data comprises at least one of speed, brake activation, throttle position, and clutch status.

3. The system (100) as claimed in claim 1, wherein the data processing unit (106) comprises a clustering module (110), a rule derivation module (112), and a Graph Neural Network (GNN) engine (114).

4. The system (100) as claimed in claim 3, wherein the clustering module (110) is configured to segment the multi-dimensional inertial data and the vehicle operational data into a plurality of clusters based on ride pattern characteristics.

5. The system (100) as claimed in claim 3, wherein the rule derivation module (112) is configured to receive the plurality of clusters and generate a set of association rules from the clustered data, wherein the rules satisfy a predefined frequency threshold and are determined using associative rule mining techniques.

6. The system (100) as claimed in claim 3, wherein the GNN engine (114) is configured to receive the set of association rules and initiate model training of a neural network by utilizing the set of association rules, via a machine learning algorithm.

7. The system (100) as claimed in claim 3, wherein the GNN engine (114) is configured to generate a plurality of nodes for the trained neural network, with each node representing a ride state determined based on the derived rules.

8. The system (100) as claimed in claim 3, wherein the GNN engine (114) is configured to generate a plurality of edges for the trained neural network, with each edge representing a transition between ride states based on the plurality of clusters.

9. The system (100) as claimed in claim 3, wherein the GNN engine (114) is configured to detect a future operating state of the vehicle based on the trained neural network and the corresponding transition edges.

10. A method (200) of detecting a crash of a vehicle, the method (200) comprising:
- segmenting multi-dimensional inertial data and vehicle operational data into a plurality of clusters, via a clustering module (110);
- generating a set of association rules from the clustered data, via a rule derivation module (112);
- initiating model training of a neural network by utilizing the set of association rules based on a machine learning algorithm, via a GNN engine (114);
- generating a plurality of nodes and edges for the trained neural network, via the GNN engine (114); and
- detecting a future operating state of the vehicle based on the trained neural network, via the GNN engine (114).

Documents

Application Documents

# Name Date
1 202421061348-PROVISIONAL SPECIFICATION [13-08-2024(online)].pdf 2024-08-13
2 202421061348-FORM FOR SMALL ENTITY(FORM-28) [13-08-2024(online)].pdf 2024-08-13
3 202421061348-FORM 1 [13-08-2024(online)].pdf 2024-08-13
4 202421061348-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-08-2024(online)].pdf 2024-08-13
5 202421061348-DRAWINGS [13-08-2024(online)].pdf 2024-08-13
6 202421061348-DECLARATION OF INVENTORSHIP (FORM 5) [13-08-2024(online)].pdf 2024-08-13
7 202421061348-FORM-5 [17-06-2025(online)].pdf 2025-06-17
8 202421061348-DRAWING [17-06-2025(online)].pdf 2025-06-17
9 202421061348-COMPLETE SPECIFICATION [17-06-2025(online)].pdf 2025-06-17
10 202421061348-FORM-9 [18-06-2025(online)].pdf 2025-06-18
11 Abstract.jpg 2025-07-02
12 202421061348-Proof of Right [15-09-2025(online)].pdf 2025-09-15