A System And Method For Assisting A Rider Of A Vehicle To Prevent A Collision

Abstract: The present invention provides a system (100) and method (200) for assisting a rider of a vehicle (101) to prevent a collision. The system (100) comprises at least one image capturing unit (102) configured for capturing one or more images and/or videos of an environment (103) surrounding the vehicle (101) driven by the rider in real time. The system further comprises an electronic control unit (104) in communication with the at least one image capturing unit (102). The electronic control unit (104) is configured for detecting one or more objects of interest in the environment (103) from the one or more captured images and/or videos, determining an intention and/or location of the one or more objects of interest and a level of risk associated with the intention and/or location of the one or more objects of interest, and performing one or more pre-defined actions corresponding to the determined level of risk. Reference Figure 1

Patent Information

Filing Date: 31 May 2022
Publication Number: 48/2023
Publication Type: INA
Invention Field: ELECTRONICS

Applicants

TVS MOTOR COMPANY LIMITED
“Chaitanya” No.12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu India

Inventors

1. Shaun Qien Yeau Tan
“Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600 006 India

Specification

Description:
FIELD OF THE INVENTION
[001] The present invention relates to a system and a method for assisting a rider of a vehicle to prevent an accident/collision. More particularly, the present invention relates to a system and a method which determine the intention and/or location of one or more objects of interest surrounding the rider, for assisting the rider of the vehicle to prevent a collision.

BACKGROUND OF THE INVENTION
[002] In the modern era, the increase in the number of road accidents/collisions is a major societal concern. Some examples of scenarios which may pose a safety risk to a rider of a vehicle include sudden opening of car doors by peer riders, sudden lane switching by peer vehicles, pedestrians suddenly stepping out in front of the vehicle, sudden overtaking maneuvers by peer vehicles, vehicle path crossing and the like.
[003] Often, the rider of a vehicle may not be able to detect the presence of peer vehicles/pedestrians on a road, for example, when the peer vehicles/pedestrians are in the rider's blind spots, which may lead to accidents/collisions. Even when the rider of the vehicle detects the presence of the peer vehicles/pedestrians, the intention of the peer riders/pedestrians is often not discernible by the rider. This issue is further complicated by the emergence of self-driving cars.
[004] Also, the actions of the peer riders/pedestrians may be so subtle that it is not easy for the rider of the vehicle to discern them. For example, the intention of a pedestrian to cross the road, or the intention of a peer rider to open a car door, may not be easily detected or discerned by the rider of the vehicle, which may lead to accidents/collisions. Also, the rider of the vehicle is often not able to detect the presence of obstacles on the road such as trees, rocks, pits, speed breakers etc. Even if the rider is able to detect the presence of the obstacles, it may be difficult for the rider to ascertain how the obstacles may damage the vehicle and/or cause accidents/collisions.
[005] In view of the foregoing, there is a felt need to overcome the above-mentioned disadvantages of the prior art.

SUMMARY OF THE INVENTION
[006] In one aspect of the present invention, a system for assisting a rider of a vehicle to prevent a collision is disclosed.
[007] The system comprises an electronic control unit and an image capturing unit. The electronic control unit and the image capturing unit are mounted on the vehicle and are in communication with each other. It is to be understood that the term “image capturing unit” includes a single image capturing unit mounted on the vehicle or multiple image capturing units mounted on the vehicle. In an embodiment, the image capturing unit may be a monocular camera capable of recording images and/or videos in real time. In an embodiment, a first image capturing unit is mounted at a front portion of the vehicle and a second image capturing unit is mounted at a rear portion of the vehicle.
[008] The images and/or videos captured by the image capturing unit are transmitted to the electronic control unit. The electronic control unit is configured for detecting objects of interest in the environment surrounding the vehicle from the one or more captured images and/or videos. It is to be understood that the term “objects of interest” can be a single object of interest or multiple objects of interest in the images and/or videos captured by the image capturing unit.
[009] On detecting the objects of interest, the electronic control unit is further configured to determine an intention and/or location of the detected objects of interest and a level of risk associated with the intention and/or location of the detected objects of interest. Based on the determination, the electronic control unit is configured to perform pre-defined actions corresponding to the determined level of risk. The term “pre-defined actions” include a single pre-defined action or multiple pre-defined actions.
[010] In an embodiment, the electronic control unit comprises an object of interest detection unit, a feature extraction unit, an intention and location classification unit and a risk level determination unit.
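By way of a non-limiting illustration, the four sub-units of paragraph [010] can be pictured as a simple processing chain. The following Python sketch is an editorial aid, not part of the specification; all class and method names are assumptions, and each unit would in practice wrap a trained model.

```python
# Minimal sketch of the four-stage ECU pipeline described in [010].
# Class and method names are illustrative assumptions only.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str       # e.g. "pedestrian", "peer_vehicle", "obstacle"
    location: tuple  # assumed (x, y) position relative to the vehicle, metres

class ElectronicControlUnit:
    """Chains detection -> feature extraction -> intention/location
    classification -> risk determination, mirroring paragraph [010]."""

    def __init__(self, detector, extractor, classifier, risk_unit):
        self.detector = detector      # object of interest detection unit
        self.extractor = extractor    # feature extraction unit
        self.classifier = classifier  # intention and location classification unit
        self.risk_unit = risk_unit    # risk level determination unit

    def process_frame(self, frame) -> None:
        # Each captured frame flows through all four units in order.
        objects: List[Detection] = self.detector.detect(frame)
        for obj in objects:
            features = self.extractor.extract(frame, obj)
            intention = self.classifier.classify(features)
            self.risk_unit.act_on(self.risk_unit.level(intention, features))
```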
[011] The object of interest detection unit is in communication with the image capturing unit and detects objects of interest in the environment surrounding the vehicle from the one or more captured images and/or videos. The objects of interest detected by the object of interest detection unit are objects which pose a threat of collision to the vehicle. The objects of interest in the environment surrounding the vehicle may be peer vehicles, pedestrians and/or obstacles. The obstacles may be animals, trees, stones, footpaths, dividers, signboards, signals, barricades, potholes, diversions, rocks, speed breakers, pits and the like.
[012] The feature extraction unit is in communication with the object of interest detection unit and is configured to extract features of the objects of interest. In an embodiment, the features extracted by the feature extraction unit may comprise a gaze direction of the pedestrians, gait characteristics of the pedestrians, a direction faced by the pedestrians, posture of the pedestrians, actions performed by the pedestrians such as looking at or speaking into a mobile phone, indicating the alertness of the pedestrians on the road, actions performed by riders of the peer vehicles such as looking at or speaking into a mobile phone, indicating the alertness of the peer riders on the road, motion parameters of peer vehicles such as speed of the peer vehicles, an indicator light status of the peer vehicles, a hazard light status of the peer vehicles, dimensions of the obstacles and/or distance of the peer vehicles, pedestrians and obstacles from the vehicle. This list of extracted features should not be construed as limiting and may comprise other extracted features of the peer riders, peer vehicles, pedestrians and/or obstacles which are significant in determining the intention and/or location of the peer riders, peer vehicles, pedestrians and/or obstacles. In an embodiment, the features of the peer riders, peer vehicles, pedestrians and/or obstacles are extracted using artificial intelligence and deep learning techniques.
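The enumerated features lend themselves to a structured per-object record. The sketch below is illustrative only; the field names, types and units are assumptions, and a real feature extraction unit would populate such a record for each detected object in each frame.

```python
# Illustrative container for the extracted features enumerated in [012];
# field names and units are assumptions made for this sketch.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ExtractedFeatures:
    gaze_direction: Optional[Tuple[float, float]] = None  # unit vector, pedestrians
    gait_speed_mps: Optional[float] = None                # gait characteristic
    posture: Optional[str] = None                         # e.g. "upright", "leaning"
    using_phone: Optional[bool] = None                    # alertness proxy
    peer_speed_kmph: Optional[float] = None               # peer vehicle motion parameter
    indicator_on: Optional[bool] = None                   # indicator light status
    hazard_on: Optional[bool] = None                      # hazard light status
    obstacle_dims_m: Optional[Tuple[float, float, float]] = None  # height, width, depth
    distance_m: Optional[float] = None                    # range from the vehicle
```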
[013] The intention and location classification unit is in communication with the feature extraction unit and configured for determining the intention and/or location of the objects of interest and classifying the intention and/or the location of the objects of interest based on the extracted features of the objects of interest. In an embodiment, the determination of the intention and/or location and classification of the same by the intention and location classification unit is performed using artificial intelligence or deep learning techniques.
[014] The extracted features include actions performed by the peer riders and/or pedestrians and the intensity with which such actions are performed. In an embodiment, the classification may include pedestrians intending to cross a road, pedestrians crossing the road, pedestrians walking on a footpath, peer vehicles performing an overtaking maneuver, peer vehicles performing a braking maneuver, peer vehicles performing a turning maneuver, peer vehicles with one or more doors opened or intended to be opened, peer vehicles performing a lane change maneuver, peer vehicles crossing the path of the vehicle, peer vehicles in a blind spot of the rider, pedestrians in a blind spot of the rider, pedestrians performing an action on a mobile phone, peer riders performing an action on a mobile phone, trees blocking the path of the vehicle, rocks blocking the path of the vehicle, speed breakers having a height greater than a pre-defined height, and pits having a depth greater than a pre-defined depth. This list of classifications should not be construed as limiting and may comprise other classifications based on the intention and/or location of the objects of interest.
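For illustration, the classification categories of paragraph [014] can be encoded as a finite label set. The enumeration below abridges the expressly non-limiting list; the member names are assumptions.

```python
# One possible encoding of the intention/location classes listed in [014];
# the specification leaves the list open-ended, so this is an abridgement.
from enum import Enum, auto

class IntentionClass(Enum):
    PEDESTRIAN_INTENDS_TO_CROSS = auto()
    PEDESTRIAN_CROSSING = auto()
    PEDESTRIAN_ON_FOOTPATH = auto()
    PEER_OVERTAKING = auto()
    PEER_BRAKING = auto()
    PEER_TURNING = auto()
    PEER_DOOR_OPENING = auto()
    PEER_LANE_CHANGE = auto()
    PEER_CROSSING_PATH = auto()
    OBJECT_IN_BLIND_SPOT = auto()
    DISTRACTED_BY_PHONE = auto()
    PATH_BLOCKED_BY_OBSTACLE = auto()
    SPEED_BREAKER_ABOVE_LIMIT = auto()
    PIT_DEEPER_THAN_LIMIT = auto()
```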
[015] The risk level determination unit is in communication with the intention and location classification unit and is configured to determine the level of risk associated with the intention and/or location of the objects of interest based on the classification performed by the intention and location classification unit. In an embodiment, the risk level for each classification may be pre-configured in the system or may be determined in real time using artificial intelligence or deep learning techniques. Based on the determined level of risk associated with the objects of interest, the risk level determination unit performs pre-defined actions to assist the rider in preventing an accident/collision.
[016] In an embodiment, the pre-defined actions may comprise a first pre-defined action when the determined level of risk is greater than a first pre-defined level and a second pre-defined action when the determined level of risk is greater than a second pre-defined level. The second pre-defined level of risk is greater than the first pre-defined level of risk.
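The two-threshold policy of paragraphs [016] to [019] reduces to a simple comparison, sketched below. The numeric thresholds and action names are placeholders; the specification does not fix their values.

```python
# Sketch of the two-threshold action policy in [016]-[019]; the threshold
# values are assumed for illustration and are not given in the specification.
FIRST_PREDEFINED_LEVEL = 0.5   # assumed normalized first risk threshold
SECOND_PREDEFINED_LEVEL = 0.8  # higher second threshold, per [016]

def select_actions(risk: float) -> list:
    actions = []
    if risk > FIRST_PREDEFINED_LEVEL:
        # First pre-defined action: warn others and the rider ([017], [018]).
        actions += ["broadcast_location", "warn_rider"]
    if risk > SECOND_PREDEFINED_LEVEL:
        # Second pre-defined action: control the vehicle ([019]).
        actions.append("emergency_braking")
    return actions

print(select_actions(0.6))  # -> ['broadcast_location', 'warn_rider']
print(select_actions(0.9))  # -> all three actions
```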
[017] In an embodiment, the first pre-defined action may be performed by a radar signal generator unit and/or a rider warning unit. The radar signal generator unit is mounted on the vehicle and is in communication with the electronic control unit/risk level determination unit. The radar signal generator unit is configured for broadcasting location of the vehicle to the peer vehicles and/or the pedestrians when the determined level of risk is greater than the first pre-defined level. The radar signal generator unit warns the peer vehicles and/or pedestrians to change their course of ride or course of action to avoid the vehicle in order to prevent the accident/collision.
[018] The rider warning unit is also mounted on the vehicle and is in communication with the electronic control unit. The rider warning unit is configured for transmitting warning signals to the rider of the vehicle when the determined level of risk is greater than the first pre-defined level. The warning signals may include visual, audio and/or haptic feedback to the rider of the vehicle. The rider warning unit warns the rider to change his course of ride or course of action to avoid the peer vehicles and/or pedestrians in order to prevent the accident/collision.
[019] The second pre-defined action may be performed by a vehicle braking unit. The vehicle braking unit is also in communication with the electronic control unit/risk level determination unit. The vehicle braking unit is configured for activating an emergency braking operation of the vehicle when the determined level of risk is greater than the second pre-defined level. It is to be understood that the second pre-defined action is performed when warning the rider, peer riders and/or pedestrians is not sufficient to prevent the accident/collision. However, this should not be construed as limiting and the second pre-defined action may be performed without performing the first pre-defined action in emergency situations.
[020] In another aspect of the present invention, a method for assisting a rider of a vehicle to prevent a collision is disclosed.
[021] The method comprises a step of capturing images and/or videos of an environment surrounding the vehicle driven by the rider in real time. The step of capturing is performed by a single image capturing unit or multiple image capturing units mounted on the vehicle.
[022] The method further comprises a step of detecting objects of interest in the environment from the one or more images and/or videos captured by the image capturing unit. The step of detecting objects of interest is performed by an electronic control unit. The electronic control unit is in communication with the image capturing unit. The images and/or videos captured by the image capturing unit in real time are transmitted to the electronic control unit.
[023] In an embodiment, the electronic control unit comprises an object of interest detection unit and the step of detecting is performed by the object of interest detection unit. The objects of interest are objects posing a threat of collision to the vehicle. The objects of interest in the environment surrounding the vehicle may be peer vehicles, pedestrians and/or obstacles. The peer vehicles may be bicycles, scooters, motorcycles, cars, trucks and the like. The obstacles may be animals, trees, stones, footpaths, dividers, signboards, signals, barricades, potholes, diversions, rocks, speed breakers, pits and the like.
[024] The method further comprises determining an intention and/or location of objects of interest and a level of risk associated with the intention and/or location of the objects of interest. The step of determining is performed by the electronic control unit.
[025] In an embodiment, the electronic control unit comprises a feature extraction unit and an intention and location classification unit. The step of determining comprises a step of extracting features of the objects of interest and a step of determining and classifying the intention and/or location of the objects of interest based on the extracted features of the objects of interest. The step of extracting is performed by the feature extraction unit and the step of determining the intention and/or location of the objects of interest and classifying the same is performed by the intention and location classification unit.
[026] In an embodiment, the features extracted by the feature extraction unit may comprise a gaze direction of the pedestrians, gait characteristics of the pedestrians, a direction faced by the pedestrians, posture of the pedestrians, actions performed by the pedestrians to indicate the alertness of the pedestrians, actions performed by riders of the peer vehicles to indicate the alertness of the peer riders, motion parameters of peer vehicles such as speed of the peer vehicles, an indicator light status of the peer vehicles, a hazard light status of the peer vehicles, dimensions of the obstacles and/or a distance of the peer vehicles, pedestrians and obstacles from the vehicle. This list of extracted features should not be construed as limiting and may comprise other extracted features of the peer vehicles, peer riders, pedestrians and/or obstacles which are significant in determining the intention and/or location of the peer vehicles, pedestrians and/or obstacles.
[027] In an embodiment, the classifications may be pedestrians intending to cross a road, pedestrians crossing the road, pedestrians walking on a footpath, peer vehicles performing an overtaking maneuver, peer vehicles performing a braking maneuver, peer vehicles performing a turning maneuver, peer vehicles with one or more doors opened or intended to be opened, peer vehicles performing a lane change maneuver, peer vehicles crossing the path of the vehicle, peer vehicles in a blind spot of the rider, pedestrians in a blind spot of the rider, pedestrians performing an action on a mobile phone, peer riders performing an action on a mobile phone, trees blocking the path of the vehicle, rocks blocking the path of the vehicle, speed breakers having a height greater than a pre-defined height, and pits having a depth greater than a pre-defined depth. This list of classifications should not be construed as limiting and may comprise other classifications based on the intention and/or location of the objects of interest.
[028] The method further comprises performing pre-defined actions corresponding to the level of risk determined for intention and/or location of the objects of interest. The step of performing pre-defined actions is performed by the electronic control unit.
[029] In an embodiment, the electronic control unit comprises a risk level determination unit. The step of performing comprises determining the level of risk associated with the one or more objects of interest based on the classification and performing the pre-defined actions corresponding to the determined level of risk. The step of performing is performed by the risk level determination unit.
[030] The pre-defined actions may comprise a first pre-defined action when the determined level of risk is greater than a first pre-defined level and a second pre-defined action when the determined level of risk is greater than a second pre-defined level. The second pre-defined level of risk is greater than the first pre-defined level of risk, as the first pre-defined action is associated with warning the rider of the vehicle, peer riders, peer vehicles and/or pedestrians whereas the second pre-defined action is associated with controlling the vehicle to prevent the accident/collision.
[031] The method further comprises a step of broadcasting the location of the vehicle to the peer riders, peer vehicles and/or pedestrians and/or transmitting warning signals to the rider of the vehicle when the determined level of risk is greater than the first pre-defined level. The step of broadcasting is performed by a radar signal generator unit mounted on the vehicle and in communication with the electronic control unit/risk level determination unit. The step of transmitting warning signals is performed by a rider warning unit mounted on the vehicle and in communication with the electronic control unit/risk level determination unit.
[032] The method further comprises a step of activating an emergency braking operation of the vehicle when the determined level of risk is greater than the second pre-defined level. The step of activating the emergency braking operation is performed by a vehicle braking unit mounted on the vehicle and in communication with the electronic control unit/risk level determination unit.

BRIEF DESCRIPTION OF THE DRAWINGS
[033] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
Figure 1 and Figure 2 are block diagrams of a system for assisting a rider of a vehicle to prevent a collision, in accordance with an embodiment of the present invention.
Figure 3 is a flowchart illustrating a method for assisting a rider of a vehicle to prevent a collision, in accordance with an embodiment of the present invention.
Figure 4a and Figure 4b together form a flowchart illustrating a method for assisting a rider of a vehicle to prevent a collision, in accordance with another embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
[034] Various features and embodiments of the present invention here will be discernible from the following further description thereof, set out hereunder.
[035] It is to be understood that the terms vehicle and peer vehicle include two-wheelers, three-wheelers and four-wheelers, including bicycles, motorcycles, scooters, cars, trucks, rickshaws, and the like. It is to be understood that the vehicle and/or peer vehicles may be vehicles having a conventional internal combustion engine, electric vehicles, or hybrid vehicles. It is also to be understood that the term rider is used for a person riding the vehicle and the term peer rider is used for a person riding a peer vehicle. The term “peer vehicle” means a vehicle in an environment surrounding the vehicle and the term “peer rider” means a person riding the peer vehicle. The environment may comprise peer vehicles driven by peer riders and/or autonomous vehicles, obstacles and pedestrians.
[036] Figure 1 and Figure 2 are block diagrams of a system 100 for assisting a rider of a vehicle 101 to prevent a collision, in accordance with an embodiment of the present invention.
[037] As shown, the system 100 comprises an image capturing unit 102 and an electronic control unit 104. The electronic control unit 104 and the image capturing unit 102 are in communication with each other. The electronic control unit 104 and the image capturing unit 102 are mounted on the vehicle 101. The image capturing unit 102 is configured to capture images and/or videos of an environment 103 surrounding the vehicle 101. It is to be understood that there can be a single image capturing unit or multiple image capturing units mounted on the vehicle 101. It is also to be understood that the image capturing unit 102 can capture one image and/or video or multiple images and/or videos of the environment 103 surrounding the vehicle 101. The images and/or videos are captured in real time. In an embodiment, a first image capturing unit 102 is mounted at a front portion of the vehicle 101 and a second image capturing unit 102 is mounted at a rear portion of the vehicle 101. This embodiment, however, should not be construed as limiting and the image capturing unit 102 can be mounted at other locations on the vehicle 101 such as left and right sides of the vehicle 101. In an embodiment, the image capturing unit 102 is a monocular camera.
[038] The images and/or the videos captured by the image capturing unit 102 in real time are transmitted to the electronic control unit 104. The images and/or the videos captured by the image capturing unit 102 are transmitted to the electronic control unit 104 through transmission modes. For example, standards for Internet and other packet switched network transmission may be used. The network may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network. Further, the network may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof. It is to be understood that the present invention is not limited to transmission modes with any particular standards and protocols. Such modes are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions are considered equivalents thereof.
[039] The electronic control unit 104 is configured for detecting objects of interest in the captured images and/or videos. It is to be understood that there may be a single object of interest or multiple objects of interest in the captured images and/or videos. The objects of interest are objects which may pose a threat of collision to the vehicle 101 driven by the rider. The objects of interest may be peer vehicles, pedestrians, and/or obstacles in the environment 103 surrounding the vehicle 101. The obstacles may be animals, trees, rocks, stones, footpaths, dividers, signboards, signals, barricades, potholes, diversions, speed breakers, pits and the like.
[040] On detection of objects of interest from the captured images and/or videos, an intention and/or location of the objects of interest is determined and a level of risk associated with the intention and/or location of the objects of interest is determined by the electronic control unit 104. Based on the determination, the electronic control unit 104 performs pre-defined actions to prevent accident/collision of the vehicle 101.
[041] In an embodiment, as shown in Figure 2, the electronic control unit 104 comprises an object of interest detection unit 104a, a feature extraction unit 104b, an intention and location classification unit 104c, and a risk level determination unit 104d. The communication between the object of interest detection unit 104a, the feature extraction unit 104b, the intention and location classification unit 104c, and the risk level determination unit 104d, as discussed in subsequent paragraphs, may be established through physical connections or may be established wirelessly.
[042] The object of interest detection unit 104a is in communication with the image capturing unit 102 and is configured to detect objects of interest in the environment 103 surrounding the vehicle 101 from the captured images and/or videos. In an embodiment, the object of interest detection unit 104a detects objects of interest using artificial intelligence and/or deep learning techniques. In an embodiment, the object of interest detection unit 104a classifies the objects (using classification algorithms) in the input images and/or videos captured by the image capturing unit 102 as “of interest or not”. If an object is of interest, the object of interest detection unit 104a then determines a specific location of the object with respect to the vehicle using object localization algorithms.
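For illustration, the two-stage logic of paragraph [042] (classify as “of interest or not”, then localize) might look as follows. The 10 m proximity gate and the helper names are assumptions; a real system would use trained classification and localization models with camera calibration rather than these stubs.

```python
# Sketch of the two-stage logic in [042]: decide whether a detected object
# is "of interest", then localize it relative to the vehicle.
from dataclasses import dataclass
from typing import Optional

INTEREST_RANGE_M = 10.0  # assumed proximity gate; not specified in the text

@dataclass
class RawDetection:
    label: str
    bbox: tuple          # (x1, y1, x2, y2) in image pixels
    distance_m: float    # range estimated from a monocular depth cue

def is_of_interest(det: RawDetection) -> bool:
    # "Of interest or not" [042]: here, threat classes within an assumed range.
    threat_classes = {"peer_vehicle", "pedestrian", "obstacle"}
    return det.label in threat_classes and det.distance_m <= INTEREST_RANGE_M

def localize(det: RawDetection) -> Optional[tuple]:
    # Placeholder for an object-localization algorithm that yields a position
    # with respect to the vehicle; real systems would use calibration data.
    if not is_of_interest(det):
        return None
    cx = (det.bbox[0] + det.bbox[2]) / 2.0  # lateral image coordinate
    return (cx, det.distance_m)
```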
[043] The feature extraction unit 104b is in communication with the object of interest detection unit 104a and is configured to extract a single feature or multiple features of the objects of interest. In an embodiment, the features are extracted by the feature extraction unit using artificial intelligence and deep learning techniques. In an embodiment, the features extracted by the feature extraction unit 104b may comprise a gaze direction of the pedestrians, gait characteristics of the pedestrians, a posture of the pedestrians, actions performed by the pedestrians such as interaction with electronic devices to indicate the alertness of pedestrians while walking, actions performed by peer riders such as interaction with electronic devices to indicate the alertness of peer riders while driving, motion parameters of peer vehicles such as speed of the peer vehicles, an indicator light status of the peer vehicles, a hazard light status of the peer vehicles, dimensions such as height, width, length and/or depth of the obstacles and/or a distance of the peer vehicles, pedestrians and obstacles from the vehicle 101. The list of extracted features should not be construed as limiting and may include other features of the peer vehicles, peer riders, pedestrians and/or obstacles which are significant in determining the intention and/or location of the peer vehicles, pedestrians and/or obstacles.
[044] The intention and location classification unit 104c is in communication with the feature extraction unit 104b. Based on the extracted features of the peer vehicles, pedestrians and/or obstacles, the intention and location classification unit 104c determines the intention and/or location of the objects of interest and classifies the intention and/or location of the objects of interest based on the features extracted by the feature extraction unit 104b. The intention of the peer riders and/or pedestrians is determined based on sudden actions and/or subtle actions of the peer riders, the peer vehicles and/or the pedestrians. The sudden action and/or subtle action of the peer riders, peer vehicles and/or pedestrians can be determined by identifying changes in the extracted features of the objects of interest and applying techniques such as deep learning or artificial intelligence to accurately predict one or more final actions of the peer riders, peer vehicles and/or pedestrians. For example, if the gaze of the pedestrian changes in the extracted features, the intention of the pedestrian may be to cross the road or move in a particular direction on the road. Similarly, a change in the gait of the pedestrian may indicate the speed with which the pedestrian intends to cross the road. Similarly, movement of the peer vehicle in a direction along the width of the road may indicate the intention of the peer rider to overtake the vehicle 101. Similarly, a sudden decrease in the speed of the peer vehicle may indicate the intention of the peer rider to perform a braking maneuver or to use his personal digital assistant while riding the peer vehicle. Similarly, blinking of the turn signal lamps of the peer vehicles may indicate the intention of the peer riders to change direction. Similarly, movement of the peer rider in the peer vehicle or movement of fellow passengers (passengers other than the peer rider in the peer vehicle) in a direction towards a door of the peer vehicle, along with the gaze of the peer rider and fellow passengers, may indicate the intention of the peer rider and fellow passengers to open the door. In other words, changes in the extracted features of the objects of interest indicate the intention of the objects of interest to perform one or more final actions.
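As a hedged illustration of paragraph [044], intention can be inferred from frame-to-frame changes in extracted features such as gaze and lateral motion. The weights and normalizing constants below are invented for the sketch; the specification contemplates deep learning rather than fixed formulas.

```python
# Sketch of intention inference from feature changes [044]: a shift in gaze
# toward the road and lateral motion raise the inferred probability that a
# pedestrian's "final action" is crossing. All coefficients are assumptions.
def crossing_intent(prev_gaze: float, curr_gaze: float,
                    lateral_speed_mps: float) -> float:
    gaze_shift = abs(curr_gaze - prev_gaze)                 # gaze change, radians
    score = 0.6 * min(gaze_shift / 1.0, 1.0)                # gaze turned to road
    score += 0.4 * min(abs(lateral_speed_mps) / 1.5, 1.0)   # stepping out
    return min(score, 1.0)

# e.g. pedestrian turns head ~0.8 rad toward the road while moving 1 m/s
print(round(crossing_intent(0.0, 0.8, 1.0), 2))  # -> 0.75
```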
[045] In an embodiment, the intention and/or location of the objects of interest are determined and classified using artificial intelligence and deep learning techniques. In an embodiment, the classification of the intention and/or location of the objects of interest may comprise pedestrians intending to cross a road, pedestrians crossing the road, pedestrians walking on a footpath, peer vehicles performing an overtaking maneuver, peer vehicles performing a braking maneuver, peer vehicles performing a turning maneuver, peer vehicles with one or more doors opened or intended to be opened, peer vehicles performing a lane change maneuver, peer vehicles crossing the path of the vehicle, peer vehicles in a blind spot of the rider, pedestrians in a blind spot of the rider, pedestrians performing an action on a mobile phone, peer riders performing an action on a mobile phone, trees blocking the path of the vehicle 101, rocks blocking the path of the vehicle 101, speed breakers having a height greater than a pre-defined height, pits having a depth greater than a pre-defined depth, proximity of a footpath or divider to the vehicle 101, and the like.
[046] It is to be understood that the object of interest detection unit 104a, the feature extraction unit 104b, and the intention and location classification unit 104c are previously trained and validated on a variety of scenarios to identify objects of interest, then extract features of the identified objects of interest, and then determine the intention and location based on the extracted features of the one or more objects of interest. The risk level determination unit 104d is in communication with the intention and location classification unit 104c and is configured to determine the level of risk associated with the objects of interest based on the classification. The risk level determination unit 104d determines the level of risk based on parameters including the classification of the objects of interest as well as the intensity of the action and/or intention of the objects of interest. In an embodiment, the level of risk associated with the intention and/or location of the objects of interest is determined using artificial intelligence and/or deep learning techniques. Based on the determined level of risk associated with the intention and/or location of the objects of interest, the risk level determination unit 104d performs pre-defined actions. The pre-defined actions are pre-configured in the electronic control unit 104. It is to be understood that the risk level determination unit 104d is trained and validated on different scenarios to quantify the level of the risk to the rider of the vehicle 101. The risk level may be indicative of the intensity of the consequences of the intention and location of the objects of interest to the rider. For example, in case the peer vehicles are moving at speeds such as 60-70 kmph and changing lanes in a manner affecting the vehicle 101, the risk level for the vehicle 101 may be higher. Similarly, a door of a peer vehicle being opened in close proximity to the lane of the vehicle 101 poses a high risk. Similarly, a sudden increase and/or decrease in the speed of the peer vehicles in proximity of the vehicle 101 indicates high risk. Similarly, a sudden change in the direction of the pedestrian crossing the road indicates indecisiveness of the pedestrian and therefore high risk. Similarly, a sudden increase or decrease in the speed of the pedestrian crossing the road indicates high risk. Similarly, sudden movement of the peer vehicle along the width of the road indicates high risk. In a similar manner, the risk level determination unit 104d also determines low levels of risk to the vehicle. The movement of peer vehicles and/or pedestrians at a substantial distance from the vehicle 101 may pose low risk or no risk to the vehicle. For example, the gait, gaze and direction of the pedestrian may indicate an intention of the pedestrian not to cross the road or to cross it safely. Similarly, the speed of the peer vehicle and its distance from the vehicle 101 may indicate that the peer vehicle maintains a safe distance from the vehicle 101. Similarly, obstacles such as rocks, trees, potholes, pits, speed breakers and the like may indicate a high level of risk or a low level of risk based on the dimensions of the obstacles as well as their proximity to the vehicle 101.
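A minimal heuristic consistent with the risk examples in paragraph [046] (fast, lane-changing peer vehicles nearby score high; distant, slow ones score low) is sketched below. All coefficients and cut-offs are assumptions; the specification contemplates trained models rather than a fixed formula.

```python
# Heuristic risk scoring shaped by the examples in [046]; every coefficient
# here is an assumption made solely for this sketch.
def risk_level(speed_kmph: float, distance_m: float, lane_change: bool) -> float:
    proximity = max(0.0, 1.0 - distance_m / 30.0)  # zero beyond ~30 m (assumed)
    speed = min(speed_kmph / 70.0, 1.0)            # saturates near 70 kmph
    base = 0.5 * proximity + 0.3 * speed
    if lane_change:
        base += 0.2                                # declared intent raises risk
    return min(base, 1.0)

print(round(risk_level(65, 8, True), 2))    # close, fast, changing lanes: 0.85
print(round(risk_level(30, 40, False), 2))  # distant and slow: 0.13 (low/no risk)
```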
[047] It is to be understood that the level of risk associated with the one or more objects of interest determines the pre-defined actions to be performed by the risk level determination unit 104d. The first pre-defined actions are performed by the risk level determination unit 104d when the determined level of risk to the vehicle 101 from the one or more objects of interest is comparatively low. The second pre-defined actions are performed by the risk level determination unit 104d when the determined level of risk to the vehicle 101 from the one or more objects of interest is high.
[048] In other words, the first pre-defined actions are performed by the risk level determination unit 104d when the determined level of risk is greater than a first pre-defined level and the second pre-defined actions are performed by the risk level determination unit 104d when the determined level of risk is greater than a second pre-defined level. The first pre-defined actions are actions which warn the rider, peer vehicles, peer riders and/or pedestrians to prevent an accident/collision. The second pre-defined actions include measures to control the vehicle to prevent an accident/collision. The second pre-defined level of risk is greater than the first pre-defined level of risk, as the first pre-defined actions are associated with warning the rider, peer riders, peer vehicles and/or pedestrians to prevent accidents/collisions whereas the second pre-defined actions are associated with controlling the vehicle to prevent accidents/collisions.
[049] In an embodiment, as shown in Figure 1 and Figure 2, the system further comprises a radar signal generator unit 106 and/or a rider warning unit 108 in communication with the electronic control unit 104. The radar signal generator unit 106 and/or rider warning unit 108 are mounted on the vehicle 101. The radar signal generator unit 106 is configured for broadcasting information such as the location of the vehicle 101 to the peer vehicles and/or pedestrians when the determined level of risk is greater than the first pre-defined level. The rider warning unit 108 is configured for transmitting warning signals to the rider of the vehicle 101 when the determined level of risk is greater than the first pre-defined level. The radar signal generator unit 106 warns the peer riders and/or the pedestrians with respect to the location of the vehicle 101. On receiving such information, the peer riders and/or pedestrians may change their course of ride and/or course of action to prevent an accident/collision. The rider warning unit 108 warns the rider of the vehicle 101 with respect to the peer vehicles, pedestrians and/or obstacles. On receiving such information, the rider of the vehicle 101 may change his course of action or course of ride to prevent an accident/collision. Warnings from the rider warning unit 108 can be in the form of an audio alert, a visual alert and/or a haptic alert. The audio alert can be generated by an audio alert device including a buzzer, a horn and/or a speaker mounted on the vehicle. The visual alert can be generated by a visual alert device such as light emitting diodes or alphanumeric displays near or inside the instrument cluster/speedometer of the vehicle 101. The visual alert can be provided by blinking of the light emitting diodes or alphanumeric displays at a pre-defined time interval. The haptic alert devices can be mounted on the vehicle 101 at locations where the body of the rider comes in contact with the vehicle 101, for example, a seat of the vehicle, handlebars of the vehicle 101, a fuel tank of the vehicle 101 and the like. Also, in case the warning is ignored by the rider of the vehicle, peer riders and/or pedestrians, the level of risk is increased by the risk level determination unit 104d to be greater than the second pre-defined level, and emergency braking, preferably anti-lock braking, is initiated by the electronic control unit 104/risk level determination unit 104d of the vehicle 101.
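The escalation behaviour at the end of paragraph [049] (an ignored warning raises the risk past the second pre-defined level and triggers emergency braking) can be sketched as follows; the state handling and numeric levels are assumptions for illustration.

```python
# Sketch of the escalation rule in [049]: if a warning was issued and the
# risk has not fallen on the next evaluation, escalate past the second
# pre-defined level and brake. Names and values are assumptions.
FIRST_PREDEFINED_LEVEL = 0.5
SECOND_PREDEFINED_LEVEL = 0.8

class RiskLevelDeterminationUnit:
    def __init__(self):
        self.warned = False  # remembers that a first pre-defined action ran

    def step(self, risk: float) -> str:
        if self.warned and risk > FIRST_PREDEFINED_LEVEL:
            # Warning ignored: raise risk above the second level ([049]).
            risk = max(risk, SECOND_PREDEFINED_LEVEL + 0.01)
        if risk > SECOND_PREDEFINED_LEVEL:
            return "emergency_braking"
        if risk > FIRST_PREDEFINED_LEVEL:
            self.warned = True
            return "warn"
        self.warned = False
        return "no_action"

unit = RiskLevelDeterminationUnit()
print(unit.step(0.6))  # -> 'warn'
print(unit.step(0.6))  # warning ignored -> 'emergency_braking'
```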
[050] It is to be understood that the radar signal generator unit 106 is active only when the determined level of risk is greater than the first pre-defined level, thereby saving the battery through which it is powered.
[051] In an embodiment, as shown in both Figure 1 and Figure 2, the system 100 comprises a vehicle braking unit 110 in communication with the electronic control unit 104. The vehicle braking unit 110 is configured for activating an emergency braking operation of the vehicle 101 when the determined level of risk is greater than the second pre-defined level. On activation of the emergency braking operation, the vehicle 101 is stopped to prevent accident or collision.
[052] It is to be understood that the electronic control unit 104 transmits signals to the radar signal generator unit 106, rider warning unit 108 and/or vehicle braking unit 110 through transmission modes. For example, standards for Internet and other packet switched network transmission may be used. The network may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network. Further, the network may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof. It is to be understood that the present invention is not limited to transmission modes with any particular standards and protocols. Such modes are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions are considered equivalents thereof.
[053] Figure 3 is a flowchart illustrating a method 200 for assisting a rider of a vehicle 101 to prevent a collision, in accordance with an embodiment of the present invention.
[054] At step 201, the method 200 comprises capturing images and/or videos of an environment 103 surrounding the vehicle 101 driven by the rider in real time. The step of capturing images and/or videos is performed by the image capturing unit 102. It is to be understood that the term image capturing unit 102 may include a single image capturing unit or multiple image capturing units mounted on the vehicle 101.
[055] At step 202, the method comprises detecting objects of interest in the environment from the captured images and/or videos. The step of detecting objects of interest from the captured images and/or videos is performed by an electronic control unit 104. The electronic control unit 104 is in communication with the image capturing unit 102. The images and/or videos captured by the image capturing unit 102 are transmitted to the electronic control unit 104.
[056] At step 203, the method comprises determining an intention and/or location of the objects of interest and a level of risk associated with the intention and/or location of the objects of interest. The step of determining 203 is performed by the electronic control unit 104.
[057] At step 204, the method comprises performing, based on the determination in step 203, pre-defined actions corresponding to the determined level of risk. The step of performing 204 is performed by the electronic control unit 104. The pre-defined actions may comprise a first pre-defined action when the determined level of risk is greater than a first pre-defined level and a second pre-defined action when the determined level of risk is greater than a second pre-defined level. The second pre-defined level of risk is greater than the first pre-defined level of risk, as the first pre-defined action is associated with warning the rider of the vehicle 101, peer riders, peer vehicles and/or pedestrians whereas the second pre-defined action is associated with controlling the vehicle 101 to prevent the accident/collision.
[058] In an embodiment, the first pre-defined action may be performed by a radar signal generator unit 106 and/or a rider warning unit 108. The radar signal generator unit 106 is mounted on the vehicle 101 and is in communication with the electronic control unit 104. The radar signal generator unit 106 is configured for broadcasting location of the vehicle 101 to the peer vehicles and/or the pedestrians when the determined level of risk is greater than the first pre-defined level. The radar signal generator unit 106 warns the peer vehicles and/or pedestrians to change their course of ride or course of action to avoid the vehicle 101 in order to prevent the accident/collision.
[059] The rider warning unit 108 is also mounted on the vehicle 101 and is in communication with the electronic control unit 104. The rider warning unit 108 is configured for transmitting warning signals to the rider of the vehicle 101 when the determined level of risk is greater than the first pre-defined level. The warning signals may include visual, audio and/or haptic feedback to the rider of the vehicle. The rider warning unit 108 warns the rider to change his course of ride or course of action to avoid the peer vehicles and/or pedestrians in order to prevent the accident/collision.
[060] The second pre-defined action may be performed by a vehicle braking unit 110. The vehicle braking unit 110 is also in communication with the electronic control unit 104. The vehicle braking unit 110 is configured for activating an emergency braking operation of the vehicle 101 when the determined level of risk is greater than the second pre-defined level. It is to be understood that the second pre-defined action is performed when warning the rider, peer riders and/or pedestrians is not sufficient to prevent the accident/collision. However, this should not be construed as limiting and the second pre-defined action may be performed without performing the first pre-defined action in emergency situations.
[061] Figure 4a and Figure 4b together form a flowchart illustrating a method 300 for assisting a rider of a vehicle 101 to prevent a collision, in accordance with an embodiment of the present invention.
[062] At step 301, the method 300 comprises capturing images and/or videos of an environment 103 surrounding the vehicle 101 driven by the rider in real time. The step of capturing images and/or videos is performed by the image capturing unit 102. It is to be understood that the term image capturing unit 102 may include a single image capturing unit or multiple image capturing units mounted on the vehicle 101.
[063] At step 302, the method comprises detecting objects of interest in the environment from the captured images and/or videos. The step of detecting objects of interest from the captured images and/or videos is performed by an object of interest detection unit 104a. The object of interest detection unit 104a is in communication with the image capturing unit 102. The images and/or videos captured by the image capturing unit 102 are transmitted to the object of interest detection unit 104a.
[064] At step 303, the method comprises determining an intention and/or location of the objects of interest and a level of risk associated with the intention and/or location of the objects of interest. The step of determining 303 is performed by a feature extraction unit 104b and an intention and location classification unit 104c. The step 303 of determining comprises a step of extracting 303a one or more features of the one or more objects of interest and a step of classifying 303b the intention and/or location of the one or more objects of interest based on the extracted features of the one or more objects of interest. The step of extracting 303a is performed by the feature extraction unit 104b and the step of classifying 303b is performed by the intention and location classification unit 104c.
[065] At step 304, the method comprises performing, based on the determination in step 303, pre-defined actions corresponding to the determined level of risk. The step of performing 304 is performed by the risk level determination unit 104d of the electronic control unit 104.
[066] The step of performing 304 comprises a step 304a of determining the level of risk associated with the one or more objects of interest and a step 304b of performing the pre-defined actions corresponding to the determined level of risk. The pre-defined actions may comprise first pre-defined actions and/or second pre-defined actions.
[067] The step 304b of performing further comprises a step of determining 304b1 whether the determined level of risk is equal to or greater than a first pre-defined level. In case the determined level of risk is not equal to or greater than the first pre-defined level, no action is taken by the risk level determination unit 104d, as shown in step 304b2. In case the determined level of risk is equal to or greater than the first pre-defined level, the risk level determination unit 104d transmits instructions/signals for activating the radar signal generator unit 106, as shown in step 304b3, and/or the rider warning unit 108, as shown in step 304b4, and further determines whether the determined level of risk is equal to or greater than a second pre-defined level, as shown in step 304b5. In case the determined level of risk is not equal to or greater than the second pre-defined level, the risk level determination unit 104d continues with step 304b3 and/or step 304b4. In case the determined level of risk is equal to or greater than the second pre-defined level, the risk level determination unit 104d transmits information or a signal to a vehicle braking unit 110 for activating an emergency braking operation of the vehicle 101 to prevent a collision/accident of the vehicle, as shown in step 304b6.
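For readability, the decision flow of steps 304b1 to 304b6 can be transcribed as straight-line code. The sketch below mirrors the flowchart; the threshold values and action names are assumptions made for illustration.

```python
# Direct transcription of the decision flow 304b1-304b6; sensing and
# actuation are reduced to strings, and threshold values are assumed.
FIRST_LEVEL, SECOND_LEVEL = 0.5, 0.8

def step_304b(risk: float) -> list:
    actions = []
    if risk < FIRST_LEVEL:                 # 304b1: below first level?
        return actions                     # 304b2: no action
    actions.append("activate_radar")       # 304b3: radar signal generator 106
    actions.append("warn_rider")           # 304b4: rider warning unit 108
    if risk >= SECOND_LEVEL:               # 304b5: at/above second level?
        actions.append("emergency_brake")  # 304b6: vehicle braking unit 110
    return actions

print(step_304b(0.9))  # -> ['activate_radar', 'warn_rider', 'emergency_brake']
```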
[068] In an embodiment, the steps 304a, 304b1, 304b2, 304b3, 304b4, 304b5, and 304b6 are repeated until the determined risk is addressed. The steps 201-204 and 301-304 are repeated in real time during the course of the ride of the vehicle. In an embodiment, the user may actuate the system manually on the vehicle to perform the steps 201-204 and 301-304 based on the current scenario on the road, such as a crowded road, marketplace, etc.
[069] In one non-limiting example, the environment 103 around the vehicle 101 comprises one stationary peer vehicle such as a car, one moving peer vehicle such as a motorcycle, a first pedestrian crossing the road, a second pedestrian intending to cross the road and a speed breaker on the road.
[070] The image capturing unit 102 captures one or more images and/or videos of the environment 103 surrounding the vehicle 101 in real time. The one or more images and/or videos of the stationary peer vehicle, the moving peer vehicle, the first pedestrian, the second pedestrian and the speed breaker are captured by the image capturing unit 102. Owing to the close proximity of the stationary peer vehicle, the moving peer vehicle, the first pedestrian, the second pedestrian and the speed breaker to the vehicle 101, the electronic control unit 104/object of interest detection unit 104a detects the stationary peer vehicle, the moving peer vehicle, the first pedestrian, the second pedestrian and the speed breaker to be objects of interest. The electronic control unit 104/feature extraction unit 104b extracts the features of the detected objects of interest. The features extracted by the electronic control unit 104/feature extraction unit 104b indicate no movement of the stationary peer vehicle, sudden movement of the moving peer vehicle along the width of the road, a sudden change in the direction of movement of the first pedestrian, operation of a mobile phone by the second pedestrian (and therefore a halt in the process of crossing the road), and the length, breadth and height of the speed breaker. Based on the extracted features, the intention and location classification unit 104c determines the intention of the objects of interest. The electronic control unit 104/intention and location classification unit 104c determines no intention with respect to the stationary peer vehicle, an intention of the moving peer vehicle to overtake the vehicle 101, an indecisive action of the first pedestrian while crossing the road, an intention of the second pedestrian to cross the road, and that the height of the speed breaker is less than the height of the chassis of the vehicle from the ground. The intention of the moving peer vehicle is classified in the category of a vehicle intending to perform a lane change maneuver. Also, the intention of the first pedestrian is classified in the category of a pedestrian crossing the road. Also, the intention of the second pedestrian is classified in the category of a pedestrian intending to cross the road. Also, as the stationary peer vehicle and the speed breaker pose no risk to the vehicle, the same may be classified as no-risk objects. The electronic control unit 104/risk level determination unit 104d thereafter determines the level of risk associated with the intention and/or location of the objects of interest. High risk may be associated with the moving peer vehicle and the first pedestrian. Low risk may be associated with the second pedestrian. No level of risk may be associated with the stationary peer vehicle and the speed breaker. The electronic control unit 104/risk level determination unit 104d thereafter compares the level of risk associated with the moving peer vehicle, the first pedestrian and the second pedestrian with the first pre-defined level. In case the level of risk associated with the moving peer vehicle, the first pedestrian and the second pedestrian is equal to or greater than the first pre-defined level, the location of the vehicle 101 is broadcast to the moving peer vehicle, the first pedestrian as well as the second pedestrian. Further, warning signals are provided to the rider of the vehicle 101. Thereafter, the level of risk is compared with the second pre-defined level.
[071] In case the level of the risk associated with the moving peer vehicle, the first pedestrian, and/or the second pedestrian is less than the second pre-defined level, the electronic control unit 104/risk level determination unit 104d will not take any second pre-defined action and will continue comparing the associated level of risk with the first pre-defined level and perform the first pre-defined action or no action accordingly. In case the level of risk associated with the moving peer vehicle, the first pedestrian, and/or the second pedestrian is greater than the second pre-defined level, the electronic control unit 104/risk level determination unit 104d will perform an emergency braking operation to prevent a collision of the vehicle 101 with the moving peer vehicle, the first pedestrian, and/or the second pedestrian. It is to be understood that the comparison of the level of risk with the first pre-defined level and the second pre-defined level is performed in real time and the first pre-defined action and the second pre-defined action are performed accordingly. In case the rider of the vehicle 101 and/or the one or more objects of interest having a level of risk greater than the first pre-defined level but less than the second pre-defined level do not heed the broadcast information or warnings transmitted by the radar signal generator unit 106 and the rider warning unit 108, the electronic control unit 104/risk level determination unit 104d will continue comparing the level of the risk with the second pre-defined level and perform the second pre-defined action, i.e., the emergency braking operation, when the level of risk equals or exceeds the second pre-defined level.
[072] In an embodiment, if the rider of the vehicle 101 and/or the one or more objects of interest having a level of risk greater than the first pre-defined level but less than the second pre-defined level do not heed the broadcast information or warnings transmitted by the radar signal generator unit 106 and the rider warning unit 108, the electronic control unit 104/risk level determination unit 104d will increase the level of risk associated with the objects of interest to be greater than the second pre-defined level and perform the second pre-defined action of controlling the vehicle, such as initiating an emergency braking operation.
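Replaying the scenario of paragraphs [069] to [072] through the same tiered policy gives the following non-limiting trace; the per-object risk numbers are invented solely to exercise the two thresholds.

```python
# The worked example of [069]-[072] run through the assumed tiered policy;
# all risk values below are invented for illustration.
FIRST_LEVEL, SECOND_LEVEL = 0.5, 0.8

scene = {
    "stationary_car": 0.0,          # no movement: no risk
    "overtaking_motorcycle": 0.85,  # sudden lane change close by: high risk
    "first_pedestrian": 0.82,       # indecisive while crossing: high risk
    "second_pedestrian": 0.55,      # on phone, intends to cross: first level
    "speed_breaker": 0.0,           # below chassis height: no risk
}

for obj, risk in scene.items():
    if risk >= SECOND_LEVEL:
        print(obj, "-> broadcast + warn rider + emergency braking")
    elif risk >= FIRST_LEVEL:
        print(obj, "-> broadcast location + warn rider")
    else:
        print(obj, "-> no action")
```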
[073] It is to be understood that typical hardware configuration of the electronic control unit 104 can include a set of instructions that can be executed to cause the electronic control unit 104 to perform the above-disclosed method.
[074] The electronic control unit 104 may include a processor which may be a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analysing and processing data. The processor may implement a software program, such as code generated manually, i.e., programmed.
[075] The electronic control unit 104 may include a memory. The memory may be a main memory, a static memory, or a dynamic memory. The memory may include, but is not limited to computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. The memory is operable to store instructions executable by the processor. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor executing the instructions stored in the memory.
[076] The electronic control unit 104 may further include a display unit such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube or other now known or later developed display device for outputting determined information. The display may act as an interface for the user to see the functioning of the processor, or specifically as an interface with the software stored in the memory.
[077] Additionally, the electronic control unit 104 may include an input device configured to allow a user to interact with any of the components of the electronic control unit 104. The input device may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the electronic control unit.
[078] The electronic control unit 104 may also include a disk or optical drive unit. The disk drive unit may include a computer-readable medium in which one or more sets of instructions, e.g. software, can be embedded. Further, the instructions may embody one or more of the methods or logic as described. In a particular example, the instructions may reside completely, or at least partially, within the memory or within the processor during execution by the electronic control unit 104. The memory and the processor also may include computer-readable media as discussed above. The present invention contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal so that a device connected to a network can communicate data over the network. Further, the instructions may be transmitted or received over the network. The network may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network. Further, the network may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed.
[079] The claimed features/method steps of the present invention as discussed above are not routine, conventional, or well understood in the art, as the claimed steps enable the following solutions to the existing problems in conventional technologies. Specifically, the technical problem of not being able to detect and/or discern the intention and/or location of the objects of interest in the environment surrounding the vehicle is solved by the present invention.
[080] The present invention can not only detect the presence of peer riders, peer vehicles and/or pedestrians in the environment surrounding the vehicle but also determine their intended future actions, i.e. perform intention determination for the peer vehicles, peer riders and/or pedestrians.
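By way of illustration only, this intention determination, in which the feature extraction unit 104b feeds the intention and location classification unit 104c, might be sketched as follows. The feature names, the intention classes and the use of a generic trained classifier are assumptions made for this sketch and are not part of the disclosure.

```python
# Illustrative sketch only: feature names, intention classes and the
# classifier interface are assumptions, not the disclosed implementation.
from typing import Dict, List

INTENTION_CLASSES: List[str] = [
    "pedestrian_intends_to_cross",
    "peer_vehicle_door_opening",
    "peer_vehicle_lane_change",
    "peer_vehicle_overtaking",
]

def classify_intention(features: Dict[str, float], model) -> str:
    """Map features extracted for an object of interest (e.g. gaze
    direction, gait, posture, indicator light status, motion parameters)
    to one of the intention classes.

    `model` stands in for any trained classifier exposing a `predict`
    method over an ordered feature vector.
    """
    vector = [features[name] for name in sorted(features)]
    return model.predict([vector])[0]
```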
[081] The present invention determines risks and the level of such risks in real time and on-board the vehicle. The level of risk associated with the location and/or intention of the objects of interest, and the comparison of the associated level of risk with the first pre-defined level and the second pre-defined level, is performed in real time. In a scenario where the rider of the vehicle 101 or the one or more objects of interest do not register or pay heed to the one or more first pre-defined actions and the associated level of risk equals or exceeds the second pre-defined level, the electronic control unit 104/risk level determination unit 104d will perform the one or more second pre-defined actions.
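A minimal sketch of this two-level comparison, assuming a normalised risk score and hypothetical threshold values and action callbacks (none of which are specified by the disclosure), is given below.

```python
# Illustrative sketch only: threshold values and callbacks are assumptions.

FIRST_PREDEFINED_LEVEL = 0.5    # assumed, on a normalised 0..1 risk scale
SECOND_PREDEFINED_LEVEL = 0.8   # assumed; must exceed the first level

def dispatch_actions(risk, warn_rider, broadcast_location, emergency_brake):
    """Perform the pre-defined actions corresponding to the determined risk.

    The first pre-defined actions (rider warning unit 108, radar signal
    generator unit 106) fire when the first level is reached; the second
    pre-defined action (vehicle braking unit 110) fires only when the risk
    equals or exceeds the second, higher level.
    """
    if risk >= SECOND_PREDEFINED_LEVEL:
        # Rider or objects of interest did not heed the first actions,
        # or the risk escalated directly past the second level.
        emergency_brake()
    elif risk >= FIRST_PREDEFINED_LEVEL:
        warn_rider()
        broadcast_location()
```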
[082] The present invention does not require any external sensors and/or servers for its operation. The electronic control unit 104 receives information from on-board sensors such as the image capturing unit 102 mounted on the vehicle 101. Accordingly, no external hardware is required for the implementation of the present invention.
[083] The present invention can also detect obstacles and discern the damage that can be caused by the obstacles. For example, the present invention can discern the dimensions of an obstacle such as a rock or a pit, the distance of the obstacle from the vehicle 101 and the level of risk associated with such an obstacle. In case the rock is big enough or the pit is deep enough to damage the chassis or other components of the vehicle 101, the rider warning unit 108 will warn the rider to change course to avoid the obstacle.
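As a minimal sketch of such an obstacle assessment, assuming hypothetical threshold constants tied to the vehicle's ground clearance (the names and numeric values below are illustrative only):

```python
# Illustrative sketch only: field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str          # e.g. "rock", "pit", "speed_breaker"
    height_m: float    # estimated height above the road surface (0 for pits)
    depth_m: float     # estimated depth below the road surface (0 otherwise)
    distance_m: float  # estimated distance from the vehicle 101

PREDEFINED_HEIGHT_M = 0.15  # assumed chassis-clearance limit
PREDEFINED_DEPTH_M = 0.10   # assumed safe pit-depth limit

def obstacle_poses_risk(obstacle: Obstacle) -> bool:
    """Return True when the obstacle could damage the chassis or other
    components, so the rider warning unit 108 should prompt a change of
    course."""
    return (obstacle.height_m > PREDEFINED_HEIGHT_M
            or obstacle.depth_m > PREDEFINED_DEPTH_M)
```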
[084] The present invention comprises a radar signal generator unit 106 for warning the peer riders, peer vehicles and/or pedestrians of the location of the vehicle 101 in order to prevent an accident/collision. The present invention also comprises a rider warning unit 108 for warning the rider in order to prevent an accident/collision. The present invention, therefore, warns the rider as well as the peer vehicles, peer riders and/or pedestrians to avoid accidents/collisions.
[085] The present invention also comprises a vehicle braking unit 110 to perform an emergency braking operation for stopping the vehicle 101 to prevent the occurrence of an accident/collision.
[086] The present invention uses the radar signal generator unit 106 only when the determined level of risk is greater than or equal to the first pre-defined level, unlike prior art systems in which radar signal generation units are always active. This conserves the vehicle battery used to power the radar signal generator unit 106.
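This risk-gated activation may be illustrated as below; the class and method names are hypothetical, and the sketch merely shows the unit being powered only while the first pre-defined level is met or exceeded.

```python
# Illustrative sketch only: class and method names are assumptions.

class RadarSignalGenerator:
    """Stand-in for the radar signal generator unit 106."""

    def __init__(self):
        self.active = False

    def set_active(self, on: bool):
        # Powering the unit only on demand conserves the vehicle battery.
        if on != self.active:
            self.active = on

def update_radar(radar: RadarSignalGenerator, risk: float, first_level: float):
    # Broadcast the vehicle's location only while the determined risk is
    # greater than or equal to the first pre-defined level.
    radar.set_active(risk >= first_level)
```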
[087] While the present invention has been described with respect to certain embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.

Claims:
WE CLAIM:

1. A system (100) for assisting a rider of a vehicle (101) to prevent a collision, the system (100) comprising:
at least one image capturing unit (102) mounted on the vehicle (101), the at least one image capturing unit (102) configured for capturing, in real time, at least one of one or more images and one or more videos of an environment (103) surrounding the vehicle (101) driven by the rider;
an electronic control unit (104), mounted on the vehicle (101) and in communication with the at least one image capturing unit (102), configured for: detecting one or more objects of interest in the environment (103) from the one or more captured images and videos; determining at least one of an intention and a location of the one or more objects of interest; determining a level of risk associated with the intention and the location of the one or more objects of interest; and performing, based on the determination, one or more pre-defined actions corresponding to the determined level of risk.

2. The system (100) as claimed in claim 1, wherein the at least one image capturing unit (102) comprises: a first image capturing unit mounted at a front portion of the vehicle (101) and a second image capturing unit mounted at a rear portion of the vehicle (101).

3. The system (100) as claimed in claim 1, wherein the electronic control unit (104) comprises:
an object of interest detection unit (104a) in communication with the at least one image capturing unit (102) and configured for detecting the one or more objects of interest in the environment (103) from the one or more captured images and videos, the one or more objects of interest being the objects posing a threat of collision to the vehicle (101);
a feature extraction unit (104b) in communication with the object of interest detection unit (104a) and configured for extracting one or more features of the one or more objects of interest;
an intention and location classification unit (104c) in communication with the feature extraction unit (104b) and configured for classifying at least one of the intention and the location of the one or more objects of interest based on the extracted features of the one or more objects of interest;
a risk level determination unit (104d) in communication with the intention and location classification unit (104c) and configured for determining the level of risk associated with at least one of the intention and the location of the objects of interest, the risk level determination unit (104d) further configured to perform the one or more pre-defined actions corresponding to the determined level of risk.

4. The system (100) as claimed in claim 3, wherein the one or more objects of interest are at least one of: one or more peer vehicles, one or more pedestrians, and one or more obstacles.

5. The system (100) as claimed in claim 4, wherein the features extracted by the feature extraction unit (104b) comprise at least one of: a gaze direction of the one or more pedestrians, gait characteristics of the one or more pedestrians, a posture of the one or more pedestrians, one or more actions performed by the one or more pedestrians, one or more actions performed by one or more riders of the peer vehicles, one or more motion parameters of the one or more peer vehicles, an indicator light status of the one or more peer vehicles, a hazard light status of the one or more peer vehicles, dimensions of the one or more obstacles, a distance of the one or more peer vehicles from the vehicle (101), a distance of the one or more pedestrians from the vehicle (101) and a distance of the one or more obstacles from the vehicle (101).

6. The system (100) as claimed in claim 4, wherein the classification of at least one of the intention and the location of the one or more objects of interest comprises at least one of: one or more pedestrians intending to cross a road, one or more pedestrians crossing the road, one or more pedestrians walking on a footpath, one or more peer vehicles performing an overtaking maneuver, one or more peer vehicles performing a braking maneuver, one or more peer vehicles performing a turning maneuver, one or more peer vehicles with one or more doors opened or intended to be opened, one or more peer vehicles performing a lane change maneuver, one or more peer vehicles crossing the path of the vehicle (101), one or more peer vehicles in a blind spot of the rider, one or more pedestrians in a blind spot of the rider, one or more trees blocking the path of the vehicle (101), one or more rocks blocking the path of the vehicle (101), one or more speed breakers having a height greater than a pre-defined height and one or more pits having a depth greater than a pre-defined depth.

7. The system (100) as claimed in claim 1 or claim 3, wherein the one or more pre-defined actions comprise one or more first pre-defined actions when the determined level of risk is greater than a first pre-defined level and one or more second pre-defined actions when the determined level of risk is greater than a second pre-defined level, the second pre-defined level of risk being greater than the first pre-defined level of risk.

8. The system (100) as claimed in claim 7, comprising at least one of:
a radar signal generator unit (106) mounted on the vehicle (101), in communication with the electronic control unit (104), and configured for broadcasting the location of the vehicle (101) to the one or more peer vehicles and the pedestrians when the determined level of risk is greater than the first pre-defined level; and
a rider warning unit (108) mounted on the vehicle (101) in communication with the electronic control unit (104) and configured for transmitting warning signals to the rider of the vehicle (101) when the determined level of risk is greater than the first pre-defined level.

9. The system (100) as claimed in claim 8, wherein the warning signals comprise visual, audio and/or haptic feedback to the rider of the vehicle (101).

10. The system (100) as claimed in claim 7 or 8 comprising: a vehicle braking unit (110) in communication with the electronic control unit (104) and configured for activating an emergency braking operation of the vehicle (101), when the determined level of risk is greater than the second pre-defined level.

11. A method (200, 300) for assisting a rider of a vehicle (101) to prevent a collision, the method (200, 300) comprising:
capturing (201, 301), by at least one image capturing unit (102), in real time, one or more images and videos of an environment (103) surrounding the vehicle (101) driven by the rider;
detecting (202, 302), by an electronic control unit (104) mounted on the vehicle (101) in communication with the at least one image capturing unit (102), one or more objects of interest in the environment (103) from the one or more captured images and videos;
determining (203, 303), by the electronic control unit (104), at least one of an intention and a location of the one or more objects of interest and a level of risk associated with the intention and the location of the one or more objects of interest; and
performing (204, 304), by the electronic control unit (104), based on the determination, one or more pre-defined actions corresponding to the determined level of risk.

12. The method (200) as claimed in claim 11, wherein the step of detecting (202, 302) is performed by an object of interest detection unit (104a) of the electronic control unit (104), the one or more objects of interest being the objects posing a threat of collision to the vehicle (101).

13. The method (200) as claimed in claim 11, wherein:
the step of determining (303) comprises: extracting (303a), by a feature extraction unit (104b) of the electronic control unit (104), one or more features of the one or more objects of interest; and classifying (303b), by an intention and location classification unit (104c) of the electronic control unit (104), at least one of the intention and the location of the one or more objects of interest based on the extracted features of the one or more objects of interest; and
the step of performing (304) comprises: determining (304a), by a risk level determination unit (104d) of the electronic control unit (104), the level of risk associated with at least one of the intention and the location of the objects of interest; and performing (304b), by the risk level determination unit (104d), the one or more pre-defined actions corresponding to the determined level of risk.

14. The method (200) as claimed in claim 13, wherein the one or more objects of interest comprise: one or more peer vehicles, one or more pedestrians and one or more obstacles.

15. The method (200) as claimed in claim 14, wherein the features extracted by the feature extraction unit (104b) comprise at least one of: a gaze direction of the one or more pedestrians, gait characteristics of the one or more pedestrians, a posture of the one or more pedestrians, one or more actions performed by the one or more pedestrians, one or more actions performed by riders of the one or more peer vehicles, one or more motion parameters of the one or more peer vehicles, an indicator light status of the one or more peer vehicles, a hazard light status of the one or more peer vehicles, dimensions of the one or more obstacles, a distance of the one or more peer vehicles from the vehicle (101), a distance of the one or more pedestrians from the vehicle (101) and a distance of the one or more obstacles from the vehicle (101).

16. The method (200) as claimed in claim 14, wherein the classification of at least one of the intention and the location of the one or more objects of interest comprises at least one of: one or more pedestrians intending to cross the road, one or more pedestrians crossing the road, one or more pedestrians walking on the footpath, one or more peer vehicles performing an overtaking maneuver, one or more peer vehicles performing a braking maneuver, one or more peer vehicles performing a turning maneuver, one or more peer vehicles with one or more doors opened or intended to be opened, one or more peer vehicles performing a lane change maneuver, one or more peer vehicles crossing the path of the vehicle (101), one or more peer vehicles in a blind spot of the rider, one or more pedestrians in a blind spot of the rider, one or more trees blocking the path of the vehicle (101), one or more rocks blocking the path of the vehicle (101), one or more speed breakers having a height greater than a pre-defined height and one or more pits having a depth greater than a pre-defined depth.

17. The method (200) as claimed in claim 11 or claim 13, wherein the one or more pre-defined actions comprise one or more first pre-defined actions when the determined level of risk is greater than a first pre-defined level and one or more second pre-defined actions when the determined level of risk is greater than a second pre-defined level, the second pre-defined level of risk being greater than the first pre-defined level of risk.

18. The method (200) as claimed in claim 17, comprising at least one of:
broadcasting, by a radar signal generator unit (106) mounted on the vehicle (101) in communication with the electronic control unit (104), the location of the vehicle (101) to at least one of the one or more peer vehicles and the pedestrians when the determined level of risk is greater than the first pre-defined level; and
transmitting, by a rider warning unit (108) mounted on the vehicle (101) in communication with the electronic control unit (104), warning signals to the rider of the vehicle (101) when the determined level of risk is greater than the first pre-defined level.

19. The method (200) as claimed in claim 18, wherein the warning signals comprise visual, audio and/or haptic feedback to the rider of the vehicle (101).

20. The method (200) as claimed in claim 17 or 18, comprising: activating, by a vehicle braking unit (110) in communication with the electronic control unit (104), an emergency braking operation of the vehicle (101) when the determined level of risk is greater than the second pre-defined level.

Dated this 31st day of May 2022
TVS MOTOR COMPANY LIMITED
By their Agent & Attorney


(Nikhil Ranjan)
of Khaitan & Co
Reg No IN/PA-1471
