Abstract: Disclosed herein is a system and method for providing driving assistance to a vehicle’s driver, wherein said system comprises an image/video capturing means located on the front portion of the vehicle and configured to capture information related to a plurality of real-time obstacle(s) present on the road and real-time road driving conditions, a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and the braking distance of the vehicle, and a guiding indicator in signalling communication with the processor for assisting the driver in safe driving. The disclosed system and method provide accurate, cost-friendly and user-friendly safe driving assistance to the driver.
Description: A system and method for driving assistance to a vehicle’s driver
Field of the Invention
This invention relates to a system and method for providing driving assistance to a vehicle’s driver. More particularly, the present invention relates to a system and method for providing driving assistance to a vehicle’s driver based on the obstacles present on the road and the road driving conditions.
Background of the Invention
Vehicle accidents are a significant concern worldwide, and improving safety on the roads is a top priority. Human error is a leading cause of accidents, and there is a need for technologies that can assist drivers in avoiding collisions and other hazardous situations.
Driver Assistance Systems are a rapidly evolving technology aimed at assisting drivers in a variety of ways. Modern automobiles are equipped with Driver Assistance Systems, a set of technologies and features that improve comfort, convenience and safety while driving. These features use sensors, cameras and other technologies to perceive and react to the environment and other road users. The future of autonomous driving is being ushered in by driving assistance that will revolutionize the way we drive today.
Enhancing driver safety by lowering the risk of accidents is generally the main objective of Driver Assistance Systems. The technology achieves this by keeping an eye on the area around the vehicle and alerting the driver to potential dangers. For instance, lane departure warning systems employ cameras to identify when a vehicle is veering off its lane and alert the driver. Similarly, forward collision warning systems employ radar or cameras to identify potential collisions with objects or vehicles in front of the vehicle and alert the driver.
Although the term "Driver Assistance System" is frequently used in relation to cars and high-end vehicles, it is rarely applied to two-wheelers. The cost of integrating such systems into low-cost vehicles and the complexity of the required space account for this in large part. The cost of implementing driving assistance in two-wheelers is one of the main challenges. The cost of high-end and low-priced vehicles differs significantly, and two-wheelers are typically seen as more affordable than cars. The cost of developing, manufacturing, and integrating driving assistance technologies into vehicles can be high, and this expense is frequently passed on to the customer. As a result, driving assistance systems are typically only found in high-end two-wheelers and not in every two-wheeler.
The complexity of the space required to integrate driving assistance systems in two-wheelers presents another difficulty. Because two-wheelers are much smaller than cars, there isn't much room to mount sensors, cameras, and other components. Furthermore, the distinct design of two-wheelers makes it challenging to mount such systems without sacrificing the vehicle's functionality or appearance.
The operation of driving assistance systems in two-wheelers is also fraught with difficulties. For instance, many driving assistance systems in cars rely on radar or lidar sensors, which may not be appropriate for two-wheelers due to space constraints. Furthermore, features like automatic emergency braking might not be suitable for a two-wheeler because abrupt braking could result in the rider losing control and colliding. There is still scope for improvement in creating driving assistance systems for two-wheelers despite the availability of many similar systems in the market.
In the existing systems for driving assistance provided in vehicles, the methodology is based on the identification of road markers, traffic signs and the presence of vehicles within or outside the specified limit of the road marker. Generally, road markers, dividers or other driving indicators can easily be found on city streets, but it is uncertain whether they will be found on every road. In some cases, the road markers or traffic signs wear off over time; hence the existing methodology fails and provides incorrect and inaccurate driving assistance to the rider.
In other existing systems for driving assistance provided in vehicles, driving assistance relies on communication among the road, other vehicles and the host vehicle; therefore, in case of failure of any one of these parametric components, the driving assistance system fails to guide the rider.
In the prior arts, the assistance is provided based on information related to road markers captured by the camera or sensed information related to the presence of obstacles on the road, but consideration of the road driving condition in real time for providing driving assistance is still unexplored in the prior arts. Sensor-based driving assistance systems involve complications in terms of positioning of sensors, especially in the case of two-wheelers, and are further an expensive affair for two-wheelers due to the incorporation of too many sensors.
The reliance on traffic signs and lane markings is another challenge in developing Driving Assistance Systems for two-wheelers, as many roads across the world are still being developed and many areas lack proper lane markings or traffic signs, making the existing driver assist systems inaccurate and inefficient.
One of the most important factors influencing vehicle controllability is road condition. Different types of roads have varying degrees of grip, which can affect vehicle handling and stability. A wet or slippery road, for example, can reduce friction between the tyres and the road, making it more difficult for the driver to control the vehicle. A bumpy or uneven road surface, on the other hand, can cause the vehicle to lose stability and affect its handling. Another important factor influencing vehicle controllability is vehicle speed. The greater the vehicle's speed, the more difficult it is for the driver to maintain control. At high speeds, even minor changes in road conditions can have a significant impact on the vehicle's handling and stability.
In modern vehicles, various sensors and technologies are used to monitor road conditions. Traction control systems, anti-lock braking systems, and stability control systems are examples of systems that can detect and compensate for changes in road conditions. Incorporating these sensors in two-wheelers can become costly. The task can be simplified by using the same camera that is used for detecting vehicles to identify the road conditions. This helps in reducing the complexity of incorporating many sensors and also helps in keeping costs low.
Integrating advanced sensors into two-wheelers can be a time-consuming and costly task. However, it is still critical to ensure that the rider has all of the information required to maintain control and stay safe on the road. Using imaging equipment that is already being used to detect vehicles on the road to assess road conditions is one method of simplifying the process of adding sensors. The camera can be trained to recognize various road surfaces, such as wet, dry, or gravel, and the vehicle's performance can be adjusted accordingly.
A trained convolutional neural network (CNN) is capable of detecting road conditions with an accuracy of approximately 80%. This means that in 80% of cases the model correctly identifies the road condition; however, there is still a 20% chance of false positives or false negatives occurring. A false positive occurs when the model predicts a road condition that does not exist in reality. For example, if the road is dry asphalt but the model predicts gravel, it may generate unnecessary alerts and increase the rider's workload. False negatives occur when the model fails to predict an actual road condition, such as a wet or slippery road. In some cases, however, false predictions may actually improve safety: if the model predicts gravel on a dry asphalt road, the rider may slow down and ride more cautiously, and the incorrect prediction may actually help safeguard the rider. As a result, the 80% accuracy rate should be viewed in context, as should the real-world impact of false positives and false negatives. These errors may have different consequences depending on the situation, so it is critical to design the system accordingly.
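By way of a non-limiting illustration only, the following sketch shows how such a camera-based road condition classifier could be assembled as a small convolutional neural network. The layer sizes, input resolution and label set (dry asphalt, wet asphalt, gravel) are assumptions made for the example and are not prescribed by the present disclosure.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

ROAD_CLASSES = ["dry_asphalt", "wet_asphalt", "gravel"]  # assumed label set

def build_road_condition_cnn(input_shape=(128, 128, 3)):
    """Small CNN that classifies a road-surface crop into one of the assumed classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(len(ROAD_CLASSES), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Such a model, once trained on labelled road-surface images, would output a per-class probability for each incoming frame, from which the most likely road condition is taken.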
Various other solutions have been provided according to the existing arts, but all these solutions still face challenges because of their limited applications and inefficient functioning. It is, therefore, important to work on an alternative solution to develop a system and method for providing driving assistance to the driver of a vehicle which functions irrespective of the presence of lane markers on the road. There is also a need to provide a system and method which will provide accurate driving assistance independent of the areas in which the vehicle is being driven and obviates the complexity and challenges of the prior arts.
Summary of the Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter. Nor is this summary intended to be used to limit the claimed subject matter’s scope.
Both the foregoing summary and the following detailed description provide examples and are explanatory only. Accordingly, the foregoing summary and the following detailed description should not be considered to be restrictive. Further, features or variations may be provided in addition to those set forth herein. For example, embodiments may be directed to various feature combinations and sub-combinations described in the detailed description.
It is one of the objectives of the present invention to provide a system and method for driving assistance in a two-wheeler which is capable of preventing impending collisions with obstacles or accidents on the road.
It is one of the objectives of the present invention to provide a system and method for driving assistance in a two-wheeler which is capable of accurately guiding the vehicle’s driver towards safe driving.
It is one of the objectives of the present invention to provide a system and method for driving assistance in a two-wheeler whose implementation and interpretation are cost friendly.
It is one of the objectives of the present invention to provide a system and method for driving assistance in a two-wheeler that avoids the use of additional sensors for providing driving assistance to the driver.
It is one of the objectives of the present invention to provide a system and method for driving assistance in a two-wheeler which is capable of estimating a possible collision, if any, and alerting the driver about the same in real time.
As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.
Accordingly, while embodiments are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present disclosure and are made merely for the purposes of providing a full and enabling disclosure. The detailed disclosure herein of one or more embodiments is not intended, nor is to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing here from, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection be defined by reading into any claim a limitation found herein that does not explicitly appear in the claim itself.
Additionally, it is important to note that each term used herein refers to that which an ordinary artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein—as understood by the ordinary artisan based on the contextual use of such term—differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the ordinary artisan should prevail.
Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” or “/” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.”
In accordance with one embodiment of the present invention, there is provided a system for providing driving assistance to a vehicle’s driver, comprising an image/video capturing means located on front portion of the vehicle and configured to capture information related to a plurality of real time obstacle(s) present on road and real time road driving conditions, a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, and a guiding indicator in signalling communication with the processor for assisting driver in safe driving, wherein the processor is configured to guide driver through the guiding indicator based on analysing the information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle.
In accordance with another embodiment of the present invention, there is provided a system for providing driving assistance to a vehicle’s driver, comprising an image/video capturing means located on front and/or side front portion of the vehicle and configured to capture information related to a plurality of real time obstacle(s) present on road and real time road driving conditions, a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, and a guiding indicator in signalling communication with the processor for assisting driver in safe driving, wherein the processor is configured to guide driver through the guiding indicator based on analysing the information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, wherein said image capturing means is provided with an identification module configured to identify characteristics of the obstacle(s) and road driving conditions and transmit the same to the processor.
In accordance with one embodiment of the present invention, there is provided a system for providing driving assistance to a vehicle’s driver, comprising an image/video capturing means located on front and/or side front portion of the vehicle and configured to capture information related to a plurality of real time obstacle(s) present on road and real time road driving conditions, a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, and a guiding indicator in signalling communication with the processor for assisting driver in safe driving, wherein the processor is configured to guide driver through the guiding indicator based on analysing the information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, wherein the guiding indicator is provided on the vehicle in close visible proximity of driving position of the driver and configured to assist driver in safe driving through alert signals generated based on the determined alert zone as received from the processor, wherein the alert zone includes amber zone, red zone and green zone.
In accordance with one embodiment of the present invention, there is provided a system for providing driving assistance to a vehicle’s driver, comprising an image/video capturing means located on front and/or side front portion of the vehicle and configured to capture information related to a plurality of real time obstacle(s) present on road and real time road driving conditions, a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, and a guiding indicator in signalling communication with the processor for assisting driver in safe driving, wherein the processor is configured to guide driver through the guiding indicator based on analysing the information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle, wherein the processor determines/categorises an alert zone based on the road driving conditions, vehicle’s braking distance and speed of ego vehicle, and characteristics of the obstacle(s) are identified and taken into consideration at a later stage to determine if the object lies in the alert zone.
In accordance with one of the above embodiments of the present invention, the processor is provided with a storage unit pre-stored with the details of a range of obstacle(s) predetermined to be present on the road and the multiple possible road driving conditions, wherein the system used for identifying vehicles, pedestrians and animals uses pre-determined features of these objects to identify their category.
In accordance with one of the above embodiments of the present invention, wherein the storage unit is configured to constantly store information related to real time obstacle(s) and road driving conditions captured by the image capturing means, and the corresponding braking distance so as to build a historical database for providing accurate assistance in safe driving in the future.
In accordance with another embodiment of the present invention, there is provided a method for providing driving assistance to a vehicle’s driver, comprising capturing information related to real time obstacle(s) present on road and real time road driving conditions from an image capturing means, and vehicle braking distance from an electronic control unit, identifying characteristics of the obstacle(s) and real time road driving conditions, and transmitting the captured and identified information to a processor, and guiding the driver to safe driving by way of a guiding indicator provided on the vehicle, based on analysing the identified characteristics of the obstacle(s) and road driving conditions in combination with real time braking distance of the vehicle, wherein said method comprises determining a plurality of alert zones based on identified characteristics of the obstacle(s) and road driving conditions in combination with real time braking distance of the vehicle, and wherein said method comprises generating and indicating alert signals through the guiding indicator, based on the determined alert zone as received from the processor.
Brief Description of the Drawings
Figure 1 shows a two-wheeler vehicle with driver assistance system.
Figure 2 shows a flowchart for stage 1 executed by driving assistance system for determining alert zones based on road conditions.
Figure 3 shows a flowchart for stage 2 executed by driving assistance system for determining alert based on vehicle’s acceleration or deceleration.
Figure 4 shows a flowchart for alert based on determining vehicles inside the alert zone that are decelerating.
Figure 5 shows a flowchart for determining the real world position of the ego vehicle based on stage 1 output and input from camera 2.
Detailed Description of the Invention
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While many embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. The present disclosure contains headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subject matter disclosed under the header.
In the present invention, the driving assistance system is intended to locate and identify obstacles in proximity, such as pedestrians, other vehicles, road dividers, danger-causing articles or substances on the road, and other obstacles. Based on the obstacle's position and the speed of the vehicle, the processor decides whether the identified obstacle is in the danger zone of the driver’s vehicle and whether there is a chance of a collision.
In accordance with one embodiment of the present invention, the image capturing means is configured to acquire image of the obstacle on the road and perform pre-processing of the acquired image followed by its feature extraction and post-processing in order to identify the obstacle along with its attributes.
In the present invention, image acquisition is the process of acquiring images with a camera that is mounted on a moving vehicle. These cameras are an essential part of driving assistance systems, which typically use high-resolution digital cameras that take pictures in real time and are designed to work in a variety of lighting and weather situations, including rain and low light. The identification module of the driver assist system processes the pictures taken by the cameras and is intended to detect and track objects in the images, such as other vehicles and pedestrians.
Generally, the quality of the images that the cameras record has a significant impact on the accuracy and dependability of the driver assist systems, therefore, it becomes crucial to use top-notch cameras that are calibrated and set up properly for the particular application.
In one of the embodiments, the images are captured using a 5-megapixel USB camera, which can then be processed, wherein the camera is selected such that the captured frames per second (FPS) are sufficient to maintain a good accuracy level in real time.
In the present invention, in order to increase the accuracy of the images captured by the camera, the camera is calibrated. Calibration is the process of determining the intrinsic and extrinsic parameters of the camera system, wherein the extrinsic parameters describe the position and orientation of the camera in relation to the scene, whereas the intrinsic parameters describe the focal length, image sensor format, and lens distortion. While calibrating a camera, images of a calibration pattern with known dimensions are taken, and the camera's parameters are inferred from these images. A planar pattern with a known size and pattern, such as a grid or checkerboard, most often serves as the calibration pattern. Images of the calibration pattern are taken while the camera is situated at various angles and distances from the calibration pattern throughout the calibration process, and the camera's parameters, such as the focal length, image sensor format, and lens distortion, are then estimated using these images.
Further, using the obtained patterns in the output images, the intrinsic and extrinsic parameters of the camera are calculated, which are then used to correct for image distortion. This ensures that the output images are the most accurate representation of the actual surroundings.
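A minimal, non-limiting sketch of such a checkerboard calibration, assuming OpenCV is used and that calibration shots are stored in a hypothetical "calibration/" folder with a 9x6 inner-corner pattern and 25 mm squares, is given below for illustration only.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 25 mm squares.
PATTERN = (9, 6)
SQUARE_MM = 25.0

# Known 3-D positions of the corners on the flat calibration board (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):   # assumed folder of calibration images
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K, distortion coefficients, and per-image extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Use the estimated parameters to remove lens distortion from a road frame.
undistorted = cv2.undistort(cv2.imread("frame.jpg"), K, dist)
```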
In accordance with a further embodiment of the present invention, edge detection of the obstacle is performed and is employed to extract crucial information from the image. Edges in an image are areas where there is a noticeable shift in colour or intensity between adjacent regions; edge detection is performed by locating these regions and emphasising them in the image. With the aid of input images, edge detection is helpful in comprehending the immediate environment and further assists in concentrating on the most pertinent and significant features, such as object boundaries or other salient features in the image, as opposed to taking into account all the information in an image.
Further, once edges have been identified in an image, they can be used for a number of operations, including object detection, object recognition, and image segmentation. For instance, performing object detection enables locating and identifying objects in an image by using edge information, and in order to match edges in the identified image with recognised object templates in a memory database, edge-based object recognition techniques are used.
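Purely by way of illustration, the edge detection step may be sketched as follows using OpenCV's Canny detector; the blur kernel, the thresholds and the input file name are assumed values, not requirements of the present disclosure.

```python
import cv2

frame = cv2.imread("road_frame.jpg")             # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
edges = cv2.Canny(blurred, 50, 150)              # thresholds chosen empirically

# The detected edges can then feed segmentation or boundary-based matching.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```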
Furthermore, an object detection technique is used in the present invention, wherein said technique enables detecting objects in digital images or videos, locating objects of interest within an image and classifying them into predefined categories. Object detection techniques identify objects in images or videos using various approaches such as deep learning, machine learning, and image processing, which further comprise several steps such as image pre-processing, feature extraction, object detection, and post-processing. During the pre-processing step, the image is prepared for object detection by adjusting its brightness, contrast, and colour balance to improve image quality. During the feature extraction step, key features within the image such as edges, corners, and textures are identified, and these features are used by object detection techniques to detect and locate objects in images. This is accomplished through the use of various techniques such as sliding window detection, region proposal networks, and object detection networks.
Further, the object detection technique generates a set of bounding boxes that enclose the objects of interest in the image, and the object detection results are refined in the post-processing step to improve accuracy and eliminate false detections. Post-processing includes non-maximum suppression, which removes redundant detections, and thresholding, which eliminates detections that fall below a certain confidence score.
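The post-processing described above may, for illustration, be sketched as a plain non-maximum suppression routine; the IoU and confidence thresholds below are assumed example values.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5, score_thresh=0.3):
    """Keep the highest-scoring boxes and drop overlapping duplicates.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidence values.
    """
    keep_mask = scores >= score_thresh             # confidence thresholding step
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou < iou_thresh]        # drop redundant detections
    return boxes[keep], scores[keep]
```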
In accordance with another embodiment of the present invention, in order to identify objects in pictures and videos, a cutting-edge object detection algorithm based on deep learning methods is employed. The algorithm is based on the convolutional neural network (CNN) family and is created to perform object detection tasks with high accuracy and efficiency. In order to process each anchor, a series of convolutional layers are applied after the input image is divided into a grid of smaller regions known as anchors by the object detection technique. A set of bounding boxes that depict the location and size of the objects in the image are produced by these layers. Feature pyramid networks (FPNs) are a method used by the object detection technique to detect objects at various scales and resolutions. FPNs create a set of feature maps with various resolutions by combining the features from various CNN layers. Because of this, the technique recognises both small and large objects in a single image. Additionally, the object detection technique employs a method known as BiFPN, or bidirectional feature pyramid network, wherein BiFPN is a more effective variant of FPN that creates high-quality feature maps at various scales by combining top-down and bottom-up pathways.
Further, there are several models in the EfficientDet family, ranging from EfficientDet-D0 to EfficientDet-D7, with progressively more parameters and higher accuracy. These models can recognise and categorise objects in different image and video formats because they are trained on sizable datasets like COCO and Pascal VOC. On several benchmark datasets, EfficientDet has demonstrated state-of-the-art performance, making it a potent object detection algorithm. It is a desirable option for real-time applications where efficiency and speed are essential due to its high accuracy and efficiency.
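By way of a hedged illustration only, and assuming the publicly hosted EfficientDet-D0 detector on TensorFlow Hub is used, inference on a single camera frame could look roughly as follows; the model URL, the confidence cut-off and the input image path are assumptions of the example rather than features of the present disclosure.

```python
import cv2
import tensorflow as tf
import tensorflow_hub as hub

# Assumed checkpoint: EfficientDet-D0 as hosted on TensorFlow Hub.
detector = hub.load("https://tfhub.dev/tensorflow/efficientdet/d0/1")

frame = cv2.cvtColor(cv2.imread("road_frame.jpg"), cv2.COLOR_BGR2RGB)
batch = tf.expand_dims(tf.convert_to_tensor(frame, dtype=tf.uint8), 0)

result = detector(batch)
boxes = result["detection_boxes"][0].numpy()      # normalised [ymin, xmin, ymax, xmax]
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy()  # COCO class indices

confident = scores > 0.4                           # illustrative confidence cut-off
detections = boxes[confident]
```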
In accordance with another embodiment of the present invention, when an object is recognised in a still or moving picture, its position is initially encoded in image coordinates, which are typically expressed as pixel coordinates, wherein these coordinates show where the object is positioned in relation to the image's origin. These pixel coordinates, however, might not actually reflect the actual location of the object in the real world. Therefore, it is necessary to convert the object coordinates from image coordinates to world coordinates in order to obtain the relative positions of the nearby objects.
Further, the coordinate transformation and image rectification is described as below:
Coordinate transformation entails translating the coordinates of the objects in the image to their real-world locations, which is accomplished by perspective transformation using transformation matrices. Further, camera calibration is based on estimating the intrinsic and extrinsic parameters of the camera and applying them to determine the true representation of the world in a 2-dimensional frame, wherein the extrinsic parameters describe the camera's position and orientation in relation to the world coordinate system, whereas the intrinsic parameters describe the camera's internal characteristics, such as focal length and pixel size. Further, various other techniques can be used to convert object coordinates from image coordinates to world coordinates once the camera's parameters have been estimated. In order to do this, the object coordinates must be back-projected into three dimensions from the image plane of the camera, and the resulting world coordinates then show the object's actual location.
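A minimal sketch of such an image-to-ground-plane conversion is given below for illustration, assuming four image points on the road surface whose ground-plane positions (in metres, in the ego-vehicle frame) were measured once; the numeric correspondences are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical correspondences: four pixels on the road surface and their
# measured ground-plane positions in metres relative to the ego vehicle.
img_pts = np.float32([[420, 720], [860, 720], [700, 430], [580, 430]])
world_pts = np.float32([[-1.5, 0.0], [1.5, 0.0], [1.5, 20.0], [-1.5, 20.0]])

# Homography mapping image coordinates to ground-plane (bird's eye) coordinates.
H = cv2.getPerspectiveTransform(img_pts, world_pts)

def pixel_to_world(u, v):
    """Map the bottom-centre pixel of a detection box to ground-plane metres."""
    pt = np.float32([[[u, v]]])
    x, y = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(x), float(y)
```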
Similarly, another procedure is used to get a bird's eye view of the road and the objects on it. This aids in analysing the actual position of the objects as well as tracking these objects in order to calculate their relative speed. Further, it is simpler to analyse the actual position of the objects and follow their movement by getting a bird's eye view of the road and the objects that are present on it. Due to the removal of perspective distortion and lens distortion, this view offers a more accurate and thorough perspective of the scene.
In order to calculate an object's relative speed, which is necessary for applications like advanced driver assistance systems (ADAS), one can track the objects in the bird's eye view. By comparing the positions of the objects in successive frames and calculating the distance they have travelled, it is possible to determine the relative speed of the objects; potential collisions can be identified using this information, and the necessary steps can be taken to prevent them.
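For illustration, and assuming the ground-plane positions produced by the transform sketched earlier, the relative speed of a tracked object between two frames may be estimated as follows; the 20 frames-per-second interval is taken from the embodiment described elsewhere in this specification.

```python
def relative_speed(prev_pos, curr_pos, dt=1.0 / 20.0):
    """Longitudinal relative speed (m/s) of a tracked object between two frames.

    prev_pos / curr_pos are (x, y) ground-plane positions in metres from
    pixel_to_world(); dt is the frame interval (0.05 s at 20 FPS).
    Positive values mean the object is pulling away; negative, that it is closing in.
    """
    return (curr_pos[1] - prev_pos[1]) / dt
```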
Further, object tracking is performed by the driver assist system which seeks to detect and follow objects of interest in a video sequence over time. This can include tracking other vehicles, pedestrians, cyclists, and other obstacles on the road.
Centroid tracking is an object tracking algorithm used in video sequences which begins by using an object detection algorithm to detect objects in the first frame of the video sequence and after detecting the objects, the algorithm assigns each one a unique ID and computes their centroids (the centre point of the detected object's bounding box), wherein the algorithm predicts the new location of the objects in subsequent frames of the video by computing the Euclidean distance between the centroids of the objects in the current frame and the centroids in the previous frame, wherein if the distance between two centroids is less than a certain threshold, the algorithm treats the objects as identical and updates their centroid positions to reflect the new position.
Further, in accordance with another embodiment, when a new object is detected in the video sequence, the algorithm assigns it a new ID and tracks its centroid position in subsequent frames and also includes mechanisms for dealing with occlusions, which occur when objects temporarily disappear from view and reappear later.
In the present invention, the centroid tracking step is computationally efficient and capable of tracking multiple objects at the same time. After detecting and tracking objects in a video sequence, motion parameters such as position, velocity, and acceleration are estimated, and these parameters are then used to forecast the future motion of objects and make decisions based on their behaviour.
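A minimal, illustrative sketch of such a centroid tracker follows; the maximum matching distance is an assumed parameter, and the greedy nearest-centroid matching shown is one possible realisation rather than a prescribed one.

```python
import numpy as np

class CentroidTracker:
    """Minimal centroid tracker: matches detections across frames by nearest centroid."""

    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}                 # object_id -> last known centroid (x, y)
        self.max_distance = max_distance

    def update(self, boxes):
        """boxes: iterable of (x1, y1, x2, y2); returns {object_id: centroid}."""
        centroids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]
        updated = {}
        for c in centroids:
            # Match to the closest unclaimed existing object within the threshold.
            best_id, best_dist = None, self.max_distance
            for oid, prev in self.objects.items():
                d = np.hypot(c[0] - prev[0], c[1] - prev[1])
                if d < best_dist and oid not in updated:
                    best_id, best_dist = oid, d
            if best_id is None:           # unmatched detection -> assign a new ID
                best_id = self.next_id
                self.next_id += 1
            updated[best_id] = c
        self.objects = updated            # objects that left the frame are dropped
        return self.objects
```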
In accordance with a further embodiment of the present invention, the processor calculates the relative distance between detected objects, which in turn helps in determining the level of danger and potential collision risks, wherein the position of the object in the image is transformed into real-world coordinates using perspective transformation and other techniques before the relative distance of the objects is calculated. After determining the object's position in the real world, the distance between the object and the driver’s vehicle is calculated, and the relative speed of the object is calculated using information from the video sequence, in addition to the distance.
Further, the velocity of the identified object is estimated by comparing its position in successive frames, and this data can then be used to calculate the collision time and other safety parameters. In the present Driving Assistance System, determining the relative distance and speed of objects aids in decision-making and in executing appropriate responses to potential collision risks; accurate and trustworthy distance and speed estimation is therefore essential for the system to operate properly and increase traffic safety.
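By way of illustration, the collision time mentioned above may be estimated from the relative distance and relative speed as sketched below; the sign convention (a negative relative speed means the gap is shrinking) is an assumption of the example.

```python
def time_to_collision(distance_m, relative_speed_mps):
    """Estimated time to collision in seconds; None when the object is not closing in.

    distance_m comes from the ground-plane transform; relative_speed_mps comes from
    frame-to-frame tracking and is negative when the gap is shrinking.
    """
    if relative_speed_mps >= 0:
        return None                       # object is holding distance or pulling away
    return distance_m / -relative_speed_mps
```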
In order to determine the likelihood of a collision and take the necessary precautions to avoid one, the driving assistance system estimates the relative speed of detected vehicles, which indicates how fast each is moving in relation to the driver’s vehicle, and determines whether an object is approaching or edging away from the driver’s vehicle. In this manner, the driver may be warned or the driving assistance system may be activated using this information.
In the present invention, the driving assistance system is capable of determining danger zones for the vehicle based on the road condition result predicted by the road monitoring system and the speed data received from the vehicle. Following the definition of the danger zones, the model is able to correctly identify various objects in its surroundings and determine their position using perspective transformation, wherein the perspective transformation ensures that the location determined is in real-world coordinates rather than image coordinates. Once the vehicle speed is determined, the system calculates the detected vehicle's relative distance and speed in relation to the driver’s vehicle, and once all of the necessary factors are known, the system determines whether the detected object is in the danger zone and whether the object has a chance of colliding with the driver’s vehicle.
In accordance with one embodiment of the present invention, for applications requiring immediate action based on analysed data, real-time processing is required. The camera used for this application records at a rate of 20 frames per second, and to keep up with real-time conditions, the hardware used to process the data is configured to match an inferencing speed of at least 20 frames per second. If the inferencing processor or hardware is unable to match the required frame rate, the inferencing will lag behind the real-time situation. This means that alerts and actions based on the input will be delayed, potentially risking the rider's safety.
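For illustration only, the real-time budget check described above may be sketched as follows; the detector callable is hypothetical, and the 50 ms per-frame budget simply follows from the 20 frames per second mentioned above.

```python
import time

FRAME_BUDGET_S = 1.0 / 20.0               # camera delivers 20 frames per second

def process_frame(frame, detector):
    """Run one inference and report whether it fits in the 50 ms real-time budget."""
    start = time.perf_counter()
    detections = detector(frame)           # hypothetical detector callable
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        # Inference is lagging behind the camera; alerts would reach the rider late.
        print(f"warning: inference took {elapsed * 1000:.1f} ms (> 50 ms budget)")
    return detections
```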
For example, if the camera detects a slippery road and sends an alert to the rider too late, the rider may be unable to react in time, potentially leading to an accident. As a result, having hardware that can match the required frame rate is critical to ensuring that inference is done in real-time. One approach is to use specialised hardware optimised for real-time processing, such as GPUs or dedicated inference accelerators. These hardware solutions are intended for real-time applications because they are configured to handle large amounts of data and perform calculations quickly. When selecting hardware, it is also important to consider the power and thermal constraints to ensure that it can operate efficiently.
The present invention is directed to a front-facing collision avoidance system, which is intended to detect obstacles and warn the driver to avoid collisions. Accidents can, however, happen from a variety of directions, such as the side or the back of the vehicle. Therefore, the scope of the present invention can be expanded to operate in various scenarios to address this issue.
In another embodiment, a driver assist system is created, for instance, to detect and warn drivers of vehicles that are in their blind spots. Blind spots can be very dangerous when changing lanes or merging into traffic because they are regions that the driver cannot see through their mirrors or windows. By optimising the system to run on a less expensive processor, the system's cost can also be decreased. This may contribute to a higher level of system affordability and accessibility. To make sure the processor operates within the ideal temperature range, it is also critical to take into account the cooling aspect of the device. The processor can be harmed by overheating, which will reduce its efficiency. The cooling system can be used in future work to make sure the processor operates within the ideal temperature range. This can be accomplished by including a cooling system to dissipate the heat produced by the processor, such as a heat sink or a fan. Addressing this problem will enable the system to run more effectively and dependably, enhancing both its overall performance and lifespan.
In accordance with another embodiment of the present invention, there is provided a collision avoidance system designed to improve road safety by identifying road conditions, detecting objects in the vehicle's visible zone, and warning the driver about potential collision risks based on the vehicle's speed. To detect the environment and identify potential hazards or obstacles, the system uses a camera. Once the data from these input frames has been analysed, the system applies sophisticated techniques to calculate the likelihood of a collision based on the speed, distance, and direction of the vehicle. The system automatically warns the driver through visual and audible warnings, such as flashing lights, beeps, or voice commands, if it determines that there is a high risk of collision. This enables the driver to make the necessary manoeuvres to avoid or lessen the collision, such as applying the brakes or veering away from the object. The collision avoidance system is a significant improvement in safety technology because it lessens the severity of collisions when they do happen and helps to prevent accidents. This system can enhance driver awareness and reaction times by giving drivers real-time information about potential risks while driving, ultimately saving lives and lowering the number of accidents and fatalities on our roads.
In accordance with another embodiment of the present invention, the data obtained through calculations in real time may be stored for other use cases as well, wherein the disclosed system stores the vehicle’s speed and location data only till the vehicle is present in the peripheral vision. Once the detected vehicle is out of the camera frame, the speed and location data of that object is erased from the memory.
In accordance with a further embodiment of the present invention, based on the region the obstacle is in and the relative velocity of the obstacle, there may be a high, medium or low chance of collision, wherein a value may be assigned to ‘high’, ‘medium’ and ‘low’ based on empirical evidence or testing results. For instance, if during testing it is found that the rider can evade the obstacle in 19 out of 20 cases of a ‘high’ alert, it may be said that a high alert indicates a 95% chance of collision.
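A hedged, illustrative mapping from zone membership and closing speed to an alert level is sketched below; the numeric thresholds are assumed placeholders that would in practice be replaced by values derived from the empirical testing described above.

```python
def collision_alert_level(in_alert_zone, closing_speed_mps):
    """Map zone membership and closing speed to an alert level.

    The numeric cut-offs below are illustrative only; in practice they would be
    tuned from empirical evidence or test-track results.
    """
    if not in_alert_zone:
        return "low"
    if closing_speed_mps > 5.0:            # assumed threshold for a fast-closing obstacle
        return "high"
    if closing_speed_mps > 1.0:
        return "medium"
    return "low"
```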
In accordance with a preferred embodiment of the present invention, referring to Figure 1, there is provided a two-wheeler vehicle (ego vehicle) capable of providing driving assistance to the driver, wherein camera 1 and camera 2 are provided on the front headlight and on the mudguard of the vehicle for capturing details with regard to the obstacles on the road and the road driving conditions, and wherein camera 1 and camera 2 are provided with a processor which produces a visual and/or haptic feedback based on its analysis of the details captured by camera 1 and camera 2.
In accordance with another preferred embodiment of the present invention, referring to figure 2, driving assistance to the driver is provided by way of two stages – stage 1 and stage 2, wherein said stage 1 comprises acquiring an image with the help of camera 1, identifying real time road conditions employing a deep learning model, determining bad road driving conditions with more than 50% confidence, and determining alert zones based on the road conditions, wherein, in case bad road driving conditions are determined with less than 50% confidence, no alert is provided and default conditions are used.
Further, referring to figure 3, an image is acquired by camera 2 for analysis based on the alert zones and sent for processing to stage 2, wherein said stage 2 comprises acquiring the image with the help of camera 2, identifying obstacles (person and/or animal and/or vehicle) in the captured frame and checking their presence inside the alert zone, and upon finding an obstacle inside the alert zone, determining the speed of the identified obstacle relative to the ego vehicle and checking if the vehicle is accelerating; if it is accelerating, no alert is produced, as the ego vehicle will move out of the alert zone if it keeps on accelerating, otherwise an alert is produced, and wherein if no obstacles are found within the alert zone, there is no determination of the speed of the obstacles relative to the ego vehicle till an obstacle enters the alert zone.
In accordance with a preferred embodiment of the present invention, referring to figure 4, camera 1 acquires details of the road for analysis to identify the road driving conditions, followed by obtaining the vehicle speed from the ECU of the vehicle, which in turn determines the alert zones based on the stopping distance calculated from the vehicle speed and the road driving conditions, wherein camera 2 further captures images to identify obstacles from the captured frames, which are analysed together with the alert zone obtained from the analysis of camera 1, and wherein, in case the identified obstacle (for example, a vehicle) is within the stopping distance of the ego vehicle, the processor determines the vehicles inside the alert zone that are decelerating and accordingly produces an alert.
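By way of a non-limiting illustration, the stopping distance and the resulting alert zone length may be computed from the vehicle speed and the road condition predicted by camera 1 as sketched below; the friction coefficients and the rider reaction time are assumed example values, not prescribed parameters.

```python
def stopping_distance(speed_mps, friction_coeff, reaction_time_s=1.0, g=9.81):
    """Stopping distance = reaction distance + braking distance (standard formula)."""
    return speed_mps * reaction_time_s + (speed_mps ** 2) / (2 * friction_coeff * g)

# Illustrative friction coefficients per road condition predicted by camera 1.
FRICTION = {"dry_asphalt": 0.7, "wet_asphalt": 0.4, "gravel": 0.35}

def alert_zone_length(speed_mps, road_condition):
    """Length (m) of the alert zone ahead of the ego vehicle for the current frame."""
    mu = FRICTION.get(road_condition, 0.7)    # default to dry asphalt if unknown
    return stopping_distance(speed_mps, mu)

# Example: at 50 km/h (about 13.9 m/s) on a wet road, the zone extends further
# ahead than on dry asphalt, so obstacles are flagged earlier.
print(alert_zone_length(13.9, "wet_asphalt"))
```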
In accordance with another preferred embodiment of the present invention, referring to figure 5, input from camera 2 is fed into the processor, which identifies objects in the frame captured by camera 2, determines the pixel position of the identified objects, transforms the pixel position into a real-world bird's eye point of view and determines the real-world position of the vehicle with respect to the ego vehicle.
While the invention is amenable to various modifications and alternative forms, some embodiments have been illustrated by way of example in the drawings and are described in detail above. The intention, however, is not to limit the invention by those examples and the invention is intended to cover all modifications, equivalents, and alternatives to the embodiments described in this specification.
The embodiments in the specification are described in a progressive manner and focus of description in each embodiment is the difference from other embodiments. For same or similar parts of each embodiment, reference may be made to each other.
It will be appreciated by those skilled in the art that the above description was in respect of preferred embodiments and that various alterations and modifications are possible within the broad scope of the appended claims without departing from the spirit of the invention with the necessary modifications.
Based on the description of disclosed embodiments, persons skilled in the art can implement or apply the present disclosure. Various modifications of the embodiments are apparent to persons skilled in the art, and general principles defined in the specification can be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments in the specification but intends to cover the most extensive scope consistent with the principle and the novel features disclosed in the specification.
Claims:
We claim:
1. A system for providing driving assistance to a vehicle’s driver, comprising:
an image/video capturing means located on front and/or side front portion of the vehicle and configured to capture information related to a plurality of real time obstacle(s) present on road and real time road driving conditions;
a processor communicatively coupled with the image capturing means and an electronic control unit of the vehicle for receiving information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle; and
a guiding indicator in signalling communication with the processor for assisting driver in safe driving,
wherein the processor is configured to guide driver through the guiding indicator based on analysing the information related to characteristics of the obstacle(s) and road driving conditions, and braking distance of the vehicle.
2. The system as claimed in claim 1, wherein said image capturing means is provided with an identification module configured to identify characteristics of the obstacle(s) and road driving conditions and transmit the same to the processor.
3. The system as claimed in claim 1, wherein the guiding indicator is provided on the vehicle in close visible proximity of driving position of the driver.
4. The system as claimed in claim 1, wherein the processor determines/categorises an alert zone based on the road driving conditions and braking distance of the vehicle.
5. The system as claimed in claim 1, wherein guiding indicator is configured to assist driver in safe driving through alert signals generated based on the determined alert zone as received from the processor.
6. The system as claimed in claim 4, wherein the alert zone includes amber zone, red zone and green zone.
7. The system as claimed in claim 1, wherein the processor is provided with a storage unit pre-stored with the details of a range of obstacle(s) predetermined to be present on the road and the multiple possible road driving conditions.
8. The system as claimed in claim 7, wherein the storage unit is configured to constantly store information related to real time obstacle(s) and road driving conditions captured by the image capturing means, and the corresponding braking distance so as to build a historical database for providing accurate assistance in safe driving in the future.
9. A method for providing driving assistance to a vehicle’s driver, comprising:
capturing information related to real time obstacle(s) present on road and real time road driving conditions from an image capturing means, and vehicle braking distance from an electronic control unit;
identifying characteristics of the obstacle(s) and real time road driving conditions, and transmitting the captured and identified information to a processor; and
guiding the driver to safe driving by way of a guiding indicator provided on the vehicle, based on analysing the identified characteristics of the obstacle(s) and road driving conditions in combination with real time braking distance of the vehicle.
10. The method as claimed in claim 9, wherein said method comprises of determining a plurality of alert zones based on identified characteristics of the obstacle(s) and road driving conditions in combination with real time braking distance of the vehicle.
11. The method as claimed in claim 9, wherein said method comprises of generating and indicating alert signals through guiding indicator, based on the determined alert zone as received from the processor.