ABSTRACT
A REAL-TIME TRAFFIC SIGN RECOGNITION AND INTEGRATION SYSTEM FOR AUTONOMOUS VEHICLE NAVIGATION IN DIVERSE AND DYNAMIC DRIVING ENVIRONMENTS
The present invention relates to a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments. This is an on-board system for autonomous vehicles that seamlessly integrates real-time traffic sign recognition with navigation. It improves upon existing navigation systems by providing a more comprehensive understanding of the driving environment through real-time traffic sign interpretation.
To be published with Figure 1
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
The Patent Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
A REAL-TIME TRAFFIC SIGN RECOGNITION AND INTEGRATION SYSTEM FOR AUTONOMOUS VEHICLE NAVIGATION IN DIVERSE AND DYNAMIC DRIVING ENVIRONMENTS
2. APPLICANT (S)
S. No. NAME NATIONALITY ADDRESS
1 NMICPS Technology Innovation Hub On Autonomous Navigation Foundation IN C/o Indian Institute of Technology Hyderabad, Kandi, Sangareddy, Telangana– 502284, India.
2 Indian Institute Of Technology Hyderabad IN Kandi, Sangareddy, Telangana– 502284, India.
3. PREAMBLE TO THE DESCRIPTION
COMPLETE SPECIFICATION
The complete specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF INVENTION:
[001] The present invention relates to the field of autonomous vehicle systems. The present invention in particular relates to a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments.
DESCRIPTION OF THE RELATED ART:
[002] Autonomous vehicles rely heavily on computer vision and machine learning technologies to navigate safely and efficiently. One of the critical components of autonomous vehicle navigation is the ability to recognize and interpret traffic signs in real-time. This capability allows the vehicle to comply with traffic rules and adjust its behavior based on road conditions and regulations.
[003] Current technologies typically use convolutional neural networks (CNNs) for object detection, including traffic signs. These networks are trained to detect objects within images captured by vehicle-mounted cameras. Popular models like You Only Look Once (YOLO) have been widely used due to their speed and efficiency in real-time object detection.
[004] Despite advancements, there are significant challenges in traffic sign recognition, particularly in environments with high variability such as those found in many parts of India. Factors such as varying sign designs, poor lighting conditions, obscured signs, and high vehicle speeds can severely affect the accuracy and reliability of traffic sign detection systems. Additionally, most existing systems struggle with low-resolution images that result from high-speed travel, further complicating the detection and interpretation process.
[005] Reference may be made to the following:
[006] IN Publication No. 202411009126 relates to traffic sign recognition, which plays a critical role in enhancing road safety and supporting the development of autonomous vehicles. This research presents a novel traffic sign recognition system leveraging Convolutional Neural Networks (CNNs) and real-time sensor data fusion. The primary objective is to improve the accuracy, robustness, and real-time performance of traffic sign detection and interpretation under varying environmental conditions. The invention involves the collection of a diverse dataset comprising thousands of traffic sign images, encompassing various sign types, lighting conditions, and sign orientations.
[007] IN Publication No. 202311046414 relates to traffic sign detection, an important problem because, due to traffic or overtaking, a vehicle driver is sometimes unable to see the traffic sign boards along the road, which causes a number of accidents, especially near schools, hospitals and sharp turns. This work proposes the design of a device which can be installed in vehicles to detect the traffic sign from the traffic sign board and alert the driver with a loud voice. The system also works at night using a night vision camera. The proposed system is decomposed into several modules, comprising a video capturing and processing module, image processing modules and a voice alert generation module.
[008] IN Publication No. 202341040501 relates to traffic signs and road safety, knowledge of which is essential for everyone to ensure their own safety on the road and that of the people around them. Traffic sign detection is a road vision problem and is the basis for many applications in the automotive industry. Traffic signs are classified in terms of color, shape, and the presence of pictograms or text. The project is based on a deep neural network model that classifies traffic signs present in an image into different categories. A model is built using IoT devices that capture the traffic signs and alert the user about the traffic sign.
[009] IN Publication No. 202441016641 relates to a real-time object detection system tailored for traffic surveillance, leveraging a Region-based Convolutional Neural Network (R-CNN) architecture to achieve high-precision detection and localization of vehicles, pedestrians, and other objects on roadways. Through selective region-based processing, the system enhances accuracy and efficiency, while its dynamic adaptation mechanism enables autonomous adjustment to changing traffic conditions. A comprehensive dataset facilitates robust model training, while a scalable architecture supports large-scale deployment across urban environments.
[010] Publication No. KR20200003349 relates to a system for recognizing a traffic signal implemented in a vehicle, in which one or more image frames are received from an image sensor and a region of interest (ROI) is defined for each image frame of an image frame set selected from the received image frames, wherein the ROI is defined based on a section of each image frame and each ROI is resized to at least first and second resolution images.
[011] Publication No. CN110188705 relates to a long-distance traffic sign detection and identification method suitable for a vehicle-mounted system. The method comprises the following steps: (1) preprocessing a traffic sign image sample set; (2) constructing a lightweight convolutional neural network and completing convolutional feature extraction of the traffic sign; (3) constructing an attention feature map through a channel-spatial attention module embedded in the lightweight convolutional neural network; (4) using a region proposal network (RPN) to generate candidate regions of the target; (5) introducing context region information into the target candidate regions generated by the RPN to enhance the sign classification features; (6) sending the feature vector into a fully connected layer and outputting the category and position of the traffic sign; and (7) establishing an attention loss function and training the FL-CNN model; steps 2-7 are repeated to complete sample training of the FL-CNN model, and steps 2-6 are repeated to perform traffic sign detection and identification in the actual scene. According to the invention, long-distance traffic sign detection and identification are realized, and the precision reaches 92%.
[012] Publication No. CN117601776 relates to a traffic sign intelligent identification system and device, and the system comprises an unmanned vehicle which is provided with a control system in an inner cavity and is used for unmanned automatic driving; a supporting mechanism is arranged at the top of the unmanned vehicle, and four cameras are arranged at the top of the supporting mechanism in a quadrilateral array; the protection mechanism comprises a protection cover, a tempered glass cover, a chamfer edge, a plurality of heating wire rings and two connecting heating wires, the tempered glass cover is fixed to the outer wall of the protection cover, and the two connecting heating wires are fixed to the two sides of the heating wire rings; and a connecting mechanism for supporting the protective cover is arranged at the top end of the supporting mechanism.
[013] Publication No. CN117163029 relates to an intelligent driving method and system based on multi-lane and traffic sign detection. The method comprises: S1, radars around the vehicle obtain radar detection signals, and cameras around the vehicle obtain camera detection signals; S2, vehicle conditions of each lane are calculated according to the radar detection signals, the vehicle conditions comprising the position, distance, direction and relative speed with respect to the vehicle, and a traffic sign is identified according to the camera detection signals; and S3, a driving decision is made according to the traffic sign recognition result and the vehicle condition result of each lane, and the user is prompted/informed through a loudspeaker.
[014] Publication No. CN116868246 relates to a system and method for identifying a traffic sign during automatic driving and a vehicle, the system comprising: a camera module for acquiring a first traffic sign identification result; the sensor is used for acquiring behavior information of the vehicle and nearby vehicles; the training module is connected with the sensor and is used for outputting traffic sign recognition parameters according to behavior information of the vehicle and the nearby vehicles; the recurrent neural network module is connected with the training module and the camera module; wherein the recurrent neural network module is used for outputting a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result.
[015] Publication No. CN116863443 relates to a lightweight tiny traffic sign recognition method for a vehicle-mounted intelligent system, and belongs to the field of intelligent traffic. The acquired environment image is compressed to obtain an image file of a specified size. The image is input into an information flow logic propagation network, which extracts multi-layer low-level semantic information of the image; boundary information of the low-level semantic information is obtained through an edge information optimization module, and intra-layer and inter-layer high-level graph semantic information of multiple feature layers is extracted through boundary-information-supervised graph convolution flow.
[016] Publication No. CN116524459 provides a traffic sign recognition system based on deep learning and augmented reality technology. According to the system, traffic sign information around a vehicle is collected, feature extraction and classification are carried out by using a deep learning algorithm, an identification result is presented in the visual field of a driver in real time, and accurate traffic sign identification and intelligent reminding functions are provided. The system further comprises an intelligent reminding module which monitors the state of a driver by using various sensors and chips and provides real-time safety prompts and suggestions according to the current driving condition.
[017] Publication No. CN116580381 discloses a traffic sign deep learning pattern recognition method, belongs to the technical field of automobile driving, and aims to solve the problem that, in the prior art, similar traffic images have a large amount of intra-class variation, so that when the automobile performs sign analysis, identification may fail and identification accuracy is poor. If the type of traffic image does not exist in the input information base, the automobile needs to carry out automatic analysis, identification and learning.
[018] Publication No. CN116524725 discloses an intelligent driving traffic sign image data identification system, which comprises an intelligent driving image processing layer and a third party application layer which establishes communication with the intelligent driving image processing layer through a wireless network, and is characterized in that the intelligent driving image processing layer comprises a central processing module, an image processing module and a vehicle real-time positioning module; the invention relates to the technical field of intelligent driving image processing.
[019] Publication No. CN116343167 discloses a traffic facility acquisition system based on image recognition and GPS, which comprises an acquisition module, a sign extraction module, a sign recognition module and a calibration module, and is characterized in that field data comprises vehicle driving data and path shooting data; the information identification result comprises roadside sign information and corresponding vehicle GPS position information.
[020] Publication No. CN115946722 provides a vehicle control method and system based on traffic signs and a traffic sign recognition platform, which can be used in the technical field of artificial intelligence. The method comprises the steps of: carrying out traffic sign recognition on a current driving road image through a pre-trained traffic sign recognition model and obtaining traffic sign information, the traffic sign recognition model being obtained by training an improved target detection algorithm in which the last layer of the backbone network is provided with an attention module; and sending the traffic sign information to a vehicle decision-making system, so that the vehicle decision-making system controls vehicle driving according to the traffic sign information. A lightweight traffic sign recognition model is provided by improving the target detection algorithm; the application capability of the model is effectively improved, the detection and positioning precision is improved, the computational load is reduced, and real-time performance is high. Vehicle driving is automatically controlled through the vehicle decision-making system, and business handling efficiency and customer experience are greatly improved.
[021] Publication No. CN218728680 discloses an unmanned intelligent vehicle based on image recognition processing, which comprises an intelligent vehicle body; detection modules are fixedly embedded in four sides of the intelligent vehicle body; a steering mechanism is mounted at the bottom of the intelligent vehicle body and is in transmission connection with four bogies; and a storage battery is fixedly embedded in the intelligent vehicle body, the storage battery being in transmission connection with the bogies.
[022] Publication No. CN214751919 discloses a traffic sign identification and detection device based on a deep convolutional neural network, and belongs to the field of motor vehicle auxiliary driving tools. The system comprises an image acquisition unit and a processing unit.
[023] Publication No. CN113065399 discloses a traffic sign recognition system based on a vehicle-mounted platform, which comprises an acquisition module, a recognition module, an output module and a data processing module, and is characterized in that the acquisition module shoots video streams along the road as the vehicle travels; the recognition module is used for processing the videos and information acquired by the acquisition module and identifying the corresponding traffic signs; the output module is used for outputting the identified traffic sign to the vehicle-mounted platform; and the data processing module is used for constructing and storing existing traffic sign samples, establishing a sample feature set, and obtaining a traffic sign classification model through machine learning so that the recognition module can conveniently carry out traffic sign identification according to the existing samples.
[024] Publication No. CN112216137 discloses a vehicle road indication sign recognition system and method, and belongs to the field of internet-of-vehicle systems. The system comprises a road traffic sign with a two-dimensional code and vehicle-mounted terminal equipment externally connected with a front camera. The vehicle-mounted terminal equipment identifies the two-dimensional code through the camera, and then rapidly identifies the corresponding road traffic sign. In addition, a cloud server of the dispatching management center can be accessed by means of the vehicle position information and the acquired two-dimensional code information to acquire the surrounding road condition information so as to assist driving and route optimization.
[025] Publication No. CN115273003 relates to a traffic sign recognition and navigation decision-making method and system combined with character positioning. The method comprises the steps of: collecting a driving environment image of the road in front of the vehicle and preprocessing the image; respectively inputting the preprocessed image into a traffic sign category detection module, a road detection module and an optical character recognition (OCR) detection module to obtain a traffic sign category result, a lane line detection result and a character detection and recognition result; and, according to the traffic sign category result, the lane line detection result and the character detection and recognition result, and in combination with vehicle positioning information, lane line information and navigation information, comprehensively deciding and outputting a reminding strategy through a scene decision module. The traffic indication sign can be accurately recognized, and the driver is effectively reminded to drive safely.
[026] Publication No. CN212009865 relates to a traffic sign recognition system. The system comprises an image acquisition unit, an image recognition unit, a central processing unit and display equipment, wherein the image acquisition unit is electrically connected with the central processing unit through the image recognition unit, the image acquisition unit sends acquired images around a vehicle to the image recognition unit, and the image recognition unit recognizes traffic sign information and face information in the images and sends the traffic sign information and the face information to the central processing unit; the display device is electrically connected with the central processing unit, is in a strip shape, is installed in the length direction of an A column of an automobile, facilitates information obtaining, can remind a driver to pay attention to people in a blind area of the A column, and has the effect of improving the structural strength of the A column as the installation base serves as a reinforcing plate of the A column.
[027] Publication No. CN111832388 relates to a traffic sign detection and identification method and system for vehicle driving. The detection and identification method comprises the following steps: (1) constructing a first image data set, a second image data set and a third sample set, establishing a traffic sign detection model, training and testing it using the first image data set, establishing a standard VGG19-based traffic sign feature extraction and recognition network, and performing segmented training and testing using the second image data set and the third sample set; (2) detecting traffic signs frame by frame in a video image acquired while the vehicle is running and, when a traffic sign is detected, recording that frame as frame k; (3) acquiring frames k+1 and k+2, and respectively calculating the image within the rectangular outer bounding box of the traffic sign and the position information of the outer bounding box; (4) extracting the features of the images within the rectangular outer bounding box of the traffic sign in frames k, k+1 and k+2; (5) performing feature fusion to obtain fused features; and (6) inputting the fused features into a traffic sign recognition subnet for recognition.
[028] Publication No. CN111199217 relates to a traffic sign recognition method and system based on a convolutional neural network. The method comprises the steps: S1, acquiring a plurality of traffic sign images, and presetting the types of the traffic sign images; S2, preprocessing a plurality of training data sets and test data sets composed of the traffic sign images; S3, constructing a convolutional neural network; S4, inputting the training data set into the constructed convolutional neural network, and performing continuous iterative training multiple times through a back propagation algorithm so as to generate a traffic sign recognition model; and S5, inputting the test data set into the traffic sign recognition model, and outputting a traffic sign image recognition classification result.
[030] Publication No. CN110826544 relates to a traffic sign detection and recognition system and method. The system comprises a video acquisition module which is used for acquiring a video image of the surrounding environment of a motor vehicle, and extracting the image frames frame-by-frame from the video image; a detection module which is connected with the video acquisition module and is used for detecting the image frames according to a pre-established traffic sign detection model, determining and acquiring an area where a traffic sign is located from the image frames and correspondingly generating the to-be-identified traffic sign patterns; and a classification module which is connected with the detection module and is used for analyzing and judging the to-be-identified traffic sign patterns according to a pre-established traffic sign classification model, determining the category information of the traffic signs contained in the to-be-identified traffic sign patterns and outputting the category information as an identification result.
[031] Publication No. US2018239972 relates to a vision system for a vehicle includes a camera disposed at the vehicle and having a field of view exterior of the vehicle. The camera captures image data. A control includes an image processor operable to process image data captured by the camera. The control, responsive at least in part to putative detection of a traffic sign via image processing by the image processor of image data captured by the camera, enhances resolution of captured image data based at least in part on known traffic sign images to generate upscaled image data. The control compares captured image data to upscaled image data to determine and/or classify and/or identify the putatively detected traffic sign.
[032] Publication No. US2018225530 relates to a vision system for a vehicle includes a camera and a control. The control determines information on traffic signs and determines whether an indicated speed limit is for the lane being traveled by the vehicle. The vision system determines whether the indicated speed limit is for the lane being traveled by the vehicle responsive to a determination that the sign is at the left side of the lane and has an indicator representative of the right side of the lane and leaves the field of view at its left side, determination that the sign is at the right side of the lane and has an indicator representative of the left side of the lane and leaves the field of view at its right side, or determination of a speed limit sign at both sides of the road being traveled by the vehicle with both signs indicating the same speed limit.
[033] Publication No. US2008137908 relates to a method for detecting and identifying a traffic sign in a computerized system mounted on a moving vehicle. The system includes a camera mounted on the moving vehicle. The camera captures in real time multiple image frames of the environment in the field of view of the camera and transfers the image frames to an image processor.
[034] Patent No. US8050863 relates to a navigation and control system including a sensor configured to locate objects in a predetermined field of view from a vehicle. The sensor has an emitter configured to repeatedly scan a beam into a two-dimensional sector of a plane defined with respect to a first predetermined axis of the vehicle, and a detector configured to detect a reflection of the emitted beam from one of the objects.
[035] Patent No. US6560529 relates to a method and a coupled system for road sign recognition and for navigation is proposed, which enables a bidirectional data transmission between the road sign recognition device and the navigation device.
[036] The article entitled “Traffic sign detection and recognition using deep learning-based approach with haze removal for autonomous vehicle navigation” by A. Radha Rani, Y. Anusha, S.K. Cherishama, S. Vijaya Laxmi; Advances in Electrical Engineering, Electronics and Energy, Volume 7, 100442; 15 June 2023 talks about a deep learning model for haze-removal-based TSDR (DLHR-TSDR). Initially, the CURE-TSD dataset is considered. The haze removal U-network (HRU-Net) module takes a hazy image as input and outputs a haze-free image, having been trained to learn the mapping between hazy and haze-free images. Then, the TSDR convolutional neural network (CNN) module takes the haze-free image from the previous module as input and outputs the locations of traffic signs in the image. Simulation results on the CURE-TSD dataset show that the DLHR-TSDR method developed in the study achieved 99.01% accuracy, higher than traditional methods.
[037] The article entitled “A real-time traffic sign recognition system” by S. Estable; J. Schick; F. Stein; R. Janssen; R. Ott; W. Ritter; Y.-J. Zheng; IEEE Xplore; November 1994 talks about overall system design, the real-time implementation, and field test evaluation. The software architecture of the system integrates three hierarchical levels of data processing. On each level the specific tasks are isolated. The lowest level comprises specialists for colour, shape and pictogram analysis; they perform the iconic to symbolic data transformation. On the highest level the administration processes organize data flow as a double bottom-up and top-down mechanism to dynamically interpret the image sequence. A hybrid parallel machine was designed for running the traffic sign recognition system in real time on a transputer network coupled to power PC processors.
[038] The article entitled “Traffic sign detection and recognition system for autonomous vehicle” by Pratush Jadoun, Nikita Wanve, Namrata Khandagale, Megha Shinde, Preeti Yadav; International Journal for Research in Applied Science & Engineering Technology (IJRASET); Volume 11, Issue XI, November 2023 talks about a comprehensive approach to the development of a traffic sign detection system using a convolutional neural network, TensorFlow and OpenCV to classify traffic signs effectively in real time. TensorFlow and OpenCV play an important role in shaping an effective traffic sign detection and recognition system. The authors explain the process of data collection, data preparation and model architecture, the integration of TensorFlow for training and inference, and the use of OpenCV for image processing and real-time feed processing, ensuring seamless implementation on various hardware platforms. The model uses the German Traffic Sign Recognition dataset. The results show that the proposed system achieves high accuracy in detecting and recognizing traffic signs, making it a valuable system for both autonomous vehicles and human drivers.
[039] The need for improved traffic sign recognition technologies in autonomous vehicles is critical, especially as these vehicles begin to penetrate markets with challenging driving environments like India. Indian traffic signs vary widely in design, color, and textual content, requiring a robust detection system that can generalize well across such variations. High speeds and variable distances from signs result in images that are often blurred or of low resolution, making traditional detection algorithms less effective.
[040] Autonomous vehicles require near-instantaneous interpretation of visual data to make timely navigation decisions. This demands highly efficient processing capabilities to ensure safety and compliance with local traffic laws.
[041] In order to overcome the above-listed prior art, the present invention aims to provide a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments.
[042] The invention addresses challenges by enhancing existing detection frameworks to improve accuracy, efficiency, and adaptability in real-time traffic sign recognition, which is essential for the safe operation of autonomous vehicles in complex and unpredictable environments. The present invention enhances the safety features of autonomous vehicles and ensures their operational feasibility in diverse global markets.
OBJECTS OF THE INVENTION:
[043] The principal object of the present invention is to provide a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments.
[044] Another object of the present invention is to provide an on-board system for autonomous vehicles that seamlessly integrates real-time traffic sign recognition with navigation.
[045] Yet another object of the present invention is to provide an accurate, efficient, and adaptable real-time traffic sign recognition system and method, which is essential for the safe operation of autonomous vehicles in complex and unpredictable environments.
SUMMARY OF THE INVENTION:
[046] The present invention relates to a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments. This is an on-board system for autonomous vehicles that seamlessly integrates real-time traffic sign recognition with navigation. It improves upon existing navigation systems by providing a more comprehensive understanding of the driving environment through real-time traffic sign interpretation.
[047] The system is suitable for autonomous vehicle systems, employing advanced image recognition technologies for traffic sign detection and interpretation within a GPS-based navigation framework. This technology is integral to enhancing autonomous driving capabilities, especially in complex and varied traffic environments. The invention provides an end-to-end system that seamlessly integrates real-time, high-fidelity traffic sign recognition with path planning and decision-making within the navigation framework.
BRIEF DESCRIPTION OF THE DRAWINGS:
[048] It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered as limiting its scope, for the invention may admit to other equally effective embodiments.
[049] Figure 1 shows a block diagram of the system according to the present invention.
DETAILED DESCRIPTION OF THE INVENTION:
[050] The present invention provides a real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments. This is an on-board system for autonomous vehicles that seamlessly integrates real-time traffic sign recognition with navigation. It improves upon existing navigation systems by providing a more comprehensive understanding of the driving environment through real-time traffic sign interpretation.
[051] The system is suitable for autonomous vehicle systems, employing advanced image recognition technologies for traffic sign detection and interpretation within a GPS-based navigation framework. This technology is integral to enhancing autonomous driving capabilities, especially in complex and varied traffic environments. The invention provides an end-to-end system that seamlessly integrates real-time, high-fidelity traffic sign recognition with path planning and decision-making within the navigation framework.
[052] Figure 1 shows the real-time traffic sign recognition and integration system for autonomous vehicle navigation. The system comprises a high-performance stereo camera (1) which captures high-resolution images of the road ahead. It uses two lenses to capture slightly different perspectives, enabling the system to calculate the depth (distance) to objects in the scene. The traffic sign recognition unit (2) houses the deep learning model. This model is pre-trained on a massive dataset of traffic signs, allowing it to identify various signs (stop, yield, speed limit, etc.) even in challenging conditions such as low-resolution images or high speeds. The YOLOv8 model in this invention is enhanced with SPD-Conv and NAM modules, which help it retain crucial details and operate efficiently. The navigation processing unit (3) is the brain of the navigation system; it receives pre-programmed map data, the vehicle's current location from GPS, and, most importantly, real-time traffic sign information from the traffic sign recognition unit. The high-speed communication bus (4) acts as a digital highway, allowing the rapid exchange of data between the traffic sign recognition unit and the navigation processing unit. The vehicle control interface (5) translates the decisions made by the navigation processing unit into actions for the autonomous vehicle; it controls functions such as steering, acceleration, and braking (Figure 1).
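By way of illustration only, the following sketch outlines how data might flow in software between the components of Figure 1; the component interfaces and method names (capture, detect, publish, update_plan, apply) are assumptions introduced for readability and do not correspond to any specific vendor API.

```python
# Illustrative per-frame data flow for the system of Figure 1.
# All component interfaces are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SignDetection:
    sign_type: str                     # e.g. "STOP" or "SPEED_LIMIT_50"
    bbox: Tuple[int, int, int, int]    # (x, y, w, h) in image coordinates
    distance_m: float                  # depth estimated by the stereo camera

def process_frame(stereo_camera, recognition_unit, bus, navigation_unit, control_interface):
    left, right = stereo_camera.capture()                                   # component (1)
    detections: List[SignDetection] = recognition_unit.detect(left, right)  # component (2)
    bus.publish("traffic_signs", detections)                                # component (4)
    plan = navigation_unit.update_plan(detections)                          # component (3)
    control_interface.apply(plan)                                           # component (5)
```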
[053] The camera (1) is mounted in a strategic location with a clear view of the road ahead. The windshield mounting position (1a) is at the top center of the windshield, behind the rearview mirror; this offers a good balance between capturing the road and avoiding obstructions from the vehicle itself. The roof mounting position (1b) is on the roof of the vehicle, near the front; this position provides a wider field of view but may be susceptible to weather elements or visibility limitations due to the vehicle's structure. The traffic sign recognition and navigation processing units (1c) are housed within the vehicle's electronic control unit (ECU) (6). The ECU is the brain of the autonomous vehicle, responsible for processing sensor data, controlling various systems, and making driving decisions. The specific location of the ECU can vary depending on the vehicle, but it is often placed in a protected area such as the trunk or behind the dashboard.
[054] The high-speed communication bus (7) forms an internal network that connects all the critical components for efficient data exchange. Modern vehicles utilize high-speed interfaces such as the Controller Area Network (CAN bus) or Ethernet to ensure real-time communication between the various sensors, processors, and control units. These communication cables are typically routed throughout the vehicle's interior, connecting the camera unit, the ECU (housing the traffic sign recognition and navigation units), and other vital systems.
[055] The stereo camera is connected directly to the ECU, allowing it to transmit captured images for processing. The high-speed communication bus might involve multiple cables depending on the vehicle's architecture and the number of connected components. These cables would be neatly bundled and secured within the vehicle's interior to avoid entanglement or damage.
[056] The system ensures superior traffic sign recognition, wherein SPD-Conv ensures exceptional feature retention. Additionally, the NAM module refines the training process, leading to a lightweight yet highly accurate model, well suited to real-time applications in resource-constrained environments.
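As a minimal sketch of the SPD-Conv idea referred to above (a space-to-depth rearrangement followed by a non-strided convolution, so fine detail from small or blurred signs is not discarded by strided downsampling), the PyTorch block below is illustrative only; the kernel size, activation and channel counts are assumptions and are not taken from the specification.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided convolution (SPD-Conv sketch)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # After space-to-depth, each 2x2 spatial block moves into channels,
        # so the channel count grows by a factor of 4.
        self.conv = nn.Conv2d(4 * in_channels, out_channels, kernel_size=3,
                              stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rearrange (B, C, H, W) -> (B, 4C, H/2, W/2) without losing pixels.
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.act(self.bn(self.conv(x)))
```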
[057] The system leverages a high-performance stereo camera as the primary hardware for image acquisition. This camera setup captures depth information alongside the visual data. This depth information is crucial for accurately determining the distance to the detected traffic sign, a vital piece of data for the navigation system.
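The distance to a detected sign follows from the standard stereo relation Z = f·B/d (focal length times baseline divided by disparity). The helper below is a hedged sketch of that computation; the numbers in the example are illustrative only.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a point from stereo disparity: Z = f * B / d.

    focal_length_px: camera focal length in pixels
    baseline_m:      distance between the two lenses in metres
    disparity_px:    horizontal shift of the sign between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: a sign with 12 px disparity seen by a camera with
# f = 1200 px and a 0.12 m baseline is roughly 12 m away.
# depth_from_disparity(1200, 0.12, 12) -> 12.0
```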
[058] The system efficiently detects traffic signs in real-time. This critical information, including the type of sign, its location within the image (potentially indicating lane assignment), and the distance calculated using the stereo camera's depth data, is instantaneously transmitted via a high-speed communication bus to the navigation processing unit. The navigation processing unit receives the real-time traffic sign data and integrates it into the ongoing path-planning process.
[059] If a newly detected speed limit sign indicates a lower speed than previously planned, the navigation system can adjust the route accordingly, ensuring the autonomous vehicle adheres to the traffic regulations.
[060] Information about upcoming stop signs, yield signs, or lane-changing restrictions allows the navigation system to anticipate upcoming maneuvers and prepare the autonomous vehicle accordingly, fostering smoother and safer navigation.
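A hedged sketch of how such sign information could feed the planner is given below; the Plan methods (add_stop_point, require_gap_check) and the sign-type naming scheme are hypothetical and serve only to illustrate the decision flow described in the two preceding paragraphs.

```python
# Illustrative rule-based update of the motion plan from one sign detection.
# Plan and its methods are assumed placeholders, not part of the specification.
def apply_sign_to_plan(plan, detection):
    if detection.sign_type.startswith("SPEED_LIMIT_"):
        limit_kmph = float(detection.sign_type.split("_")[-1])
        # Never plan a speed above the most recently observed limit.
        plan.target_speed_kmph = min(plan.target_speed_kmph, limit_kmph)
    elif detection.sign_type == "STOP":
        # Schedule a full stop at the estimated distance to the sign.
        plan.add_stop_point(distance_m=detection.distance_m)
    elif detection.sign_type == "YIELD":
        # Require a gap check before proceeding past the sign.
        plan.require_gap_check(distance_m=detection.distance_m)
    return plan
```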
[061] Real-time traffic sign recognition is integrated with the navigation system. The real-time traffic sign data (type, location, distance) is fed directly into the navigation processing unit. This allows for dynamic route adjustments and enhanced situational awareness in autonomous vehicles. By incorporating SPD-Conv for better feature retention and NAM for efficient training, the model achieves high accuracy in a lightweight form, making it suitable for real-time applications within resource-constrained environments.
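If the detector were built on the publicly available Ultralytics YOLOv8 package, inference on a camera frame might look like the sketch below; the weights file name "yolov8_spd_nam.pt" is a placeholder for the enhanced model and is an assumption, not a published artifact.

```python
# Minimal inference sketch using the Ultralytics YOLOv8 API.
# "yolov8_spd_nam.pt" is a hypothetical weights file for the enhanced model.
from ultralytics import YOLO

model = YOLO("yolov8_spd_nam.pt")

def detect_signs(frame):
    results = model(frame, verbose=False)[0]   # one image -> one Results object
    detections = []
    for box in results.boxes:
        cls_id = int(box.cls[0])
        detections.append({
            "sign_type": results.names[cls_id],   # class label
            "confidence": float(box.conf[0]),
            "xyxy": box.xyxy[0].tolist(),         # bounding box corners
        })
    return detections
```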
[062] High-speed communication bus facilitates the rapid exchange of data between the camera, traffic sign recognition unit, and navigation processing unit. While high-speed communication buses exist in various forms, the specific implementation using CAN bus or Ethernet in this context might be relevant for the patent application.
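As one possible realization of such a bus, the sketch below uses the open-source python-can package over a Linux SocketCAN interface; the arbitration ID (0x3A0) and the byte layout are assumptions made for illustration, not a standardized message definition.

```python
# Sketch of publishing one detection over CAN with python-can (illustrative).
import struct
import can

def send_sign_over_can(bus: can.BusABC, sign_id: int, distance_m: float) -> None:
    # Pack a sign class ID (1 byte) and distance in centimetres (2 bytes).
    payload = struct.pack(">BH", sign_id, int(distance_m * 100))
    msg = can.Message(arbitration_id=0x3A0, data=payload, is_extended_id=False)
    bus.send(msg)

# Example usage (assumes a SocketCAN interface named "can0"):
# bus = can.Bus(channel="can0", interface="socketcan")
# send_sign_over_can(bus, sign_id=14, distance_m=37.5)
```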
[063] Thus, the invention provides real-time traffic sign recognition and navigation integration. The camera with depth sensing is a high-performance stereo camera; capturing images from two slightly different perspectives allows the system to calculate the depth (distance) of traffic signs. This depth information is crucial for accurately determining the distance to detected traffic signs, a vital piece of data for the navigation system. The stereo camera setup enhances the system's ability to perform in high-speed scenarios where traffic signs may appear blurred or in low resolution. This setup significantly improves the accuracy of traffic sign recognition under varying speeds and distances.
[064] The YOLOv8 model is enhanced with space-to-depth convolution (SPD-Conv) and a normalization-based attention module (NAM). SPD-Conv improves the feature retention critical for recognizing traffic signs in challenging conditions, while NAM optimizes the model's training process to be more efficient and accurate. These enhancements allow the system to maintain high accuracy and efficiency in real-time applications, making it particularly suitable for resource-constrained environments where computational efficiency is crucial.
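A minimal PyTorch sketch of the NAM channel-attention idea (reusing BatchNorm scale factors as channel importance weights, so no extra fully connected layers are needed) is given below; it follows the publicly described NAM formulation and is illustrative rather than a definition of the module used in the invention.

```python
import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    """Channel attention in the spirit of NAM: the BatchNorm scale factors
    (gamma) indicate how informative each channel is, so they are normalized
    and reused as attention weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        # Normalize per-channel scale factors into weights that sum to 1.
        gamma = self.bn.weight.abs()
        weights = gamma / gamma.sum()
        x = x * weights.view(1, -1, 1, 1)
        # Gate the original features with the attention map.
        return torch.sigmoid(x) * residual
```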
[065] A high-speed communication bus, such as the CAN bus or Ethernet, is implemented for real-time data exchange between the traffic sign recognition unit and the navigation processing unit. This ensures that traffic sign data is immediately available for navigation decision-making, enabling dynamic route adjustments based on real-time traffic conditions. It enhances the responsiveness and situational awareness of the autonomous vehicle.
[066] Seamless integration of real-time traffic sign recognition data with path planning and decision-making within the navigation system. By feeding real-time traffic sign data (type, location, distance) directly into the navigation processing unit, the system allows for immediate and dynamic route adjustments, improving safety and compliance with traffic regulations.
[067] The system is highly adaptable to varied traffic environments and capable of recognizing a wide range of traffic sign variations found in different regions, particularly in countries like India. This adaptability is critical for global markets where traffic signs may significantly differ in design, color, and textual content, ensuring the system's operational feasibility across diverse geographic and environmental scenarios.
[068] Numerous modifications and adaptations of the system of the present invention will be apparent to those skilled in the art, and thus it is intended by the appended claims to cover all such modifications and adaptations which fall within the true spirit and scope of this invention.
WE CLAIM:
1. A real-time traffic sign recognition and integration system for autonomous vehicle navigation in diverse and dynamic driving environments, comprising:
a) High-performance stereo camera (1) mounted in a strategic location with a clear view of the road ahead, which captures high-resolution images of the road ahead, characterized in that two lenses capture slightly different perspectives, enabling the system to calculate the depth (distance) to objects in the scene.
b) Windshield mounting position (1a) at the top center of the windshield, behind the rearview mirror.
c) Roof mounting position (1b) on the roof of the vehicle, near the front, providing a wider field of view.
d) Traffic sign recognition unit & navigation processing units (1c) housed within the vehicle's electronic control unit (6) (ECU).
e) Traffic sign recognition unit (2) housing a deep learning model pre-trained on a massive dataset of traffic signs, allowing it to identify various signs (stop, yield, speed limit, etc.) even in challenging conditions such as low-resolution images or high speeds.
f) Navigation processing unit (3) receives pre-programmed map data, the vehicle's current location from GPS, and most importantly, real-time traffic sign information from the traffic sign recognition unit.
g) High-speed communication bus (4) acting as a digital highway, allowing for the rapid exchange of data between the traffic sign recognition unit and the navigation processing unit.
h) Vehicle control interface (5) which translates the decisions made by the navigation processing unit into actions for the autonomous vehicle.
i) Electronic control unit (6) (ECU) placed in the trunk or behind the dashboard, which processes sensor data, controls various systems, and makes driving decisions.
j) High-speed communication bus (7) forming an internal network that connects all the critical components for efficient data exchange.
2. The real-time traffic sign recognition and integration system, as claimed in claim 1, wherein the location of the ECU varies depending on the vehicle, but it's often placed in a protected area like the trunk or behind the dashboard.
3. The real-time traffic sign recognition and integration system, as claimed in claim 1, wherein the windshield (1a) captures good balance between capturing the road and avoiding obstructions from the vehicle itself.
4. The real-time traffic sign recognition and integration system, as claimed in claim 1, wherein the stereo camera is connected directly to the ECU, allowing it to transmit captured images for processing.