ABSTRACT
Title: COMPUTATIONALLY EFFICIENT SYSTEM FOR EXTRACTING VEHICLE INFORMATION FROM THE VEHICLE IMAGES OR VIDEOS
The present invention discloses a system for automated extraction of vehicle information from traffic surveillance images or traffic surveillance video, comprising an imaging processor operatively connected with an image grabber module. The image grabber module captures the traffic surveillance images or traffic surveillance video and feeds them into the imaging processor, and the imaging processor analyses the traffic surveillance images or traffic surveillance video to extract the vehicle information therefrom, including information relating to the identification of the type and colour of the vehicles.
FIELD OF THE INVENTION:
The present invention relates to image based vehicle information/attribute identification and extraction techniques. More specifically, the present invention is directed to developing a computationally inexpensive system and method for extracting vehicle information from a plurality of traffic surveillance images or from traffic surveillance video.
BACKGROUND OF THE INVENTION:
Automated image processing based identification of vehicles from traffic surveillance images/videos is a well-practised task. In such vehicle identification, rather than trying to locate vehicles in the scene as a whole, using already extracted information to locate some part of the vehicle is sufficient to identify the class and colour of the vehicle while using much less computational bandwidth and memory.
Most of the existing systems are computationally expensive and require a high amount of processing time (~100 ms on a 3.0 GHz Intel Core processor). This results in lower frame-processing rates (FPS) as well as limited channel/camera support on a given computation platform. It also makes it difficult to use limited-power computing boxes (such as the Raspberry Pi, Arduino, etc.) for video surveillance applications. Therefore, there is a need to develop an effective region-of-interest (ROI) selection system for vehicle type and colour detection.
In most of the state-of-the-art video based vehicle type detection systems, the entire image of the vehicle is used for classification. Though this is more accurate, because the full view of the object is available in the image, it is computationally intensive. Moreover, in many cases it is very difficult to segment the vehicle from the background properly, so there is always a risk of missing a few vehicles present in the scene. Multiple vehicles in the image make it more difficult to segment individual vehicles, and as a result, detection of the vehicle type, colour and other related information (e.g., make and model of the vehicle) becomes difficult in a crowded scene. Therefore, there is a need to develop an effective and fast vehicle type detection system using the information already identified in the previous steps of the video analytics system.
Similar to the vehicle type detection schemes, most of the state-of-the-art video based vehicle colour detection systems use the entire image of the vehicle to identify the colour. This is more accurate but computationally heavy. Moreover, in many cases it is not possible to segment the vehicle from the background properly, which reduces accuracy. When there are multiple vehicles in the image, proper segmentation of individual vehicles is also difficult. Therefore, there is a need to develop an effective and fast vehicle colour detection system using the information already identified in the previous steps of the video analytics system.
Furthermore, the existence of edges/texture in the regions used for colour detection often causes problems. This is because edges/texture in the image make the intensity variation uneven, which is problematic for correct colour detection. Therefore, there is a need to develop an effective colour detection scheme that discards the problematic regions (edges/texture) of the image.
References:
1. Systems and methods for detecting vehicle attributes - US 2018/0018526 A1, Jan. 2018
2. Apparatus, method, and computer product for vehicle-type determination using image data of vehicle - US 8229171 B2, Jul. 2012
3. Object-centric fine-grained image classification - US 9665802 B2, May 30, 2017
4. Vehicle classification from laser scanners using fisher and profile signatures - US 9683836 B2, Jun. 2017
5. Systems and methods for visual classification with region proposals - US 2017/0220876 A1, Aug. 2017
6. Vehicle detection and recognition for intelligent traffic surveillance system - Multimed. Tools Appl., 2015
7. Part-based recognition of vehicle make and model - IET Img. Proc., 2017
8. A model for fine-grained vehicle classification based on deep learning - Neurocomputing, 2017
9. Vehicle color classification using manifold learning methods from urban surveillance videos - EURASIP Jour. on Img. and Vid. Proc., 2014
10. Vehicle Color Recognition using Convolutional Neural Network - 2015
OBJECT OF THE INVENTION:
It is thus the basic object of the present invention to develop a computationally inexpensive system and method for extracting vehicle information from a plurality of traffic surveillance images or traffic surveillance video.
Another object of the present invention is to develop a computationally inexpensive system and method for extracting vehicle information from a plurality of traffic surveillance images or traffic surveillance video which would be adapted to select the ROI of the vehicle to reduce the search space.
Yet another object of the present invention is to develop a computationally inexpensive system and method for extracting vehicle information from a plurality of traffic surveillance images or traffic surveillance video which would be adapted to select and minimize the ROI of the vehicle based on the position of the license plate (LP) on the vehicle for faster identification of the vehicle type and colour.
A still further object of the present invention is to develop a computationally inexpensive system and method for extracting vehicle information from a plurality of traffic surveillance images or traffic surveillance video which would be adapted to identify smooth regions for colour detection of a vehicle.
SUMMARY OF THE INVENTION:
Thus, according to the basic aspect of the present invention, there is provided a system for automated extraction of vehicle information from traffic surveillance images or traffic surveillance video comprising
an imaging processor operatively connected with an image grabber module;
said image grabber module captures the traffic surveillance images or traffic surveillance video and feeds them into the imaging processor; and
said imaging processor analyses the traffic surveillance images or traffic surveillance video to extract the vehicle information therefrom, including information relating to the identification of the type and colour of the vehicles.
In a preferred embodiment of the present system, the imaging processor includes
a license plate (LP) recognizer;
a LP verifier;
an image/video frame region of interest (ROI) selector;
a vehicle type detector; and
a vehicle colour detector.
In a preferred embodiment of the present system, the traffic surveillance images or traffic surveillance video fed into the imaging processor from the image grabber module is first passed through the LP recognizer;
said LP recognizer locates the LP in the images or video frames, segments it and subsequently recognizes each character of the LP.
In a preferred embodiment of the present system, the LP verifier is configured to validate the existence of the LP based on pre-defined rules including the number of alphabetic and numeric characters in the detected LP.
In a preferred embodiment of the present system, the ROI selector marks the LP region along with its surrounding region, in a predefined height-width ratio, as the ROI in the scene to identify the type and colour of the vehicles.
In a preferred embodiment of the present system, the vehicle type detector receives the marked ROI and divides the surrounding region of the LP into a number of zones, some of which correspond to body-parts of the vehicle in the image, with the exception of a few, whereby the zones corresponding to the body-parts of the vehicle in the image are separated for extracting important distinguishable features from said separated regions for detecting the type of the vehicle.
In a preferred embodiment of the present system, the vehicle type detector involves a majority voting scheme to eliminate the surrounding zones of the LP which do not correspond to body-parts of the vehicle in the image.
In a preferred embodiment of the present system, the vehicle colour detector receives the marked ROI and divides the surrounding region of the LP into a number of zones, some of which correspond to body-parts of the vehicle in the image, whereby smooth sub-regions within those zones are identified by the vehicle colour detector for vehicle colour analysis.
In a preferred embodiment of the present system, the vehicle colour detector involves an edge based smooth-region identification step to identify the smooth sub-regions within the selected region;
said vehicle colour detector uses characteristics representing the colour content of the smooth regions as a feature vector in a pre-trained classifier for vehicle colour identification.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
Figure 1 depicts an overall block diagram of the system and/or module implemented in accordance with the disclosed embodiments.
Figure 1a depicts different dispositions of the license plates on the body of the vehicles.
Figure 2 depicts a block diagram of the system and/or module implemented in accordance with the disclosed embodiments regarding ROI selection for vehicle type and colour detection.
Figure 3 depicts a block diagram of the system and/or module implemented in accordance with the disclosed embodiments regarding the identification of smooth regions for colour detection of a vehicle.
DESCRIPTION OF THE INVENTION WITH REFERENCE TO THE ACCOMPANYING DRAWINGS:
The accompanying Figure 1 depicts the overall block diagram of the proposed vehicle information extraction system. [1000] denotes an image grabber module, e.g., a camera. The traffic surveillance images or traffic surveillance video captured by the image grabber module are fed into an imaging processor for analysis of the images/video and extraction of the vehicle information therefrom.
The imaging processor preferably includes a license plate (LP) recognizer [1001], an LP verifier [1002], an image/video frame region of interest (ROI) selector [1003], a vehicle type detector [1004] and a vehicle colour detector [1005].
The traffic surveillance images or traffic surveillance video fed into the imaging processor from the image grabber module [1000] are first passed through the LP recognizer [1001], which is capable of locating the LP in the image or video frames, segmenting it and subsequently recognizing each character of the LP.
The imaging processor uses the output of the LP localizer as one of its inputs. It does not depend on the particular way the LP is identified in an image or on any particular method by which the LP localizer is implemented. The only requirement is that the LP recognizer should return the LP bounding-rectangle information as correctly as possible. There exist various ways to localize and recognize LPs in an image, and any of these techniques can be used.
The validity of the existence of an LP is then verified by the LP verifier [1002]. This validation can be performed using pre-defined rules such as the number of alphabetic and numeric characters in the detected LP. The proposed technique uses the LP region as a marker to locate the ROI in the scene via the ROI selector [1003]. In scenarios where the objects of interest carry text at predefined positions on their bodies, the processor first locates the text in the scene, identifies the signature of the text if it is known to follow some predefined pattern, and then determines a part of the body of the object in the scene.
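By way of a minimal, non-limiting sketch of such a rule-based check, the verifier could simply test the recognized string against an assumed letter/digit pattern; the specific pattern and the function name below are illustrative assumptions, not values prescribed by the invention:

import re

def verify_lp(recognized_text: str) -> bool:
    """Hypothetical rule-based LP verifier: accept a plate only if the
    recognized string matches an assumed pattern of alphabetic and
    numeric characters (e.g., 2 letters + 2 digits + 1-2 letters + 4 digits)."""
    text = recognized_text.replace(" ", "").upper()
    # Assumed format for illustration; a deployment would load jurisdiction-specific rules.
    pattern = r"[A-Z]{2}\d{2}[A-Z]{1,2}\d{4}"
    return re.fullmatch(pattern, text) is not None

# Example usage
print(verify_lp("WB 02 AB 1234"))  # True under the assumed pattern
print(verify_lp("12345"))          # False: wrong mix of letters and digits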
After selection of the ROI, the vehicle type detector [1004] and the vehicle colour detector [1005] are used to identify the type and colour of the vehicles in an image once their license plate regions are already known.
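A minimal sketch of this overall flow is given below; each argument is a hypothetical callable standing in for the corresponding module, introduced only to illustrate how the modules [1001]-[1005] could be chained:

from dataclasses import dataclass

@dataclass
class VehicleInfo:
    plate_text: str
    vehicle_type: str
    colour: str

def process_frame(frame, lp_recognizer, lp_verifier, roi_selector,
                  type_detector, colour_detector):
    """High-level flow of the imaging processor (illustrative sketch)."""
    results = []
    for lp_box, plate_text in lp_recognizer(frame):        # LP recognizer [1001]
        if not lp_verifier(plate_text):                     # LP verifier [1002]
            continue
        roi = roi_selector(frame, lp_box)                   # ROI selector [1003]
        vehicle_type = type_detector(roi, lp_box)           # vehicle type detector [1004]
        colour = colour_detector(roi, lp_box)               # vehicle colour detector [1005]
        results.append(VehicleInfo(plate_text, vehicle_type, colour))
    return results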
Determination of vehicle body position: It is known that license plates are fixed on the body of the vehicle, but they may be fixed at different portions of the body depending on the type of vehicle, as shown in Figure 1a. Therefore, the surrounding area is selected taking the LP position as the pivot or central position, as shown in Figure 1b.
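A minimal sketch of this ROI selection, assuming the LP bounding box is available as (x, y, w, h) and using illustrative expansion factors for the predefined height-width ratio (the factor values are assumptions, not values from the specification):

def select_roi(lp_box, frame_shape, width_factor=4.0, height_factor=5.0):
    """Expand the LP bounding box into a surrounding ROI, using the LP as pivot.
    lp_box: (x, y, w, h) of the detected license plate.
    frame_shape: (frame_height, frame_width) of the image.
    width_factor / height_factor: assumed multiples of the LP size."""
    x, y, w, h = lp_box
    frame_h, frame_w = frame_shape
    cx, cy = x + w / 2, y + h / 2          # LP centre acts as the pivot
    roi_w, roi_h = w * width_factor, h * height_factor
    x0 = max(0, int(cx - roi_w / 2))
    y0 = max(0, int(cy - roi_h / 2))
    x1 = min(frame_w, int(cx + roi_w / 2))
    y1 = min(frame_h, int(cy + roi_h / 2))
    return x0, y0, x1, y1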
The surrounding region of the license plate is divided into a number of zones. Most of the zones will contain body-parts of the vehicle in the image (shown with green arrows), with the exception of a few (shown with red arrows). Therefore, a majority voting mechanism is used to eliminate the odd ones out (zones that are not part of the body of the vehicle), where each block is used separately to provide a response. The final output (vehicle type) is the majority of these separate responses.
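A sketch of such zone-wise voting, assuming the ROI is a NumPy image array and assuming a hypothetical per-zone classifier callable (the grid size is likewise an illustrative assumption):

from collections import Counter

def classify_by_zone_voting(roi, zone_classifier, grid=(3, 3)):
    """Divide the ROI around the LP into a grid of zones, classify each zone
    independently, and return the majority label.
    roi: NumPy image array (e.g., as read by OpenCV).
    zone_classifier: hypothetical callable mapping an image patch to a vehicle-type label."""
    h, w = roi.shape[:2]
    rows, cols = grid
    votes = []
    for r in range(rows):
        for c in range(cols):
            zone = roi[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            votes.append(zone_classifier(zone))
    # Majority vote suppresses zones that fall outside the vehicle body.
    return Counter(votes).most_common(1)[0][0]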
As shown in Figure 2, it is possible for humans to identify the class/type of the vehicle by observing only the selected area surrounding the LP. For example, after locating a text region and identifying it as license plate characters, its surrounding area, in a predefined height-width ratio, is selected for further processing to identify the type of vehicle. This can be achieved by extracting important distinguishable features from the selected regions and using these feature vectors to train an advanced machine learning technique. Various kinds of features can be used to represent the content of the selected image regions. Another way is to use a deep learning framework for detecting the type of the vehicle.
As license plates can be positioned at different locations on a vehicle body, not all of the surrounding regions may be true representatives of the vehicle type. Moreover, some regions around the license plate may fall outside the body of the vehicle if the license plate is positioned at a corner of the vehicle body. Therefore, the surrounding region of the license plate is divided into multiple zones and a majority voting scheme is applied to eliminate the non-representative regions as outliers.
The vehicle colour detection technique is described in Figure 3. Based on the LP location in the image, its surrounding area in a predefined height-width ratio is selected for further processing. It is possible for human eyes to identify the colour of the vehicle by observing only the selected area surrounding the LP, as shown in Figure 3. However, to produce more accurate results, rather than using the whole selected region for colour analysis, it is advantageous to identify the smooth sub-regions within that region. This eliminates those portions of the region that contain significant numbers of pixels representing edges, texture, heavy colour variation, etc. For this purpose, a simple edge based smooth-region identification step can be used; the output at various stages of such an algorithm is shown in Figure 3. The LP location itself is, of course, discarded from the colour analysis step. After identifying the smooth sub-regions within the selected region, various characteristics representing the colour content of the smooth regions can be used as a feature vector. Using a pre-trained classifier, these features can then be used for vehicle colour identification.
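One possible sketch of such edge based smooth-region selection and colour feature extraction, written with OpenCV; the edge thresholds, dilation amount and histogram bin counts are illustrative assumptions rather than values prescribed by the invention:

import cv2
import numpy as np

def colour_feature_from_smooth_regions(roi_bgr, lp_mask=None,
                                       edge_low=50, edge_high=150,
                                       dilate_iter=2, hist_bins=8):
    """Mask out edgy/textured pixels (and optionally the LP area), then build a
    colour histogram of the remaining smooth pixels as the feature vector."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, edge_low, edge_high)
    # Dilate edges so pixels near edges/texture are also discarded.
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=dilate_iter)
    smooth_mask = cv2.bitwise_not(edges)
    if lp_mask is not None:
        # Discard the LP region itself from the colour analysis.
        smooth_mask = cv2.bitwise_and(smooth_mask, cv2.bitwise_not(lp_mask))
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], smooth_mask,
                        [hist_bins] * 3, [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    return hist  # feed this vector to a pre-trained colour classifier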
Figure 2 shows one generic example of ML/DL based vehicle class/type detection using a learning based mechanism. In the "offline" phase, pre-labelled image data are used to train the ML/DL based classifier. Based on the learned model, during the "online" phase the trained classifier identifies the class/type of the query image data. If the learning system is based on an ML classifier, then "hand-engineered features" are first extracted from the training data, and based on these extracted features the classifier finds the decision boundaries between the classes/types of vehicles. Hand-engineered features can be generated using, but not limited to, the popularly known HOG, LBP and SIFT descriptors.
On the other hand, if a DL based classifier is used for classification, this "hand-engineered feature" extraction step is not required; the DL based classifier is itself capable of identifying/extracting the required important features from the training data.
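As a sketch of the offline/online ML path described above, using HOG as the hand-engineered feature and a linear SVM as the classifier (the patch size, HOG parameters and choice of SVM are illustrative assumptions; any of the other mentioned features or classifiers could be substituted):

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_feature(patch, size=(64, 64)):
    """Hand-engineered HOG descriptor of a grayscale ROI patch (assumed parameters)."""
    patch = resize(patch, size, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Offline phase: train on pre-labelled ROI patches (labels = vehicle types).
def train_vehicle_type_classifier(patches, labels):
    features = np.array([hog_feature(p) for p in patches])
    clf = LinearSVC()
    clf.fit(features, labels)
    return clf

# Online phase: classify the ROI extracted from a query frame.
def predict_vehicle_type(clf, query_patch):
    return clf.predict([hog_feature(query_patch)])[0]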
CLAIMS:
WE CLAIM:
1. A system for automated extraction of vehicle information from traffic surveillance images or traffic surveillance video comprising
an imaging processor operatively connected with an image grabber module;
said image grabber module captures the traffic surveillance images or traffic surveillance video and feeds them into the imaging processor; and
said imaging processor analyses the traffic surveillance images or traffic surveillance video to extract the vehicle information therefrom, including information relating to the identification of the type and colour of the vehicles.
2. The system as claimed in claim 1, wherein the imaging processor includes
a license plate (LP) recognizer;
a LP verifier;
an image/video frame region of interest (ROI) selector;
a vehicle type detector; and
a vehicle colour detector.
3. The system as claimed in claim 1 or 2, wherein the traffic surveillance images or traffic surveillance video fed into the imaging processor from the image grabber module is first passed through the LP recognizer;
said LP recognizer locates the LP in the images or video frames, segments it and subsequently recognizes each character of the LP.
4. The system as claimed in any one of claims 1 to 3, wherein the LP verifier is configured to validate the existence of the LP based on pre-defined rules including the number of alphabetic and numeric characters in the detected LP.
5. The system as claimed in any one of claims 1 to 4, wherein the ROI selector marks the LP region along with its surrounding region, in a predefined height-width ratio, as the ROI in the scene to identify the type and colour of the vehicles.
6. The system as claimed in any one of claims 1 to 5, wherein the vehicle type detector receives the marked ROI and divides the surrounding region of the LP into a number of zones, some of which correspond to body-parts of the vehicle in the image, with the exception of a few, whereby the zones corresponding to the body-parts of the vehicle in the image are separated for extracting important distinguishable features from said separated regions for detecting the type of the vehicle.
7. The system as claimed in any one of claims 1 to 6, wherein the vehicle type detector involves a majority voting scheme to eliminate the surrounding zones of the LP which do not correspond to body-parts of the vehicle in the image.
8. The system as claimed in any one of claims 1 to 7, wherein the vehicle colour detector receives the marked ROI and divides the surrounding region of the LP into a number of zones, some of which correspond to body-parts of the vehicle in the image, whereby smooth sub-regions within those zones are identified by the vehicle colour detector for vehicle colour analysis.
9. The system as claimed in any one of claims 1 to 8, wherein the vehicle colour detector involves an edge based smooth-region identification step to identify the smooth sub-regions within the selected region;
said vehicle colour detector uses characteristics representing the colour content of the smooth regions as a feature vector in a pre-trained classifier for vehicle colour identification.
Dated this the 25th day of February, 2019 Anjan Sen
Of Anjan Sen and Associates
(Applicant's Agent)