Abstract: The present disclosure provides methods and systems for vehicle detection in unconstrained environments. Initially, light source blobs are detected in the image, and adjacency of the red and white ranges of color is identified by taking a union image of the red and white ranges to detect tail lights. To detect moving objects, a center of divergence is computed for the captured image and flow vectors are calculated for consecutive image frames. The non-divergent flow vector points are clustered within a bounding box comprising the centroids of the already detected tail lights. Head lights are detected by applying multi-level thresholds to generate binary images from gray-scale images; for each change in threshold level, a circle is generated around the white circular blobs in the captured images, finally forming concentric circles at the lowest threshold level. Alternatively, a head light is also detected by detecting merged white blobs at the lowest threshold level.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of Invention:
METHODS AND SYSTEMS FOR VEHICLE DETECTION IN UNCONSTRAINED ENVIRONMENTS
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the embodiments and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[0001] The present application claims priority from Indian provisional specification no. 201621001018 filed on 11 January, 2016, the complete disclosure of which, in its entirety, is herein incorporated by reference.
TECHNICAL FIELD
[0002] The embodiments herein generally relate to vehicle detection and more particularly to methods and systems for vehicle detection in unconstrained environments.
BACKGROUND
[0003] The capability to detect vehicles on the road in unconstrained environments, such as low visibility due to night-time darkness, foggy weather, heavy rainfall, smoky environments, solar eclipses and a myriad of similar low-visibility instances, combined with other detection parameters such as vehicle tail light functionality, is an important measure for ascertaining the safety robustness of any such vehicle. Currently, night-time vehicle detection is performed through Advanced Driver Assistance Systems (ADAS) such as Forward Collision Warning (FCW) and Automatic High Beam Control (AHBC), which are based either on non-imaging sensors like RADAR and LIDAR or on the tail lights of the target vehicles. The tail light detection approach is widely used when the sensors are visible-light color cameras, owing to their low cost and multi-functionality. However, this approach rests on the important assumption that both tail lights of the target vehicle are always functional. In developing countries like India this may not be the case, as even vehicles with only one functional tail light are allowed to ply on the roads. In such a scenario, the classical methods known in the art, which rely on features such as the tail lights occurring close to each other in a pair, being symmetrical, and having the same shape and size, would fail. Hence, efficiently detecting vehicles on the road is a challenge, particularly when the ideal condition of both tail lights being functional is not met.
[0004] Moreover, it is also essential to detect the head lights of forward-approaching target vehicles so that head light beams can be controlled accordingly. However, detecting head lights is a challenge, especially when a myriad of other light sources as bright as head lights can also be detected, so the problem becomes differentiating head lights from those other light sources. One classical solution uses Machine Learning techniques; however, this requires a very large data set for training and incurs computational complexity. Furthermore, such an approach is limited to the particular types of head lights, in terms of shape, size or color, for which the training is done.
SUMMARY
[0005] This summary is provided to introduce concepts related to vehicle detection and more particularly to methods and systems for vehicle detection in unconstrained environments. This summary is neither intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the present disclosure.
[0006] In an aspect, there is provided a method for vehicle detection in an unconstrained environment, the method comprising: receiving at least one color video input wherein the at least one color video input comprises a plurality of frames of colored images; converting at least one received frame of colored image into Hue Saturation Value (HSV) color space; extracting a plurality of light source blobs from the converted HSV color space, wherein the converted HSV comprises red and white range of colors; segregating regions of red range of color from the converted HSV color space; identifying blob relationship of adjacent red and white range of colors by taking union of red and white range to generate a union image; analyzing the union image to detect at least one tail light of the vehicle wherein the at least one tail light is part of the at least one color video input; computing center of divergence on the at least one received frame of colored image, wherein the colored image comprises a plurality of pixels, the plurality of pixels comprises significant pixels and corner pixels; identifying the significant pixels and the corner pixels on the at least one received frame of colored image; computing flow vectors between consecutive frames of the received colored images for the significant and corner pixels, wherein the flow vectors comprise divergent and non-divergent flow vectors; classifying divergent and non-divergent flow vectors for identifying the moving objects; determining at least one bounding box based on cluster of non-divergent points comprising at least one centroid of the detected at least one tail light; and detecting the presence of a vehicle based on the determined at least one bounding box.
[0007] In an embodiment, the method described herein above further comprises identifying at least one head light of the vehicle wherein identifying at least one head light comprises the steps of: masking the detected at least one tail light on the received at least one color image; converting the received at least one color image into a gray scale image; applying multi-level thresholds to the gray scale image to convert the gray scale image into a plurality of binary images wherein the multi-level thresholds are applied by reducing the level of threshold of the gray scale image from a highest threshold to a lowest threshold; determining at least one circular white blob for each of the applied level of thresholds wherein the at least one circular white blob increases in size with each change in level of threshold; and performing at least one of the following steps: generating circles for each level of threshold around the at least one circular white blob on the gray scale image to form at least one concentric circle at the lowest level of threshold around the at least one circular white blob and identifying the at least one concentric circle as a head light; or, determining at least one pair of centroids corresponding to at least one pair of white circular blobs at the highest threshold level; determining the location of the at least one pair of centroids from the highest threshold level at the lowest threshold level; identifying a pair of head lights if the location of the already determined centroids at the highest threshold level is comprised within an area of a single white blob at the lowest threshold level and the single white blob is a merged pair of white circular blobs at the lowest threshold level.
[0008] In another aspect, there is provided a system for vehicle detection comprising: one or more processors; a communication interface device; one or more internal data storage devices operatively coupled to the one or more processors for storing: a tail light identifying module comprising: a color space converter configured to convert at least one received frame of colored image into Hue Saturation Value (HSV) color space; a light source blob extractor configured to extract a plurality of light source blobs from the converted HSV color space, wherein the converted HSV comprises red and white range of colors; a color segregator configured to segregate regions of red color from the converted HSV color space; a blob relationship identifier configured to identify blob relationship of adjacent red and white regions by taking union of red and white regions to generate a union image; a connected component analyzer configured to analyze the union image to detect at least one tail light of the vehicle wherein the at least one tail light is part of the at least one color video input and wherein the step of analyzing the union image comprises analyzing the connected components of the union image; a moving object identifying module comprising: a center of divergence computing module configured to compute center of divergence on the at least one received frame of colored image, wherein the colored image comprises a plurality of pixels, the plurality of pixels comprises significant pixels and corner pixels; a pixel identifier configured to identify the significant and corner pixels on the at least one received frame of colored image; a flow vector computing module configured to compute flow vectors between consecutive frames of received images for the significant and corner pixels, wherein the flow vectors comprise divergent and non-divergent flow vectors; a classifying module configured to classify divergent and non-divergent flow vectors for identifying the moving objects; a bounding box determining module configured to determine bounding box based on cluster of non-divergent points comprising at least one centroid of at least one detected tail light and detect the presence of a vehicle based on the determined at least one bounding box.
[0009] The system described herein above further comprises a head light identifying module comprising: a masking module configured to mask the at least one detected tail light on the received at least one color image; the color space converter module configured to convert the received at least one color image into a gray scale image; a multi-level threshold applying module configured to apply multi-level thresholds to the gray scale image to convert the gray scale image into a plurality of binary images wherein the multi-level thresholds are applied by reducing the level of threshold of the gray scale image from a highest threshold to a lowest threshold; a circular white blob determining module configured to determine at least one circular white blob for each of the applied level of thresholds wherein the at least one circular white blob increases in size with each change in level of threshold; a concentric circle generating module configured to generate circles for each level of threshold around the at least one circular white blob on the gray scale image to form at least one concentric circle at the lowest level of threshold around the at least one circular white blob and identify the at least one concentric circle as a head light; and a merged white blob identifying module configured to: determine at least one pair of centroids corresponding to at least one pair of white circular blobs at the highest threshold level; determine the location of the at least one pair of centroids from the highest threshold level at the lowest threshold level; identify a pair of head lights if the location of the already determined centroids at the highest threshold level is comprised within an area of a single white blob at the lowest threshold level and the single white blob is a merged pair of white circular blobs at the lowest threshold level.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0011] FIG. 1 illustrates an exemplary block diagram of a system for vehicle detection in unconstrained environments in accordance with an embodiment of the present disclosure;
[0012] FIG. 2 illustrates an exemplary block diagram of a tail light identifying module that is part of the system of FIG.1 in accordance with an embodiment of the present disclosure;
[0013] FIG. 3 illustrates an exemplary block diagram of a moving object identifying module that is part of the system of FIG.1 in accordance with an embodiment of the present disclosure;
[0014] FIG. 4 illustrates an exemplary block diagram of a head light identifying module that is part of the system of FIG.1 in accordance with an embodiment of the present disclosure; and
[0015] FIG. 5 is an exemplary flow diagram illustrating a computer implemented method for vehicle detection using the system of FIG. 1 in accordance with an embodiment of the present disclosure.
[0016] It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
DETAILED DESCRIPTION
[0017] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0018] Referring now to the drawings, and more particularly to FIGS. 1, 2, 3 and 4 where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and method.
[0019] FIG. 1 illustrates an exemplary block diagram of a system 100 for vehicle detection in unconstrained environments in accordance with an embodiment of the present disclosure, FIG. 2 illustrates an exemplary block diagram of a tail light identifying module 112 that forms part of the system 100, FIG. 3 illustrates an exemplary block diagram of a moving object identifying module 114 that forms part of the system 100, FIG. 4 illustrates an exemplary block diagram of a head light identifying module 116 that is part of the system of FIG. 1 in accordance with an embodiment of the present disclosure, and FIG. 5 is an exemplary flow diagram illustrating a computer implemented method 500 for vehicle detection in unconstrained environments using the system of FIG. 1. The steps of method 500 of the present disclosure will now be explained with reference to the components of system 100, wherein system 100 comprises a tail light identifying module 112, a moving object identifying module 114, a head light identifying module 116 and a tracking module 118, as depicted in FIG. 1, for vehicle detection in unconstrained environments.
[0020] In an embodiment, system 100 includes one or more processors (not shown), communication interface or input/output (I/O) interface (not shown), and memory or one or more internal data storage devices (not shown) operatively coupled to the one or more processors. The one or more processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, system 100 can be implemented on a server or in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, cloud, hand-held device and the like.
[0021] The I/O interface can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface can include one or more ports for connecting a number of devices to one another or to another server.
[0022] The memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the various modules of system 100 can be stored in the memory.
[0023] For ease of explanation, the description of systems and methods of the present disclosure is provided with reference to a non-limiting example of detecting a moving target vehicle by a host vehicle, wherein the host vehicle is fitted with a monocular camera capable of capturing colored videos of target vehicle in unconstrained environments. In the context of the instant example, the expression “unconstrained environments” used throughout the explanation would refer to environments of low visibility due to various conditions combined with other technical failures like non-functioning of one of the tail-lights of the target vehicle etc.
[0024] In the context of the present disclosure, the expressions “moving object” and “moving vehicle” refer to a moving object or vehicle and are used interchangeably.
[0025] Typically, vehicle detection in unconstrained environments relies on detecting the head lights or tail lights of the target vehicle to ascertain its presence. In an embodiment, a camera present in a host vehicle captures images or videos of the target vehicles that need to be detected. Processing is then performed on the captured images to detect tail lights, moving objects and head lights. One attribute of tail lights has proved to be quite consistent in captured images: a single bright spot (high value and poor saturation) is observed at the center, forming the light source, and a red halo region surrounds this light source. The present disclosure utilizes this attribute to detect the tail lights of the target vehicle by applying color segregation methods that segment the aforementioned bright spot and the surrounding red region, with further analysis to detect the vehicle in unconstrained environments. A head light has the attribute of producing a halo around its contour; this ‘halo effect’ is utilized by the present disclosure to detect head lights. In order to capture the ‘halo effect’ of head lights, the input image is subjected to multiple threshold levels to obtain either concentric circles or merged blobs, the analysis of which leads to the detection of head lights.
[0026] Referring to FIGS. 1, 2 and 5, at step 510, at least one color video input 110 is received by the tail light identifying module 112, wherein the at least one color video input 110 comprises a plurality of frames of colored images. At step 512, a color space converter 200 converts the at least one received frame of colored image into Hue Saturation Value (HSV) color space; that is, the color image, originally in Red Green Blue (RGB) color space, is converted into an HSV image.
[0027] At step 514, a light source blob extractor 210 extracts a plurality of light source blobs from the converted HSV color space. Bright objects, such as any source of light, always have high value and poor saturation irrespective of their color (Hue). In accordance with an embodiment, color segmentation (H: ALL, S: 0-30, V: 94-100) extracts blobs for all the bright objects in the scene, resulting in a binary image. The regions corresponding to bright spots are mostly sources of light and have high intensity values in the V-plane of the HSV image. Statistical properties such as centroid and area are calculated for the extracted light blobs.
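By way of illustration only, the following sketch (not part of the claimed subject matter) shows one way the light-source blob extraction of step 514 might be realized with OpenCV; the function name and the rescaling of the H: ALL, S: 0-30, V: 94-100 ranges to OpenCV's 0-179 / 0-255 conventions are assumptions.

```python
# Illustrative sketch: bright light-source blob extraction via HSV segmentation.
import cv2
import numpy as np

def extract_light_blobs(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Hue: all; Saturation: 0-30% of 255; Value: 94-100% of 255 (assumed scaling)
    lower = np.array([0, 0, int(0.94 * 255)], dtype=np.uint8)
    upper = np.array([179, int(0.30 * 255), 255], dtype=np.uint8)
    white_mask = cv2.inRange(hsv, lower, upper)           # binary image of bright blobs
    # statistical properties (centroid, area) per extracted blob
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(white_mask)
    blobs = [{"centroid": tuple(centroids[i]), "area": int(stats[i, cv2.CC_STAT_AREA])}
             for i in range(1, n)]                         # label 0 is the background
    return hsv, white_mask, blobs
```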
[0028] At step 516, the color segregator 212 segregates regions of the red range of color from the converted HSV color space. Using the range for red color in the H-plane, red regions are segregated from the whole image. In an embodiment, the saturation range is limited simply to reduce unnecessary segments: here, H: 0-20 and 340-360, S: 32-100, V: ALL. From the three-plane HSV image, pixels that satisfy the above-mentioned ranges in all three planes are made white during color segmentation.
[0029] At step 518, the blob relationship identifier 214 identifies the blob relationship of adjacent red regions and white ranges of color by taking the union of the red and white ranges of color to generate a union image. In order to capture the attribute of a tail light that the bright light source or white blob is surrounded by a red region, the adjacency of red and white blobs has to be identified. Such adjacency is identified by generating a union image from the binary images containing the white blobs (extracted at step 514) and the red regions (segregated at step 516). If both color blobs (white and red) are present in the union image in a connected manner, further processing is done on the same image.
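A similarly hedged sketch of the red-range segregation of step 516 and the union image of step 518 is given below; it assumes the `hsv` image and `white_mask` produced in the previous sketch and maps hue 0-20° and 340-360° to approximately 0-10 and 170-179 on OpenCV's hue scale, with saturation 32-100% mapped to roughly 82-255.

```python
# Illustrative sketch: red-region segregation and union with the bright-blob mask.
import cv2
import numpy as np

def red_white_union(hsv, white_mask):
    red_lo = cv2.inRange(hsv, np.array([0, 82, 0], np.uint8),
                         np.array([10, 255, 255], np.uint8))
    red_hi = cv2.inRange(hsv, np.array([170, 82, 0], np.uint8),
                         np.array([179, 255, 255], np.uint8))
    red_mask = cv2.bitwise_or(red_lo, red_hi)      # segregated red regions
    union = cv2.bitwise_or(red_mask, white_mask)   # union image of red and white ranges
    return red_mask, union
```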
[0030] At step 520, the connected component analyzer 216 analyzes the union image to detect a tail light of the vehicle, wherein the tail light is part of the at least one color video input. The connected component analyzer 216 analyzes the connected components of the union image to detect a tail light. Connected components are found from the above-mentioned union image. For each white blob (referred to as a child blob), the corresponding blob in the union image (referred to as a parent blob) is identified by using the centroid of the child blob. The centroids of the white blobs are used as seeds, and hence only the blobs having a parent-child neighborhood relationship (out of all blobs present in the union image) are considered for further processing. The parent-child relationship is established through the position of the centroid of the white child blob. If the area of the parent blob (the union blob) is larger than that of the corresponding child blob (only the bright white blob), it suggests that a red region surrounds the white spot, which is the attribute of a tail light. Hence, the corresponding child blob is detected as a tail light.
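The parent-child analysis of step 520 could, for example, be sketched as follows, assuming the binary `white_mask` and union image from the earlier sketches; the variable names are illustrative only.

```python
# Illustrative sketch: parent-child check between white blobs and union-image blobs.
import cv2
import numpy as np

def detect_tail_lights(white_mask, union_mask):
    nw, _, wstats, wcent = cv2.connectedComponentsWithStats(white_mask)
    _, ul, ustats, _ = cv2.connectedComponentsWithStats(union_mask)
    tail_lights = []
    for i in range(1, nw):                          # iterate over child (white) blobs
        cx, cy = int(round(wcent[i][0])), int(round(wcent[i][1]))
        parent = ul[cy, cx]                         # union blob containing the child's centroid
        if parent == 0:
            continue                                # centroid fell on background, skip
        child_area = wstats[i, cv2.CC_STAT_AREA]
        parent_area = ustats[parent, cv2.CC_STAT_AREA]
        if parent_area > child_area:                # a red halo surrounds the bright spot
            tail_lights.append((cx, cy))
    return tail_lights                              # centroids of candidate tail lights
```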
[0031] Referring now to FIGS. 1, 3 and 5, at step 522, a center of divergence computing module 300 computes the center of divergence on the at least one received frame of colored image, wherein the colored image comprises a plurality of pixels, the plurality of pixels comprising significant pixels and corner pixels. Practically, if a video is captured from a camera placed in a moving object (such as a vehicle), all the objects in the captured scene will be apparently moving. If the moving object carrying the camera moves forward, the static objects in the scene captured by the camera appear to move as divergent points away from a center of divergence, which can be considered a vanishing point. The potential feature points (such as corners) in the scene move radially away from the vanishing point as the number of frames increases, i.e., as the camera moves forward. In accordance with an embodiment, the moving objects are segregated from the scene using multi-level hierarchical Fuzzy C-Means clustering on the phase and magnitude of the flow vectors. In order to detect the center of divergence, which is essentially the vanishing point, the near-perpendicular flow vectors in the top and bottom halves of the captured image frame are selected. The selection is based on flow vectors exhibiting a stable phase (or direction) and originating from one point (fixed x, y). At least two pairs of such stable vectors are identified from the upper and lower halves of the image. If their cross-points are identical or neighboring, that intersecting point is detected as the vanishing point or center of divergence. Otherwise (if the two intersecting points are positioned beyond tolerance), anything above a ‘top’ point is considered the first two quadrants and anything below a ‘down’ point is considered the fixed third and fourth quadrants; the no-man’s land (between ‘top’ and ‘down’) is again divided into four quadrants. After determining the vanishing point, all the flow vectors are clustered based on their phase (or direction) values. The direction of a flow vector is calculated using the formula shown in Equation (1), where v and u are the displacements of the point along the Y (vertical) and X (horizontal) directions.
ANGLE = tan⁻¹(v/u) ------------------------- (1)
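In a practical implementation, Equation (1) may be evaluated with a four-quadrant arctangent so that the full direction range is recovered and division by zero is avoided; the short sketch below is such an assumed variant, not the exact formulation above.

```python
# Illustrative sketch: vectorized flow-vector direction, an assumed variant of Equation (1).
import numpy as np

def flow_angles(u, v):
    # arctan2 handles u == 0 and preserves the quadrant of each (u, v) displacement
    return np.degrees(np.arctan2(v, u))
```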
[0032] At step 524, the pixel identifier 312 identifies corner and other significant pixels on the at least one received frame of colored image. In accordance with an embodiment, M pixels are identified that are favorable for establishing correspondence between a pair of frames, for example corner pixels or Eigen-based features.
[0033] At step 526, flow vector computing module 314 computes flow vectors between consecutive frames from the significant and corner pixels.
[0034] At step 528, the classifying module 316 classifies divergent and non-divergent flow vectors. Among the non-divergent flow vectors, the points exhibiting low randomness and high stability, either in the same direction or in the opposite direction, are detected as points belonging to moving objects or vehicles.
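One hypothetical way to realize steps 524 to 528 is sketched below using Shi-Tomasi corners and pyramidal Lucas-Kanade optical flow; the angular tolerance for calling a vector divergent with respect to the estimated center of divergence is an assumed parameter, not a value from the disclosure.

```python
# Illustrative sketch: flow computation and divergent / non-divergent classification.
import cv2
import numpy as np

def classify_flow(prev_gray, curr_gray, center, angle_tol_deg=30.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return [], []
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    divergent, non_divergent = [], []
    for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.reshape(-1)):
        if not ok:
            continue
        flow = p1 - p0                                     # displacement (u, v)
        radial = p0 - np.array(center, dtype=np.float32)   # outward direction from vanishing point
        if np.linalg.norm(flow) < 0.5 or np.linalg.norm(radial) < 1e-3:
            continue                                       # ignore near-static or degenerate points
        cosang = np.dot(flow, radial) / (np.linalg.norm(flow) * np.linalg.norm(radial))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        (divergent if ang < angle_tol_deg else non_divergent).append(tuple(p1))
    return divergent, non_divergent
```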
[0035] At step 530, a bounding box determining module 318 determines the bounding box based on the cluster of non-divergent points comprising at least one centroid of the already detected at least one tail light. Step 530 also receives input from step 520, since the centroid of an already detected tail light is an ascertained point on the moving object or vehicle. The centroid of a detected tail light is certainly a point on the moving object or vehicle, and hence a small Region of Interest (RoI), in the form of a square or bounding box, is considered around it. From the whole cluster of non-divergent points, the points that fall into the RoI are pushed into a new cluster. For the next iteration, the minimum enclosing rectangle of the points in the new cluster, with a marginal expansion, is considered as the RoI, and the same procedure is repeated until no new point is added to the RoI. The minimum enclosing rectangle is the rectangle drawn using the four extreme values (Xmin, Xmax, Ymin, Ymax) of the points in the new cluster. When no new point is added to the new cluster during an iteration, it implies that all the points for that particular moving object or vehicle have already been pushed into the new cluster, and the final bounding box is the minimum enclosing rectangle of the points of the new cluster without any marginal expansion. While growing the RoI starting from any one detected tail light centroid as described above, if the centroid of another detected tail light falls within the RoI, it means that both tail lights belong to the same moving object or vehicle, and hence one common bounding box is estimated for both of them instead of two. At step 532, the presence of a vehicle is hence detected based on the determined bounding box.
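The RoI-growing procedure of step 530 may be sketched as below; the initial half-width and the marginal expansion are assumed values, and the loop stops exactly when no new non-divergent point joins the cluster, as described above.

```python
# Illustrative sketch: growing a bounding box from a tail-light centroid over
# the cluster of non-divergent points.
import numpy as np

def grow_bounding_box(seed, non_divergent_pts, init_half=20, margin=10):
    pts = np.asarray(non_divergent_pts, dtype=np.float32).reshape(-1, 2)
    x, y = seed
    roi = [x - init_half, y - init_half, x + init_half, y + init_half]  # xmin, ymin, xmax, ymax
    cluster = np.empty((0, 2), dtype=np.float32)
    while True:
        inside = pts[(pts[:, 0] >= roi[0]) & (pts[:, 0] <= roi[2]) &
                     (pts[:, 1] >= roi[1]) & (pts[:, 1] <= roi[3])]
        if len(inside) <= len(cluster):              # no new point joined the cluster
            break
        cluster = inside
        xmin, ymin = cluster.min(axis=0)
        xmax, ymax = cluster.max(axis=0)
        roi = [xmin - margin, ymin - margin, xmax + margin, ymax + margin]  # expanded RoI
    if len(cluster) == 0:
        return roi                                    # fall back to the seed RoI
    xmin, ymin = cluster.min(axis=0)
    xmax, ymax = cluster.max(axis=0)
    return [float(xmin), float(ymin), float(xmax), float(ymax)]  # final box, no expansion
```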
[0036] At step 534, a tail light status determining module 320 determines a status of the detected at least one tail light on a detected vehicle, wherein the status of the detected at least one tail light can be at least one of: both tail lights of the detected vehicle are functional, the left tail light of the detected vehicle is functional, or the right tail light of the detected vehicle is functional. The status of the tail lights within the bounding box, with respect to the tail lights’ functionality and alignment on the vehicle, is hence determined. Once the bounding box is determined for the moving object or vehicle, based on the number of tail lights present in the box and their locations, the status of the tail lights of the detected vehicle is given as one of three possibilities:
1. Vehicle with both functional taillights is detected
2. Vehicle with left non-functional taillight is detected
3. Vehicle with right non-functional taillight is detected
When a tail light is detected, a region is grown from the centroid of the detected tail light towards all high-density, optical-flow-based non-divergent vectors. A bounding box is then determined to contain the grown region or RoI; if two tail lights are contained inside the boundary of the bounding box, the vehicle comprises a pair of tail lights. If the RoI grows towards the right but the final bounding box contains only one tail light, the right tail light is non-functional. Conversely, if the RoI grows towards the left but the final bounding box contains only one tail light, the left tail light is non-functional. Additionally, the methods and systems of the present disclosure also detect the tail light of a two-wheeler based on the width of the non-divergent flow vectors: when a tail light is initially detected and the RoI grown from its centroid expands only towards the top and bottom, but not significantly towards the left or right, the tail light is detected as the tail light of a two-wheeler.
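A simple, assumed sketch of the status determination of step 534 is given below; it labels the status from the detected tail-light centroids that fall inside the final bounding box and uses the side of the box on which a lone light sits as a stand-in for the growth-direction cue described above.

```python
# Illustrative sketch: tail-light status from the final bounding box and detected centroids.
def tail_light_status(box, tail_light_centroids):
    xmin, ymin, xmax, ymax = box
    inside = [(x, y) for (x, y) in tail_light_centroids
              if xmin <= x <= xmax and ymin <= y <= ymax]
    if len(inside) >= 2:
        return "both tail lights functional"
    if len(inside) == 1:
        x, _ = inside[0]
        mid = 0.5 * (xmin + xmax)
        # a lone light on the left of the box implies the right light is dark, and vice versa
        return ("right tail light non-functional" if x < mid
                else "left tail light non-functional")
    return "no tail light inside bounding box"
```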
[0037] Referring now to FIGS. 1, 4 and 5, a method for identifying at least one head light of the vehicle is described as follows: at step 536, a masking module 400 masks the detected at least one tail light on the received at least one color image. By masking the already detected at least one tail light, the possibility of any further detection in those areas is negated. At step 538, the color space converter 200 converts the received at least one color image into a gray scale image. At step 540, a multi-level threshold applying module 412 applies multi-level thresholds to the gray scale image to convert it into a plurality of binary images, wherein the multi-level thresholds are applied by reducing the threshold level of the gray scale image from a highest threshold to a lowest threshold. For example, the highest threshold can be a high intensity level of 250, wherein the gray scale image is converted to a binary or black-and-white image by making intensity levels above the threshold intensity level of 250 white and those below it black. At step 542, a circular white blob determining module 414 determines at least one circular white blob for each of the applied threshold levels, wherein the at least one circular white blob increases in size with each change in threshold level. Hence, as the threshold is decreased, the intensity levels above that threshold are converted to white zones, and the size of the circular white blobs keeps increasing with the growth of the white zones.
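Steps 536 to 542 could, for instance, be sketched as follows; the descending ladder of thresholds (250 down to 150 in steps of 25) is an assumed example consistent with the highest level of 250 mentioned above.

```python
# Illustrative sketch: masking tail lights and producing multi-level binary images.
import cv2
import numpy as np

def multilevel_binaries(bgr_frame, tail_light_mask, levels=(250, 225, 200, 175, 150)):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray[tail_light_mask > 0] = 0                 # mask out the already detected tail lights
    binaries = {}
    for t in levels:                              # highest threshold first, then lower
        _, bw = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        binaries[t] = bw                          # white blobs grow as the threshold drops
    return binaries
```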
[0038] After step 542, in order to detect head lights, either step 544 or steps 546, 548 and 550 can be followed. Step 544 describes the concentric circle approach, and steps 546, 548 and 550 describe the merged blobs approach. The input to both approaches is the same, and with both approaches the behavior of the blobs is analyzed to identify a ‘halo effect’, which is a characteristic of head lights. The ‘halo effect’ is defined as the characteristic of head lights whereby a halo or circular radiance of light is observed encircling a bright light source. Both approaches process the common input independently. When taking the final decision on whether a blob is due to a head light or not, OR logic is used as the voting rule between the two approaches: if the blob is identified as a head light by either or both of the approaches, it is declared a head light. At step 544, a concentric circle generating module 416 generates circles for each threshold level around the at least one circular white blob on the gray scale image to form at least one concentric circle at the lowest threshold level around the at least one circular white blob, and identifies the at least one concentric circle as a head light. As the size of the circular white blob increases at every threshold step, circles are generated for each level, finally producing a set of circles with one common center.
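The concentric-circle test of step 544 might be approximated as below, where a minimum enclosing circle is fitted to the blob containing a candidate light center at each threshold level; the center-distance tolerance used to decide concentricity is an assumption for illustration.

```python
# Illustrative sketch: concentric-circle (halo effect) test across threshold levels.
import cv2
import numpy as np

def is_concentric_halo(binaries, seed, center_tol=10.0):
    circles = []
    for t in sorted(binaries.keys(), reverse=True):            # highest to lowest threshold
        _, labels = cv2.connectedComponents(binaries[t])
        lbl = labels[int(seed[1]), int(seed[0])]
        if lbl == 0:
            return False                                        # seed not on a blob at this level
        ys, xs = np.where(labels == lbl)
        pts = np.column_stack([xs, ys]).astype(np.float32)
        (cx, cy), r = cv2.minEnclosingCircle(pts)
        circles.append(((cx, cy), r))
    centers = np.array([c for c, _ in circles])
    radii = [r for _, r in circles]
    concentric = np.all(np.linalg.norm(centers - centers[0], axis=1) < center_tol)
    growing = all(r2 >= r1 for r1, r2 in zip(radii, radii[1:]))
    return bool(concentric and growing)                         # halo effect => head light
```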
[0039] At step 546, a merged white blob identifying module 418 determines at least one pair of centroids corresponding to at least one pair of white circular blobs at the highest threshold level. At step 548, the merged white blob identifying module 418 determines the location of the at least one pair of centroids from the highest threshold level at the lowest threshold level. At step 550, the merged white blob identifying module 418 identifies at least one pair of head lights if the location of the centroids already determined at the highest threshold level falls within the area of a single white blob at the lowest threshold level, and that single white blob is a merged pair of white circular blobs at the lowest threshold level. As the white zones keep growing with the lowering of the thresholds, the white zones or circular white blobs tend to merge. This characteristic is crucial for detecting head lights: for example, at the highest threshold level the centroids are already detected for a pair of white circular blobs corresponding to probable head lights. At a high threshold level the white zones are smaller in size and hence distinct centroid locations can be identified; however, when the white blobs merge due to their increase in size, the already detected pair of centroids falls within a single white blob, thus confirming the presence of a head light pair.
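A possible sketch of the merged-blob test of steps 546 to 550 is given below; it simply checks whether a pair of centroids found at the highest threshold falls inside one and the same white blob at the lowest threshold.

```python
# Illustrative sketch: merged-blob head-light pair test between the highest and
# lowest threshold binary images.
import cv2

def merged_blob_headlight_pairs(high_thresh_bw, low_thresh_bw):
    nh, _, _, hcent = cv2.connectedComponentsWithStats(high_thresh_bw)
    _, low_labels = cv2.connectedComponents(low_thresh_bw)
    centroids = [tuple(map(int, hcent[i])) for i in range(1, nh)]   # centroids at highest level
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            li = low_labels[centroids[i][1], centroids[i][0]]
            lj = low_labels[centroids[j][1], centroids[j][0]]
            if li != 0 and li == lj:              # both centroids inside a single merged blob
                pairs.append((centroids[i], centroids[j]))
    return pairs                                   # each pair is reported as a head-light pair
```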
[0040] The methods and systems disclosed herein above further comprise a tracking module 118 that tracks at least one of the identified tail light or head light based on an optical flow method. In a real-time scenario, accuracy and speed are major factors, but there is a significant trade-off between the two. In an embodiment, to better address these factors, tracking has been incorporated along with the detection to improve performance. Optical flow based tracking of the already detected tail light reduces the execution time significantly, bringing it closer to the original frames per second (fps) of the captured video. This speed is further increased by performing the optical flow based tracking on an adaptive Region of Interest, which is derived from the minimum area covered by the previously tracked or detected feature list. Another advantage of optical flow is that several false rejections from the detection stage are properly tracked. The main reason for such false rejections is that only the ideal conditions are considered when selecting a tail light for detection, and these conditions may not be satisfied in every frame. This is avoided because optical flow does not consider those conditions; rather, it considers the displacement of the selected feature from the previous frame to the current one. A trade-off between accuracy and speed is made in this scenario by utilizing periodic detection and tracking to avoid missing such vehicles.
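The optical flow based tracking described above could be sketched as follows, with pyramidal Lucas-Kanade flow propagating the previously detected tail-light features and the minimum rectangle around the tracked features serving as the adaptive RoI for the next pass; function and variable names are illustrative only.

```python
# Illustrative sketch: optical-flow tracking of previously detected tail-light features.
import cv2
import numpy as np

def track_tail_lights(prev_gray, curr_gray, features):
    pts = np.asarray(features, dtype=np.float32).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    tracked = nxt.reshape(-1, 2)[status.reshape(-1) == 1]
    # adaptive RoI for the next detection pass: minimum rectangle covering the
    # tracked features (to be expanded by a margin in a full pipeline)
    roi = (np.concatenate([tracked.min(axis=0), tracked.max(axis=0)])
           if len(tracked) else None)
    return tracked, roi
```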
[0041] A statistical analysis of the history of detections is performed, and the consistency of the detections is utilized for further tracking. The detections in a set of multiple alternate frames are correlated to understand any inconsistency. The same set of tail lights may not have the same order of detection, so the detected contours are rearranged into the order in which they were first detected. This rearrangement involves tracking the first detection, checking whether the next detection happens in a region near the tracked region, and voting accordingly. Multiple false acceptances can thus be removed, improving the accuracy. Only the consistent detections are added to the feature list that is to be tracked. The next optimization has been done to improve the speed of detection: since tracking the previous detections already yields a set of tail light detections, it is unnecessary to detect the same tail light again. For this purpose, the tracked features are masked before the next detection, thus avoiding re-detection of the same feature set.
[0042] In an aspect, there are cases where detections yield false positives on reflections from red reflectors and on red vehicles, which may even be consistent for a specific set of frames. This leads to tracking of the wrong set of features, and since the tracking does not involve elimination based on any conditions, the false detections are tracked until the vehicle exits the frame. To avoid such cases, an evaluation of the tracked region is performed. The evaluation is done in multiple phases to eliminate close-range and far-range false positives. The issue of tracking far-range reflections is solved by understanding the vanishing of the white blob: a condition is assigned which eliminates the tracked feature by evaluating whether a particular tracked feature lies within a blob. If it does not lie within a blob, it certainly indicates that it is not a source of light and need not be tracked. Close-range reflections are generally caused by the high or low beams of vehicles, so understanding the maximum possible size and width of the closest tail light and utilizing those parameters as limiting factors for eliminating unnecessarily tracked features reduces false positives. By this method, multiple levels of elimination are applied to detection and tracking, which in turn improves the accuracy and thus the overall performance of the system.
[0043] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments of the invention. The scope of the subject matter embodiments defined here may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language.
[0044] It is, however to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the invention may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0045] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules comprising the system of the present disclosure and described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The various modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of non-transitory computer readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives.
[0046] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
[0047] Further, although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[0048] The preceding description has been presented with reference to various embodiments. Persons having ordinary skill in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope.
CLAIMS:
WE CLAIM:
1. A method for vehicle detection in unconstrained environments, the method comprising:
receiving at least one color video input wherein the at least one color video input comprises a plurality of frames of colored images;
converting at least one received frame of colored image into Hue Saturation Value (HSV) color space;
extracting a plurality of light source blobs from the converted HSV color space, wherein the converted HSV comprises red and white range of colors;
segregating regions of red range of color from the converted HSV color space;
identifying blob relationship of adjacent red and white range of colors by taking union of red and white range to generate a union image;
analyzing the union image to detect at least one tail light of the vehicle wherein the at least one tail light is part of the at least one color video input;
computing center of divergence on the at least one received frame of colored image, wherein the colored image comprises a plurality of pixels, the plurality of pixels comprises significant pixels and corner pixels;
identifying the significant pixels and the corner pixels on the at least one received frame of colored image;
computing flow vectors between consecutive frames of the received colored images for the significant and corner pixels, wherein the flow vectors comprise divergent and non-divergent flow vectors;
classifying divergent and non-divergent flow vectors for identifying the moving objects;
determining at least one bounding box based on cluster of non-divergent points comprising at least one centroid of the detected at least one tail light; and
detecting the presence of a vehicle based on the determined at least one bounding box.
2. The method of claim 1, wherein the unconstrained environment comprises at least one of: low visibility condition, night-time darkness, foggy weather, heavy rainfall, smoky environment, solar-eclipse, and non-functional tail light of the vehicle to be detected.
3. The method of claim 1, further comprising a step of determining a status of the detected at least one tail light on a detected vehicle wherein the status of the detected at least one tail light can be at least one of: both tail lights of detected vehicle are functional, left tail light of detected vehicle is functional, right tail light of detected vehicle is functional.
4. The method of claim 1, further comprising a step of detecting a tail light of a two wheeler based on width of non-divergent flow vectors.
5. The method of claim 1, wherein the step of analyzing the union image comprises analyzing the connected components of the union image.
6. The method of claim 1, further comprising identifying at least one head light of the vehicle wherein identifying at least one head light comprises the steps of:
masking the detected at least one tail light on the received at least one color image;
converting the received at least one color image into a gray scale image;
applying multi-level thresholds to the gray scale image to convert the gray scale image into a plurality of binary images wherein the multi-level thresholds are applied by reducing the level of threshold of the gray scale image from a highest threshold to a lowest threshold;
determining at least one circular white blob for each of the applied level of thresholds wherein the at least one circular white blob increases in size with each change in level of threshold; and performing at least one of the following steps:
generating circles for each level of threshold around the at least one circular white blob on the gray scale image to form at least one concentric circle at the lowest level of threshold around the at least one circular white blob and identifying the at least one concentric circle as a head light;
or,
determining at least one pair of centroid corresponding to at least one pair of white circular blob at the highest threshold level;
determining the location of the at least one pair of centroid from the highest threshold level at the lowest threshold level;
identifying a pair of head lights if the location of the already determined centroids at the highest threshold level is comprised within an area of a single white blob at the lowest threshold level and the single white blob is a merged pair of white circular blobs at the lowest threshold level.
7. The method of claim 6 further comprising tracking at least one of the identified tail light or headlight based on optical flow method.
8. A system for vehicle detection comprising:
one or more processors;
a communication interface device;
one or more internal data storage devices operatively coupled to the one or more processors for storing:
a tail light identifying module comprising :
a color space converter configured to convert at least one received frame of colored image into Hue Saturation Value (HSV) color space;
a light source blob extractor configured to extract a plurality of light source blobs from the converted HSV color space, wherein the converted HSV comprises red and white range of colors;
a color segregator configured to segregate regions of red color from the converted HSV color space;
a blob relationship identifier configured to identify blob relationship of adjacent red and white regions by taking union of red and white regions to generate a union image;
a connected component analyzer configured to analyze the union image to detect at least one tail light of the vehicle wherein the at least one tail light is part of the at least one color video input and wherein the step of analyzing the union image comprises analyzing the connected components of the union image;
a moving object identifying module comprising:
a center of divergence computing module configured to compute center of divergence on the at least one received frame of colored image, wherein the colored image comprises a plurality of pixels, the plurality of pixels comprises significant pixels and corner pixels;
a pixel identifier configured to identify the significant and corner pixels on the at least one received frame of colored image;
a flow vector computing module configured to compute flow vectors between consecutive frames of received images for the significant and corner pixels, wherein the flow vectors comprise divergent and non-divergent flow vectors;
a classifying module configured to classify divergent and non-divergent flow vectors for identifying the moving objects; and
a bounding box determining module configured to determine bounding box based on cluster of non-divergent points comprising at least one centroid of at least one detected tail light and detect the presence of a vehicle based on the determined at least one bounding box.
9. The system of claim 8, further comprising a tail light status determining module configured to determine a status of the detected at least one tail light on a detected vehicle wherein the status of the detected at least one tail light can be at least one of: both tail lights of detected vehicle are functional, left tail light of detected vehicle is functional, right tail light of detected vehicle is functional.
10. The system of claim 8 further comprising a head light identifying module comprising:
a masking module configured to mask the at least one detected tail light on the received at least one color image;
the color space converter module configured to convert the received at least one color image into a gray scale image;
a multi-level threshold applying module configured to apply multi-level thresholds to the gray scale image to convert the gray scale image into a plurality of binary images wherein the multi-level thresholds are applied by reducing the level of threshold of the gray scale image from a highest threshold to a lowest threshold;
a circular white blob determining module configured to determine at least one circular white blob for each of the applied level of thresholds wherein the at least one circular white blob increases in size with each change in level of threshold;
a concentric circle generating module configured to generate circles for each level of threshold around the at least one circular white blob on the gray scale image to form at least one concentric circle at the lowest level of threshold around the at least one circular white blob and identify the at least one concentric circle as a head light; and
a merged white blob identifying module configured to:
determine at least one pair of centroid corresponding to at least one pair of white circular blob at the highest threshold level;
determine the location of the at least one pair of centroid from the highest threshold level at the lowest threshold level;
identify a pair of head lights if the location of the already determined centroids at the highest threshold level is comprised within an area of a single white blob at the lowest threshold level and the single white blob is a merged pair of white circular blobs at the lowest threshold level.
11. The system of claim 10 further comprising a tracking module configured to track at least one of the identified tail light or headlight based on optical flow method.