Abstract: The present invention discloses a system for detecting, locating and tracking vehicles at night time. The method disclosed herein comprises segmentation, validation, clustering, tracking and physical parameter estimation for the detection of vehicles. In one aspect, the system utilizes entropy-based image segmentation of the raw image obtained from a multichannel camera.
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
AND
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION:
"A SYSTEM FOR DETECTING, LOCATING AND TRACKING A
VEHICLE"
2. APPLICANT (S):
(a) NAME: KPIT Cummins Infosystems Limited
(b) NATIONALITY: Indian Company incorporated under the
Companies Act, 1956
(c) ADDRESS: 35 & 36 Rajiv Gandhi Infotech Park, Phase 1, MIDC,
Hinjewadi, Pune 411057, India.
3. PREAMBLE TO THE DESCRIPTION:
The following specification describes the invention and the manner in which it is to be performed.
Field of Invention:
The present invention relates to a system for automatic detection, location and tracking of vehicles at night time employing image segmentation based on entropy analysis, and a method thereof. The invention further relates to headlight control systems.
Background of Invention:
Research in motion analysis has evolved over the years. It has found versatile and extensive applications in challenging fields such as traffic monitoring, military, medicine and the biological sciences. Detection and tracking of moving objects in video sequences can offer significant benefits to motion analysis. One approach to in-motion vehicle tracking and detection is to use visual signals received from a camera or any other photographic means, in other words a vision-based approach.
During the past ten years much research has gone into the area of computer vision for autonomous vehicle navigation. Many algorithms and methods have been proposed, all with an ultimate common goal: to give vehicles the intelligence to interpret visual information.
The vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A number of such detection and tracking systems are known in the prior art; a few are exemplified hereunder.
US 2009/0160630 provides a collision avoidance system enabled by input of a scene image received solely from a vision-based stereo configuration. The said system is directed to a fully automated collision avoidance system. The said system uses an IR source, which is expensive, thereby making the complete system expensive.
US 2009/240432 provides a device for detection of the relative distance between the vehicle and an obstacle with improved accuracy. The device enables reliable distinction between recognized objects to represent whether the detected object is an obstacle that would cause damage in case of collision. The said patent uses a width-only based distance calculation, particularly during day time. The said system cannot be utilized during night time.
US 2010/0091513 describes a vehicle detection apparatus, its method and a light control apparatus wherein, an oncoming vehicle is detected based on luminance and brightness of areas in an input image. The application fails to teach how a detected vehicle is tracked for driver assistance.
As seen from the prior art, there is a need for a cost-efficient system that can provide driver assistance even in poor lighting conditions to detect and track vehicles oncoming in its path. Accordingly, the present invention provides a novel driver assistance system for vehicle detection and tracking at night time.
Summary of Invention:
The present invention provides a system for vehicle detection, tracking and location at night time. The system of the present invention is based on bright spot segmentation, spot validation, clustering of vehicles, tracking and physical parameter estimation for the detection of vehicles. The system comprises an entropy-based segmentation approach to detect circular light blobs even under difficult and varying light conditions. The system further comprises a rule-based clustering unit to eliminate false segmentations and confirm correct light pairs.
Brief Description of Drawings:
Fig. 1 illustrates a block diagram of the dynamic high beam system according to the
embodiment of the invention.
Fig. 2 illustrates the segmentation module flowchart.
Fig. 3 illustrates an entropy analysis based segmentation flowchart.
Fig. 4 illustrates ray diagram for region of interest determination.
Fig. 5 illustrates area thresholding logic.
Fig. 6 illustrates detection of probable vehicles with vertical overlapping.
Fig. 7 illustrates a flowchart for vehicle tracking.
Fig. 8 illustrates calculation of vehicle signature.
Fig. 9 illustrates calculation for estimating vehicle signature in a search window.
Fig. 10 illustrates a flowchart for vehicle width estimation.
Fig. 11 illustrates horizontal plane road mapping geometry.
Fig. 12 illustrates a graph depicting the relation between row number in the image and the
scale factor (SFn).
Fig. 13 illustrates an error plot for pre-coded distance estimation technique.
Fig. 14 illustrates a search window for tracking.
Fig. 15 illustrates meeting of parallel lines in a pin-hole camera.
Fig. 16 illustrates the division of image frame.
Detailed description of Drawings:
Fig. 1 illustrates the operation of the proposed invention. It shows the process involved in the detection and tracking of vehicles in night-time images obtained from a multichannel camera.
Fig. 2 illustrates the segmentation process wherein the multicolor image from the Multichannel Camera (1) is processed for identification of potential regions through the Potential Region Identification Module (221). The regions thus identified are further subjected to segmentation to highlight vehicles through the Segmentation Module (22).
Fig. 3 illustrates the flowchart for the Rule Based Engine (23). The figure shows the various components of the rule based engine.
Fig. 4 illustrates region of interest determination, whereby 'O' is the pinhole camera. Using perspective geometry, the vehicle head/tail light region is determined. In Fig. 4, 'HL' is the required vehicle region.
Fig. 5 illustrates a raw image received from the Multichannel Camera (1).
Fig. 6 illustrates the vertical overlap removal to avoid false detections.
Fig. 7 illustrates the flowchart for the Tracking Module (25), whereby the tracking comprises reading image, followed by obtaining the vehicle location using segmentation and decision logic based on presence of vehicles, defining and extracting search
window(s) in the next frame, obtaining horizontal and vertical signatures for the search window(s).
Fig. 8 gives an illustration for calculation of signature of a search window to identify the location of a bright spot.
Fig. 9 depicts the graphical representation of vehicle signature of an example search window.
Fig. 10 illustrates a flow chart for vehicle width estimation whereby, firstly, the width of the vehicle (in pixels) and the horizontal position of the vehicle are calculated using image processing techniques. Secondly, a scale factor (SFn) is calculated for the horizontal position of the vehicle thus obtained. Finally, the width of the vehicle is estimated based on the scale factor of the vehicle.
As shown in Fig. 11, projecting the target vehicle at a distance 'D' on to the image plane (I) will produce a displacement of 'd' from the principal point 'O' of the camera. Thus, the camera can be calibrated in offline mode to generate a look-up table for distance estimation using the target vehicle position in the image plane as index. Thus, the pre-coded/pre-determined distance estimation technique gives staircase distance readings rather than continuous ones.
Fig. 12 illustrates a sample graph depicting the relation between the row numbers in the image and the scale factor 'SFn'.
Fig. 13 depicts the performance of the distance estimation technique discussed herein.
Fig. 14 illustrates the schematic diagram for the search window used for tracking, as described herein below. If a vehicle 'i' is detected in the (n-1)th frame, then a search window is defined in the nth image around the region where the ith vehicle is found in the (n-1)th frame. The search window is defined in the nth frame by increasing both the width and the height of the vehicle detected in the (n-1)th frame by 6 pixels.
As shown in Fig. 15, in perspective projection, parallel lines on a plane 'π' will cross each other in the image plane 'H'.
Fig. 16 illustrates the segmentation of image based on entropy.
Detailed description of Invention:
The present invention will now be described in detail with reference to the preferred and optional embodiments so that various aspects thereof may be more fully understood and appreciated.
Vehicle detection at night is a very important application for driver assistance systems. Hence, a vision-based approach to the detection and recognition of vehicles, the calculation of their motion parameters, and the tracking of multiple vehicles using a sequence of grayscale images taken from a moving vehicle is presented. The present invention describes an automatic headlight control system for vehicles employing image segmentation by entropy analysis, a rule based engine for eliminating false detections, tracking of the detected vehicle across consecutive frames, and measurement of the detected vehicle's distance, angle and width.
The present invention describes a system for real-time detection and location of vehicles. The said system consists of a Multi-channel Camera (1), a Controlling Unit (2) and an Output Device (10).
The present invention further describes a method for real-time detection and location of vehicles comprising,
• Receiving a multi-channel color image;
• identifying the potential region within multi-channel color image based on its entropy (potential region identification);
• segmenting the vehicles in identified potential region using gray scale image (vehicle segmentation);
• subjecting the segmented image to Rule based engine for processing the probable vehicle candidates for vehicle detection;
• applying lamp pair tracking algorithm to the resultant image;
• applying vehicle width estimation processing to the image obtained from the above step;
• estimating the distance exclusively based on vision for the image of the step above; and
• displaying the image on the display device.
Fig. 1 illustrates a flowchart for the dynamic high beam system according to the embodiment of the invention, wherein a multi-channel colored image is received from a Multichannel Camera (1). The image thus obtained is then passed on to a Tracker (21) of the Controlling Unit (2). The image frames from the Multichannel Camera (1) are input to a Frame Count Increment Block (3). The Frame Count Increment Block (3) serially increments the number of the input image frames with the passage of each frame. The output 'Count' number of the Frame Count Increment Block (3) is provided to the Tracker (21). The output of the Tracker (21) is determined by the comparison between the 'Count', which is the serial number of the input frame, and a predefined value 'N', which is set to the maximum number of frames that need to be tracked. The predefined variable 'N' may be set to any value, as per the requirement of the system. For the embodiment of the invention, if the 'Count' is a multiple of 'N', the output of the Tracker (21) is "Yes" and the image is passed on to a Segmentation Module (22) where the image is segmented based on entropy. The image thus segmented is passed on to a Rule Based Engine (23) that processes the probable vehicle candidates for vehicle detection. The vehicle information obtained from the Rule Based Engine (23) is further passed on to a DAW Block (24) for estimation of distance, angle and width (DAW). If the 'Count' is not a multiple of 'N', the output of the Tracker (21) is "No" and the image is directly processed for vehicle detection by the Tracking Module (25), by-passing the Segmentation Module (22) and the Rule Based Engine (23). The vehicle information, which is further processed for distance, angle and width measurement by the DAW Block (24), is then passed over the CAN network to the Output Device (10). The Output Device (10) may be a video display or an audio warning device known in the art. A video display shows the detected vehicle on a screen, and an audio device may provide a warning alert in case of various predefined scenarios like proximity detection, collision warning, etc.
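By way of a non-limiting illustration, the frame dispatch logic of the Controlling Unit (2) described above may be sketched in Python as follows; the functions segment, rule_based_engine, track and estimate_daw are hypothetical placeholders for the Segmentation Module (22), the Rule Based Engine (23), the Tracking Module (25) and the DAW Block (24), and only the Count/N branching reflects the description above.

```python
# Minimal sketch of the Controlling Unit (2) frame dispatch described above.
# segment(), rule_based_engine(), track() and estimate_daw() are hypothetical
# placeholders for modules (22), (23), (25) and (24) respectively.

def process_frame(frame, count, N, prev_vehicles,
                  segment, rule_based_engine, track, estimate_daw):
    """Route a frame either through full detection or through tracking only."""
    if count % N == 0:
        # Tracker (21) output "Yes": run entropy-based segmentation
        # followed by the rule based engine to detect vehicles afresh.
        segmented = segment(frame)
        vehicles = rule_based_engine(segmented)
    else:
        # Tracker (21) output "No": by-pass (22) and (23) and track the
        # vehicles found in earlier frames inside local search windows.
        vehicles = track(frame, prev_vehicles)
    # Distance, angle and width estimation for every confirmed vehicle.
    daw = [estimate_daw(v) for v in vehicles]
    return vehicles, daw
```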
The components mentioned above are described in detail hereinafter.
In an embodiment, as described in Fig. 2, the Segmentation Module (22) of the Controlling Unit (2) facilitates the segmentation of the image based on entropy. The said Segmentation Module (22) performs Potential Region Identification (221) based on entropy, which is explained hereinafter. After identifying potential regions, the Segmentation Module (22) further performs Vehicle Segmentation (222), explained hereinafter, to identify the vehicle. This identification is performed based on a thresholding technique.
Step 1: Potential Region Identification (221):
Each input image frame of size 'R' x 'C' pixels is represented in a multi-channel color format such as YCbCr, RGB, HSV, YUV or YCbCrK. 'R' and 'C' are constant inputs based on the system requirement. The pixels in all the channels are lexicographically ordered as a vector and a histogram of the ordered pixels is obtained. The obtained histogram is used to compute the entropy value. The entropy value (E) is given by
E = - Σ p(xi) log2 p(xi), summed over i = 1 to N,
where
xi represents pixel values,
p(xi) represents the normalized probability of pixels, and
N represents the total number of pixels.
1. Each frame is segmented into four equal sized segments SF1, SF2, SF3 and SF4 of size R/4 x C pixels each, and each segment is in multi-channel color format. The entropy values are computed for each segment and the corresponding entropy values are E1, E2, E3 and E4, respectively.
2. Each frame is sub-segmented into eight segments sf1, sf2, sf3, sf4, sf5, sf6, sf7 and sf8 of size R/8 x C pixels each, and the corresponding entropies are e1, e2, e3, e4, e5, e6, e7 and e8, respectively. The segmentation of frames is shown in Fig. 16.
3. If E1>E or E2>E or E3>E or E4>E, then the segment which has the highest entropy value as compared to the other segments is chosen as a potential region (for example SF3). Further, the entropy values of the sub-segments which correspond to the chosen segment are compared (for example, entropy values e5 and e6 are compared for the SF3 segment). The sub-segment which has the higher entropy value as compared to the other sub-segment is chosen as the potential region (for example, if e5>e6 then sub-segment sf5 is chosen as the potential region). The chosen potential region size is R/8 x C pixels.
4. If none of the entropies (E1, E2, E3 or E4) is greater than the entropy of the frame, the segment which has the highest entropy value as compared to the other segments is chosen as the potential region (for example, if E1>E2>E4>E3, then the sub frame SF1 is chosen). The chosen potential region size is R/4 x C pixels.
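A minimal, non-limiting Python sketch of the potential region identification described in steps 1 to 4 above is given below; it assumes the frame is supplied as a NumPy array of shape (R, C, channels) and uses a 256-bin histogram, which is an assumed parameter.

```python
import numpy as np

def entropy(pixels, bins=256):
    """Shannon entropy of the lexicographically ordered pixel values."""
    hist, _ = np.histogram(pixels.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return -np.sum(p * np.log2(p))

def potential_region(frame):
    """Return (row_start, row_end) of the potential region of a frame
    of shape (R, C, channels), following steps 1-4 above."""
    R = frame.shape[0]
    E = entropy(frame)
    quarters = [frame[i * R // 4:(i + 1) * R // 4] for i in range(4)]  # SF1..SF4
    eighths  = [frame[i * R // 8:(i + 1) * R // 8] for i in range(8)]  # sf1..sf8
    Eq = [entropy(s) for s in quarters]
    es = [entropy(s) for s in eighths]
    best = int(np.argmax(Eq))                       # segment with highest entropy
    if max(Eq) > E:
        # Compare the two sub-segments covering the chosen quarter (R/8 x C region).
        sub = 2 * best + int(es[2 * best + 1] > es[2 * best])
        return sub * R // 8, (sub + 1) * R // 8
    # Otherwise keep the whole quarter (R/4 x C region).
    return best * R // 4, (best + 1) * R // 4
```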
Step 2: Vehicle segmentation (222):
After identifying the potential region, a thresholding technique is applied on the gray scale image to obtain a segmented image. The vehicle segmentation process is given below.
1. The multi-channel image is converted to gray scale image.
2. Entropy value of the gray scale image (e) is computed.
3. A constant (k) times the entropy value (e) is chosen as the threshold value. Hard thresholding is applied on the identified potential region of the gray scale image and the segmented image is obtained. The hard threshold T is given by
T = k x e,
where the image being thresholded is the potential region of the gray scale image to be segmented, and k and e represent the thresholding constant and the entropy value, respectively.
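A corresponding, non-limiting sketch of the vehicle segmentation step is given below; it reuses the entropy helper from the previous sketch, uses a simple channel-average gray scale conversion, and the value of the thresholding constant k is only an assumed example.

```python
import numpy as np
# entropy() as defined in the earlier potential-region sketch.

def segment_vehicles(frame, region, k=1.5):
    """Hard-threshold the potential region of the gray scale image.
    k is the thresholding constant; the value 1.5 is only an assumed example."""
    gray = frame[..., :3].mean(axis=2)              # simple gray scale conversion
    e = entropy(gray)                               # entropy of the gray scale image
    T = k * e                                       # hard threshold T = k * e
    r0, r1 = region                                 # rows of the potential region
    segmented = np.zeros(gray.shape, dtype=np.uint8)
    segmented[r0:r1] = (gray[r0:r1] > T).astype(np.uint8)
    return segmented
```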
In another embodiment, the Rule Based Engine (23) of the Controlling Unit (2) processes the image as obtained from the Segmentation Module (22) to eliminate false detections. The focus of this module is to reduce the erroneous detection of noise such as the bright spots produced by reflectors, street lamps and traffic signs. Accordingly, a sequence of rules is executed as illustrated in Fig. 3. The Rule Based Engine (23) accepts a binary input image from the Segmentation Module (22) containing bright spots. The first step is to simplify vehicle detection by restricting the Region of Processing (231) in the image using constraints derived from perspective geometry and camera calibration parameters.
The light scattering effects appearing in the image are handled by a Scatter Removal Processing (232). To refine the shape of components, the image is processed by a Majority Filter (233). Noise components appearing in the periphery of vehicle regions are eliminated using Area Thresholding Technique (234) and Width Thresholding Technique (235). The Component Clustering (236) applies various rules to measure symmetry between the potential components for head/tail light pair detection. Vertical Overlap Removal (237) reduces false positives in vehicle detection by exploiting the fact that no two vehicles that are located along the same line of view can simultaneously appear in the image. Road Reflection Removal (238) eliminates the road reflections underneath the head/tail light by detecting them using the spatial symmetry exhibited by the head/tail lamp pair and its reflection.
The method of the Rule Based Engine (23) for eliminating false detection comprises;
1. Region of Processing/interest determination (231);
2. Scatter Removal Processing (232);
3. Application of Majority Filter (233) to refine the shape of the components to improve the detection accuracy;
4. Application of Area Thresholding Technique (234) for reducing the noise thereby achieving increased detection accuracy;
5. Application of Width Thresholding Technique (235) for eliminating false pairs of vehicle lamps;
6. Application of Component Clustering (236);
7. Vertical Overlap Removal (237), and
8. Road Reflection Removal (238).
These are described in detail hereinafter:
1) Region of Processing/interest determination (231):
At night time, the most salient features of vehicle are its bright head/tail lights. From perspective geometry of image formation, the head/tail lights location in image depends on height of head/tail lights from the road plane, distance from the camera and camera position. The maximum and minimum possible heights of head/tail lamps of vehicles from the road are considered to determine the bounds of the search range in the image. Using this range and camera calibration parameters, the region of
head/tail lights is identified, which is used as the region of vehicle detection. The vertical search range for the vehicles is determined using perspective geometry. The image locations corresponding to the search limits are thus determined. In general, bright components such as street lamps and traffic lights not located within the potential region are removed. This results in simplification of the scene for analysis.
In Fig. 4, 'O' is the pinhole camera, 'B' is the minimum distance of the vehicle from the host vehicle, 'D' is the maximum distance from the host vehicle, 'BG = DF' is the minimum height of the head/tail light of the vehicle from the road, 'DE = BC' is the maximum height of the head/tail light of the vehicle from the road, 'KOL' is the optical axis of the camera, and 'MN' is the image plane. Using perspective geometry, the vehicle head/tail light region is determined. In Fig. 4, the region between 'H' and 'L' is the vehicle appearance region.
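For illustration only, the vertical search range may be approximated with a pin-hole projection as sketched below; the focal length, camera height, principal row, distance bounds and lamp height bounds are hypothetical parameters, not values prescribed by the invention.

```python
def lamp_search_rows(f_px, cam_height, principal_row,
                     d_min, d_max, h_min, h_max):
    """Approximate vertical search range (rows) for head/tail lamps using a
    pin-hole model. All parameters are illustrative: f_px is the focal length
    in pixels, cam_height the camera height above the road (m), d_min/d_max the
    minimum/maximum vehicle distances (m) and h_min/h_max the minimum/maximum
    lamp heights above the road (m)."""
    rows = []
    for d in (d_min, d_max):
        for h in (h_min, h_max):
            # A point at height h and distance d projects f_px*(cam_height-h)/d
            # pixels below the principal row (negative values lie above it).
            rows.append(principal_row + f_px * (cam_height - h) / d)
    return int(min(rows)), int(max(rows))   # top and bottom of the search region
```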
2) Scatter Removal Processing (232):
The scatter removal processing is done to reduce the blooming/scatter effect of the head/tail lights. A bounding box is constructed enclosing each detected bright object region (detected in segmentation). A connected-component labeling process is performed on the segmented image to locate the connected bright objects. A set of properties such as the number of pixels and the dimensions of the bounding box is calculated for each connected component (object) in the binary image. Components larger than a defined threshold on the number of pixels are potential candidates exhibiting the scatter effect. The bounding box region of those components is segmented with a higher threshold to preserve only the highly bright regions, which reduces the blooming/scatter effect.
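A non-limiting sketch of the scatter removal step, using standard connected-component labeling from SciPy, is given below; the area and high-intensity thresholds are assumed, experiment-derived parameters.

```python
import numpy as np
from scipy import ndimage

def scatter_removal(binary, gray, area_thresh, high_thresh):
    """Re-segment large connected bright components with a higher threshold to
    suppress the blooming/scatter effect. area_thresh (pixels) and high_thresh
    (gray value) are assumed, experiment-derived parameters."""
    labels, num = ndimage.label(binary)             # connected-component labeling
    out = binary.copy()
    for i, box in enumerate(ndimage.find_objects(labels), start=1):
        comp = labels[box] == i
        if comp.sum() > area_thresh:                # potential candidate with scatter
            # Keep only the highly bright pixels inside the bounding box.
            out[box] = np.where(comp, gray[box] > high_thresh, out[box])
    return out
```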
3) Majority Filter (233):
From perspective geometry, near vehicles are formed at the bottom region of the vehicle detection region. The binary Majority Filter (233) is applied on the near vehicle region (refer Fig. 5) only. The filtering is performed through the application of a mask structure element on every pixel in the image. In the mask region the number of non-zero pixels (N) is calculated and the image pixel is transformed as follows:
I(m,n) = 1, if N > M/2; I(m,n) = 0, otherwise,
where
I(m,n) = Image pixel at row 'm' and column 'n',
M = Number of pixels in the mask, and
N = Number of non-zero pixels in the mask region.
To handle the borders of the image, the image is padded with zeros on the borders according to the size of the mask. The majority filter is used to refine the shape of the components to improve the detection accuracy.
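An illustrative implementation of the majority filter under the N > M/2 majority rule stated above is sketched below; the 3x3 mask size is an assumption.

```python
import numpy as np

def majority_filter(binary, mask_size=3):
    """Majority Filter (233): a pixel becomes 1 when more than half of the
    pixels inside the mask are non-zero. The 3x3 mask size is an assumption."""
    pad = mask_size // 2
    padded = np.pad(binary.astype(np.uint8), pad)   # zero-pad the borders
    out = np.zeros_like(binary, dtype=np.uint8)
    M = mask_size * mask_size                       # number of pixels in the mask
    for m in range(binary.shape[0]):
        for n in range(binary.shape[1]):
            N = padded[m:m + mask_size, n:n + mask_size].sum()
            out[m, n] = 1 if N > M / 2 else 0       # majority rule
    return out
```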
4) Area Thresholding (234):
The input to this functionality is a binary image with pixels belonging to probable vehicle regions marked as one. Perspective geometry is used as a basis to divide the region of processing in the image further into regions corresponding to near and far vehicle regions (Refer Fig. 5). A threshold is set on the pixel area of vehicle lamps for the near vehicle regions in the image. Using this method, as illustrated in Fig. 5, reduction of noise to improve vehicle detection accuracy is achieved. The application of constraints is expressed as;
NL < PA, if the centroid of the component lies in the near vehicle region,
where
PA: Pixel area of the component, i.e. the number of pixels in the component, and
NL: Lower area threshold for near regions.
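For illustration, the area thresholding constraint may be applied as sketched below; the component representation (centroid and pixel area fields) and the near-region boundary row are assumptions of the sketch.

```python
def area_threshold(components, near_row, NL):
    """Discard components in the near vehicle region whose pixel area PA is
    below the lower area threshold NL. Each component is assumed to be a dict
    with a 'centroid' (row, col) and a pixel 'area'; near_row marks the boundary
    between the far (above) and near (below) vehicle regions."""
    kept = []
    for c in components:
        in_near_region = c['centroid'][0] >= near_row
        if in_near_region and c['area'] < NL:       # too small for a near lamp
            continue                                # treat as noise and drop it
        kept.append(c)
    return kept
```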
5) Width Thresholding Technique (235):
Again, perspective geometry is used as a basis to divide the region of interest in the image further into regions corresponding to far vehicles and near vehicles. The observed width of the vehicles in the image lies within thresholds. Separate thresholds are applied for the near and far vehicle regions that are derived from experiments. The constraints used are expressed as
WFmin ≤ WP ≤ WFmax, if the pair of components lies in the far vehicle region, and
WNmin ≤ WP ≤ WNmax, if the pair of components lies in the near vehicle region,
where WP is the observed width of the lamp pair in the image (in pixels), and WFmin, WFmax, WNmin and WNmax are the experimentally derived lower and upper width thresholds for the far and near vehicle regions, respectively.
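A corresponding sketch of the width thresholding step is given below; the region-specific (minimum, maximum) width limits and the pair representation are assumptions of the sketch.

```python
def width_threshold(pairs, near_row, far_limits, near_limits):
    """Keep only lamp pairs whose observed width lies within region-specific
    bounds. far_limits/near_limits are assumed (min_px, max_px) tuples derived
    from experiments; each pair is assumed to carry 'row' and 'width' fields."""
    kept = []
    for p in pairs:
        lo, hi = near_limits if p['row'] >= near_row else far_limits
        if lo <= p['width'] <= hi:
            kept.append(p)
    return kept
```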
6) Component Clustering (236):
The input to this functionality is a binary image with reduced noise. The component clustering applies various rules to measure symmetry between the potential components for head/tail light pair detection. The following are the functionalities within component clustering:
Slope threshold: The slope of the line joining the centroids of the components should be less than a threshold for them to be detected as a pair of head/tail lamps. The basis of this logic is that the heights of the head/tail lamp pair with respect to the ground are the same. The threshold is applied as given by
|R1 - R2| / |C1 - C2| ≤ ST,
where
R1: Row value of the centroid of the first component,
R2: Row value of the centroid of the second component,
C1: Column value of the centroid of the first component,
C2: Column value of the centroid of the second component, and
ST: Slope threshold.
Dimension similarity between the head/tail lamp pair: This functionality is based on the fact that the head/tail lamp pair of a vehicle possesses similar physical dimensions. The physical dimensions of a lamp are captured by a lamp descriptor computed from the lamp's dimensions in the image:
LD: Lamp descriptor,
WL: Width of lamp, and
HL: Height of lamp.
The similarity between a pair of head/tail lamps is measured using
DS = LD1 / LD2,
where
LD1 < LD2 and 0 < DS < 1,
LD1: Lamp descriptor of the first component,
LD2: Lamp descriptor of the second component, and
DS: Dimension similarity between the lamp pair.
The dimension similarity DS is compared against a standard threshold,
where
DS: Dimension similarity, and
DST: Dimension similarity threshold.
Ratio of pixel areas of the head/tail lamp pair: This functionality is based on the phenomenon that the left and right lamps of the head/tail lights should occupy the same area in the image, as a consequence of having the same physical dimensions. The pixel areas are compared by checking the ratio of the two pixel areas against an area ratio threshold,
where
PA1 < PA2,
PA1: Pixel area of the first component,
PA2: Pixel area of the second component, and
ART: Area ratio threshold.
Color similarity between the lamp pair: This functionality is based on the fact that the color properties of the light emitted from the left and right lamps of a head/tail light pair are the same. This property helps to reduce false positives in which the detected spots belong to lamps of two different vehicles. The color similarity is measured by a quantity computed from the hue values of the two lamps,
where
CS: Color similarity, and
H1, H2: Hue plane values of the centroids of the left and right lamps, respectively.
The color similarity between the lamp pair is checked against a standard threshold,
where
CS: Color similarity, and
CST: Color similarity threshold.
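The clustering rules above may be combined as sketched below; the concrete forms of the lamp descriptor, the dimension similarity, the area ratio and the hue-difference check used here are plausible readings of the rules, labeled as assumptions, and not a verbatim reproduction of the invention's formulas.

```python
def is_lamp_pair(c1, c2, ST, DST, ART, CST):
    """Check the clustering rules for two candidate components. Each component
    is assumed to be a dict with 'centroid' (row, col), 'width', 'height',
    'area' and 'hue'. The descriptor and similarity formulas below are assumed
    forms, not a verbatim reproduction of the specification's equations."""
    (r1, col1), (r2, col2) = c1['centroid'], c2['centroid']
    # 1. Slope threshold: the lamps of one vehicle lie at roughly the same height.
    #    (max(..., 1) avoids division by zero for vertically aligned centroids.)
    if abs(r1 - r2) > ST * max(abs(col1 - col2), 1):
        return False
    # 2. Dimension similarity: ratio of the smaller to the larger lamp descriptor
    #    (assumed descriptor: width x height of the lamp in the image).
    ld1, ld2 = c1['width'] * c1['height'], c2['width'] * c2['height']
    if min(ld1, ld2) / max(ld1, ld2) < DST:
        return False
    # 3. Ratio of the pixel areas of the two lamps (assumed acceptance: ratio >= ART).
    if min(c1['area'], c2['area']) / max(c1['area'], c2['area']) < ART:
        return False
    # 4. Color (hue) similarity between the two lamps (assumed: hue difference <= CST).
    if abs(c1['hue'] - c2['hue']) > CST:
        return False
    return True
```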
7) Vertical overlap removal (237):
Vertical overlap removal reduces false positives in vehicle detection by exploiting the fact that no two vehicles that are located along the same line of view can simultaneously appear in the image. The area occupied by the detected regions and the vertical alignment ratio between them are used to remove the false vehicle detections as illustrated in Fig. 6.
The amount of horizontal overlap is calculated as follows:
AH = min(maxC1, maxC2) - max(minC1, minC2),
where
AH: Horizontal overlap between the two vehicles,
minC1: The left boundary of the first vehicle,
maxC1: The right boundary of the first vehicle,
minC2: The left boundary of the second vehicle, and
maxC2: The right boundary of the second vehicle.
The vehicles are adjudged as vertically aligned if the horizontal overlap is non-negative (AH ≥ 0). The vertical alignment ratio is then computed from the horizontal overlap and the widths of the two vehicles,
where
RVA: Vertical alignment ratio,
W1: Width of the first vehicle, and
W2: Width of the second vehicle.
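A sketch of the overlap test is given below; the interval-overlap form of AH follows the definitions above, while the ratio AH / min(W1, W2) is only an assumed form of the vertical alignment ratio RVA.

```python
def horizontal_overlap(v1, v2):
    """Horizontal overlap AH between two detected vehicles, each assumed to be
    a dict with 'min_col' and 'max_col' boundaries (AH < 0 means no overlap)."""
    return min(v1['max_col'], v2['max_col']) - max(v1['min_col'], v2['min_col'])

def vertically_aligned(v1, v2, ratio_thresh):
    """Adjudge vertical alignment; the ratio AH / min(W1, W2) is an assumed
    form of the vertical alignment ratio RVA described above."""
    AH = horizontal_overlap(v1, v2)
    if AH < 0:
        return False                                # no horizontal overlap at all
    w1 = v1['max_col'] - v1['min_col']
    w2 = v2['max_col'] - v2['min_col']
    RVA = AH / min(w1, w2)
    return RVA >= ratio_thresh
```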
8) Road reflection removal (238):
Reflection of the head/tail lights underneath the lamps on the road often gets detected as a false positive. Road reflections are eliminated by detecting them using the spatial symmetry exhibited by the head/tail lamp pair and its reflection. The true and reflected pairs are highly overlapped and aligned on the same vertical line. In general, the vertical alignment ratio for the true vehicle pairs is high as compared to the reflected pairs, and the vertical alignment difference between the true and reflected pairs is small. These properties are used to achieve the elimination of road reflections.
In a further embodiment, the Tracking Module (25) of the Controlling Unit (2) facilitates lamp pair tracking for the purpose of tracking the vehicle movement through frames. The flow chart illustrated in Fig. 7 describes the lamp pair tracking logic. Light sources appear as bright objects in the image at night time, provided the head/tail lamps are ON. The lamp pair tracking process extracts the signature of the search window along the horizontal and vertical directions. Since light sources appear as bright blobs, their location can be found by finding the peaks in the signature.
If a vehicle 'i' is detected in the (n-1)th frame, then a search window is defined in the nth image around the region where the ith vehicle is found in the (n-1)th frame. The search window is defined in the nth frame by increasing both the width and the height of the vehicle detected in the (n-1)th frame by 6 pixels. This is shown schematically in Fig. 14. In fact, the width and height of the search window in the nth frame are governed by the relative velocity between the host and target vehicles, the resolution of the camera and the pixel size.
The sums of the pixel values along each column and along each row are computed. This gives two signatures for a vehicle. Fig. 8 gives an illustration of the calculation of the signature of a search window to identify the location of a bright spot. Fig. 9 depicts the graphical representation of the vehicle signature of an example search window.
An iterative search operation is not required; thus the method of the present invention is computationally less intensive and hence requires less execution time. In addition, summing up the pixel values makes the signature, and thus the lamp pair tracking, less sensitive to noisy conditions.
Saturation of lamp intensity will result in a flat-topped Gaussian-like signature, reducing the accuracy of object detection by tracking. The order of inaccuracy will be proportional to the dimensions of the flat intensity region.
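A minimal sketch of the signature computation and peak-based bright spot location is given below, assuming the frame is available as a gray scale NumPy array and the search window is given as row/column bounds.

```python
import numpy as np

def lamp_signatures(gray, window):
    """Horizontal and vertical signatures of a search window: sums of the pixel
    values along each column and along each row. window is (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = window
    patch = gray[r0:r1, c0:c1].astype(np.float64)
    col_signature = patch.sum(axis=0)               # one value per column
    row_signature = patch.sum(axis=1)               # one value per row
    return col_signature, row_signature

def locate_bright_spot(gray, window):
    """Locate the brightest blob inside the window as the peak of each
    signature; no iterative search is required."""
    col_sig, row_sig = lamp_signatures(gray, window)
    r0, _, c0, _ = window
    return r0 + int(np.argmax(row_sig)), c0 + int(np.argmax(col_sig))
```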
According to the embodiment of the invention, the DAW Block (24) of the Controlling Unit (2) estimates the vehicle width, vehicle angle and vehicle distance from the source. As shown in Fig. 15, in perspective projection, parallel lines on a plane 'π' will cross each other in the image plane 'H'. As can be observed, though the two parallel lines in the plane 'π' are separated by a constant distance, the separation between the projected lines in the image plane 'H' is not constant. We define a scale factor 'SF' (in meters/pixel) for each row in the image. The calculation of 'SF' is as given below. Say, for an nth row, the separation between the two lines in the image is N pixels.
The scale factor for the nth row in the image is then calculated as
SFn = W / N,
where
W: separation between the two parallel lines in the plane 'π' (in meters), and
N: separation between the projected lines in the nth row of the image (in pixels).
The scaling factor SFn is modeled as a function of the rows in the image, but not of the columns.
The scale factors SFn for a range of rows n = N1 to N2 are obtained offline. A sample graph depicting the relation between the row numbers in the image and the scale factor SFn is shown in Fig. 12. For a given camera and geometric configuration, SFn for each row in the image can be calculated offline. Reasonable accuracy can be achieved by relating SFn and the row number in the image with a linear equation.
The relation between the scale factor and the row number for a given camera is
SFn = c1 x RowNumber + c2,
where
SFn is the scale factor for the nth row in the image,
RowNumber is the horizontal position (row number) of the vehicle in the image, and
c1 and c2 are constants obtained by curve fitting techniques.
The width of the vehicle in the image, N (in pixels), can be obtained using image processing techniques. Thus, using the above two equations, the width of the vehicle W can be calculated as W = N x SFn.
Fig. 10 illustrates a flow chart for the vehicle width estimation whereby, firstly, the width of the vehicle (in pixels) and the horizontal position of the vehicle are calculated using image processing techniques. Secondly, the scale factor (SFn) is calculated for the horizontal position of the vehicle thus obtained. Finally, the width of the vehicle is estimated based on the scale factor of the vehicle.
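For illustration, the width estimation described above reduces to the following sketch, where c1 and c2 are the offline curve-fitted constants of the linear scale-factor model.

```python
def estimate_vehicle_width(width_px, row_number, c1, c2):
    """Estimate the physical width of a vehicle from its width in pixels.
    c1 and c2 are the curve-fitted constants of the linear scale-factor model
    obtained offline; their values depend on the camera configuration."""
    SFn = c1 * row_number + c2                      # scale factor (meters/pixel)
    return width_px * SFn                           # W = N * SFn (meters)
```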
In yet another embodiment, the method of the invention further comprises a distance estimation method. The distance estimation, according to this embodiment, uses calibrated video data to derive a transfer function in order to estimate the distance of the vehicle in front. This method of distance estimation inherently accounts for the nonlinearities of the vision system. The distance estimation method proposed according to the present invention is based on the assumptions that the geometric configuration of the camera is fixed and that the host and target vehicles are on the same straight, planar road. As illustrated in Fig. 11, for a given camera configuration, projecting the target vehicle at a distance 'D' on to the image plane (I) will produce a displacement of 'd' from the principal point 'O' of the camera. The camera can be calibrated in offline mode to generate a look-up table for distance estimation using the target vehicle position in the image plane as index. Thus, the pre-coded/pre-determined distance estimation technique gives staircase distance readings rather than continuous ones. The performance of the distance estimation technique discussed above is depicted in Fig. 13.
The distance is estimated using frames obtained from a single camera. The method of the invention requires no intrinsic or extrinsic camera calibration parameters at run time. All the parameters are computed at the time of the offline calibration process, resulting in a computationally very efficient process. Lens distortion effects are also very minimal. This method of the invention is developed with the assumption of straight roads.
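A non-limiting sketch of the look-up-table based distance estimation is given below; the calibration arrays are assumed to be produced by the offline procedure described above.

```python
import numpy as np

def build_distance_lut(rows, distances):
    """Offline calibration: sorted arrays of image rows and the corresponding
    measured distances (meters) for a fixed camera configuration."""
    order = np.argsort(rows)
    return np.asarray(rows)[order], np.asarray(distances)[order]

def estimate_distance(vehicle_row, lut_rows, lut_distances):
    """Look up the pre-coded distance for the row at which the target vehicle
    appears; the result is a staircase (non-continuous) estimate."""
    idx = np.searchsorted(lut_rows, vehicle_row)
    idx = min(idx, len(lut_distances) - 1)
    return lut_distances[idx]
```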
The present invention is described in scientific terms using the mathematical formulae as stated herein. A person skilled in the art may appreciate that the values of these parameters are relative to application and do not limit the application of the invention. While the embodiments of the present invention have been described with certain examples, those with ordinary skill in the art will appreciate that various modifications and alternatives to those details could be developed in the light of the overall teachings of the disclosure, without departing from the scope of the invention. The examples used to illustrate the embodiments of the present invention, in no way limit the applicability of the present invention to them and as such, the present invention may be utilized for various similar applications.
We claim,
1) A system for detecting, locating and tracking of vehicle comprising;
a. a Multichannel Camera (1);
b. a Controlling Unit (2) and
c. an Output Unit (10)
wherein, the said Controlling Unit (2) further comprises a Segmentation Module (22), a Rule Based Engine (23), a DAW Module (24) and a Tracking Module (25).
2) The system according to claim 1, wherein the Segmentation Module (22) is enabled to identify the potential region within multi-channel color image obtained from multichannel camera based on its entropy and segments the image.
3) The system according to claim 1, wherein the Rule Based Engine (23) further comprises;
a Region of Interest Identification Module (231), wherein said Region of Interest Identification Module (231) is enabled to simplify vehicle detection by restricting the region of processing in the image using constraints derived from perspective geometry and input device calibration parameters;
a Scatter Processing Module (232), wherein said Scatter Processing Module (232) is enabled to rectify light scattering effects appearing in the image received from the Region of Interest Identification Module (231);
a Majority Filter (233), wherein said Majority Filter (233) is enabled to refine the shape of components;
an Area Thresholding Module (234), wherein said Area Thresholding Module (234) is enabled to eliminate noise components appearing in the periphery of vehicle regions;
a Width Thresholding Module (235), wherein said Width Thresholding Module (235) is enabled to eliminate false pairs of vehicle lamps;
a Component Clustering Module (236), wherein said Component Clustering Module (236) is enabled to measure symmetry between the potential components for head/tail light pair detection ;
a Vertical Overlap Removal Module (237), wherein the said Vertical Overlap Removal Module (237) is enabled to reduce false positives in the vehicle detection
and
a Road Reflection Removal Module (238), wherein said Road Reflection Removal Module (238) is enabled to eliminate road reflections underneath the head/tail lights by detecting them using the spatial symmetry and their reflection.
4) The system according to claim 1, wherein the DAW Module (24) is enabled to estimate the distance, based exclusively on vision.
5) The system according to claim 1, wherein the Tracking Module (25) is enabled to facilitate lamp pair tracking which comprises extracting the signature of the search window along the horizontal and vertical directions.
6) A method for detecting and tracking of vehicle at night time comprising;
(a) Receiving a multi-channel color image from input device;
(b) identifying the potential region within multi-channel color image of step (a) based on its entropy (potential region identification);
(c) segmenting the vehicles in the potential region identified in step (b) using the gray scale image (vehicle segmentation);
(d) subjecting the segmented image of step (c) to rule based engine for eliminating false detections;
(e) applying lamp pair tracking technique to the image obtained in step (d);
(f) applying vehicle width estimation processing to image obtained in step (e);
(g) estimating the distance, based exclusively on vision, for the image of step (f), and
(h) displaying the image on the output device.
7) The method according to claim 6, wherein the method of rule based engine for eliminating errors comprises:
(a) Simplifying vehicle detection by restricting the region of processing in the image using constraints derived from perspective geometry and input device calibration parameters;
(b) Applying scatter removal processing to rectify light scattering effects appearing in the image of step (a);
(c) refining the shape of components in image of step (b) by majority filter technique;
(d) eliminating noise components appearing in the periphery of vehicle regions as detected in step (c) by area thresholding technique;
(e) applying various rules of component clustering to image of step (d) to measure symmetry between the potential components for head/tail light pair detection;
(f) reducing false positives in vehicle detection by exploiting the fact that no two vehicles that are located along the same line of view can simultaneously appear in the image using Vertical overlap removal technique and
(g) eliminating road reflections underneath the head/tail lights by detecting them using the spatial symmetry and their reflection.
8) The method according to claim 7, wherein component clustering comprises;
(a) Applying slope thresholding technique;
(b) applying dimension similarity technique;
(c) calculating ratio of pixel areas for head/tail lamp pair and
(d) reducing false positives using color similarity between lamp pair.
9) The method according to claim 6, wherein identifying the potential region within
multi-channel color image based on its entropy comprises;
(a) Receiving a frame of size 'R' x 'C' pixels;
(b) lexicographically ordering the pixels of step (a) as a vector and obtaining a histogram;
(c) segmenting the frame of step (a) into four equal sized segments namely SF1, SF2, SF3, SF4 of size R/4 x C pixels;
(d) computing the entropy values for each segment of step (c) as El, E2, E3 and E4;
(e) sub-segmenting the frame of step (c) into eight segments sf1, sf2, sf3, sf4, sf5, sf6, sf7 and sf8 of size R/8 x C pixels;
(f) computing the entropy values for each segment of step (e) as el, e2, e3, e4, e5, e6, e7 and e8 and
(g) comparing the entropy values of steps (d) followed by values of step (f), and
(h) choosing the segment showing the higher entropy value as per the comparison conducted in step (g).
10) The method according to claim 6, wherein estimating the distance, based exclusively on vision, comprises;
(a) Projecting the target vehicle at a distance on to the image plane and
(b) Looking up a pre-defined table for distance estimation using the target vehicle position in the image plane as the index value.