
Processing Device

Abstract: The objective of the present invention is to accurately detect steps existing on a road by suppressing erroneous detections accompanying erroneous measurements of parallax. A vehicle-mounted environment recognition device 1 is provided with a processing device which processes a pair of images acquired by means of a stereo camera unit 100 installed in a vehicle. The processing device is provided with: a stereo matching unit 200 which measures the parallax between the pair of images to generate a parallax image; a step candidate extracting unit 300 which extracts a step candidate on a road along which the vehicle is traveling from the parallax image generated by the stereo matching unit 200; a line segment candidate extracting unit 400 which extracts a line segment candidate from the images acquired by the stereo camera unit 100; an analyzing unit 500 which compares the step candidate extracted by the step candidate extracting unit 300 with the line segment candidate extracted by the line segment candidate extracting unit 400, and analyzes the validity of the step candidate on the basis of the comparison result and the inclination of the line segment candidate; and a three-dimensional object detecting unit 600 which detects a step existing on the road on the basis of the analysis result obtained by the analyzing unit 500.


Patent Information

Application #
Filing Date
11 July 2022
Publication Number
47/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
archana@anandandanand.com
Parent Application

Applicants

HITACHI ASTEMO, LTD.
2520, Takaba, Hitachinaka-shi, Ibaraki 3128503

Inventors

1. TAKEMURA Masayuki
c/o HITACHI, LTD., 6-6, Marunouchi 1-chome, Chiyoda-ku, Tokyo 1008280
2. SHIMA Takeshi
c/o HITACHI AUTOMOTIVE SYSTEMS, LTD., 2520, Takaba, Hitachinaka-shi, Ibaraki 3128503
3. MATONO Haruki
c/o HITACHI, LTD., 6-6, Marunouchi 1-chome, Chiyoda-ku, Tokyo 1008280

Specification

Title of invention : Processing device
Technical field
[0001]
 The present invention relates to a processing device, and for example, to a processing device provided in an in-vehicle environment recognition device.
Background art
[0002]
 Preventive safety technology for preventing accidents by recognizing the surrounding environment of a vehicle using a camera or the like mounted on the vehicle is becoming popular, and the development of technology for recognizing the surrounding environment is also accelerating. One of the surrounding environments to be recognized is a step on a road.
[0003]
 Steps on a road take a wide variety of forms, such as curbs and gutters at the side of the road, steps such as bumps and joints on the road surface, and obstacles on the road surface. In particular, road environments differ greatly in the presence or absence of curbs or walls along sidewalks, the presence or absence of lane markings, and the like, and obstacles such as utility poles, rocks, or grass may protrude into the driving lane, or fallen objects may lie on it, so the environment is extremely complicated. Accurately detecting steps on a road is therefore more difficult than accurately detecting the lane markings between driving lanes.
[0004]
 As an invention for detecting a step on a road, there is, for example, the image processing device described in Japanese Patent Application Laid-Open No. 2018-200190 (Patent Document 1). The image processing device described in Patent Document 1 includes an input unit that acquires a stereo image, calculates a parallax distribution from the stereo image acquired by the input unit, generates a plurality of iso-parallax lines that connect points with the same parallax based on the parallax distribution, and detects the shape of the road surface based on the plurality of iso-parallax lines.
Prior art documents
Patent literature
[0005]
Patent Document 1: Japanese Patent Application Laid-Open No. 2018-200190
SUMMARY OF THE INVENTION
Problems to be Solved by the Invention
[0006]
 When calculating the parallax distribution from the stereo image, the image processing apparatus described in Patent Document 1 calculates the parallax by searching for corresponding points between a first image and a second image forming the stereo image while horizontally shifting a small area on the image.
[0007]
 A line segment on the first image and the second image has similar feature amounts at a plurality of points along the line segment. If the direction in which the line segment extends is the same as the direction in which corresponding points are searched for between the first image and the second image, it becomes difficult to find the correct position of the corresponding points, and erroneous measurement of parallax easily occurs. If an erroneous measurement of parallax occurs, a large error is included in the measurement result of the distance in the depth direction, which is measured according to the parallax. As a result, a step may be erroneously detected at a location where none actually exists.
[0008]
 The image processing apparatus described in Patent Document 1 does not take erroneous measurement of parallax into consideration, so there is room for improvement in terms of detecting steps on the road more accurately.
[0009]
 The present invention has been made in view of the above, and an object of the present invention is to suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on a road.
Means for solving the problems
[0010]
 In order to solve the above-mentioned problems, the present invention is characterized by including: a feature image generation unit that acquires features of a pair of images and generates a feature image; a step candidate extraction unit that extracts a step candidate of the road on which the vehicle travels from the feature image; a line segment candidate extraction unit that extracts a line segment candidate from the image; an analysis unit that collates the step candidate extracted by the step candidate extraction unit with the line segment candidate extracted by the line segment candidate extraction unit and analyzes the validity of the step candidate based on the collation result and the inclination of the line segment candidate; and a three-dimensional object detection unit that detects a step existing on the road based on the analysis result of the analysis unit.
Effect of the invention
[0011]
 According to the present invention, steps existing on a road can be detected accurately by suppressing erroneous detection accompanying erroneous measurement of parallax.
 Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
Brief description of the drawings
[0012]
[Fig. 1] A diagram showing the configuration of an in-vehicle environment recognition device.
[Fig. 2] A diagram for explaining a method of measuring distance using the principle of triangulation.
[Fig. 3] A diagram showing the configuration of a stereo camera unit.
[Fig. 4] A diagram showing the configuration of a stereo matching unit.
[Fig. 5] A diagram for explaining the analysis results when the feature amount in the matching window is biased in the vertical direction.
[Fig. 6] A diagram for explaining the analysis results when the feature amount in the matching window is biased in the horizontal direction.
[Fig. 7] A diagram for explaining the analysis results when the feature amount in the matching window is biased in an oblique direction.
[Fig. 8] A diagram showing the configuration of the step candidate extraction unit shown in Fig. 1.
[Fig. 9] A diagram for explaining the processing of the road surface plane analysis unit shown in Fig. 8.
[Fig. 10] A diagram for explaining the processing of the road edge step extraction unit and the traveling road surface step extraction unit shown in Fig. 8.
[Fig. 11] A diagram showing the configuration of the line segment candidate extraction unit shown in Fig. 1.
[Fig. 12] A diagram showing the configuration of the analysis unit shown in Fig. 1.
[Fig. 13] A diagram for explaining the processing of the three-dimensional point group analysis unit shown in Fig. 12.
[Fig. 14] A diagram for explaining the processing of the horizontal line checking unit shown in Fig. 12.
[Fig. 15] A diagram for explaining the processing of the oblique line confirmation unit shown in Fig. 12.
[Fig. 16] A diagram for explaining the processing of the oblique line confirmation unit shown in Fig. 12.
[Fig. 17] A diagram showing the configuration of the three-dimensional object detection unit shown in Fig. 1.
[Fig. 18] A flowchart showing the surrounding environment recognition processing performed by the in-vehicle environment recognition device shown in Fig. 1.
MODE FOR CARRYING OUT THE INVENTION
[0013]
 An embodiment of the present invention will be described below with reference to the drawings. In addition, unless otherwise mentioned, the configurations denoted by the same reference numerals in each embodiment have the same functions in each embodiment, and thus the description thereof will be omitted. In addition, hereinafter, an example employing a stereo vision system will be described as an example of a sensing system configured by an in-vehicle environment recognition device, but the present invention is not limited to the stereo vision system. Parallax is an example of an image feature.
[0014]
[In-Vehicle Environment Recognition Device]
 FIG. 1 is a diagram showing the configuration of the in-vehicle environment recognition device 1.
[0015]
 The in-vehicle environment recognition device 1 is a device that performs surrounding environment recognition processing. The surrounding environment recognition processing is processing that processes surrounding images acquired by a pair of cameras mounted on the vehicle, recognizes the surrounding environment such as roads, preceding vehicles, pedestrians, and obstacles, and outputs information necessary for vehicle travel control and notification of warnings. The in-vehicle environment recognition device 1 is implemented by cooperation between hardware such as a microcomputer and software including a program describing the details of the surrounding environment recognition processing.
[0016]
 As shown in FIG. 1, the in-vehicle environment recognition device 1 includes a stereo camera unit 100, a stereo matching unit 200, a step candidate extraction unit 300, a line segment candidate extraction unit 400, an analysis unit 500, a three-dimensional object detection unit 600, and an alarm control unit 700.
[0017]
 The stereo camera unit 100 is a sensing system including a pair of cameras installed inside the windshield glass of the vehicle and facing forward in the direction of travel. The stereo camera unit 100 acquires a pair of images by synchronizing the pair of cameras to image the surroundings of the vehicle.
[0018]
 The stereo matching unit 200 performs stereo matching processing using the pair of images acquired by the stereo camera unit 100, and measures the parallax of the same part of the same object shown in each of the pair of images. The stereo matching unit 200 uses the principle of triangulation to measure the distance and position in three-dimensional space from the measured parallax. The stereo matching unit 200 shown in FIG. 1 performs stereo matching processing for searching for corresponding points between the pair of images in the direction connecting the pair of cameras (the baseline direction), and generates a parallax image. A parallax image is an image obtained by mapping, for each pixel, the depth-direction distance measured according to the parallax between the pair of images. A parallax image is an example of a feature image representing features of the pair of images acquired by the stereo camera unit 100. The stereo matching unit 200 is an example of a feature image generation unit that acquires features of a pair of images and generates a feature image. A feature of a pair of images may be, for example, a difference between the pair of images obtained by comparing them.
[0019]
 In stereo matching processing, when the direction in which corresponding points are searched between a pair of images and the direction in which the line segments on the images extend generally match, the line segments on the images have similar feature amounts at multiple locations. Therefore, there is a possibility that equivalent similarities will continue and it will be difficult to search for correct corresponding points. In this case, erroneous measurement of parallax occurs, and the measurement result of parallax may contain a large error. Mismeasurement of parallax is an unavoidable problem as long as the principle of triangulation is used. If erroneous measurement of parallax occurs, the distance in the depth direction measured according to the parallax is also erroneously measured, and the distance measurement result may include a large error.
[0020]
 In this embodiment, the direction on the image along the search direction of the corresponding points in the stereo matching process is also referred to as the "first direction". A direction perpendicular to the first direction on the image is also referred to as a “second direction”. A direction intersecting the first direction and the second direction on the image is also referred to as a “third direction”.
[0021]
 When the pair of cameras included in the stereo camera unit 100 are installed with an interval in the left-right direction, which is the width direction of the vehicle, the search direction for corresponding points in the stereo matching process is the left-right direction, which corresponds to the horizontal direction on the image. In this case, the first direction is the horizontal direction, the second direction is the vertical direction, and the third direction is an oblique direction crossing the horizontal direction and the vertical direction. Also, in this case, if lines on the image extend in the horizontal direction (first direction) or an oblique direction (third direction), erroneous measurement of parallax may occur.
[0022]
 Similarly, when the pair of cameras included in the stereo camera unit 100 are installed with an interval in the up-down direction, which is the height direction of the vehicle, the search direction for corresponding points in the stereo matching process is the up-down direction, which corresponds to the vertical direction on the image. In this case, the first direction is the vertical direction, the second direction is the horizontal direction, and the third direction is an oblique direction crossing the vertical direction and the horizontal direction. Also, in this case, if lines on the image extend in the vertical direction (first direction) or an oblique direction (third direction), erroneous measurement of parallax may occur.
[0023]
 The step candidate extraction unit 300 extracts road step candidates from the parallax image generated by the stereo matching unit 200. Specifically, the step candidate extraction unit 300 uses the parallax image generated by the stereo matching unit 200 to analyze the road surface plane portion of the road on which the vehicle travels. The road surface plane portion is the road surface of the travel road (hereinafter also referred to as the "traveling road surface") and is a portion that can be regarded as a substantially flat surface. The step candidate extraction unit 300 extracts a three-dimensional point group having a difference in height compared to the road surface plane portion as a road step candidate.
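 As a rough illustration of this extraction, the following Python sketch selects three-dimensional points whose height deviates from a fitted road surface plane. The linear road surface model and the 0.03 m tolerance are assumptions made for the example; the embodiment does not specify them.

import numpy as np

def extract_step_candidates(points, a, b, height_tol=0.03):
    """Return 3-D points whose height deviates from the fitted road surface.

    points: (N, 3) array of (lateral x, depth z, height y) obtained from the
    parallax image. The road surface is modeled here as y = a*z + b; the
    height tolerance of 0.03 m is a hypothetical value.
    """
    expected = a * points[:, 1] + b            # road-surface height at each depth
    deviation = points[:, 2] - expected
    mask = np.abs(deviation) > height_tol      # convex (+) and concave (-) steps
    return points[mask], deviation[mask]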
[0024]
 A step on a road is a three-dimensional object having a difference in height with respect to the plane portion of the road surface. The steps that exist on the road include convex steps that are high with respect to the flat surface of the road surface and concave steps that are low with respect to the flat surface of the road surface. Steps existing on the road include steps existing at the edge of the road located on the side of the road, bumps, joints, holes, etc. existing on the road surface, and obstacles existing on the road surface. Road edges include road shoulders, roadside strips or sidewalks that are laterally adjacent to the road surface. The convex-shaped steps that exist at the road edge are, for example, steps with a small difference in height between the road surface and the shoulder, and steps such as curbs that exist between the sidewalk and the shoulder. The recessed step that exists at the road edge is a side ditch or the like that is lower than the road surface.
[0025]
 The line segment candidate extraction unit 400 uses the edge image of the image acquired by the stereo camera unit 100 to search for line candidates included in the image based on the continuity and linearity of the edges. The line segment candidate extraction unit 400 extracts line segment candidates having a start point and an end point from among the searched line candidates, and classifies the extracted line segment candidates according to their extending directions. For example, the line segment candidate extraction unit 400 classifies the extracted line segment candidates into line segment candidates extending in the first direction, line segment candidates extending in the second direction, and line segment candidates extending in the third direction.
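 As an illustration of this classification, the following Python sketch assigns a segment to the first (horizontal), second (vertical), or third (oblique) direction from its endpoint coordinates, assuming a left-right camera pair so that the first direction is horizontal. The 15-degree tolerance is a hypothetical value, not one given in the embodiment.

import math

def classify_segment(x0, y0, x1, y1, tol_deg=15.0):
    """Classify a line segment candidate by its inclination on the image."""
    angle = abs(math.degrees(math.atan2(y1 - y0, x1 - x0))) % 180.0
    if angle <= tol_deg or angle >= 180.0 - tol_deg:
        return "first (horizontal)"
    if abs(angle - 90.0) <= tol_deg:
        return "second (vertical)"
    return "third (oblique)"

print(classify_segment(0, 0, 100, 8))   # first (horizontal)
print(classify_segment(0, 0, 5, 120))   # second (vertical)
print(classify_segment(0, 0, 80, 60))   # third (oblique)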
[0026]
 The analysis unit 500 analyzes whether an erroneous step candidate is extracted due to the influence of erroneous measurement of parallax. Specifically, the analysis unit 500 collates the step candidate extracted by the step candidate extraction unit 300 with the line segment candidate extracted by the line segment candidate extraction unit 400 . This matching process may be, for example, a process of checking whether a step candidate overlaps a line segment candidate. Then, the analysis unit 500 analyzes the validity of the step candidate extracted by the step candidate extraction unit 300 based on the matching result and the slope of the line segment candidate.
[0027]
 High validity of the step candidate means that the step candidate is highly likely to represent a step that actually exists on the road. The low validity of the step candidate means that it is difficult to determine whether the step candidate indicates a step that actually exists on the road, and the possibility of extraction due to erroneous parallax measurement cannot be ruled out.
[0028]
 If a step candidate with a small height difference overlaps a line segment candidate extending in the first direction or the third direction, the step may not actually exist on the road. For example, when the pair of cameras are installed in the left-right direction, a step candidate with a small height difference that overlaps a line segment candidate extending in a horizontal or oblique direction may actually be a laterally or diagonally extending pavement marking rather than a step. A step candidate that overlaps a line segment candidate extending in the first direction or the third direction may therefore have been extracted due to an erroneous measurement of parallax.
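 The following sketch shows one way such a validity check could be organized, using a simple bounding-box overlap test. The overlap test, the data layout, and the 0.05 m height threshold are all illustrative assumptions; the embodiment does not specify how the collation is implemented.

def step_candidate_validity(step, segments, low_height=0.05):
    """Flag a step candidate as low-validity when it is small and overlaps a
    line segment candidate extending in the first or third direction.

    step: dict with 'bbox' = (x0, y0, x1, y1) on the image and 'height' in
    meters. segments: list of dicts with 'bbox' and 'direction' in
    {'first', 'second', 'third'}. All names and values are hypothetical.
    """
    def overlaps(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    for seg in segments:
        if seg["direction"] in ("first", "third") and overlaps(step["bbox"], seg["bbox"]):
            if step["height"] < low_height:
                return "low validity"   # possibly a marking or a parallax mismeasurement
    return "high validity"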
[0029]
 When the analysis unit 500 cannot determine from the collation result that the step candidate is a step existing on the road, that is, when the validity of the step candidate is low, the analysis unit 500 analyzes the arrangement of the three-dimensional point group constituting the step candidate and analyzes whether there is a possibility that the step candidate was extracted due to erroneous measurement of parallax. When there is such a possibility, the analysis unit 500 analyzes the cause of the erroneous parallax measurement.
[0030]
 For example, the analysis unit 500 analyzes whether the cause of the erroneous measurement of parallax is a mismatch in the heights of the images in the parallelization processing of the pair of images (hereinafter also referred to as "Y shift"), a texture bias within the matching window, random noise, or the like. When there is a high possibility that a step candidate has been extracted due to an erroneous measurement of parallax, the analysis unit 500 re-performs the stereo matching process to re-measure the parallax, corrects the distance according to the parallax, or deletes the extracted step candidate as noise.
[0031]
 The three-dimensional object detection unit 600 corrects the height and inclination of the road surface plane portion using the parallax re-measured by the analysis unit 500, and detects steps existing on the road based on the corrected road surface plane portion. That is, the three-dimensional object detection unit 600 identifies steps present on the road and road markings based on the corrected road surface plane portion, detects steps present at the road edge and steps such as bumps present on the traveling road surface, and also detects obstacles and the like that exist on the traveling road surface. Note that the three-dimensional object detection unit 600 may perform the process of detecting steps existing on the road based on the uncorrected road surface plane portion.
[0032]
 Based on the detection result of the three-dimensional object detection unit 600, the alarm control unit 700 outputs information necessary for driving control of the vehicle, notification of an alarm, etc. to the control device of the vehicle.
[0033]
 In this embodiment, a case will be described in which the pair of cameras included in the stereo camera unit 100 are installed with an interval in the left-right direction, so that the search direction for corresponding points in the stereo matching process is the horizontal direction on the image. Of the pair of left and right cameras, the image obtained by the right camera is also referred to as the "right image", and the image obtained by the left camera is also referred to as the "left image". However, this embodiment can also be applied when the pair of cameras included in the stereo camera unit 100 are installed with an interval in the up-down direction.
[0034]
[Distance Measuring Method]
 FIG. 2 is a diagram for explaining a distance measuring method using the principle of triangulation.
[0035]
 In FIG. 2, the lower left vertex of the rectangular parallelepiped appears at coordinates (XR, YR) in the right image of the pair of images, and at coordinates (XL, YL) in the left image of the pair of images. Let d be the parallax at the lower left vertex of the rectangular parallelepiped, Z be the coordinate in the depth direction, B be the distance (baseline length) between the optical axes of the pair of cameras that acquire the pair of images, and f be the focal length of the pair of cameras.
[0036]
 When a pair of cameras with the same specifications are installed perfectly parallel and the camera distortion and optical axis misalignment are corrected, the same part of the same object in each of the pair of images will be the same as the left image and the right image. and appear at the same height. In this case, the parallax d and the coordinate Z in the depth direction are calculated from the following equations.
  d = XL - XR
  Z = (B · f) / d
[0037]
 The coordinate Z above represents the distance from the pair of cameras to the forward object. The smaller the lateral difference (XL - XR) of the corresponding points of the pair of images, that is, the smaller the parallax d, the longer the distance to the object; the larger the parallax d, the shorter the distance to the object. Since the parallax d is the denominator in the equation for calculating the coordinate Z, the resolution of the coordinate Z in the depth direction increases as the parallax d increases. In other words, the shorter the distance from the pair of cameras to the object, the more precisely the distance in the depth direction can be measured.
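 As a concrete illustration of these formulas, the following Python sketch computes the parallax and depth for one matched point. The baseline, focal length, and coordinates are hypothetical example values, not parameters of the embodiment.

def depth_from_parallax(x_left, x_right, baseline_m, focal_px):
    """Apply d = XL - XR and Z = (B * f) / d for one corresponding point."""
    d = x_left - x_right                 # parallax in pixels
    if d <= 0:
        raise ValueError("parallax must be positive for a forward object")
    z = (baseline_m * focal_px) / d      # depth in meters
    return d, z

# Example: 0.35 m baseline, 1400 px focal length, corresponding points 20 px apart.
d, z = depth_from_parallax(x_left=520.0, x_right=500.0, baseline_m=0.35, focal_px=1400.0)
print(f"parallax d = {d:.1f} px, depth Z = {z:.2f} m")   # d = 20.0 px, Z = 24.50 m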
[0038]
[Stereo Camera Unit]
 FIG. 3 is a diagram showing the configuration of the stereo camera unit 100.
[0039]
 As shown in FIG. 3, the stereo camera unit 100 includes an image acquisition unit 110, an exposure adjustment unit 120, a sensitivity calibration unit 130, a geometry calibration unit 140, and an edge generation unit 150.
[0040]
 The image acquisition unit 110 is a stereo camera configured by a pair of cameras spaced apart in the left-right direction, which is the width direction of the vehicle. A pair of cameras that constitute the image acquisition unit 110 are arranged so that their optical axes are parallel to each other and face forward. The pair of cameras are calibrated in an installation such that the images acquired by each are parallel. The image acquisition unit 110 takes images of the surroundings of the vehicle in synchronization with each other to acquire a pair of images. In the pair of images acquired by the image acquisition unit 110, the same object appears at the same height.
[0041]
 The exposure adjustment unit 120 adjusts the exposure conditions so that the image acquisition unit 110 acquires the pair of images under the same exposure conditions. For example, the exposure adjustment unit 120 analyzes the brightness of the road surface in the right image of the acquired pair of images, determines the exposure conditions for the next frame, and adjusts the exposure by reflecting the determined exposure conditions in each camera of the image acquisition unit 110.
[0042]
 The sensitivity calibration unit 130 calibrates the sensitivity of the pair of cameras that constitute the image acquisition unit 110. In the pair of images acquired by the image acquisition unit 110, even if the exposure conditions are the same, the same location on the same object may not have the same luminance, due to characteristics such as brightness differences caused by individual differences between the cameras and the decrease in brightness with distance from the optical axis of the lens. The sensitivity calibration unit 130 corrects these characteristics and calibrates the sensitivity of the pair of cameras so that the same location on the same object captured in each of the pair of images has the same luminance.
[0043]
 The geometric calibration unit 140 calibrates the geometric conditions of the pair of images so that the pair of images are parallelized by correcting the distortion of the pair of cameras constituting the image acquisition unit 110, the misalignment of the optical axes, and the like. In the stereo camera section 100, calibration by the sensitivity calibration section 130 and the geometric calibration section 140 can facilitate searching for the same location of the same object appearing in each of the pair of images.
[0044]
 The edge generation unit 150 performs edge extraction processing on the reference image, for example, the right image of the pair of images acquired by the image acquisition unit 110, to generate an edge image. The edge image includes horizontal edges having luminance changes in the horizontal direction and vertical edges having luminance changes in the vertical direction. Note that the reference image may be the left image.
[0045]
[Stereo Matching Section]
 FIG. 4 is a diagram showing the configuration of the stereo matching unit 200. FIG. 5 is a diagram for explaining the analysis results when the feature amount in the matching window is biased in the vertical direction (vertically biased texture). FIG. 6 is a diagram for explaining the analysis results when the feature amount in the matching window is biased in the horizontal direction (horizontally biased texture). FIG. 7 is a diagram for explaining the analysis results when the feature amount in the matching window is biased in an oblique direction (oblique line texture).
[0046]
 As shown in FIG. 4, the stereo matching unit 200 includes a window setting unit 210, an in-window feature analysis unit 220, a center-of-gravity calculation unit 230, a search unit 240, and a reliability evaluation unit 250.
[0047]
 Here, the stereo matching processing performed by the stereo matching unit 200 will be described with reference to FIG. 2. In FIG. 2, it is assumed that the lower left vertex of the rectangular parallelepiped appears at coordinates (XR, YR) in the right image of the pair of images and at coordinates (XL, YL) in the left image of the pair of images. Let the right image be the reference image.
[0048]
 At this time, the window setting unit 210 focuses on the pixel at the coordinates (XR, YR) of the right image and sets a small rectangular area around the coordinates (XR, YR) as a matching window. Since the geometric conditions of the right image and the left image have been calibrated by the geometric calibration unit 140, the window setting unit 210 sets, in the left image, a matching window having the same size and height as the matching window set for the right image. The search unit 240 calculates the similarity between the feature amount in the matching window of the right image and the feature amount in the matching window of the left image. After the similarity calculation by the search unit 240, the window setting unit 210 horizontally shifts the matching window of the left image by one pixel and sets a new matching window on the left image. The search unit 240 then calculates the similarity between the matching windows of the left image and the right image. While repeating such processing, the search unit 240 searches for the matching window of the left image with the highest degree of similarity. The search unit 240 takes the position of the matching window of the left image having the highest degree of similarity as the position of the pixel of the left image corresponding to the pixel of interest in the right image.
The search unit 240 measures the parallax between the right image and the left image from the coordinates of the pixel of interest in the right image and the coordinates of the pixel in the left image corresponding to the pixel of interest.
[0049]
 When the search unit 240 has searched for the pixel in the left image corresponding to the pixel of interest in the right image and measured the parallax, the window setting unit 210 horizontally moves the pixel of interest in the right image and sets a new matching window. The search unit 240 searches for the pixel in the left image corresponding to the moved pixel of interest and measures the parallax. By repeating such processing, the search unit 240 can search for corresponding points between the right image and the left image, measure the parallax, and generate a parallax image.
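 The search procedure in paragraphs [0048] and [0049] can be sketched as a simple block-matching loop. The sketch below uses a sum-of-absolute-differences (SAD) measure; the embodiment does not specify the similarity function, window size, or search range, so those are assumptions here.

import numpy as np

def match_disparity(right, left, y, x_r, win=5, max_disp=64):
    """Find the disparity of right-image pixel (x_r, y) against the left image.

    The images are assumed rectified, so the search is purely horizontal.
    Returns d such that left[y, x_r + d] corresponds to right[y, x_r].
    Border handling is omitted for brevity.
    """
    h, w = right.shape
    ref = right[y - win:y + win + 1, x_r - win:x_r + win + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        x_l = x_r + d                              # shift the left window one pixel at a time
        if x_l + win + 1 > w:
            break
        cand = left[y - win:y + win + 1, x_l - win:x_l + win + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum()            # SAD: lower means more similar
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

Repeating this search for every pixel of interest in the reference image yields the parallax image described in paragraph [0018].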
[0050]
 The in-window feature analysis unit 220 analyzes the bias of the feature amount within the matching window set for the right image by the window setting unit 210 . The in-window feature analysis unit 220 analyzes the bias of the feature amount within the matching window before the search by the search unit 240 is performed.
[0051]
 The in-window feature analysis unit 220 uses the edge image generated by the edge generation unit 150 to analyze the bias of the feature amount within the matching window. Specifically, the in-window feature analysis unit 220 sets a matching window for the same area as the matching window set for the right image by the window setting unit 210 in the edge image generated by the edge generation unit 150 . Then, the in-window feature analysis unit 220 analyzes the edge in the matching window set in the edge image as a feature amount in the matching window.
[0052]
 As shown in FIGS. 5 to 7, the in-window feature analysis unit 220 performs horizontal edge extraction processing and vertical edge extraction processing on the image within the matching window. Horizontal edge extraction processing is processing for extracting edges from an image within a matching window using a horizontal edge extraction filter. Vertical edge extraction processing is processing for performing edge extraction on an image within a matching window using a filter for vertical edge extraction. The in-window feature analysis unit 220 projects the processing results of the horizontal edge extraction processing and the vertical edge extraction processing in the horizontal direction and the vertical direction.
[0053]
 Specifically, as the horizontal projection result of the horizontal edge extraction processing, the in-window feature analysis unit 220 generates a histogram showing the vertical distribution of the cumulative values obtained by accumulating the horizontal edge strength in the horizontal direction for each vertical coordinate. As the vertical projection result of the horizontal edge extraction processing, the in-window feature analysis unit 220 generates a histogram showing the horizontal distribution of the cumulative values obtained by accumulating the horizontal edge strength in the vertical direction for each horizontal coordinate. That is, the in-window feature analysis unit 220 generates a histogram of cumulative values obtained by horizontally accumulating, and a histogram of cumulative values obtained by vertically accumulating, the intensities of edges having luminance changes in the horizontal direction (horizontal edge intensities) within the matching window.
[0054]
 Similarly, as the horizontal projection result of the vertical edge extraction processing, the in-window feature analysis unit 220 generates a histogram showing the vertical distribution of the cumulative values obtained by accumulating the vertical edge strength in the horizontal direction for each vertical coordinate. As the vertical projection result of the vertical edge extraction processing, the in-window feature analysis unit 220 generates a histogram showing the horizontal distribution of the cumulative values obtained by accumulating the vertical edge strength in the vertical direction for each horizontal coordinate.
[0055]
 Based on the generated histogram, the in-window feature analysis unit 220 can grasp the presence or absence of bias in the feature amount in the window and the direction of the bias.
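 A minimal numpy sketch of this projection analysis, using simple finite-difference gradients in place of the embodiment's unspecified edge extraction filters:

import numpy as np

def edge_projection_histograms(window):
    """Project the edge strengths of a matching window in both directions.

    Horizontal edges are luminance changes along the horizontal direction
    (gradient along x); vertical edges are changes along the vertical
    direction (gradient along y), matching the patent's terminology.
    """
    win = window.astype(np.float32)
    gx = np.abs(np.diff(win, axis=1))       # horizontal edge strength
    gy = np.abs(np.diff(win, axis=0))       # vertical edge strength
    return {
        "h_edge_per_row": gx.sum(axis=1),   # vertical distribution of horizontal edges
        "h_edge_per_col": gx.sum(axis=0),   # horizontal distribution of horizontal edges
        "v_edge_per_row": gy.sum(axis=1),   # vertical distribution of vertical edges
        "v_edge_per_col": gy.sum(axis=0),   # horizontal distribution of vertical edges
    }

Comparing where these four histograms exceed a reference value reveals whether the feature amount is biased vertically, horizontally, or obliquely, as in FIGS. 5 to 7.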
[0056]
 With reference to FIG. 5, the analysis results when the feature amount in the matching window is biased in the vertical direction will be described. FIG. 5 illustrates the case where the feature amounts in the matching window are concentrated in the lower part of the matching window, as shown in the upper part of FIG. 5. The filter for horizontal edge extraction is shown in the middle left part of FIG. 5. The middle right part of FIG. 5 shows a histogram representing the horizontal projection result of the horizontal edge extraction processing. The middle lower part of FIG. 5 shows a histogram representing the vertical projection result of the horizontal edge extraction processing. The filter for vertical edge extraction is shown in the lower left part of FIG. 5. The lower right part of FIG. 5 shows a histogram representing the horizontal projection result of the vertical edge extraction processing. A histogram representing the vertical projection result of the vertical edge extraction processing is shown in the bottom part of FIG. 5.
[0057]
 The image within the matching window shown in the upper part of FIG. 5 has no luminance change in the horizontal direction. That is, the image within the matching window shown in the upper part of FIG. 5 does not have strong horizontal edges. The image in the matching window shown in the upper part of FIG. 5 has a luminance change in the vertical direction, and the luminance change in the vertical direction continues in the horizontal direction with the same state. That is, the image within the matching window shown in the upper part of FIG. 5 has strong vertical edges.
[0058]
 As shown in the middle right part of FIG. 5, the horizontal projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is smaller than a predetermined reference value and is constant in the vertical direction. As shown in the middle lower part of FIG. 5, the vertical projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is smaller than the reference value and is constant over the horizontal direction.
[0059]
 As shown in the lower right part of FIG. 5, the horizontal projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is smaller than the reference value and constant from the top toward the bottom of the window, and becomes much larger than the reference value in the lower part of the window. As shown in the bottom part of FIG. 5, the vertical projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is larger than the cumulative value of the horizontal edge strength accumulated in the vertical direction (see the histogram in the middle lower part of FIG. 5) and is constant across the horizontal direction.
[0060]
 Based on the histogram shown in FIG. 5, the in-window feature analysis unit 220 can grasp that the feature amount in the window is biased in the vertical direction.
[0061]
 With reference to FIG. 6, the analysis results when the feature amount in the matching window is biased in the horizontal direction will be described. FIG. 6 illustrates the case where the feature amounts in the matching window are concentrated in the right part of the matching window, as shown in the upper part of FIG. 6. The filter for horizontal edge extraction is shown in the middle left part of FIG. 6. The middle right part of FIG. 6 shows a histogram representing the horizontal projection result of the horizontal edge extraction processing. The middle lower part of FIG. 6 shows a histogram representing the vertical projection result of the horizontal edge extraction processing. The filter for vertical edge extraction is shown in the lower left part of FIG. 6. The lower right part of FIG. 6 shows a histogram representing the horizontal projection result of the vertical edge extraction processing. A histogram representing the vertical projection result of the vertical edge extraction processing is shown in the bottom part of FIG. 6.
[0062]
 The image within the matching window shown in the upper part of FIG. 6 has no luminance change in the vertical direction. That is, the image within the matching window shown in the upper part of FIG. 6 does not have strong vertical edges. The image within the matching window shown in the upper part of FIG. 6 has a luminance change in the horizontal direction, and this horizontal luminance change continues in the vertical direction in the same state. That is, the image within the matching window shown in the upper part of FIG. 6 has strong horizontal edges.
[0063]
 Therefore, as shown in the middle right part of FIG. 6, the horizontal projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is larger than the cumulative value of the vertical edge strength accumulated in the horizontal direction (see the histogram in the lower right part of FIG. 6) and is constant in the vertical direction. As shown in the middle lower part of FIG. 6, the vertical projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is smaller than the reference value and constant from the left part toward the right part of the window, and becomes much larger than the reference value in the right part of the window.
[0064]
 As shown in the lower right part of FIG. 6, the horizontal projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is smaller than the reference value and is constant over the vertical direction. As shown in the bottom part of FIG. 6, the vertical projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is smaller than the reference value and is constant in the horizontal direction.
[0065]
 Based on the histogram shown in FIG. 6, the in-window feature analysis unit 220 can grasp that the feature amount in the window is biased in the horizontal direction.
[0066]
 With reference to FIG. 7, the analysis results when the feature amount in the matching window is biased in an oblique direction will be described. FIG. 7 illustrates the case where the feature amounts in the matching window are concentrated in the upper left part of the matching window, as shown in the upper part of FIG. 7. The filter for horizontal edge extraction is shown in the middle left part of FIG. 7. The middle right part of FIG. 7 shows a histogram representing the horizontal projection result of the horizontal edge extraction processing. The middle lower part of FIG. 7 shows a histogram representing the vertical projection result of the horizontal edge extraction processing. The filter for vertical edge extraction is shown in the lower left part of FIG. 7. The lower right part of FIG. 7 shows a histogram representing the horizontal projection result of the vertical edge extraction processing. A histogram representing the vertical projection result of the vertical edge extraction processing is shown in the bottom part of FIG. 7.
[0067]
 The image within the matching window shown in the upper part of FIG. 7 has luminance changes in both the horizontal and vertical directions in the upper left part of the matching window, and no luminance change in the other parts. That is, the image within the matching window shown in the upper part of FIG. 7 has a strong horizontal edge and a strong vertical edge only in the upper left part of the matching window, and no strong edges elsewhere.
[0068]
 Therefore, as shown in the middle right part of FIG. 7, the horizontal projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is larger than the reference value in the upper part of the window and larger there than in the other parts. As shown in the middle lower part of FIG. 7, the vertical projection result of the horizontal edge extraction processing shows that the cumulative value of the horizontal edge strength is larger than the reference value in the left part of the window and larger there than in the other parts.
[0069]
 As shown in the lower right part of FIG. 7, the horizontal projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is larger than the reference value in the upper part of the window and larger there than in the other parts. As shown in the bottom part of FIG. 7, the vertical projection result of the vertical edge extraction processing shows that the cumulative value of the vertical edge strength is larger than the reference value in the left part of the window and larger there than in the other parts.
[0070]
 Based on the histogram shown in FIG. 7, the in-window feature analysis unit 220 can grasp that the feature amount in the window is skewed in the diagonal direction.
[0071]
 Note that when there is no luminance change in the image within the matching window, in histograms such as those shown in FIGS. 5 to 7, both the cumulative values of the horizontal edge strength and the cumulative values of the vertical edge strength are smaller than the reference value and constant. Based on these histograms, the in-window feature analysis unit 220 can ascertain that there is no bias in the feature amounts within the matching window.
[0072]
 The center-of-gravity calculation unit 230 calculates the center-of-gravity position of the feature amount within the matching window set for the right image by the window setting unit 210. The default center-of-gravity position is the center position of the matching window. Since the search direction of the corresponding points in the stereo matching process is the horizontal direction, the center-of-gravity calculation unit 230 calculates the center-of-gravity position of the feature amount in the matching window based on the result of the horizontal edge extraction processing, which extracts luminance changes in the horizontal direction. Specifically, the center-of-gravity calculation unit 230 calculates the center-of-gravity position of the feature amount within the matching window based on the histogram showing the vertical distribution of the cumulative values of the horizontal edge strength accumulated in the horizontal direction (the histograms in the middle right parts of FIGS. 5 to 7) and the histogram showing the horizontal distribution of the cumulative values of the horizontal edge strength accumulated in the vertical direction (the histograms in the middle lower parts of FIGS. 5 to 7).
[0073]
 More specifically, the center-of-gravity calculation unit 230 smooths the histogram showing the vertical distribution of the cumulative values of the horizontal edge strength accumulated in the horizontal direction and the histogram showing the horizontal distribution of the cumulative values of the horizontal edge strength accumulated in the vertical direction. Then, in each smoothed histogram, when the distribution of cumulative values has a cumulative value equal to or greater than a predetermined reference value and has a peak, the center-of-gravity calculation unit 230 calculates the vertical or horizontal coordinate of the position of the peak and determines it as the vertical or horizontal coordinate of the center-of-gravity position of the feature amount within the matching window.
[0074]
 On the other hand, if the distribution of the cumulative values in a histogram does not have a cumulative value equal to or greater than the reference value or does not have a peak, the center-of-gravity calculation unit 230 uses the default center-of-gravity position of the matching window; that is, it determines the vertical or horizontal coordinate of the center position of the matching window as the vertical or horizontal coordinate of the center-of-gravity position of the feature amount within the matching window.
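 The rule in paragraphs [0073] and [0074] can be sketched as follows: smooth each histogram, take the peak coordinate when the peak reaches the reference value, and otherwise fall back to the window center. The smoothing kernel and the peak test are simplifications assumed for the example.

import numpy as np

def centroid_coordinate(hist, reference, default_center):
    """Return the peak coordinate of a smoothed histogram, or the window
    center coordinate when no sufficiently strong peak exists."""
    kernel = np.ones(3) / 3.0                      # hypothetical smoothing kernel
    smooth = np.convolve(hist, kernel, mode="same")
    peak = int(np.argmax(smooth))
    if smooth[peak] >= reference and smooth[peak] > smooth.mean():
        return peak                                # coordinate of the histogram peak
    return default_center                          # default: center of the matching window

# Usage: vertical then horizontal coordinate of the center of gravity,
# with the projection histograms from the earlier sketch.
# cy = centroid_coordinate(h_edge_per_row, ref, window_h // 2)
# cx = centroid_coordinate(h_edge_per_col, ref, window_w // 2)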
[0075]
 For example, as shown in the upper part of FIG. 5, when the feature amounts in the matching window are concentrated in the lower part of the matching window, the vertical distribution of the cumulative values of the horizontal edge strength accumulated in the horizontal direction is constant over the vertical direction and has no peak, as shown in the middle right part of FIG. 5. In this case, the center-of-gravity calculation unit 230 determines the vertical coordinate of the center position of the matching window as the vertical coordinate of the center-of-gravity position of the feature amount within the matching window. As shown in the middle lower part of FIG. 5, the horizontal distribution of the cumulative values of the horizontal edge strength accumulated in the vertical direction is also constant in the horizontal direction and has no peak. In this case, the center-of-gravity calculation unit 230 determines the horizontal coordinate of the center position of the matching window as the horizontal coordinate of the center-of-gravity position of the feature amount within the matching window. As a result, when the feature amounts are concentrated in the lower part of the matching window, the center-of-gravity position of the feature amount in the matching window is the position indicated by the circled cross in the middle image of FIG. 5.
[0076]
 As shown in the upper part of FIG. 6, when the feature amounts in the matching window are concentrated in the right part of the matching window, the vertical distribution of the cumulative values of the horizontal edge strength accumulated in the horizontal direction is constant over the vertical direction and has no peak, as shown in the middle right part of FIG. 6. In this case, the center-of-gravity calculation unit 230 determines the vertical coordinate of the center position of the matching window as the vertical coordinate of the center-of-gravity position of the feature amount within the matching window. As shown in the middle lower part of FIG. 6, the horizontal distribution of the cumulative values of the horizontal edge strength accumulated in the vertical direction has a cumulative value equal to or greater than the reference value and a peak at the right part of the matching window. In this case, the center-of-gravity calculation unit 230 determines the horizontal coordinate of the position of the right part of the matching window, which is the position of this peak, as the horizontal coordinate of the center-of-gravity position of the feature amount within the matching window. As a result, when the feature amounts are concentrated in the right part of the matching window, the center-of-gravity position of the feature amount in the matching window is the position indicated by the circled cross in the middle image of FIG. 6.
[0077]
 As shown in the upper part of FIG. 7, when the feature amounts in the matching window are concentrated in the upper left part of the matching window, the vertical distribution of the cumulative values of the horizontal edge strength accumulated in the horizontal direction has a cumulative value equal to or greater than the reference value and a peak at the upper part of the matching window, as shown in the middle right part of FIG. 7. In this case, the center-of-gravity calculation unit 230 determines the vertical coordinate of the upper position in the matching window, which is the position of this peak, as the vertical coordinate of the center-of-gravity position of the feature amount within the matching window. As shown in the middle lower part of FIG. 7, the horizontal distribution of the cumulative values of the horizontal edge strength accumulated in the vertical direction has a cumulative value equal to or greater than the reference value and a peak at the left part of the matching window. In this case, the center-of-gravity calculation unit 230 determines the horizontal coordinate of the position of the left part in the matching window, which is the position of this peak, as the horizontal coordinate of the center-of-gravity position of the feature amount within the matching window. As a result, when the feature amounts are concentrated in the upper left part of the matching window, the center-of-gravity position of the feature amount in the matching window is the position indicated by the circled cross in the middle image of FIG. 7.
[0078]
 If there is no bias in the feature amount within the matching window, such as when textures are dispersed within the matching window, the parallax measurement error is small even if the parallax is measured based on the center position of the matching window. On the other hand, when the bias of the feature amount within the matching window is large, such as when a texture with a large luminance change exists locally within the matching window, the parallax measurement error increases if the parallax is measured based on the center position of the matching window.
[0079]
 When the bias of the feature amount within the matching window is large, the parallax is therefore measured based on the position of the center of gravity of the feature amount. This is because, when the bias of the feature amount within the matching window is large, the only clues for searching for corresponding points within the matching window are the locations where horizontal edges with large luminance changes in the horizontal direction locally exist, and the position of the corresponding point is determined by where those horizontal edges exist.
[0080]
 Therefore, when distance measurement is performed with reference to the position of the center of gravity of the feature amount within the matching window, the distance measurement error can be reduced because that position matches the reference position of the parallax measurement. When the distance is measured with reference to the center position of the matching window, the in-vehicle environment recognition device 1 can reduce the distance measurement error by correcting the distance measurement result using the position of the center of gravity of the feature amount in the matching window. As a result, the in-vehicle environment recognition device 1 can accurately reproduce steps existing on the road using the three-dimensional point group forming the step candidates, and can accurately detect steps existing on the road. However, whether or not to actually correct the distance measurement result is judged by considering the evaluation result of the reliability of the stereo matching process described below and other information, such as whether there is a line segment candidate in the matching window.
[0081]
 The reliability evaluation unit 250 evaluates the reliability of the stereo matching process. The reliability evaluation unit 250 determines whether or not the distribution of the cumulative values of the horizontal edge strength calculated by the in-window feature analysis unit 220 has cumulative values equal to or greater than a predetermined reference value. In addition, the reliability evaluation unit 250 determines whether the distribution of the cumulative values of the horizontal edge strength calculated by the in-window feature analysis unit 220 has a peak in the horizontal direction. The fact that the distribution of the cumulative values of the horizontal edge strength has a peak in the horizontal direction can mean that the similarity calculated by the search unit 240 is high at one point in the horizontal direction.
[0082]
 The reliability evaluation unit 250 evaluates the reliability of the stereo matching process as high when the distribution of the cumulative values of the horizontal edge strength has cumulative values equal to or greater than the reference value and the degree of similarity is high at one location in the horizontal direction. That is, the reliability evaluation unit 250 evaluates the reliability of the stereo matching process as high when the distribution of the cumulative values of the horizontal edge strength has a cumulative value equal to or greater than the reference value and has a peak in the horizontal direction.
[0083]
 The reliability evaluation unit 250 evaluates the reliability of the stereo matching process as low if the distribution of the cumulative values of the horizontal edge strength does not have a cumulative value equal to or greater than the reference value, or if the similarity is not high at one location in the horizontal direction on the image. That is, the reliability evaluation unit 250 evaluates the reliability of the stereo matching process as low when the distribution of the cumulative values of the horizontal edge strength does not have a cumulative value equal to or greater than the reference value or does not have a peak in the horizontal direction.
[0084]
 If the distribution of the cumulative values of the horizontal edge strength does not have cumulative values equal to or greater than the reference value, it indicates that there is a lack of information sources for identifying corresponding points. Even if the distribution of the cumulative values of the horizontal edge strength has cumulative values equal to or greater than the reference value, if the reference value is reached at multiple locations in the horizontal direction on the image, similar image features exist at those locations, and it is difficult to determine which of the multiple locations is the correct corresponding point. Note that even if the distribution of the cumulative values of the horizontal edge strength has a cumulative value equal to or greater than the reference value and the similarity is high at one location in the horizontal direction on the image, a line extending diagonally on the image, a texture bias within the matching window, a Y shift, or the like may still affect the reliability.
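 A compact sketch of these two reliability criteria follows. The single-peak test and the reference value are simplified stand-ins for thresholds the embodiment does not specify.

import numpy as np

def stereo_matching_reliable(h_edge_per_col, reference):
    """Evaluate stereo matching reliability from the horizontal distribution
    of the horizontal edge strength, per paragraphs [0081] to [0084].

    Reliable only when the cumulative values reach the reference value at
    exactly one horizontal location (one clear peak); several strong
    locations mean the corresponding point is ambiguous.
    """
    hist = np.asarray(h_edge_per_col, dtype=np.float32)
    strong = (hist >= reference).astype(np.int8)
    if not strong.any():
        return False       # not enough information to identify corresponding points
    # Count contiguous runs of strong columns; more than one run means ambiguity.
    rising_edges = np.count_nonzero(np.diff(np.concatenate(([0], strong, [0]))) == 1)
    return rising_edges == 1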
[0085]
[Step Candidate Extraction Unit]
 FIG. 8 is a diagram showing the configuration of the step candidate extraction unit 300 shown in FIG. 1. FIG. 9 is a diagram for explaining the processing of the road plane analysis unit 310 shown in FIG. 8. FIG. 10 is a diagram for explaining the processing of the road edge step extraction unit 320 and the traveling road surface step extraction unit 330 shown in FIG. 8.
[0086]
 The step candidate extraction unit 300 extracts step candidates having a height difference with respect to the flat portion of the road surface, such as a step at the road edge, a step such as a bump on the traveling road surface, or an obstacle on the traveling road surface. The step candidate extraction unit 300 also checks the accuracy of the extracted step candidates and whether they are noise.
[0087]
 As shown in FIG. 8, the step candidate extraction unit 300 includes a road plane analysis unit 310, a road edge step extraction unit 320, a traveling road surface step extraction unit 330, a single noise removal unit 340, and a connected component extraction unit 350.
[0088]
 As shown in the upper part of FIG. 9, the road plane analysis unit 310 sets, as a processing area, the road surface of the road on which the vehicle is predicted to travel, based on the predicted course of the vehicle and the vehicle width. The road plane analysis unit 310 then uses the parallax image generated by the stereo matching unit 200 to analyze the parallax within the processing area.
[0089]
 Specifically, the road plane analysis unit 310 converts the parallax to be processed into three-dimensional spatial coordinates, generates a cross-sectional view of the road surface as shown in the lower part of FIG. 9, and estimates the height and slope of the road surface. That is, the road plane analysis unit 310 converts the parallax to be processed into three-dimensional spatial coordinates to acquire a three-dimensional point group to be processed, and can use the acquired three-dimensional point group to generate a cross-sectional view of the road surface. The cross-sectional view of the road surface shown in the lower part of FIG. 9 is a graph in which the horizontal axis represents the distance in the depth direction and the vertical axis represents the height of the road surface.
[0090]
 When generating the cross-sectional view of the road surface, the road plane analysis unit 310 scans the processing area 311 of the parallax image in the horizontal direction and votes the mode of the heights of the three-dimensional point group as a candidate point through which the straight line representing the road surface passes on the cross-sectional view. The road plane analysis unit 310 repeats this voting process along the depth direction and acquires a row of candidate points, shown as crosses in the lower part of FIG. 9. The road plane analysis unit 310 then performs straight line estimation processing on the obtained row of candidate points. In the straight line estimation processing, the road plane analysis unit 310 estimates the straight line through which the largest number of candidate points pass.
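 The voting and straight-line estimation of [0090] can be sketched as follows. This is only an illustration under assumed inputs: the `(depth, height)` point layout, the quantization step, and the random-sampling line fit (one possible way to find the line through the most candidate points) are not taken from the specification.

```python
import numpy as np

def road_profile_candidates(points, depth_bins, height_res=0.05):
    """Vote the per-depth mode of point heights as in [0090].

    points     : (N, 2) array of (depth, height) values obtained from the
                 parallax image (an assumed input layout).
    depth_bins : 1-D array of depth bin edges scanned along the road.
    height_res : illustrative height quantization step.
    """
    candidates = []
    for z0, z1 in zip(depth_bins[:-1], depth_bins[1:]):
        h = points[(points[:, 0] >= z0) & (points[:, 0] < z1), 1]
        if h.size == 0:
            continue
        q = np.round(h / height_res).astype(int)
        mode_h = (np.bincount(q - q.min()).argmax() + q.min()) * height_res
        candidates.append((0.5 * (z0 + z1), mode_h))  # one vote per depth slice
    return np.array(candidates)

def fit_road_line(candidates, tol=0.10, iters=200, seed=0):
    """Estimate the straight line through which the most candidate points
    pass, here by simple random sampling (one possible realization)."""
    if len(candidates) < 2:
        return None
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(candidates), size=2, replace=False)
        (z1, h1), (z2, h2) = candidates[i], candidates[j]
        if z1 == z2:
            continue
        slope = (h2 - h1) / (z2 - z1)
        resid = np.abs(candidates[:, 1] - (h1 + slope * (candidates[:, 0] - z1)))
        inliers = int(np.count_nonzero(resid < tol))
        if inliers > best_inliers:
            best, best_inliers = (slope, h1 - slope * z1), inliers
    return best  # (slope, intercept) of the road cross-section line
```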
[0091]
 From among the candidate points that deviate greatly from the straight line estimated by the straight line estimation processing, the road plane analysis unit 310 extracts candidate points that clearly indicate a step existing on the traveling road surface as candidate points constituting step candidates, and deletes the other candidate points as noise. Candidate points that clearly indicate a step on the traveling road surface are, for example, candidate points that indicate a bump and are arranged in a semi-elliptical (semi-cylindrical) shape. As a result, the road plane analysis unit 310 can extract step candidates such as bumps that exist on the traveling road surface.
[0092]
 The road plane analysis unit 310 performs fitting processing using only the candidate points near the estimated straight line, and estimates the height and inclination of the road surface plane. Because the fitting processing uses only candidate points near the straight line, the height and inclination of the road surface plane can be estimated accurately.
[0093]
 As shown in the upper part of FIG. 10, the road edge step extraction unit 320 scans the traveling road surface in the lateral direction from its center line toward the road edge, and extracts candidate points that constitute step candidates existing at the road edge. The upper part of FIG. 10 exemplifies a case where there is a step between the sidewalk at the road edge and the road, and shows an example of scanning from the center line of the traveling road surface to the left toward the road edge.
[0094]
 Specifically, the road edge step extraction unit 320 first checks whether the height of the center line of the traveling road surface deviates significantly from the height of the road surface plane portion estimated by the road plane analysis unit 310. If the height of the center line deviates greatly from the height of the road surface plane portion, it is determined to be noise, and the subsequent processing is skipped.
[0095]
 When the height of the center line does not deviate greatly from the height of the road surface plane portion estimated by the road plane analysis unit 310, the road edge step extraction unit 320 performs the following processing. That is, the road edge step extraction unit 320 scans in the lateral direction from the center line of the traveling road surface toward the road edge, and acquires the three-dimensional point group forming the traveling road surface along the scanning direction. The road edge step extraction unit 320 compares the heights of the acquired three-dimensional point group with the height of the road surface plane portion estimated by the road plane analysis unit 310. Then, as shown in the middle part of FIG. 10, the road edge step extraction unit 320 generates a graph showing the heights of the acquired three-dimensional point group with respect to the road surface plane. The cross-sectional view of the road surface shown in the middle part of FIG. 10 is a graph in which the horizontal axis represents the distance from the center line of the traveling road surface to the left, and the vertical axis represents the height of the road surface.
[0096]
 In the graph shown in the middle part of FIG. 10, when three-dimensional points whose heights fall within a fixed range continue over a predetermined span in the scanning direction, the road edge step extraction unit 320 establishes the average of those heights as the height of the traveling road surface.
[0097]
 The road edge step extraction unit 320 then checks the change in height of the acquired three-dimensional point group. Specifically, the road edge step extraction unit 320 determines whether the height of the three-dimensional point group, referenced to the height of the road surface plane portion, changes so as to satisfy a predetermined condition on the laterally outer side of the traveling road surface. If it does, the road edge step extraction unit 320 extracts the three-dimensional points whose height changes so as to satisfy the predetermined condition as candidate points constituting a step candidate existing at the road edge.
[0098]
 For example, if there is a sidewalk higher than the traveling road surface at the road edge, the heights of the three-dimensional point group remain at the height of the road surface and continue in the lateral direction for a while, and then continue in the lateral direction at a height higher than the road surface plane. In this case, when, for example, at least two three-dimensional points having a height higher than the road surface plane continue laterally outside the traveling road surface, the road edge step extraction unit 320 extracts the three-dimensional points at the position where the height changes from the road surface height to a height higher than the road surface plane as candidate points constituting a step candidate existing at the road edge. The above-mentioned predetermined condition, that is, the condition for extracting candidate points constituting a step candidate existing at the road edge, is, for example, that at least two three-dimensional points having a height higher than the road surface plane continue in the lateral direction outside the traveling road surface.
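 A minimal sketch of this extraction condition follows; the step-height threshold, the run length of two points, and the function name are illustrative assumptions, not values from the specification.

```python
import numpy as np

def road_edge_step_candidates(heights, plane_h, step_thresh=0.10, min_run=2):
    """Sketch of the condition in [0098].

    heights : lateral sequence of 3-D point heights, scanned outward from
              the center line of the traveling road surface.
    plane_h : established height of the road surface plane portion.
    """
    rel = np.asarray(heights) - plane_h
    above = rel > step_thresh          # higher than the road surface plane
    candidates = []
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        # At least `min_run` consecutive raised points outside the road:
        # the position where the height first rises becomes a candidate.
        if run == min_run:
            candidates.append(i - min_run + 1)
    return candidates
```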
[0099]
 The road edge step extraction unit 320 shifts the position of the center line of the traveling road surface of interest in the depth direction, and continues scanning in the lateral direction from the shifted center line position. By repeating this processing, the road edge step extraction unit 320 can extract the candidate points that constitute step candidates existing at the road edge.
[0100]
 As shown in the upper part of FIG. 10, the traveling road surface step extraction unit 330 scans in the depth direction from the vehicle toward the vanishing point (the point at infinity), and extracts candidate points that form step candidates existing on the traveling road surface. Specifically, like the road edge step extraction unit 320, the traveling road surface step extraction unit 330 scans in the depth direction, acquires the three-dimensional point group that constitutes the traveling road surface along the scanning direction, and generates a cross-sectional view of the road surface as shown in the lower part of FIG. 10. The cross-sectional view of the road surface shown in the lower part of FIG. 10 is a graph in which the horizontal axis represents the distance in the depth direction and the vertical axis represents the height of the road surface.
[0101]
 The traveling road surface step extraction unit 330 processes only the three-dimensional point group near the road surface plane estimated by the road plane analysis unit 310, and deletes, as noise, three-dimensional points that deviate greatly from the estimated road surface plane. The traveling road surface step extraction unit 330 determines the height of the road surface plane portion based on the heights of the three-dimensional point group to be processed, and then checks the change in the height of the three-dimensional point group with reference to the height of the road surface plane portion.
[0102]
 Candidate points forming step candidates such as bumps existing on the traveling road surface have already been extracted by the road plane analysis unit 310. The traveling road surface step extraction unit 330 collates the candidate points constituting step candidates such as bumps extracted by the road plane analysis unit 310 with the three-dimensional point group obtained by scanning in the depth direction. Furthermore, the height of the three-dimensional point group of an obstacle or the like existing on the traveling road surface is often continuously higher than the road surface plane portion along the depth direction. The traveling road surface step extraction unit 330 therefore checks whether the height of the three-dimensional point group, referenced to the height of the road surface plane portion, is continuously higher than the road surface plane along the depth direction, and extracts such continuously raised three-dimensional points as candidate points constituting step candidates such as obstacles existing on the traveling road surface.
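 The depth-direction check of [0102] can be sketched as below; the rise threshold, the minimum run length, and the function name are illustrative assumptions rather than values recited in the text.

```python
import numpy as np

def surface_obstacle_candidates(heights, plane_h, rise=0.05, min_run=3):
    """Sketch of [0102]: scan heights along the depth direction and flag
    runs whose height stays continuously above the road surface plane."""
    rel = np.asarray(heights) - plane_h
    above = rel > rise
    candidates, start = [], None
    for i, a in enumerate(np.append(above, False)):  # sentinel closes last run
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_run:       # continuously raised along depth
                candidates.append((start, i - 1))
            start = None
    return candidates                       # index ranges of step candidates
```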
[0103]
 Since the processing of the traveling road surface step extraction unit 330 basically handles only the three-dimensional point group on a line extending in the scanning direction, it can be executed simply and at high speed, but it is sometimes susceptible to noise and the like. For the candidate points extracted by the traveling road surface step extraction unit 330, a final determination of whether or not they are noise is therefore made again later.
[0104]
 The single noise removal unit 340 removes isolated noise from the candidate points extracted by the road edge step extraction unit 320 or the traveling road surface step extraction unit 330. However, since the candidate points forming step candidates such as bumps on the traveling road surface have already been extracted by the voting process using the mode value, the single noise removal unit 340 does not remove them as noise.
[0105]
 The connected component extraction unit 350 checks whether the candidate points remaining after noise removal by the single noise removal unit 340 have a certain degree of connectivity and aggregation. For example, the connected component extraction unit 350 checks whether a candidate point extracted by the road edge step extraction unit 320 is accompanied by other candidate points continuing along the direction in which the road extends. Also, for example, the connected component extraction unit 350 checks whether a candidate point extracted by the traveling road surface step extraction unit 330 has other similar candidate points nearby in the lateral direction or the depth direction. Thereby, the connected component extraction unit 350 can confirm that the candidate points extracted by the road edge step extraction unit 320 or the traveling road surface step extraction unit 330 form step candidates, and can extract them.
[0106]
[Line Segment Candidate Extraction Unit]
 FIG. 11 is a diagram showing the configuration of the line segment candidate extraction unit 400 shown in FIG. 1.
[0107]
 The line segment candidate extraction unit 400 includes a line candidate search unit 410, a line feature comparison unit 420, and a line segment classification unit 430, as shown in FIG.
[0108]
 The line candidate search unit 410 searches for line candidates using the edge image of the right image generated by the edge generation unit 150. The edge image includes horizontal edges, which have luminance changes in the horizontal direction, and vertical edges, which have luminance changes in the vertical direction. The line candidate search unit 410 generates an edge angle image by combining the horizontal edges and the vertical edges of the edge image. The edge angle image is an image in which edges are vectorized using the intensity of the horizontal edges and the intensity of the vertical edges, and the angles formed between the vectorized edges and the coordinate axes are quantified and stored. The line candidate search unit 410 performs a Hough transform on the generated edge angle image to search for line candidates.
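 As a rough sketch of this step, the following Python fragment builds an edge-angle image from horizontal and vertical edge intensities and searches line candidates with a Hough transform. The OpenCV calls stand in for the device's own edge generation unit 150; the thresholds and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def find_line_candidates(gray):
    """Sketch of [0108]: edge-angle image plus Hough line search."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)       # horizontal edge intensity
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)       # vertical edge intensity
    magnitude = cv2.magnitude(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx))       # edge-angle image
    strong = (magnitude > 50).astype(np.uint8) * 255   # illustrative threshold
    lines = cv2.HoughLinesP(strong, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=30, maxLineGap=5)
    return lines, angle
```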
[0109]
 The line feature comparison unit 420 checks whether the edge angles aligned along a line candidate found by the line candidate search unit 410 have a certain degree of similarity, thereby confirming that the line candidate is not a line incidentally drawn on random texture. The line feature comparison unit 420 searches the line candidates found by the line candidate search unit 410 for those having the characteristics of line segment candidates, and extracts them as line segment candidates. For example, using the edge angle image and the edge image, the line feature comparison unit 420 searches for line candidates that have a certain degree of edge strength, a high similarity of edge angles, and a start point and an end point, and extracts them as line segment candidates.
[0110]
 The line segment classification unit 430 classifies the line segment candidates extracted by the line feature comparison unit 420 according to their inclination, that is, the edge angle. More specifically, the line segment classification unit 430 classifies them into line segment candidates extending horizontally on the image, line segment candidates extending vertically on the image, and line segment candidates extending diagonally on the image.
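 The classification by inclination can be written compactly as below; the angular margin `tol_deg` is an assumed parameter, since the specification does not recite the bin boundaries.

```python
import math

def classify_segment(x1, y1, x2, y2, tol_deg=20.0):
    """Illustrative binning for [0110] by segment inclination on the image."""
    ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    if ang < tol_deg or ang > 180.0 - tol_deg:
        return "horizontal"   # nearly parallel to the image rows
    if abs(ang - 90.0) < tol_deg:
        return "vertical"     # nearly parallel to the image columns
    return "diagonal"
```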
[0111]
[Analysis Unit]
 FIG. 12 is a diagram showing the configuration of the analysis unit 500 shown in FIG. 1. FIG. 13 is a diagram for explaining the processing of the three-dimensional point group analysis unit 520 shown in FIG. 12. FIG. 14 is a diagram for explaining the processing of the horizontal line confirmation unit 530 shown in FIG. 12. FIG. 15 is a diagram for explaining the processing of the oblique line confirmation unit 540 shown in FIG. 12, for the case where the cause of the erroneous measurement of parallax is a deviation of the centroid position of the feature quantity within the matching window. FIG. 16 is a diagram for explaining the processing of the oblique line confirmation unit 540 shown in FIG. 12, for the case where the cause of the erroneous measurement of parallax is a Y shift.
[0112]
 The analysis unit 500 includes a collation unit 510, a three-dimensional point group analysis unit 520, a horizontal line confirmation unit 530, an oblique line confirmation unit 540, and a matching correction unit 550, as shown in FIG. 12.
[0113]
 The collation unit 510 collates the step candidates extracted by the step candidate extraction unit 300 with the line segment candidates extracted by the line segment candidate extraction unit 400. Specifically, the collation unit 510 checks whether a step candidate extracted by the step candidate extraction unit 300 overlaps, on the edge image or on the image, with a line segment candidate extracted by the line segment candidate extraction unit 400.
[0114]
 If a step candidate extracted by the step candidate extraction unit 300 does not overlap with any line segment candidate extracted by the line segment candidate extraction unit 400, or overlaps only with a line segment candidate extending in the vertical direction, the reliability of the stereo matching process is high, and the collation unit 510 therefore determines that this step candidate indicates a step existing on the road. That is, in this case, the collation unit 510 determines that the validity of the step candidate extracted by the step candidate extraction unit 300 is high.
[0115]
 As a result, the in-vehicle environment recognition device 1 can immediately determine that a highly valid step candidate indicates a step existing on the road, without performing processing such as remeasurement of parallax. Therefore, the in-vehicle environment recognition device 1 can promptly suppress erroneous detection due to erroneous measurement of parallax, and can accurately detect steps existing on the road.
[0116]
 On the other hand, if a step candidate extracted by the step candidate extraction unit 300 overlaps with a line segment candidate extending in the horizontal direction or the oblique direction, the collation unit 510 does not immediately determine that the step candidate indicates a step existing on the road. That is, in this case, the collation unit 510 determines that the validity of the step candidate extracted by the step candidate extraction unit 300 is low.
[0117]
 In particular, when a step candidate with a small height difference, that is, a step candidate whose three-dimensional point group height differs little from the road surface plane portion, overlaps with a line segment candidate extending in the horizontal or oblique direction, this step candidate may actually be a road surface marking extending horizontally or diagonally, such as a lane marking or a zebra marking drawn on the road surface. A step candidate with a small height difference overlapping such a line segment candidate may therefore have been extracted due to an erroneous measurement of parallax.
[0118]
 When the validity of a step candidate extracted by the step candidate extraction unit 300 is low, the three-dimensional point group analysis unit 520 analyzes the arrangement of the three-dimensional point group forming the step candidate using the method shown in FIG. 13. Based on the arrangement of the three-dimensional point group constituting the step candidate, the three-dimensional point group analysis unit 520 analyzes whether the step candidate may have been extracted due to erroneous measurement of parallax.
[0119]
 As a result, the in-vehicle environment recognition device 1 can remeasure the parallax and correct the distance for a step candidate with low validity, or delete it as noise, so that only the steps that actually exist on the road are accurately reproduced by the three-dimensional point group forming the step candidates. The in-vehicle environment recognition device 1 can thus suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0120]
 Specifically, the three-dimensional point group analysis unit 520 identifies a three-dimensional point group including the step candidate and distributed in the horizontal direction on the parallax image. As shown in FIG. 13, the three-dimensional point group analysis unit 520 generates a lateral cross-sectional view of the road surface including the traveling road surface indicated by the specified three-dimensional point group. At this time, the three-dimensional point group analysis unit 520 sets the viewpoint of the camera on the cross-sectional view of the road surface, and generates a cross-sectional view of the road surface showing the arrangement of the three-dimensional point group viewed from the set viewpoint of the camera. The position of the viewpoint of the camera set on the cross-sectional view of the road surface may be a position corresponding to the vanishing point.
[0121]
 Strictly speaking, the viewpoint of the camera does not lie on the cross-sectional view of the road surface, because its position in the depth direction differs from that of the cross section. The three-dimensional point group analysis unit 520 nevertheless assumes that the viewpoint of the camera exists on the cross-sectional view and sets it there. The three-dimensional point group analysis unit 520 then sets straight lines each passing through the camera viewpoint on the road cross-sectional view and one of the three-dimensional points.
[0122]
 In the cross-sectional view of the road surface generated by the three-dimensional point group analysis unit 520, the three-dimensional points positioned directly below the set camera viewpoint indicate the traveling road surface, and the three-dimensional points positioned laterally outward from directly below the camera indicate the road edges. FIG. 13 shows an example in which there is a gutter lower than the road surface at the road edge on the right side and a curb higher than the road surface at the road edge on the left side. On the right side, FIG. 13 also shows a side wall of the gutter where the road surface is interrupted.
[0123]
 The plurality of straight lines passing through the camera viewpoint and each of the three-dimensional points correspond to light rays incident on the camera, and light rays incident on the camera basically do not bend. Therefore, the straight lines passing through the camera viewpoint and each of the three-dimensional points basically neither intersect each other nor have uneven intervals between them. In other words, a scene in which such straight lines intersect each other or are unevenly spaced cannot have been imaged by the camera. For this reason, when the straight lines passing through the camera viewpoint and each of the three-dimensional points intersect each other, or when the intervals between them become uneven, the step candidate composed of the three-dimensional points on those straight lines is likely to have been extracted due to erroneous measurement of parallax.
[0124]
 When the straight lines passing through the camera viewpoint and each of the three-dimensional points intersect, or when the intervals between them become uneven, the heights of the three-dimensional points with respect to the road surface tend to vary randomly up and down. A three-dimensional point group whose height relative to the road surface varies randomly is one in which the heights of laterally adjacent three-dimensional points fluctuate irregularly between positions higher than the road surface and positions lower than the road surface.
[0125]
 The straight lines passing through the three-dimensional points enclosed by the dashed-dotted line in FIG. 13 intersect each other, and the heights of those points relative to the road surface vary randomly up and down. The three-dimensional point group analysis unit 520 therefore determines that the step candidate formed by the three-dimensional points enclosed by the dashed-dotted line may have been extracted due to erroneous measurement of parallax.
[0126]
 In other words, when the straight lines passing through the camera viewpoint and each of the three-dimensional points forming a step candidate intersect each other, or when the intervals between those straight lines are uneven, the three-dimensional point group analysis unit 520 determines that the step candidate composed of those three-dimensional points may have been extracted due to erroneous measurement of parallax.
[0127]
 As a result, the in-vehicle environment recognition device 1 can identify, by a simple method, a step candidate that may have been extracted due to erroneous measurement of parallax, and can accurately reproduce only the steps existing on the road using the three-dimensional point group forming the step candidates. Therefore, the in-vehicle environment recognition device 1 can easily suppress erroneous detection due to erroneous measurement of parallax, and can accurately and easily detect steps existing on the road.
[0128]
 Note that the three-dimensional point group analysis unit 520 can also analyze, by a method other than the above method using straight lines through the camera viewpoint and each of the three-dimensional points, whether a step candidate overlapping a line segment candidate extending in the horizontal or oblique direction may have been extracted due to erroneous measurement of parallax. For example, for a step candidate overlapping a line segment candidate extending in the horizontal or oblique direction, if the heights of the three-dimensional points constituting the step candidate relative to the road surface vary randomly up and down, the three-dimensional point group analysis unit 520 determines that the step candidate formed by that three-dimensional point group may have been extracted due to erroneous measurement of parallax.
[0129]
 Specifically, the three-dimensional point group analysis unit 520 identifies, with the three-dimensional point group forming the road surface as a reference, three-dimensional points that are higher than the road surface and three-dimensional points that are lower than the road surface. If the identified three-dimensional points are adjacent to each other in the lateral direction within a predetermined range, the three-dimensional point group analysis unit 520 determines that this is a three-dimensional point group whose height relative to the road surface varies randomly up and down. The three-dimensional point group analysis unit 520 can then determine that a step candidate composed of such a three-dimensional point group may have been extracted due to erroneous measurement of parallax.
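 The simpler check of [0129] amounts to counting sign flips of the heights about the road surface plane, as in the following sketch; the noise band and flip count are illustrative assumptions.

```python
import numpy as np

def height_varies_randomly(heights, plane_h, noise=0.03, min_flips=3):
    """Sketch of [0129]: flag a laterally adjacent run of 3-D points whose
    height flips irregularly above and below the road surface plane, which
    suggests the step candidate came from mismeasured parallax."""
    rel = np.asarray(heights) - plane_h
    signs = np.sign(rel[np.abs(rel) > noise])  # ignore points on the plane
    flips = np.count_nonzero(np.diff(signs) != 0)
    return flips >= min_flips
```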
[0130]
 This approach is easier to implement than the above approach using multiple straight lines passing through each of the 3D point clouds and the camera viewpoint. Therefore, the in-vehicle environment recognition device 1 can more easily suppress erroneous detection due to erroneous measurement of parallax, and can accurately and more easily detect steps existing on the road.
[0131]
 The horizontal line confirmation unit 530 finally determines whether a step candidate overlapping a line segment candidate extending in the horizontal direction is highly likely to have been extracted due to erroneous measurement of parallax. As shown in the upper part of FIG. 14, when the left image and the right image are appropriately parallelized and pixels having no feature other than a luminance change in the vertical direction are arranged in the horizontal direction, the reliability of the stereo matching process evaluated by the reliability evaluation unit 250 is low. The horizontal line confirmation unit 530 searches for line segment candidates in which most of the pixels arranged in the horizontal direction have no feature other than the luminance change in the vertical direction, and confirms that the reliability of the stereo matching process is low for them.
[0132]
 Furthermore, as shown in part B in the lower part of FIG. 14, in the case of a line segment candidate composed of a pixel row whose luminance distribution in the vertical direction differs slightly from that in the upper part of FIG. 14, it is difficult to specify accurately at which horizontal position the vertical luminance distribution changes when the image is quantized, and accurate quantization may not be possible. Even such a small event can cause erroneous measurement of parallax. In addition, improper parallelization of the left image and the right image, that is, the occurrence of a Y shift, can also cause erroneous measurement of parallax. When there is a Y shift, the reliability of the stereo matching process is generally low, and the three-dimensional point group constituting a step candidate that overlaps a line segment candidate extending in the horizontal direction is often one whose height relative to the road surface varies randomly up and down.
[0133]
 The horizontal line confirmation unit 530 determines that a step candidate overlapping a line segment candidate extending in the horizontal direction, as shown in the upper and lower parts of FIG. 14, is highly likely to have been extracted due to erroneous measurement of parallax. In other words, the horizontal line confirmation unit 530 determines that erroneous measurement of parallax is highly likely to have occurred for line segment candidates extending in the horizontal direction as shown in the upper and lower parts of FIG. 14.
[0134]
 The oblique line confirmation unit 540 finally determines whether a step candidate overlapping a line segment candidate extending in the oblique direction is highly likely to have been extracted due to erroneous measurement of parallax. Here, as shown in FIGS. 15 and 16, the processing of the oblique line confirmation unit 540 is described using as an example a case where a white line appears in the upper left part of the matching window in the right image. White lines are markings on the road and include road markings and lane markings. Road markings are markings such as zebras and stop lines. Lane markings are markings such as boundary lines between a plurality of lanes (for example, the boundary line of a vehicle traffic zone) or boundary lines between the roadway and the road edge (for example, a roadway outer line). In this embodiment, an example is described in which a roadway outer line appears as a white line in the upper left part of the matching window.
[0135]
 The upper part of FIG. 15 shows a case where the centroid position of the feature quantity within the matching window deviates greatly from the center position of the matching window. The middle part of FIG. 15 is an enlarged view of the matching window shown in the upper part of FIG. 15. The lower part of FIG. 15 shows a case where, unlike the upper and middle parts of FIG. 15, the centroid position of the feature quantity within the matching window hardly deviates from the center position of the matching window. In the middle and lower parts of FIG. 15, the circled cross mark indicates the centroid position of the feature quantity within the matching window, and the circled diagonal cross mark indicates the center position of the matching window.
[0136]
 In a stereo camera distance measurement method using a matching window, the distance is often measured with reference to the center position of the matching window. On the other hand, as described above for the center of gravity calculation unit 230, when the feature quantity within the matching window is biased, measuring the parallax with reference to the centroid position of the feature quantity within the matching window is more accurate. In normal stereo matching processing, however, parallax that was effectively measured with reference to the centroid position of the feature quantity is often treated as if it had been measured with reference to the center position of the matching window, which can cause measurement errors.
[0137]
 For example, as shown in the middle part of FIG. 15, when a white line appears only in the upper left part of the matching window and there are no other conspicuous features, the feature quantity within the matching window is biased. In this case, the centroid of the feature quantity within the matching window is located in the upper left part, away from the center position of the matching window by ΔZ and ΔX. If the parallax is measured with reference to the centroid position of the feature quantity within the matching window, the measurement error is small. If, however, the parallax is measured with reference to the center position of the matching window, ΔZ and ΔX become a problem, and the parallax measurement error grows relative to the size of the matching window. The distance measurement error also increases in accordance with the parallax measurement error.
[0138]
 On the other hand, as shown in the lower part of FIG. 15, when a white line passes through the center position of the matching window and there are no other conspicuous features, the position of the center of gravity of the feature quantity in the matching window and the center position of the matching window are at substantially the same position. In this case, even if the parallax is measured using the center position of the matching window as a reference, the parallax measurement error is minute, and the distance measurement error is also minute.
[0139]
 The oblique line confirmation unit 540 determines that a step candidate overlapping a line segment candidate extending in the oblique direction, as shown in the upper and middle parts of FIG. 15, is highly likely to have been extracted due to erroneous measurement of parallax. In other words, the oblique line confirmation unit 540 determines that erroneous measurement of parallax is highly likely to have occurred for line segment candidates extending in the oblique direction as shown in the upper and middle parts of FIG. 15.
[0140]
 The upper part of FIG. 16 shows the case where the left image and the right image are properly parallelized and no Y shift occurs. The lower part of FIG. 16 shows a case in which the left image and the right image are not parallelized properly and a Y shift occurs.
[0141]
 If the left and right images are not properly parallelized and a Y shift exists, erroneous measurement of parallax occurs. Ideally, corresponding points would be searched for by moving the matching window of the left image in the horizontal direction relative to the matching window indicated by the solid line in the right image in the lower part of FIG. 16. When a Y shift occurs, however, the matching window of the left image is moved horizontally relative to the matching window indicated by the dashed line in the right image in the lower part of FIG. 16. That is, when there is a Y shift, the heights in the left image and the right image do not coincide, so corresponding points between the right image and the left image are searched for at different heights, and erroneous measurement of parallax can occur.
[0142]
 The oblique line confirmation unit 540 checks whether a Y shift has occurred for a line segment candidate extending in the oblique direction that has been determined to be highly likely to involve erroneous measurement of parallax. Specifically, the oblique line confirmation unit 540 resets the matching window by shifting either the matching window set for the right image or the matching window set for the left image by a predetermined amount in the vertical direction. Using the reset matching window, the oblique line confirmation unit 540 performs the stereo matching process on the line segment candidate extending in the oblique direction and recalculates the similarity. At this time, the oblique line confirmation unit 540 may reset the matching window by shifting it by the predetermined amount a plurality of times in each of the upward and downward directions and recalculate the similarity each time. The oblique line confirmation unit 540 then compares the similarity obtained with the reset matching window against the similarity obtained with the existing matching window before resetting. If no Y shift has occurred, the similarity with the existing matching window is higher than the similarity with any reset matching window. If a Y shift has occurred, the similarity with a reset matching window is higher than the similarity with the existing matching window. In this way, the oblique line confirmation unit 540 can confirm whether a Y shift has occurred for the line segment candidate extending in the oblique direction that was determined to be highly likely to involve erroneous measurement of parallax.
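 The Y-shift check can be sketched as below, using ZNCC as one common similarity measure (the specification does not name the similarity function). The window layout, the shift amounts, and the assumption that all shifted windows stay inside the image are illustrative.

```python
import numpy as np

def y_shift_detected(right, left, win, shifts=(-2, -1, 1, 2)):
    """Sketch of [0142]: re-run matching with the left window shifted
    vertically and compare similarities. `win` is (row, col, height, width)
    of the matching window."""
    def zncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    r, c, h, w = win
    ref = right[r:r + h, c:c + w].astype(float)
    base = zncc(ref, left[r:r + h, c:c + w].astype(float))
    shifted = max(zncc(ref, left[r + s:r + s + h, c:c + w].astype(float))
                  for s in shifts)
    # A higher similarity at a shifted height than at the original height
    # indicates that a Y shift has occurred.
    return shifted > base
```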
[0143]
 Note that the horizontal line confirmation unit 530 uses the same method as the oblique line confirmation unit 540 to check whether a Y shift has occurred for a line segment candidate extending in the horizontal direction that is highly likely to have caused erroneous measurement of parallax.
[0144]
 For a line segment candidate for which the horizontal line confirmation unit 530 or the oblique line confirmation unit 540 has determined that erroneous measurement of parallax is highly likely to have occurred, the matching correction unit 550 corrects the parallax, and the distance derived from it, according to the type of cause of the erroneous measurement.
[0145]
 For line segment candidates extending in the oblique direction as shown in the upper and middle parts of FIG. 15, the matching correction unit 550 corrects the distance according to the parallax using the centroid position of the feature quantity within the matching window. For example, the matching correction unit 550 corrects the distance according to the parallax using the differences ΔZ and ΔX between the centroid position of the feature quantity within the matching window and the center position of the matching window.
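 Computing the centroid offsets is straightforward, as the following sketch shows. The edge-intensity image as the feature quantity and the window layout are assumptions; how the vertical image offset maps to ΔZ in depth depends on the road geometry and is not shown here.

```python
import numpy as np

def centroid_offsets(edge_strength, win):
    """Sketch of [0145]: offsets between the feature centroid and the
    window centre, usable to correct the distance derived from parallax.
    `edge_strength` is the edge-intensity image; `win` is (row, col, h, w)."""
    r, c, h, w = win
    patch = edge_strength[r:r + h, c:c + w].astype(float)
    total = patch.sum()
    if total == 0:
        return 0.0, 0.0                  # no feature bias to correct
    ys, xs = np.mgrid[0:h, 0:w]
    cy = (ys * patch).sum() / total      # centroid row inside the window
    cx = (xs * patch).sum() / total      # centroid column inside the window
    # Deviations from the window centre: lateral offset (toward delta X)
    # and vertical offset (a proxy for delta Z on the road surface).
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
```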
[0146]
 As a result, even when the feature quantity within the matching window is biased due to texture bias or the like, the in-vehicle environment recognition device 1 can reduce the parallax measurement error and, with it, the distance measurement error. The in-vehicle environment recognition device 1 can accurately reproduce steps existing on a road using the three-dimensional point group forming step candidates. Therefore, the in-vehicle environment recognition device 1 can suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0147]
 If it is confirmed that a Y shift has occurred for a line segment candidate extending in the oblique direction or a line segment candidate extending in the horizontal direction as shown in the lower part of FIG. 16, the matching correction unit 550 parallelizes the left image and the right image and corrects the distance according to the parallax, based on the result of the Y-shift confirmation processing by the horizontal line confirmation unit 530 or the oblique line confirmation unit 540.
[0148]
 That is, the horizontal line confirmation unit 530 or the oblique line confirmation unit 540 confirms whether a Y shift has occurred by comparing the similarity obtained with the matching window reset by shifting it a predetermined amount in the vertical direction against the similarity obtained with the existing matching window before resetting. The matching correction unit 550 corrects the distance according to the parallax based on this similarity comparison result. Specifically, the matching correction unit 550 identifies the matching window that yielded the highest similarity in the comparison. The matching correction unit 550 parallelizes the left image and the right image according to the amount by which the identified matching window deviates from the existing matching window, performs stereo matching again on the parallelized left and right images, remeasures the parallax according to the result, and corrects the distance according to the parallax.
[0149]
 As a result, even when the left image and the right image are not properly parallelized, the in-vehicle environment recognition device 1 can reduce the parallax measurement error and the distance measurement error. The in-vehicle environment recognition device 1 can accurately reproduce steps existing on a road using the three-dimensional point group forming step candidates. Therefore, the in-vehicle environment recognition device 1 can suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0150]
 For line segment candidates extending in the horizontal direction as shown in the upper and lower parts of FIG. 14, the matching correction unit 550 enlarges the size of the matching window in the vertical direction and performs the stereo matching process again. Specifically, the matching correction unit 550 enlarges the matching window in the vertical direction and resets it so that vertically extending line segment candidates (that is, horizontal edges) around the horizontally extending line segment candidate fall within the matching window. Using the reset matching window, the matching correction unit 550 performs the stereo matching process again for the horizontally extending line segment candidate, remeasures the parallax according to the result, and corrects the distance according to the parallax. Enlarging the vertical size of the matching window may make it difficult to reproduce the surrounding three-dimensional shape in detail, but horizontal edges that were outside the matching window can now fall within it. Since horizontal edges serve as clues for searching for corresponding points between the pair of images, having them inside the matching window makes it easier to find the correct corresponding points, and the reliability of the stereo matching process can be improved.
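 The window enlargement can be sketched as below; the growth step, the size cap, and the edge-energy threshold are illustrative assumptions, not values from the specification.

```python
import numpy as np

def enlarge_until_horizontal_edge(win, horiz_edge, image_h,
                                  step=2, max_h=64, thresh=50.0):
    """Sketch of [0150]: grow the matching window vertically until a row
    with sufficient horizontal-edge energy falls inside it. `horiz_edge`
    is the horizontal-edge image; `win` is (row, col, height, width)."""
    r, c, h, w = win
    while h < min(max_h, image_h):
        rows = horiz_edge[r:r + h, c:c + w].sum(axis=1)
        if rows.size and rows.max() >= thresh:
            break                        # a horizontal edge is now inside
        r = max(0, r - step)             # grow roughly symmetrically
        h = min(h + 2 * step, image_h - r)
    return r, c, h, w                    # reset window for re-matching
```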
[0151]
 As a result, the in-vehicle environment recognition device 1 can accurately quantize a horizontally extending line segment candidate composed of a pixel row whose luminance distribution changes slightly in the vertical direction, and can reduce the parallax measurement error and the distance measurement error. The in-vehicle environment recognition device 1 can accurately reproduce steps existing on a road using the three-dimensional point group forming step candidates. Therefore, the in-vehicle environment recognition device 1 can suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0152]
 Note that if it is confirmed that a Y shift has occurred for a line segment candidate extending in the oblique direction or a line segment candidate extending in the horizontal direction as shown in the lower part of FIG. 16, the matching correction unit 550 may enlarge the size of the matching window in the vertical direction and perform the stereo matching process again. The matching correction unit 550 may then remeasure the parallax according to the result of the repeated stereo matching process and correct the distance according to the parallax. By enlarging the size of the matching window in the vertical direction, the matching correction unit 550 can reduce the influence of the Y shift on the matching window, and can thereby reduce the parallax measurement error and the distance measurement error.
[0153]
 Further, for a line segment candidate extending in the horizontal or oblique direction for which erroneous measurement of parallax is determined to be highly likely, the matching correction unit 550 masks, among the edges existing within the matching window, the edges of the line segment candidate whose edge strength is equal to or greater than a predetermined strength. Excluding the masked edges, the matching correction unit 550 performs the stereo matching process again on the line segment candidate extending in the horizontal or oblique direction, remeasures the parallax according to the result, and corrects the distance according to the parallax.
[0154]
 Masking the edges existing within the matching window whose edge strength is equal to or greater than the predetermined strength reduces the bias of the feature quantity within the matching window. When the bias of the feature quantity within the matching window is reduced, the centroid position of the feature quantity and the center position of the matching window come close to each other, so the parallax measurement error and the distance measurement error become smaller.
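 A minimal sketch of the masking step, assuming a boolean mask marking the pixels of the flagged line segment candidate (an assumed input, not from the specification):

```python
import numpy as np

def mask_strong_segment_edges(edge_strength, segment_mask, strength_thresh):
    """Sketch of [0153]-[0154]: zero out edges that belong to the flagged
    line segment candidate and exceed the predetermined strength, so the
    remaining features drive the re-run of stereo matching."""
    masked = edge_strength.copy()
    strong = segment_mask & (edge_strength >= strength_thresh)
    masked[strong] = 0                   # reduces feature bias in the window
    return masked
```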
[0155]
 As a result, even when the texture within the matching window is biased, the in-vehicle environment recognition device 1 can reduce the parallax measurement error and the distance measurement error. The in-vehicle environment recognition device 1 can accurately reproduce steps existing on a road using the three-dimensional point group forming step candidates. Therefore, the in-vehicle environment recognition device 1 can suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0156]
 Here, in the stereo matching process, the stereo matching unit 200 calculates the degree of similarity using the intensity of the edge within the matching window as the feature quantity within the matching window, as described above. That is, the stereo matching unit 200 performs the stereo matching process by a method in which the degree of change in brightness within the matching window directly affects the calculation of the degree of similarity.
[0157]
 The matching correction unit 550 can also perform the stereo matching process again using a technique in which the magnitude of the luminance change within the matching window does not directly affect the similarity calculation. For example, the matching correction unit 550 can calculate the similarity using the angles of the edges within the matching window as the feature quantity. The angle of an edge is the angle between the vectorized edge and a coordinate axis, obtained by vectorizing the edge using the intensity of the horizontal edge and the intensity of the vertical edge.
[0158]
 Specifically, the matching correction unit 550 identifies the edge angles calculated from the intensities of the horizontal edges and the vertical edges within the matching window. At this time, the matching correction unit 550 may identify the edge angles from the edge angle image generated by the line candidate search unit 410. Using the identified edge angles, the matching correction unit 550 calculates the similarity and performs the stereo matching process again on the line segment candidate extending in the horizontal or oblique direction. The matching correction unit 550 remeasures the parallax according to the result of the repeated stereo matching process and corrects the distance according to the parallax.
[0159]
 When edge angles are used to calculate the similarity, the stereo matching process is possible as long as the line segment candidate extending in the horizontal or oblique direction contains edges of even minute intensity. In this case, not only strong edges but also weak edges of minute intensity existing within the matching window are reflected in the similarity calculation. Therefore, by calculating the similarity using edge angles, the matching correction unit 550 can reduce the erroneous measurement of parallax that arises from relying only on strong edges.
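 One way to realize an angle-based similarity is to average the cosine of the angle differences over pixels where both patches have any edge, as in the sketch below; the specification does not define the exact similarity function, so this formulation is an assumption.

```python
import numpy as np

def angle_similarity(angle_a, angle_b, valid_a, valid_b):
    """Sketch of [0157]-[0159]: similarity from edge angles rather than edge
    strength, so minute edges count as much as strong ones. `angle_*` are
    edge-angle patches in radians; `valid_*` mark pixels with any edge."""
    both = valid_a & valid_b
    if not both.any():
        return 0.0
    # cos(delta) is 1 for equal angles and -1 for opposite ones; averaging
    # it ignores how strong each edge is, only how well the angles agree.
    return float(np.cos(angle_a[both] - angle_b[both]).mean())
```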
[0160]
 As a result, the in-vehicle environment recognition device 1 can reduce the parallax measurement error and the distance measurement error, and can accurately reproduce steps existing on the road using the three-dimensional point group forming step candidates. Therefore, the in-vehicle environment recognition device 1 can suppress erroneous detection due to erroneous measurement of parallax and accurately detect steps existing on the road.
[0161]
[Three-dimensional Object Detection Unit]
 FIG. 17 is a diagram showing the configuration of the three-dimensional object detection unit 600 shown in FIG. 1.
[0162]
 The three-dimensional object detection unit 600 includes a road edge step detection unit 610, a traveling road surface step detection unit 620, and an obstacle detection unit 630, as shown in FIG.
[0163]
 The three-dimensional object detection unit 600 acquires the three-dimensional point group again based on the result of the stereo matching process performed again by the matching correction unit 550. The three-dimensional object detection unit 600 then corrects the height and inclination of the road surface plane portion using the reacquired three-dimensional point group, and uses the corrected road surface plane portion to perform the processing for detecting steps existing on the road. Note that the three-dimensional object detection unit 600 may detect a three-dimensional object using the already acquired three-dimensional point group. That is, the three-dimensional object detection unit 600 may use the height and inclination of the road surface plane portion estimated by the road plane analysis unit 310 to perform the processing for detecting steps existing on the road.
[0164]
 The road edge step detection unit 610 detects steps existing at the road edge based on the estimation result for the road surface plane portion and the step candidates extracted by the connected component extraction unit 350, and distinguishes between steps existing at the road edge and road surface markings. For example, for a step candidate extracted by the road edge step extraction unit 320 and the connected component extraction unit 350 that was not determined to be highly likely to have been extracted due to erroneous measurement of parallax, the road edge step detection unit 610 reconfirms that it has a height difference with respect to the corrected road surface plane portion. Further, for example, for a step candidate extracted by the road edge step extraction unit 320 and the connected component extraction unit 350 that was determined to be highly likely to have been extracted due to erroneous measurement of parallax, the road edge step detection unit 610 confirms, by comparing the reacquired three-dimensional point group with the corrected road surface plane portion, that it was erroneously extracted as a step candidate.
[0165]
 As a result, the road edge step detection unit 610 can reliably distinguish between steps existing at the road edge and step candidates erroneously extracted due to erroneous parallax measurements caused by line segment candidates extending in the horizontal or oblique direction. In particular, the road edge step detection unit 610 can reliably distinguish between a step with a small height difference between the road surface and the road shoulder and road surface markings extending in the horizontal or oblique direction, such as lane markings and zebra zones drawn on the road surface. The road edge step detection unit 610 can also perform time-series processing to remove the influence of conducting zones (zebras).
[0166]
 The traveling road surface step detection unit 620 detects steps existing on the traveling road surface based on the corrected road surface plane portion. The traveling road surface step detection unit 620 removes, as noise, from the step candidates extracted by the traveling road surface step extraction unit 330 and the connected component extraction unit 350, three-dimensional points that deviate greatly from the corrected road surface plane portion. The traveling road surface step detection unit 620 then checks the shape of the remaining three-dimensional point group and detects steps such as bumps existing on the traveling road surface. In other words, the traveling road surface step detection unit 620 detects steps in the traveling road surface that the vehicle can easily pass over while traveling but that can nevertheless give an impact to the vehicle.
[0167]
 The obstacle detection unit 630 detects obstacles and the like existing on the traveling road surface based on the corrected road surface plane portion. The obstacle detection unit 630 detects obstacles and the like existing on the traveling road surface by judging whether the three-dimensional points having a height difference with respect to the corrected road surface plane portion have aggregation. Since the parallax used at this time has been corrected by the matching correction unit 550, the obstacle detection unit 630 can accurately detect even an obstacle with a small height difference.
[0168]
[Alarm Control Unit]
 Based on the detection result of the three-dimensional object detection unit 600, the alarm control unit 700 outputs control information for controlling the traveling of the vehicle or the notification of an alarm to the control device of the vehicle.
[0169]
 For example, when it is detected based on the detection result of the road edge step detection unit 610 that the vehicle is about to deviate from a lane marking, the alarm control unit 700 outputs, to the control device of the vehicle, control information for issuing an alarm, control information for adjusting the steering angle, and control information for suppressing the vehicle speed. As a result, the alarm control unit 700 can prevent the vehicle from deviating from the lane marking, and can prevent the vehicle from colliding with a curb, a wall, or the like existing at the road edge.
[0170]
 Further, for example, when it is detected based on the detection result of the traveling road surface step detection unit 620 that there is a step such as a bump on the traveling road surface, the alarm control unit 700 outputs, to the control device of the vehicle, control information for suppressing the vehicle speed and control information for changing the active suspension setting so as to absorb the impact. As a result, the alarm control unit 700 can reduce the impact applied to the vehicle when it passes over the step on the traveling road surface.
[0171]
 Further, for example, when it is detected, based on the detection result of the obstacle detection unit 630, that there is an obstacle or the like on the road surface and that the vehicle is about to collide with it, the alarm control unit 700 outputs, to the control device of the vehicle, control information for the brake to stop traveling and control information for the steering angle to avoid the obstacle, in order to prevent a collision with the obstacle. As a result, the alarm control unit 700 can prevent the vehicle from colliding with an obstacle present on the road surface. Note that the alarm control unit 700 may output control information for notifying an alarm to the control device of the vehicle before outputting the control information for the brake and the steering angle.
[0172]
 The in-vehicle environment recognition device 1 suppresses erroneous detection due to erroneous measurement of parallax, accurately detects a step existing on the road, and can output, to the control device of the vehicle, control information for controlling the traveling of the vehicle or for the notification of an alarm. Therefore, the in-vehicle environment recognition device 1 can enhance the preventive safety function, the driving support function, and the like of the vehicle.
[0173]
[Ambient Environment Recognition Processing]
 FIG. 18 is a flow chart showing the ambient environment recognition processing performed by the in-vehicle environment recognition device 1 shown in FIG.
[0174]
 When the in-vehicle environment recognition device 1 acquires a pair of images using the pair of cameras constituting the image acquisition unit 110 (step S01), it performs edge image generation processing (step S02). Specifically, the in-vehicle environment recognition device 1 generates an edge image by performing edge extraction processing on the right image, which is the reference image, of the pair of images acquired by the pair of cameras.
[0175]
 After calibrating the sensitivities of the pair of cameras and the geometric conditions of the pair of acquired images, the in-vehicle environment recognition device 1 performs stereo matching processing (step S03) to search for corresponding points between the pair of images. When corresponding points between the pair of images are found, the position of each point in three-dimensional space is specified, and the parallax between the pair of images can be measured. The in-vehicle environment recognition device 1 measures the parallax between the pair of images, generates a parallax image, and measures the distance in the depth direction from the measured parallax based on the principle of triangulation. Thereby, a three-dimensional point cloud within the field of view of the pair of cameras can be obtained.
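 The triangulation relation referred to here is the standard one for a rectified stereo pair: depth Z = f·B/d, for focal length f in pixels, baseline B, and disparity d. A minimal sketch, with hypothetical parameter names:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity by triangulation: Z = f * B / d.

    disparity_px: parallax image in pixels (assumed layout);
    focal_px: focal length in pixels; baseline_m: camera spacing.
    Invalid (zero or negative) disparities are mapped to infinity.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = np.where(d > 0, focal_px * baseline_m / d, np.inf)
    return depth
```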
[0176]
 The in-vehicle environment recognition device 1 performs analysis processing of the road surface plane portion of the traveling road (step S04). Specifically, the in-vehicle environment recognition device 1 estimates the height and inclination of the road plane portion from the acquired three-dimensional point group. Thereby, the positional relationship between the pair of cameras and the road surface can be estimated.
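 One common way to estimate the height and inclination of a road plane from a 3D point cloud is a RANSAC plane fit. The specification does not commit to a particular estimator, so the following is only an illustrative sketch under that assumption, with assumed parameter values.

```python
import numpy as np

def fit_road_plane_ransac(points, iters=200, tol_m=0.03, seed=None):
    """Toy RANSAC plane fit for the road-plane estimation step.

    points: (N, 3) array of 3D points from the parallax image.
    Returns (unit_normal, offset) with the plane n . p + offset = 0.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        offset = -np.dot(n, p0)
        # Count points lying within tol_m of the candidate plane.
        inliers = np.sum(np.abs(points @ n + offset) < tol_m)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, offset)
    return best_plane
```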
[0177]
 The in-vehicle environment recognition device 1 performs extraction processing of step candidates existing on the road (step S05). Specifically, the in-vehicle environment recognition device 1 identifies a three-dimensional point group having a height difference with respect to the road plane portion estimated in step S04, and extracts step candidates existing on the road based on the height of the identified three-dimensional point group. After that, the in-vehicle environment recognition device 1 proceeds to step S07.
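 Under the plane representation used in the sketches above, the height-based selection of step S05 can be pictured as keeping points whose signed distance to the plane lies in a curb-like band. The band limits below are assumptions for illustration only.

```python
import numpy as np

def extract_step_candidates(points, normal, offset,
                            min_h_m=0.03, max_h_m=0.30):
    """Mark points whose height above the estimated road plane
    falls in a curb-like band (thresholds are assumed values).

    points: (N, 3) array; (normal, offset): plane from the fit above.
    Returns a boolean mask marking step-candidate points.
    """
    height = points @ normal + offset   # signed distance to the plane
    return (height > min_h_m) & (height < max_h_m)
```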
[0178]
 In parallel with the processing of steps S03 to S05, the in-vehicle environment recognition device 1 performs line segment candidate extraction processing (step S06). Specifically, the in-vehicle environment recognition device 1 searches for line candidates included in the image based on the edge image generated in step S02, and extracts, from among the searched line candidates, line segment candidates each having a start point and an end point. The in-vehicle environment recognition device 1 classifies the extracted line segment candidates, according to their inclination on the image, into vertically extending line segment candidates, horizontally extending line segment candidates, and diagonally extending line segment candidates. The in-vehicle environment recognition device 1 confirms whether each line segment candidate exists on a continuous edge and, after removing minute noise, proceeds to step S07.
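 As an illustration of step S06, the sketch below extracts line segment candidates from the edge image with a probabilistic Hough transform and classifies them by inclination into vertical, horizontal, and diagonal groups. OpenCV is used here only as a stand-in for whatever extractor the device employs, and the angle thresholds are assumptions.

```python
import numpy as np
import cv2

def classify_line_segments(edge_img, vert_deg=20.0, horiz_deg=20.0):
    """Extract line segment candidates from a binary edge image and
    split them by inclination (thresholds are illustrative)."""
    segs = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=40,
                           minLineLength=20, maxLineGap=5)
    vertical, horizontal, diagonal = [], [], []
    for x1, y1, x2, y2 in (segs.reshape(-1, 4) if segs is not None else []):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90:
            angle = 180 - angle        # fold into [0, 90] degrees
        if angle >= 90 - vert_deg:
            vertical.append((x1, y1, x2, y2))
        elif angle <= horiz_deg:
            horizontal.append((x1, y1, x2, y2))
        else:
            diagonal.append((x1, y1, x2, y2))
    return vertical, horizontal, diagonal
```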
[0179]
 The in-vehicle environment recognition device 1 performs matching processing between the step candidate extracted in step S05 and the line segment candidate extracted in step S06 (step S07). Then, the in-vehicle environment recognition device 1 analyzes the validity of the extracted step candidate based on the matching result and the inclination of the line segment candidate.
[0180]
 Specifically, when the step candidate does not overlap with any line segment candidate, or when the step candidate overlaps with a line segment candidate extending in the vertical direction, the in-vehicle environment recognition device 1 can determine that the stereo matching process has been performed correctly. It therefore determines that the step candidate has high validity (step S07: YES), and the process proceeds to step S10. On the other hand, when the step candidate overlaps with a line segment candidate extending in the horizontal or oblique direction, there is a high possibility that the result of the stereo matching process includes an erroneous measurement of parallax, so the in-vehicle environment recognition device 1 determines that the validity of the step candidate is low (step S07: NO), and the process proceeds to step S08.
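 The branch logic of step S07 can be summarized as follows, here over boolean NumPy pixel masks whose construction is assumed rather than specified by the text:

```python
def judge_step_validity(step_mask, vertical_mask, horiz_diag_mask):
    """Sketch of the step-S07 decision over boolean pixel masks.

    A step candidate is plausible if it overlaps no line segment
    candidate, or only vertically extending ones; overlap with a
    horizontal or diagonal segment flags a likely mismeasurement.
    """
    overlaps_suspect = bool((step_mask & horiz_diag_mask).any())
    if overlaps_suspect:
        return "suspect"   # proceed to step S08 (cause analysis)
    return "valid"         # no overlap, or vertical-only overlap
```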
[0181]
 The in-vehicle environment recognition device 1 performs analysis processing related to erroneous measurement of parallax (step S08). Specifically, it analyzes the arrangement of the three-dimensional point group forming the step candidate, and determines whether or not the step candidate may have been extracted due to erroneous parallax measurement.
[0182]
 When there is a possibility that a step candidate has been extracted due to erroneous measurement of parallax, the in-vehicle environment recognition device 1 analyzes the cause of the erroneous measurement. For example, for a line segment candidate extending in the horizontal direction, the in-vehicle environment recognition device 1 confirms that the candidate is composed of pixel rows in which the luminance distribution changes slightly in the vertical direction and therefore cannot be accurately quantized (see the lower part of FIG. 14), that a Y shift has occurred (see the lower part of FIG. 16), or that the distortion of the camera cannot be completely corrected and partial distortion remains, and thereby analyzes the factors causing the erroneous measurement of parallax. Further, for example, for a line segment candidate extending in the diagonal direction, the in-vehicle environment recognition device 1 confirms that the position of the center of gravity of the feature amount is biased owing to a bias of the texture within the matching window (see the middle part of FIG. 15), that a Y shift has occurred (see the lower part of FIG. 16), and so on, and thereby analyzes the cause of the erroneous measurement of parallax.
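 The centroid bias mentioned for diagonal line segment candidates (middle part of FIG. 15) can be probed by accumulating edge intensities inside the matching window and locating the intensity-weighted center of gravity, in the spirit of the histograms described later in claim 7. A hedged sketch, with hypothetical names and Sobel gradients standing in for whatever edge measure the device actually uses:

```python
import numpy as np
import cv2

def feature_centroid(window_gray):
    """Column/row histograms of edge intensity inside a matching
    window, then the intensity-weighted center of gravity.
    A centroid far from the window center suggests biased texture.
    """
    gx = np.abs(cv2.Sobel(window_gray, cv2.CV_64F, 1, 0, ksize=3))
    gy = np.abs(cv2.Sobel(window_gray, cv2.CV_64F, 0, 1, ksize=3))
    col_hist = gx.sum(axis=0)   # horizontal-change edges per column
    row_hist = gy.sum(axis=1)   # vertical-change edges per row
    cx = np.average(np.arange(len(col_hist)), weights=col_hist + 1e-9)
    cy = np.average(np.arange(len(row_hist)), weights=row_hist + 1e-9)
    return cx, cy   # compare against the window center to detect bias
```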
[0183]
 The in-vehicle environment recognition device 1 analyzes the cause of the erroneous measurement of parallax and, when it determines that there is a high possibility that the step candidate was extracted due to the erroneous measurement, proceeds to step S09. On the other hand, when it determines that it is unlikely that the step candidate was extracted due to erroneous measurement of parallax, the process proceeds to step S10.
[0184]
 When the in-vehicle environment recognition device 1 determines that there is a high possibility that the step candidate was extracted due to erroneous measurement of parallax, it performs matching correction processing (step S09). Specifically, according to the cause of the erroneous parallax measurement, the in-vehicle environment recognition device 1 re-performs the stereo matching process to re-measure the parallax and corrects the distance according to the re-measured parallax, or removes the extracted step candidate as noise. After that, the in-vehicle environment recognition device 1 proceeds to step S10.
[0185]
 Using the re-measured parallax, the in-vehicle environment recognition device 1 corrects the height and inclination of the road surface plane, and performs three-dimensional object detection processing for detecting steps present on the road based on the corrected road surface plane (step S10). That is, the in-vehicle environment recognition device 1 distinguishes, based on the corrected road surface plane, steps existing on the road from road markings, and also detects obstacles and the like existing on the road surface. The matching correction processing in step S09 has little effect on the detection performance for obstacles, holes, roadside ditches, and other steps with large height differences on the road surface. Conversely, it has a large effect on the detection performance for steps with a small height difference (for example, steps with a height of about 5 cm) and steps such as bumps. That is, by the matching correction processing in step S09, the in-vehicle environment recognition device 1 can substantially improve the detection performance for steps with a small height difference and for steps such as bumps, without degrading the detection performance for steps with a large height difference.
[0186]
 The in-vehicle environment recognition device 1 performs alarm control processing (step S11). Specifically, the in-vehicle environment recognition device 1 outputs information necessary for driving control of the vehicle, notification of an alarm, etc. to the control device of the vehicle based on the detection result of step S10. After that, the in-vehicle environment recognition device 1 terminates the surrounding environment recognition processing.
[0187]
 In this embodiment, among the components of the in-vehicle environment recognition device 1, the stereo matching unit 200, the step candidate extraction unit 300, the line segment candidate extraction unit 400, the analysis unit 500, and the three-dimensional object detection unit 600 are also collectively referred to as a "processing device". The processing device processes a pair of images acquired by the pair of cameras of the stereo camera unit 100. The processing device may further include the alarm control unit 700. The processing device may further include at least one of the exposure adjustment unit 120, the sensitivity calibration unit 130, the geometry calibration unit 140, and the edge generation unit 150 of the stereo camera unit 100. The processing device can perform the surrounding environment recognition processing shown in FIG. 18. The in-vehicle environment recognition device 1 can also be expressed as comprising a pair of cameras and a processing device.
[0188]
 In other words, the processing device is a processing device that processes a pair of images acquired by a pair of cameras mounted on a vehicle, and comprises at least: a stereo matching unit 200 that measures the parallax of the pair of images and generates a parallax image; a step candidate extraction unit 300 that extracts step candidates of the road on which the vehicle is traveling from the parallax image generated by the stereo matching unit 200; a line segment candidate extraction unit 400 that extracts line segment candidates from the images acquired by the pair of cameras; an analysis unit 500 that collates the step candidates extracted by the step candidate extraction unit 300 with the line segment candidates extracted by the line segment candidate extraction unit 400 and analyzes the validity of the step candidates based on the collation result and the inclination of the line segment candidates; and a three-dimensional object detection unit 600 that detects steps existing on the road based on the analysis result of the analysis unit 500.
[0189]
 The processing device can analyze the validity of the step candidate based on the collation result and the inclination of the line segment candidate. As a result, for a line segment candidate that is likely to cause erroneous parallax measurement and a step candidate overlapping such a candidate, the processing device can re-measure the parallax to correct the distance, or delete the candidate as noise. The processing device can thereby accurately reproduce only the steps existing on the road with the three-dimensional point group forming the step candidates. Therefore, the processing device can suppress erroneous detection due to erroneous measurement of parallax, and can accurately detect a step existing on the road.
[0190]
 Note that the processing device may be provided integrally with the pair of cameras. For example, the processing device may be provided within the housing of a stereo camera device that includes a pair of cameras installed inside the windshield of the vehicle. Alternatively, the processing device may be provided separately from the pair of cameras. For example, the processing device may be provided as part of an electronic control unit, which is one of the control devices of the vehicle.
[0191]
[Others]
 The present invention is not limited to the above-described embodiments, and includes various modifications. For example, the above embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the configurations described. Also, part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. Moreover, it is possible to add, delete, or replace part of the configuration of each embodiment with another configuration.
[0192]
 Further, each of the above configurations, functions, processing units, processing means, and the like may be realized by hardware, for example, by designing part or all of them as an integrated circuit. Each of the above configurations, functions, and the like may also be realized by software, with a processor interpreting and executing a program that realizes each function. Information such as programs, tables, and files that realize each function can be stored in a recording device such as a memory, hard disk, or SSD (Solid State Drive), or in a recording medium such as an IC card, SD card, or DVD.
[0193]
 Further, the control lines and information lines indicate those considered necessary for explanation, and not all control lines and information lines are necessarily indicated on the product. In practice, it may be considered that almost all configurations are interconnected.
Code explanation
[0194]
 1 ... in-vehicle environment recognition device
 100 ... stereo camera unit
 200 ... stereo matching unit
 300 ... step candidate extraction unit
 400 ... line segment candidate extraction unit
 500 ... analysis unit
 600 ... three-dimensional object detection unit
 700 ... alarm control unit
The scope of the claims
[Claim 1]
 A processing device for processing a pair of images acquired by a pair of cameras mounted on a vehicle,
 comprising:
 a feature image generation unit that acquires features of the pair of images and generates a feature image;
 a step candidate extraction unit that extracts step candidates of the road on which the vehicle travels from the feature image generated by the feature image generation unit;
 a line segment candidate extraction unit that extracts line segment candidates from the image;
 an analysis unit that collates the step candidates extracted by the step candidate extraction unit with the line segment candidates extracted by the line segment candidate extraction unit, and analyzes the validity of the step candidates based on the collation result and the inclination of the line segment candidates; and
 a three-dimensional object detection unit that detects a step existing on the road based on the analysis result of the analysis unit.
[Claim 2]

 The processing device according to claim 1, wherein the feature image generation unit includes a stereo matching unit that measures the parallax between the pair of images and generates a parallax image.
[Claim 3]

 The processing device according to claim 2, wherein
 the stereo matching unit measures the parallax by performing a stereo matching process of searching for corresponding points between the pair of images along a direction connecting the pair of cameras,
 the line segment candidates are classified, according to their inclination, into line segment candidates extending in a first direction along the corresponding-point search direction, line segment candidates extending in a second direction perpendicular to the first direction, and line segment candidates extending in a third direction intersecting the first direction and the second direction, and
 the analysis unit determines that the validity of the step candidate is high when the step candidate does not overlap with any line segment candidate, or when the step candidate overlaps with a line segment candidate extending in the second direction.
[Claim 4]

 The processing device according to claim 2, wherein
 the stereo matching unit measures the parallax by performing a stereo matching process of searching for corresponding points between the pair of images along a direction connecting the pair of cameras,
 the line segment candidates are classified, according to their inclination, into line segment candidates extending in a first direction along the corresponding-point search direction, line segment candidates extending in a second direction perpendicular to the first direction, and line segment candidates extending in a third direction intersecting the first direction and the second direction, and
 the analysis unit determines that the validity of the step candidate is low when the step candidate overlaps with a line segment candidate extending in the first direction or the third direction, analyzes, based on the arrangement of the three-dimensional point group constituting the step candidate, whether there is a possibility that the step candidate was extracted by erroneous measurement of the parallax, and corrects the distance according to the parallax when there is a high possibility that the step candidate was extracted by erroneous measurement of the parallax.
[Claim 5]
 The processing device according to claim 4, wherein, when a plurality of straight lines, each passing through one of the three-dimensional points forming the step candidate overlapping the line segment candidate extending in the first direction or the third direction and through the viewpoint of the camera, intersect each other, or when the intervals between the plurality of straight lines are uneven, the analysis unit determines that there is a possibility that the step candidate formed by the three-dimensional point group on the plurality of straight lines was extracted by erroneous measurement of the parallax.
[Claim 6]
 The processing device according to claim 4, wherein, for the step candidate overlapping the line segment candidate extending in the first direction or the third direction, when the height, with respect to the road surface of the road, of the three-dimensional point group forming the step candidate changes up and down at random, the analysis unit determines that there is a possibility that the step candidate formed by the three-dimensional point group was extracted by erroneous measurement of the parallax.
[Claim 7]
 The processing device according to claim 4, wherein
 the stereo matching unit performs the stereo matching process by setting a matching window on the pair of images, generates a histogram of cumulative values obtained by accumulating, in the first direction, the intensities of edges having luminance changes in the first direction within the matching window and a histogram of cumulative values obtained by accumulating the intensities of the edges in the second direction, and identifies, from the peak positions of the generated histograms, the position of the center of gravity of the feature amount within the matching window, and
 the analysis unit corrects the distance using the position of the center of gravity when there is a high possibility that the step candidate overlapping the line segment candidate extending in the first direction or the third direction was extracted by erroneous measurement of the parallax.
[Claim 8]
 The processing device according to claim 4, wherein
 the stereo matching unit performs the stereo matching process by setting a matching window on the pair of images, and
 when there is a high possibility that the step candidate overlapping the line segment candidate extending in the first direction was extracted by erroneous measurement of the parallax, the analysis unit expands the size of the matching window in the second direction until a line segment candidate extending in the second direction around the line segment candidate extending in the first direction falls within the matching window, resets the matching window, performs the stereo matching process again on the line segment candidate extending in the first direction using the reset matching window, and corrects the distance according to the result of the stereo matching process performed again.
[Claim 9]
 The processing device according to claim 4, wherein
 the stereo matching unit performs the stereo matching process by calculating a similarity between a feature amount in a matching window set on one of the pair of images and a feature amount in a matching window set on the other of the pair of images, and
 when there is a high possibility that the step candidate overlapping the line segment candidate extending in the first direction or the third direction was extracted by erroneous measurement of the parallax, the analysis unit shifts either the matching window set on the one of the pair of images or the matching window set on the other of the pair of images in the second direction by a predetermined amount to reset the matching window, performs the stereo matching process again on the line segment candidate extending in the first direction or the third direction using the reset matching window, compares the similarity when the reset matching window is used with the similarity when the matching window before being reset is used, parallelizes the pair of images based on the comparison result, and corrects the distance.
[Claim 10]
 The processing device according to claim 4, wherein
 the stereo matching unit performs the stereo matching process by setting a matching window on the pair of images, and
 when there is a high possibility that the step candidate overlapping the line segment candidate extending in the first direction or the third direction was extracted by erroneous measurement of the parallax, the analysis unit masks, for the line segment candidate extending in the first direction or the third direction, edges having an edge strength equal to or greater than a predetermined strength among the edges existing within the matching window, performs the stereo matching process again on the line segment candidate extending in the first direction or the third direction excluding the masked edges, and corrects the distance according to the result of the stereo matching process performed again.
[Claim 11]
 The processing device according to claim 4, wherein
 the stereo matching unit performs the stereo matching process by calculating a similarity between a feature amount in a matching window set on one of the pair of images and a feature amount in a matching window set on the other of the pair of images, and
 when there is a high possibility that the step candidate overlapping the line segment candidate extending in the first direction or the third direction was extracted by erroneous measurement of the parallax, the analysis unit identifies, for the line segment candidate extending in the first direction or the third direction, the angle of an edge existing within the matching window, performs the stereo matching process again on the line segment candidate extending in the first direction or the third direction using the identified angle of the edge as the feature amount, and corrects the distance according to the result of the stereo matching process performed again.
[Claim 12]

 The processing device according to claim 1, further comprising an alarm control unit that outputs, to a control device of the vehicle, control information for controlling traveling of the vehicle or notification of an alarm, based on a detection result of the three-dimensional object detection unit.

Documents

Application Documents

# Name Date
1 202217039751.pdf 2022-07-11
2 202217039751-COMPLETE SPECIFICATION [11-07-2022(online)].pdf 2022-07-11
3 202217039751-DRAWINGS [11-07-2022(online)].pdf 2022-07-11
4 202217039751-FORM 1 [11-07-2022(online)].pdf 2022-07-11
5 202217039751-FORM 18 [11-07-2022(online)].pdf 2022-07-11
6 202217039751-REQUEST FOR EXAMINATION (FORM-18) [11-07-2022(online)].pdf 2022-07-11
7 202217039751-DECLARATION OF INVENTORSHIP (FORM 5) [11-07-2022(online)].pdf 2022-07-11
8 202217039751-STATEMENT OF UNDERTAKING (FORM 3) [11-07-2022(online)].pdf 2022-07-11
9 202217039751-PROOF OF RIGHT [11-07-2022(online)].pdf 2022-07-11
10 202217039751-PRIORITY DOCUMENTS [11-07-2022(online)].pdf 2022-07-11
11 202217039751-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [11-07-2022(online)].pdf 2022-07-11
12 202217039751-POWER OF AUTHORITY [11-07-2022(online)].pdf 2022-07-11
13 202217039751-NOTIFICATION OF INT. APPLN. NO. & FILING DATE (PCT-RO-105-PCT Pamphlet) [11-07-2022(online)].pdf 2022-07-11
14 202217039751-FORM-26 [14-07-2022(online)].pdf 2022-07-14
15 202217039751-Correspondence-240822.pdf 2022-08-25
16 202217039751-Others-240822.pdf 2022-08-25
17 202217039751-Others-240822-1.pdf 2022-08-25
18 202217039751-FER.pdf 2022-12-21
19 202217039751-FORM 3 [22-12-2022(online)].pdf 2022-12-22
20 202217039751-FORM 3 [19-06-2023(online)].pdf 2023-06-19
21 202217039751-Information under section 8(2) [19-06-2023(online)].pdf 2023-06-19
22 202217039751-ABSTRACT [23-06-2023(online)].pdf 2023-06-23
23 202217039751-CLAIMS [23-06-2023(online)].pdf 2023-06-23
24 202217039751-COMPLETE SPECIFICATION [23-06-2023(online)].pdf 2023-06-23
25 202217039751-DRAWING [23-06-2023(online)].pdf 2023-06-23
26 202217039751-FER_SER_REPLY [23-06-2023(online)].pdf 2023-06-23
27 202217039751-OTHERS [23-06-2023(online)].pdf 2023-06-23
28 202217039751-Response to office action [06-05-2025(online)].pdf 2025-05-06

Search Strategy

1 SearchHistoryE_21-12-2022.pdf