Abstract: The present disclosure relates to a terrain inference system (1). At least in certain embodiments, the terrain inference system (1) is suitable for a host vehicle (2). A controller (6) is provided to monitor a target vehicle (3) to identify an attitude and/or a movement of the target vehicle (3). At least one terrain characteristic is inferred in dependence on the identified attitude of the target vehicle (3) and/or the identified movement of the target vehicle (3). The at least one terrain characteristic relates to a region of terrain proximal to the target vehicle (3). The present disclosure also relates to a host vehicle (2) incorporating the terrain inference system (1). The present disclosure also relates to a method of inferring at least one terrain characteristic; and a non-transitory computer-readable medium. [FIGURE 4]
The present disclosure relates to a terrain inference method and apparatus. In particular, but not exclusively, the present disclosure discloses a method and apparatus for inferring at least one terrain characteristic. The present disclosure has particular application in a vehicle, such as an automobile.
BACKGROUND
When driving a vehicle off-road, it can be advantageous to have advance knowledge of the terrain ahead, for example to assess the terrain making up the track ahead. Useful information can include track obstacles (holes, ruts, rough surfaces, side slopes, water crossings) or track direction (bends, slopes). Detecting these features is usually very difficult until the vehicle is traversing them, which often results in reactive systems that deal with them after the event.
The present invention seeks to implement a terrain inference apparatus and method for inferring at least one terrain characteristic.
SUMMARY OF THE INVENTION
Aspects of the present invention relate to a terrain inference system, a vehicle, a method
and a non-transitory computer-readable medium as claimed in the appended claims.
According to a further aspect of the present invention there is provided a terrain inference system comprising a controller configured to:
monitor a target vehicle;
identify an attitude of the target vehicle and/or a movement of the target vehicle; and
infer at least one terrain characteristic relating to a region of terrain proximal to the target vehicle in dependence on the identified attitude of the target vehicle and/or the identified movement of the target vehicle. The at least one terrain characteristic may be inferred with reference to the attitude and/or movement of the target vehicle. Thus, the at least one terrain characteristic may be determined indirectly with reference to the behaviour of the target vehicle. At least in certain embodiments, the terrain inference system may apply an inverse dynamics model to infer the at least one terrain characteristic in dependence on the determined behaviour of the target vehicle.
The target vehicle may be in front of the host vehicle. The target vehicle may, for example, be the vehicle in front of the host vehicle in a convoy or may be a lead vehicle in a convoy. The host vehicle may be a following vehicle (i.e. a vehicle which is following the target vehicle). At least in certain embodiments, the host vehicle and the target vehicle are both land vehicles. The host vehicle and the target vehicle may be wheeled vehicles. In a vehicle follow situation, data can be obtained relating to the target vehicle. It is possible to detect, for example, target vehicle roll, target vehicle inclination relative to the host vehicle, or small target vehicle deviations resulting from surface conditions. Computation of these parameters can be used to provide a prediction of approaching surface conditions, or to determine a direction or course of the track taken by the target vehicle. The terrain inference system could, for example, be used to implement a pro-active adaptive terrain system that prepares one or more systems in the host vehicle for a rough surface based on the observations made of the target vehicle. Another example may be a warning system to output an alert of a dangerous side slope ahead, for example based on the relative body angle of the target vehicle.
The inferred terrain characteristic may comprise at least one of the following set: an incline angle, an incline direction, a surface roughness, and a terrain composition. The incline angle may correspond to a gradient of the terrain which the target vehicle is traversing. The surface roughness may provide an indication of the prevailing surface conditions, for example the magnitude and frequency of surface irregularities. The terrain composition may provide an indication of whether the terrain comprises a solid/stable surface or an amorphous/unstable surface. The terrain composition may be determined, for example, by detecting a vertical displacement between an underside of the vehicle body and the surface of the terrain.
The terrain characteristic may be inferred in dependence on a roll angle and/or a pitch angle and/or a yaw angle of the target vehicle. For example, the incline angle and/or incline direction may be determined in dependence on one or more of the following: the roll angle, the pitch angle, the yaw angle of the target vehicle. Alternatively, or in addition, the terrain characteristic may be inferred in dependence on a rate of change of one or more of the following: the roll angle, the pitch angle, the yaw angle of the target vehicle.
The controller may be configured to generate a vehicle control parameter in dependence on the at least one inferred terrain characteristic. The vehicle control parameter comprises at least one of the following set: a drivetrain control parameter, a transmission control parameter, a chassis control parameter, and a steering control parameter. The terrain inference system
described herein may be installed in a host vehicle. The vehicle control parameter may be generated to control one or more vehicle systems in said host vehicle.
The controller may be configured to output an alert in dependence on the inferred terrain characteristic. The alert may, for example, notify a driver that the terrain is impassable or potentially hazardous. The controller may, for example, determine that an incline angle of the terrain exceeds a predefined incline threshold.
The identification of the attitude of the target vehicle may comprise identifying one or more of the following set: a target vehicle pitch angle, a target vehicle roll angle, and a target vehicle yaw angle.
The identification of the movement of the target vehicle may comprise identifying at least one of the following set: a change in the target vehicle pitch angle, a change in the target vehicle roll angle, and a change in the target vehicle yaw angle.
The identification of the movement of said target vehicle may comprise identifying at least one of the following set: a vertical movement, a transverse movement, and a longitudinal movement.
The identification of the movement of said target vehicle may comprise identifying an extension or a compression of a vehicle suspension.
The controller may be configured to receive image data from at least one image sensor, the controller being configured to process said image data to identify the attitude of the target vehicle and/or the movement of the target vehicle.
The controller may be configured to determine a geographic position of a target vehicle and to map said at least one terrain characteristic in dependence on the determined geographic position.
According to a further aspect of the present invention there is provided a vehicle comprising a terrain inference system as described herein.
According to a further aspect of the present invention there is provided a method of inferring at least one characteristic of the terrain proximal to a target vehicle, the method comprising: monitoring a target vehicle;
identifying an attitude of the target vehicle and/or a movement of the target vehicle; and
inferring said at least one terrain characteristic proximal to the target vehicle in dependence on the identified attitude and/or the identified movement.
The inferred terrain characteristic may comprise at least one of the following set: an incline angle, an incline direction, a surface roughness, and a terrain composition. The incline angle and/or the incline direction may be determined in dependence on a roll angle and/or a pitch angle and/or a yaw angle of the target vehicle.
The method may comprise generating a vehicle control parameter in dependence on the at least one inferred terrain characteristic. The vehicle control parameter comprises at least one of the following set: a drivetrain control parameter, a transmission control parameter, a chassis control parameter, and a steering control parameter. The chassis control parameter may, for example, adjust suspension controls and/or Electronic Stability Program (ESP) functions.
The method may comprise outputting an alert in dependence on the inferred terrain characteristic.
The identification of the attitude of said target vehicle may comprise identifying at least one of the following set: a target vehicle pitch angle, a target vehicle roll angle, and a target vehicle yaw angle.
The identification of the movement of said target vehicle may comprise identifying at least one of the following set: a change in the target vehicle pitch angle, a change in the target vehicle roll angle, and a change in the target vehicle yaw angle.
The identification of the movement of said target vehicle may comprise identifying at least one of the following set: a vertical movement, a transverse movement, and a longitudinal movement.
The identification of the movement of said target vehicle may comprise identifying an extension or a compression of a vehicle suspension.
The method may comprise receiving image data from at least one image sensor, the method comprising processing said image data to identify the attitude of the target vehicle and/or the movement of the target vehicle.
The method may comprise determining a geographic position of the target vehicle. The at least one terrain characteristic may be mapped in dependence on the determined geographic position.
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method(s) described herein.
The host vehicle may be a land vehicle. The target vehicle may be a land vehicle. The term "land vehicle" is used herein to refer to a vehicle configured to apply steering and drive (traction) forces against the ground. The vehicle may, for example, be a wheeled vehicle or a tracked vehicle.
The term "location" is used herein to refer to the relative position of an object on the surface of the earth. Unless indicated to the contrary, either explicitly or implied by the context, references herein to the location of an object refer to the geospatial location of that object.
It is to be understood that by the term 'type of terrain' is meant the material comprised by the terrain over which the vehicle is driving such as asphalt, grass, gravel, snow, mud, rock and/or sand. By 'off-road' is meant a surface traditionally classified as off-road, being surfaces other than asphalt, concrete or the like. For example, off-road surfaces may be relatively compliant surfaces such as mud, sand, grass, earth, gravel or the like. Alternatively or in addition off-road surfaces may be relatively rough, for example stony, rocky, rutted or the like. Accordingly in some arrangements an off-road surface may be classified as a surface that has a relatively high roughness and/or compliance compared with a substantially flat, smooth asphalt or concrete road surface.
Any control unit or controller described herein may suitably comprise a computational device having one or more electronic processors. The system may comprise a single control unit or electronic controller or alternatively different functions of the controller may be embodied in, or hosted in, different control units or controllers. As used herein the term "controller" or "control unit" will be understood to include both a single control unit or controller and a plurality of control units or controllers collectively operating to provide any stated control
functionality. To configure a controller or control unit, a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the control techniques specified herein. The set of instructions may suitably be embedded in said one or more electronic processors. Alternatively, the set of instructions may be provided as software saved on one or more memories associated with said controller to be executed on said computational device. The control unit or controller may be implemented in software run on one or more processors. One or more other control units or controllers may be implemented in software run on one or more processors, optionally the same one or more processors as the first controller. Other suitable arrangements may also be used.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the present invention will now be described, by way of
example only, with reference to the accompanying Figures, in which:
Figure 1 shows a plan view of a host vehicle incorporating a terrain inference system in accordance with an embodiment of the present invention;
Figure 2 shows a side elevation of the host vehicle shown in Figure 1 incorporating the terrain inference system in accordance with an embodiment of the present invention;
Figure 3 shows a schematic representation of the terrain inference system incorporated into the host vehicle shown in Figures 1 and 2;
Figure 4 shows a schematic representation of the combination of the data sets from the inertial measurement unit and the image processing module;
Figure 5 shows an exemplary image captured by the optical sensor and analysed to detect a discrete image component corresponding to the target vehicle;
Figure 6A illustrates the determination of a minimum inclination angle of a track on which the target vehicle is travelling;
Figure 6B illustrates the determination of a roll angle of the target vehicle;
Figure 6C illustrates the determination of a surface roughness by tracking the movement and/or attitude of the target vehicle;
Figure 7A illustrates an image acquired by a camera showing a target vehicle and a bounding box generated by an image processing module;
Figure 7B illustrates changes to the image shown in Figure 7A resulting from the target vehicle traversing a pothole; and
Figure 7C illustrates changes to the image shown in Figure 7A resulting from the target vehicle driving around a pothole.
DETAILED DESCRIPTION
A terrain inference system 1 in accordance with an embodiment of the present invention will
now be described with reference to the accompanying Figures.
As illustrated in Figures 1 and 2, the terrain inference system 1 is installed in a host vehicle 2. The host vehicle 2 is a wheeled vehicle, such as an automobile or an off-road vehicle. The terrain inference system 1 is operable to detect a target vehicle 3. The target vehicle 3 is a wheeled vehicle, such as an automobile or an off-road vehicle. The host vehicle 2 and the target vehicle 3 are both land vehicles (i.e. vehicles configured to apply steering and drive (traction) forces against the ground). The target vehicle 3 may, for example, be travelling in front of the host vehicle 2. For example, the target vehicle 3 may be a lead vehicle or a vehicle in front of the host vehicle 2 in a convoy. In this scenario, the host vehicle 2 may be a following vehicle which is travelling along the same route as the target vehicle 3.
The host vehicle 2 described herein comprises a first reference frame comprising a longitudinal axis X1, a transverse axis Y1 and a vertical axis Z1. The target vehicle 3 described herein comprises a second reference frame comprising a longitudinal axis X2, a transverse axis Y2 and a vertical axis Z2. The orientation of the first and second reference frames is described herein with reference to a horizontal axis X and a vertical axis Z.
The host vehicle 2 comprises four wheels W1-4. A torque is transmitted to the wheels W1-4 to apply a tractive force to propel the host vehicle 2. The torque is generated by one or more torque generating machines, such as an internal combustion engine or an electric traction machine, and transmitted to the driven wheels W1-4 via a vehicle powertrain. The host vehicle 2 in the present embodiment has four-wheel drive and, in use, torque is transmitted selectively to each of said wheels W1-4. It will be understood that the terrain inference
system 1 could also be installed in a host vehicle 2 having two-wheel drive. The host vehicle 2 in the present embodiment is an automobile having off-road driving capabilities. For example, the host vehicle 2 may be capable of driving on an un-metalled road, such as a dirt road or track. The host vehicle 2 may, for example, be a sports utility vehicle (SUV) or a utility vehicle, but it will be understood that the terrain inference system 1 may be installed in other types of vehicle. The terrain inference system 1 may be installed in other types of wheeled vehicles, such as light, medium or heavy trucks. The target vehicle 3 may have the same configuration as the host vehicle 2 or may have a different configuration.
A schematic representation of the terrain inference system 1 installed in the host vehicle 2 is shown in Figure 3. The terrain inference system 1 comprises a controller 6 having at least one electronic processor 7 and a memory 8. The processor 7 is operable to receive a data signal S1 from a sensing means 9. As described herein, the processor 7 is operable to process the image data signal S1. In the present embodiment, the processor 7 is configured to implement an image processing module 10 to analyse the image data signal S1. The image processing module 10 in accordance with the present invention is configured to detect the target vehicle 3 and to determine an attitude (orientation) and/or movement of the target vehicle 3. The processor 7 may optionally also control operation of the host vehicle 2 in dependence on the relative location of the target vehicle 3. For example, the processor 7 may be operable to control a target follow distance D1 between the host vehicle 2 and the target vehicle 3. The processor 7 may control selection of one or more driving modes of the host vehicle 2 in dependence on the monitoring of the target vehicle 3. For example, the processor 7 may be configured to control one or more of the following systems: All-Terrain Progress Control (ATPC), Hill Descent Control, Electronic Traction Control (ETC), Adaptive Dynamics, Dynamic Stability Control (DSC), and variable ratio Electric Power-Assisted Steering (EPAS). The processor 7 may, for example, control one or more of the following set: suspension settings; throttle response; brake response; and transmission settings. Alternatively, or in addition, the processor 7 may output a target follow distance signal SD1 to a cruise control module 11. The cruise control module 11 may be selectively operable in a follow mode suitable for controlling a target speed of the host vehicle 2 to maintain the target follow distance D1 between the host vehicle 2 and the target vehicle 3. The cruise control module 11 may output a target speed signal SV1 to an engine control module 12 which controls the output torque transmitted to the wheels W1-4. The cruise control module 11 may also generate a brake control signal for controlling a braking torque applied to said wheels W1-4. The processor 7 may optionally also output a steering control signal (not represented) to control an electronic power assisted steering module (not shown) to control a steering
angle of the host vehicle 2. The steering control signal may be output to control the host vehicle 2 to follow the path taken by the target vehicle 3.
As illustrated in Figures 1 and 2, the sensing means 9 is mounted in a forward-facing orientation to establish a detection region in front of the host vehicle 2. The sensing means 9 in the present embodiment comprises at least one optical sensor 13 mounted to the host vehicle 2. The sensing means 9 may comprise a single camera. Alternatively, the sensing means 9 may comprise a stereoscopic camera. The at least one optical sensor 13 may be mounted at the front of the host vehicle 2, for example incorporated into a front bumper or engine bay grille; or may be mounted within the vehicle cabin, for example in front of a rear-view mirror. The at least one optical sensor 13 has a field of view FOV having a central optical axis VX extending substantially parallel to the longitudinal axis X1 of the host vehicle 2. The field of view FOV is generally conical in shape and extends in horizontal and vertical directions. The at least one optical sensor 13 comprises a digital imaging sensor for capturing image data. The image data comprises an image IMG1 corresponding to a scene within the field of view FOV of the at least one optical sensor 13. The image data is captured substantially in real-time, for example at 30 frames per second. The at least one optical sensor 13 in the present embodiment is operable to detect light in the visible spectrum. The sensing means 9 comprises optics (not shown) for directing the incident light onto an imaging sensor, such as a charge-coupled device (CCD), operable to generate image data for transmission in the image data signal S1. Alternatively, or in addition, the sensing means 9 may be operable to detect light outside of the visible light spectrum, for example in the infra-red range to generate a thermographic image. Alternatively, or in addition, the sensing means 9 may comprise a Lidar sensor for projecting laser light in front of the host vehicle 2. Other types of sensor are also contemplated.
The sensing means 9 is connected to the controller 6 over a communication bus 14 provided in the host vehicle 2, as shown in Figure 3. The image data signal S1 is published to the communication bus 14 by the sensing means 9. In the present embodiment, the connection between the sensing means 9 and the controller 6 comprises a wired connection. In alternative embodiments, the connection between the sensing means 9 and the controller 6 may comprise a wireless connection, for example to enable remote positioning of the sensing means 9. By way of example, the sensing means 9 may be provided in a remote targeting system, such as a drone vehicle. The processor 7 is operable to read the image data signal S1 from the communication bus 14. The processor 7 extracts image data from the image data signal S1. In accordance with an aspect of the present invention, the image processing module 10 is configured to infer one or more characteristics of the terrain over which the target vehicle 3 is travelling in dependence on a determined attitude (orientation) and/or a determined movement of the target vehicle 3. The image processing module 10 cross-references the inferred terrain characteristic(s) with a determined geospatial location of the target vehicle 3. The image processing module 10 may thereby compile terrain data remote from the host vehicle 2. The resulting terrain data is particularly useful if the host vehicle 2 is following the target vehicle 3 along a particular route, as the host vehicle 2 will in due course traverse the same terrain. Accordingly, the terrain data may be used proactively to coordinate vehicle systems prior to encountering the terrain. The operation of the image processing module 10 will now be described.
The image processing module 10 parses the image data from the at least one optical sensor 13 to identify one or more image components IMC(n) within an image IMG1. The image components IMC(n) are preferably persistent features within the image IMG1, detectable within the image data for at least a predetermined time period or over a predetermined number of frames, for example two or more successive frames. In certain embodiments, the image components IMC(n) may comprise an identifiable feature or element contained within the image IMG1, for example comprising a plurality of pixels which are present in successive frames. The image processing module 10 implements an edge detection algorithm to detect edges within the image data. The image processing algorithm may, for example, be configured to identify points where the image brightness comprises discontinuities, particularly those points arranged into linear or curved line segments which may correspond to an edge. The image processing module 10 may apply a brightness threshold (which may be a predetermined threshold or a dynamic threshold) to identify the edges of the image components IMC(n) within the image IMG1. The identified edge(s) may be incomplete, for example in regions where image discontinuities are less pronounced. The image processing module 10 may complete the edges, for example utilising a morphological closing technique, to form a closed region. The or each closed region is identified as a discrete image component IMC(n). By repeating this process, the image processing algorithm may identify each image component IMC(n) contained within the image data.
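By way of non-limiting illustration, the edge detection and morphological closing steps described above could be sketched as follows using the OpenCV library. The Canny thresholds, kernel size and minimum contour area are illustrative assumptions, not values prescribed by the present disclosure.

```python
# Illustrative sketch of extracting image components IMC(n) from a frame.
import cv2
import numpy as np

def extract_image_components(frame_bgr, low_thresh=50, high_thresh=150):
    """Return a list of closed regions (candidate image components IMC(n))."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Detect brightness discontinuities arranged into line segments (edges).
    edges = cv2.Canny(grey, low_thresh, high_thresh)

    # Morphological closing completes incomplete edges to form closed regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # Each closed contour is treated as a discrete image component IMC(n);
    # very small regions are discarded (area threshold is an assumption).
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 100.0]
```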
The image processing module 10 implements a pattern matching algorithm to compare each of the image components IMC(n) identified in the image IMG1 to predefined patterns stored in the memory 8. The image processing module 10 classifies each of the image components IMC(n) in dependence on the correlation between each image component IMC(n) and the predefined patterns. The image processing module 10 may, for example, classify each image component IMC(n) as one of the following set: an obstacle 4; a target vehicle 3; a cyclist; a person (not shown); an animal, etc. In the present embodiment, the image processing module 10 is configured to identify the target vehicle 3 within the image IMG1. The pattern matching algorithm is implemented to determine if any of the image components IMC(n) identified in the image data (partially or completely) match one or more predefined patterns. The predefined patterns may, for example, comprise an object model defined in two dimensions (2-D) or three dimensions (3-D). The predefined patterns may be stored in the memory 8 and accessed by the image processing module 10. In the present embodiment, the predefined patterns correspond to a shape and/or profile of one or more target vehicles 3. Optionally, the predefined patterns may define a colour of the target vehicle 3, for example specified by a user or identified during an initial calibration procedure. Alternatively, or in addition, the predefined patterns may comprise a registration (number) plate mounted to an exterior of the target vehicle 3. The registration (number) plate comprises one or more alphanumeric characters and the attitude of the target vehicle 3 may be determined by analysing the image IMG1 to determine the perspective of said alphanumeric characters. The pattern corresponding to the registration (number) plate may be defined during a calibration phase. Known pattern matching techniques may be used to determine a correlation between the predefined patterns and the or each image component IMC(n). The image component IMC(n) corresponding to the target vehicle 3 may thereby be identified within the image IMG1.
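Again purely by way of illustration, the pattern matching step could be sketched using normalised cross-correlation. The pattern set, the resizing step and the 0.7 acceptance score are assumptions introduced for the example; any known pattern matching technique could equally be substituted.

```python
# Illustrative sketch: classify a cropped candidate region against predefined
# patterns held in memory. Inputs are assumed to be single-channel grayscale.
import cv2

def classify_component(component_roi, predefined_patterns):
    """predefined_patterns: dict mapping a class label ('target_vehicle',
    'obstacle', ...) to a grayscale template image."""
    best_label, best_score = None, 0.0
    for label, template in predefined_patterns.items():
        # Resize the template to the candidate region so the correlation
        # surface collapses to a single score.
        t = cv2.resize(template,
                       (component_roi.shape[1], component_roi.shape[0]))
        score = cv2.matchTemplate(component_roi, t,
                                  cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_label, best_score = label, score
    # Acceptance score of 0.7 is an assumption for the example.
    return best_label if best_score > 0.7 else None
```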
The image processing module 10 is configured to analyse the image component IMC(n) corresponding to the target vehicle 3 to estimate the attitude of the target vehicle 3. For example, the image processing module 10 may analyse the image component IMC(n) to estimate one or more of the following set: a target vehicle pitch angle (θ2), a target vehicle roll angle (β2), and a target vehicle yaw angle (γ2). The target vehicle pitch angle (θ2) is the included angle between the longitudinal axis X2 and the horizontal axis X. The target vehicle roll angle (β2) is the included angle between the vertical axis Z2 and the vertical axis Z. The target vehicle yaw angle (γ2) is the included angle between the longitudinal axis X1 of the host vehicle 2 and the longitudinal axis X2 of the target vehicle 3. The image processing module 10 may optionally also monitor movement of the target vehicle 3. The image processing module 10 may analyse changes in the image component IMC(n) with respect to time to estimate one or more of the following set: longitudinal movement (speed and/or acceleration) of the target vehicle 3; lateral movement (speed and/or acceleration) of the target vehicle 3, for example caused by side-slipping; and/or vertical movement (speed and/or acceleration) of the target vehicle 3. Alternatively, or in addition, the image processing module 10 may analyse changes in the image component IMC(n) with respect to time to estimate one or more of the following set: a change or rate of change of the target vehicle pitch angle (θ2), a change or rate of change of the target vehicle roll angle (β2), and a change or rate of change of the target vehicle yaw angle (γ2). It will be understood that the image processing module 10 may operate in conjunction with other sensors provided on the host vehicle 2 to monitor the target vehicle 3. The host vehicle 2 may comprise additional sensors suitable for tracking the movement of the target vehicle 3. By way of example, the host vehicle 2 may comprise one or more of the following set: an ultrasound sensor, a radar sensor and a lidar sensor.
The image processing module 10 may optionally also estimate the position of the target vehicle 3 relative to the host vehicle 2. For example, the image processing module 10 may determine the relative position of the target vehicle 3 in dependence on the size of the image component IMC(n) within the image IMG1; and/or the position of the image component IMC(n) within the image IMG1. By combining a known location of the host vehicle 2, for example derived from a global positioning system (GPS), with the relative position determined by the image processing module 10, a geospatial location of the target vehicle 3 may be determined. Alternatively, or in addition, the host vehicle 2 may receive geospatial location data transmitted from the target vehicle 3, for example using a suitable vehicle-to-vehicle communication protocol. The image processing module 10 outputs a target vehicle data signal ST1 to the terrain inference system 1.
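The combination of the host GPS fix with the image-derived relative position could, for example, take the following form. This minimal sketch assumes the relative position has already been resolved into a range and bearing in the host frame, and uses a flat-earth offset approximation which is adequate only over short distances; all names are introduced for the example.

```python
# Illustrative sketch: geospatial location of the target vehicle from the
# host GPS fix plus the image-derived relative position.
import math

EARTH_RADIUS_M = 6_371_000.0

def target_geospatial_location(host_lat_deg, host_lon_deg,
                               host_heading_rad, range_m, bearing_rad):
    """Combine the known host location with the relative position of the
    target (range in metres, bearing in radians relative to host heading)."""
    absolute_bearing = host_heading_rad + bearing_rad
    north_m = range_m * math.cos(absolute_bearing)
    east_m = range_m * math.sin(absolute_bearing)

    # Flat-earth (equirectangular) offset approximation.
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M *
                                  math.cos(math.radians(host_lat_deg))))
    return host_lat_deg + dlat, host_lon_deg + dlon
```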
It will be understood that the scene captured by the sensing means 9 is dependent on the attitude of the host vehicle 2 and/or movements of the host vehicle 2. In order to compensate for changes in the attitude and/or movements of the host vehicle 2, the terrain inference system 1 in the present embodiment is configured to receive an inertial measurement signal S2 from an inertial measurement unit (IMU) 15 provided in the host vehicle 2. The IMU 15 comprises one or more sensors 16 for measuring inertial movement of the host vehicle 2. The one or more sensors 16 measure a host vehicle pitch angle (θ1) and a host vehicle roll angle (β1). The host vehicle pitch angle (θ1) is the included angle between the longitudinal axis X1 and the horizontal axis X. The host vehicle roll angle (β1) is the included angle between the vertical axis Z1 and the vertical axis Z. The IMU 15 may determine a change (or a rate of change) of the host vehicle pitch angle (θ1) and a change (or rate of change) of the host vehicle roll angle (β1). The one or more sensors 16 may comprise one or more accelerometers (not shown) and/or one or more gyroscopes (not shown). The terrain inference system 1 analyses said inertial measurement signal S2 to determine movements of the host vehicle 2. Optionally, one or more movements of the host vehicle 2 may be estimated, for example in dependence on the inertial measurement signal S2. The estimation of one or more movements of the host vehicle 2 may, for example, be appropriate if the IMU 15 does not include a sensor for one or more degrees of movement.
As shown in Figure 4, the terrain inference system 1 is configured to correct the measured attitude and/or movements of the target vehicle 3 in dependence on the determined attitude and/or movements of the host vehicle 2. The orientation and the movement of the host vehicle 2 are derived from the IMU 15 (BLOCK 10); and the measured orientation and movement of the target vehicle 3 are derived from the image processing module 10 (BLOCK 20). A comparison algorithm is applied (BLOCK 30) to compare both data sets. The comparison algorithm may, for example, subtract the orientation and the movement of the host vehicle 2 from the measured orientation and movement of the target vehicle 3 to determine a corrected orientation and movement of the target vehicle 3. The corrected orientation of the target vehicle 3 may, for example, be defined relative to a horizontal axis and a vertical axis. The terrain inference system 1 uses the corrected orientation and movement of the target vehicle 3 to estimate the one or more terrain characteristics (BLOCK 40). The terrain inference system 1 may, for example, apply an inverse dynamics model to infer the at least one terrain characteristic. By monitoring the dynamic behaviour of the target vehicle 3, the terrain inference system 1 may infer one or more characteristics of the terrain over which the target vehicle 3 is travelling. The terrain inference system 1 may, for example, determine a surface gradient in dependence on the corrected orientation of the target vehicle 3. The surface gradient may be inferred with reference to the long-period behaviour of the target vehicle 3. The terrain inference system 1 may infer characteristics of the surface roughness by virtue of the magnitude and/or range and/or frequency of changes in the orientation of the target vehicle 3. For example, if the orientation of the target vehicle 3 is changing with a high frequency, the terrain inference system 1 may infer that the target vehicle 3 is travelling over a rough or irregular surface. The magnitude of the changes in the orientation of the target vehicle 3 may provide an indication of the size of any surface irregularities. The frequency of the changes in the orientation of the target vehicle 3 may provide an indication of the number of surface irregularities. The surface roughness may be inferred with reference to the short-period behaviour of the target vehicle 3. The surface composition may be inferred with reference to the position and/or the attitude of the target vehicle 3 relative to the surface.
The terrain inference system 1 may grade the terrain, for example by determining a surface roughness coefficient SRC. The surface roughness coefficient SRC provides an indication of the roughness of a surface SF over which the target vehicle 3 is travelling. The surface roughness coefficient SRC may, for example, provide an indication of the size and/or prevalence of surface irregularities. The surface roughness coefficient SRC may be determined in dependence on the magnitude and/or frequency of target vehicle movements, for example vertical movements. Alternatively, or in addition, the surface roughness coefficient SRC may be determined in dependence on changes in the target vehicle pitch angle (θ2) and/or the target vehicle roll angle (β2). The surface roughness coefficient SRC may be determined in dependence on the period of any such movements, for example differentiating between short-period oscillations and long-period oscillations of the target vehicle 3. In the present embodiment, the surface roughness coefficient SRC is in the range zero (0) to one (1), inclusive. The surface roughness coefficient SRC is set equal to one (1) if the surface is deemed to be very rough, for example corresponding to terrain that cannot be traversed by the host vehicle 2. The surface roughness coefficient SRC is set equal to zero (0) if the surface is deemed to be smooth, for example corresponding to a metalled road surface. The surface roughness coefficient SRC may grade the surface roughness between these endpoints. For example, a surface which is slightly rough may have a surface roughness coefficient SRC of 0.6.
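One illustrative way of computing such a coefficient is sketched below: the magnitude of the movements is captured by the RMS vertical displacement over a sliding window, and the period of the movements by the dominant spectral component, which separates short-period from long-period oscillations. The window, sample rate and normalisation constants are assumptions for the example, not values prescribed by this disclosure.

```python
# Illustrative sketch: surface roughness coefficient SRC in [0, 1].
import numpy as np

def surface_roughness_coefficient(vertical_disp_m, sample_rate_hz=30.0,
                                  rms_scale_m=0.15, freq_scale_hz=4.0):
    """vertical_disp_m: window of vertical displacement samples (metres)."""
    z = np.asarray(vertical_disp_m, dtype=float)
    z = z - z.mean()

    # Magnitude of the movements: RMS vertical displacement.
    rms = np.sqrt(np.mean(z ** 2))

    # Period of the movements: dominant spectral component (skip the DC bin).
    spectrum = np.abs(np.fft.rfft(z))
    freqs = np.fft.rfftfreq(z.size, d=1.0 / sample_rate_hz)
    dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]

    # Equal weighting of magnitude and frequency is an assumption; both
    # terms are clipped so that SRC stays within [0, 1].
    src = (0.5 * min(rms / rms_scale_m, 1.0)
           + 0.5 * min(dominant_hz / freq_scale_hz, 1.0))
    return min(max(src, 0.0), 1.0)
```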
The operation of the terrain inference system 1 will now be described. An exemplary image IMG1 captured by the sensing means 9 disposed on the host vehicle 2 is shown in Figure 5. The image processing module 10 identifies a plurality of image components IMC(n) within the image IMG1. Using appropriate pattern matching techniques, the image processing module 10 classifies a first of said image components IMC(1) as corresponding to the target vehicle 3. The image processing module 10 analyses the first image component IMC(1) to determine the target vehicle pitch angle (θ2), the target vehicle roll angle (β2), and the target vehicle yaw angle (γ2). The terrain inference system 1 determines the host vehicle pitch angle (θ1) and the host vehicle roll angle (β1) in dependence on the inertial measurement signal S2 received from the IMU 15. By combining the datasets relating to the host vehicle 2 and the target vehicle 3, the terrain inference system 1 determines the corrected orientation and/or corrected movement of the target vehicle 3. The image processing module 10 in the present embodiment is configured to track the first image component IMC(1), for example over successive frames of the image data or at predetermined time intervals. The image processing module 10 may thereby monitor the target vehicle 3.
As shown in Figure 6A, the height H (elevation) of the target vehicle 3 relative to the host vehicle 2 may be determined in dependence on the vertical position of the first image component IMC(1) within the first image IMG1. By determining a longitudinal distance between the host vehicle 2 and the target vehicle 3, the terrain inference system 1 may estimate a minimum inclination angle (α) of the surface between the host vehicle 2 and the target vehicle 3. As shown in Figure 6B, the target vehicle roll angle (β2) is calculated by comparing a vertical axis of the first image component IMC(1) to a reference vertical axis. The terrain inference system 1 may thereby determine that the target vehicle 3 is disposed on an inclined surface having a side slope angle substantially equal to the calculated target vehicle roll angle (β2). As shown in Figure 6C, the terrain inference system 1 determines the surface roughness coefficient SRC in dependence on the magnitude and/or frequency of changes in the vertical position of the target vehicle 3. The terrain inference system 1 may optionally also consider the magnitude and/or frequency of changes in the target vehicle pitch angle (θ2). As outlined above, the terrain characteristics are cross-referenced with the determined geospatial location of the target vehicle 3, for example to generate a terrain map.
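The Figure 6A geometry reduces to simple trigonometry, as the following sketch illustrates; the function name is introduced for the example only.

```python
# Illustrative sketch: minimum inclination angle (alpha) of the surface
# between the host and target vehicles, from the relative height H and the
# longitudinal distance between them.
import math

def minimum_inclination_angle(height_m, longitudinal_distance_m):
    """Returns the minimum incline angle (alpha) in degrees."""
    return math.degrees(math.atan2(height_m, longitudinal_distance_m))

# e.g. a target 2 m above the host at a longitudinal distance of 20 m
# implies an average gradient of at least atan(2/20), about 5.7 degrees.
```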
The terrain inference system 1 in accordance with the present embodiment has particular application in an off-road environment. When the host vehicle 2 and the target vehicle 3 are travelling off-road, the determination of the terrain characteristics is usually more important than in an on-road environment. The terrain inference system 1 may be selectively activated when the host vehicle 2 is travelling off-road, for example in response to a user input or automatically when an off-road driving mode is selected. The terrain inference system 1 may track the target vehicle 3, for example to determine the route taken by the target vehicle 3. The terrain inference system 1 may generate a corresponding target route for the host vehicle 2. At least in certain embodiments, the image processing module 10 may calculate the speed and/or the trajectory of the target vehicle 3. It will be understood, however, that the terrain inference system 1 may be utilised in an on-road setting (i.e. on a metalled surface), for example to facilitate identification of a traffic calming measure, such as a speed hump or a speed table, or a pothole.
A variant of the terrain inference system 1 will now be described with reference to Figures 7A, 7B and 7C. Like reference numerals are used for like components. The terrain inference system 1 is suitable for inferring the presence of an obstacle 16, such as a pothole or other terrain feature, in the path of the target vehicle 3. The obstacle 16 may be present in a metalled surface or an un-metalled surface.
The terrain inference system 1 comprises at least one optical sensor 13 configured to capture an image IMG2. The optical sensor 13 in the present embodiment comprises a forward-facing camera disposed on the host vehicle 2 and operable to capture a video image, for example comprising twenty (20) images per second. The camera may comprise a mono camera or a stereoscopic camera. As described herein, the image processing module 10 is configured to process the images captured by the optical sensor 13 to identify and track the target vehicle 3. An exemplary image IMG2 captured by the optical sensor 13 is shown in Figure 7A. The image processing module 10 analyses the image IMG2 to identify and classify an image component IMC(1) corresponding to the target vehicle 3. The image processing module 10 adds a bounding box 17 around the image component IMC(1) in the image IMG2. A suitable method of generating the bounding box 17 comprises identifying corners 18A-D of the image component IMC(1). Horizontal and vertical lines are drawn between the corners 18A-D to complete the bounding box 17. The image processing unit 10 is configured to perform this operation at least substantially in real-time. The bounding box 17 moves with the target vehicle 3, thereby enabling the image processing module 10 to track movement of the target vehicle 3 in a sequence of images. Over a period of time the image processing module 10 will track the bounding box 17 and determine its normal range of movement in a vertical direction and/or a transverse direction. Alternatively, or in addition, the terrain inference system 1 may comprise a radar sensor or other type of sensor.
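A minimal sketch of the bounding box generation and tracking described above follows. The axis-aligned representation of the box and the use of three standard deviations to characterise the normal range of movement are assumptions for the example.

```python
# Illustrative sketch: bounding box 17 completed from the detected corners
# 18A-D, plus a running record of its normal range of movement.
import numpy as np

def bounding_box(corners):
    """corners: iterable of (x, y) pixel coordinates for corners 18A-D."""
    pts = np.asarray(corners, dtype=float)
    (x_min, y_min), (x_max, y_max) = pts.min(axis=0), pts.max(axis=0)
    return x_min, y_min, x_max, y_max

class BoxTracker:
    """Tracks the bounding box centre over successive frames to learn its
    normal range of vertical and transverse movement."""

    def __init__(self):
        self.centres = []

    def update(self, box):
        x_min, y_min, x_max, y_max = box
        self.centres.append(((x_min + x_max) / 2.0, (y_min + y_max) / 2.0))

    def normal_range(self):
        c = np.asarray(self.centres)
        # Normal range of movement in the transverse (x) and vertical (y)
        # directions, expressed here as +/- 3 standard deviations.
        return c.mean(axis=0), 3.0 * c.std(axis=0)
```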
Upon identifying an obstacle 16, a driver of a vehicle may elect to drive over the obstacle 16 or to drive around the obstacle 16. If the vehicle drives over the obstacle 16, there is typically a corresponding vertical movement of the vehicle. If the vehicle drives around the obstacle 16, there is a corresponding lateral movement of the vehicle. The terrain inference system 1 in the present embodiment is configured to identify short-period perturbations which may correspond to a target vehicle 3 driving over or around an obstacle 16. Any such perturbations may indicate that the target vehicle 3 is reacting to an obstacle 16 in its path. The terrain inference system 1 may infer terrain characteristics in dependence on the perturbations in the movement of the target vehicle 3. By analysing the movement of the target vehicle 3, the terrain inference system 1 may categorise the type or nature of the obstacle 16. For example, if the obstacle 16 is a pothole, the movement may comprise a downwards movement followed by an upwards movement. If the obstacle 16 is a ridge or a speed hump, the movement may comprise an upwards movement followed by a downwards movement. The terrain inference system 1 may identify such movements in the target vehicle 3 and infer characteristics of the obstacle 16. The host vehicle 2 may act upon this information and take appropriate pre-emptive action to mitigate the effect of the obstacle 16. In dependence on the terrain characteristics inferred by the terrain inference system 1, the host vehicle 2 could, for example, implement a steering change, or may re-configure a vehicle suspension, for example by changing damper settings.
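The categorisation of the obstacle from the order of the vertical excursions could be sketched as follows; the excursion threshold and the two-class output are illustrative assumptions.

```python
# Illustrative sketch: categorise the obstacle from a short window of the
# target vehicle's vertical perturbation.
def categorise_obstacle(vertical_disp_m, threshold_m=0.05):
    """vertical_disp_m: short window of vertical displacement samples,
    zero-referenced to the target vehicle's normal ride height."""
    samples = list(vertical_disp_m)
    peak, trough = max(samples), min(samples)
    if peak < threshold_m and abs(trough) < threshold_m:
        return None  # no significant short-period perturbation

    # The order of the extremes distinguishes the two signatures described
    # above: trough first suggests a pothole, peak first a ridge or hump.
    if samples.index(trough) < samples.index(peak):
        return "pothole"        # downwards movement followed by upwards
    return "ridge_or_hump"      # upwards movement followed by downwards
```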
The operation of the terrain inference system 1 to infer terrain characteristics is illustrated with reference to the images IMG3 and IMG4 shown in Figures 7B and 7C respectively. The obstacle 16 in the illustrated examples comprises a pothole. If the target vehicle 3 drives over the pothole with one wheel, there is a sudden movement of the target vehicle 3 which causes a rolling motion. This rolling motion of the target vehicle 3 can be detected by analysing the image IMG3. In particular, the image processing module 10 may estimate a target vehicle roll angle (β2) by calculating an angle between the top and bottom sides of the bounding box 17 and a horizontal reference plane Y.
Alternatively, or in addition, the image processing module 10 may be configured to detect vertical movement of the target vehicle 3 by monitoring the position of the bounding box 17. The vertical movement of the target vehicle 3 may be detected by monitoring the vertical position of one or more sides of the bounding box 17 in the image IMG3. If the target vehicle 3 traverses a pothole or a speed restriction hump with both wheels, the resulting movement of the target vehicle 3 would comprise a vertical movement with or without a change in the roll angle. The image processing module 10 may be configured to detect a corresponding change in the vertical position of the bounding box 17 in the image IMG3.
Alternatively, or in addition, at least one threshold may be predefined for relative movement of diametrically opposed corners 18A-D of the bounding box 17. If the movement of the diametrically opposed corners 18A-D of the bounding box 17 exceeds the predefined threshold(s), the image processing module 10 may determine that the target vehicle 3 has traversed a pothole. The at least one threshold may be generated from one or more previous observations of the target vehicle 3. The at least one threshold may be calibrated by comparing detected movements of the target vehicle 3 with measured behaviour of the host vehicle 2 traversing the same obstacle 16. The thresholds may be adjusted dynamically, for example adjusted in dependence on an estimated speed of the target vehicle 3.
If the target vehicle 3 drives around the obstacle 16, there is a change in the trajectory of the target vehicle 3. This change in trajectory may occur rapidly as the driver of the vehicle may have a relatively short period of time in which to drive around the obstacle 16. As illustrated in Figure 7C, if the target vehicle 3 drives around a pothole, there is a first lateral movement to avoid the pothole which may optionally be followed by a second lateral movement to return the target vehicle 3 to the original trajectory. In this example, it will be appreciated that the first and second lateral movements are in opposite directions. The image processing module 10 may be configured to detect the first lateral movement and optionally also the second lateral movement of the target vehicle 3 which are indicative of an avoidance manoeuvre. The image processing module 10 may detect the lateral movement(s) of the target vehicle 3 by identifying a movement of the bounding box 17. The image processing module 10 may be configured to identify a lateral movement ΔY which exceeds a predetermined threshold, for example within a set time period. The lateral movement ΔY is illustrated in Figure 7C by a first bounding box 17' shown as a dashed line representing the position of the target vehicle 3 at a first time; and a second bounding box 17 shown as a continuous line representing the position of the target vehicle 3 at a second time. The threshold may be set by a calibration process or derived from observation of movement of the target vehicle 3 over a period of time. The thresholds may be adjusted dynamically, for example in dependence on an estimated speed of the target vehicle 3.
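A sketch of the avoidance-manoeuvre test follows: a first lateral movement ΔY exceeding a threshold within a set time period, optionally followed by a second movement in the opposite direction. The window length and pixel threshold are assumptions for the example and would in practice be set by the calibration process described above.

```python
# Illustrative sketch: detect an avoidance manoeuvre from the per-frame
# lateral (x) position of the bounding box centre.
def detect_avoidance(lateral_positions, threshold_px=40.0, window=15):
    """lateral_positions: per-frame x-coordinate of the bounding box centre."""
    for i in range(len(lateral_positions) - window):
        # First lateral movement: displacement over the window exceeds the
        # threshold within the set time period.
        delta_y1 = lateral_positions[i + window] - lateral_positions[i]
        if abs(delta_y1) < threshold_px:
            continue

        # Optional second lateral movement in the opposite direction,
        # returning the target vehicle to its original trajectory.
        rest = lateral_positions[i + window:]
        for j in range(len(rest) - window):
            delta_y2 = rest[j + window] - rest[j]
            if abs(delta_y2) > threshold_px and delta_y1 * delta_y2 < 0:
                return "avoidance_with_return"
        return "avoidance"
    return None
```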
The terrain inference system 1 may determine a geospatial position of the obstacle 16. For example, the image processing module 10 may estimate a position of the obstacle 16 with reference to a known location of the host vehicle 2. The image processing module 10 may be configured to track a wheel path of the target vehicle 3. The wheel path could be used to estimate a location of the obstacle 16 that prompted a change in the trajectory of the target vehicle 3.
The terrain inference system 1 described herein infers terrain characteristics in dependence on the movement or behaviour of another vehicle (the target vehicle 3), typically the vehicle in front of the host vehicle 2. The terrain inference system 1 may thereby infer terrain characteristics which are obscured from on-board sensors by the target vehicle 3. This has particular advantages if the distance between the host vehicle 2 and the target vehicle 3 is relatively small, for example when operating in traffic. The operation of a conventional scanning system, for example utilising a radar system, which directly scans the terrain may be impaired in this scenario.
It will be appreciated that various modifications may be made to the embodiment(s) described herein without departing from the scope of the appended claims.
The image processing module 10 may be configured to detect and track the rear (tail) lights on a rear surface of the target vehicle 3. This technique may be used instead of, or in addition to, the techniques described herein to identify an outline of the target vehicle 3. This approach may be advantageous at night or in restricted visibility conditions. The host vehicle 2 could optionally emit light, for example from the headlamps, which is reflected off the rear (tail) lights of the target vehicle 3.
The present invention has been described with particular reference to sensing means 9 which is forward-facing to enable detection and classification of the target vehicle 3 in front of the host vehicle 2 when it is travelling in a forward direction. It will be understood that the invention may be implemented in other configurations, for example comprising sensing means 9 which is side-facing and/or rear-facing. The image processing module 10 could optionally be configured to track movements of the wheels of the target vehicle 3. Any such movements of the wheels of the target vehicle 3 may provide an indication of the operation of the suspension of the target vehicle 3. The terrain inference system 1 may, for example, determine the surface roughness coefficient SRC in dependence on analysis of the behaviour of the vehicle suspension (not shown). For example, the extent and/or frequency of changes in the suspension height may be used to determine the surface roughness coefficient SRC.
The host vehicle 2 may be configured to transmit the determined terrain characteristics, for example to relay them to another vehicle (discrete from said host vehicle 2 and the target vehicle 3).
CLAIMS:
1. A terrain inference system comprising a controller
configured to:
monitor a target vehicle;
identify an attitude of the target vehicle and/or a movement of the target vehicle; and
infer at least one terrain characteristic relating to a region of terrain proximal to the target vehicle in dependence on the identified attitude of the target vehicle and/or the identified movement of the target vehicle.
2. A terrain inference system as claimed in claim 1, wherein the inferred terrain characteristic comprises at least one of the following set: an incline angle, an incline direction, a surface roughness, and a terrain composition.
3. A terrain inference system as claimed in claim 1 or claim 2, wherein the controller is configured to generate a vehicle control parameter in dependence on the at least one inferred terrain characteristic.
4. A terrain inference system as claimed in claim 3, wherein the vehicle control parameter comprises at least one of the following set: a drivetrain control parameter, a transmission control parameter, a chassis control parameter, and a steering control parameter.
5. A terrain inference system as claimed in any one of the preceding claims, wherein identifying the movement of said target vehicle comprises identifying at least one of the following set: a change in the target vehicle pitch angle, a change in the target vehicle roll angle, a change in the target vehicle yaw angle, a vertical movement, a transverse movement, a longitudinal movement, and an extension or a compression of a vehicle suspension.
6. A terrain inference system as claimed in any one of the preceding claims, wherein the controller is configured to receive image data from at least one image sensor, the controller being configured to process said image data to identify the attitude of the target vehicle and/or the movement of the target vehicle.
7. A terrain inference system as claimed in any one of the preceding claims, wherein the controller is configured to determine a geographic position of a target vehicle and to map said at least one terrain characteristic in dependence on the determined geographic position.
8. A vehicle comprising a terrain inference system as claimed in any one of the preceding claims.
9. A method of inferring at least one characteristic of the terrain proximal to a target vehicle, the method comprising:
monitoring a target vehicle;
identifying an attitude of the target vehicle and/or a movement of the target vehicle; and
inferring said at least one terrain characteristic proximal to the target vehicle in dependence on the identified attitude and/or the identified movement.
10. A non-transitory computer-readable medium having a set of instructions stored
therein which, when executed, cause a processor to perform the method claimed in claim 9.