Abstract: Embodiments of the present invention provide a method, a controller, a system and a vehicle for determining a dimension of an object attached to a vehicle. The method comprises receiving information from a sensing means mounted to a closure member of the vehicle, when the closure member is in an open position; and determining the dimension in dependence on the information received from the sensing means.
TECHNICAL FIELD
Aspects of the invention relate to a method, a controller, a vehicle and a system for determining a dimension of an object attached to a vehicle.
BACKGROUND
Collision of vehicles with overhead obstacles is a known problem, particularly for high-sided vehicles such as trucks. In order to reduce the risk of collision it is known to attach a radar device to the vehicle. The radar device scans an environment ahead of the vehicle for potential collision hazards. It is also known to scan areas adjacent to a vehicle, for example, to detect a potential collision when changing lane. These systems are also particularly relevant for self-driving (autonomous) vehicles. Objects may be attached to vehicles which change the overall dimension of the vehicle, for example trailers or loads placed on a roof of a vehicle, and these present problems for the aforesaid systems.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention may be understood with reference to the appended claims.
In an aspect, there is a method for determining a dimension of an object attached to a vehicle. The method comprises receiving information from a sensing means mounted to a closure member of the vehicle, when the closure member is in an open position; and determining the dimension of the object in dependence on the information received from the sensing means. The information received from the sensing means may be indicative of a dimension of the object attached to the vehicle.
Aspects of the present invention provide determination of the dimension of an object attached to a vehicle.
In embodiments, the dimension of the object may be one of: a height of an object above a roof of the vehicle; a length of an object above the roof of the vehicle; and/or a length of an object attached to the rear of the vehicle.
In embodiments, the sensing means may comprise an imaging device. In embodiments, an imaging device may comprise a camera.
In embodiments, the closure member may be moved automatically to the open position, then the closure member may be automatically moved to a closed position after the information has been received from the sensing means. This may further provide a convenient method that determines a dimension of an object with minimal input from a user of the vehicle.
In embodiments, the sensing means may comprise a distance or range sensing means operable to determine the distance between the sensing means and the object. In embodiments, the distance or range sensing means may comprise a stereo camera and/or proximity sensor and/or rangefinder. This may further improve the accuracy of determination of a dimension of the object.
In embodiments, the effective direction of the sensing means may be changed. In embodiments, the sensing means may be rotatably mounted to the closure member. This may further improve the functionality of the sensing means. For example, the camera may be able to rotate to further adapt to larger objects or different positions of objects.
In embodiments, the open position of the closure member may comprise the maximally open position of the closure member. For some vehicle body types, or object locations, the maximally open position may be the optimal position for a closure member mounted sensing means.
In embodiments, information may be received from the sensing means from two or more open positions, and determining the dimension may be in dependence on the information received from the sensing means in the two or more open positions. Multiple closure member positions may be used to further improve the accuracy of determination of a dimension.
In embodiments, distance information may be received from the sensing means, and determining the dimension may be in dependence on the distance information. Distance information may relate to the distance between the sensing means and the object. Distance information may be used to determine a dimension of the object, or it may be used in combination with other information from the sensing means (e.g. an image) to further improve the accuracy of determination of the dimension.
In embodiments, the information received from the sensing means may comprise at least one image of at least a portion of the object.
In embodiments, the at least one image may comprise pixels and determining the dimension of the object may comprise applying a threshold to the pixels of the at least one image.
In embodiments, determining the dimension of the object may comprise subtracting a predetermined image of the vehicle without the object attached to the vehicle from the at least one image.
In embodiments, determining the dimension of the object may comprise determining pixel dimension information of the at least one image in dependence on the distance information.
In embodiments, determining the dimension of the object may comprise identifying the dimension of the object in the at least one image, and counting the pixels associated with the dimension of the object in the at least one image.
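The conversion from a counted number of pixels to a physical dimension may be sketched with the pinhole-camera relation. The function name, the focal length value and the example figures below are illustrative assumptions only, and do not form part of the disclosed embodiments.

```python
def pixel_count_to_metres(pixel_count, distance_m, focal_length_px):
    """Pinhole-camera relation: size = pixels * distance / focal length.

    `focal_length_px` is the camera focal length expressed in pixels,
    an intrinsic calibration value assumed to be known in advance."""
    return pixel_count * distance_m / focal_length_px

# Illustrative example: an object spanning 400 pixels, imaged from
# 1.5 m with a focal length of 1000 pixels, is 0.6 m tall.
height_m = pixel_count_to_metres(400, 1.5, 1000.0)
```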
In embodiments, the determined dimension may be compared to a detected road condition. A detected road condition may, for example, comprise a determined height of an overhead obstacle or the length of a space in an adjacent road lane.
In embodiments, the dimension may be a height of an object attached to the roof of the vehicle and the height of the object may be compared to the height of an overhead obstacle, such as a barrier.
In an aspect, there is a controller for determining a dimension of an object attached to a vehicle. The controller comprises input means to receive information from sensing means mounted to a closure member of the vehicle, when the closure member is in an open position; and processing means to determine the dimension in dependence on the received information.
In an aspect, there is a system for determining a dimension of an object attached to a vehicle. The system comprises sensing means adapted to be mounted to a closure member of a vehicle; and a controller comprising input means to receive information from sensing means mounted to a closure member of the vehicle, when the closure member is in an open position; and processing means to determine the dimension in dependence on the received information.
In embodiments, the input means may be adapted to receive information from the sensing means from multiple open positions of the closure member.
In embodiments, the input means may be adapted to receive distance information from the sensing means, and the processing means may be adapted to determine the dimension of the object in dependence on the distance information.
In embodiments, the input means may be adapted to receive at least one image from the sensing means.
In embodiments, the processing means may be adapted to determine the dimension of the object by applying a threshold to pixels of the at least one image.
In embodiments, the processing means may be adapted to determine pixel dimension information in dependence on the distance information.
In embodiments, the sensing means may comprise an imaging device. In embodiments, an imaging device may comprise a camera.
In embodiments, the sensing means may comprise a distance or range sensing means operable to determine the distance between the sensing means and the object. In embodiments, the distance or range sensing means may comprise a stereo camera and/or proximity sensor and/or rangefinder.
In embodiments, the sensing means may be adapted so that the effective direction of the sensing means can be changed. In embodiments, the effective direction may be changed by rotating the sensing means. In embodiments, the sensing means may be adapted to be rotatably actuated. In embodiments, a system may comprise actuators to rotate sensing means.
In embodiments, the closure member may include any one of: a tailgate, a boot lid (trunk lid), a door, or a bonnet (hood) of a vehicle. In embodiments, the closure member is a tailgate. In an example, the closure member may be pivotably mounted to a body of the vehicle.
In an aspect there is a vehicle comprising any system or controller described herein or a vehicle, controller or system adapted to perform any method described herein.
In embodiments, the vehicle may be adapted to be driven autonomously.
In an aspect there is computer software which, when executed, may perform any method described herein. In embodiments the computer software may be stored on a non-transitory computer readable medium.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described by way of example only, with reference to the accompanying figures, in which:
Figure 1 shows an embodiment method of the invention;
Figure 2 shows an embodiment controller and system of the invention;
Figures 3a to 3c show embodiment vehicles of the invention; in Figure 3a a vehicle is shown with an object above the roof of the vehicle; in Figure 3b the vehicle of Figure 3a is shown with the tailgate in an open position; and in Figure 3c a vehicle is shown with an object attached to the rear of the vehicle;
Figure 4 shows an embodiment method of the invention;
Figure 5 shows an image processing method according to a step shown in Figure 4; and
Figures 6a and 6b illustrate intermediate images according to the image processing method shown in Figure 5.
DETAILED DESCRIPTION
Before describing several embodiments of the invention, it is to be understood that the invention is not limited to the details of construction or process steps set forth in the following description. It will be apparent to those skilled in the art having the benefit of the present disclosure that the invention is capable of other embodiments and of being practiced or being carried out in various ways.
The term “vehicle” used herein may refer to but is not limited to automobiles. The term vehicle may include automobiles with SUV, MPV, saloon (sedan) or estate (station wagon) body types, amongst others. The term vehicle may include combustion or electrically propelled vehicles. The term vehicle may include user driven or self-driven vehicles. A vehicle may have a forward direction and a rear direction opposite thereto. The forward direction is aligned with the normal direction of travel of the vehicle.
The term “object” used herein in reference to an object attached to a vehicle, may refer to an object that is attached, connected or otherwise mounted to the vehicle when the vehicle is in motion. For example, object may refer to an item such as luggage mounted to the roof of a vehicle, or it may refer to an object at the rear of the vehicle such as a trailer or any other item attached to the tow bar of a vehicle, or a long item protruding from the trunk of the vehicle. Aspects of the invention may be applicable to vehicles with more than one object attached, therefore, object is intended to also encompass the plural form, i.e. objects. An object may be attached to a vehicle directly or may be attached via a connecting apparatus. Connecting apparatus may include, amongst others, a tow bar or roof bars.
The object may have a dimension associated therewith. The term “dimension” used herein in reference to an object may refer to any dimension of the object which extends the overall dimension of the vehicle. In the example of an object mounted to the roof of the vehicle, the dimension may be the height of the object above the roof of the vehicle. In the example of the trailer, the dimension may be the length of the trailer. Dimension may also relate to the position, location, distance from a reference point, volume or area of the object.
The term “closure member” used herein in reference to a body aperture of a vehicle may refer to any member that can be opened to present an aperture in the body of a vehicle. The term body aperture may refer to an opening in a vehicle body that can be closed by a closure member. A closure member may include any one of a tailgate, a door, boot lid (trunk lid), or bonnet (hood) of a vehicle. In particular, the term “closure member” may include any closure member that is openable to a position where at least a part of the member is above the height of the vehicle roof. Non-limiting examples include a tailgate, bonnet (hood), boot lid (trunk lid), gullwing doors, scissor doors, canopy doors and the like.
The term “tailgate” used herein may refer to a closure member that is generally at the rear of the vehicle. The tailgate may be openable, for example, to access the trunk (boot) space.
The term “open position” used in reference to the closure member may refer to any position of the closure member wherein the closure member is not in a closed position. A closed position is where no opening is presented from outside the vehicle to the interior of the body via the closure member.
The term “maximally open position” used herein may refer to the position of the closure member where it is fully opened, i.e. the closure member is at the end of its range of motion.
The term “image processing” as used herein may refer to manipulation and/or analysis of a digital image. For example, algorithms may be applied to one or more digital images to alter the images or to determine information from the images.
The term “predetermined measurement” in reference to the vehicle used herein may refer to any dimension information that is indicative of a dimension of a part of the vehicle. The dimension may be measured or estimated.
The term “detected road condition” used herein may refer to any condition sensed by sensors of a vehicle. It may include detecting the presence and position of obstacles, or any other condition arising during driving.
The term “self-driving system” used herein may refer to a system whereby a vehicle is operated autonomously or partially-autonomously. A vehicle that is operated autonomously may be one where at least the steering and speed of the vehicle is controlled by a controller. A vehicle that is operated partially-autonomously may be one where at least the steering and speed of the vehicle is controlled by a combination of a user and a controller.
The term “effective direction” used herein in relation to sensing means may refer to the direction in which the sensing means operates to perform a sensing function. For example, where the sensing means comprises a rangefinder, the effective direction may be the direction in which the sensing means establishes a range between itself and the target object. Similarly, where the sensing means comprises an imaging device, the effective direction may be the centre line from the sensor to the field of view of the image. The effective direction may be considered in relation to the mounting of the sensing means, i.e. to the closure member.
Referring to Figure 1, an embodiment method 100 for determining a dimension of an object attached to a vehicle is shown. The method comprises receiving information 104 from a sensing means mounted on a closure member of the vehicle, whilst the closure member is in an open position. The method also comprises determining 106 the dimension of the object in dependence on the information received from the sensing means. The method may also comprise a prior step of positioning 102 the closure member of the vehicle in an open position. The information received from the sensing means in step 104 may be indicative of the object attached to the vehicle. The open position may be a predetermined open position arranged to place the sensing means in a known position and orientation relative to a body of the vehicle. Additionally or alternatively, the open position may comprise a plurality of such known open positions, each of which places the sensing means in known positions and orientations relative to the body of the vehicle.
The vehicle may be an automobile, for example a saloon or SUV type vehicle. The vehicle comprises a closure member with sensing means mounted thereto.
The sensing means may comprise one or more sensors. The sensors may include any of an imaging device, or a distance sensing means such as a stereo camera, proximity sensor or rangefinder. Example sensors include electromagnetic, ultrasonic, or laser proximity sensors amongst others. The sensing means may be mounted to the closure member so that when the closure member is in an open position, the sensing means can be orientated towards the object. In embodiments, the sensing means may be mounted close to a rear-most portion of the closure member. In embodiments, the sensing means may be rotatably mounted to the closure member so that the sensing means can be orientated towards the object. In an example, the sensing means is mounted to the closure member so as to be positioned distal from the body of the vehicle when the closure member is in the open position. Thus the effective direction of a sensor may be changed. In embodiments, the sensing means may be rotated by hand or by an actuator. In embodiments, the sensing means may remain in a fixed position, and the effective direction of the sensing means may be changed. For example, the effective direction may be changed by changing the orientation of a reflective surface that the sensing means is orientated towards. The reflective surface may be actuated to change the effective direction. Alternatively, the sensing means may comprise a plurality of sensors, positioned at different spacings and/or orientations. Different sensors of the sensing means may then be selected to change the effective direction of the sensing means.
The closure member may comprise a tailgate, for example. The closure member may be moved or positioned into an open position, whereby the sensing means is orientated towards the object. In an example embodiment, the sensing means may be mounted to a tailgate to face rearwards or upwards from the vehicle when the tailgate is closed. When the tailgate is in an open position as in step 102, the mounting of the sensing means may be arranged so that the sensing means is orientated in a generally forwards direction, e.g. towards an object on the roof of the vehicle. Similarly, the sensing means may be mounted so that when the tailgate is in an open position, the sensing means is orientated rearwards and optionally downwards, e.g. towards an object attached to the rear of the vehicle.
In step 104, whilst the tailgate is open, the sensing means may generate information indicative of the object attached to the vehicle. This information is then used in step 106 to determine the dimension of the object. In an example embodiment where the sensing means comprises an imaging device, the imaging device may create one or more images of the object as part of step 104. These may then be analysed to determine a dimension of the object. In a further example embodiment, the object may be a load mounted on the roof of the vehicle. In this embodiment, an image of the object from the imaging device may be analysed in step 106 to determine the height of the object above the vehicle.
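As a sketch only, the sequence of steps 102 to 106 can be expressed as a simple procedure. The callable parameters below are placeholders for vehicle-specific implementations and do not form part of the disclosure.

```python
def determine_object_dimension(open_closure, capture, analyse, close_closure):
    """Sketch of method 100: open the closure member (step 102),
    receive information from the sensing means (step 104), determine
    the dimension (step 106), then restore the closed position."""
    open_closure()              # step 102: position the closure member
    info = capture()            # step 104: receive sensing information
    dimension = analyse(info)   # step 106: determine the dimension
    close_closure()             # optional automatic re-closing
    return dimension
```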
Referring to Figure 2, an embodiment system 200 comprising a controller 202 is shown. The system 200 comprises a controller 202 and a sensing means 204. The controller 202 and system 200 are operable to determine a dimension of an object attached to a vehicle. In embodiments, the controller 202 and/or system 200 may be operable to perform a method, or any part thereof, described herein or as shown in Figure 1.
The controller 202 comprises input means 203 and processing means 205. The input means 203 is configured to receive information from a sensing means 204 mounted to a closure member, when the closure member is in an open position. The processing means 205 is configured to determine the dimension of the object based on the received information.
The controller 202 may comprise a computer program stored on a computer readable storage memory and/or electrical circuitry. The controller 202 may be operable to provide a control function to the system 200 defined herein, or may be operable to perform any relevant method step described herein. Where the controller 202 comprises electrical circuitry, the electrical circuitry may be distributed, including on board a vehicle. The electrical circuitry may also be distributed on another component in communication with the vehicle, which may include a network-based computer, including a remote server or cloud-based computer, or a portable electronic device, which may include a mobile phone. Electrical circuitry may comprise electrical components known to the skilled person, e.g. combinations of transistors, transformers, resistors, capacitors or the like. The electrical circuitry may be partially embodied on one or more processors, including as an ASIC, microcontroller, FPGA, microprocessor, state machine or the like. The processor can include a computer program stored on a memory and/or programmable logic, for execution of a process. The memory can be a computer-readable storage medium. The process may include a method of determining a dimension of an object attached to a vehicle.
The system 200 comprises sensing means 204 adapted to be mounted to the closure member of a vehicle. The system may comprise any sensing means described herein. In embodiments, the sensing means may comprise any of an imaging device, a proximity sensor or a rangefinder.
In use, the input means 203 of the controller 202 may receive information from the sensing means 204, when the sensing means is mounted to a closure member of a vehicle, and the closure member is in an open position. In use, the processing means 205 of the controller 202 uses the information received from the sensing means 204, via the input means 203, to determine the dimension of an object attached to the vehicle. In an example embodiment where the sensing means 204 comprises an imaging device, the input means 203 may receive data corresponding to an image from the sensing means 204. The image may be a digital photograph of the object attached to the vehicle, for example. The processing means 205 may then process the image to determine a dimension of the object. Where the object is a load on the roof of the vehicle, the dimension determined may be the height of the object. Where the object is a trailer attached to a tow bar, tow hitch, or pintle of the vehicle, for example, the dimension may be the length of the trailer.
Referring to Figure 3a, an embodiment vehicle 300 is shown. The vehicle may comprise any controller or system described herein, and/or may be operable to perform any method described herein. The vehicle 300 shown in Figure 3a comprises an object 304 and a closure member 306, shown in Figure 3a as a tailgate. The tailgate has sensing means 204 mounted thereto. In the example of Figure 3a, the object 304 is a load on the roof 305 of the vehicle 300. The object comprises a dimension 302. The dimension 302 corresponds to the height of the object 304 above the roof 305 of vehicle 300. In Figure 3a, the closure member 306 is shown in a closed position, i.e. there is no opening to facilitate access to the trunk (boot) space from outside of the vehicle.
In Figure 3b, the tailgate 306 is shown in an open position 306a. An open position is one where there is an opening to the trunk (boot); the open position may be any position between closed and a maximally open position. In the embodiment illustrated in Figure 3b, in the open position 306a, the sensing means 204 is orientated towards the object 304 on the roof 305 of the vehicle.
In Figure 3c, the tailgate is shown in an open position 306a, and the sensing means is orientated towards object 304 at the rear 307 of the vehicle 300. In the example of Figure 3c the object 304 is a trailer attached to a tow bar or tow hitch of the vehicle 300. Systems, controllers or methods of the invention may be operable to determine the dimensions corresponding to the trailer length 302. In embodiments, a controller 202, system 200 or method 100 may receive information from two or more sensing means 204. For example, a first sensing means may be mounted to be orientated to determine the dimension of an object 304 on the roof 305 of the vehicle 300 and a second sensing means may be mounted to be orientated to determine the dimension of an object 304 at the rear 307 of the vehicle 300. Alternatively, one sensing means 204 may be movably mounted to change orientation to determine the dimension 302 of an object 304 at the rear 307 of a vehicle 300 or the dimension 302 of an object on a roof 305 of the vehicle 300.
In embodiments, the closure member 306 may be automatically moved from a closed position to one or more open positions 306a. Once the information from the sensing means 204 has been received, the closure member 306 may be automatically moved back to the closed position. Automatic movement of the closure member 306 may be controlled by the controller 202. The movement of the closure member 306 may be performed by actuators (not shown), which may operate according to a control signal received from the controller 202. Automatic movement of the closure member 306 may be initiated automatically, for example upon starting the vehicle 300, or upon detecting the presence of an object 304 attached to the vehicle 300. Detecting the presence of an object 304 may include detecting a connection to a tow bar, or the placement of an object on roof bars. This may be achieved, for example, by receiving input from one or more load cells in a roof bar, tow bar or tow hitch, or any other relevant sensor positioned accordingly. The output from a sensor to detect the presence of an object 304 may be received by the controller 202. If the presence of an object 304 is detected, the automatic movement of the closure member and determination of the object dimension may be initiated automatically, or on start-up of the vehicle. Alternatively, the user may be notified by the controller 202 when an object has been detected, and automatic movement of the closure member 306 and determination of the object dimension 302 may be initiated after further input from the user. User notification of the detection of an object may take the form of an audible warning (via a loudspeaker in the vehicle or via the car horn), a visual warning (via a warning light or display device visible to the user in normal use), or a haptic warning (via vibration of the steering wheel, pedals, seat or seatbelt), as may be suitable for the vehicle application. Input from the user may be received by the controller 202, and may be in the form of pressing a button or an input through a vehicle visual display unit.
In embodiments, information may be received from the sensing means 204 when the closure member 306 is in the closed position. The information may be used by the controller to determine if the automatic movement of the closure member will cause a collision with an obstruction. If a potential obstruction is detected, the closure member 306 may be prevented from being automatically moved to an open position 306a; this prevention may be performed by the controller 202.
In embodiments, an open position 306a of the closure member 306 may be considered to be the maximally open position of the closure member. The maximally open position of the closure member 306 may provide the highest elevation of the sensing means 204.
In embodiments, the sensing means 204 may be moveably mounted to the closure member. The sensing means 204 may be mounted to rotate in one or more axes of rotation. Rotation may be performed by an actuator. Control of the actuator may be performed by the controller 202. In alternative embodiments, the sensing means 204 may be rotated by hand. In embodiments, the sensing means 204 may be moveably mounted so that the sensing means 204 can obtain information indicative of an object 304 at the rear 307 of the vehicle or an object 304 above the vehicle. In embodiments, the sensing means 204 may be moveably mounted so that the sensing means can be directed at an object. In embodiments, the sensing means 204 may be moveably mounted so that the sensing means 204 may generate information indicative of an object from different angles. This may be used to scan the object to generate more information, or may allow a large range of object sizes to be accommodated by the field of view of the sensing means. In embodiments, the sensing means 204 may be moveably mounted so that the sensing means 204 may remain orientated towards the object as the closure member moves between open positions. This may allow information to be received from different angles which may help provide a more accurate determination of the dimension of the object.
In embodiments, the determined dimension may comprise one or more of: the length/depth, width or height of an object; an area facing the sensing means; the position of the object in relation to the sensing means or a reference point; or a spacing between multiple objects.
In embodiments, the method may comprise comparing the determined dimension of the object to a detected road condition. Comparison of the determined dimension of the object to a detected road condition may be performed by controller 202. A detected road condition may comprise a height of an overhead obstacle (for example, a barrier). The height of the overhead obstacle may be compared to the determined height of the object above the roof of the vehicle 300. The determined dimension may be combined with one or more predetermined measurements of the vehicle. For example, in embodiments, the height of the object may be combined with a predetermined height of the vehicle, and optionally also the height of roof bars if not included in the original determination of the object dimension. Similarly, in a further example a detected road condition may include a length of a parking space or the length of unoccupied space in a lane of traffic. Where the determined dimension is the length of an object at the rear of the vehicle, this may be compared to the length of the unoccupied space in a lane of traffic. The determined length of an object at the rear of the vehicle may be combined with a predetermined length of the vehicle before comparison. In embodiments, if the comparison indicates that the determined dimension exceeds the detected road condition and a collision may occur, a warning may be generated. Similarly, in a self-driving system the outcome of the comparison may be used to direct the vehicle to avoid a collision.
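The height comparison described above amounts to simple arithmetic; the following sketch illustrates it. The function name, parameter names and the default safety margin are illustrative assumptions, not part of the disclosed embodiments.

```python
def clearance_ok(object_height_m, vehicle_height_m, obstacle_height_m,
                 margin_m=0.1):
    """Return True if the vehicle plus its roof load passes under an
    overhead obstacle with at least `margin_m` of clearance.
    The 0.1 m default margin is an assumed illustrative value."""
    total_height_m = vehicle_height_m + object_height_m
    return total_height_m + margin_m <= obstacle_height_m

# A 1.8 m vehicle carrying a 0.6 m roof load approaching a 2.6 m barrier:
safe = clearance_ok(0.6, 1.8, 2.6)
```

If the check fails, a warning may be generated or, in a self-driving system, the route may be altered.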
In embodiments, the controller, system or method may be configured to be used with a self-driving system. The determined dimension may be used by or as part of a self-driving system.
In embodiments, the determined dimension may be used as an input to vehicle control modules. For example, where a large object is above the roof of a vehicle, the determined dimension may be used to adjust vehicle performance or vehicle range calculations to compensate for increased air resistance, or for the mass of a towed load. Similarly, noise cancelling systems may be altered to compensate for increased road noise from the object.
In embodiments, the determined dimension of the object may be used to approximate a centre of mass of the vehicle. For example, roof bars comprising load cells may be fitted to a vehicle so that the mass of a roof mounted object can be determined. This mass may be used in combination with the determined area facing the sensing means, or the determined position on the roof, to approximate the centre of mass of the object.
Referring to Figure 4, an embodiment method 400 is shown. The method comprises the step 402 of moving or placing a closure member 306 of a vehicle 300 into an open position 306a. The closure member comprises a sensing means 204, which is an imaging device mounted to the closure member 306. In the open position 306a, the sensing means 204 is directed towards an object 304 mounted on the vehicle 300. An image of the object is then obtained in step 404 by the sensing means 204. The image, or information corresponding to the image, undergoes image processing 406 to determine a dimension 302 of the object 304.
An embodiment process of the image processing of step 406 is shown in Figure 5. In step 502, the image from the sensing means undergoes thresholding to convert it to a binarized image. Figure 6a shows a thresholded image 600 of an object on a roof of an automobile. The pixels representing the object have a value of 1 (representing white) assigned, and a pixel value of 0 (representing black) has been assigned to the pixels that do not represent the object (i.e. the background). In step 504, a control binarized image is subtracted from the binarized image of the object. In step 506 an algorithm is applied to identify a dimension in the image. Figure 6b shows the image 601 of Figure 6a, with the dimension 603 identified. The dimension has been identified by applying edge detection to the top and bottom of the image and finding the average pixel count in the vertical direction between the edges. The pixel count is then converted to a dimension using the distance between the object and the sensing means, determined by a rangefinder comprised as part of the sensing means.
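Steps 502 to 506 can be sketched with NumPy as follows. This is an illustrative implementation only: the fixed threshold value, array shapes and function name are assumptions, and a production system would use calibrated thresholds per the recognised object type.

```python
import numpy as np

def object_height_pixels(image, control, threshold=128):
    """Binarize an image, subtract a binarized control image of the
    unloaded vehicle, and return the average vertical pixel count of
    the remaining object pixels.

    image, control: 2-D greyscale arrays of the same shape.
    """
    # Step 502: thresholding to a binarized image (object -> 1, background -> 0).
    binary = (image > threshold).astype(np.int16)
    control_binary = (control > threshold).astype(np.int16)
    # Step 504: subtract the control binarized image; clip to keep values in {0, 1}.
    obj = np.clip(binary - control_binary, 0, 1)
    # Step 506: count object pixels per column, average over columns
    # that contain the object (the vertical extent of the object).
    counts = obj.sum(axis=0)
    counts = counts[counts > 0]
    return float(counts.mean()) if counts.size else 0.0
```

The returned count is still in pixels; conversion to a physical dimension uses pixel dimension information derived from the rangefinder distance, as described below in the specification.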
In embodiments, analysing an image may comprise applying any of thresholding, clustering methods, compression-based methods, histogram-based methods, edge detection, region growing or diminishing methods, partial differential equation based methods, graph partition methods, watershed transformation, model-based segmentation, or any applicable image segmentation method known in the art. Image recognition may be used to identify different object types, with a different image processing technique then applied depending on the recognised object type.
In embodiments, image processing may comprise first identifying the object in the image. This may comprise applying a filter or similar algorithm, such as a threshold, to apply a particular value to pixels of the image. A control image of the vehicle with no object attached may also be used by subtracting the control from the image. Where an algorithm or filter has been applied to the image, the same algorithm or filter may be applied to the control image prior to subtraction, or may be applied only after subtraction. Once the object has been identified in the image, the dimension of the object may then be identified in the image. The dimension may be identified, for example, by counting the number of pixels of the object in the direction of the dimension. In the example of Figure 6b where the object is mounted to the roof of the vehicle, the relevant dimension may be the height of the object in the vertical direction. Therefore, the number of pixels in the vertical direction (Y axis) of the image corresponds to the height of the object and thus the relevant dimension. Counting the number of pixels in the direction of the relevant dimension may comprise determining pixel counts in the relevant direction at multiple locations perpendicular thereto and then averaging, or taking the single largest count. In a further example, the distance between the vertically highest and lowest pixels may be used as the count. Alternative pixel counting methods known in the art may also be used. The pixel count may be converted to a numerical dimension by applying pixel dimension information to the pixel count. Pixel dimension information may be considered to be information describing the physical dimension to which each pixel of the image relates.
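The alternative count mentioned above, the distance between the vertically highest and lowest object pixels, can be sketched as (function name and mask layout are assumptions for illustration):

```python
import numpy as np

def object_extent_pixels(obj_mask):
    """Alternative pixel count: the vertical distance between the
    highest and lowest object pixels.

    obj_mask: 2-D binary array where 1 marks an object pixel.
    """
    # Indices of rows that contain at least one object pixel.
    rows = np.nonzero(obj_mask.any(axis=1))[0]
    if rows.size == 0:
        return 0
    # Inclusive span from lowest to highest occupied row.
    return int(rows[-1] - rows[0] + 1)
```

Compared with column-wise averaging, this extent-based count is insensitive to holes inside the object silhouette but more sensitive to isolated noise pixels above or below it.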
Pixel dimension information may be determined by applying a predetermined value. In the example of a load mounted on a vehicle roof, the object may typically be positioned on the roof with a small variance. For example, the rear edge of objects may be placed in a consistent position in relation to a rearmost roof bar. Where an object is positioned with small variance, the distance from the sensing means to the object may be approximated. The pixel dimension information can then be determined based on the approximated distance between the sensing means and the object and the field of view of the sensing means. The pixel dimension information may also be predetermined by generating an image of an object of known dimensions.
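Deriving pixel dimension information from a distance and a field of view follows from simple trigonometry: at distance d, a camera with angular field of view θ spans 2·d·tan(θ/2) across the image. A sketch under the simplifying assumption that the object lies in a plane perpendicular to the optical axis (function name illustrative):

```python
import math

def pixel_size_m(distance_m, fov_deg, pixels_across):
    """Approximate the physical size represented by one pixel.

    distance_m: distance from sensing means to object (e.g. from a
                rangefinder, or an approximated predetermined value)
    fov_deg: field of view of the imaging device, in degrees
    pixels_across: number of pixels spanning that field of view
    """
    extent_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return extent_m / pixels_across

# A pixel count is then converted to a dimension by multiplication,
# e.g. height_m = pixel_count * pixel_size_m(distance, fov, n_pixels)
```

For example, at 1 m with a 90-degree field of view and 1000 pixels, each pixel represents roughly 2 mm.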
In other embodiments, the sensing means may comprise a distance sensing means and an imaging device. The distance sensing means may for example comprise a proximity sensor or rangefinder. The distance between the sensing means and the object, or multiple points on the object, may be determined by the distance sensing means. The pixel dimension of the image may then be determined based on the distance between the sensing means and the object, and the field of view of the imaging device.
In embodiments, pixel dimension information may comprise a constant applied to all pixels, or pixel dimension information may vary depending on the position of the pixel within the image. In embodiments, multiple distance estimates or measurements may be used to determine pixel dimension information for different regions of the image and thus calibrate the system for any given open position of the closure member.
In embodiments, information may be received from the sensing means when the closure member is in two or more different open positions. In embodiments, information may continually be received from the sensing means as the closure member moves from a closed position to an open position, or between two or more open positions. Thus the information received will be of the object from different angles. In embodiments, the information received may comprise multiple images, a stack of images or a video sequence. In embodiments, multiple images or images from a video may be analysed individually to determine a dimension of an object. The differences between images at different angles may also be analysed to determine a dimension of the object.
In embodiments, the position of the closure member may be determined. Determining the position of the closure member may comprise receiving information from a closure member position sensing means. The open position of the closure member may be determined before receiving information from the sensing means. If the closure member is not in the desired open position, an error may be returned. A closure member position sensing means may comprise an angle sensor, displacement sensor, level sensor, rangefinder or proximity sensor. The closure member position sensing means may be connected to any suitable component of the closure member. Where the motion of the closure member is reproducible and moves from a closed position to an open position, the position of the closure member may be determined by measuring the time elapsed from opening. This function may be performed by the controller. The position of the closure member, or the position of the sensing means, may be associated with information received from the sensing means in that position. For example, a sequence of images may be received as the closure member moves from a closed position to an open position. The position of the sensing means at the time of forming the image may be associated with each image of the sequence. The position information may also be used for determining a dimension of the object.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other additives, components, integers or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, devices or integrated circuits, or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine-readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.
We claim:
1. A method for determining a dimension of an object attached to a vehicle, the method comprising:
receiving information from a sensing means mounted to a closure member for a body aperture of the vehicle, when the closure member is in an open position; wherein the information is indicative of the object attached to the vehicle; and
determining the dimension in dependence on the information received from the sensing means.
2. A method of claim 1, wherein the dimension of the object includes at least one of:
a height of an object above a roof of the vehicle;
a depth of an object above the roof of the vehicle; or
a length of an object attached to the rear of the vehicle.
3. A method of claim 1 or claim 2, wherein the sensing means comprises an imaging device.
4. A method of any previous claim comprising automatically moving the closure member to the open position, then automatically moving the closure member to a closed position after the information has been received from the sensing means.
5. A method of any previous claim, wherein the sensing means comprises a distance or range determining device operable to determine the distance between the sensing means and the object.
6. A method of any previous claim, wherein the sensing means is arranged so that the effective direction of the sensing means can be changed.
7. A method of any previous claim, wherein the open position of the closure member comprises the maximally open position of the closure member.
8. A method of any previous claim, wherein information is received from the sensing means in two or more open positions and determining the dimension is in dependence on the information received from the sensing means in the two or more open positions.
9. A method of any previous claim, comprising receiving distance information from the sensing means, and determining the dimension in dependence on the distance information.
10. A method of any previous claim, wherein the information received from the sensing means comprises at least one image of at least a portion of the object.
11. A method of claim 10, wherein the at least one image comprises pixels and determining the dimension of the object comprises applying a threshold to the pixels of the at least one image.
12. A method of claim 10 or claim 11, wherein determining the dimension of the object comprises subtracting a predetermined image of the vehicle without the object attached to the vehicle from the at least one image.
13. A method of claim 11 or claim 12 when dependent on claim 9, wherein determining the dimension of the object comprises determining pixel dimension information of the at least one image in dependence on the distance information.
14. A method of claim 13, wherein determining the dimension of the object comprises identifying the dimension of the object in the at least one image, and counting the pixels associated with the dimension of the object in the at least one image.
15. A method of any previous claim, wherein the determined dimension is compared to a detected road condition.
16. A method of claim 15, wherein the dimension is a height of an object attached to the roof of the vehicle, and wherein the height of the object is compared to the height of an overhead obstacle.
17. A controller for determining a dimension of an object attached to a vehicle, the controller comprising:
input means to receive information from sensing means mounted to a closure member of a body aperture of the vehicle, when the closure member is in an open position; and
processing means to determine the dimension in dependence on the received information.
18. A controller of claim 17, wherein the input means is adapted to receive information from the sensing means at multiple open positions of the closure member.
19. A controller of claim 17 or claim 18, wherein the input means is adapted to receive distance information from the sensing means, and the processing means is adapted to determine the dimension of the object in dependence on the distance information.
20. A controller of any of claims 17 to 19, wherein the input means is adapted to receive at least one image of at least a portion of the object from the sensing means.
21. A controller of claim 20, wherein the processing means is adapted to determine the dimension of the object by applying a threshold to pixels of the at least one image.
22. A controller of claim 20 or claim 21, when dependent on claim 19, wherein the processing means is adapted to determine pixel dimension information in dependence on the distance information.
23. A system for determining a dimension of an object attached to a vehicle, the system comprising:
sensing means adapted to be mounted to a closure member of a vehicle; and
a controller according to any of claims 17 to 22.
24. A system of claim 23, wherein the sensing means comprises an imaging device.
25. A system of claim 23 or 24, wherein the sensing means is arranged so that the effective direction of the sensing means can be changed.
26. A system of any of claims 23 to 25, wherein the sensing means comprises a proximity or range determining device.
27. A vehicle comprising the controller of any of claims 17 to 22.
28. A vehicle comprising the system of any of claims 23 to 26.
29. A vehicle adapted to perform the method of any of claims 1 to 16.
30. A vehicle according to claim 27 or claim 28, wherein the vehicle is adapted to be driven autonomously.
31. Computer software which, when executed, performs a method according to any of claims 1 to 16.
32. Computer software according to claim 31 wherein the software is stored on a non-transitory computer readable medium.