Abstract: The present invention relates to an Unmanned Aerial System. There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU, allowing a positioning accuracy of only 5 to 10 meters. This accuracy is insufficient for applications that require high-precision data at the cm level. This invention presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is performed via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera obtained from a pre-flight laboratory calibration process. Compared with code-based ordinary GPS, the results of this invention indicate that RTK observation with the proposed method yields more than a 10-fold improvement in target geolocation accuracy.
The present invention relates to an Unmanned Aerial System that precisely estimates the geolocation of an object when observed from a flying UAV.
Summary: The present invention relates to an Unmanned Aerial System. There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS receiver and a low-cost IMU, allowing a positioning accuracy of only 5 to 10 meters. This accuracy is insufficient for applications that require high-precision data at the cm level. This invention presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is performed via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera obtained from a pre-flight laboratory calibration process. Compared with code-based ordinary GPS, the results of this invention indicate that RTK observation with the proposed method yields more than a 10-fold improvement in target geolocation accuracy.
Brief description of drawings:
The detailed description is described with reference to the accompanying figures. In the figures, the
left-most digit in the reference number identifies the figure in which the reference number first
appears. The same numbers are used throughout the drawings to reference like features and
components.
Figure 1: Flowchart of the framework
Figure 2: UAV avionics with RTK
Figure 3: Example of a target of interest for tracking
Figure 4: Target tracked in the sequence of successive frames
Figure 5: The orientation of the sensor frame (S frame) relative to the inertial coordinate frame (I frame).
Detailed description of drawings:
Exemplary embodiments will now be described with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this invention will be thorough and complete, and will fully convey its scope to those skilled in the art. The terminology used in the detailed description of the particular exemplary embodiments illustrated in the accompanying drawings is not intended to be limiting. In the drawings, like numbers refer to like elements.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature applies only to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way.
The figures depict a simplified structure only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown. The connections shown are logical connections; the actual physical connections may be different.
Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, nor is any special significance to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more
synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control.
According to the preferred embodiment of the present invention:
The detection of a ground target within the image frame can be performed automatically using video tracking. Video sequence object tracking consists of determining the image coordinates of an object of interest in consecutive video frames. One possible approach is to use the mean shift algorithm to localize the target object.
The target coordinates can be calculated using the UAV position, attitude and the camera orientation relative to the UAV body. The UAV position is given by an on-board GPS receiver, while the attitude angles are computed from a navigation filter which integrates the inertial sensors (gyroscopes, magnetometer and accelerometers) and the GPS. However, by using this approach to solve the localization problem, both the lateral and vertical positioning errors of the GPS receiver contribute to the error in the target location estimate. These errors can be additive and result in ground-position errors of up to tens of meters. RTK GPS is a differential GPS procedure that is based on carrier-phase GNSS (Global Navigation Satellite System) observations and yields relative positions between a master and a rover station with centimetre accuracy in real time.
The application of computer vision methods in the UAV field has continuously improved in recent years: captured image sequences and videos of the environment are processed to produce numerical or thematic information for decision making. Computer vision based methods are applied to detect, identify, and accurately geolocate unknown targets of interest.
Stationary targets can be localized, without considering terrain slopes, using a UAV with a gimbaled camera. Recursive least squares filtering can also be applied to the image sequence, accounting for navigation biases and wind, to improve accuracy to about 3 m with no differential GPS. The problem of flight path optimization has also been explored by finding an optimal altitude and radius for a circular trajectory above the stationary target. Due to its symmetry, a circular trajectory leads to a lower target localization error, making it widely accepted as the optimal trajectory. The geolocation methodology, which requires multiple target bearing measurements, can be easily adapted to multiple-UAV operations, cooperative geolocation, and the tracking of moving targets. The main result is that the target's position and the UAV's systematic attitude measurement errors can be jointly estimated using linear regression, provided the measurement errors are sufficiently small.
The problem of simultaneous target estimation and vehicle trajectory optimization has also been considered; the resulting algorithms produce vehicle trajectories that increase the information provided by the measurements, greatly enhancing target estimation performance, removing biases, improving filter convergence and estimation accuracy, and overall leading to improved target localization. More accurate target localization can be obtained by registering the aerial images to a geo-referenced image provided by a Geographic Information System (GIS) database. The video-based measurement model, the geolocation error and the UAV system dynamics are used.
INVENTION
This invention presents a real-time process for the identification and geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS.
The diagram of the proposed framework is shown in Fig. 1. It includes three main steps, namely real-time kinematic positioning, target detection and tracking, and target localization with state estimation, which are discussed in detail in the following subsections.
Real Time Kinematic Positioning
The traditional Global Positioning System (GPS) uses the time differences between signals transmitted from satellites to a receiver, which then digitally processes the data in order to compute a location. This traditional method, however, has a positioning error of approximately 5-10 m. In Real Time Kinematic (RTK) GPS, there is a base station module on the ground as well as a rover. As long as the rover and the base maintain at least 5 satellites in common, a more accurate position of the rover can be obtained by applying the corrections determined by the base station. This RTK solution can provide centimetre-grade positioning accuracy, a greater than 200-fold increase in accuracy in comparison with traditional GPS. The major benefit is the extreme precision of the GPS unit for any application, with an option for real-time tracking, making it a crucial component of future UAV technology.
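By way of illustration only, the following is a highly simplified, position-domain sketch of the differential-correction idea underlying RTK: a real RTK solution operates on carrier-phase GNSS observations and resolves integer ambiguities, which is not shown here, and all names and coordinates below are hypothetical.

```python
import numpy as np

# Known, surveyed position of the base station (metres, local NED frame) - illustrative value.
BASE_TRUE_POSITION = np.array([0.0, 0.0, 0.0])

def differential_correction(base_measured, rover_measured):
    """Position-domain illustration of differential GNSS.

    The base station knows its true position, so the difference between its
    measured and true position approximates the error (atmospheric delays,
    satellite clock and ephemeris errors) shared with a nearby rover.
    Subtracting this common error from the rover's measurement improves the
    rover estimate. Real RTK instead differences carrier-phase observations.
    """
    common_error = base_measured - BASE_TRUE_POSITION
    return rover_measured - common_error

# Hypothetical measurements with a shared 3 m bias plus small receiver noise.
base_fix  = BASE_TRUE_POSITION + np.array([2.1, -1.8, 0.9])
rover_fix = np.array([120.0, 45.0, -50.0]) + np.array([2.2, -1.7, 1.0])

print(differential_correction(base_fix, rover_fix))  # close to [120, 45, -50]
```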
Target Detection and Tracking
The purpose of object tracking is to find the targets between consecutive frames in image sequences. Mean shift tracking algorithms have been used due to their simplicity and robustness; this method has been successfully applied to image segmentation and tracking. In these mean shift tracking algorithms, a colour histogram is used to describe the target region. Information-theoretic similarity measures are commonly employed to measure the similarity between the template (or model) region and the current target region. Tracking is accomplished by iteratively finding the local minima of the distance measure functions using the mean shift algorithm, as illustrated in Fig. 4 and in the sketch below.
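As a non-limiting illustration of this kind of tracker, the sketch below uses OpenCV's built-in mean-shift implementation with a hue-histogram back-projection. The video path and the initial target window are hypothetical placeholders, and a thermal deployment would typically replace the colour histogram with an intensity histogram.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_video.mp4")         # hypothetical video file
ok, frame = cap.read()
x, y, w, h = 300, 200, 40, 40                   # hypothetical initial target window

# Build a histogram model of the target region (hue channel here).
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves by less than 1 pixel.
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean shift moves the window towards the mode of the back-projection.
    _, window = cv2.meanShift(back_proj, window, criteria)
    x, y, w, h = window            # image coordinates of the tracked target
```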
Target Localization
To estimate the 3D coordinates of a ground target, the target position is computed by intersecting the ray that starts from the camera centre and passes through the target's pixel location in the image plane with the ground.
The method for locating a stationary target in the navigation coordinate system is described below. In order to achieve this objective, the relations between the coordinate frames are briefly described as follows:
Coordinate Frames and Conversion
The localization algorithm uses a number of coordinate frames and considers transformations of 3-vectors among these frames. We assume that all coordinate frames are right-handed and orthogonal. The inertial coordinate frame (I) is an earth-fixed coordinate system with its origin at the defined home location, as shown in Fig. 5. This coordinate system is sometimes referred to as a north-east-down (NED) reference frame: it is common for north to be referred to as the inertial x direction, east as the inertial y direction, and down as the inertial z direction. The transformation from the inertial frame to the vehicle frame is given by:
$$p^v = p^i - p_{UAV}^i \qquad (1)$$

where $p_{UAV}^i$ is the position of the UAV expressed in the inertial frame.
The vehicle frame (v) has its origin at the centre of mass of the UAV. However, the axes of v are aligned with the axes of the inertial frame; in other words, the x direction points north, the y direction points east, and the z direction points toward the centre of the earth.
The body frame (b) is vehicle-carried and is directly defined on the body of the flying vehicle. Its
origin is the centre of mass, x direction points out the nose of the airframe, y direction points out
the right wing, and z direction points out the belly. The transformation from vehicle frame to body
frame is given by:

$$R_v^b(\phi,\theta,\psi)=\begin{pmatrix} c_\theta c_\psi & c_\theta s_\psi & -s_\theta\\ s_\phi s_\theta c_\psi - c_\phi s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & s_\phi c_\theta\\ c_\phi s_\theta c_\psi + s_\phi s_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi & c_\phi c_\theta \end{pmatrix} \qquad (2)$$
where $c_x \triangleq \cos x$ and $s_x \triangleq \sin x$, and the angles $\phi$, $\theta$ and $\psi$ are the roll, pitch and yaw angles of the airframe.

The position of the target expressed in the inertial frame is then obtained as

$$p_{obj}^i = p_{UAV}^i + L\,\hat{\ell}^i, \qquad p_{UAV}^i = (p_n,\ p_e,\ p_d)^T \qquad (5)$$

where $\hat{\ell}^i$ is the unit line-of-sight vector pointing from the camera centre towards the target, expressed in the inertial frame, and $L$ is the range from the camera to the target.
The only element on the right-hand side of equation 5 that is unknown is L. Therefore, solving the geolocation problem reduces to the problem of estimating the range L to the target. If a digital elevation model is not available, a simple strategy for estimating L is to assume a flat-earth model. The geometry of the situation is such that $h = -p_d$ is the height above ground and $\lambda$ is the angle between $\hat{\ell}^i$ and the inertial $\hat{k}$ (down) axis. It is clear that:

$$\cos\lambda = \hat{k}^i \cdot \hat{\ell}^i = \hat{k}^i \cdot R_v^i R_b^v R_s^b \hat{\ell}^s \qquad (6)$$

where $R_s^b$ is the rotation from the sensor frame (S frame, Fig. 5) to the body frame and $\hat{\ell}^s$ is the unit line-of-sight vector towards the target expressed in the sensor frame. Then

$$L = \frac{h}{\hat{k}^i \cdot R_v^i R_b^v R_s^b \hat{\ell}^s} = \frac{h}{\cos\lambda} \qquad (7)$$

The geolocation estimate is given by combining equations 7 and 5 as:

$$\hat{p}_{obj}^i = \hat{p}_{UAV}^i + \frac{h\, R_v^i R_b^v R_s^b \hat{\ell}^s}{\hat{k}^i \cdot R_v^i R_b^v R_s^b \hat{\ell}^s} \qquad (8)$$
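For concreteness, the following is a minimal numerical sketch of the flat-earth geolocation of equations 6-8, assuming a NED inertial frame, roll/pitch/yaw Euler angles, a hypothetical sensor-to-body rotation and a hypothetical line-of-sight vector; all numeric values and names are illustrative only and do not form part of the claimed method.

```python
import numpy as np

def rot_v2b(phi, theta, psi):
    """Rotation from the vehicle (NED-aligned) frame to the body frame, cf. eq. (2)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta)*c(psi),                        c(theta)*s(psi),                       -s(theta)],
        [s(phi)*s(theta)*c(psi)-c(phi)*s(psi),   s(phi)*s(theta)*s(psi)+c(phi)*c(psi),   s(phi)*c(theta)],
        [c(phi)*s(theta)*c(psi)+s(phi)*s(psi),   c(phi)*s(theta)*s(psi)-s(phi)*c(psi),   c(phi)*c(theta)],
    ])

def geolocate_flat_earth(p_uav, phi, theta, psi, R_s2b, l_s):
    """One-shot flat-earth target geolocation, cf. eqs. (6)-(8).

    The vehicle and inertial axes are aligned, so R_v^i is the identity and omitted.
    """
    R_b2v = rot_v2b(phi, theta, psi).T           # body -> vehicle (transpose of eq. 2)
    l_i = R_b2v @ R_s2b @ l_s                    # line of sight in the inertial frame
    k_i = np.array([0.0, 0.0, 1.0])              # inertial "down" axis
    h = -p_uav[2]                                # height above ground, h = -p_d
    L = h / (k_i @ l_i)                          # eq. (7): range to the target
    return p_uav + L * l_i                       # eq. (8): target position (NED)

# Hypothetical example: UAV 100 m above ground, camera looking 30 deg off nadir.
p_uav = np.array([50.0, 20.0, -100.0])           # (p_n, p_e, p_d) in metres
R_s2b = np.eye(3)                                # sensor axes assumed aligned with body axes
l_s = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(geolocate_flat_earth(p_uav, 0.0, 0.0, 0.0, R_s2b, l_s))
```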
Geolocation using extended Kalman filter: The geolocation estimate in equation 8 provides a one-shot estimate of the target location. Unfortunately, this equation is highly sensitive to measurement errors, especially attitude estimation errors of the airframe. We therefore describe the use of the extended Kalman filter (EKF) to solve the geolocation problem. If we assume the object is stationary, the state vector of the dynamic system is given by:
$$x_t = (t_n,\ t_e,\ L)^T \qquad (9)$$

$$L = \sqrt{(p_{obj}^i - p_{UAV}^i)^T (p_{obj}^i - p_{UAV}^i)} \qquad (10)$$

$$\dot{p}_{UAV} = (v_g \cos\chi,\ v_g \sin\chi,\ 0)^T \qquad (11)$$
where $t_n$ and $t_e$ are the north and east position coordinates of the target, $\dot{p}_{UAV}$ is the UAV velocity, and $v_g$ and $\chi$ are the UAV ground speed and course angle.
The prediction step of the filter corresponding to the target is given by:

$$\hat{x}_k^- = \hat{x}_{k-1} + \Delta t\, f(\hat{x}_{k-1}), \qquad P_k^- = F_t P_{k-1} F_t^T + Q_t \qquad (12)$$
where $P_k$ is the state covariance matrix for the target at time step k, $\Delta t$ is the sampling period, $F_t$ is the system Jacobian matrix, and $Q_t$ is the process noise covariance.
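Purely as an illustration of this prediction step (the measurement-update step and the exact Jacobian are application specific and are not reproduced here), a minimal sketch under the stationary-target assumption might look as follows; the process-noise values and the simple dynamics model are assumptions, not part of the claimed method.

```python
import numpy as np

def ekf_predict(x, P, f, F, dt, Q):
    """Generic EKF prediction step, cf. eq. (12).

    x : state estimate at the previous step
    P : state covariance at the previous step
    f : state dynamics function, x_dot = f(x)
    F : Jacobian of f evaluated at x
    dt: sampling period
    Q : process noise covariance
    """
    x_pred = x + dt * f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Stationary target: state x = (t_n, t_e, L); the target coordinates do not move,
# and the range rate is modelled here as zero between updates (an assumption).
f = lambda x: np.zeros(3)
F = np.eye(3)                        # Jacobian of the (trivial) dynamics
Q = np.diag([0.01, 0.01, 0.05])      # hypothetical process noise

x0 = np.array([107.7, 20.0, 115.5])  # e.g. the one-shot estimate from eq. (8)
P0 = np.eye(3) * 10.0
x1, P1 = ekf_predict(x0, P0, f, F, dt=0.1, Q=Q)
```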