Abstract: In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is provided. First, a target location model is established; then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. After the lens distortion correction method is applied to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm adaptively estimates the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. This invention is implemented on a small circuit board and operates in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
Real-time multi-target identification and localization plays an essential role in disaster emergency rescue, mapping, delivery, border security and so on. UAV electro-optical stabilized imaging systems are equipped with many kinds of sensors, including visible light cameras, infrared thermal imaging systems, laser range finders and angle sensors. Target localization needs to measure the attitude of the UAV, the attitude of the electro-optical stabilized imaging system and the distance between the electro-optical stabilized imaging system and the target. Target localization methods from UAVs are divided into two categories: target localization using a group of UAVs, and target localization using a single UAV. This invention aims to improve the effectiveness and efficiency of target localization from a single UAV. In particular, this invention proposes a new hybrid target localization scheme which integrates both zoom lens distortion correction and an RLS filtering method. The proposed scheme has many unique features which are designed to geo-locate targets rapidly. Earlier research does not take into account the effect of zoom lens distortion on multi-target localization. Many electro-optical stabilized imaging systems are equipped with zoom lenses. The focal length of a zoom lens is adjusted to track targets at different distances during the flight, and the zoom lens distortion varies with the changing focal length. Real-time zoom lens distortion cannot be corrected by conventional calibration methods, because a large amount of transformation calculation would have to be repeated every time the focal length changes.
The primary contributions of this invention are: (1) the accuracy of multi-target localization is improved by combining a real-time zoom lens distortion correction method and an RLS filtering method on embedded hardware (a multi-target geo-location and tracking circuit board); (2) the UAV geo-locates targets using the embedded hardware in real time without orbiting the targets; (3) 50 targets can be located at the same time using only one UAV; (4) the UAV can geo-locate targets without any pre-existing geo-referenced imagery or a terrain database; (5) the circuit board is small and can therefore be applied to many kinds of small UAVs; (6) multi-target localization and tracking techniques are combined, so we can geo-locate multiple moving targets in real time and obtain target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
Brief description of drawings:
The detailed description is described with reference to the accompanying figures. In the
figures, the left-most digit in the reference number identifies the figure in which the reference
number first appears. The same numbers are used throughout the drawings to reference like
features and components.
Figure 1. (a) Multi-target geo-location and tracking circuit board; (b) Electro-optical
stabilized imaging system. The arrows in Figure 1b represent the installation locations of
main sensors in an electro-optical stabilized imaging system.
Figure 2. UAV system architecture.
Figure 3. The overall framework of the multi-target geo-location method.
Figure 4. The coordinate frame relations: azimuth-elevation rotation sequence between the camera and body frames. (a) camera frame; (b) body frame.
Figure 5. The coordinate frame relations: roll-pitch-yaw rotation sequence between the body and vehicle frames. (a) body frame; (b) intermediate frame; (c) vehicle frame.
Figure 6. The coordinate frame relations: vehicle, ECEF and geodetic frames.
Figure 7. Coordinate transformation process of multi-target geo-location system.
Figure 8. The location of any target in image model.
Figure 9. Flowchart of the RLS algorithm.
Detailed description:
Exemplary embodiments will now be described with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this invention will be thorough and complete, and will fully convey its scope to those skilled in the art. The terminology used in the detailed description of the particular exemplary embodiments illustrated in the accompanying drawings is not intended to be limiting. In the drawings, like numbers refer to like elements.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include
wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term are the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way.
The figures depict a simplified structure only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown. The connections shown are logical connections; the actual physical connections may be different.
Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to the various embodiments given in this specification.
Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control.
According to the preferred embodiment, the real-time multi-target geo-location algorithm in this invention is programmed and implemented on a multi-target geo-location and tracking circuit board specifically manufactured for the invention (Figure 1a), which runs at a 720 MHz clock rate with 32-bit instructions per cycle and 1 GB of double data rate synchronous dynamic random access memory (DDR SDRAM). This circuit board also performs the proposed zoom lens distortion correction and the RLS filtering in real time. The multi-target geo-location and tracking circuit board is mounted on an electro-optical stabilized imaging system (see Figure 1b). This
aerial electro-optical stabilized imaging system consists of a visible-light camera, a laser range finder, an inertial measurement unit (IMU), a global positioning system (GPS), and a photoelectric encoder. They are mounted in the same gimbal so that they rotate together in the same direction about any axis. The electro-optical stabilized imaging system is mounted on the UAV to stabilize the videos and eliminate video jitter caused by the UAV, thereby greatly reducing the impact of external factors. The UAV system incorporates the electro-optical stabilized imaging system, the UAV, a data transmission module and a ground station, as shown in Figure 2.

In traditional target geo-location algorithms, the image and UAV attitude information are transmitted to a ground station, and the target geo-location is calculated on a computer in the ground station. However, the data transmission module sends data in a time-divided mode, so the image and UAV attitude information are transmitted at different times from the UAV to the ground station, and it is not guaranteed that the image and UAV attitude information will be obtained at the same time in the ground station. Therefore, the traditional target geo-location algorithm on the computer in the ground station has poor real-time ability and unreliable target geo-location accuracy. To overcome the shortcomings of traditional target geo-location algorithms, such as algorithm complexity, unreliable geo-location accuracy and poor real-time ability, in this invention the target geo-location algorithm is implemented on a multi-target geo-location and tracking circuit board on the UAV in real time. Real-time ability is very important for urgent response in applications such as reconnaissance and disaster monitoring.

The overall framework of the multi-target geo-location method is shown in Figure 3. The detailed workflow of the above-mentioned multi-target geo-location method is as follows: we use the UAV to search for the ground targets, which are selected by an operator in the ground station. The coordinates of the multiple targets in the image are transmitted to the UAV through the data transmission module. Then, all the selected targets are tracked automatically by the multi-target geo-location and tracking circuit board using the improved tracking method. The electro-optical stabilized imaging system locks the main target in the field of view (FOV) center. Other targets in the FOV are referred to as sub-targets. The electro-optical stabilized imaging system measures the distance between the main target and the UAV using a laser range finder. In order to ensure that the image, UAV attitude information, electro-optical stabilized imaging system's azimuth and elevation angle, laser range finder value, and camera focal length are obtained at the same time, the frame synchronization signal of the camera is used as the external trigger signal for data acquisition by the above sensors, so we do not need to implement sensor data interpolation algorithms in the system, except for the GPS data. The
UAV coordinates interpolation algorithm is shown in Equations (35) and (36). The multi-target geo-location and tracking circuit board computes the multi-target geo-location after lens distortion correction in real time. Then, the board uses the moving target detection algorithm on the tracked targets. If a tracked target is stationary, the multi-target geo-location and tracking circuit board uses the RLS filter to improve the target geo-location accuracy. The multi-target geo-location results are superimposed on each frame in the UAV and downlinked to a portable image receiver and the ground station. This invention aims to address the issues of real-time multi-target localization in UAVs by developing a hybrid localization model. In detail, the proposed scheme integrates the following improvements:
(a) The multi-target localization accuracy is improved due to the combination of the zoom lens distortion correction method and the RLS filtering method. A real-time zoom lens distortion correction method is implemented on the circuit board in real time, and the effect of lens distortion on target geo-location accuracy is analysed. Many electro-optical stabilized imaging systems are equipped with zoom lenses. The focal length of a zoom lens can be adjusted to track targets at different distances during the flight. The zoom lens distortion varies with the changing focal length. Real-time distortion correction of a zoom lens is impossible using conventional calibration methods, because the tedious calibration process has to be repeated whenever the focal length is changed.
(b) The target geo-location algorithm is implemented on a circuit board in real time. The size of the circuit board is very small; therefore, this circuit board can be applied to many kinds of small UAVs. The target geo-location algorithm has the following advantages: low computational complexity and good real-time performance. The UAV can geo-locate targets without pre-existing geo-referenced imagery, terrain databases or the relative height between the UAV and the targets. The UAV can geo-locate targets using the embedded hardware in real time without orbiting the targets.
(c) The multi-target geo-location and tracking circuit board uses the moving target detection algorithm on the tracked targets. If a tracked target is stationary, the multi-target geo-location and tracking circuit board uses the RLS filter to automatically improve the target geo-location accuracy.
(d) The multi-target localization, target detection and tracking techniques are combined. Therefore, we can geo-locate multiple moving targets in real-time and obtain target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
The real output rate of the geo-location results is 25 Hz. The reasons are as follows:
(a) The data acquisition frequency of all the sensors is 25 Hz: the visible light camera's frame rate is 25 Hz, and the frame synchronization signal of the camera is used as the external trigger signal for all sensors except GPS.
(b) Lens distortion correction is implemented in real time, and the output rate of target location results after the lens distortion correction is 25 Hz.
(c) When it is necessary to locate a new stationary target, the RLS algorithm needs 3-5 s to converge to a stable value (within these 5 s, lens distortion correction is implemented in real time and the output rate is 25 Hz). After 5 s, the geo-location errors of the target have converged to a stable value, and we can obtain a more accurate location of this stationary target immediately (it is no longer necessary to run RLS). This output rate is also 25 Hz.
The geo-location algorithm can geo-locate at least 50 targets simultaneously. The reasons are
as follows:
(a) For a moving target, lens distortion correction is used to improve the target geo-location accuracy. It consumes 0.4 ms on average to calculate the geo-location of a single target while correcting the zoom lens distortion, and 0.4 ms to track the multiple targets. The image frame rate is 25 fps, so the duration of a frame is 40 ms; therefore, our geo-location algorithm can geo-locate at least 50 targets simultaneously.
(b) For a stationary target, only when it is necessary to locate a new stationary target does the RLS algorithm need 3-5 s to converge to a stable value (within these 5 s, lens distortion correction is implemented in real time and 50 targets can be located simultaneously). After 5 s, the geo-location errors of the target have converged to a stable value and we no longer need to run RLS, so our geo-location algorithm can geo-locate at least 50 targets simultaneously after lens distortion correction and RLS.
Real-Time Target Geo-Location and Tracking System
Coordinate Frames and Transformation
Five coordinate frames (camera frame, body frame, vehicle frame, ECEF frame and geodetic frame) are used in this study. The relative relationships between the frames are shown in Figure 4. All coordinate frames follow a right-hand rule.
Camera Frame
The origin is the camera projection center. The x-axis xc is parallel to the horizontal column pixels' direction in the CCD sensor (i.e., the u direction in Figure 4). The y-axis yc is parallel to the vertical row pixels' direction in the CCD sensor (i.e., the v direction in Figure 4). The positive z-axis zc represents the optical axis of the camera.
Body Frame
The origin is the mass center of the attitude measuring system. The x-axis xb is the 0° direction of the attitude measuring system. The y-axis yb is the 90° direction of the attitude measuring system. The z-axis zb completes the right-handed orthogonal axes set. The azimuth θ, elevation angle ψ and distance λ1 output by the electro-optical stabilized imaging system are relative to this coordinate frame.
Vehicle Frame
A north-east-down (NED) coordinate frame. The origin is the mass center of the attitude measuring system. The aircraft yaw β, pitch ε and roll angle γ output by the attitude measuring system are relative to this coordinate frame.
ECEF Frame
The origin is Earth's center of mass. The z-axis ze points to the Conventional Terrestrial Pole
(CTP) defined by International Time Bureau (BIH) 1984.0, and the x-axis xe is directed to
the intersection between prime meridian (defined in BIH1984.0) and CTP equator. The axes
ye completes the right handed orthogonal axes set. v
WGS-84 Geodetic Frame
The origin and three axes are the same as in the ECEF frame. Geodetic longitude L, geodetic latitude M and geodetic height H are used here to describe spatial positions, and the aircraft coordinates (L0, M0, H0) output by GPS are relative to this coordinate frame.
The relation between the camera frame and the body frame is shown in Figure 4. Two steps are required. First, transformation from the camera frame to the intermediate frame int1: rotate ψ1 (the elevation angle ψ) about the y-axis yc. The next step is transformation from the intermediate frame int1 to the body frame: rotate the azimuth angle θ about the z-axis zint1. In Figure 4a, ψ1 represents 90°.
The relation between the body frame and the vehicle frame is shown in Figure 5. Three steps are required. First, transformation from the body frame to the intermediate frame mid1: rotate the roll angle γ about the x-axis xb. The next step is transformation from the intermediate frame mid1 to the intermediate frame mid2: rotate the pitch angle ε about the y-axis ymid1. The final step is transformation from the intermediate frame mid2 to the vehicle frame: rotate the yaw angle β about the z-axis zmid2.
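For illustration, the two rotation sequences above can be written as a short numerical sketch in Python with NumPy; the function names and the exact sign conventions are illustrative assumptions, not a normative part of this specification:

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    """Rotation matrix about the y-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_to_body(azimuth, elevation):
    # Camera -> int1: rotate elevation about y_c; int1 -> body: rotate azimuth about z_int1.
    return rot_z(azimuth) @ rot_y(elevation)

def body_to_vehicle(roll, pitch, yaw):
    # Body -> mid1: roll about x_b; mid1 -> mid2: pitch about y_mid1; mid2 -> vehicle: yaw about z_mid2.
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
```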
The relation between the vehicle frame and the earth-centred earth-fixed (ECEF) frame is shown in Figure 6.
Multi-Target Geo-Location Model
As shown in Figure 4a, the main target is at the camera field of view (FOV) centre, whose homogeneous coordinates in the camera frame are [xc, yc, zc, 1] = [0, 0, λ1, 1]. Through the transformations among the five coordinate frames, ranging from the camera frame to the WGS-84 geodetic frame, the geographic coordinates of the main target in the WGS-84 geodetic frame can be determined, as shown in Figure 7.
First, we calculate the coordinates of the main target in the ECEF:
$$\begin{bmatrix} x_e \\ y_e \\ z_e \\ 1 \end{bmatrix} = \begin{bmatrix} -s_{L_0} & -s_{M_0} c_{L_0} & c_{M_0} c_{L_0} & (N_0 + H_0) c_{M_0} c_{L_0} \\ c_{L_0} & -s_{M_0} s_{L_0} & c_{M_0} s_{L_0} & (N_0 + H_0) c_{M_0} s_{L_0} \\ 0 & c_{M_0} & s_{M_0} & (N_0 (1 - e^2) + H_0) s_{M_0} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_\varepsilon c_\beta & -c_\gamma s_\beta + s_\gamma s_\varepsilon c_\beta & s_\gamma s_\beta + c_\gamma s_\varepsilon c_\beta & 0 \\ c_\varepsilon s_\beta & c_\gamma c_\beta + s_\gamma s_\varepsilon s_\beta & -s_\gamma c_\beta + c_\gamma s_\varepsilon s_\beta & 0 \\ -s_\varepsilon & s_\gamma c_\varepsilon & c_\gamma c_\varepsilon & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_\theta c_{\psi_1} & -s_\theta & c_\theta s_{\psi_1} & 0 \\ s_\theta c_{\psi_1} & c_\theta & s_\theta s_{\psi_1} & 0 \\ -s_{\psi_1} & 0 & c_{\psi_1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} \quad (1)$$

where $c_* = \cos(*)$, $s_* = \sin(*)$, $N_0$ is the radius of curvature in the prime vertical at the UAV position, and $e$ is the first eccentricity of the WGS-84 ellipsoid.
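As a minimal numeric sketch of Equation (1), reusing the rotation helpers from the sketch above (the WGS-84 constants are standard; the function name, angle units in radians, and argument layout are illustrative assumptions):

```python
import numpy as np

A = 6378137.0           # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3   # WGS-84 first eccentricity squared

def main_target_ecef(L0, M0, H0, yaw, pitch, roll, azimuth, elevation, lam1):
    """Sketch of Equation (1): map the main target [0, 0, lam1] from the
    camera frame to ECEF. Angles in radians, lam1 from the laser range finder."""
    N0 = A / np.sqrt(1.0 - E2 * np.sin(M0) ** 2)   # prime-vertical radius of curvature
    sL, cL = np.sin(L0), np.cos(L0)
    sM, cM = np.sin(M0), np.cos(M0)
    # Vehicle frame -> ECEF, including the translation to the UAV position.
    T_ev = np.array([
        [-sL, -sM * cL, cM * cL, (N0 + H0) * cM * cL],
        [ cL, -sM * sL, cM * sL, (N0 + H0) * cM * sL],
        [0.0,       cM,      sM, (N0 * (1.0 - E2) + H0) * sM],
        [0.0,      0.0,     0.0, 1.0],
    ])
    R = np.eye(4)
    R[:3, :3] = body_to_vehicle(roll, pitch, yaw) @ camera_to_body(azimuth, elevation)
    p_c = np.array([0.0, 0.0, lam1, 1.0])   # main target in homogeneous camera coordinates
    return (T_ev @ R @ p_c)[:3]
```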
Then we derive the geodetic coordinates of the main target from the earth-centred earth-fixed to world geodetic system transformation equations:
$$U = \arctan\frac{a z_e}{b \sqrt{x_e^2 + y_e^2}} \quad (2)$$

$$L = \begin{cases} \arctan\frac{y_e}{x_e}, & x_e > 0 \\ \frac{\pi}{2}, & x_e = 0,\ y_e > 0 \\ -\frac{\pi}{2}, & x_e = 0,\ y_e < 0 \\ \pi + \arctan\frac{y_e}{x_e}, & x_e < 0,\ y_e > 0 \\ -\pi + \arctan\frac{y_e}{x_e}, & x_e < 0,\ y_e \le 0 \end{cases} \quad (3)$$

where $a$ and $b$ are the semi-major and semi-minor axes of the WGS-84 ellipsoid and $U$ is an auxiliary angle.
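A sketch of Equations (2) and (3) follows; since Equations (4) and (5) are not reproduced in this text, the final latitude and height step below uses Bowring's closed-form method as a stand-in assumption:

```python
import numpy as np

A = 6378137.0               # WGS-84 semi-major axis (m)
F = 1.0 / 298.257223563     # WGS-84 flattening
B = A * (1.0 - F)           # semi-minor axis
E2 = F * (2.0 - F)          # first eccentricity squared
EP2 = E2 / (1.0 - E2)       # second eccentricity squared

def ecef_to_geodetic(xe, ye, ze):
    """Longitude L per Equation (3) (arctan2 reproduces its case analysis),
    auxiliary angle U per Equation (2); latitude M and height H then follow
    Bowring's one-step formula, assumed here in place of Equations (4)-(5)."""
    L = np.arctan2(ye, xe)
    p = np.hypot(xe, ye)
    U = np.arctan2(A * ze, B * p)                     # Equation (2)
    M = np.arctan2(ze + EP2 * B * np.sin(U) ** 3,
                   p - E2 * A * np.cos(U) ** 3)       # Bowring (assumption)
    N = A / np.sqrt(1.0 - E2 * np.sin(M) ** 2)
    H = p / np.cos(M) - N                             # geodetic height
    return L, M, H
```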
Suppose $\alpha$ is the angle between $\vec{s}$ and $\vec{j}$, and $\beta$ is the angle between $\vec{t}$ and $\vec{j}$; then [20]:

$$\cos\alpha = \frac{\vec{s} \cdot \vec{j}}{\|\vec{s}\| \|\vec{j}\|} \quad (7)$$

$$\cos\beta = \frac{\vec{t} \cdot \vec{j}}{\|\vec{t}\| \|\vec{j}\|} \quad (8)$$
$F_c$ is the basis-vector matrix of the camera frame in a 3-dimensional vector space. The coordinates of the LOS vectors $\vec{s}$ and $\vec{t}$ in the camera frame are given by Equations (9) and (10):

$$\vec{s} = F_c^T \begin{bmatrix} 0 \\ 0 \\ f \end{bmatrix} \quad (9)$$

$$\vec{t} = F_c^T \begin{bmatrix} u - u_0 \\ v - v_0 \\ f \end{bmatrix} \quad (10)$$

where $f$ is the camera focal length in mm. The pixel coordinates of the point $F$ are $(u_0, v_0)$. The pixel coordinates of the point $T$ are $(u, v)$.
$F_v$ is the basis-vector matrix of the vehicle frame in a 3-dimensional vector space. In the vehicle frame, the LOS vector $\vec{j}$ goes down the axis $z_v$; the coordinates of $\vec{j}$ in the vehicle frame are given by Equation (11):

$$\vec{j} = F_v^T \begin{bmatrix} 0 \\ 0 \\ j_{vz} \end{bmatrix} \quad (11)$$

The coordinates of $\vec{j}$ in the camera frame are solved as:

$$j_c = R_{cv} j_v = R_{cb} R_{bv} j_v = \begin{bmatrix} c_\theta c_{\psi_1} & s_\theta c_{\psi_1} & -s_{\psi_1} \\ -s_\theta & c_\theta & 0 \\ c_\theta s_{\psi_1} & s_\theta s_{\psi_1} & c_{\psi_1} \end{bmatrix} \begin{bmatrix} c_\varepsilon c_\beta & c_\varepsilon s_\beta & -s_\varepsilon \\ -c_\gamma s_\beta + s_\gamma s_\varepsilon c_\beta & c_\gamma c_\beta + s_\gamma s_\varepsilon s_\beta & s_\gamma c_\varepsilon \\ s_\gamma s_\beta + c_\gamma s_\varepsilon c_\beta & -s_\gamma c_\beta + c_\gamma s_\varepsilon s_\beta & c_\gamma c_\varepsilon \end{bmatrix} j_v \quad (12)$$

where $c_* = \cos(*)$, $s_* = \sin(*)$. $R_{bv}$ is the rotation matrix from the vehicle frame to the body frame. $R_{cb}$ is the rotation matrix from the body frame to the camera frame. $R_{cv}$ is the rotation matrix from the vehicle frame to the camera frame. $\alpha$ is the angle between the $z_v$-axis of the vehicle frame and the $z_c$-axis of the camera frame.
According to the geometric relationship in Figure 6, we obtain:

$$\|\vec{j}\| = j_{vz} \quad (13)$$

$$f = j_{vz} \cos\alpha \quad (14)$$

Using Euler parameters, or quaternions, we have the definition:

$$\eta = \cos\frac{\alpha}{2} \quad (15)$$

It can also be shown that:

$$\eta = \pm\frac{1}{2}\sqrt{1 + \mathrm{tr}\,C_{cv}} \quad (16)$$

This may be manipulated into:

$$2\eta^2 - 1 = \frac{\mathrm{tr}\,C_{cv} - 1}{2} \quad (17)$$

Therefore:

$$\frac{f}{j_{vz}} = \cos\alpha = 2\cos^2\frac{\alpha}{2} - 1 = 2\eta^2 - 1 \quad (18)$$

and it follows that:

$$j_{vz} = \frac{2f}{\mathrm{tr}\,C_{cv} - 1} \quad (19)$$

By substituting the $j_{vz}$ value into Equations (11) and (12), the coordinate $j_c$ of $\vec{j}$ in the camera frame can be obtained. Then $j_c$ is substituted into Equations (7) and (8) to obtain $\cos\alpha$ and $\cos\beta$. Finally, according to the known main target distance $\lambda_1$ and Equation (6), the relative altitude $h$ and the sub-target distance $\lambda_2$ can be determined. Based on the sub-target distance $\lambda_2$ and the LOS vector $\vec{t}$ of the sub-target in the camera frame, the coordinates of the sub-target in this frame can be determined:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \lambda_2 \frac{\vec{t}}{\|\vec{t}\|} \quad (20)$$
Finally, the geodetic longitude $L$, the geodetic latitude $M$ and the geodetic height $H$ of the sub-target can be calculated by substituting $x_c$, $y_c$ and $z_c$ into Equations (1)-(5).
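The sub-target computation of Equations (7)-(20) can be sketched as follows. Equation (6) is not reproduced above, so the flat-ground relations h = λ1 cos α and λ2 = h / cos β are assumptions standing in for it, and the focal length is taken in pixel units:

```python
import numpy as np

def subtarget_camera_coords(u, v, u0, v0, f_pix, lam1, R_cv):
    """Sketch of Equations (7)-(20): camera-frame coordinates of a sub-target
    at pixel (u, v), given the main-target range lam1 and the vehicle-to-camera
    rotation matrix R_cv. Assumes a locally flat scene."""
    s = np.array([0.0, 0.0, f_pix])                 # main-target LOS, Equation (9)
    t = np.array([u - u0, v - v0, f_pix])           # sub-target LOS, Equation (10)
    j_vz = 2.0 * f_pix / (np.trace(R_cv) - 1.0)     # Equation (19)
    j_c = R_cv @ np.array([0.0, 0.0, j_vz])         # Equations (11) and (12)
    cos_a = (s @ j_c) / (np.linalg.norm(s) * np.linalg.norm(j_c))  # Equation (7)
    cos_b = (t @ j_c) / (np.linalg.norm(t) * np.linalg.norm(j_c))  # Equation (8)
    h = lam1 * cos_a        # relative altitude (assumed form of Equation (6))
    lam2 = h / cos_b        # sub-target range  (assumed form of Equation (6))
    return lam2 * t / np.linalg.norm(t)             # Equation (20)
```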
Target Tracking
The operator selects multiple targets in the first image and then these targets are tracked using the tracking algorithm. We use a simple two-stage method to improve the real-time performance of the correlation tracking algorithm. The main improvements are as follows: in the low-resolution stage, we calculate the average of four adjacent pixels in the original image to generate a low-resolution image, whose resolution is half that of the original image. The low-resolution template is generated in the same way. The formula of the normalized cross correlation (NCC) algorithm is as follows:
$$R(u, v) = \frac{\sum_{x=1}^{m}\sum_{y=1}^{m} T(x, y)\, S(x+u, y+v)}{\sqrt{\sum_{x=1}^{m}\sum_{y=1}^{m} T(x, y)^2}\, \sqrt{\sum_{x=1}^{m}\sum_{y=1}^{m} S(x+u, y+v)^2}} \quad (21)$$

where $T$ is the $m \times m$ template and $S$ is the search image.
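A minimal sketch of the two-stage matching follows: the 2×2 averaging that builds the half-resolution image, and the NCC score of Equation (21) at a single offset (function names are illustrative):

```python
import numpy as np

def half_resolution(img):
    """Average each 2x2 pixel block to build the low-resolution image/template."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w].astype(np.float64)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def ncc_score(template, search, u, v):
    """NCC of Equation (21) at offset (u, v); assumes the m x m patch
    stays inside the search image."""
    m = template.shape[0]
    t = template.astype(np.float64)
    patch = search[u:u + m, v:v + m].astype(np.float64)
    den = np.sqrt(np.sum(t * t)) * np.sqrt(np.sum(patch * patch))
    return float(np.sum(t * patch) / den) if den > 0 else 0.0
```

A coarse search on the half-resolution image narrows the candidate offsets, after which the score is refined at full resolution around the best coarse match.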
Zoom Lens Distortion Calibration
The distortion center $(u_0, v_0)$ is searched near the image center $(u_c, v_c)$, where $u_c = 0.5w$ and $v_c = 0.5h$ ($w$ and $h$ are the image width and height in pixels). The range of $u_0$ is $[u_c - 0.05w, u_c + 0.05w]$. The range of $v_0$ is $[v_c - 0.05h, v_c + 0.05h]$.
We sample $N_2$ samples of $u_0$ in the range $[u_c - 0.05w, u_c + 0.05w]$ and $N_3$ samples of $v_0$ in the range $[v_c - 0.05h, v_c + 0.05h]$. We sample $N_1$ samples of $k_1$ in the range $[-D^{-2}, D^{-2}]$ for each distortion center $(u_0, v_0)$, so $N_1 \times N_2 \times N_3$ possible distortion parameters are generated from these samples at a given camera focal length. The distortion parameter
$(k_1, u_0, v_0)$ is sampled as shown in Equation (22) to Equation (24):

$$k_1 = -D^{-2} + i \times \delta_{k_1}, \quad i = 1, 2, \ldots, N_1 \quad (22)$$

$$u_0 = 0.45w + j \times \delta_{u_0}, \quad j = 1, 2, \ldots, N_2 \quad (23)$$

$$v_0 = 0.45h + p \times \delta_{v_0}, \quad p = 1, 2, \ldots, N_3 \quad (24)$$

where $\delta_{k_1}$, $\delta_{u_0}$ and $\delta_{v_0}$ are the sampling steps.
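The N1 × N2 × N3 candidate set of Equations (22)-(24) can be sketched as below; uniform spacing over the stated ranges is an assumption, since the step sizes δ are not given numerically in the text:

```python
import numpy as np
from itertools import product

def candidate_parameters(w, h, D, N1, N2, N3):
    """Enumerate the N1 x N2 x N3 (k1, u0, v0) candidates of Equations (22)-(24)."""
    k1s = np.linspace(-1.0 / D ** 2, 1.0 / D ** 2, N1)        # Equation (22)
    u0s = 0.45 * w + np.arange(1, N2 + 1) * (0.10 * w / N2)   # Equation (23)
    v0s = 0.45 * h + np.arange(1, N3 + 1) * (0.10 * h / N3)   # Equation (24)
    return list(product(k1s, u0s, v0s))
```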
For each distortion parameter $(k_1, u_0, v_0)$, the pixel coordinates of the corrected chessboard image's edge points $(u_n, v_n)$ are computed by using Equation (25) to Equation (28). The orientation of each corrected edge point is:

$$\alpha(u_n, v_n) = \arctan\frac{G_v}{G_u} \quad (31)$$

where $I$ is the chessboard image brightness value and $G_u$, $G_v$ are the first-order derivatives of the corrected image's edge point brightness.
We compute the Hough transform of the corrected chessboard image. The $N$ strongest peaks in the Hough transform correspond to the most distinct lines. The distance between a line $N_q$ and the origin is $dist(q)$. The orientation of a line $N_q$ is $\beta(q)$, where $q = 1, 2, \ldots, N$. If the angular difference between the edge point orientation $\alpha(u_n, v_n)$ and the line $N_q$ orientation $\beta(q)$ is less than a certain threshold (in our implementation it is set to 2°, which meets the distortion correction accuracy requirements), we compute the distance $d_q$ from the edge point $(u_n, v_n)$ to the line $N_q$:

$$d_q = |u_n \cos(\beta(q)) + v_n \sin(\beta(q)) - dist(q)| \quad (32)$$
If $d_q$ is less than a certain threshold (in our implementation it is set to 2 pixels, which meets the distortion correction accuracy requirements), the edge point $(u_n, v_n)$ contributes one vote to the line $N_q$; the vote of the edge point $(u_n, v_n)$ is:

$$votes = 1 \quad (33)$$
We compute the sum of all edge points' votes. At this focal length, the best distortion parameters $(k_1, u_0, v_0)$ are obtained by maximizing the straightness measure function:

$$\max \sum_{q=1}^{N} votes(dist(q), \beta(q), k_1, u_0, v_0) \quad (34)$$

where $votes(dist(q), \beta(q), k_1, u_0, v_0)$ is the number of votes of the line $N_q$ in the corrected chessboard image using the distortion parameter $(k_1, u_0, v_0)$.
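A sketch of the vote counting of Equations (32)-(34), with the 2° and 2-pixel thresholds from the text; the Hough peak extraction itself is assumed to be given:

```python
import numpy as np

def straightness_votes(edges, lines, ang_thresh_deg=2.0, dist_thresh_px=2.0):
    """Vote total of Equations (32)-(34).

    edges: iterable of (u, v, alpha) corrected edge points, alpha in radians.
    lines: iterable of (dist_q, beta_q) Hough lines (distance to origin, orientation).
    """
    total = 0
    for u, v, alpha in edges:
        for dist_q, beta_q in lines:
            if abs(np.degrees(alpha - beta_q)) >= ang_thresh_deg:
                continue                         # orientation gate (2 degrees)
            d_q = abs(u * np.cos(beta_q) + v * np.sin(beta_q) - dist_q)  # Eq. (32)
            if d_q < dist_thresh_px:
                total += 1                       # one vote per edge point, Eq. (33)
    return total
```

The best (k1, u0, v0) for the current focal length is then the candidate from Equations (22)-(24) that maximizes this vote total, per Equation (34).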
We apply the above algorithm to calibrate the best distortion parameter $(k_1, u_0, v_0)$ at different lens focal lengths. Then, the best zoom lens distortion parameters $(k_1, u_0, v_0)$ at all focal lengths are obtained through curve fitting. We store the distortion parameter table for all focal lengths in the flash chip on the multi-target geo-location and tracking circuit board.
Real-Time Lens Distortion Correction on the UAV
The zoom lens is connected to a potentiometer through gears. The relationship between focal length and resistance has been calibrated in the laboratory, so we can obtain the focal length by measuring the resistance value of the potentiometer in real time. During the flight of the UAV, we use the focal length measuring sensor to measure the camera focal length and look up the distortion parameter $(k_1, u_0, v_0)$ in the flash chip on the multi-target localization circuit board. The pixel coordinates $(u_n, v_n)$ of the corrected real-time image are computed using Equation (25) to Equation (28). We use $(u_n, v_n)$ to calculate the geodetic coordinates of the targets.
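The in-flight path can be sketched as follows: look up the stored parameters nearest the measured focal length, then correct each pixel. The single-coefficient division model in undistort() is an assumption, since Equations (25)-(28) are not reproduced in this text:

```python
import bisect

def lookup_distortion(focal_mm, table):
    """table: rows of (focal_mm, k1, u0, v0), sorted by focal length,
    as stored in the flash chip. Returns the nearest calibrated row."""
    keys = [row[0] for row in table]
    i = bisect.bisect_left(keys, focal_mm)
    if i == 0:
        return table[0]
    if i == len(keys):
        return table[-1]
    lo, hi = table[i - 1], table[i]
    return lo if (focal_mm - lo[0]) <= (hi[0] - focal_mm) else hi

def undistort(ud, vd, k1, u0, v0):
    """Single-coefficient division model (assumed form of Equations (25)-(28))."""
    r2 = (ud - u0) ** 2 + (vd - v0) ** 2
    un = u0 + (ud - u0) / (1.0 + k1 * r2)
    vn = v0 + (vd - v0) / (1.0 + k1 * r2)
    return un, vn
```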
RLS Filter
For stationary targets on the ground, the location result in different frames should be the same. Therefore, a popular technique to remove the estimation error is to use a recursive least squares (RLS) filter. The RLS filter minimizes the average squared error of the estimate. The RLS filter uses an algorithm that only requires a scalar division at each step, making the RLS filter suitable for real-time implementation, so we use RLS to reduce the standard deviation and improve the accuracy of multiple stationary target localization.
Suppose the original geo-location data of $t$ images are $x_k$ ($k = 1, 2, \ldots, t$). The RLS algorithm flowchart is shown in Figure 9. In Figure 9, $I_{1\times 1}$ is a $1 \times 1$ unit matrix. After the RLS filtration of the original data $x_k$, the obtained data are $X_k$ ($k = 1, 2, \ldots, t$), where $X_k$ can be longitude $L$, latitude $M$ or geodetic height $H$.
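For a stationary coordinate, the scalar recursion of Figure 9 can be sketched as below; the unit measurement-noise weighting and the initial covariance p0 are assumptions:

```python
def rls_scalar(samples, p0=1.0e6):
    """Scalar RLS estimate of a constant coordinate (L, M or H), per Figure 9.
    Each step costs only one scalar division, as noted above."""
    x_hat, P = 0.0, p0       # large p0: trust the first measurement
    filtered = []
    for z in samples:
        K = P / (P + 1.0)                 # scalar gain
        x_hat = x_hat + K * (z - x_hat)   # update the estimate
        P = (1.0 - K) * P                 # update the covariance
        filtered.append(x_hat)
    return filtered
```

With a large initial covariance this recursion reduces to the running mean of the per-frame location data, which is why the filtered estimate converges as more frames arrive.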
The GPS data (coordinates of the UAV) refresh rate is 1 Hz, but the video frame rate is usually above 25 Hz. To raise the convergence rate of the RLS algorithm, when the UAV speed is known, the coordinates of the UAV at the corresponding time can be determined through dead reckoning. In the WGS-84 ECEF, the coordinates of the UAV are:

$$x_e = x_{e0} + \int_0^{n} V_x \, dt \quad (35)$$

$$y_e = y_{e0} + \int_0^{n} V_y \, dt \quad (36)$$
where $(x_{e0}, y_{e0})$ are the coordinates at the initial time, $V_x$ is the UAV speed in direction $X$, and $V_y$ is the UAV speed in direction $Y$. The influence of the UAV geodetic coordinate position and speed on the reckoned coordinates of the UAV is analyzed as follows:
(1) In the WGS-84 geodetic frame, the higher the latitude of the UAV, the smaller the projection of 1° of longitude onto the horizontal direction. Therefore, in high latitude areas, the measurement accuracy of GPS is high, and the accuracy of the reckoned coordinates is high.
(2) The smaller the UAV speed, the smaller the distance the UAV moves in the same time interval, and the higher the accuracy of the reckoned coordinates.
According to Equations (35) and (36), the error resulting from the update rate of the GPS data can be compensated so that the RLS algorithm converges rapidly to a stable value. Therefore, we can geo-locate multiple stationary ground targets quickly and accurately.
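A sketch of the dead reckoning of Equations (35) and (36), approximating the integrals with constant-velocity steps between 1 Hz GPS fixes (the numeric values in the example are illustrative only):

```python
def dead_reckon(xe0, ye0, vx, vy, dt, steps):
    """Propagate the UAV ECEF coordinates per Equations (35) and (36),
    approximating the integrals with constant-velocity steps."""
    coords, xe, ye = [], xe0, ye0
    for _ in range(steps):
        xe += vx * dt   # Equation (35)
        ye += vy * dt   # Equation (36)
        coords.append((xe, ye))
    return coords

# Example: fill the 24 frames between two 1 Hz GPS fixes at a 25 Hz frame rate.
frames = dead_reckon(xe0=-2177000.0, ye0=4389000.0, vx=30.0, vy=-5.0,
                     dt=1.0 / 25.0, steps=24)
```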
CLAIMS
1. A system in an Unmanned Aerial Vehicle for multi-target localisation and identification using the real-time zoom lens distortion correction method.
2. A system in an Unmanned Aerial Vehicle for multi-target localisation and identification using a recursive least squares (RLS) filtering method based on UAV dead reckoning.
3. A system in an Unmanned Aerial Vehicle using day and thermal cameras along with LIDAR, RADAR, laser range finder, GPS, IMU, photoelectric encoder, a circuit board to do the filtering and image processing, a stereo camera, and multiple cameras for the target localisation and identification.
4. A system in an Unmanned Aerial Vehicle where multiple targets are localised and mapped with respect to their position using multiple cameras and a LIDAR sensor.
5. A system in an Unmanned Aerial Vehicle for target localisation where the image, UAV attitude information, electro-optical stabilized imaging system's azimuth and elevation angle, laser range finder value, and camera focal length are obtained at the same time; the frame synchronization signal of the camera is used as the external trigger signal for data acquisition by the above sensors, so there is no need to implement sensor data interpolation algorithms in the system except for the GPS data.