
Micro Unmanned Aerial Vehicle Swarming

Abstract: The present invention is inspired by small flying insects and birds and their swarming behavior in formation flight. Swarming behavior is desirable for micro unmanned aerial vehicles executing future tasks ranging from surveillance to use on the battlefield.


Patent Information

Application #
Filing Date
20 November 2017
Publication Number
21/2019
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

MEDBIT TECHNOLOGIES PRIVATE LIMITED
MEDBIT TECHNOLOGIES PRIVATE LIMITED, LEVEL 3, VASANT SQUARE MALL, POCKET V, SECTOR B, VASANT KUNJ, NEW DELHI-110070, INDIA

Inventors

1. SUSHANT GUPTA
F-65 PATEL NAGAR-3 GHAZIABAD UTTAR PRADESH-201001, INDIA
2. RAHUL SHARMA
64 SHREERAM ENCLAVE, LAL KUAN, NEAR SHREERAM DHARAM KANTA GHAZIABAD UTTAR PRADESH-201001, INDIA

Specification

Field of Invention:
The present invention relates to an Unmanned Aerial System inspired by small flying insects and birds and their swarming behavior for formation flight.
Background:
The present invention is inspired by small flying insects and birds and their swarming behavior in formation flight. Swarming behavior is desirable for micro unmanned aerial vehicles executing future tasks ranging from surveillance to use on the battlefield.
Summary:
The present invention presents a method for performing vision-based formation flight control
of multiple MAVs in the presence of obstacles. No information is communicated between
aircraft, and only passive 2-D vision information is available to maintain the formation. The
methods for formation control rely either on estimating range from 2-D vision information
using Extended Kalman Filters (EKFs) or on directly regulating the size of the image
subtended by a leader aircraft on the image plane. When image size is not a reliable
measurement, especially at large ranges, we consider the use of bearing-only information.
In this case, observability with respect to the relative distance between vehicles is
achieved by designing a time-dependent formation geometry. To improve the
robustness of the estimation process with respect to unknown leader-aircraft acceleration,
we augment the EKF with an adaptive neural network (NN). The vision measurements are used to
update the NN-augmented Kalman filter. This filter generates estimates of the target
aircraft's position, velocity and acceleration in inertial 3-D space that are used in the guidance
and flight control law to guide the follower aircraft relative to the target aircraft.
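As a concrete illustration of the estimator summarized above, the following is a minimal two-dimensional, constant-velocity sketch of a bearing-only EKF. The state layout, process/measurement noise values and gains are illustrative assumptions, not values from this specification; the residual it returns is the quantity that later serves as the NN training signal.

```python
import numpy as np

def ekf_step(x, P, bearing_meas, dt, q=0.5, r=np.radians(1.0)**2):
    """One predict/update cycle of a bearing-only EKF.
    State x = [px, py, vx, vy]: relative position/velocity of the leader."""
    # Constant-velocity prediction
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])  # illustrative process noise
    x = F @ x
    P = F @ P @ F.T + Q
    # Bearing measurement h(x) = atan2(py, px) and its Jacobian
    px, py = x[0], x[1]
    rho2 = px**2 + py**2
    h = np.arctan2(py, px)
    H = np.array([[-py / rho2, px / rho2, 0.0, 0.0]])
    # Wrap the angular residual to (-pi, pi]
    y = np.arctan2(np.sin(bearing_meas - h), np.cos(bearing_meas - h))
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P, y
```

Note that a single bearing fixes only the direction to the leader; as the specification discusses under range observability, the range component of this estimate converges only when the relative geometry changes over time.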
Brief description of drawings:
The detailed description is described with reference to the accompanying figures. In the figures, the
left most digit in the reference number identifies the figure in which the reference number first
appears. The same numbers are used throughout the drawings to reference like features and
components.
FIGURE 1. Control Architecture of Formation Flight or MAV Swarming.
FIGURE 2. Composite Adaptation based Adaptive State Estimation.
FIGURE 3. Simulation and estimation using vision processing.
Detailed description of drawings:
Exemplary embodiments will now be described with reference to the accompanying drawings. The
invention may, however, be embodied in many different forms and should not be construed as
limited to the embodiments set forth herein; rather, these embodiments are provided so that this
invention will be thorough and complete, and will fully convey its scope to those skilled in the art.
The terminology used in the detailed description of the particular exemplary embodiments
illustrated in the accompanying drawings is not intended to be limiting. In the drawings, like
numbers refer to like elements.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular
feature, structure, or characteristic described in connection with the embodiment is included in at
least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in
various places in the specification are not necessarily all referring to the same embodiment, nor are
separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various
features are described which may be exhibited by some embodiments and not by others. Similarly,
various requirements are described which may be requirements for some embodiments but not
other embodiments.
The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does
not necessarily imply that each such reference is to the same embodiment(s), or that the feature
only applies to a single embodiment. Single features of different embodiments may also be
combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as
well, unless expressly stated otherwise. It will be further understood that the terms "includes",
"comprises", "including", and/or "comprising", when used in this specification, specify the presence
of stated features, integers, steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers, steps, operations, elements,
components, and/or groups thereof. It will be understood that when an element is referred to as
being "connected" or "coupled" to another element, it can be directly connected or coupled to the
other element, or intervening elements may be present. Furthermore, "connected" or "coupled" as
used herein may include wirelessly connected or coupled. As used herein, the term "and/or"
includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same
meaning as commonly understood by one of ordinary skill in the art to which this invention pertains.
It will be further understood that terms, such as those defined in commonly used dictionaries, should
be interpreted as having a meaning that is consistent with their meaning in the context of the
relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so
defined herein.
The terms used in this specification generally have their ordinary meanings in the art, within the
context of the disclosure, and in the specific context where each term is used. Certain terms that are
used to describe the disclosure are discussed below, or elsewhere in the specification, to provide
additional guidance to the practitioner regarding the description of the disclosure. For convenience,
certain terms may be highlighted, for example using italics and/or quotation marks. The use of
highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term
is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same
thing can be said in more than one way.
The figures depict a simplified structure only showing some elements and functional entities, all
being logical units whose implementation may differ from what is shown. The connections shown
are logical connections; the actual physical connections may be different.
Consequently, alternative language and synonyms may be used for any one or more of the terms
discussed herein, nor is any special significance to be placed upon whether or not a term is
elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more
synonyms does not exclude the use of other synonyms. The use of examples anywhere in this
specification, including examples of any terms discussed herein, is illustrative only, and is not
intended to further limit the scope and meaning of the disclosure or of any exemplified term.
Likewise, the disclosure is not limited to the various embodiments given in this specification.
Without intent to further limit the scope of the disclosure, examples of instruments, apparatus,
methods and their related results according to the embodiments of the present disclosure are given
below. Note that titles or subtitles may be used in the examples for convenience of a reader, which
in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and
scientific terms used herein have the same meaning as commonly understood by one of ordinary
skill in the art to which this disclosure pertains. In the case of conflict, the present document,
including definitions will control.
According to the preferred embodiment of the present invention as shown in FIG. 1, one such
operational scenario is that of a small unmanned aerial vehicle (SUAV) flying over an urban area,
dispensing micro aerial vehicles (MAVs) to examine points of interest from a close distance. The system
being investigated is designed to provide persistent intelligence, reconnaissance and warfare capabilities.
We tackle the problem of assigning, in real time, multiple MAVs to fly as a fleet in a leader-follower
formation towards the target, using vision-based information about the trajectory and path of the leader MAV to
perform a particular mission. The leader sets a nominal trajectory for the formation to follow and may
cooperate with the followers in regulating range. In the virtual structure approach, the entire formation is
treated as a single entity. Desired motion is assigned to this single entity, the virtual structure, which traces
out trajectories for each member in the formation to track. We utilize the vision information from two
different perspectives. In one approach, we construct an Extended Kalman Filter (EKF) to estimate relative
velocity and position, which we utilize in the guidance policy. In the second approach, the guidance policy
is based on directly regulating the vision (image-plane) measurements; by applying an EKF to the bearing
measurement, estimates of the relative position and velocity of each vehicle in the formation are obtained
by every other vehicle. The cooperative controller (CC) architecture is essentially a centralized controller with
spatially distributed tasking members. The tasking members are the SUAV, MAVs, and operator, meaning
these members accept and perform tasks assigned to them by the CC. The CC is assumed to be on the ground
station, where the automated decision making and computational capability are concentrated. The SUAV and
MAVs have limited onboard decision-making ability. They will try to complete the tasks as well as possible given
the existing circumstances. Some variance can be accounted for, and the members will signal if a task cannot be
done, when it is done, and the results.
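The virtual structure approach described above can be sketched as a simple coordinate transformation: the formation is posed as a single rigid body, and each member's waypoint is its fixed offset rotated into the inertial frame. The 2-D pose representation and the example offsets below are illustrative assumptions, not values from this specification.

```python
import numpy as np

def member_trajectories(center, heading, offsets):
    """Map a virtual-structure pose (center position, heading angle in
    radians) to the inertial waypoint each formation member should track."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])  # structure-to-inertial rotation
    return [center + R @ off for off in offsets]
```

Sampling the virtual structure's pose along its assigned trajectory and applying this mapping at each instant yields the reference trajectory for every member of the formation.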
2. Range Observability
Range information is unobservable without certain maneuvers. It is well known that the best relative motion for
range estimation accuracy is a motion that is perpendicular to the line of sight (LOS). The optimal maneuver for
range estimation is determined by maximizing that "best" motion. For the bearings-only target state estimation,
analysis of the contributing factors to the range-estimate covariance indicates that a large magnitude of β̇ gives
more accurate range estimation. This also makes sense physically, as viewing the tracked vehicle from a different
direction provides information about position in an additional dimension. From this analysis, it is concluded that β̇
should be maximized in order to obtain an accurate range estimate. At the same time, it is preferred that the relative
bearing stay close to its prescribed desirable value. Also, it is important to limit the acceleration β̈. Therefore, an
optimization problem that maximizes the predicted range estimation accuracy is formulated as
(1)
subject to the relative motion dynamics. The Hamiltonian H and the Euler-Lagrange equations for this optimization
problem are formulated as given by
where λ1 and λ2 are the Lagrange multipliers. Those equations can be solved analytically, and the optimal
solution for the bearing angle is derived as follows.
(3)
That is, the optimal relative bearing angle is represented as sine and cosine functions.
(4)
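The time-dependent formation geometry used for range observability can be sketched as a commanded bearing that oscillates about its nominal value. The nominal bearing, amplitude and frequency below are illustrative assumptions; the specification derives the optimal profile analytically as sine and cosine functions.

```python
import numpy as np

def commanded_bearing(t, beta0=np.radians(30.0), amp=np.radians(5.0), omega=0.2):
    """Time-varying relative bearing command: a small sinusoidal
    perturbation about the nominal bearing beta0. The resulting periodic
    motion relative to the line of sight is what renders range observable
    from bearing-only measurements."""
    return beta0 + amp * np.sin(omega * t)
```

A small amplitude keeps the bearing close to its prescribed value while a moderate frequency keeps the implied acceleration bounded, reflecting the trade-off posed by the optimization above.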
described in . This approach has recently been extended to augment an EKF. These approaches provide
robustness to unknown and unmodeled dynamics in the process. A critical application of the adaptive EKF
lies in the realm of tracking maneuvering targets, particularly in the bearings-only target-tracking problem. It
is well known in the target-tracking literature that the accuracy of the resulting EKF estimates depends
extensively on the target behavior. The universal approximation property of NNs has paved the way for NN-based
identification and estimation schemes that may account for these unknown modeling
errors/uncertainties in the process. The training signal for the NN is generated by the residuals produced by
the EKF. The residuals are the difference between the image-plane measurements and the EKF estimates.
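The residual-driven adaptation described above can be sketched as a single-hidden-layer network with fixed inner weights whose output weights are updated online, using the EKF residual as the training signal. The network size, adaptation gain and update law below are illustrative assumptions, not the specification's exact scheme.

```python
import numpy as np

class ResidualAdaptiveNN:
    """Sketch of an adaptive NN driven by filter residuals: fixed random
    input weights V, online-adapted output weights W."""
    def __init__(self, n_in, n_hidden=10, gamma=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.standard_normal((n_hidden, n_in))  # fixed input weights
        self.W = np.zeros(n_hidden)                     # adapted output weights
        self.gamma = gamma                              # adaptation gain

    def output(self, x):
        # NN correction term added to the filter's process model
        return self.W @ np.tanh(self.V @ x)

    def adapt(self, x, residual):
        # Gradient-style weight update driven by the EKF residual
        self.W += self.gamma * residual * np.tanh(self.V @ x)
```

Over time the network output absorbs the unmodeled target acceleration, shrinking the residuals the EKF must explain.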
4. Application to Vision-based Target Tracking
Consider the relative LOS kinematics between a target and follower aircraft in the inertial Cartesian
coordinate frame
where Vx, Vy and Vz are now state-dependent measurement noise terms.
5. Formation Guidance Strategy
Here, a leader takes the formation along the desired trajectory. This trajectory is unknown to the other,
follower, aircraft. The follower aircraft each attempt to maintain a prescribed time-dependent relative
position to the leader. The relative position is time-dependent to ensure observability of range to the leader.
Collision hazards between following aircraft are prevented by careful selection of these chosen relative
positions, mitigating the need for robust estimation of other aircraft states for anything but the leader. Here,
the commanded distance is set to a constant, and the angle is varied periodically by a small amount. Each
follower generates a lateral and longitudinal acceleration command to bring it to the prescribed relative
position with a second order response, utilizing its position and velocity estimates for the leader aircraft. An
aircraft performance model then limits this acceleration command. However, all aircraft will depart from
these strategies as necessary to avoid obstacles. This is the subject of the next section.
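The follower guidance law described above can be sketched as a second-order (PD-style) acceleration command on the relative-position error, followed by a simple magnitude limit standing in for the aircraft performance model. All gains and limits below are illustrative assumptions.

```python
import numpy as np

def formation_accel_cmd(rel_pos_est, rel_vel_est, rel_pos_cmd,
                        wn=0.5, zeta=0.9, a_max=3.0):
    """Acceleration command driving the follower to the commanded relative
    position with a second-order response (natural frequency wn, damping
    zeta), saturated at a_max to mimic a performance limit."""
    err = rel_pos_cmd - rel_pos_est
    a = wn**2 * err - 2.0 * zeta * wn * rel_vel_est
    n = np.linalg.norm(a)
    return a if n <= a_max else a * (a_max / n)
```

The relative position and velocity fed into this law come from the leader state estimates produced by the NN-augmented filter.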
6. Simulation and Testing
The relative position and velocity estimator discussed in the previous section is extended to 3 dimensions
and applied to a formation flight of two airplanes. Estimation results of 6-DOF image-in-the-loop simulations
are shown in this section. Fig. 3(a) shows a display of the 6-DOF airplane simulator. It includes two
airplanes, configured as leader and follower. The follower aircraft has a camera, and its image is also
simulated. The synthetic images are processed, providing the same type of output we expect in an
actual flight.
The image processor provides a position and size (wingspan) of the leader in camera images. From those
measurements, the filter is designed to estimate relative position and velocity between the two aircraft.
Fig. 3(a). 6-DOF image-in-the-loop airplane simulator.
We consider the case in which the follower aircraft is guided to change its relative position to the leader
obeying a box-shaped command, while the leader flies straight with a constant speed.
Fig 3(b) Vision-based formation flight simulation.
It shows a screenshot of the vision-based formation flight simulation. The 'src' screenshot is the frame-grabber
window, which is used by the image processing algorithm to capture the leader aircraft center
(green crosshair) and wingtips (red crosshairs). The 'Scene Window 1' and 'Scene Window 2' screenshots
depict the formation view from the top and from behind the follower aircraft, respectively. The circles in
these screenshots depict the target estimator's estimate of the leader position. The leader aircraft is flying
in a circle in the horizontal plane at a constant heading rate. The follower aircraft is tasked with maintaining
specified separation distances along the x-, y- and z-axes of the follower body-fixed frame. The follower is
first put into the desired formation using only GPS-communicated data of the leader inertial position,
velocity and acceleration. The leader GPS data is communicated at about 5 Hz and is filtered to produce
leader state estimates at the rate required (~50 Hz) by the follower aircraft guidance and flight control
algorithms. Once the leader aircraft is at the desired separation distance, the image processing and target
state estimation algorithms are switched on. The update rate of the image processing in simulation
is ~10 Hz. The estimates of the leader position, velocity and acceleration from the vision-based
target state estimator are blended with the corresponding GPS estimates to produce the leader state
estimates that are used in the guidance and flight control algorithms for formation keeping.
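The blending of the vision-based and GPS-derived leader state estimates can be sketched as a simple convex combination per state component. The fixed weight below is an illustrative assumption; in practice the weighting would reflect each source's covariance and availability.

```python
def blend_estimates(gps_est, vision_est, w_vision=0.8):
    """Convex blend of low-rate GPS-derived leader states with the
    vision-based target-state estimates, component by component."""
    return [w_vision * v + (1.0 - w_vision) * g
            for g, v in zip(gps_est, vision_est)]
```

With the GPS link at ~5 Hz and the vision pipeline at ~10 Hz, a blend of this kind lets the ~50 Hz guidance loop consume a single, consistent leader state stream.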
Fig. 3(c). Relative position estimation.
The follower is guided by a relative position command (dashed red line). The figure shows estimation results of
the relative position. The position x is approximately the range between the two aircraft.
7. Conclusion
This invention describes the implementation of a NN-augmented Kalman filter as an adaptive target state
estimator in a vision-based target tracking and autonomous formation flight problem. The design of the
adaptive target estimator reduces reliance on a priori knowledge of the target maneuver and avoids the
construction of elaborate target maneuver models. The benefits of such a design are clearly illustrated via
the vision-in-the-loop 6-DOF simulation results. With adaptive estimation, the unknown target maneuver is
fairly accurately captured by the output of the adaptive NN, and vision-in-the-closed-loop formation flight is
maintained. Future work will focus on the flight testing of the presented adaptive estimation method in closed-loop
vision-based formation flight, ground target tracking and obstacle avoidance applications.

We claim:
1. A system in an Unmanned Aerial System having one or multiple aerial vehicles which is capable of
swarming behaviour using adaptive neural networks.
2. A system in an Unmanned Aerial System which is capable of swarming using vision-based
tracking of a lead vehicle.
3. A system as claimed in Claim 1 and Claim 2 which is capable of tracking both aerial and
ground targets, which may be static or moving.
4. A system as claimed in Claims 1, 2 and 3 which is capable of tracking and surveillance of multiple ground
and aerial targets using multiple aerial vehicles in swarm formation.
5. A system as claimed in Claims 1, 2, 3 and 4 which is capable of obstacle avoidance in 2D or 3D
environments and on land, in air and in water.

Documents

Application Documents

# Name Date
1 201711041455-Other Patent Document-201117.pdf 2017-12-04
2 201711041455-Form 5-201117.pdf 2017-12-04
3 201711041455-Form 2(Title Page)-201117.pdf 2017-12-04
4 201711041455-Form 1-201117.pdf 2017-12-04
5 abstract.jpg 2018-01-03