
Robotic Vision In Unmanned Aerial System

Abstract: The present invention relates to robotic vision in Unmanned Aerial Systems, applying artificial intelligence, data mining, machine learning and predictive analytics in a robotic system for the navigation or data collection of an Unmanned Aerial System during day or night. It also helps in localization of Unmanned Aerial Systems in GPS-denied environments and in acquiring and determining the exact ground reference position of data acquired and required by the user. This invention presents a precise methodology for 3D geo-location of UAV images using geo-referenced data. The fundamental concept behind this geo-location process is the use of a database matching technique for refining the coarse initial attitude and position parameters of the camera derived from the navigation data. These refined exterior orientation parameters are then used for geo-locating the entire image frame using a rigorous collinearity model in a backward scheme. A forward geo-location procedure based on a ray-DSM intersection method is also used for cases where the ground location of specific image targets (and not the entire frame) is required.


Patent Information

Application #
201811040368
Filing Date
26 October 2018
Publication Number
18/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

AEROSENSE TECHNOLOGIES PRIVATE LIMITED
E 43/1, OKHLA PHASE II, NEW DELHI, PIN-110020

Inventors

1. SUSHANT GUPTA
F-65, PATEL NAGAR-3, GHAZIABAD-201001, UTTAR PRADESH, INDIA
2. SHAKUN ARORA
E-72D, GANGOTRI ENCLAVE, ALAKNANDA, NEW DELHI, PIN-110019

Specification

The present invention relates to a robotic vision system for Unmanned Aerial Systems using one or more day and night cameras and thermal sensors together with artificial intelligence. More specifically, it describes a novel artificial intelligence system using robotic vision or machine vision in Unmanned Aerial Systems that is readily deployable, efficient and easy to install and use, involving intelligent analysis of imaging record data based on predefined algorithms and self-evolving artificial intelligence to predict its future state and generate results in a desirable form. Data sets of the present condition of the environment and its previous states may be used as raw data in the system; moreover, real-time data is acquired and used to accomplish a task or complete the defined goals.
Background:
The present invention applies machine vision, artificial intelligence and predictive analysis to the data available in the system to generate the required results for the navigation, acquisition or data collection of an Unmanned Aerial System during day or night. In most UAV applications it is essential to determine the exterior orientation of on-board sensors and the precise ground locations of the images acquired by them.
Summary:
The present invention relates to robotic vision in Unmanned Aerial Systems, applying artificial intelligence, data mining, machine learning and predictive analytics in a robotic system for the navigation or data collection of an Unmanned Aerial System during day or night. It also helps in localization of Unmanned Aerial Systems in GPS-denied environments and in acquiring and determining the exact ground reference position of data acquired and required by the user. This invention presents a precise methodology for 3D geo-location of UAV images using geo-referenced data. The fundamental concept behind this geo-location process is the use of a database matching technique for refining the coarse initial attitude and position parameters of the camera derived from the navigation data. These refined exterior orientation parameters are then used for geo-locating the entire image frame using a rigorous collinearity model in a backward scheme. A forward geo-location procedure based on a ray-DSM intersection method is also used for cases where the ground location of specific image targets (and not the entire frame) is required.

Detailed Description of the Invention:
Exemplary embodiments will now be described with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this invention will be thorough and complete, and will fully convey its scope to those skilled in the art. The terminology used in the detailed description of the particular exemplary embodiments illustrated in the accompanying drawings is not intended to be limiting. In the drawings, like numbers refer to like elements.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including", and/or "comprising" when used in this specification specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains.

It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way.
The figures depict a simplified structure only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown. The connections shown are logical connections; the actual physical connections may be different.
Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein. Nor is any special significance to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control.

According to the preferred embodiment, the invention presents a precise rigorous-model-based methodology for 3D geo-location of UAV images using geo-referenced data. The procedure uses a database matching technique for providing Virtual Control Points (VCPs) in the coverage area of each frame. Initial Exterior Orientation Parameters (EOPs) together with the positional information of the provided VCPs for each frame are then used to adjust these data through a weighted least-squares based resection process. Finally, using the obtained fine EOPs of each frame it is possible to geo-locate the entire image frame following a rigorous model (the collinearity equations, given below after the stage list) in a backward scheme. If the ground location of specific image targets (and not the entire frame) is required, it can be obtained through a forward geo-location scheme. In this case, a repetitive ray-DSM intersection method is needed. Considering the divergence conditions of the common method for solving this problem, especially in the case of UAV imagery, we use a method that prevents these divergence cases. The main stages of this geo-location process are as follows:
i. Extract features and descriptors from the reference image
ii. Coarse geo-locate forward
iii. Image resection using the LS technique
iv. Fine geo-locate forward
v. Fine geo-locate inverse
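For reference, the rigorous model named above is the standard photogrammetric collinearity condition. In conventional notation (ours, not taken from the specification), with image point (x, y), principal point (x_0, y_0), focal length f, projection centre (X_S, Y_S, Z_S) and rotation matrix elements r_ij relating the ground and camera frames:

\[ x = x_0 - f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}{r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)} \]
\[ y = y_0 - f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}{r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)} \]

The backward scheme evaluates these equations directly for each ground point (X, Y, Z); the forward scheme inverts them along a viewing ray, which is where the ray-DSM intersection problem arises.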
Extract features and descriptors from reference image: In the first stage, Scale Invariant Feature Transform (SIFT) descriptors are derived from the geo-referenced image and stored as part of our database. This process is time consuming, so it is done once at the beginning of the procedure and the results are stored in the database for subsequent use. The remaining stages are performed repeatedly for each acquired image frame.
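A minimal Python sketch of this stage follows, assuming OpenCV's SIFT implementation and NumPy archive storage; the file names and the build_reference_database helper are illustrative, not part of the specification.

    # Sketch: one-time extraction and storage of reference SIFT descriptors.
    import cv2
    import numpy as np

    def build_reference_database(ref_image_path, db_path):
        """Extract SIFT keypoints/descriptors from the geo-referenced image
        once, and store them for reuse against every incoming UAV frame."""
        ref = cv2.imread(ref_image_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(ref, None)
        # Keypoint objects are not picklable; store their coordinates instead.
        pts = np.array([kp.pt for kp in keypoints], dtype=np.float32)
        np.savez(db_path, points=pts, descriptors=descriptors)

    build_reference_database("reference_ortho.tif", "ref_db.npz")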
Coarse geo-locate forward: For each image frame, the coarse geo-locations of its borders are determined using GPS/IMU parameters extracted from the navigation data in the form of a forward geo-referencing process. This procedure is equivalent to the forward projection step in image orthorectification techniques based on a forward projection scheme. For each image corner, the light ray passing through the camera's projection center and that point is intersected with the three-dimensional ground surface defined by the DSM, yielding the position of the corresponding corner in ground space. Even though the EOPs of the image and the DSM are available, this process is not straightforward because of the mutual dependency of the horizontal and vertical ground coordinates: computation of the horizontal ground coordinates depends on the vertical coordinate, and the vertical coordinate read from the DSM depends on the horizontal coordinates. As a consequence, translation from 2D image space to 3D ground space requires a repetitive computation scheme. In the following part of the geo-location procedure, the ground locations of the four image corners obtained in the forward geo-location step, together with a confidence margin area, are used to extract candidate reference descriptors already available in the database.
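A minimal Python sketch of this repetitive scheme follows, assuming an illustrative dsm_height(X, Y) interpolation helper and a downward-pointing viewing ray derived from the EOPs; neither name comes from the specification.

    # Sketch: conventional iterative (fixed-point) ray-DSM intersection.
    def forward_intersect(camera_center, ray_dir, dsm_height,
                          z0=None, tol=0.1, max_iter=50):
        """Intersect a viewing ray with the DSM by iterating on the height.
        May diverge on steep slopes, as discussed for Figure 1 below."""
        Xs, Ys, Zs = camera_center
        dx, dy, dz = ray_dir                 # must point downwards (dz < 0)
        Z = dsm_height(Xs, Ys) if z0 is None else z0
        for _ in range(max_iter):
            t = (Z - Zs) / dz                # ray parameter at height Z
            X, Y = Xs + t * dx, Ys + t * dy  # horizontal position on the ray
            Z_new = dsm_height(X, Y)         # surface height at that position
            if abs(Z_new - Z) < tol:
                return X, Y, Z_new
            Z = Z_new
        raise RuntimeError("ray-DSM intersection did not converge")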
Fine geo-locate forward: At this point, SIFT feature descriptors are extracted from the UAV image frame and matched against the reference feature descriptors extracted in the previous step. After removing potential outliers, if at least three matched points are available, it is possible to refine the camera parameters using these points as virtual control points, whose vertical positions are simply read from the DSM available in the database. For this purpose, the VCP information (image and ground positions) as well as the GPS/IMU data are integrated in a combined weighted least-squares adjustment process for solving the resection problem and producing adjusted exterior orientation parameters of the camera. Weights are obtained using the predicted accuracy of the telemetry data as well as the positional accuracy of the VCPs, which is estimated based on the accuracy of the reference database and the matching procedure. Accurate 3D geo-location of any object visible in the image can then be obtained using the refined camera parameters, following a forward geo-referencing process. It should be noted that for obtaining coarse coordinates of the image borders in ground space, one can neglect the topography of the ground surface, simply consider a mean height for the area, and thereby avoid the repetitive computations needed for the ray-DSM intersection procedure. This strategy is reasonable because a confidence margin is considered around the obtained area. But whenever geo-locating a single target in the image is the purpose (and geo-locating the whole scene is not required), the repetitive procedure for ray-DSM intersection must be followed in order to prevent displacement due to altitude differences. So, considering the divergence risk of the common method, we use a different method for solving the ray-DSM intersection problem in the next section.
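A minimal Python sketch of the matching and resection stage follows. The specification integrates VCPs and GPS/IMU data in a combined weighted least-squares adjustment; the sketch below substitutes OpenCV's RANSAC-based solvePnPRansac as a simplified, unweighted stand-in for that resection, and assumes ref_pts already holds the ground X/Y coordinates of the stored reference keypoints.

    # Sketch: SIFT matching against the reference database, then resection.
    import cv2
    import numpy as np

    def refine_eops(uav_image, ref_pts, ref_desc, dsm_height, K, dist=None):
        sift = cv2.SIFT_create()
        kp, desc = sift.detectAndCompute(uav_image, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Lowe ratio test as a simple outlier-removal step.
        good = [m for m, n in matcher.knnMatch(desc, ref_desc, k=2)
                if m.distance < 0.75 * n.distance]
        if len(good) < 4:                    # this stand-in needs 4+ points
            return None
        img_pts = np.float32([kp[m.queryIdx].pt for m in good])
        # VCP ground positions: X/Y from the reference, Z read from the DSM.
        obj_pts = np.float32([[*ref_pts[m.trainIdx],
                               dsm_height(*ref_pts[m.trainIdx])]
                              for m in good])
        ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
        return (rvec, tvec) if ok else None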
Ray-DSM intersection: Figure 1(a) illustrates the conventional method for solving the iterative procedure of ray-DSM intersection. As can be seen in Figure 1(a-c), this process converges only for the cases in which the slope of the light ray from the perspective center is greater than the slope of the ground surface in the intersection area. Even though this condition is common with manned aerial and especially satellite imagery, UAV platforms generally fly at low altitudes and may also capture imagery from highly oblique attitudes, so the cases (b) and (c) in Figure 1 may be common for this type of platform, resulting in divergence when using the traditional ray-DSM intersection technique.

For these divergence cases we use a technique similar to the bisection method for finding roots of nonlinear functions in the numerical analysis domain. The bisection method, as its name illustrates, uses successive bisections of an interval around the root of the function f(x) (Figure 2a). So, it is enough to find two starting points with different signs in order to start the procedure. The similarity of the root-finding concept to the ray-DSM intersection problem can be seen by comparing the two images depicted in Figure 2(a) and (b). By considering the light ray as the x-axis and the ground surface as the function whose root (i.e. intersection with the x-axis) must be found, the equivalence of the two concepts becomes clear. As depicted in Figure 2(b), the common characteristic of all points on each side of the light ray is that the Z differences obtained for those points from the collinearity equations and from the DSM have the same sign. The first two starting points are obtained using the first two repetitions of the common method (as illustrated in Figure 1(a), these points have different signs). Then, the coordinates of a third point are calculated by averaging the coordinates of these two points. For the next repetition, the third point, according to its position with respect to the intersection point, replaces the first or the second point. Then, using the new first and second points, the explained steps are repeated. This procedure continues until the Z difference calculated from collinearity and interpolated from the DSM becomes negligible (Figure 2c).
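A minimal Python sketch of this bisection-based intersection follows, assuming the bracketing ray parameters t_lo and t_hi come from the first two repetitions of the common method; the helper names are illustrative, not from the specification.

    # Sketch: bisection on the signed ray/DSM height difference.
    import numpy as np

    def bisect_intersect(camera_center, ray_dir, dsm_height,
                         t_lo, t_hi, tol=0.1, max_iter=60):
        c = np.asarray(camera_center, dtype=float)
        d = np.asarray(ray_dir, dtype=float)

        def height_diff(t):
            X, Y, Z = c + t * d
            return Z - dsm_height(X, Y)   # > 0 above the surface, < 0 below

        f_lo = height_diff(t_lo)
        for _ in range(max_iter):
            t_mid = 0.5 * (t_lo + t_hi)   # average the two bracketing points
            f_mid = height_diff(t_mid)
            if abs(f_mid) < tol:          # Z difference negligible: done
                return tuple(c + t_mid * d)
            # Keep the half-interval that still brackets the surface.
            if np.sign(f_mid) == np.sign(f_lo):
                t_lo, f_lo = t_mid, f_mid
            else:
                t_hi = t_mid
        raise RuntimeError("bisection did not converge")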


Fine geo-locate inverse: The availability of accurate camera parameters as well as altitudinal information from the DEM data makes it possible to geo-reference the whole UAV image frame at different ground sampling distances (GSD) in a backward geo-referencing process. In backward projection, each pixel in the geo-referenced image takes its pixel value from the UAV image using the collinearity condition and the ground-space coordinates X, Y and Z of the corresponding DSM cell, as sketched below. This geo-referenced imagery can then be used to produce a wider mosaic of the area.
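A minimal Python sketch of the backward projection follows, assuming a pinhole camera with the principal point at the image centre, a focal length in pixels, a single-band UAV image, and a DSM grid aligned with the output mosaic; the grid geometry and names are illustrative.

    # Sketch: backward (inverse) geo-referencing via the collinearity model.
    import numpy as np

    def backward_georeference(uav_image, dsm, R, cam_center, f,
                              x_min, y_max, gsd):
        """R: rotation from the refined attitude; cam_center: (Xs, Ys, Zs);
        f: focal length in pixels. Loops are kept explicit for clarity."""
        rows, cols = dsm.shape
        out = np.zeros((rows, cols), dtype=uav_image.dtype)
        h, w = uav_image.shape[:2]
        c = np.asarray(cam_center, dtype=float)
        for i in range(rows):
            for j in range(cols):
                ground = np.array([x_min + j * gsd, y_max - i * gsd, dsm[i, j]])
                u, v, wc = R @ (ground - c)  # collinearity numerators/denominator
                x = w / 2 - f * u / wc       # image coordinates; sign and axis
                y = h / 2 - f * v / wc       # conventions depend on the camera
                xi, yi = int(round(x)), int(round(y))
                if 0 <= xi < w and 0 <= yi < h:
                    out[i, j] = uav_image[yi, xi]
        return out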

CLAIMS

1. A system using artificial intelligence based localization and mapping for obtaining target GPS co-ordinates from an Unmanned Aerial System.
2. A system as claimed in claim 1, wherein an object's geo-coordinates are obtained using image processing of the images obtained from an unmanned aerial vehicle.
3. A system as claimed in claim 1, wherein an object's geo-coordinates are obtained using image processing of the images obtained from an unmanned aerial vehicle using a bisection-based ray-DSM intersection method.
4. A system as claimed in claim 1, wherein an object's or target's geo-coordinates are obtained in day or night using a colour camera or thermal imagery, and using image processing of the images obtained with a bisection-based ray-DSM intersection method.

Documents

Application Documents

# Name Date
1 201811040368-SSI REGISTRATION-261018.pdf 2018-10-30
2 201811040368-Other Patent Document-261018.pdf 2018-10-30
3 201811040368-FORM28-261018.pdf 2018-10-30
4 201811040368-Form 5-261018.pdf 2018-10-30
5 201811040368-Form 2(Title Page)-261018.pdf 2018-10-30
6 201811040368-Form 1-261018.pdf 2018-10-30
7 abstract.jpg 2018-12-17