
A Method For Aerial Imagery Acquisition And Analysis

Abstract: A method and system for multi-spectral imagery acquisition and analysis, the method including capturing preliminary multi-spectral aerial images according to pre-defined survey parameters at a pre-selected resolution, automatically performing preliminary analysis on site or location in the field using large scale blob partitioning of the captured images in real or near real time, detecting irregularities within the pre-defined survey parameters and providing an output corresponding thereto, and determining from the preliminary analysis output whether to perform a second stage of image acquisition and analysis at a higher resolution than the pre-selected resolution. The invention also includes a method for analysis and object identification including analyzing high resolution multi-spectral images according to pre-defined object parameters, when parameters within the pre-defined object parameters are found, performing blob partitioning on the images containing such parameters to identify blobs, and comparing objects confined to those blobs to pre-defined reference parameters to identify objects having the pre-defined object parameters.


Patent Information

Application #
Filing Date
28 May 2018
Publication Number
39/2018
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2024-12-17
Renewal Date

Applicants

AGROWING LTD
11 Moshe Levi St 7565828 Rishon Lezion

Inventors

1. DVIR, Ira
13 Benbenishti St 75779 Rishon Lezion
2. RABINOWITZ BATZ, Nitzan
7 Hartsit St 48580 Rosh Ha'ayin

Specification

The present invention relates to image acquisition and analysis, in general and, in particular, to aerial imagery acquisition and analysis.

BACKGROUND OF THE INVENTION

Aerial remote sensing has been developing rapidly over the last decades.

Remote sensing is being used for various purposes, among which are agricultural and other environmental monitoring purposes. With the rapid development of light aerial vehicles, such as drones, hovercraft and many other types of UAVs, in addition to the affordable cost of ultra-light and light manned aerial vehicles, remote sensing, including image acquisition, is developing and accessible even to small scale organizations and farmers.

Aerial vehicles can be controlled and managed remotely. When flying over a city or a field, images can be captured for remote sensing purposes. The flight mission of unmanned aerial vehicles, just like manned ones equipped with an automatic pilot, can be pre-planned for specific routes according to the mission purposes, and also altered in real time, if required.

NDVI (Normalized Difference Vegetation Index) software and tools, which are the main vehicle for agricultural remote sensing measurement, like many remote sensing tools for other purposes, assist organizations and farmers in monitoring the environment and field/crop status. The NDVI measure can be used to analyze remote sensing measurements, typically but not necessarily from a space platform or an aerial vehicle, and assess whether the target being observed contains live green vegetation or not.
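The NDVI measure referred to above is computed per pixel from the red and near-infrared reflectances as NDVI = (NIR - Red)/(NIR + Red). A minimal sketch follows; the sample reflectance values are illustrative assumptions, not figures from the specification:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for a single pixel.

    Inputs are reflectance values; the result lies in [-1, 1].  Dense
    live vegetation reflects NIR strongly and absorbs red, pushing the
    index toward 1; soil and water sit near or below zero.
    """
    denom = nir + red
    if denom == 0:
        return 0.0  # guard fully dark pixels against division by zero
    return (nir - red) / denom

healthy = ndvi(0.50, 0.08)   # strong NIR, low red -> high index
bare = ndvi(0.20, 0.18)      # similar bands -> index near zero
```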

NDVI and other multi-spectral and hyper-spectral analysis tools, when used properly and based on the right image acquisition equipment, can indicate, for example, the presence of jellyfish swarms and sardine schools in the sea, military mines in shallow ground, the presence of metal objects on the ground, the stress and vigor of vegetation, dry/wet areas, forestry health and diseases, the presence of pests and livestock, and so on. The resulting output of NDVI analysis and similar tools can be presented as a simple graphical indicator (for example, a bitmap image).

However, the analysis of NDVI and all other tools is usually done off-line (following the drone/airplane/satellite acquisition), and the resulting image or set of images (orthophotos), which present different areas of the image in various colors, are presented to the farmer/user after a significant delay. In addition, for an average person/farmer, the output images of these tools are not of much value, as he is usually not an expert and is unable to perform the necessary analyses to fully understand what is presented in the bitmap. Furthermore, in most cases, based on the analysis results, the user is sent to the field for a closer look, in order to find the exact nature of the irregularities which were indicated. There are services which analyze the images and send a report to the farmer, but usually these reports are prepared by a human expert, who examines the images much as a physician examines an X-ray radiograph. Such analyses are sent a day or a few days after the acquisition of the imagery and, as stated above, in many cases require additional and more detailed exploration.

A typical agricultural use of remote sensing can serve as a good example of such a need. If a farmer wants a survey in order to detect, in a timely manner, the presence of whitefly or aphids in his crops, and the size of both whitefly and aphids could be only 1mm-2mm, it is clear that one cannot screen every inch of the crops searching for them. However, there could be changes, evident in lower resolution imagery (visual, hyper-spectral or multi-spectral images), which indicate that a certain area of a field could be infected with some unidentified pests or diseases. Unfortunately, the best satellite imagery (like GeoEye-1) is of 1 pixel per 40cm, which is far from sufficient for early detection of such pests. Aerial drone imaging, shooting, for example, with a 25mm lens three meters above the ground, can cover a rectangular area of 1.6x2.4m (3.84 square meters). Using a 10 Mega Pixel camera, this means about 260 pixels per square cm. Such detailed imagery could allow the identification of whitefly, but acquiring images of hundreds of acres at such resolution would require excessive resources and would make the process impractical.
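The footprint arithmetic of this example can be verified in a few lines, assuming a 10 Mega Pixel sensor spread evenly over the 1.6m x 2.4m footprint:

```python
# 1.6 m x 2.4 m drone footprint, expressed in centimetres.
footprint_cm2 = 160 * 240              # 38,400 cm^2 (3.84 m^2)
sensor_pixels = 10_000_000             # 10 Mega Pixel camera

pixels_per_cm2 = sensor_pixels / footprint_cm2   # roughly 260
pixels_per_cm = pixels_per_cm2 ** 0.5            # roughly 16 per linear cm

# Best satellite imagery cited: 1 pixel per 40 cm, i.e. per 1,600 cm^2.
satellite_pixels_per_cm2 = 1 / (40 * 40)
advantage = pixels_per_cm2 / satellite_pixels_per_cm2  # hundreds of thousands of times denser
```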

Accordingly, there is a need for an intelligent method for automatic analysis and a decision support system, which will allow not only for better remote sensing, but will provide the end user (organizations/farmers) with immediate or short term analysis results and advice, as in many cases delayed advice could be useless, and the damage of a late analysis could be irreversible.

There are known image processing computer programs for analyzing acquired images of an object of unknown shape and comparing them to shapes in a database in order to identify the object. The image classification method based on a deep neural network approach is one of the most commonly used methods for this purpose.

In computer image analysis, blob detection methods are known for detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. In the blob partitioning methodology, each digital image is represented in grey level brightness, that is to say 256 levels of brightness, and each pixel in the image is associated with one of these levels. The blob partitioning approach groups adjacent pixels of the same brightness and represents them on a display as a discrete object or blob. That is to say, the size of each blob is defined by the number of included pixels.
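The grouping of adjacent equal-brightness pixels described above can be sketched as a simple 4-connected flood fill (illustrative code; practical implementations typically quantize grey levels into bins before grouping):

```python
from collections import deque

def blob_partition(img):
    """Group 4-connected pixels of identical grey level into blobs.

    `img` is a 2-D list of grey levels (0-255).  Returns a list of
    blobs, each a list of (row, col) pixel coordinates; the size of a
    blob is simply the number of included pixels.
    """
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            level = img[r][c]
            blob, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and img[ny][nx] == level):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            blobs.append(blob)
    return blobs

grid = [[0, 0, 255],
        [0, 255, 255],
        [0, 0, 0]]
sizes = sorted(len(b) for b in blob_partition(grid))  # one dark blob, one bright blob
```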

SUMMARY OF THE INVENTION

There is provided according to the present invention a method for automatic analysis and a decision support system permitting acquisition and analysis of multi-spectral image data, preferably in real or near real time. In particular, the method includes an initial image acquisition and blob partitioning on a large scale, amounting to hundreds up to thousands of pixels per blob, for an initial analysis, in order to determine whether to proceed with a second image acquisition and blob partitioning on a small scale, grouping tens of pixels per blob, according to selected criteria in order to investigate irregularities in the initial images.

There is provided, according to the invention, a method for multi-spectral imagery acquisition and analysis, the method including capturing preliminary multi-spectral aerial images according to pre-defined survey parameters at a pre-selected resolution, automatically performing preliminary analysis on site or location in the field using large scale blob partitioning of the captured images in real or near real time, detecting irregularities within the pre-defined survey parameters and providing an output corresponding thereto, and determining, from the preliminary analysis output, whether to perform a second stage of image acquisition and analysis at a higher resolution than the pre-selected resolution.

According to embodiments of the invention, the method further includes associating GPS data with the detected irregularities, directing an image acquisition device, in real time or near real time, to capture additional images of at least one of the detected irregularities at higher resolution than the pre-selected resolution using the associated GPS data, and performing analysis using small scale blob partitioning of the captured images in real or near real time.

There is further provided, according to the invention, a system for multi-spectral imagery acquisition and analysis, the system including at least one multi-spectral image capturing device, a processor coupled to the image capturing device, the processor running an image processing module including a blob partitioning module to automatically analyze captured images by blob partitioning according to predefined survey parameters and provide output corresponding to irregularities on each image falling within said predefined survey parameters, wherein the blob partitioning module is capable of implementing both large scale blob partitioning and small scale blob partitioning, and a geographical location indicator adapted and configured to provide an indication of a geographical location of the irregularities, the processor being configured to automatically determine whether to direct one of the multi-spectral image capturing devices to the indicated geographical location to capture images of the irregularities in response to the output.

There is also provided, according to the invention, a method for analysis and object identification including analyzing high resolution multi-spectral images according to pre-defined object parameters, when parameters within the pre-defined object parameters are found, performing blob partitioning on the images containing such parameters to identify blobs, and comparing objects confined to those blobs to pre-defined reference parameters to identify objects having the pre-defined object parameters.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be further understood and appreciated from the following detailed description taken in conjunction with the drawings in which:

Fig. 1a shows a pair of images shot from a UAV flying above a plantation field using suitable filters to obtain Visible and NIR images of the field, in accordance with one embodiment of the present invention;

Fig. 1b shows the pixels in Fig. 1a associated with selected NDVI values, superimposed on the visible image;

Fig. 1c shows the set of the 34 largest blobs derived from the set of pixels of Fig. 1b;

Fig. 2a shows a pair of Visible and NIR images of a given plantation field;

Fig. 2b shows pixels associated with selected NDVI values derived from the images of Fig. 2a;

Fig. 2c shows a set of points associated with polygons having the highest density values from Fig. 2b;

Figs. 3a and 3b illustrate a method where a detection is declared, according to one embodiment of the invention;

Figs. 4a and 4b show a set of detected SMS pests resulting from extracting small blobs whose colors are confined to a distance "close" to the white color;

Figs. 5a and 5b illustrate a high correlation, flagging the presence of SMS pests;

Figs. 6a, 6b and 6c illustrate flagging of pest detection when the correlation measure exceeds a pre-designed threshold value;

Figs. 7a, 7b and 7c illustrate an original CMS pest, a binary image resulting from the projection of the pixels of the largest three blobs of the original pest, and resulting boundaries of the pest after processing, respectively;

Figs. 8a and 8b illustrate a blob partition CMS-based detection method;

Fig. 9a shows a typical mine of a Citrus Leafminer;

Figs. 9b and 9c show boundaries and calculated corner points of the image of Fig. 9a;

Fig. 10 illustrates use of color manipulation to strengthen the colors in the plant that are associated with blight; and

Fig. 11 is a block diagram illustration of a system constructed and operative in accordance with embodiments of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention provides an inventive method for image acquisition and analysis and a decision support system, which will provide better remote sensing, and provide the end user (organizations/farmers) with enhanced imagery acquisition, analysis results and advice. In particular, the method permits a one- or two-stage acquisition and analysis of multi-spectral imagery, wherein at least the first stage analysis, at low resolution over a large area, is performed in real or near real time. From results of this analysis, a determination is made whether or not to proceed with a second stage of multi-spectral image acquisition and analysis, at high resolution on a smaller area. Analysis can be performed on location, i.e., at the time of acquisition of each image or frame with later addition of geographical indications, or analysis can be performed on a geotagged orthophoto of a larger area. For purposes of the present invention, analysis in near real time, for each acquired image, particularly on location (in the field) analysis, means no more than a few seconds (5 seconds or less) after the acquisition session, and, for creation and analysis of a geotagged orthophoto covering a field of a few tens of hectares or more, near real time means a few minutes (15 minutes or less) following acquisition. Providing a processor on board the aircraft that is capturing the multi-spectral images can allow analysis of the individual captured frames and identification of areas of interest in real time. It is also possible, according to the current invention, to perform such near real time or real time analysis if the captured imagery is transmitted over a wireless network to a processing platform (e.g., computer) on the ground.

According to the invention, analysis of the low resolution images is performed using blob partitioning on a large scale, in order to identify irregularities or regions of interest. The second stage of analysis utilizes blob partitioning on a small scale, when analyzing results of high resolution image acquisition, for example, for pest detection and identification. In particular, the invention utilizes acquisition and analysis of multi-spectral data and not just of images in the visible range. According to some embodiments of the invention, the determination as to whether to perform a second stage of acquisition and analysis is made automatically.

Such an automatic decision support system will analyze the acquired imagery, assisting the management of the acquisition process, and allow acquisition of more detailed imagery of selected geographical areas, based on the initial analysis of the preliminary captured images, thus enabling the examination of points of interest in the surveyed areas at a resolution which is impractical to acquire over a larger area. Among other tasks, an enhanced imagery acquisition is concerned with accurately delineating areas with specific required properties, defined in advance by the person requesting the survey, such as a specific desired range of NDVI values or other measures.

The present invention provides an efficient solution for automatically delineating areas associated with desired NDVI (or other vegetation index based) values or any other selected parameters, by providing a method for additional efficient visual and hyper-spectral or multi-spectral automatic aerial imagery acquisition and analysis. This is accomplished by acquiring the images at a resolution termed "low resolution acquisition", sufficient, during analysis, to allow accurate detection of blobs whose associated values are confined to the required NDVI or other parameter range. This is performed, typically, by automatically searching for blobs indicating vegetation in a state of stress of some kind (for example, as manifested in NDVI values which are on average 15% to 20% lower than the optimal NDVI value). The stress could indicate various issues regarding vegetation state and health, such as dryness, vigor, pests and diseases.
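The stress criterion mentioned above (average NDVI 15% to 20% below the optimal value) reduces to a one-line test; the optimal value of 0.80 used in the example is an assumed figure for a healthy canopy, not one given in the specification:

```python
def is_stressed(mean_ndvi, optimal_ndvi, drop_fraction=0.15):
    """Flag a blob whose mean NDVI is at least `drop_fraction` (15% by
    default, per the text) below the optimal NDVI for the crop."""
    return mean_ndvi <= optimal_ndvi * (1.0 - drop_fraction)

# With an assumed optimum of 0.80: 0.65 <= 0.68 is flagged, 0.75 is not.
flagged = is_stressed(0.65, 0.80)
healthy = is_stressed(0.75, 0.80)
```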

It will be appreciated that the image acquisition device can be arranged to acquire multi- spectral images across a wide band of the spectrum. Alternatively, or in addition, the image acquisition device can be arranged to acquire images in a plurality of pre-selected bands or color channels, selected according to the objects being sought in the survey. In either case, analysis of the multi-spectral image will be performed utilizing acquired color channels selected according to the objects being sought in the survey.

Preferably, the low resolution acquisition device uses a low distortion camera, preferably using a NADIR gimbal. This enables securing a vertical view, thus minimizing the distortion due to the camera angle. Alternatively, other suitable low resolution acquisition devices can be utilized. Preferably, the image acquisition device is equipped with an automatic digital calibration apparatus, thus enabling the processing of individually captured images without the need for complex alignment of the color channels, and avoiding the need for pre-processing, such as morphing the images and distortion correction. Examples of suitable image capturing apparatus are described in Applicants' pending US patent application USSN 62/260,272, filed 26 Nov. 2015.

The current invention allows fast and even real time analysis of the acquired imagery by its unique imagery acquisition and file type. The multi-spectral image acquisition should preferably be carried out using a file type (like JPEG, RAW, TIFF or any other image file format) in which the multi-spectral channels (Blue, Green, Red, Red-Edge, NIR - all or some of them) are saved as multiple channel images side by side. This can be performed most efficiently through the use of multiple lenses (as described in the image capturing patent application cited above) using a single sensor divided into separate areas, although the invention is not limited to these examples. Using such a camera (having multiple lenses and a single sensor), the different RGB (Red Green Blue) and NIR (Near InfraRed) channels can be saved separately. Thus, for example, the red channel can be captured on the left side of the sensor, while the near infra-red channel can be captured on the right side of the sensor. Also, different ranges of green and blue can be captured on different areas of the sensor, while all the channels are then saved in a single file. Such a capturing process allows simple splitting of up to 6 distinctive channels using two lenses with a single sensor. Each side may optimally contain up to three channels of shades of red, green and blue. Such a file structure allows simple and fast channel separation. As the different red types (650nm or 710nm) are saved side by side with the 850nm channel, and in the same way one can have different narrow blue and green channels, splitting the channels to RGB and separating the image into two (in case two lenses are used) will produce all the different channels. Such a file format can be any standard RGB format file like JPG, BMP, PNG, TIFF, etc., although these files are limited to 3 bands or 3 bands and a Gamma channel. As important vegetation indices currently in use employ more than two bands to accurately detect blobs of interest (such as ARVI - "Atmospherically Resistant Vegetation Index" - which employs Red, Blue and NIR bands), such side by side separation is vital for correct identification of blobs of interest.
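The side-by-side channel layout described above makes channel separation a matter of slicing. The following sketch uses our own toy data, with the left half of the frame holding one lens' three bands and the right half the other's:

```python
def split_channels(img):
    """Split a side-by-side dual-lens frame into up to six mono channels.

    `img` is a list of rows, each row a list of (b1, b2, b3) tuples as
    stored in a standard 3-band file.  The left half of the sensor holds
    one lens' three bands and the right half the other lens' three, so a
    single RGB-format file yields up to six distinct spectral channels.
    """
    half = len(img[0]) // 2
    channels = []
    for side in (slice(0, half), slice(half, None)):
        for band in range(3):
            channels.append([[px[band] for px in row[side]] for row in img])
    return channels  # left bands 0-2, then right bands 0-2

# 2x2 toy frame: column 0 is the "left lens", column 1 the "right lens".
frame = [[(650, 10, 20), (850, 30, 40)],
         [(651, 11, 21), (851, 31, 41)]]
chans = split_channels(frame)
```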

The present invention utilizes blob detection methods to detect regions in the digital images that differ in properties, such as brightness or color, compared to surrounding regions. Such regions or irregularities indicate possible problem areas that require further investigation. The detection of blobs associated with predefined parameters in the surveyed area guides the second step of the imagery acquisition mission, the high resolution step. Such guidance can be accomplished by associating GPS data with these blobs and directing an image capturing device on a UAV/drone/hovercraft/ultralight, etc., to take additional highly detailed (higher resolution) images of points of interest (irregularities), whether in near-real time, realtime or post processing. Such detailed imagery acquisition can be done by the same aerial vehicle and capturing device, or by the same vehicle using another capturing device, or by another aerial vehicle with a higher resolution capturing device, or by a ground vehicle with a capturing device.
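Associating GPS data with a detected blob can be sketched as below; the flat-earth conversion (one degree of latitude is roughly 111,320 m) and the north-up image orientation are simplifying assumptions that are adequate over a single field:

```python
import math

def blob_waypoint(blob, origin_lat, origin_lon, metres_per_pixel):
    """Turn a blob's centroid pixel into an approximate GPS waypoint.

    `blob` is a list of (row, col) pixels; (0, 0) is the geotagged
    top-left corner of a north-up frame, so increasing row moves south
    and increasing column moves east.
    """
    cy = sum(p[0] for p in blob) / len(blob)
    cx = sum(p[1] for p in blob) / len(blob)
    dlat = -(cy * metres_per_pixel) / 111_320.0
    dlon = (cx * metres_per_pixel) / (111_320.0 * math.cos(math.radians(origin_lat)))
    return origin_lat + dlat, origin_lon + dlon
```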

Once highly detailed imagery is acquired (VIS, hyper-spectral or multi-spectral imagery), it is possible to analyze the acquired imagery and assist the farmer in deciding what steps he needs to take, immediately or in the short term, or in the long term. This second stage of analysis is also performed using blob partitioning, typically small scale blob partitioning.

In other words, the current invention applies automatic analysis of first low and then high resolution imagery, in order to produce a fully automatic decision support system utilizing remote sensing devices. Low resolution imagery can be the result of high altitude and/or a low resolution camera and/or short focal length aerial imagery acquisition. Since lower resolution provides less data to process, low resolution scanning is much faster and more efficient. High resolution detailed imagery can be acquired through low altitude acquisition and/or a high resolution camera and/or a long focal length (zoom) or even acquiring the imagery from the ground.

According to the present invention, different methods are used for each step of the image acquisition process, in order to utilize the image capturing devices efficiently and perform fast and practical scanning of the environment/fields, where possible in real time or near real time, while "focusing" high resolution imagery on blobs of interest and identifying, as accurately as possible, their nature according to the required survey. For agricultural purposes, for example, the current invention implements automatic systematics to identify pests and diseases and to assess vegetation vigor and other aspects of the crops.

It is important to clarify that blobs of interest are defined in advance, according to the purpose of the particular survey. If, for example, landmines are to be found, then the pixels representing their graphic representation after the suitable analysis (using NDVI or any other vegetation index based metric or pre-selected parameters) will be defined as the interesting pixels. If, for example, cotton canopy is to be found, then the pixels representing the white cotton bolls will be defined as the interesting pixels and blobs.

The first step of one possible embodiment according to the current invention includes aerial acquisition of multi-spectral imagery of rather low resolution. Such low resolution could be 1 pixel per 100 square centimeters (10cm x 10cm), or even 1 pixel per square meter, or any other resolution similar to that captured by satellites, which has been proven over the years to be adequate for remote sensing for the required survey. In other words, if, for example, an agricultural survey is focused on cotton canopy, it is already known (as documented in various USDA publications) that image acquisition can be done from an altitude of a few hundred meters using a 50mm focal length camera. In such a case, it is known that a multi-spectral camera of four spectral bands (Red, Green, Blue and Near Infra-Red) can acquire the required imagery for NDVI presentation.
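The relation between altitude, focal length, and ground resolution in this example follows the standard ground-sample-distance formula, GSD = altitude x pixel pitch / focal length; the 5 micron pixel pitch below is an assumed sensor parameter, not one from the specification:

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground footprint of one pixel, in metres, for a nadir-pointing camera."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# 300 m altitude with a 50 mm lens and a 5 um pixel pitch gives 3 cm per
# pixel, comfortably within the "low resolution" regime cited above.
gsd = ground_sample_distance(300, 50, 5.0)
```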

The second step of this agricultural remote-sensing example, according to the current invention, includes an analysis according to NDVI (and/or one of the tools based on it). The resulting values of the analysis can be presented as an image, which presents the areas covered with cotton in one color and those uncovered by cotton in another color. The same principle applies to most agricultural surveys, where dry or unhealthy vegetation or weeds will be "painted" one color and healthy vegetation will appear "painted" another color. These colored pixels merely represent the results of the analyses in a user-friendly way, and the automatic analysis uses the values which are attached to these pixels.

The third step, according to the current invention, includes the automatic gathering of the pixels associated with the pre-defined parameters, e.g., NDVI values (or any other agricultural vegetation index metric), into blobs during image analysis. The creation of such blobs entails a uniform quantization operation that is dependent on the required survey. In some surveys (like whitefly detection), very few pixels cannot be ignored and even small blobs are of significance, while in other surveys (like tuna school detection) only large blobs are of importance. As the NDVI map is actually a synthetic image, i.e., each pixel has a calculated value derived from the RED and NIR channels, prior to any such blob creation the NDVI map may undergo an advanced de-noising operation, allowing the creation of continuous blobs that optimally preserve important features of the acquired image.

This preliminary analysis is used by the system processor to determine whether a second stage of image acquisition and analysis is required, for example, if irregularities are observed and if their extent or size passes a pre-selected threshold. If not, the survey ends and this information is sent to the person requesting the survey. On the other hand, if it is determined that a second stage is warranted, the processor either automatically proceeds, as described hereafter, or notifies the person requesting the survey and awaits manual instructions. Alternatively, the analysis of the images after blob partitioning can be accomplished manually (visually, by a human being) in order to determine whether to proceed to a second step of acquisition.
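The decision logic of this step can be sketched as a pair of pre-selected thresholds; the function and parameter names are our own illustration:

```python
def needs_second_stage(blob_sizes, min_blob_pixels, min_total_pixels):
    """Decide whether the preliminary (low resolution) analysis warrants
    a second, high resolution acquisition pass.

    `blob_sizes` are pixel counts of detected irregularity blobs; both
    thresholds are survey-dependent and pre-selected by the person
    requesting the survey.
    """
    significant = [s for s in blob_sizes if s >= min_blob_pixels]
    return len(significant) > 0 and sum(significant) >= min_total_pixels
```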

In the case of automatic acquisition and analysis, the fourth step includes the attachment of GPS data to the blobs indicating irregularities on which the survey is focused (pest infested fields, water leaks, etc.). The GPS data can be attached to the images in various ways. Some cameras have an embedded GPS sensor integrated with them. In other cases, cameras are operated wirelessly by an operating smartphone (as is the case with Sony's QX1 lens-like camera and the Olympus Air A01 camera), which can add the GPS data from the operating smartphone or an autopilot/processing platform attached to the camera. The preferred solution, according to the current invention, due to better accuracy, is to add the GPS data from the aerial vehicle's GPS sensor, which is usually positioned on top of the vehicle and connected to the autopilot, or directly to the camera (if supported by the camera). Synchronizing the capturing time of the imagery and the GPS data is of importance, as aerial vehicles can move fast. It will be appreciated that each frame can have GPS data associated with it, or an orthophoto of a larger area can be geotagged.
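Synchronizing capture times with the vehicle's GPS track, as noted above, can be done by linear interpolation between the two fixes bracketing each frame's timestamp (a sketch with illustrative data):

```python
from bisect import bisect_left

def gps_at(t, track):
    """Linearly interpolate a GPS track at capture time `t`.

    `track` is a time-sorted list of (time, lat, lon) fixes; outside the
    track's time span the nearest endpoint is returned.
    """
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    t0, la0, lo0 = track[i - 1]
    t1, la1, lo1 = track[i]
    f = (t - t0) / (t1 - t0)
    return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
```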

Another way, and the most accurate one, is by creating an orthophoto of the acquired images and fitting it to an accurate map. Such matching, although it is slow and demands rather heavy computation, overcomes bias and deviation due to lens distortion, lack of satellite signals, and the angle of the vehicle while acquiring the imagery.

The fifth step, according to the current invention, includes the automatic preparation of highly detailed imagery acquisition of the areas represented by the blobs of interest. The imagery acquisition device (e.g., airborne or ground borne camera), which is preferably a multi-spectral or hyper-spectral camera, is sent to the geographical location of these blobs to capture detailed imagery of those locations. Some possible ways to acquire highly detailed imagery are to use a low altitude hovercraft or helicopter flying a few feet/meters above the crop, or sending a ground robot and/or a human being to acquire the imagery from the closest possible distance. Another method is to use a camera with a longer focal length (zoom). However, this method is quite expensive and inefficient if hyper-spectral or multi-spectral highly detailed imagery is required. The accuracy of the GPS data is of importance, as such close looks at the blobs of interest could mean, for example, the acquisition of imagery of a 0.8m x 1.2m rectangle at 10 Mega Pixels, as described above.

It should be noted that in some surveys (like tuna school detection), the GPS data of a blob in a single image may not be sufficient, and a trajectory needs to be predicted based on a number of successive images, with an additional analysis of the GPS data changes of the blobs. Such analysis, calculated with the processing and acquisition time, will allow the system to follow the required blobs of interest, which are not static, and acquire detailed imagery of such blobs at their predicted location.
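For moving targets, the trajectory prediction described above can be as simple as linear extrapolation from two successive geotagged detections (a first-order sketch; real targets such as fish schools may need higher-order motion models):

```python
def predict_position(fix0, fix1, dt_between, dt_ahead):
    """Extrapolate the next (lat, lon) of a moving blob.

    `fix0` and `fix1` are successive (lat, lon) detections separated by
    `dt_between` seconds; the prediction is `dt_ahead` seconds past
    `fix1`, assuming constant velocity over that horizon.
    """
    vlat = (fix1[0] - fix0[0]) / dt_between
    vlon = (fix1[1] - fix0[1]) / dt_between
    return (fix1[0] + vlat * dt_ahead, fix1[1] + vlon * dt_ahead)
```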

The sixth step, according to the current invention, includes automatic systematics and identification of the findings. Systematics, or systematic biology, for purposes of the invention, includes, inter alia, describing and providing classifications for the organisms, keys for their identification, and data on their distributions. This step is a rather complex step, and it includes a few sub-steps in itself. It will be appreciated that the automatic systematics and identification of objects described below can be utilized to analyze high resolution imagery captured in any fashion, and not only by means of the automatic image acquisition methods and system described above.

First, the acquired imagery is analyzed according to the survey type. For example, identifying greenfly on green leaves is more difficult than identifying whitefly, if the acquired imagery is of the visual spectrum only. However, adding NIR imagery of greenfly on green leaves removes the aphid's green camouflage, as the greenfly does not reflect the near infrared spectrum in the same way that chlorophyll does.

Second, once the presence of suspected objects (aphids, greenfly, worms, grasshoppers, etc.) is found, fragments of the imagery containing such objects are extracted, typically by drawing squares, a few pixels long, centered at the centroid of each suspected object. Preferably, each fragment contains a single object with minimal background "noise", to allow for better classification and identification. The suspected objects are preferably first classified by utilizing a remote server containing a reference pest database according to the type of survey. A preferable way to classify the pests/diseases of an agricultural survey, for example, will include the location, crop type, and a reference to a bank of potential pests/diseases which are relevant to that specific type of crop. However, it is also possible to perform the automatic systematics on the acquisition platform. In this case, the acquisition platform will be equipped with an internal program capable of identifying the pest by comparing it to a small data set of reference pest images residing in the platform processing memory. The seventh step includes the identification of the suspected (and classified, if they were classified in the previous step) objects.

The basic underlying idea for identifying the objects in this step is based on the premise that proper partitioning of the acquired image into confined-size sub-images guarantees, with high probability, that, if the suspected objects are present in the detailed acquired image, they will be found in the confined-size sub-images.

The size of the confined-size sub-images is set according to the survey type, matching the image size to that of the blobs which are supposed to represent the objects on which the survey is focused. Once a set of such confined-size sub-images containing blobs is prepared, the objects within this set of sub-images can be identified either by sending the set of sub-images to a remote server for identification or by identifying the objects on the spot by activating a local identification code.
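For illustration only (not part of the specification's reference implementation), the fragment-extraction idea above can be sketched as a small NumPy helper that crops a fixed-size square around each blob centroid; the function name and parameters are hypothetical:

```python
import numpy as np

def extract_fragments(image, centroids, half_size=8):
    """Crop a square sub-image of side 2*half_size + 1 around each blob
    centroid, clamped to the image borders, so that each fragment ideally
    contains a single suspected object with minimal background noise."""
    fragments = []
    h, w = image.shape[:2]
    for (cy, cx) in centroids:
        y0 = max(0, int(cy) - half_size)
        y1 = min(h, int(cy) + half_size + 1)
        x0 = max(0, int(cx) - half_size)
        x1 = min(w, int(cx) + half_size + 1)
        fragments.append(image[y0:y1, x0:x1])
    return fragments
```

Such fragments could then be uploaded to a remote classification server, or matched against a small on-platform reference set, as the text describes.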

Various image partitioning methods and corresponding detection methods are detailed in this invention. Generally, two types of objects are searched for, objects having simple structure and objects having complex structure. A few specific image partition-based methods are tailored for each type.

The eighth step, according to the current invention, includes sending the results of the analysis to the addressee (the one who ordered the survey or the service provider). The results include the possible identification of the findings, and they may also include advice regarding the steps that need to be taken in order to, for example, exterminate a pest, put out a fire, or erect a fence and warning signs around old land mines (depending on the survey). The results can be sent with the small fragments of the imagery from the cloud (Internet), or even from the aerial vehicle, for example when the processing platform is a smartphone equipped with a cellular connection (e.g., 3G or LTE). In some possible embodiments of the current invention, the fragments of the imagery, the location, and the advice to the user are sent through a multimedia messaging service (MMS) or through an instant messaging service (IM), like Messenger, WhatsApp, Skype, etc.

Detailed description of some of the steps in an exemplary agricultural survey

Identification of Areas Associated with Specific NDVI (and similar metrics) Values

A preferred process will include the following steps:

1. Capturing visual and/or hyperspectral or multi-spectral imagery from an efficient range (preferably an altitude of a few dozen meters to a few hundred meters, and even more);

2. Analyzing the imagery according to the desired survey (e.g., dryness; vegetation vigor; pests; diseases; etc.);

3. Defining, by large scale blob partitioning, the areas of interest in the images which call for more detailed examination;

4. Attaching precise GPS data to the areas of interest; and

5. Automatically directing the same aerial vehicle or another vehicle with an image capturing device to these areas of interest, directing it to take detailed (higher resolution) imagery according to the survey type. Typically, the criteria directing the platform to perform high resolution acquisition will be the presence of blobs that are associated with NDVI values 10% or 20% lower than the optimal NDVI value - i.e., an indication of "vegetation stress".
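The NDVI-based triggering criterion in step 5 can be sketched as follows (an illustrative aid only, not the patented implementation; the optimal NDVI value and the 10-20% drop are the parameters named in the text, all function names are hypothetical):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel:
    (NIR - Red) / (NIR + Red), with eps guarding against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def stress_mask(nir, red, optimal_ndvi=0.8, drop=0.2):
    """Mark pixels whose NDVI falls a given fraction (e.g. 10% or 20%)
    below the optimal NDVI for the surveyed crop -- an indication of
    'vegetation stress' that would trigger high resolution acquisition."""
    return ndvi(nir, red) < optimal_ndvi * (1.0 - drop)
```

Pixels flagged by such a mask would then be grouped into blobs, geo-tagged, and used to direct the second, higher-resolution acquisition pass.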

CLAIMS

1. A method for multi-spectral imagery acquisition and analysis, the method comprising:

capturing multi-spectral aerial images according to pre-defined survey parameters at a pre-selected resolution;

automatically performing preliminary analysis using large scale blob partitioning of the captured images in real or near real time;

detecting irregularities in the analyzed image data according to the pre-defined survey parameters and providing an output corresponding thereto;

associating GPS data with the detected irregularities; and

determining, from said output, whether to perform a second stage of image acquisition at a higher resolution than said pre-selected resolution and analysis using small scale blob partitioning.

2. The method according to claim 1, further comprising:

directing an image acquisition device, to capture additional multi-spectral images of at least one of said detected irregularities at higher resolution than said preselected resolution using said associated GPS data; and

performing analysis using small scale blob partitioning of the additional multi-spectral images in real or near real time.

3. The method according to claim 1 or claim 2, wherein said step of determining is performed automatically.

4. The method according to claim 2, wherein the step of directing includes automatically directing the image acquisition device.

5. The method according to claim 1 or claim 2, wherein the step of determining includes:

constructing a binary map of the image, wherein pixels associated with a predefined range of parameter values are set to 1 and the remaining pixels are set to 0; constructing a set of all possible blobs associated with the binary map by raster scanning the binary map, finding blobs along the scanning process, and assigning them a running index; and

defining a group of blobs as an irregularity when:

a. a number of pixels belonging to a given single blob exceeds a predefined threshold parameter; or

b. a number of pixels belonging to a set of blobs, all confined to a given radius (from a gravity center of the set of blobs), exceeds a predefined threshold.
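As an illustrative sketch only (not a reference implementation of the claim), the binary-map construction, raster-scan blob labeling, and criterion (a) can be rendered in Python; the BFS-based labeling and all thresholds are assumptions made for the example:

```python
import numpy as np
from collections import deque

def label_blobs(binary):
    """Raster-scan the binary map and assign a running index to each
    4-connected blob of 1-pixels (simple BFS labeling sketch)."""
    labels = np.zeros_like(binary, dtype=int)
    h, w = binary.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, mx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= mx < w \
                                and binary[ny, mx] and labels[ny, mx] == 0:
                            labels[ny, mx] = count
                            queue.append((ny, mx))
    return labels, count

def is_irregularity(labels, n_blobs, single_blob_threshold=50):
    """Criterion (a) of the claim: any single blob whose pixel count
    exceeds the predefined threshold flags an irregularity."""
    sizes = [(labels == i).sum() for i in range(1, n_blobs + 1)]
    return any(s > single_blob_threshold for s in sizes)
```

Criterion (b), grouping blobs confined to a given radius around their common gravity center, would be layered on top of the same labeled map.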

6. The method according to claim 2, further comprising comparing morphology of at least one of said detected irregularities with morphology of known objects.

7. The method according to any one of the preceding claims, wherein said multi-spectral images include at least two images selected from the group including visible and NIR images.

8. A system for multi-spectral imagery acquisition and analysis, the system comprising:

at least one multi-spectral image capturing device;

a processor coupled to the image capturing device;

the processor running an image processing module including a blob partitioning module to automatically analyze captured images by blob partitioning according to predefined survey parameters and provide output corresponding to irregularities on each image falling within said predefined survey parameters;

wherein the blob partitioning module is capable of implementing both large scale blob partitioning and small scale blob partitioning and

a geographical location indicator adapted and configured to provide an indication of a geographical location of the irregularities;

the processor being configured to automatically determine whether to direct one of the at least one multi-spectral image capturing devices to said indicated geographical location to capture images of said irregularities in response to said output.

9. The system according to claim 8, wherein the processor is in two-way communication with a user for exchanging data and instructions.

10. A method for identifying objects in multi-spectral images, the method comprising:

analyzing high resolution multi- spectral images according to pre-defined object parameters;

when parameters within the pre-defined object parameters are found, performing blob partitioning on said images containing such parameters to identify blobs; and

comparing objects confined to said blobs to pre-defined reference parameters to identify objects having said object parameters.

11. The method according to claim 10, further comprising:

classifying said objects before the step of comparing.

12. The method according to either claim 10 or claim 11, further comprising:

partitioning the acquired images into a confined size sub-images set according to the survey type,

matching an image size to that of the blobs; and

identifying the objects within the set of sub-images by either one of:

sending the set of these sub-images to a remote server for identification; or

identifying the object on the spot by activating a local identification code.

13. The method according to either claim 10 or claim 11, further comprising activating an internal pattern matching program between the blobs and reference patterns in order to detect a selected Simple Morphology Structure (SMS) object in the image.

14. The method according to either claim 10 or claim 11, further comprising: converting the image to a gray level image that undergoes a gray level blob-based partitioning;

subjecting a binary map of each of the gray level values to a Hoshen-Kopelman cluster labeling algorithm, and creating and storing a set of all possible blobs consisting of all gray level values;

extracting, from this set of calculated blobs, only small blobs having a size of smaller than a predefined threshold number of pixels;

partitioning the image to several, equal size sub-images;

counting a number of small blobs in each sub-image; and

declaring detection of the presence of Simple Morphology Structure pests when a number of small blobs in a given percentage of the sub-images exceeds a predefined calibrated threshold value.
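The final counting-and-declaration steps of this claim can be sketched as follows (illustrative only; it assumes the small blobs' centroids were already extracted by the preceding labeling steps, and the grid size, per-tile threshold, and tile fraction are hypothetical calibration parameters):

```python
import numpy as np

def detect_sms_pests(blob_centroids, image_shape, grid=(4, 4),
                     per_tile_threshold=3, tile_fraction=0.25):
    """Partition the image into equal-size sub-images, count small blobs
    per sub-image, and declare Simple Morphology Structure pest detection
    when a given fraction of sub-images exceeds the calibrated per-tile
    threshold."""
    h, w = image_shape
    rows, cols = grid
    counts = np.zeros(grid, dtype=int)
    for (y, x) in blob_centroids:
        r = min(int(y * rows / h), rows - 1)
        c = min(int(x * cols / w), cols - 1)
        counts[r, c] += 1
    exceeding = (counts > per_tile_threshold).sum()
    return exceeding / (rows * cols) >= tile_fraction
```

The intuition is that SMS pests (e.g., aphids) appear as many tiny blobs spread over the canopy, so a per-tile count is more robust than a single global count.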

15. The method according to either claim 10 or claim 11, further comprising: converting the image to a gray level image that undergoes a gray level blob partition;

selecting, from the set of partitioned blobs, blobs having predefined very small size and registering their gravity centers;

extracting sub images around each registered gravity center; and

comparing the extracted sub images to reference images of Simple Morphology Structure pests, using a machine vision method; and

declaring detection of Simple Morphology Structure pests when the correlation exceeds a preset threshold value, and/or successful classification is flagged.

16. The method according to either claim 10 or claim 11, further comprising: performing blob partitioning on the multi-spectral images; selecting small blobs having a color that lies within a close distance to a pre-selected reference color that typically characterizes the Simple Morphology Structure pests, where the distance between two given RGB colors is defined as follows:

If X1 = {R1, G1, B1} and X2 = {R2, G2, B2}

Then

Distance[X1, X2] = Max{Abs[R1 - R2], Abs[G1 - G2], Abs[B1 - B2]}.
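The distance defined in this claim is the maximum absolute channel difference (the Chebyshev, or L-infinity, metric). A direct one-line rendering, for illustration only:

```python
def rgb_distance(x1, x2):
    """Distance between two RGB colors as defined in the claim:
    the maximum absolute difference over the three channels
    (Chebyshev / L-infinity metric)."""
    return max(abs(a - b) for a, b in zip(x1, x2))
```

A blob color would be kept as a candidate when this distance to the reference pest color falls below a pre-selected tolerance.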

17. The method according to either claim 10 or claim 11, further comprising: partitioning an acquired gray-level image X of a Visible image into several very similar non-overlapping images, by forming a row-wise and column-wise interlacing operator, resulting in four sub-images X1, X2, X3 and X4 whose union accurately spans the original image;

defining image A as an acceleration image of X: A = (X1 + X4 - 2*X3), resulting in an image A that enhances contrasts;

extracting from A a set of a pre-selected number of pixels having the highest brightness value to yield a set of pixels;

extracting small sub images around the set of calculated high value pixels; and comparing their content to reference Simple Morphology Structure pests' images residing in a memory; and

flagging presence of Simple Morphology Structure pests when there is a high correlation.
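The interlacing partition and the acceleration image of this claim can be sketched with NumPy strided slicing (illustrative only; even image dimensions are assumed, and the specific even/odd assignment of X1 through X4 is an assumption, since the claim does not fix it):

```python
import numpy as np

def acceleration_image(x):
    """Row- and column-wise interlacing of a gray-level image into four
    non-overlapping sub-images X1..X4, followed by the contrast-enhancing
    combination A = X1 + X4 - 2*X3 given in the claim."""
    x = x.astype(float)
    x1 = x[0::2, 0::2]  # even rows, even columns
    x2 = x[0::2, 1::2]  # even rows, odd columns (part of the partition)
    x3 = x[1::2, 0::2]  # odd rows, even columns
    x4 = x[1::2, 1::2]  # odd rows, odd columns
    return x1 + x4 - 2.0 * x3
```

On a uniform (contrast-free) image A vanishes everywhere, which is consistent with A acting as a contrast enhancer: only pixels near brightness transitions yield large values.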

18. The method according to either claim 10 or claim 11, further comprising: converting a Visible or NIR image to a gray scale image;

applying an edge detection operator to the gray level image; selecting, from the resulting edge-detected image, a set of pixels whose intensity values exceed a given threshold value;

extracting small sub images around these pixels;

comparing content of said sub images against known reference images using a correlation method; and

flagging pest detection if the correlation measure exceeds a pre-designed threshold value.

19. The method according to either claim 10 or claim 11, further comprising: performing a binary threshold operation on a gray level image of a Visible or NIR image;

from the resulting image, selecting a set of pixels whose intensity values exceed a predefined threshold value;

extracting small sub images around these pixels;

comparing content of the extracted small sub images against known reference images using a correlation method; and

flagging Simple Morphology Structure pest detection when a correlation measure exceeds a pre-designed threshold value.

20. The method according to either claim 10 or claim 11, further comprising: performing gray level conversion and histogram equalization on an acquired image;

performing blob partitioning on the gray level image;

selecting blobs having a moderate to high number of pixels and extracting sub images surrounding the gravity centers of these blobs having a radius less than a preselected size;

performing further blob partition on each of the selected sub images;

arranging the calculated blobs with respect to their size in pixels;

extracting a selected number of large blobs and creating a binary image comprising their associated pixels;

applying, in case a Complex Morphology Structure (CMS) pest is "captured" in the extracted sub-image, two binary morphology operators on the binary map comprising the large blobs: a binary dilate operator and a binary erode operator;

calculating a difference between the dilated and the eroded images, resulting in an image that approximately captures boundaries of a pest morphology; and

comparing the calculated boundaries with stored reference images of CMS pest boundaries.
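The dilate-minus-erode boundary extraction of this claim can be sketched in pure NumPy (illustrative only; a 3x3 structuring element is an assumption, as the claim does not specify one):

```python
import numpy as np

def binary_dilate(mask):
    """3x3 binary dilation: OR of the pixel with its 8 neighbours."""
    out = mask.copy()
    p = np.pad(mask, 1)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def binary_erode(mask):
    """3x3 binary erosion: AND of the pixel with its 8 neighbours."""
    out = mask.copy()
    p = np.pad(mask, 1)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def blob_boundary(mask):
    """Dilated map minus eroded map: an approximate boundary of the
    pest morphology, as described in the claim."""
    return binary_dilate(mask) & ~binary_erode(mask)
```

The resulting one-pixel-thick (or slightly thicker) boundary ring is what gets matched against the stored reference CMS pest boundaries.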

21. The method according to either claim 10 or claim 11, further comprising: converting a Visible image or an NIR image to a gray level image;

performing blob partition on the gray level image;

extracting blobs having a selected pixel size;

extracting an image surrounding each such blob; and

classifying each such image in a remote server to a category of CMS pests using a machine vision method.

22. The method according to either claim 10 or claim 11, further comprising: extracting images of blobs having a relatively large number of pixels;

calculating boundaries of likely objects residing in the extracted images;

applying an image corner filter on each such image;

counting a number of calculated corner coordinates; and

declaring detection of Citrus Leafminer by indirect identification when the calculated number of corner coordinates exceeds a given threshold.

23. The method according to either claim 10 or claim 11, further comprising: performing color manipulation on multi-spectral images of a plant to strengthen colors in the plant that are associated with blight;

performing blob partitions to a set of all pixels in the image associated with a principal dominant color, using a pre-defined tolerance;

seeking a match for the detected principal color blobs with pixels associated with a secondary dominant color in the immediate vicinity of that blob;

declaring a blighted leaf when such a match is found.

24. The method according to either claim 10 or claim 11, further comprising:

performing color manipulation on a high resolution multi-spectral image to strengthen and isolate an object's appearance from other objects in the image;

performing blob partitioning on the manipulated image;

dividing the images into color channels according to at least two color spaces; and

combining selected color channels from the at least two color spaces to enhance the object's appearance.

25. The method according to claim 24, further comprising performing cross detection of various color combinations to improve object separation from the background color.

26. The method according to claim 24 or claim 25, wherein the step of combining further includes incorporating the 650 nm red band of the multi-spectral spectrum.

Documents

Application Documents

# Name Date
1 201817019943-Correspondence to notify the Controller [18-11-2024(online)].pdf 2024-11-18
2 201817019943-IntimationOfGrant17-12-2024.pdf 2024-12-17
3 201817019943-STATEMENT OF UNDERTAKING (FORM 3) [28-05-2018(online)].pdf 2018-05-28
4 201817019943-FORM 1 [28-05-2018(online)].pdf 2018-05-28
5 201817019943-FORM-26 [18-11-2024(online)].pdf 2024-11-18
6 201817019943-PatentCertificate17-12-2024.pdf 2024-12-17
7 201817019943-Annexure [06-12-2024(online)].pdf 2024-12-06
8 201817019943-DRAWINGS [28-05-2018(online)].pdf 2018-05-28
9 201817019943-US(14)-HearingNotice-(HearingDate-22-11-2024).pdf 2024-10-25
10 201817019943-PETITION UNDER RULE 137 [06-12-2024(online)].pdf 2024-12-06
11 201817019943-FER.pdf 2021-10-18
12 201817019943-DECLARATION OF INVENTORSHIP (FORM 5) [28-05-2018(online)].pdf 2018-05-28
13 201817019943-Written submissions and relevant documents [06-12-2024(online)].pdf 2024-12-06
14 201817019943-COMPLETE SPECIFICATION [28-05-2018(online)].pdf 2018-05-28
15 201817019943-Annexure [18-08-2021(online)].pdf 2021-08-18
16 abstract.jpg 2018-07-11
17 201817019943-ABSTRACT [17-08-2021(online)].pdf 2021-08-17
18 201817019943.pdf 2018-08-01
19 201817019943-CLAIMS [17-08-2021(online)].pdf 2021-08-17
20 201817019943-COMPLETE SPECIFICATION [17-08-2021(online)].pdf 2021-08-17
21 201817019943-FORM-26 [28-08-2018(online)].pdf 2018-08-28
22 201817019943-CORRESPONDENCE [17-08-2021(online)].pdf 2021-08-17
23 201817019943-Proof of Right (MANDATORY) [04-09-2018(online)].pdf 2018-09-04
24 201817019943-DRAWING [17-08-2021(online)].pdf 2021-08-17
25 201817019943-Power of Attorney-310818.pdf 2018-09-05
26 201817019943-OTHERS-310818.pdf 2018-09-05
27 201817019943-FER_SER_REPLY [17-08-2021(online)].pdf 2021-08-17
28 201817019943-Correspondence-310818.pdf 2018-09-05
29 201817019943-FORM 3 [17-08-2021(online)].pdf 2021-08-17
30 201817019943-FORM-26 [17-08-2021(online)].pdf 2021-08-17
31 201817019943-FORM 18 [10-09-2019(online)].pdf 2019-09-10
32 201817019943-OTHERS [17-08-2021(online)].pdf 2021-08-17

Search Strategy

1 201817019943E_25-02-2021.pdf

ERegister / Renewals

3rd: 21 Feb 2025 (from 08/11/2018 to 08/11/2019)

4th: 21 Feb 2025 (from 08/11/2019 to 08/11/2020)

5th: 21 Feb 2025 (from 08/11/2020 to 08/11/2021)

6th: 21 Feb 2025 (from 08/11/2021 to 08/11/2022)

7th: 21 Feb 2025 (from 08/11/2022 to 08/11/2023)

8th: 21 Feb 2025 (from 08/11/2023 to 08/11/2024)

9th: 21 Feb 2025 (from 08/11/2024 to 08/11/2025)