Abstract: Disclosed herein is a method and system for counting trees with accurate geotagging, which can serve as reliable input for environmental impact analysis. The method employs an imaging device (102) to capture aerial images (202) over target areas (200), a GPS circuit (104) to obtain latitude, longitude, and pixel coordinate information, and a microprocessor (300) for image processing/analysis with tree detection by a deep neural network (304). The microprocessor (300) is configured to: select centres (c1, c2) of two nearby images having an overlapping factor; estimate a height (H) of each image using distances (Δlat, Δlong) between the latitudes/longitudes of the centres (c1, c2) and the overlapping factor, and estimate a yaw angle (Y) between an imaging device movement direction (106) and a north-south line (N-S); derive a length (P) represented by each pixel by dividing the height (H) by the number (n) of pixels present along the height (H); compute four relative distances of the tree (T) with respect to north/east (N/E) directions along the height/width (H/W) of the image (202) using the length (P), the yaw angle (Y), and the pixel coordinates (x, y) of the tree (T); and determine the actual latitude/longitude of the tree by adding the latitude/longitude of the centre to the corresponding relative distances. Fig. 1
Description:
FIELD OF THE INVENTION
The present invention broadly relates to tree inventory management for environmental impact analysis. More particularly, the present invention relates to a method and system for counting trees with their exact latitudes and longitudes (a geotagged map) by deploying image processing/analysis techniques on aerial images of the trees.
BACKGROUND OF THE INVENTION
Trees/plants play a vital role in the ecosystem, and their loss can have a profound impact on the environment. They are important for sustaining conservational stability and ecological biodiversity. A systematic tree inventory of the forested areas, including urban and rural areas, is a great asset to the stakeholders and decision-makers as it helps in determining tree density and distribution. Hence, it is essential to monitor and evaluate the changes in tree density, which is achieved by counting the number of trees per unit area, a common practice in the fields of environmental science and ecology. It is an essential factor aiding decision-makers in formulating environmental policies, regulations, and laws. For instance, counting the number of trees per unit area provides critical information to forest managers and researchers to effectively manage the forest, track changes over time, and make decisions about harvesting, deforestation, and conservation initiatives. Tree counts can also help to identify inequities in tree distribution, find areas with fewer trees, and guide the additional tree-planting efforts. In urban areas, it can act as an indicator for assessing the health of urban forests, monitoring changes over time, and assessing the ecological and economic benefits of urban trees.
Traditional techniques of tree counting include manual counting via ground surveys or the use of remote sensing technologies like LiDAR, which can be costly and time-consuming. These older approaches have proven ineffective and can also lead to inconsistent data collection, especially when the spatial scope of such projects is large. When the GPS locations of individual trees must also be identified for monitoring and governance purposes, the task becomes many times more tedious and laborious. However, recent advancements in the fields of Unmanned Aerial Vehicles (UAVs) or drones and Artificial Intelligence (AI) have opened new possibilities: advanced deep learning algorithms make tree-counting tasks efficient and cost-effective. Counting trees from high-resolution aerial images captured by drones offers several advantages over traditional methods. Deep learning algorithms can be applied to the images to detect and identify trees, increasing the speed and efficiency of tree-counting tasks while reducing costs and labour requirements.
A reference may be made to Mansur Muhammad Aliero et al., who reported automatic palm oil tree counting using UAV-based multispectral images, based on the concept of crown geometry and vegetation response to radiation. Spatial analysis involving convolution and morphological analysis is adopted to detect and delineate the palm oil crowns, and image thresholding is used for creating the palm oil tree centroids. The automated palm oil tree counting was executed using ENVI EX software and the open-source program “ImageJ”. However, this document gives no hint of geolocating the palm oil trees.
Another reference may be made to US10002416B2, which discloses tree counting and density outputs for inventory, growth, and risk prediction using an image processing system in which a semi-supervised machine learning technique is applied, using past planting information related to the likely location of each of the trees. If the past planting information is not available, or the tree distribution area is unknown/random, then the output results may be erroneous. Therefore, only determining the geolocations of the trees in real time can resolve these issues.
One more reference may be made to US10546195B2, which discloses a supervised learning-based classifier (a generalized method and system) for detecting objects and their positions from aerial imagery. Although this method calculates the number of recognized target objects in the images and shows them on a geographic information system (GIS) map, there is no hint of any specific mechanism or operational parameters (with experimental evidence) for mapping the positions of the recognized target objects, as they appear in the images, to their actual positions. Thus, the geotagging of objects (especially a wide range of trees distributed at large scales) requires further investigation to achieve precise tracking/location with improved accuracy for the tree inventory.
In view of the above limitations, it is proposed to develop an improved image processing/analysis (computer vision) mechanism for computing precise geolocations of trees (i.e., their exact latitude/longitude) that can eliminate the error of double counting, thus enhancing the accuracy level for better environmental impact analysis. Further, since all the existing/conventional image processing/analysis techniques have issues with respect to sustainable design, hardware implementation, configurational complexity, and economic feasibility, there exists a need to develop an improved approach, device/system/apparatus, and method/process which would in turn address a variety of issues including, but not limited to, tree data (shape, size, geometrical features) selection, classifier training, mapping of numerical features, implementation/execution, elimination of double counting of trees, choosing an appropriate confidence level in tree detection, and exact geotagging of trees in a user-friendly, cost-effective, and expedient manner. Moreover, it is desired to develop a technically advanced system and method for geotagging of trees which includes all the advantages of the conventional/existing techniques/methodologies and overcomes their deficiencies.
OBJECT OF THE INVENTION
It is an object of the present invention to overcome deficiencies of conventional aerial image processing/analysis techniques applicable in tree inventory by way of eliminating double counting of trees, and computing exact geolocations of the trees distributed in large areas.
It is another object of the present invention to implement a deep neural network model to detect trees from sequential aerial images and distinguish trees from one another in overlapping aerial images.
It is one more object of the present invention to develop a microprocessor-implementable dedicated code (algorithm) to determine an accurate number of trees with their latitudes and longitudes so that environmental impact analysis can be done with reliable tree inventory input.
It is a further object of the present invention to devise a method and system for geotagging of trees that outputs a comprehensive report containing essential information such as the number of trees present in target areas, the actual latitude/longitude of such trees, and the category/class (i.e., small, medium, large) of such trees.
SUMMARY OF THE INVENTION
In one aspect, the present invention provides a method for counting trees with geotagging, which can serve as reliable input for environmental impact analysis. The method employs an imaging device to capture aerial images over target areas, a GPS circuit to obtain latitude/longitude/coordinate information of various points in the images, a microprocessor for image processing/analysis with tree detection by a deep neural network, and an application interface communicatively coupled to the microprocessor for providing input/output access to operators. The method comprises the steps of: selecting centres of two nearby images having an overlapping factor; estimating a height of each image using the distances between the latitudes/longitudes of the centres and the overlapping factor, and estimating a yaw angle between an imaging device movement direction and a north-south line; deriving a length represented by each pixel by dividing the height by the number of pixels present along the height; computing four relative distances of the tree with respect to north/east directions along the height/width of the image using the length, the yaw angle, and the pixel coordinates of the tree; and determining the actual latitude/longitude of the tree by adding the latitude/longitude of the centre to the corresponding relative distances.
Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description, which delineates the present invention in different embodiments.
BRIEF DESCRIPTION OF DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying figures.
Fig. 1 is a schematic diagram illustrating hardware components of a system for geotagging of trees, in accordance with an embodiment of the present invention.
Fig. 2 illustrates various operational/method steps as performed by various hardware of the system, in accordance with an embodiment of the present invention.
Fig. 3 is a geometrical representation illustrating a yaw angle made between an imaging device (drone) movement direction and a north-south line, in accordance with an exemplary embodiment of the present invention.
Fig. 4 illustrates distance between two centres with respect to overlapping height of area covered between two nearby/overlapping image frames, in accordance with an exemplary embodiment of the present invention.
Fig. 5 illustrates a relationship among distance between latitudes/longitudes of two centres of two overlapping image frames, and Yaw angle, in accordance with an embodiment of the present invention.
Fig. 6 is a geometrical representation illustrating coordinates of centre and tree in an image frame with respect to North-East plot, in accordance with an embodiment of the present invention.
Fig. 7 is a pixel level (3X3 sub portion of image) illustration of relative distances of tree from centre in North/East directions along height/width of the image, in accordance with an embodiment of the present invention.
Fig. 8 illustrates input raw images (Fig. 8a) and output geotagged images (Fig. 8b) as displayed in an application interface screen of the system, in accordance with an embodiment of the present invention.
List of reference numerals
100 Unmanned aerial vehicle (UAV)/drone
102 imaging device (camera)
104 global positioning system (GPS) circuit
106 UAV flying direction (imaging device movement direction)
200 target tree covered area (survey region)
202 input aerial images (visual frames)
202a 3X3 pixel level sub portion of image
204 output images with geotagged tree map
300 microprocessor
302 GPS data analysis unit
304 deep neural network model/classifier unit
306 tree counting and geotagging module
400 display (application interface screen)
Y yaw angle
T detected tree in images
C, c1, c2 centre of image frames
x, y pixel coordinates of centre/tree in image frame
H height (breadth) covered by each image frame
W width of each image frame
n number of pixels present along the height (H)
P length in meter as represented by each pixel
Δlat distance between latitudes of centres (c1, c2) of two nearby image frames
Δlong distance between longitudes of centres (c1, c2) of two nearby image frames
Nh relative distance in North direction when moving along height (H) of image
Eh relative distance in East direction when moving along height (H) of image
Nw relative distance in North direction when moving along width (W) of image
Ew relative distance in East direction when moving along width (W) of image
DETAILED DESCRIPTION OF THE INVENTION
Various embodiments described herein are intended only for illustrative purposes and subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
The use of the terms “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the terms “an” and “a” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Furthermore, the term ‘objects’ is used herein to refer to all kinds of trees/plants distributed in rural, sub-urban, and urban areas.
In accordance with an embodiment of the present invention, as shown in Fig. 1, a system for counting objects (trees) with geotagging is depicted. The system comprises an imaging device (102), a global positioning system (GPS) circuit/device (104), a microprocessor (300), and an application interface (400). The imaging device (102) and the GPS circuit/device (104) may be mounted in an unmanned aerial vehicle (UAV) (100) capable of flying at a desired height/direction through a remote-control mechanism. The UAV may be a drone, a mini-aircraft, or a similar flying device adapted for monitoring/tracking tree inventory in any tree-distributed geographical region (target/survey area) (200), including urban, semi-urban, and rural areas. The imaging device (102) may be a high-resolution RGB camera adapted to capture aerial images (202) over the target/survey areas (200). The GPS circuit (104) is coupled to the imaging device (102) for capturing GPS data of the images (202). All the captured aerial images (202) with the GPS data are transmitted to the microprocessor (300) for processing/analysis. The microprocessor (300) comprises a GPS data processing unit (302) and a deep neural network model (304) embedded therein. The GPS data processing unit (302) is configured to derive information associated with the latitudes, longitudes, and coordinates (x, y) of different points [i.e., centres (Cx,y), trees (Tx,y)] in the images (202) in degree/meter units (as shown in Fig. 6). The deep neural network model (304) is trained to detect the trees in the images (202). The application interface (400) is communicatively coupled to the microprocessor (300) for providing input/output access to operators.
In accordance with an embodiment of the present invention, the microprocessor (300) may be mounted in the UAV (100) for ease of operation, and the UAV remains in communication with a ground-level monitoring and controlling device. The microprocessor (300) and the application interface (400) may be integrated with any edge computing device, such as a smartphone, PC, tablet, or laptop, which is easily linked through internet communication with or without a cloud server. The microprocessor (300) may be an integrated circuit (IC), a field-programmable gate array, or a similar microcontroller having software codes embedded therein to execute all operations. The application interface (400) may be a mobile App (smartphone-implementable interface) or a web-based (computer-implementable) interface.
Referring to the plot between the N-S (north-south) and E-W (east-west) axes as shown in Fig. 3-4, a yaw angle (Y) can be defined as the angle between an imaging device movement direction (106) (i.e., the current UAV path direction) and the north-south (N-S) line. The UAV path direction can be determined using the latitudes/longitudes of the centres (c1, c2) of two nearby/consecutive/overlapping image frames (Frame 1 and Frame 2). Each image frame has a size of height (H) X width (W). Two nearby/consecutive/overlapping image frames (Frame 1 and Frame 2) have an overlapping factor (for example, a 20%-60% overlap factor, which is subject to variation due to hardware imperfection as well as environmental factors such as wind speed, relative speed of the UAV against wind thrust, etc.) that is responsible for double counting of the same tree (i.e., some trees appear in both frames); thus, a special computing technique needs to be adapted to eliminate the double-counting error in the final output results. Alternatively, it can be understood that one tree is counted/geotagged only once so that the counting/geotagging accuracy is improved. By default, the image size is measured in pixels. A novel code/program is developed to convert pixel values into physical distances in meter units. In this way, a length (P) in meter units represented by each pixel is derived using the number of pixels present along the height (H) of the image (202).
Referring to Fig. 6-7, the coordinates of the centre (Cx,y) and the detected tree (Tx,y) with respect to the height (H) and the width (W) of the image (202) in the N-E geometrical graph are depicted. The values of the coordinates (x, y) are in pixel units, which can be converted into meter units. The location/distance of the tree (T) along the height (H) of the image (202) is ‘y’ pixels and along the width (W) of the image (202) is ‘x’ pixels from the centre (C). This distance may be positive in the first quadrant and negative in the third quadrant. Then the relative distances (Nh, Eh, Nw, Ew) of the tree (T) (in meter units) from the centre (C) in the N/E directions along the H/W of the image (202) are computed using the overlapping factor, the length (P), and the yaw angle (Y). The actual latitude and longitude (in meter units) of the tree are determined by adding the latitude and longitude of the centre to the corresponding relative distances. Finally, the actual latitude and longitude of the tree are converted into degree units and displayed in the form of a geotagged map (204) in the application interface (400). Additionally, a comprehensive report is generated to display essential/useful information such as the number of trees present in the target areas, the actual latitude and longitude of such trees, and the category/class (for example, relative size of trees such as small, medium, and large) of such trees.
In accordance with an embodiment of the present invention, the microprocessor (300) may comprise a counting and geotagging module (306) configured to: select the centres of two nearby/overlapping images (Frame 1 and Frame 2); estimate the height (H) of each of the images using the latitudes/longitudes (in meter units) of the centres (c1, c2) and the overlapping factor of the two nearby images, and estimate a yaw angle (Y) between an imaging device movement direction (106) and a north-south line; derive a length (P) in meter units represented by each pixel by dividing the height by the number of pixels present along the height (H) of the image; compute four relative distances (in north/east directions along the height/width of the image) (Nh, Eh, Nw, Ew) of each of the detected trees from the centre (C) in meter units by deploying the overlapping factor, the length (P), and the yaw angle (Y) in specific equations; and determine the actual latitude and longitude (in meter units) of the tree by adding the latitude and longitude of the centre (C) to the corresponding relative distances (Nh, Eh, Nw, Ew).
In accordance with an embodiment of the present invention, as shown in Fig. 1-2, a method for counting objects (trees) with geotagging is depicted. The method employs hardware components such as an imaging device (102), a global positioning system (GPS) circuit/device (104), a microprocessor (300), and an application interface (400). The microprocessor (300) comprises a GPS data processing unit (302), a deep neural network (304), and a counting and geotagging module (306).
In accordance with an embodiment of the present invention, the method comprises a step (S1) of receiving a series of aerial images (202) over the target areas (200) from the imaging device (102). The raw aerial images (202) may undergo a preprocessing step to remove noise using filters, if required.
In accordance with an embodiment of the present invention, the method comprises a step (S2) of detecting the trees in the images (202) using the deep neural network model (304). The object detection algorithm mainly consists of a deep neural network, the purpose of which is to extract feature maps. The object features are collected to create a hierarchical array of features. These features are then manipulated for the final object detection to give the final output/result. A transfer learning approach is employed to train the dataset/model using a large number of aerial tree images, considering low-level features such as lines, edges, and points of the objects (trees). For robust learning, after resizing the training data, dynamic augmentation is applied, such as adjustments in brightness and exposure level, horizontal and vertical flips of images and bounding boxes, and rotation of images by 90 degrees clockwise and counter-clockwise. After successful testing and validation, the deep neural network model (304) is hosted in the microprocessor (300). This model (304) may alternatively be trained and hosted in a cloud server.
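A minimal sketch of such an augmentation pipeline is given below, assuming the open-source albumentations library and Pascal-VOC style bounding boxes; the library choice, training resolution, and probability values are illustrative assumptions and not part of the original disclosure.

```python
import albumentations as A

# Illustrative augmentation pipeline (assumed library: albumentations).
# Resizing is followed by dynamic augmentations of the kind described above:
# brightness/exposure adjustment, horizontal/vertical flips, and 90-degree
# rotations, with the tree bounding boxes kept consistent with each transform.
train_augment = A.Compose(
    [
        A.Resize(height=640, width=640),        # assumed training resolution
        A.RandomBrightnessContrast(p=0.5),      # brightness and exposure adjustment
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),                # 90-degree clockwise/counter-clockwise
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

# Usage: augmented = train_augment(image=img, bboxes=boxes, labels=class_ids)
```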
In accordance with an embodiment of the present invention, the method comprises a step (S3) of obtaining information associated with latitudes/longitudes and pixel coordinates (x, y) of the centres (C) and the detected trees (T) in the images (202). The latitudes and longitudes of the centres (C) are converted from degree unit into meter unit by a standard procedure. The position of a point in X-Y plane of an image frame from its centre is represented in terms of the pixel coordinates. As shown in Fig. 3, the north-south (N-S) line and the east-west (E-W) line are plotted with respect to the centre (C) of an image frame (202). The latitudes/longitudes, pixel coordinate data/information received from the GPS device (104) are interpreted/decoded by the GPS data processing unit (302). The pixel coordinates of the centre (C) and the tree (T) are represented as Cx,y, and Tx,y, respectively, in the image frame (202). A yaw angle (Y) is measured between the imaging device movement direction (106) and the north-south (N-S) line.
In accordance with an embodiment of the present invention, the method comprises a step (S4) of selecting the centres (c1, c2) of two nearby/overlapping images having an overlapping factor/part as shown in Fig. 4.
In accordance with an embodiment of the present invention, the method comprises a step (S5) of estimating the height (H) covered by each of the two nearby images, and the yaw angle (Y). For instance, if 60% of the image overlaps along its height (H), it is possible to estimate the overlapping portion in terms of pixels. Fig. 4 depicts that the overlapping portion covers 0.6 × H (the height of each image in pixels) and the central distance (in pixels) between the centres (c1, c2) of two consecutive images (Frame 1 and Frame 2) is 0.4 × H. Thus, the overlapping factor is 0.6, and the distance between the two centres (c1, c2) turns out to be (1 − 0.6 =) 0.4 times the height (0.4H) of the area covered in the image. Fig. 5 depicts the distance (Δlat) between the latitudes of the centres (c1, c2) in meters, and the distance (Δlong) between the longitudes of the centres (c1, c2) in meters.
Now, the height (H) (in meter units) of each image is estimated using the latitudes/longitudes (in meter units) of the centres (c1, c2) and the overlapping factor of the two nearby images, as shown in equation 1.
H = √(Δlat(in meter)² + Δlong(in meter)²) / (1 − overlapping factor)			equation 1
The yaw angle (Y) between the UAV flying direction (106) and the north-south (N-S) line is estimated using the values of the distances between the latitudes/longitudes of the centres (c1, c2), as shown in equation 2.
Y = tan⁻¹(Δlat / Δlong)			equation 2
Where Δlat and Δlong are converted from degree units into meter units using a standard conversion procedure.
In other words, the yaw angle (Y) is defined as the inverse tangent of the ratio of the distance (Δlat) between the latitudes to the distance (Δlong) between the longitudes of the centres (c1, c2) of the two nearby images.
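As a minimal sketch of equations 1 and 2, assuming Δlat and Δlong have already been converted to metres (the function and variable names below are illustrative, not part of the disclosure):

```python
import math

def estimate_height_and_yaw(dlat_m: float, dlong_m: float, overlap: float):
    """Estimate the ground height H covered by one frame (equation 1) and the
    yaw angle Y between the flight direction and the N-S line (equation 2).

    dlat_m, dlong_m : distances between the latitudes/longitudes of the two
                      frame centres c1 and c2, already converted to metres.
    overlap         : overlapping factor between the two frames (e.g. 0.6).
    """
    centre_spacing = math.hypot(dlat_m, dlong_m)   # straight-line distance c1-c2
    H = centre_spacing / (1.0 - overlap)           # equation 1
    Y = math.atan2(dlat_m, dlong_m)                # equation 2: tan^-1(dlat / dlong)
    return H, Y

# Example: centres 40 m apart with a 0.6 overlap factor give H = 100 m.
```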
In accordance with an embodiment of the present invention, the method comprises a step (S6) of deriving a length (P) (in meter units) represented by each pixel by dividing the height (H) (in meters) by the number (n) of pixels present along the height (H) of the image frame, using equation 3.
P = H/n equation 3
In accordance with an embodiment of the present invention, the method comprises a step (S7) of computing four relative distances of each of the detected trees (T) from the centre (C) in meter units by performing mathematical operations on the length (P), the yaw angle (Y), and the pixel coordinates (x, y) of the tree (T). As shown in Fig. 6, the distance of the tree (T) along the height (H) of the image (202) is ‘y’ pixels and along the width (W) of the image (202) is ‘x’ pixels from the centre (C), where the pixel coordinates of the centre are (0, 0). These distances are positive in the first quadrant and negative in the third quadrant. Therefore, four relative distances (in meter units) are computed with respect to the north/east (N/E) directions while moving along the height/width (H/W) of the 3X3 image part as indicated in Fig. 7. The relative distances include a first distance (Nh) in the north (N) direction along the height (H) of the image (202), a second distance (Eh) in the east (E) direction along the height (H) of the image (202), a third distance (Nw) in the north (N) direction along the width (W) of the image (202), and a fourth distance (Ew) in the east (E) direction along the width (W) of the image (202). The relative distance computation involves multiplication of the pixel coordinate values (x, y) of the tree (T) by the length value (P) and the sine/cosine of the yaw angle (Y), with respect to the north/east (N/E) directions along the height/width (H/W) of the image (202), considering the pixel coordinate values of the centre (C) as (0, 0). The values of Nh, Eh, Nw, and Ew (in meter units) are computed using equations 4, 5, 6, and 7, respectively.
Nh = [P · cos(Y)] · y			equation 4
Eh = [P · sin(Y)] · y			equation 5
Nw = [P · sin(Y)] · x			equation 6
Ew = [P · cos(Y)] · x			equation 7
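A short sketch of equations 3 to 7 follows, assuming the tree's pixel coordinates (x, y) are measured from the image centre at (0, 0); the function name is illustrative.

```python
import math

def relative_distances(H_m: float, n_pixels: int, yaw: float, x: int, y: int):
    """Per-pixel length P (equation 3) and the four relative distances
    Nh, Eh, Nw, Ew of a detected tree from the image centre (equations 4-7)."""
    P = H_m / n_pixels                  # equation 3: metres represented by one pixel
    Nh = P * math.cos(yaw) * y          # equation 4: north component along the height
    Eh = P * math.sin(yaw) * y          # equation 5: east component along the height
    Nw = P * math.sin(yaw) * x          # equation 6: north component along the width
    Ew = P * math.cos(yaw) * x          # equation 7: east component along the width
    return Nh, Eh, Nw, Ew
```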
In accordance with an embodiment of the present invention, the method comprises a step (S8) of determining the actual latitude/longitude of the tree in meter units by adding the latitude of the centre (C) to the second distance (Eh) and the fourth distance (Ew), and the longitude of the centre (C) to the first distance (Nh) and the third distance (Nw), respectively. The latitude/longitude of the centre (C) are considered in meter units. The actual latitude/longitude of the tree (T) are determined using equations 8 and 9, respectively.
Tree latitude (in meter) = centre latitude (in meter) + (Eh + Ew)			equation 8
Tree longitude (in meter) = centre longitude (in meter) + (Nh + Nw)			equation 9
The actual latitude/longitude of the tree (T) are then converted from meter unit to degree unit by a standard procedure.
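The "standard procedure" for converting between degree and meter units is not spelled out in the text; one common approximation, shown here purely as an assumption, treats one degree of latitude as roughly 111,320 m and scales a degree of longitude by the cosine of the reference latitude.

```python
import math

M_PER_DEG_LAT = 111_320.0   # assumed approximation: metres per degree of latitude

def degrees_to_metres(lat_deg: float, long_deg: float, ref_lat_deg: float):
    """Project latitude/longitude in degrees to local metre values near ref_lat_deg."""
    lat_m = lat_deg * M_PER_DEG_LAT
    long_m = long_deg * M_PER_DEG_LAT * math.cos(math.radians(ref_lat_deg))
    return lat_m, long_m

def metres_to_degrees(lat_m: float, long_m: float, ref_lat_deg: float):
    """Inverse conversion: local metre values back to degrees."""
    lat_deg = lat_m / M_PER_DEG_LAT
    long_deg = long_m / (M_PER_DEG_LAT * math.cos(math.radians(ref_lat_deg)))
    return lat_deg, long_deg
```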
In accordance with an embodiment of the present invention as shown in Fig. 8, the determining step (S8) comprises outputting a geotagged tree map (204) with an indication of the number of trees (as the final output) displayed on the screen (400) coupled to the microprocessor (300). Based on the size of the bounding box of each detected tree as indicated in the map, the trees are classified under different classes/categories such as big, medium, or small trees.
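The size classification is described only at the level of bounding-box size; a minimal sketch with purely illustrative pixel-area thresholds (not values from the disclosure) could look as follows.

```python
def classify_tree(box_width_px: int, box_height_px: int) -> str:
    """Assign a size class to a detected tree from its bounding-box area.
    The thresholds below are illustrative assumptions, not disclosed values."""
    area = box_width_px * box_height_px
    if area < 2_000:
        return "small"
    if area < 10_000:
        return "medium"
    return "large"
```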
In accordance with an embodiment of the present invention, the determining step (S8) comprises generating a report (in visual/text format) containing essential/useful information such as the number of trees present in the target areas, the actual latitude and longitude of such trees, and the category/class of such trees. In this way, the possibility of counting errors (such as double counting) is eliminated, and accurate geotagging/geolocation with size categorization of the trees becomes possible in real time. As shown in Fig. 8b, in the map/report the detected trees are marked with different colours, with the corresponding geotags within bounding boxes, where the colour marks represent different classes/categories such as small tree, medium tree, and large tree. In this way it becomes possible to count the trees of any target area very precisely, with all the critical information (geolocation, classes) essentially required for environmental impact analysis.
Experimental Analysis
In order to check the technical effects of the present invention, an experiment was conducted on two image datasets of trees collected from two different regions of the Mungeli district, Chhattisgarh, India. The performance of the proposed system/method is evaluated for the two sites using various performance metrics. The optimal threshold values are determined by analysing accuracy versus confidence score. Fig. 8a shows two nearby images captured from the UAV with an overlap factor of 0.6, and Fig. 8b shows simultaneous tree detection and geotagging in the images. In particular, in Fig. 8b, the corresponding geolocations are shown on top of the bounding boxes, and the category/class of the detected trees is represented by coloured dots, i.e., red: small tree, blue: large tree, and green: medium tree.
An exemplary pseudocode as embedded in the microprocessor (300) to carry out all method/operational steps (S1-S8) is presented in Table 1.
Table 1
Input: Tree images with metadata of GPS locations of the centres
Output: Tree counts and geolocations, along with a comprehensive report describing the sizes, locations, and counts of trees in the surveyed area
Function distance(C1, C2)
Return distance between centres of two nearby images (in meters) by standard conversion
Function pixelWidth (Current_Image, Next_Image)
P=distance (Current_Image, Next_Image)/((1-overlap) x Image_Height(in pixel))
Return P
Function Yaw_Angle (Current_Image, Next_Image)
Calculate Yaw_Angle using equation (2)
Return Yaw_Angle
For each Image do
save the centre latitude and longitude
detect trees in image using trained deep learning algorithm
Y = calculate Yaw angle from current image and next image
For each detected tree in image do
d=calculate distance between centre of image and detected tree in pixels x,y
Tree latitude (in metre) = centre latitude + (E_h·y + E_w·x)
Tree longitude (in metre) = centre longitude + (N_h·y + N_w·x)
Send labels, size, latitude and longitude (in degrees) of trees and images to user interface
Save labels, latitude and longitude list (in degrees)
Generate and return a comprehensive report for all the images in electronic file format
The above code effectively uses distance corrections to determine precise geolocations of the detected trees, thus eliminating the problem of double counting due to the overlapping of images. The counts of trees with their exact geolocations are thus visualized in map form on the display for end users, and such critical information is very useful for environmental impact analysis.
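The disclosure states that precise geolocations eliminate double counting but does not give the deduplication rule itself; one plausible sketch, under the assumption that detections falling within a small radius of an already recorded tree are treated as the same tree, is shown below.

```python
import math

def deduplicate_trees(trees, min_separation_m: float = 2.0):
    """Keep one entry per physical tree across overlapping frames.

    trees            : list of (lat_m, long_m, size_class) tuples in metre units.
    min_separation_m : assumed radius below which two detections are merged.
    """
    unique = []
    for lat_m, long_m, size in trees:
        is_duplicate = any(
            math.hypot(lat_m - u_lat, long_m - u_long) < min_separation_m
            for u_lat, u_long, _ in unique
        )
        if not is_duplicate:
            unique.append((lat_m, long_m, size))
    return unique
```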
Further, the present invention provides the following advantages, including but not limited to:
The deep learning-assisted approach automatically and accurately detects trees and calculates the geolocations with classification of the detected trees from the UAV images.
User friendly for inexperienced staff engaged in tree inventory, since it provides accurate tree counts with geotagging of trees without any expert intervention.
Cost effective, quick, and configurable with simplified installation, applicability, and maintenance.
The foregoing descriptions of exemplary embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable persons skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the claims of the present invention.
Claims: We claim:
1. A method for geotagging of trees, the method comprises steps of:
receiving (S1) aerial images (202) over target areas (200) from an imaging device (102);
detecting (S2) the trees in the received images (202) through a deep neural network model (304) embedded in a microprocessor (300);
obtaining (S3), from a global positioning system (GPS) (104) coupled to the imaging device (102), information associated with latitudes, longitudes, and pixel coordinates (x, y) of centres (C) and the detected trees (T) in the images (202);
selecting (S4) the centres (c1, c2) of two nearby images (Frame 1 and Frame 2) having an overlapping factor;
estimating (S5), by the microprocessor (300), a height (H) of each of the images using distances (Δlat, Δlong) (in meter unit) between the latitudes and the longitudes of the centres (c1, c2) and the overlapping factor of the two nearby images, and estimating a yaw angle (Y) between an imaging device movement direction (106) and a north-south (N-S) line;
deriving (S6), by the microprocessor (300), a length (P) (in meter unit) represented by each pixel by dividing the height (H) by number (n) of pixels present along the height (H) of the image (202);
computing (S7), by the microprocessor (300), four relative distances (in meter unit) of each of the detected trees (T) from the centre (C) using the length (P), the yaw angle (Y), and the pixel coordinates (x, y) of the tree (T); wherein the relative distances include a first distance (Nh) in north (N) direction along the height (H) of the image (202), a second distance (Eh) in east (E) direction along the height (H) of the image (202), a third distance (Nw) in the north (N) direction along width (W) of the image (202), and a fourth distance (Ew) in the East (E) direction along the width (W) of the image (202); and
determining (S8), by the microprocessor (300), actual latitude and longitude (in meter unit) of the tree by adding the latitude of the centre (C) with the second distance (Eh) and the fourth distance (Ew), and by adding the longitude of the centre (C) with the first distance (Nh) and the third distance (Nw) respectively, wherein the actual latitudes/longitudes of the trees are converted from meter unit into degree unit.
2. The method as claimed in claim 1, wherein the estimating step (S5) comprises deriving the yaw angle (Y) as an inverse tangent of ratio of the distance (Δlat) between the latitudes and the distance (Δlong) between the longitude of the centres (c1, c2) of the two nearby images.
3. The method as claimed in claim 1, wherein the computing step (S7) comprises multiplication of the pixel coordinate values (x, y) of the tree (T) with the length value (P) and sine/cosine functional value of the yaw angle (Y) with respect to the north/east (N/E) directions along the height/width (H/W) of the image (202), considering the pixel coordinates values of the centre (C) as (0, 0).
4. The method as claimed in claim 1, wherein the determining step (S8) comprises outputting a geotagged tree map (204) with an indication of number of the trees displayed in a screen (400) coupled to the microprocessor (300).
5. The method as claimed in claim 1, wherein the determining step (S8) comprises generating a visual/text report containing information associated with number of trees present in the target areas (200), the actual latitude and longitude of the trees, and class/category of the trees.
6. A system for geotagging of trees, the system comprises:
an imaging device (102) adapted to capture aerial images (202) over target areas (200);
a global positioning system (GPS) circuit (104) coupled to the imaging device (102) for capturing GPS data of the images (202);
a microprocessor (300) communicatively coupled to the imaging device (102) and the GPS circuit (104), wherein the microprocessor (300) comprises a GPS data processing unit (302) configured to derive information associated with latitudes and longitudes (in meter unit), and pixel coordinates (x, y) of different points in the images (202), and a deep neural network model (304) trained to detect the trees in the images (202); and
an application interface (400) communicatively coupled to the microprocessor (300) for providing input/output access to operators,
wherein the microprocessor (300) comprises a counting and geotagging module (306) configured to:
select centres (c1, c2) of two nearby images (Frame 1 and Frame 2) having an overlapping factor,
estimate a height (H) of each of the images using distances (Δlat, Δlong) (in meter unit) between the latitudes and the longitudes of the centres (c1, c2) and the overlapping factor of the two nearby images, and estimate a yaw angle (Y) between an imaging device movement direction (106) and a north-south (N-S) line,
derive a length (P) (in meter unit) represented by each pixel by dividing the height (H) by number (n) of pixels present along the height (H) of the image (202);
compute four relative distances (in meter unit) of each of the detected trees (T) from the centre (C) using the length (P), the yaw angle (Y), and the pixel coordinates (x, y) of the tree (T); wherein the relative distances include a first distance (Nh) in north (N) direction along the height (H) of the image (202), a second distance (Eh) in east (E) direction along the height (H) of the image (202), a third distance (Nw) in the north (N) direction along width (W) of the image (202), and a fourth distance (Ew) in the East (E) direction along the width (W) of the image (202); and
determine actual latitude and longitude (in meter unit) of the tree by adding the latitude of the centre (C) with the second distance (Eh) and the fourth distance (Ew), and the longitude of the centre (C) with the first distance (Nh) and the third distance (Nw) respectively; wherein the actual latitude/longitude of the tree (T) are converted from meter unit into degree unit.
7. The system as claimed in claim 6, wherein the application interface (400) is adapted to display a geotagged tree map (204) with an indication of number of the trees as final output.
8. The system as claimed in claim 6, wherein the imaging device (102) and the GPS circuit (104) are mounted in an unmanned aerial vehicle (UAV).
9. The system as claimed in claim 8, wherein the microprocessor (300) is mounted in the unmanned aerial vehicle (UAV) or in a ground level monitoring/controlling device.
10. The system as claimed in claim 8, wherein the deep neural network model (304) is hosted in a server in communication with the unmanned aerial vehicle (UAV) and the ground level monitoring/controlling device.