Abstract: The present invention discloses a computer implemented system for cotton yield prediction and method thereof. The system (100) comprises a capturing unit (106) comprising an unmanned aerial vehicle (UAV) (102) and a sensor (104) mounted on the unmanned aerial vehicle (UAV) (102), a processing engine (110) communicatively coupled with the capturing unit (106) via a network (108) and configured to receive the captured images of the cotton field (F), and a database (126). The system and method predict cotton yield through UAV/drone imagery using machine learning techniques before harvesting the crop, and are fast, non-invasive and precise. The system and method eliminate the entire manual effort of harvesting the crop and measuring the yield. As UAVs/drones are used, this is several times faster. Moreover, fewer human resources are required even for large acreages of field.
DESC:TECHNICAL FIELD OF THE INVENTION
[0001] The present invention relates to the field of yield prediction, and in particular, to crop yield prediction using imagery from an unmanned aerial vehicle (UAV) and machine learning techniques. More specifically, the present invention relates to a computer implemented system for cotton yield prediction and method thereof.
BACKGROUND OF THE INVENTION
[0002] Yield estimation or yield prediction is highly valued in the field of agriculture. Timely and accurate prediction of crop yield helps in adequate crop management, especially for harvest planning and storage needs. Currently, yield estimation for cotton crops is challenging, and time and resource consuming. It is mainly labor-intensive work and dependent on farmers’ and breeders’ empirical knowledge.
[0003] Additionally, the conventional method for yield prediction is invasive, using a random sampling and extrapolation approach. In the conventional method, laborers and field supervisors must go to a farm field, randomly harvest cotton from a few plants per plot and extrapolate for the entire farm/field to predict the yield. However, this approach is time consuming and laborious. Furthermore, chances of error are unavoidable in the conventional yield estimation method as it involves manual intervention.
[0004] Therefore, there is a need for a computer implemented system and method which can solve the above stated problems and predict the yield of cotton by using advanced approaches such as imagery from an unmanned aerial vehicle (UAV) and machine learning techniques, and which is fast, non-invasive and precise.
SUMMARY OF THE INVENTION
[0005] This summary is provided to introduce concepts of the present invention. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[0006] In one aspect, the present invention provides a computer implemented system for cotton yield prediction through UAV/drone imagery using machine learning techniques before harvesting the crop, which is fast, non-invasive and precise. The system eliminates the entire manual effort of harvesting the crop and measuring the yield. As UAVs/drones are used, this is several times faster. Moreover, fewer human resources are required even for large acreages of field. The prediction of yield before harvesting the crop not only saves a lot of resources in terms of time and labor but also helps in making important business decisions.
[0007] A computer implemented system for cotton yield prediction comprises a capturing unit comprising an unmanned aerial vehicle and a sensor mounted on the unmanned aerial vehicle and configured to capture one or more images of a cotton field through forward and backward flight paths, a processing engine communicatively coupled with the capturing unit via a network and configured to receive the captured images of the cotton field, detect and count cotton bolls in the cotton field from the captured images and predict yield of the cotton field, and a database in communication with the processing engine to store data related to the captured images of the cotton field. An exposure time of the capturing unit is 1/4000 of a second with a cloud brightness setting, and the sensor of the capturing unit is an RGB camera mounted in an oblique orientation at an angle between 30-60 degrees to a plane of the ground of the cotton field when the UAV is flying parallel to the ground of the cotton field.
[0008] Typically, the oblique orientation at an angle of 30-60 degrees provides a 0.19 to 0.22 centimeter/pixel resolution covering an 8.2 m × 6.5 m field of view of the cotton field.
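The stated resolution and field of view are mutually consistent, as the following illustrative check shows. The image size of 4100 × 3250 pixels is an assumption made here for the arithmetic; it is not given in the description.

```python
# Illustrative check (not from the specification): relating a ground
# sample distance (GSD) of ~0.20 cm/pixel to the stated 8.2 m x 6.5 m
# field of view, assuming a hypothetical 4100 x 3250 pixel image.

def field_of_view_m(gsd_cm_per_px: float, width_px: int, height_px: int):
    """Return the (horizontal, vertical) ground coverage in meters."""
    return (gsd_cm_per_px * width_px / 100.0,
            gsd_cm_per_px * height_px / 100.0)

w, h = field_of_view_m(0.20, 4100, 3250)
print(round(w, 1), round(h, 1))  # 8.2 6.5 -- matches the stated coverage
```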
[0009] The processing engine comprises a memory configured to store pre-determined rules related to detecting and counting cotton bolls in the cotton field and predicting yield of the cotton bolls in the cotton field, a processor configured to filter noise from each captured image to generate enhanced images and to annotate the enhanced images with labels, and a plot creation unit in communication with the processor and configured to divide the enhanced images into multiple plots. The plot creation unit further comprises a generation unit which is configured to receive the enhanced images from the processor, identify red, green, and blue (RGB) pixels from the enhanced images and perform an oblique stitching of the RGB image into an orthomosaic representing the entire cotton field, and a border formation unit in communication with the generation unit and configured to receive the orthomosaic image from the generation unit and divide the orthomosaic image into a number of small plots by using a checkerboard or barcode detection technique for image correction and border formation. The processing engine further comprises a cotton boll detection and counting unit in communication with the border formation unit and configured to receive the individual plots from the border formation unit, detect and count cotton bolls from the plots and estimate a score, and a prediction unit, in communication with the cotton boll detection and counting unit, configured to receive the score and predict the yield based on a yield prediction technique.
[0010] Preferably, the cotton boll detection and counting unit is further configured to bound one or more cotton bolls with bounding boxes in the individual plots received from the border formation unit, score the individual bounding boxes and estimate the score of the individual bounding boxes from the annotated and enhanced images using learnt weights computed by the cotton boll detection and counting unit and prestored in the database. To estimate the score of the individual bounding boxes, the cotton boll detection and counting unit combines neighboring bounding boxes and scores, selects the bounding box with the highest score, compares the overlapping of bounding boxes with a threshold value, removes the bounding box with intersection > 50% and estimates the bounding box score.
[0011] Optionally, the processor is further configured to cooperate with the memory to receive the pre-determined rules and with the cotton boll detection and counting unit through the plot creation unit to pass the enhanced images, and generate system processing commands for operating the cotton boll detection and counting unit.
[0012] Further, to perform the oblique stitching, the images are vertically stitched alternatively and then horizontally stitched to generate the orthomosaic image of the entire cotton field.
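The vertical-then-horizontal stitching order can be sketched as below. This is a minimal illustration of the ordering only, assuming pre-aligned, equally sized tiles; a real orthomosaic pipeline would additionally perform feature matching, geometric correction and blending.

```python
import numpy as np

def stitch_grid(tiles):
    """Stitch a 2-D grid of pre-aligned image tiles: each flight path
    (column of tiles) is stitched vertically first, then the resulting
    strips are stitched horizontally, mirroring the vertical-then-
    horizontal order described for generating the orthomosaic.
    `tiles[i][j]` is the j-th image of the i-th flight path, given as a
    numpy array; all tiles are assumed to have identical shape."""
    strips = [np.vstack(path) for path in tiles]  # vertical pass per path
    return np.hstack(strips)                      # horizontal pass across paths

# toy example: a 2 x 2 grid of 2 x 3 "images"
tiles = [[np.ones((2, 3)), np.zeros((2, 3))],
         [np.full((2, 3), 2.0), np.full((2, 3), 3.0)]]
mosaic = stitch_grid(tiles)
print(mosaic.shape)  # (4, 6)
```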
[0013] Preferably, each plot has a pre-determined identification number in the form of a barcode with ground control points (GCP) and the border formation unit automatically extracts the plot(s) by the bar code and determines the boundaries of the plots.
[0014] Optionally, the database is in further communication with the generation unit, the border formation unit, the cotton boll detection and counting unit and the prediction unit and is configured to store pre-defined factors for predicting the cotton yield, pre-defined labelled images, pre-defined parameters, and learnt weights computed by the cotton boll detection and counting unit.
[0015] Preferentially, the cotton boll detection and counting unit operates in a training mode during initialization of the system and thereafter in a testing mode.
[0016] Further, in the training mode of the cotton boll detection and counting unit, the processing engine uses one or more pre-defined labelled images of the cotton field, wherein the labels include full white boll labels, partial white boll labels and green boll labels already stored in the database, and extracts feature maps from the labels, wherein, the extracted feature maps are used to train the cotton boll detection and counting unit.
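Since the network architecture is not detailed in the description, a simple image pyramid built by 2×2 average pooling can stand in for the multi-scale feature maps a detection CNN produces at successive layers; the sketch below is illustrative only.

```python
import numpy as np

def multiscale_maps(image, num_scales=3):
    """Build an image pyramid by 2x2 average pooling, as a simplified
    stand-in for the multi-scale feature maps extracted by the cotton
    boll detection and counting unit (the actual CNN layers are not
    specified in the description)."""
    maps = [image]
    for _ in range(num_scales - 1):
        img = maps[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # even crop
        pooled = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        maps.append(pooled)
    return maps

pyr = multiscale_maps(np.arange(64, dtype=float).reshape(8, 8))
print([m.shape for m in pyr])  # [(8, 8), (4, 4), (2, 2)]
```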
[0017] Typically, multi-scale feature maps are employed for detection of cotton bolls when the scale of the cotton bolls varies in size due to oblique imagery.
[0018] Furthermore, in the testing mode of the cotton boll detection and counting unit, the individual plots received from the border formation unit are passed through the trained cotton boll detection and counting unit for label prediction, and counting of number of bounding boxes is performed to estimate the bounding boxes score using the trained cotton boll detection and counting unit.
[0019] Typically, based on the bounding boxes scores, the cotton boll detection and counting unit combines multiple bounding boxes into a single object and maps the object with respect to the labels.
[0020] Preferably, the cotton boll detection and counting unit is implemented using a CNN architecture.
[0021] In another aspect, the present invention provides a computer implemented method for cotton yield prediction in a cotton field. The method is implemented for training a cotton boll detection and counting unit, the method comprising: acquiring, by a capturing unit, one or more images of a cotton field; enhancing, by a processor, the images by performing noise filtering on each image; annotating, by the processor, the images with labels to detect feature maps; passing, by the processor, the enhanced images with labels to a cotton boll detection and counting unit; extracting, by the cotton boll detection and counting unit, multi-scale features from the enhanced and labeled images and optimizing the extracted features; calculating, by the cotton boll detection and counting unit, an error in the optimized features and minimizing the error; updating, by the cotton boll detection and counting unit, weights of the customized algorithm as a learnt weight; and storing, by the cotton boll detection and counting unit, the learnt weight in a database.
[0022] In the above method, if the error is not minimized, the cotton boll detection and counting unit repeats the steps until the error is minimized.
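The train-until-the-error-is-minimized loop described above can be sketched with a toy model. The least-squares model, learning rate and `database` dictionary below are hypothetical stand-ins for the CNN, its optimizer and the database (126); only the loop structure reflects the method.

```python
# Hypothetical sketch of the training loop: repeat until the error is
# minimized, then store the learnt weight. A one-parameter least-squares
# model replaces the CNN for illustration.

def train(features, targets, lr=0.1, tol=1e-6, max_iter=10000):
    w = 0.0
    for _ in range(max_iter):
        preds = [w * x for x in features]
        error = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)
        if error < tol:            # error minimized -> stop repeating
            break
        grad = sum(2 * (p - t) * x
                   for p, t, x in zip(preds, targets, features)) / len(targets)
        w -= lr * grad             # update the weight
    return w, error

database = {}                                    # stand-in for database (126)
learnt_w, err = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
database["learnt_weights"] = learnt_w            # store the learnt weight
print(round(learnt_w, 3))  # 2.0
```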
[0023] Typically, the labels include full white boll labels, partial white boll labels and green boll labels.
[0024] Preferably, the method comprises mapping each combined bounding boxes and score with respect to the labels to detect the feature maps.
[0025] In a third aspect, the present invention provides a computer implemented method for cotton yield prediction in a cotton field. The method comprising: acquiring, by a capturing unit, one or more images of a cotton field; enhancing, by a processor, the images by performing noise filtering on each image; performing, by a cotton boll detection and counting unit, n×n grid sampling on each enhanced image; using, by the cotton boll detection and counting unit, learnt weights prestored in a database to initially predict bounding boxes and scores; bounding, by the cotton boll detection and counting unit, one or more individual cotton bolls with boxes and estimating a score of the individual bolls using learnt weights from the sampled and enhanced images; combining, by the cotton boll detection and counting unit, neighboring bounding boxes and scores; selecting, by the cotton boll detection and counting unit, the bounding box with the highest score; comparing, by the cotton boll detection and counting unit, the overlapping of bounding boxes with a threshold value; removing, by the cotton boll detection and counting unit, the bounding boxes with intersection > 50% and estimating a score; predicting, by the cotton boll detection and counting unit, final bounding boxes based on the estimated score; predicting, by a prediction unit, the yield based on the predicted final bounding boxes and predetermined parameters prestored in the database; and storing, by the processor, the yield in the database.
[0026] Additionally, the step of bounding further comprises: identifying, by a generation unit of a plot creation unit, red, green, and blue (RGB) pixels from the sampled and enhanced images; performing, by the generation unit of the plot creation unit, an oblique stitching of the RGB pixels into an orthomosaic representing the entire cotton field; and dividing, by a border formation unit of the plot creation unit, the orthomosaic image into a number of small plots by using a checkerboard or barcode detection technique for image correction and border formation.
[0027] Further, the step of performing the oblique stitching comprises vertically stitching the images alternately and then horizontally stitching them to generate the orthomosaic image of the entire cotton field.
[0028] Furthermore, each plot has a pre-determined identification number in a form of a barcode with ground control points (GCP), and wherein the step of dividing comprises automatically extracting the plot(s) by the bar code and determining the boundaries of the plots.
[0029] The method is implemented as a non-maximum suppression (NMS) technique which is repeated for K iterations to obtain the final bounding box and score for individual cotton bolls by the cotton boll detection and counting unit.
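The suppression steps recited above (select the highest-scoring box, remove boxes overlapping it by more than 50%, repeat) correspond to greedy NMS and can be sketched as follows. The "intersection > 50%" criterion is read here as intersection-over-union (IoU), which is the common convention; the description does not state which overlap measure is used.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    remaining box and discard boxes overlapping it by more than `thresh`
    (one iteration per kept box, i.e. the K iterations of the method)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the overlapping duplicate is removed
```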
[0030] Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0031] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and units.
[0032] Figure 1 illustrates an architectural block diagram depicting a computer implemented system for cotton yield prediction in accordance with an embodiment of the present invention.
[0033] Figure 2 illustrates a schematic block diagram depicting selection of a region of interest and prediction of the yield in kilograms per plot in accordance with an embodiment of the present invention.
[0034] Figure 3 illustrates a flow diagram of a training method of a cotton boll detection and counting unit for detecting the cotton bolls using labelled images in accordance with an embodiment of the present invention.
[0035] Figure 4 illustrates a flow diagram of a testing method of a cotton boll detection and counting unit for depicting yield prediction of the cotton bolls in accordance with an embodiment of the present invention.
[0036] Figure 5 illustrates a use case scenario depicting cotton bolls detection in accordance with an embodiment of the present invention.
[0037] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF THE INVENTION
[0038] The various embodiments of the present invention describe a computer implemented system for cotton yield prediction and method thereof.
[0039] In the following description, for the purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[0040] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring of the present invention.
[0041] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0042] In an embodiment, the present invention provides a computer implemented system and method for cotton yield prediction using an unmanned aerial vehicle (UAV) imagery and machine learning techniques.
[0043] An architectural block diagram depicting a computer implemented system for cotton yield prediction in accordance with an embodiment of the present invention is illustrated in Figure 1. The computer implemented system for cotton yield prediction (hereinafter referred to as “system”) (100) includes an unmanned aerial vehicle (UAV) (102) and a sensor (104) mounted on the unmanned aerial vehicle (UAV) (102). The system further includes a network (108), a processing engine (110) and a database (126).
[0044] The UAV (102) with the mounted sensor (104) is together referred to as a capturing unit (106) to capture one or more images of a cotton field (F). In an exemplary embodiment, the exposure time of the capturing unit (106) may be 1/4000 of a second with a cloud brightness setting.
[0045] In one embodiment, the sensor (104) of the capturing unit (106) may include, but is not limited to, an RGB camera installed in an oblique orientation at an angle between 30-60 degrees to the plane of the ground when the UAV (102) is flying parallel to the ground. This orientation provides a 0.19 to 0.22 centimeter (cm)/pixel resolution that covers an approximately 8.2 meter (m) (horizontal) × 6.5 m (vertical) field of view.
[0046] The processing engine (110) may be communicatively coupled with the capturing unit (106) via the network (108) to receive the captured images of the cotton field (F), detect and count cotton bolls in the cotton field (F) from the captured images and predict yield of the cotton field (F). In one embodiment, the network (108) includes wired and wireless networks. Examples of wired networks include a Wide Area Network (WAN) or a Local Area Network (LAN), a client-server network, a peer-to-peer network, and so forth. Examples of the wireless networks include Wi-Fi, a Global System for Mobile communications (GSM) network, a General Packet Radio Service (GPRS) network, an enhanced data GSM environment (EDGE) network, 802.11 communication networks, Code Division Multiple Access (CDMA) networks, or Bluetooth networks.
[0047] The processing engine (110) further includes a memory (112), a processor (114), a plot creation unit (116), a cotton boll detection and counting unit (122), and a prediction unit (124). The plot creation unit (116) further includes a generation unit (118) and a border formation unit (120). In one embodiment of the present invention, the cotton boll detection and counting unit (122) is implemented using a CNN architecture.
[0048] The memory (112) may be configured to store pre-determined rules related to detecting and counting cotton bolls, training of the cotton boll detection and counting unit (122), and prediction of labels and yield of the cotton bolls. In an embodiment, the memory (112) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0049] The processor (114) may be configured to cooperate with the memory (112) to receive the pre-determined rules and with the cotton boll detection and counting unit (122) through the plot creation unit (116). The processor (114) may be further configured to generate system processing commands. In an embodiment, the processor (114) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that generate commands/signals based on operational instructions. Among other capabilities, the processor (114) may be configured to fetch the pre-determined rules from the memory (112) and generate commands to operate the system (100). Also, the processor (114) is further configured to filter noise from each captured image to generate enhanced images, and to annotate the enhanced images with labels.
[0050] As mentioned above, the cotton boll detection and counting unit (122) is implemented using a CNN architecture. In an embodiment, the CNN Architecture i.e. the cotton boll detection and counting unit (122) operates in a training mode or an inference/testing mode. The cotton boll detection and counting unit (122) may operate in the training mode once at the time of initialization and thereafter, the cotton boll detection and counting unit (122) may operate in the inference/testing mode.
[0051] The database (126) is in communication with the processing engine (110) to store data related to the captured images of the cotton field (F). The database (126) further includes pre-defined factors for predicting the cotton yield, pre-defined labelled images, pre-defined parameters, learnt weights computed by the cotton boll detection and counting unit (122) etc. In an embodiment, the database (126) may be implemented as a single database such as an enterprise database, a remote database, a local database, and the like.
[0052] In an embodiment, the capturing unit (106) takes images of the cotton field (F) through forward and backward flight paths and transmits the images to the processing engine (110). In the processing engine (110), the processor (114) filters the noise from each captured image to generate enhanced images. In the training mode of the cotton boll detection and counting unit (122), these enhanced images are further annotated with labels. The labels include full white boll labels, partial white boll labels and green boll labels.
[0053] In the testing mode of the cotton boll detection and counting unit (122), these enhanced images are passed to the plot creation unit (116). The plot creation unit (116) is in communication with the processor (114) and is configured to divide the enhanced images into multiple plots. The plot creation unit (116) further includes a generation unit (118) and a border formation unit (120).
[0054] The generation unit (118) receives the enhanced images from the processor (114). The generation unit (118) is further configured to identify the red, green, and blue (RGB) pixels from the enhanced images and perform oblique stitching of the RGB image into an orthomosaic representing the entire field (F). The received images are vertically stitched alternately, and then horizontally stitched to generate the orthomosaic image of the entire cotton field (F). Typically, oblique stitching techniques used in conventional systems suffer from a problem of resolution. The above disclosed generation unit (118), which performs oblique stitching of the RGB image, provides better clarity for orthomosaic images. In an advantageous aspect, the orthomosaic generation time may be reduced to the tune of 10X compared to conventional systems.
[0055] The output of the generation unit (118), i.e., the orthomosaic image, is given as an input to the border formation unit (120). The border formation unit (120) may be configured to divide the orthomosaic image into a number of small plots by using a checkerboard/barcode detection technique for image correction and border formation for the small plots. In an embodiment, each individual plot has a pre-determined identification number in the form of a code, for example, a barcode with ground control points (GCP). The border formation unit (120) automatically extracts the plot(s) by using the barcode and determines the boundaries of the plots.
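The plot-extraction step can be sketched as below. The barcode decoding itself is stubbed out as a hypothetical `decode_barcodes` function (in practice a barcode library or detector would supply it); the sketch only shows how decoded plot identification numbers and their boundary coordinates could be used to crop individual plots from the orthomosaic.

```python
# Hypothetical sketch of barcode-driven plot extraction from the
# orthomosaic; `decode_barcodes` and the plot IDs below are illustrative
# stand-ins, not part of the described system.

def decode_barcodes(orthomosaic):
    """Stub: a real implementation would detect the barcodes placed at
    ground control points and return {plot_id: (row0, col0, row1, col1)}
    boundaries for each plot."""
    return {"PLOT-001": (0, 0, 2, 3), "PLOT-002": (2, 0, 4, 3)}

def extract_plots(orthomosaic):
    """Crop each plot out of the orthomosaic using its decoded boundary."""
    plots = {}
    for plot_id, (r0, c0, r1, c1) in decode_barcodes(orthomosaic).items():
        plots[plot_id] = [row[c0:c1] for row in orthomosaic[r0:r1]]
    return plots

mosaic = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
plots = extract_plots(mosaic)
print(sorted(plots))      # ['PLOT-001', 'PLOT-002']
print(plots["PLOT-002"])  # [[7, 8, 9], [10, 11, 12]]
```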
[0056] The output of the border formation unit (120) is given as input to the cotton boll detection and counting unit (122). The cotton boll detection and counting unit (122) may be configured to receive the individual plots from the border formation unit (120) and to detect and count cotton bolls from the plots.
[0057] In an embodiment, the cotton boll detection and counting unit (122) may be configured to detect and count cotton bolls based on two processes, i.e., a training process, and a testing/inference process. In an exemplary embodiment, the training process is a one-time process, while the testing/inference process is performed every time the cotton boll counting is required.
[0058] Multiple pre-defined labelled images of the cotton field (F) are already stored in the database, wherein the images are labelled as full white bolls, partial white bolls as well as green bolls. In an embodiment, in the training process, the cotton boll detection and counting unit (122) uses the pre-defined labelled images of the cotton field (F) from the database (126) and further extracts feature maps from these labeled images. As mentioned above, one or more images of a cotton field (F) are acquired by the capturing unit (106). The processor (114) enhances these images by performing noise filtering on each image and annotates the images with labels to detect feature maps. The cotton boll detection and counting unit (122) in the training mode extracts multi-scale features from the enhanced and labeled images and optimizes the extracted features. Further, it also calculates an error in the optimized features and minimizes the error. Once the error is minimized, the cotton boll detection and counting unit (122) updates the weights of the customized algorithm as a learnt weight and stores it in the database (126). These extracted feature maps are then used to train the cotton boll detection and counting unit (122). In one embodiment, the cotton boll detection and counting unit (122) may use multi-scale feature maps for better detection when the scale of the cotton bolls varies in size due to oblique imagery.
[0059] In another embodiment, in the testing/inference process, the orthomosaic image may be divided into a number of individual plots. Each divided plot may be passed through the trained cotton boll detection and counting unit (122) for label prediction. The cotton boll detection and counting unit (122) further estimates a score with the help of the learnt weights stored in the database while the cotton boll detection and counting unit (122) was operating in the training mode.
[0060] To estimate the score, the cotton boll detection and counting unit (122) further bounds one or more cotton bolls with bounding boxes in the individual plots received from the border formation unit (120). Once the bounding is completed, it scores the individual bounding boxes and estimates the score of the individual bounding boxes from the annotated and enhanced images using learnt weights computed by the cotton boll detection and counting unit (122) and already stored in the database (126). The estimation of the score of the individual bounding boxes is performed by combining the neighboring bounding boxes and scores, selecting the bounding box with the highest score, comparing the overlapping of bounding boxes with a threshold value, removing the bounding box with intersection > 50% and estimating the bounding box score. In one embodiment, the threshold value is an assigned value of bounding boxes.
[0061] The output of the cotton boll detection and counting unit (122) is given as input to the prediction unit (124). The prediction unit (124) may be configured to predict the yield based on a yield prediction technique. In an embodiment, the yield prediction technique uses the pre-defined parameters stored in the database (126). The parameters include, but are not limited to, cotton boll count (i.e., white, partial white, green, etc.), weight per boll, occlusion percentage, etc. In an embodiment, by using the parameters, the prediction unit (124) may be configured to predict the yield per plot in kilograms. In an exemplary embodiment, multiple predictions of the same object may be eliminated by using a non-maximum suppression (NMS) technique during the training process.
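The description lists the parameters but does not give the yield formula; the sketch below shows one plausible, purely illustrative combination of per-class boll counts, per-class boll weights and an occlusion correction. All numeric values are hypothetical.

```python
# Hypothetical yield computation: per-class count x per-class boll weight,
# scaled up to compensate for bolls hidden by occlusion. The weighting is
# an assumption for illustration, not the method of the description.

def predict_yield_kg(counts, weight_per_boll_g, occlusion_pct):
    """Estimate plot yield in kilograms from detected boll counts."""
    grams = sum(counts[c] * weight_per_boll_g[c] for c in counts)
    visible_fraction = 1.0 - occlusion_pct / 100.0
    return grams / visible_fraction / 1000.0

counts = {"full_white": 400, "partial_white": 100, "green": 50}   # per plot
weights = {"full_white": 5.0, "partial_white": 3.0, "green": 2.0}  # grams, hypothetical
print(round(predict_yield_kg(counts, weights, occlusion_pct=20.0), 2))  # 3.0
```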
[0062] Referring to Figure 2, a schematic block diagram (200) depicting selection of a region of interest and prediction of the yield in kilograms per plot, according to an exemplary implementation of the present invention, is illustrated.
[0063] In the block diagram (200), the block (202) defines acquisition of the cotton bolls in the field (F) through the UAV based capturing unit (106). The processing engine (110) may be configured to acquire images of the cotton bolls through the UAV (102) based capturing unit (106). For acquiring the cotton bolls’ images, the generation unit (118) may be configured to perform flight planning (as shown by block (204)), and the border formation unit (120) may be configured to perform placement of the ground control points (GCP) (as shown by block (206)) prior to acquiring the cotton bolls’ images. In an embodiment, the generation unit (118) may be accessible by a user through a graphical user interface (GUI), giving the user the flexibility to choose either a forward or backward path based on the resolution of the RGB pixels. In an exemplary embodiment, the user may be an administrator of the system (100). After acquiring the cotton bolls’ images, the generation unit (118) may be configured to perform image stitching (as shown by block (208)). In that, the generation unit (118) may be configured to identify the red, green, and blue (RGB) pixels from the captured images and perform oblique stitching of the RGB images into an orthomosaic representing the entire field. In an embodiment, the border formation unit (120) may be further configured to perform path wise imagery arrangement (as shown by block (210)) which is further employed in data stitching to generate whole field composite imagery (as shown by block (212)). In that, the border formation unit (120) may be configured to divide the orthomosaic image into a number of small units called plots, where each plot has a pre-determined identification number in the form of a code, for example, a barcode with the ground control points (GCP).
The border formation unit (120) automatically extracts the plot(s) by using the barcode and determines the boundaries of the plots for performing the path wise imagery arrangement using image geo-tags and generating the whole field composite imagery. After generating the whole field composite imagery, the prediction unit (124) estimates cotton yield (as shown by block (214)) and predicts the yield in kilograms per plot as shown by the block (218). Also, a selected region of interest (as shown by block (216)) is provided to the cotton yield estimation (214) at the prediction stage.
[0064] Referring to Figure 3, a flow diagram of a training method of the cotton boll detection and counting unit (122) for detecting the cotton bolls using labelled images and extracting the feature maps in accordance with an embodiment of the present invention is illustrated. In the training process, multiple pre-defined labelled images of the cotton field are used for training. Multiple pre-defined labelled images of the cotton field (F) are already stored in the database wherein the images are labelled as full white bolls, partial white bolls as well as green bolls. In an embodiment, in the training process, the cotton boll detection and counting unit (122) uses the pre-defined labelled images of the cotton field (F) from the database (126) and further extracts feature maps from these labeled images. These extracted feature maps are then used to train the cotton boll detection and counting unit (122). In an example, the cotton boll detection and counting unit (122) is trained to detect and distinguish between the full white bolls, partial white bolls as well as green bolls. In one embodiment, multi-scale feature maps may be used for better detection when a scale varies for size of the cotton bolls due to oblique imagery.
[0065] The training method for detecting the cotton bolls using labelled images and extracting the feature maps includes the following steps:
[0066] At a step (302), the capturing unit (106) acquires one or more images using the UAV (102) and passes the images to the processor (114).
[0067] At a step (304), the images are enhanced, by the processor (114), by performing noise filtering on each image.
[0068] At a step (306), the images are then annotated with labels by the processor (114), to detect the feature maps. In one embodiment, the labels include full white boll labels, partial white boll labels and green boll labels.
[0069] At a step (308), the enhanced images are passed, by the processor (114), to the cotton boll detection and counting unit (122).
[0070] At a step (310), the cotton boll detection and counting unit (122) extracts the multi-scale features from the enhanced and labeled images and optimizes the extracted features.
[0071] At a step (312), the cotton boll detection and counting unit (122) calculates an error in the optimized features and minimizes the errors.
[0072] At a step (314), the cotton boll detection and counting unit (122), based upon the optimized features, updates the weight of the customized algorithm as a learnt weight.
[0073] At a step (316), the cotton boll detection and counting unit (122) stores the learnt weight in the database (126).
[0074] In an embodiment, if the error is not minimized at the step (312), the cotton boll detection and counting unit (122) repeats the steps (308 to 314) until the error in correctly detecting the objects is minimized.
[0075] The stored learnt weights in the database (126) are further used for the testing mode of cotton boll detection and counting unit (122).
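The training loop of steps (302) to (316), including the repeat-until-minimized behaviour of paragraph [0074], can be sketched schematically. The one-parameter linear model and gradient step below are stand-ins for the customized CNN weights the specification describes; this illustrates only the loop structure, not the claimed algorithm.

```python
# Schematic sketch of the training loop of steps (302)-(316): extract
# features, measure the error, repeat until the error falls below a
# tolerance, then return the learnt weight for storage in the database.
# The linear "model" is a hypothetical stand-in for the CNN.

def train(samples, lr=0.1, tol=1e-4, max_iters=1000):
    """samples: list of (feature, target) pairs from labelled images."""
    weight = 0.0                       # initial weight of the model
    for _ in range(max_iters):         # repeat steps (308)-(314) ...
        # steps (310)/(312): mean-squared error and its gradient
        grad = sum((weight * x - y) * x for x, y in samples) / len(samples)
        error = sum((weight * x - y) ** 2 for x, y in samples) / len(samples)
        if error < tol:                # ... until the error is minimized
            break
        weight -= lr * grad            # step (314): update the weight
    return weight                      # step (316): learnt weight to store

# Fit y = 2x from toy "feature" data; the loop converges to weight = 2.
learnt = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```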
[0076] In one preferred embodiment, the method comprises mapping each combined bounding box and score with respect to the labels to detect the feature maps.
[0077] Referring to Figure 4, a flow diagram of a testing method depicting yield prediction of the cotton bolls in accordance with an embodiment of the present invention is illustrated.
[0078] In an embodiment, in the testing/inference process, the orthomosaic image may be divided into a number of plots. In this, each divided plot may pass through the trained cotton boll detection and counting unit (122) for label prediction. The one or more cotton bolls in the plots are bounded with bounding boxes to estimate the score of the individual bounding boxes from the annotated and enhanced images using learnt weights computed by the cotton boll detection and counting unit (122) and already stored in the database (126). The multiple bounding boxes are combined into a single object, and then the object is mapped with respect to the labels.
[0079] Referring to Figure 4, the testing/inference process has been defined.
[0080] At a step (402), the capturing unit (106) acquires one or more images using the UAV (102), and the images are passed to the processor (114) of the processing engine (110) through the network (108).
[0081] At a step (404), the images are enhanced by the processor (114) by performing noise filtering on each image. Further, the enhanced images are passed to the plot creation unit (116) where the images are divided into multiple plots and passed to the cotton boll detection and counting unit (122). Further, the steps below are performed on one plot of multiple plots.
[0082] At a step (406), the cotton boll detection and counting unit (122) performs n×n grid sampling on each image.
[0083] As mentioned above, the cotton boll detection and counting unit (122) is implemented using a CNN architecture. At a step (408), the enhanced and sampled images are passed through the multiple layers of the CNN architecture, i.e., the cotton boll detection and counting unit (122).
[0084] At a step (410), the cotton boll detection and counting unit (122) uses the stored learnt weights (step (316) of Figure 3) and predicts the initial bounding boxes and scores.
[0085] At a step (412), the cotton boll detection and counting unit (122) bounds one or more individual cotton bolls with boxes and estimates a score for the individual bolls using the learnt weights from the sampled and enhanced images.
[0086] At a step (414), the cotton boll detection and counting unit (122) combines bounding boxes and scores.
[0087] At a step (416), the cotton boll detection and counting unit (122) selects the bounding boxes with the highest score.
[0088] At a step (418), the cotton boll detection and counting unit (122) compares overlapping of bounding boxes with a threshold value.
[0089] At a step (420), the cotton boll detection and counting unit (122) removes the bounding boxes with intersection > 50% and estimates a score.
[0090] At a step (422), the cotton boll detection and counting unit (122) predicts the final bounding boxes based on the estimated score.
[0091] At a step (424), the prediction unit (124) then predicts the yield based on the predicted final bounding boxes and predefined parameters prestored in the database (126).
[0092] At a step (426), the processor (114) stores the yield in the database (126).
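The n×n grid sampling of step (406) can be illustrated with a short sketch, assuming the enhanced plot image is represented as a 2-D list of pixel values whose dimensions are divisible by n; the representation is an assumption for illustration only.

```python
# Minimal sketch of the n x n grid sampling of step (406): split one
# enhanced plot image (a 2-D list of pixel values) into n x n tiles so
# that each tile can be passed through the detection network separately.

def grid_sample(image, n):
    """Split a height x width image into an n x n grid of tiles."""
    h, w = len(image), len(image[0])
    th, tw = h // n, w // n            # tile height / width
    tiles = []
    for i in range(n):
        row = []
        for j in range(n):
            tile = [r[j * tw:(j + 1) * tw]
                    for r in image[i * th:(i + 1) * th]]
            row.append(tile)
        tiles.append(row)
    return tiles

# A toy 4x4 "image" with pixel values 0..15, sampled into four 2x2 tiles.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = grid_sample(image, 2)
```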
[0093] After the images are enhanced at step (404) and before grid sampling is performed by the cotton boll detection and counting unit (122) at step (406), the enhanced images are passed to the plot creation unit (116) by the processor (114), where the images are divided into multiple plots. The method for dividing the images includes the following steps:
[0094] The generation unit (118) identifies red, green, and blue (RGB) pixels from the sampled and enhanced images and performs an oblique stitching of the RGB images into an orthomosaic representing the entire cotton field (F). The oblique stitching comprises vertically stitching the images alternatively and then horizontally stitching to generate the orthomosaic image of the entire cotton field (F).
[0095] Further, the border formation unit (120) divides the orthomosaic image into a number of small plots by using a checkerboard or barcode detection technique for image correction and border formation. Each plot has a pre-determined identification number in the form of a barcode with ground control points (GCP), and the step of dividing comprises automatically extracting the plot(s) by the bar code and determining the boundaries of the plots.
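Under the simplifying assumption that the captured images are same-sized, non-overlapping tiles, the stitching order of paragraph [0094] (vertical stitching along alternate flight paths, then horizontal stitching of the strips) can be sketched as follows; real orthomosaic generation would additionally align overlapping imagery using the geo-tags.

```python
# Sketch of the oblique stitching order of paragraph [0094]: images
# captured along each flight path are stitched vertically (alternate
# paths reversed, matching the forward/backward flight pattern), and
# the resulting strips are stitched horizontally into one orthomosaic.
# Tiles are simply concatenated here, an illustrative simplification.

def stitch_vertical(tiles):
    """Stack same-width tiles top to bottom into one strip."""
    return [row for tile in tiles for row in tile]

def stitch_horizontal(strips):
    """Join same-height strips left to right."""
    return [sum((s[i] for s in strips), []) for i in range(len(strips[0]))]

def oblique_stitch(paths):
    """paths: list of flight paths, each a list of tiles (2-D lists)."""
    strips = []
    for k, path in enumerate(paths):
        # Alternate paths are reversed to mirror the backward flight leg.
        ordered = path if k % 2 == 0 else list(reversed(path))
        strips.append(stitch_vertical(ordered))
    return stitch_horizontal(strips)

# Two flight paths of two 1x2 tiles each.
a, b = [[1, 1]], [[2, 2]]
mosaic = oblique_stitch([[a, b], [a, b]])
```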
[0096] In an embodiment, the steps (412) through (420) are implemented as a non-maximum suppression (NMS) technique which is repeated for K iterations by the cotton boll detection and counting unit (122) to obtain only the final bounding boxes for individual cotton bolls.
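Steps (412) through (420) follow the shape of a generic non-maximum suppression routine, which may be sketched as below; the corner-point box representation and the exact IoU computation are illustrative assumptions rather than the claimed customized algorithm.

```python
# Minimal non-maximum suppression sketch for steps (412)-(420): keep the
# highest-scoring box, drop any box overlapping it by more than the 50%
# threshold, and repeat on the remainder. Boxes are (x1, y1, x2, y2)
# corner tuples; this is a generic NMS illustration only.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of the final boxes, one per detected boll."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)            # step (416): highest-scoring box
        keep.append(best)
        order = [i for i in order      # steps (418)/(420): drop overlaps
                 if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# Two heavily overlapping detections of one boll, plus a distinct boll.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
final = nms(boxes, scores)             # only the two distinct bolls remain
```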
[0097] A use case scenario (500) depicting cotton bolls detection, according to an exemplary implementation of the present invention is illustrated in Figure 5.
[0098] In Figure 5, the detected cotton bolls are shown, where the white bolls represent the cotton bolls that are fully grown and ready for harvesting, partial white bolls represent the cotton bolls that are partially visible in the imagery, and green bolls represent immature cotton bolls.
[0099] In an advantageous aspect, the disclosed computer implemented system and method for cotton yield prediction through UAV/drone imagery using machine learning techniques before harvesting the crop are fast, non-invasive and precise. The system and method eliminate the entire manual effort of harvesting the crop and measuring the yield. As UAVs/drones are used, the process is several times faster. Moreover, fewer human resources are required even for many acres of field. The prediction of yield before harvesting the crop not only saves a lot of resources in terms of time and labor but also helps in making important business decisions.
[00100] In another advantageous aspect, the disclosed computer implemented system and method for cotton yield prediction, by early yield prediction, facilitates estimation of the seed requirement for the forthcoming season well in advance. This is a data driven approach to handle the demand and supply of seeds.
[00101] The foregoing description of the invention has been set out merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.
CLAIMS:
1. A computer implemented system for cotton yield prediction, the system (100) comprising:
a capturing unit (106) comprising an unmanned aerial vehicle (UAV) (102), and a sensor (104) mounted on the unmanned aerial vehicle (UAV) (102), the capturing unit (106) configured to capture one or more images of a cotton field (F) through forward and backward flight paths;
a processing engine (110) communicatively coupled with the capturing unit (106) via a network (108), the processing engine (110) configured to receive the captured images of the cotton field (F), detect and count cotton bolls in the cotton field (F) from the captured images and predict yield of the cotton field (F), and
a database (126) in communication with the processing engine (110) to store data related to the captured images of the cotton field (F),
wherein an exposure time of the capturing unit (106) is 1/4000 second with a cloud brightness setting and the sensor (104) of the capturing unit (106) is an RGB camera mounted in an oblique orientation at an angle between 30-60 degrees to a plane of ground of the cotton field (F) when the UAV (102) is flying parallel to the ground of the cotton field (F).
2. The system as claimed in claim 1, wherein the oblique orientation at an angle of 30-60 degrees provides a 0.19 to 0.22 centimeter/pixel resolution covering an 8.2 m × 6.5 m field of view of the cotton field (F).
3. The system as claimed in claim 1, wherein the processing engine (110) further comprises:
a memory (112) configured to store pre-determined rules related to detecting and counting cotton bolls in the cotton field (F) and predicting yield of the cotton bolls in the cotton field (F);
a processor (114) configured to filter noise from each captured image to generate enhanced images, and to annotate the enhanced images with labels;
a plot creation unit (116) in communication with the processor (114) and configured to divide the enhanced images into multiple plots, the plot creation unit (116) further comprising:
a generation unit (118) configured to receive the enhanced images from the processor (114), identify red, green, and blue (RGB) pixels from the enhanced images and perform an oblique stitching of the RGB images into an orthomosaic representing the entire cotton field (F), and
a border formation unit (120) in communication with the generation unit (118), the border formation unit (120) configured to receive the orthomosaic image from the generation unit (118) and divide the orthomosaic image into a number of small plots by using a checkerboard or barcode detection technique for image correction and border formation;
a cotton boll detection and counting unit (122) in communication with the border formation unit (120), the cotton boll detection and counting unit (122) configured to receive the individual plots from the border formation unit (120) and detect and count cotton bolls from the plots and estimate a score;
a prediction unit (124), in communication with the cotton boll detection and counting unit (122), configured to receive the score and predict the yield based on a yield prediction technique.
4. The system as claimed in claim 3, wherein the cotton boll detection and counting unit (122) is further configured to bound one or more cotton bolls with bounding boxes in the individual plots received from the border formation unit (120), score the individual bounding boxes and estimate the score of the individual bounding boxes from the annotated and enhanced images using learnt weights computed by the cotton boll detection and counting unit (122) and prestored in the database (126).
5. The system as claimed in claims 3 and 4, wherein to estimate the score of the individual bounding boxes, the cotton boll detection and counting unit (122) combines neighbor bounding boxes and scores, selects the bounding box with the highest score, compares overlapping of bounding boxes with a threshold value, removes the bounding box with intersection > 50% and estimates the bounding box score.
6. The system as claimed in claim 3, wherein the processor (114) is further configured to cooperate with the memory (112) to receive the pre-determined rules and with the cotton boll detection and counting unit (122) through the plot creation unit (116) to pass the enhanced images, and generate system processing commands for operating the cotton boll detection and counting unit (122).
7. The system as claimed in claim 3, wherein to perform the oblique stitching, the images are vertically stitched alternatively and then horizontally stitched to generate the orthomosaic image of the entire cotton field (F).
8. The system as claimed in claim 3, wherein each plot has a pre-determined identification number in a form of a barcode with ground control points (GCP) and the border formation unit (120) automatically extracts the plot(s) by the bar code and determines the boundaries of the plots.
9. The system as claimed in any one of claims 1-3, wherein the database (126) is in further communication with the generation unit (118), the border formation unit (120), the cotton boll detection and counting unit (122) and the prediction unit (124), the database (126) configured to store pre-defined factors for predicting the cotton yield, pre-defined labelled images, pre-defined parameters, and learnt weights computed by the cotton boll detection and counting unit (122).
10. The system as claimed in claim 3, wherein the cotton boll detection and counting unit (122) operates in a training mode during initialization of the system and thereafter in a testing mode.
11. The system as claimed in claim 10, wherein in the training mode of the cotton boll detection and counting unit (122), the processing engine (110) uses one or more pre-defined labelled images of the cotton field (F), wherein the labels include full white boll labels, partial white boll labels and green boll labels already stored in the database (126), and extracts feature maps from the labels,
wherein, the extracted feature maps are used to train the cotton boll detection and counting unit (122).
12. The system as claimed in claim 11, wherein multi-scale feature maps are employed for detection of cotton bolls when a scale varies for size of the cotton bolls due to oblique imagery.
13. The system as claimed in claim 10, wherein in the testing mode of the cotton boll detection and counting unit (122), the individual plots received from the border formation unit (120) are passed through the trained cotton boll detection and counting unit (122) for label prediction, and counting of number of bounding boxes is performed to estimate the bounding boxes score using the trained cotton boll detection and counting unit (122).
14. The system as claimed in claim 13, wherein based on the bounding boxes scores the cotton boll detection and counting unit (122) combines multiple bounding boxes into a single object and maps the object with respect to the labels.
15. The system as claimed in any one of claims 1 to 14, wherein the cotton boll detection and counting unit (122) is implemented using a CNN architecture.
16. A computer implemented method for cotton yield prediction in a cotton field (F), the method implemented for training a cotton boll detection and counting unit (122), the method comprising:
acquiring (302), by a capturing unit (106), one or more images of a cotton field (F);
enhancing (304), by a processor (114), the images by performing noise filtering on each image;
annotating (306), by the processor (114), the images with labels to detect feature maps;
passing (308), by the processor (114), the enhanced images with labels to a cotton boll detection and counting unit (122);
extracting (310), by the cotton boll detection and counting unit (122), multi-scale features from the enhanced and labeled images and optimizing the extracted features;
calculating (312), by the cotton boll detection and counting unit (122), an error in the optimized features and minimizing the errors;
updating (314), by the cotton boll detection and counting unit (122), weights of customized algorithm as a learnt weight; and
storing (316) by the cotton boll detection and counting unit (122), the learnt weight in a database (126).
17. The method as claimed in claim 16, wherein if the error is not minimized at the step (312), the cotton boll detection and counting unit (122) repeats the steps (308) to (314) until the error is minimized.
18. The method as claimed in claim 16, wherein in step (306) the labels include full white boll labels, partial white boll labels and green boll labels.
19. The method as claimed in claim 16, wherein the method comprises mapping each combined bounding box and score with respect to the labels to detect the feature maps.
20. A computer implemented method for cotton yield prediction in a cotton field (F), the method comprising:
acquiring (402), by a capturing unit (106), one or more images of a cotton field (F);
enhancing (404), by a processor (114), the images by performing noise filtering on each image;
performing (406), by a cotton boll detection and counting unit (122), n×n grid sampling on each enhanced image;
using (410), by the cotton boll detection and counting unit (122), learnt weights prestored in a database (126) to initially predict bounding boxes and scores;
bounding (412), by the cotton boll detection and counting unit (122), one or more individual cotton bolls with boxes and estimating a score of the individual bolls using learnt weights from the sampled and enhanced images;
combining (414), by a cotton boll detection and counting unit (122), neighboring bounding boxes and scores;
selecting (416), by the cotton boll detection and counting unit (122), the bounding box with the highest score;
comparing (418), by the cotton boll detection and counting unit (122), the overlapping of bounding boxes with a threshold value;
removing (420), by the cotton boll detection and counting unit (122), the bounding boxes with intersection > 50% and estimating a score;
predicting (422), by the cotton boll detection and counting unit (122), final bounding boxes based on the estimated score;
predicting (424), by a prediction unit (124), the yield based on the predicted final bounding boxes and predefined parameters prestored in the database (126); and
storing (426), by the processor (114), the yield in the database (126).
21. The method as claimed in claim 20, wherein before performing the step of grid sampling (406), the method further comprises:
identifying, by a generation unit (118) of a plot creation unit (116), red, green, and blue (RGB) pixels from the sampled and enhanced images;
performing, by the generation unit (118) of the plot creation unit (116), an oblique stitching of the RGB images into an orthomosaic representing the entire cotton field (F); and
dividing, by a border formation unit (120) of the plot creation unit (116), the orthomosaic image into a number of small plots by using a checkerboard or barcode detection technique for image correction and border formation.
22. The method as claimed in claim 21, wherein the step of performing the oblique stitching comprises vertically stitching the images alternatively and then horizontally stitching to generate the orthomosaic image of the entire cotton field (F).
23. The method as claimed in claim 21, wherein each plot has a pre-determined identification number in a form of a barcode with ground control points (GCP), and wherein the step of dividing comprises automatically extracting the plot(s) by the bar code and determining the boundaries of the plots.
24. The method as claimed in claim 20, wherein the method is implemented as a non-maximum suppression (NMS) technique which is repeated for K iterations by the cotton boll detection and counting unit (122) to get the final bounding boxes for individual cotton bolls.