Deep Learning Integrated Image Based Approach For Detecting Leaf Pathology In Crops

Abstract: Agriculture is vital to economic development and to the production of high-quality food, yet plant diseases can wipe out crop varieties and cause large losses in food production. Early identification of plant diseases through precise, automated detection methods can improve cultivation efficiency and prevent financial losses. Over the past few decades, deep learning has substantially improved the reliability of object recognition and image classification techniques. In this work, we therefore applied pre-trained convolutional neural network (CNN) models to plant disease identification, concentrating on fine-tuning the hyperparameters of widely used pre-trained models: Inception V4, DenseNet-121, ResNet-50, and VGG-16. The experiments used the well-known PlantVillage dataset, which contains 54,305 image samples of plant diseases across 38 classes. Model performance was assessed using F1 score, sensitivity, specificity, and classification accuracy. The experiments demonstrated that DenseNet-121 outperformed contemporary algorithms, achieving a classification accuracy of 99.81%. 4 Claims & 1 Figure

Patent Information

Application #
202341069032
Filing Date
13 October 2023
Publication Number
42/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

MLR Institute of Technology
Laxman Reddy Avenue, Dundigal-500043

Inventors

1. Dr. M. Kalpana Chowdary
Department of Computer Science and Engineering, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
2. Dr. Ajmeera Kiran
Department of Computer Science and Engineering, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
3. Mrs. I. Sapthami
Department of Computer Science and Engineering, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
4. Mr. B. Murali Krishna
Department of Computer Science and Engineering, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043

Specification

Description: Field of Invention
The present disclosure advances the technical domains of machine vision and digital computational imaging. It relates primarily to crop disease identification from wide-field-of-view digital photographs, and more generally to the identification of agricultural diseases from digital imagery.

The Objectives of this Invention
The main objective of the invention is to enable farmers to easily identify and recognize various leaf diseases by analyzing input images with deep learning-based algorithms.
Background of the Invention
In (TWI2011/435234B), the invention pertains to an apparatus, a method, and a recording medium for plant disease recognition. Specifically, it concerns a system, a method, and a recording medium that use plant imaging and color-tone evaluation technology for rapid analysis of plant symptoms and recognition of plant diseases. Most existing autonomous plant pest and disease detection technology relies on gas sensing (such as laser photoacoustic sensing), which requires gas-sensing hardware to detect the gases released by plants in a fully enclosed space. Furthermore, more research is needed to establish the relationship between different plant maladies and pests (such as insects) and the gases that plants emit, and the hardware required to develop this technology is expensive.

According to (CN2019/109840549B), the development concerns agriculture-related technology, specifically a device and procedure for diagnosing plant diseases and insect pests. The technique consists of the following steps: building a standard growth model by matching a plant growth library against the plant types present in a designated area and their current growth period; collecting real data for each plant growth index in the designated area and comparing it with the standard growth model to assess the plants' growth state; and, when a plant is in a diseased or insect-pest state, gathering images in the designated area, classifying the disease or pest using image recognition and an inventory of diseases and pests, and pushing the classification result to an intelligent terminal. The invention creates a comprehensive plant index database in advance, matches the actual data gathered against the standard information in order to precisely track the plant's growth state, assesses the degree of plant disease and insect-pest occurrence, and uses image recognition to further identify the types of plant diseases and pests, so that the user can quickly, precisely, and promptly identify them.

In addition, (CN2019/110188824B) discloses an apparatus and a methodology for small-sample identification of plant diseases. The process involves the following steps: building a training set from all or part of an augmented sample set that passes a verification procedure, together with original images that do not contain disease; training a convolutional neural network on this set to obtain a classification model; and feeding the disease image to be identified into the classification model to obtain a disease identification result. The initial sample set is obtained by randomly choosing a plurality of original images containing diseases; it is then expanded by an improved deep convolutional generative adversarial network (GAN) to obtain a second sample set. By using this GAN to expand the collection of disease images from small plant samples, the enlarged second sample set achieves a positive-negative ratio of roughly 1:1, balanced data, and a sample count approaching ten thousand.
The expanded second sample set is then used to train a convolutional neural network to classify the infection, which improves the method's ability to classify diseases. In addition, (CN2019/110148120B) offers a CNN-based system and intelligent identification method for plant diseases. The apparatus can reduce interference from the image background, achieving higher identification accuracy even with fewer training samples and higher multi-classification operating efficiency. The disease image recognition method proceeds as follows: a deep convolutional neural network (CNN) learns a first image characteristic from normal plant images, transfer learning is used to learn a second image characteristic from diseased plant images, and the first and second image characteristics are finally combined to perform classification and recognition. The concrete steps are: (1) image preprocessing, standardizing the image size and rapidly locating the disease region using Faster R-CNN to discard background interference; (2) image feature extraction, obtaining image characteristics through a triplet similarity measurement model and then completing weighted fusion by adopting SIFT features as compensation features; and (3) disease determination and recognition. In addition, (AU2020/103613A4) offers a disease-intelligent recognition technique and framework founded on CNNs and transfer learning that can reduce image-background disruption, achieve high identification accuracy when samples are few, support multi-classification training with few specimens, and operate efficiently. The intelligent classification approach for disease images comprises: image preprocessing, which sizes images appropriately and employs Faster R-CNN to swiftly locate a disease-sign region and remove background contamination; and image feature extraction, which utilizes a triplet similarity assessment approach for extracting image characteristics.
In (Praveena et al. (2020), European Journal of Molecular & Clinical Medicine, Vol. 7, Issue 4, pp. 2438-2445), the authors note that the environment and all living things depend heavily on plants, which keep the surroundings in balance. Plant disease is a deterioration of a plant's normal state that prevents or alters its vital processes. Every plant species, whether domesticated or wild, can become diseased. Although leaves are the organs most often affected by these diseases, fruits and stems may be affected as well; for the majority of plants, leaf diseases are the most prevalent type. Plant pathology is the scientific study of the infections and environmental factors that sicken crops. Organisms that can spread disease include fungi, oomycetes, bacteria, viruses, viroids, and so on. The most recent method uses photos of plant leaves to automatically classify diseases, with neural network-based approaches largely employed for pathogen detection and classification. In (Anupriya et al. (2023), 2023 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 2023, pp. 1-6), the proposed system can distinguish thirteen distinct and intricate types of crop disease from healthy leaves. The primary goal of this methodology for identifying and forecasting plant diseases is to serve the agro-based and agricultural industries: it aids early diagnosis and discovery of agricultural illnesses, enabling farmers to take appropriate preventive or corrective action that may ultimately help increase crop output.

Summary of the Invention
In the present investigation, we analyzed various transfer learning models suitable for accurately classifying 38 distinct plant disease categories. Employing transfer learning approaches, state-of-the-art deep neural network algorithms were standardized and evaluated on F1 score, sensitivity, specificity, and classification accuracy. Among the pre-trained designs evaluated, DenseNet-121 fared better than ResNet-50, VGG-16, and Inception V4. Because the DenseNet-121 model had fewer trainable parameters and lower computing costs, it was comparatively simple to train.
Detailed Description of the Invention
CNN-based approaches work well on image datasets for object recognition and classification. Despite their benefits, CNNs still have drawbacks, such as lengthy training periods and the need for large datasets. Deep CNN models are needed to extract both intricate and low-level characteristics from images, which makes the training procedure more difficult. These difficulties can be mitigated with transfer learning strategies: pre-trained models are reused so that parameters learned on one dataset can be applied to different scenarios. Plant disease datasets store multiple photos of both healthy and diseased plant specimens, and every specimen is linked to a specific class. If we treat the banana plant as a single entity, for example, all photos of healthy and diseased banana specimens will be assigned to the same class, and the target image is then classified using only the features extracted from the original image. Continuing with the banana example, the banana class is affected by four disease groups: bunchy top virus, fusarium wilt, black sigatoka, and Xanthomonas wilt. After retraining with all four sets of clinical data under a single banana class, when an image of a particular disease is supplied as input, evaluating the output identifies the exact disease from among the four groups assigned under that class. Thus, whereas classes in multi-class classification are independent of one another, in multi-label classification each category inside a class is regarded as a separate class: if there are N classes, we speak of N multi-classes, and if the N classes contain M subcategories, then every grouping inside the N classes is regarded as a class in its own right.
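To make the distinction concrete, the following minimal sketch (not taken from the patent; the toy backbone and layer sizes are illustrative) contrasts the two output heads in the Keras API: a softmax head treats the 38 PlantVillage classes as mutually exclusive (multi-class), while a sigmoid head scores each category independently (multi-label).

```python
# Minimal sketch (illustrative, not the patent's implementation): contrasting
# output heads for multi-class vs. multi-label classification in Keras.
from tensorflow.keras import layers, models

NUM_CLASSES = 38  # PlantVillage classes, as stated in the description

# Toy feature extractor standing in for a real pre-trained backbone.
backbone = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# Multi-class: classes are mutually exclusive -> softmax + categorical cross-entropy.
multi_class_head = layers.Dense(NUM_CLASSES, activation="softmax")

# Multi-label: each category is an independent yes/no -> sigmoid + binary cross-entropy.
multi_label_head = layers.Dense(NUM_CLASSES, activation="sigmoid")

model = models.Sequential([backbone, multi_class_head])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```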
Even when trained on top-tier GPUs, most state-of-the-art models require days or weeks of training and fine-tuning; training and creating a model from scratch takes time. On a publicly accessible plant disease dataset, a pre-trained CNN using a transfer learning strategy achieved 63% accuracy in roughly half as many iterations (around 100 epochs) as a CNN built from scratch, which reached about 25% accuracy in 200 epochs. There are various transfer learning approaches; which one to use depends on the specifics of the dataset and the pre-trained network model to be used for classification. ResNet-50 is a convolutional neural network with 50 deep layers. The framework is divided into five stages of convolution and identity blocks, and these residual structures provide the foundation for computer vision tasks. ResNet introduced the idea of stacking convolution layers on top of one another; besides stacking the network's convolution layers, it includes many skip connections, which allow the signal to reach the output without passing through every intermediate layer. To address the vanishing gradient problem, the skip connection can also be positioned prior to the activation function. Deeper models otherwise accumulate more errors; the residual network's skip connections were introduced to address this. What all these shortcuts have in common is identity mapping.
ResNet-50 contains both convolution and identity blocks. Each identity block comprises three convolutional layers, and the network has more than 23 million trainable parameters. The two addends are the input x and the shortcut x; they can be combined if the output dimensions of the batch normalization and convolution layers match those of the shortcut. If not, shortcut x must pass through a convolution layer and batch normalization to match the dimension. The VGG-16 network, sometimes referred to as the Very Deep Convolutional Network for Large-Scale Image Recognition, was created by the Visual Geometry Group at Oxford University. Its depth extends to 16-19 weighted layers with up to 138 M trainable parameters, and the model's depth is increased by decreasing the convolution filter size to 3×3. This model uses more storage space and takes longer to train.
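As a minimal sketch of the residual connection just described (filter counts and strides are illustrative, not ResNet-50's exact configuration): when the output dimensions change, the shortcut is projected with a 1×1 convolution plus batch normalization so the two tensors can be added.

```python
# Minimal sketch (illustrative sizes): a residual block whose shortcut is
# either an identity mapping or a 1x1 projection, as described above.
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # If the spatial or channel dimensions changed, project the shortcut with
    # a 1x1 convolution (plus batch normalization) so the tensors can be added.
    if stride != 1 or shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride)(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    y = layers.Add()([y, shortcut])  # the skip connection
    return layers.ReLU()(y)
```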
DenseNet-121 is a deep CNN algorithm that classifies images using densely connected layers. Each layer in the network passes its produced feature maps on to the layers after it, along with the inputs it received from the layers before it. Because every layer's output is concatenated, each subsequent layer receives the accumulated knowledge of all preceding layers. Moreover, because the feature maps of previous layers are carried forward, the network can remain narrow and thin; this reduces the total number of channels in a dense block, and the growth rate k denotes the number of channels each layer contributes. For every composite layer, batch normalization, activation, and convolution operations are performed to produce feature maps of k channels; batch normalization, pooling, ReLU activation, and convolution then transform the outputs of later layers. The layers thus develop more varied characteristics and a strong gradient flow. Compared with ResNet, DenseNet is smaller. Additionally, DenseNet uses all features, regardless of their varying complexity, and offers smooth decision boundaries, whereas the classification layers in a conventional ConvNet model process only the most complex information.
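A minimal sketch of such a dense block follows, assuming illustrative values for the number of composite layers and the growth rate k; it is not DenseNet-121's exact configuration.

```python
# Minimal sketch of a DenseNet-style dense block: each composite layer
# (BN -> ReLU -> 3x3 conv) produces k feature maps (the growth rate) and is
# concatenated with everything before it. Layer counts are illustrative.
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        # Concatenate so later layers receive the accumulated feature maps.
        x = layers.Concatenate()([x, y])
    return x
```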
Images vary in size and are packed with important elements and characteristics. These size discrepancies make it difficult to select the ideal filter size for feature extraction: a larger kernel suits extracting global information, whereas a smaller kernel suits extracting local details. Simply adding convolution layers can lead to vanishing gradients and overfitting. To address this, Inception modules include varying kernel sizes in every block, making the network wider rather than deeper. For example, the naïve Inception module can apply 1×1, 3×3, and 5×5 filters in parallel convolution branches; after max-pooling, the results are concatenated and sent to the following layer. The Inception stem performs a preliminary set of operations before the first Inception module, and Inception V4 additionally includes reduction blocks that change the height and width of the grid.
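A minimal sketch of the naïve Inception module just described, with illustrative filter counts: parallel 1×1, 3×3, and 5×5 convolutions plus a max-pooling branch, concatenated along the channel axis.

```python
# Minimal sketch (illustrative filter counts) of a naive Inception module:
# parallel convolutions of different kernel sizes plus max-pooling,
# concatenated along the channel axis.
from tensorflow.keras import layers

def naive_inception_module(x, f1=64, f3=128, f5=32):
    branch1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    branch3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    branch5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    pool = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    return layers.Concatenate()([branch1, branch3, branch5, pool])
```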
Our system was evaluated on a workstation with an NVIDIA GeForce GTX GPU. The system specifications were Windows 10, GDDR5 graphics memory, a 9th-generation Core i5 processor, and 8 GB of RAM. The Anaconda3, Keras, OpenCV, NumPy, cuDNN, and Theano libraries were used in the software implementation. cuDNN and CNMeM are lightweight libraries created specifically to execute deep learning computations faster and with reduced memory usage; NVIDIA created both to work with the Theano backend. With its support for Linux, Windows, macOS, iOS, Python, Java, and Android interfaces, OpenCV facilitates projects for both research and business purposes. Both testing and training accuracy were assessed for every experiment in this investigation, and the losses incurred during the training and testing stages were computed for every model. To speed up CNN learning with transfer learning models, the algorithms were trained on the PlantVillage dataset. The pre-trained models ResNet-50, Inception V4, VGG-16, and DenseNet-121 were used in our investigation; these models were trained on the ImageNet dataset, which has 1.2 M images in 1000 image classes. PlantVillage is a freely accessible resource covering several kinds of plant diseases; it contains 54,305 photos in 38 classes. For the experimental study we partitioned the dataset into training, validation, and testing components: 80% of the PlantVillage dataset was utilized for training the pre-trained models, and the remaining 20% for validation and testing. Of the 54,305 samples available across the plant categories, 43,955 were used for training, 4902 for validation, and 5488 for testing. All 38 distinct plant disease classes are represented in the train, test, and validation sets. Table 2 presents the specifics of the dataset split.
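The split described above could be reproduced along the following lines. This is a hedged sketch, not the patent's actual code; the plantvillage/<class_name>/ directory layout is an assumption.

```python
# Minimal sketch of an 80/20 stratified split of the PlantVillage images,
# assuming (not from the patent) a layout of plantvillage/<class_name>/*.jpg.
from pathlib import Path
from sklearn.model_selection import train_test_split

paths = [str(p) for p in Path("plantvillage").glob("*/*.jpg")]
labels = [Path(p).parent.name for p in paths]  # class name from parent folder

# 80% for training, 20% held out, matching the split described above.
train_paths, held_paths, train_labels, held_labels = train_test_split(
    paths, labels, test_size=0.20, stratify=labels, random_state=42)

# The held-out 20% is divided between validation (4902 samples in the text)
# and testing (5488 samples); an even split approximates those counts.
val_paths, test_paths, val_labels, test_labels = train_test_split(
    held_paths, held_labels, test_size=0.5, stratify=held_labels, random_state=42)
```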
Preparation and Enrichment of Data
The dataset spans 14 crop species and 26 diseases across the 38 groups. We used the PlantVillage dataset's color photos for our experiment since they worked well with the transfer learning algorithms. Because the various pre-trained network models require different input sizes, the photos were first standardized to 256×256 pixels: the input shape (height, width, channels) for VGG-16, DenseNet-121, and ResNet-50 is 224×224×3, while for Inception V4 it is 299×299×3. Although the dataset is large, containing over 54,000 photos of various crop diseases, the photos accurately represent real photos taken by farmers with various image-acquisition tools, including smartphones, high-definition cameras, and Kinect sensors. Overfitting is nevertheless a common problem with a dataset of this size, so regularization methods such as data augmentation after preprocessing were used to counter it. Augmentation of the preprocessed photos included rescaling, flipping the images horizontally and vertically, rotating them clockwise and counterclockwise, and intensifying the zoom. The augmented photos were generated on the fly during the training phase rather than stored as physical copies, since the images were transformed rather than duplicated. This augmentation strategy not only keeps the model from overfitting but also makes it more resilient, allowing it to classify real-world plant disease photos more accurately. The transfer learning approach has two advantages over training from scratch: it learns more quickly, and it allows model layers to be frozen while the final layers are trained for more precise categorization, as sketched below. First, a few hyperparameter-standardization processes were carried out for the various pre-trained models.
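The augmentation operations and layer-freezing strategy just described might look as follows in Keras. This is a sketch under assumptions (the hyperparameter values, the use of ImageDataGenerator, and the directory layout are illustrative), not the patent's actual implementation.

```python
# Minimal sketch of the described augmentation plus a frozen pre-trained
# DenseNet-121 base; hyperparameter values are illustrative assumptions.
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,       # rescaling
    horizontal_flip=True,    # horizontal flips
    vertical_flip=True,      # vertical flips
    rotation_range=30,       # clockwise / counterclockwise rotation
    zoom_range=0.2,          # zoom intensification
    validation_split=0.2,
)

# Assumed layout (as in the split sketch above): plantvillage/<class_name>/*.jpg
train_flow = augmenter.flow_from_directory(
    "plantvillage", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="training")

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained layers; only the head is trained

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(38, activation="softmax"),  # 38 PlantVillage classes
])
model.compile(optimizer=optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```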
Brief Description of the Drawing
The figure illustrates an exemplary embodiment of the invention.
Figure 1: Architecture of the Proposed Invention

Claims: The scope of the invention is defined by the following claims:

1. A system/method for the identification of various plant diseases using deep learning algorithms, said system/method comprising the steps of:
a) the system starts up and data are gathered into datasets through three different processes (1); the data are processed using deep learning models (2);
b) the developed system functions on the basis of training models (3) and testing models (4);
c) based on these data, the different architectures are applied to analyse diseases in the plant (5).
2. The system/method as claimed in claim 1, wherein the data are collected and processed in three different processes: data gathering, data pre-processing, and data augmentation.
3. The system/method as claimed in claim 1, wherein the data are processed in two different phases: the testing and training phases of plant disease data.
4. The system/method as claimed in claim 1, wherein the PlantVillage datasets and previously trained CNN frameworks, including VGG-16, DenseNet-121, ResNet-50, and Inception V4, are utilized in the tests.

Documents

Application Documents

# Name Date
1 202341069032-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-10-2023(online)].pdf 2023-10-13
2 202341069032-FORM-9 [13-10-2023(online)].pdf 2023-10-13
3 202341069032-FORM FOR STARTUP [13-10-2023(online)].pdf 2023-10-13
4 202341069032-FORM FOR SMALL ENTITY(FORM-28) [13-10-2023(online)].pdf 2023-10-13
5 202341069032-FORM 1 [13-10-2023(online)].pdf 2023-10-13
6 202341069032-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-10-2023(online)].pdf 2023-10-13
7 202341069032-EVIDENCE FOR REGISTRATION UNDER SSI [13-10-2023(online)].pdf 2023-10-13
8 202341069032-EDUCATIONAL INSTITUTION(S) [13-10-2023(online)].pdf 2023-10-13
9 202341069032-DRAWINGS [13-10-2023(online)].pdf 2023-10-13
10 202341069032-COMPLETE SPECIFICATION [13-10-2023(online)].pdf 2023-10-13