
Agriai: Smart Plant Disease Prediction Using Deep Learning

Abstract: The proposed invention relates to the development and implementation of a deep learning-based system, "AgriAI", for detecting and predicting plant diseases from leaf image data. The system applies convolutional neural networks (CNNs) to classify and diagnose plant diseases accurately, enabling farmers to respond in time. Traditional disease detection relies on manual inspection, which is time-consuming, inconsistent, and often unavailable in remote areas, while existing technologies cover only a limited range of diseases and frequently require expert intervention. The proposed system provides an automated, scalable, real-time, and user-friendly prediction model accessible through web and mobile applications, and recommends suitable pesticides for the identified disease. This solution has the potential to advance precision agriculture by offering a practical approach to crop disease management.


Patent Information

Application #: 202541074777
Filing Date: 06 August 2025
Publication Number: 36/2025
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Email:
Parent Application:

Applicants

MLR Institute of Technology
Hyderabad

Inventors

1. Dr. Ajmeera Kiran
Department of Computer Science and Engineering, MLR Institute of Technology, Hyderabad
2. Mr. S K Lokesh Naik
Department of Computer Science and Engineering, MLR Institute of Technology, Hyderabad
3. Mrs. Pushpa Rani
Department of Computer Science and Engineering, MLR Institute of Technology, Hyderabad
4. Mrs. I Sapthami
Department of Computer Science and Engineering, MLR Institute of Technology, Hyderabad

Specification

Description: Field of the Invention
This invention relates to agricultural technology and artificial intelligence. It provides a system and method for detecting and predicting plant diseases by applying deep learning to crop and plant image data.
Objectives of the Invention
Agriculture is vulnerable to crop loss from plant diseases caused by bacteria, viruses, fungi, or environmental factors. Traditional disease detection methods rely on manual inspection, which is time-consuming, unreliable, and often unavailable in remote locations. Existing technologies cannot identify many plant diseases and often require expert intervention.
Machine learning, especially deep learning, can be used to construct a comprehensive, scalable, and automated method to identify and predict plant diseases from image data. Deep neural networks are used to create a highly accurate, real-time, and user-friendly plant disease prediction model that improves on current systems.
Background of the Invention
Document US9424514B2 describes neural network training that includes a synaptic maintenance process. Synaptogenic trimming is first applied to the inputs of the neurons via a synaptogenic factor computed for each neuron as a function of the standard deviation of a measured match between the input and the synaptic weight values. A top-k competition is then held among all the neurons, and a subset of them is chosen as the winning neurons. Only the winning neurons are updated through neuronal learning, which adjusts their synaptic weights and synaptogenic variables.
In US11308325B2, a system includes a camera configured to generate image data and a computing device in electronic communication with the camera. The computing device, which contains at least one processor, is configured to receive one or more representative images from the camera and to apply a trained classifier to those images to classify a location into one of at least two risk categories, the determination being based on the probability of a subject exhibiting a target behavior while in that location. The computing device is further configured to transmit a risk alert when the trained classifier determines that the location falls into a high-risk category.
US20220189022A1 discloses a computer-implemented process for predicting the fate of an embryo from video image data. The process receives image data derived from a video of the target embryo recorded at substantially real-time frame rates over an embryo observation period, during which morphogenetic movement of the target embryo is captured in the received image data. The received image data is processed using a model created through machine learning, and corresponding embryo fate data is provided.
The technology of US20230207060A1 is a method of obtaining a multiple sequence alignment that compares a query residue sequence to multiple non-query residue sequences. A succession of regularly spaced masks is applied to a first group of residues at certain positions in the alignment; a section of the alignment is then removed that includes the masks at those first positions together with a second group of residues at other positions where the masks are not applied. A residue of interest at a position of interest in the query residue sequence belongs to the first group of residues.
Document US20250131268A1 discloses the use of sensor inputs, such as image and environmental sensors, for real-time growth optimization of plants in indoor farming by modifying light and other environmental factors. The sensors use wireless connectivity to form an Internet of Things network, and the optimization decisions take into account machine learning analysis and image recognition of the plants being grown. After a machine-learning model is formed and/or trained in the cloud, it is deployed to an edge device located within the indoor farm, which avoids the connectivity issues previously experienced between the sensors and the cloud. Crops grown in an indoor farm are continuously monitored, and the light intensity and spectrum output are automatically adjusted to optimal levels at optimal times to produce superior crops. The systems and methods are self-regulating: light controls plant growth, and plant growth controls light volume and intensity.
Summary of the Invention
Crops can be lost to bacterial, viral, fungal, or environmental plant diseases. Traditional disease diagnosis relies on visual inspection, which is time-consuming, imprecise, and not accessible in rural regions. Many plant diseases cannot be detected with the technologies currently available, which often depend on expert knowledge.
Deep learning makes it possible to build a comprehensive, scalable, and autonomous method to identify and predict plant diseases from image data. Deep neural networks are used to develop a highly accurate, real-time, and interactive plant disease prediction model that improves on current systems.
"AgriAI" is a deep learning algorithm capable of detecting and predicting plant diseases from leaf images to aid precision agriculture. The algorithm applies robust convolutional neural networks (CNNs) to classify and diagnose plant diseases accurately, allowing farmers to respond in time.
The farming sector is highly susceptible to losses from plant diseases, most often caused by bacteria, viruses, fungi, or environmental contamination. Conventional disease diagnosis techniques depend on visual inspection, which is not only time-consuming but also prone to inconsistency and unavailable in most rural areas. The technological solutions available have a limited scope, cannot detect a broad spectrum of plant diseases, and often require the services of an expert.
Current advances in machine learning, particularly deep learning, make it possible to develop an automated, scalable, and resilient solution that accurately detects and predicts plant diseases from image data. Using deep neural networks, this technology delivers a plant disease prediction model that is precise, real-time, and user-friendly, overcoming the shortcomings of existing methods.
Detailed Description of the Invention
The proposed invention enables real-time disease monitoring of different plant species. Images of plant leaves are captured in real time and routed to a microcontroller, where different kinds of plants are distinguished based on the location of the already identified plant. The leaves are then isolated from the image stream and routed to the cloud, where a machine learning-based algorithm determines whether each leaf is infected; this is repeated until the leaves are no longer infected. A database of pesticides effective against diseases that infect various plant species is also maintained. If a leaf is determined to be infected, a pesticide is suggested for that specific plant species so that the disease is cured or prevented from spreading further. Plant diseases are detected using CNN deep learning trained on the collected data, which comprises both diseased and non-diseased leaves, with diseased leaves further differentiated by disease type. The pesticide database allows recommendations to be delivered through the internet or a mobile app.
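The pesticide recommendation described above can be pictured as a lookup keyed by the predicted class label. The following is a minimal Python sketch under stated assumptions: the PESTICIDE_DB entries and the recommend() helper are illustrative, since the actual pesticide database held in the cloud is not enumerated in this disclosure.

# Minimal sketch of the disease-to-pesticide lookup described above.
# The disease labels follow the dataset tags listed later in this description;
# the pesticide entries and this helper are illustrative assumptions.
PESTICIDE_DB = {
    "Potato Early Blight": "chlorothalonil-based fungicide",
    "Potato Late Blight": "mancozeb-based fungicide",
    "Tomato Bacterial Spot": "copper-based bactericide",
    "Pepper Bacterial Spot": "copper-based bactericide",
}

def recommend(predicted_label: str) -> str:
    """Return a treatment suggestion for a predicted leaf condition."""
    if "Healthy" in predicted_label:
        return "No treatment needed; the leaf appears healthy."
    return PESTICIDE_DB.get(predicted_label, "Consult an agronomist for this disease.")

print(recommend("Potato Late Blight"))  # -> mancozeb-based fungicide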
The data collection process involves gathering leaf images of plants from various sources such as Kaggle and plant nurseries. In this case, datasets of potato, tomato, and bell pepper were collected, with each type further divided into multiple categories. Potato images carry three tags: Potato Healthy, Potato Early Blight, and Potato Late Blight. Tomato images carry ten tags: Tomato Mosaic Virus, Target Spot, Bacterial Spot, Yellow Leaf Curl Virus, Late Blight, Leaf Mold, Early Blight, Spider Mites (Two-Spotted Spider Mite), Tomato Healthy, and Septoria Leaf Spot. Bell pepper images carry two tags: Bacterial Spot and Healthy. Once the data is collected, a cleaning step removes unwanted blurry images, leaving a total of 20598 images. The cleaned data is then split into training, validation, and test sets in a 7:1:2 ratio, yielding 14440 training images, 2058 validation images, and 4140 test images.
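The 7:1:2 split described above can be reproduced with a short TensorFlow sketch. The directory path, image size, batch size, and seed below are illustrative assumptions; only the split ratio comes from the description.

import tensorflow as tf

IMG_SIZE = (256, 256)   # assumed input resolution
BATCH = 32              # assumed batch size

# Load the cleaned leaf images from a class-per-folder directory.
# "leaf_dataset/" is an assumed path; the cited sources are Kaggle and plant nurseries.
dataset = tf.keras.utils.image_dataset_from_directory(
    "leaf_dataset", image_size=IMG_SIZE, batch_size=BATCH, shuffle=True, seed=42
)

def split_dataset(ds, train_frac=0.7, val_frac=0.1):
    """Split a batched dataset into train/validation/test sets in a 7:1:2 ratio."""
    n = len(ds)                         # number of batches
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train_ds = ds.take(n_train)
    val_ds = ds.skip(n_train).take(n_val)
    test_ds = ds.skip(n_train + n_val)
    return train_ds, val_ds, test_ds

train_ds, val_ds, test_ds = split_dataset(dataset)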
Training data is used to train the model, validation data is used to validate batches of training data at each epoch (iteration), and test data is used to evaluate the CNN model before deployment. Once the data is split, data augmentation is applied: the images are resized, rescaled, and flipped horizontally and vertically. Augmentation generates additional datasets from the existing one and helps improve model accuracy. The augmented images become three-dimensional arrays with RGB values in the range 0-255 and are passed to the CNN model as input. The CNN model consists mainly of three kinds of layers: convolution, activation, and pooling. The convolution layer performs feature extraction using filters; the dot product of the two-dimensional image matrix and the filter is passed as input to the activation layer. The pooling layer performs dimensionality reduction, making the network tolerant to a variety of distortions. Of the two common pooling types, max pooling and average pooling, this model uses max pooling. Through this process the CNN model is built.
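A minimal Keras sketch of this pipeline is given below, assuming TensorFlow. The resize/rescale and flip augmentations, the convolution/activation/max-pooling structure, and the fifteen output classes (3 potato + 10 tomato + 2 bell pepper tags) follow the description; the number of blocks, filter counts, and optimizer are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 15  # 3 potato + 10 tomato + 2 bell pepper tags

# Resize, rescale 0-255 RGB values to 0-1, and apply the flips mentioned above.
preprocess = models.Sequential([
    layers.Resizing(256, 256),
    layers.Rescaling(1.0 / 255),
])
augment = models.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
])

# Convolution -> activation -> max-pooling blocks, followed by a classifier head.
model = models.Sequential([
    preprocess,
    augment,
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # epoch count is an assumption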
The model will be deployed on the cloud; from the front end, a photo of a leaf is uploaded to the back end, where the model processes the image and returns the result. This is accomplished through web, mobile, and microcontroller applications. The user interface module includes a sign-in/sign-up page and input/output fields, so that only authenticated users can log in. After logging in, users can choose the plant type and scan a leaf to identify any disease. The disease prediction module hosts the trained machine learning model and receives image data fed from the user applications. The data gathered for disease prediction is assessed against manually collected data, passed through a data cleaning pipeline, and split into training, validation, and test sets.
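A minimal sketch of the cloud-side prediction endpoint follows, assuming a Flask back end and a saved Keras model; the route name, model file name, and truncated class list are illustrative assumptions rather than part of the disclosure.

import io
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("agriai_cnn.keras")  # assumed model file
# Truncated, illustrative label list; the deployed model would carry all classes.
CLASS_NAMES = ["Potato Early Blight", "Potato Healthy", "Potato Late Blight"]

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded leaf photo, resize it, and run the CNN.
    img = Image.open(io.BytesIO(request.files["leaf"].read())).convert("RGB")
    img = img.resize((256, 256))
    batch = np.expand_dims(np.array(img), axis=0)
    probs = model.predict(batch)[0]
    idx = int(np.argmax(probs))
    return jsonify({"disease": CLASS_NAMES[idx], "confidence": float(probs[idx])})

if __name__ == "__main__":
    app.run()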
The use case activity of the invention is the sequence of actions that take place while the process runs. First, the user registers with the application; after registration, the user logs in with the correct credentials. Once logged in, photos are captured with a mobile phone or any other camera, and the captured images are sent as an input stream to the server. The extracted data is stored in the cloud, whose database contains a large number of datasets of healthy and unhealthy leaves along with a list of pesticides used to treat the detected leaf diseases. The convolutional neural network (CNN) model is trained to identify the presence of disease based on the dataset that already exists in the database. If a disease is identified, the application informs the user and, based on the prediction, suggests a pesticide that is effective in treating it.
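An illustrative client-side call matching this use-case flow is shown below; the endpoint URL and form-field name are assumptions chosen to match the server sketch above.

import requests

# Post a captured leaf photo to the (hypothetical) prediction endpoint.
with open("leaf_photo.jpg", "rb") as f:
    response = requests.post("https://agriai.example.com/predict", files={"leaf": f})

result = response.json()
print(f"Predicted disease: {result['disease']} (confidence {result['confidence']:.2%})")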
The disease prediction developed here enables monitoring of several plant species. Information about plant leaves is gathered, and the leaf images can be sent over the internet using a mobile app or a web application. The leaves are then extracted from the image stream and transmitted to the cloud, where the machine learning algorithm decides whether each leaf is healthy or diseased. A list of pesticides that can be used against diseases infecting various plant species is also maintained. If a plant leaf is infected, a pesticide recommendation is generated for that plant species in order to treat the disease or prevent its further spread. Plant diseases are identified by training a CNN deep learning algorithm on the collected data, which comprises healthy and infected leaves, with diseased leaves further classified by disease type. The pesticides in the database can be recommended and accessed through the internet or a mobile app.
Brief Description of the Drawing
The following figure illustrates an exemplary embodiment of the invention.
Figure 1. System Architecture of the Proposed Method

Claims: The scope of the invention is defined by the following claims:

Claim:
A system/method for the detection and prediction of plant diseases based on image data, said system/method comprising the steps of:
1. A deep learning-based method for predicting plant diseases, comprising:
a) an image acquisition module configured to capture images of plant leaves;
b) a preprocessing engine configured to normalize the images and prepare them for analysis;
c) a deep learning model trained to classify plant diseases from those images;
d) a prediction engine configured to generate a report based on that classification; and
e) a user interface that displays the predicted disease and suggested treatments.
2. The system according to claim 1, wherein the deep learning model comprises a convolutional neural network trained on a set of labeled plant leaf images.
3. The system according to claim 1, wherein the prediction engine outputs a disease name with an associated confidence score, and the preprocessing engine includes a background removal module that separates the plant leaf from its surroundings.
4. The system according to claim 1, further comprising a feedback mechanism that lets users accept or reject the predicted result, the feedback being used to improve the model, and a user interface that provides treatment advice such as pesticide suggestions and growing tips.

Documents

Application Documents

# Name Date
1 202541074777-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-08-2025(online)].pdf 2025-08-06
2 202541074777-FORM-9 [06-08-2025(online)].pdf 2025-08-06
3 202541074777-FORM FOR STARTUP [06-08-2025(online)].pdf 2025-08-06
4 202541074777-FORM FOR SMALL ENTITY(FORM-28) [06-08-2025(online)].pdf 2025-08-06
5 202541074777-FORM 1 [06-08-2025(online)].pdf 2025-08-06
6 202541074777-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-08-2025(online)].pdf 2025-08-06
7 202541074777-EVIDENCE FOR REGISTRATION UNDER SSI [06-08-2025(online)].pdf 2025-08-06
8 202541074777-EDUCATIONAL INSTITUTION(S) [06-08-2025(online)].pdf 2025-08-06
9 202541074777-DRAWINGS [06-08-2025(online)].pdf 2025-08-06
10 202541074777-COMPLETE SPECIFICATION [06-08-2025(online)].pdf 2025-08-06