A System And A Method For Grading Fruits Using Three-Dimensional Mapping

Abstract: A system and a method for grading fruits using three-dimensional mapping is disclosed. A depth camera (115) configured to capture farm videos to generate depth data. The depth camera (115) is configured to be attached via a dedicated attachment mechanism being physically separate from the depth camera and to maintain the depth camera in a fixed orientation during use. An image analysis module (320) configured to generate a three-dimensional map of a scene based on the depth data, distinguish a fruit from a background, identify the fruit within the scene by analysing visual features, and recognise the fruits based on an artificial intelligence module (310) trained on a labelled dataset. The image analysis module (320) estimates a size and a volume of each fruit and determines a weight using a region-specific density profile. A classification module (335) configured to generate a grade mix estimate. FIG. 1

Patent Information

Application #
Filing Date
07 August 2025
Publication Number
36/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

CHIFU AGRITECH PRIVATE LIMITED
8-5-119/5, MALLIKARJUNA COLONY, ROAD NO. 3, OLD BOWEN PALLY, KUKAT PALLY, HYDERABAD-500011, TELANGANA, INDIA

Inventors

1. ARUN KUMAR ARJUNAN
BUILDING NO. 4(SARAYU), 3RD FLOOR 19TH MAIN ROAD, SECTOR 4, HSR LAYOUT, BESIDE DECATHLON, BENGALURU-560102, KARNATAKA, INDIA

Specification

Description:
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to a field of agricultural produce grading and more particularly to, a system and a method for grading fruits using three-dimensional mapping.
BACKGROUND
[0002] In the agricultural industry, fruit grading is a critical step in post-harvest processing, as it directly influences the commercial value, packaging, and distribution strategy of the produce. Grading typically involves classifying fruits based on various attributes such as size, weight, colour, and overall quality. For crops like pomegranates, accurate grading is especially important due to market preferences and pricing structures that are closely tied to weight and visual quality.
[0003] Traditionally, grading of fruits on farms has been performed manually by labourers who visually inspect and handle each fruit. This manual method is not only labour-intensive and time-consuming but also introduces subjectivity and inconsistency in the grading process. The scalability of manual grading is limited, making it unsuitable for large-scale farming operations or rapid throughput requirements. Moreover, manual processes are susceptible to human fatigue and environmental conditions, which can further degrade accuracy and repeatability.
[0004] In recent years, mechanized grading systems have been introduced in controlled environments, such as packhouses, where conveyor-based systems equipped with sensors can perform sorting based on predefined criteria. However, such systems are expensive, infrastructure-intensive, and generally immobile, which restricts their adoption in on-field or small to mid-sized farm settings. These systems also typically require substantial power sources and trained operators, adding to operational complexity.
[0005] Depth sensing technologies, including stereo vision and structured light systems, have demonstrated potential for improving dimensional analysis of objects in various applications. In agricultural contexts, integrating depth data can provide enhanced three-dimensional representations of fruits, enabling more precise estimations of their physical characteristics. Despite this potential, there remains a lack of compact, field-deployable solutions that combine mobile computing with depth data acquisition for fruit grading purposes.
[0006] Hence, there is a need for an improved system and method for grading fruits using three-dimensional mapping to address the aforementioned issue(s).
OBJECTIVES OF THE INVENTION
[0007] The primary objective of the invention is to provide a system and method for grading fruits, particularly pomegranates, apples, sweet limes, oranges, and mangoes, based on their weight by utilizing depth sensing and mobile device integration to enable real-time, on-field grading.
[0008] Another objective of the invention is to enable accurate weight estimation of individual fruits using depth data acquired from a stereo depth camera, combined with a region-specific density profile to account for variations in fruit density.
[0009] Yet another objective of the invention is to facilitate the identification and recognition of individual fruits from video frames captured in real-time under varying lighting and environmental conditions by analysing visual and spatial features from the captured scene.
[0010] A further objective of the invention is to classify the recognized fruits into multiple weight-based categories (buckets) and generate a grade mix estimate, representing the distribution of fruits across these categories for efficient post-harvest planning.
[0011] Another objective of the invention is to perform all data acquisition and processing locally on a mobile device, thereby minimizing the reliance on cloud-based infrastructure, reducing latency, and ensuring efficient operation even in remote field locations with limited connectivity.
[0012] Yet another objective of the invention is to provide a user interface that offers real-time feedback and visual representation of the grade mix, along with the ability to generate harvest quality reports to assist in decision-making related to sorting, packaging, and distribution.
BRIEF DESCRIPTION
[0013] In accordance with an embodiment of the present disclosure, a system for grading fruits using three-dimensional mapping is provided. The system includes a depth camera configured to capture a plurality of farm videos in real-time under varying lighting conditions to generate depth data. The depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use. The system also includes a processor, and a memory coupled to the processor. The memory includes instructions that, when executed by the processor, cause the processor to: generate a three-dimensional map of a scene based on the depth data; distinguish a plurality of fruits from a corresponding background pertaining to the scene; identify the plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map; recognize the plurality of fruits based on an artificial intelligence model, wherein the artificial intelligence model is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm; estimate a size and a volume of each identified fruit based on the depth data; determine a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm; and generate a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories.
[0014] In accordance with an embodiment of the present disclosure, a method for grading fruits using three-dimensional mapping is provided. The method includes capturing, by a depth camera, a plurality of farm videos in real-time under varying lighting conditions to generate depth data, wherein the depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use. The method includes generating, by an image analysis module of a processor, a three-dimensional map of a scene based on the depth data. The method includes distinguishing, by the image analysis module of the processor, a plurality of fruits from a corresponding background pertaining to the scene. The method includes identifying, by the image analysis module of the processor, the plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map. The method includes recognizing, by the image analysis module of the processor, the plurality of fruits based on an artificial intelligence model, wherein the artificial intelligence model is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm. The method includes estimating, by the image analysis module of the processor, a size and a volume of each identified fruit based on the depth data. The method includes determining, by the image analysis module of the processor, a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm. The method includes generating, by a classification module of the processor, a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories.
[0015] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0017] FIG. 1 illustrates a network environment for a system for grading fruits using three-dimensional mapping in accordance with an embodiment of the present disclosure;
[0018] FIG. 2 illustrates a schematic diagram of a user device of FIG. 1, in accordance with an example implementation of the present subject matter;
[0019] FIG. 3 illustrates a schematic diagram of a system for grading fruits using three-dimensional mapping of FIG. 1, in accordance with an embodiment of the present disclosure; and
[0020] FIG. 4 is a flow chart representing the steps involved in a method for grading fruits using three-dimensional mapping in accordance with an embodiment of the present disclosure.
[0021] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.

DETAILED DESCRIPTION
[0022] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0023] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0024] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0025] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0026] Embodiments of the present disclosure relate to a system for grading fruits using three-dimensional mapping. The system includes a depth camera configured to capture a plurality of farm videos in real-time under varying lighting conditions to generate depth data. The depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use. The system also includes a processor, and a memory coupled to the processor. The memory includes instructions that, when executed by the processor, cause the processor to: generate a three-dimensional map of a scene based on the depth data; distinguish a plurality of fruits from a corresponding background pertaining to the scene; identify the plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map; recognize the plurality of fruits based on an artificial intelligence model, wherein the artificial intelligence model is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm; estimate a size and a volume of each identified fruit based on the depth data; determine a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm; and generate a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories.
[0027] FIG. 1 illustrates a network environment for implementing example techniques for a system for grading fruits using three-dimensional mapping in accordance with an embodiment of the present disclosure. Referring to FIG. 1, a user device (104) operated by a user (118) may be communicatively coupled to a system (100). Further, the user (118) may access the system (100) over a communication network (106). The communication network (106) may be a single communication network or a combination of multiple communication networks and may use a variety of different communication protocols. The communication network (106) may be a wireless network, a wired network, or a combination thereof. Examples of such individual communication networks include, but are not limited to, Global System for Mobile Communication (GSM) network, Universal Mobile Telecommunications System (UMTS) network, Personal Communications Service (PCS) network, Time Division Multiple Access (TDMA) network, Code Division Multiple Access (CDMA) network, Next Generation Network (NGN), and Public Switched Telephone Network (PSTN). Depending on the technology, the communication network (106) may include various network entities, such as gateways and routers; however, such details have been omitted for the sake of brevity of the present description.
[0028] The system (100) may include one or more computing devices, such as one or more servers (e.g., in a cloud deployment or in a data centre), one or more personal computers, and/or the like. The user device (104) may include a computing device, such as a desktop or laptop computer, a tablet, a mobile phone, etc. In an example, access to the system (100) may be provided as a web-link via a web browser on the user device (104) or a dedicated application installed on the user device (104). This application is not limited thereto.
[0029] The system (100) may be provided with a database (130). In an example implementation of the system (100) including one or more servers, the database (130) may be a database local to the server or may be remote to the server. The database (130) may serve, amongst other things, as a repository for pre-storing sets of assessment items, corresponding assessment item weights of each of the assessment items, suggestions corresponding to each of the assessment items, a number of pre-defined responses, pre-defined responses and corresponding response weights, pre-defined first expected responses pre-stored as benchmark data, and one or more actions that may be fetched, processed, received, or generated by the system (100). It may be noted that the data in the database (130) may be stored as a table or may be pre-stored as a mapping with one another. This application is not limited thereto.
[0030] Further, the system (100) may include a first processor(s) and a first memory(s). The first processor may fetch and execute the computer-readable instructions stored in the first memory(s) to facilitate grading of fruits, amongst other functions. Similarly, the user device (104) may include a second processor(s) and a second memory(s). The second processor may fetch and execute the computer-readable instructions stored in the second memory(s) to facilitate grading of fruits, amongst other functions.
[0031] A depth camera (115) is configured to capture a plurality of farm videos in real-time under varying lighting conditions to generate depth data, wherein the depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera (115) and adapted to maintain the depth camera (115) in a fixed orientation during use. The depth camera (115) can operate effectively under varying lighting conditions, including low-light scenarios such as dusk, dawn, or shaded crop canopies. The purpose of capturing real-time videos is to generate depth data corresponding to a scene within the farm environment, such as trees, vines, or rows of fruit-bearing plants. An example of the scene includes, but is not limited to, a section of a fruit orchard, a conveyor belt in a post-harvest processing unit, or a sorting area in a packing facility, wherein a plurality of fruits are spatially distributed and observed in real-time. This depth data enables subsequent generation of a three-dimensional (3D) representation of the scene for fruit grading and classification.
[0032] The depth camera (115) in the present embodiment is removably attached to a support structure or a mobile agricultural apparatus, an example of the support structure includes, but is not limited to a drone, harvesting machine, or a handheld rig via a dedicated attachment mechanism. This attachment mechanism is physically separate from the camera itself, meaning it is not integrally formed with or dependent on the structural body of the depth camera.
[0033] In another embodiment, the attachment mechanism may include, but is not limited to, a shock-absorbing mechanical bracket, a gimbal stabilizer, a magnetic docking plate, or a quick-release locking arm mounted onto a harvesting robot or a UAV.
[0034] It may be noted that the foregoing system is an exemplary system and may be implemented as computer executable instructions in any computing or processing environment, including in digital electronic circuitry or in computer hardware, firmware, device driver, or software. As such, the system is not limited to any specific hardware or software configuration.
[0035] FIG. 2 illustrates a schematic diagram of a user device, in accordance with an example implementation of the present subject matter. Referring to FIG. 2, the user device (104) may comprise a processor(s) (202), a memory(s) (204) coupled to and accessible by the processor(s) (202), and an interface (210) coupled to the memory(s) (204). The user device (104) disclosed herein may be the same as the user device (104) described in FIG. 1. The functions of various elements shown in the figures, including any functional blocks labelled as "processor(s)", may be provided through the use of dedicated hardware as well as hardware capable of executing instructions. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing instructions, and may implicitly comprise, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), and field programmable gate array (FPGA). Other hardware, standard and/or custom, may also be coupled to the processor(s) (202). The user device (104) may further include a display (206) in addition to other components such as, but not limited to, a keyboard, sensors, logic circuits, etc. Further, the user device (104) may include data (208) which may include data that may be stored, utilized, or generated during the operation of the user device (104).
[0036] The memory(s) (204) may be a computer-readable medium, examples of which comprise volatile memory (e.g., RAM), and/or non-volatile memory (e.g., Erasable Programmable read-only memory, i.e. EPROM, flash memory, etc.). The memory(s) (204) may be an external memory, or internal memory, such as a flash drive, a compact disk drive, an external hard disk drive, or the like. The user device (104) may further include an interface (210) that may allow the connection or coupling of the user device (104) with one or more other devices, through a wired (e.g., Local Area Network, i.e., LAN) connection or through a wireless connection (e.g., Bluetooth®, Wi-Fi), for example, for connecting to the system (100) shown in FIG. 1. The interface (210) may also enable intercommunication between different logical as well as hardware components of the user device (104).
[0037] FIG. 3 illustrates a schematic diagram of a system for grading fruits using three-dimensional mapping of FIG. 1, in accordance with an embodiment of the present disclosure. Referring to FIG. 3, the system (100) includes a processor(s) (302), a memory(s) (304) coupled to and accessible by the processor(s) (302), and a user interface (330) coupled to the memory(s) (304).
[0038] The system (100) disclosed herein is the same as the system (100) described in FIG. 1. The functions of various elements shown in the figures, including any functional blocks labelled as "processor(s)", may be provided through the use of dedicated hardware as well as hardware capable of executing instructions. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing instructions, and may implicitly comprise, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), and field programmable gate array (FPGA). Other hardware, standard and/or custom, may also be coupled to the processor(s) (302).
[0039] The memory(s) (304) may be a computer-readable medium, examples of which comprise volatile memory (e.g., RAM), and/or non-volatile memory (e.g., Erasable Programmable Read-Only Memory, i.e., EPROM, flash memory, etc.). The memory(s) (304) may be an external memory, or internal memory, such as a flash drive, a compact disk drive, an external hard disk drive, or the like. The system (100) may further include the user interface (330) that may allow the connection or coupling of the system (100) with one or more other devices, through a wired (e.g., Local Area Network, i.e., LAN) connection or through a wireless connection (e.g., Bluetooth®, Wi-Fi), for example, for connecting to the user device (104) as shown in FIG. 1. The user interface (330) may also enable intercommunication between different logical as well as hardware components of the system (100).
[0040] The system (100) may be provided with a database (326) to store the depth data, the three-dimensional representation generated from the depth data, and data corresponding to the plurality of fruits recognized during the grading process. This storage operation ensures data persistence for subsequent analysis, audit, or traceability, thereby supporting historical comparison and enhancing decision-making in agricultural yield estimation and quality assessment. In an example implementation of the system (100) including one or more servers, the databases may be databases local to the server or may be remote to the server. It may be noted that the data in the databases may be stored as a table or may be pre-stored as a mapping with one another. This application is not limited thereto.
[0041] The system (100) may include module(s). The module(s) may include an image analysis module (320), a classification module (325), and an artificial intelligence model (310). In one example, the module(s) may be implemented as a combination of hardware and firmware. In an example described herein, such combinations of hardware and firmware may be implemented in several different ways. For example, the firmware for module(s) may be processor (302) executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the module(s) may include a processing resource (for example, implemented as either single processor or combination of multiple processors), to execute such instructions.
[0042] In operation, the image analysis module (320) is configured to generate a three-dimensional map of a scene based on the depth data. This mapping process is foundational to the fruit grading system, as it reconstructs the spatial structure of the scene, enabling precise identification, segmentation, and volumetric analysis of individual fruits within a natural farming environment.
[0043] The depth data, as captured by the depth camera (115, FIG. 1) typically consists of a dense array of pixel values, each encoding distance information corresponding to a specific point in the real-world scene. The image analysis module (320) utilizes this depth information to reconstruct a spatial point cloud or a depth mesh, representing the three-dimensional map surfaces of the environment. This point cloud includes data for both the fruits and their corresponding background elements, such as branches, leaves, trellises, or the ground plane.
[0044] In another embodiment, the three-dimensional mapping function may include, but is not limited to, the use of Simultaneous Localization and Mapping (SLAM) algorithms and so on.
[0045] An example of the three-dimensional map includes, but is not limited to, a voxel-based representation of fruit positions on a conveyor belt, a mesh model of a tree branch laden with fruits, or a depth-annotated point cloud indicating the distance and contour of each fruit in the camera’s field of view.
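The back-projection from a depth image to a point cloud described in paragraphs [0043]-[0045] can be illustrated with a minimal Python sketch. This is an illustrative example only, not the claimed implementation: the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are assumed values, and a real system would obtain them from the depth camera's calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a 3D point cloud using
    the pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth reading (z == 0)
    return points[points[:, 2] > 0]

# Toy 2x2 depth image, 1 m everywhere except one missing pixel:
depth = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid points, each an (x, y, z) triple
```

A production pipeline would typically run this over every frame and fuse the resulting clouds (e.g., with a SLAM algorithm as noted in paragraph [0044]) into a single scene map.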
[0046] In one embodiment, the image analysis module (320) is configured to distinguish the plurality of fruits from a corresponding background pertaining to the scene. This operation is crucial for segmenting and isolating fruit objects from non-fruit elements in the three-dimensional map of the farming environment.
[0047] The corresponding background may include, but is not limited to, a variety of farm-based elements such as leaves, branches, stems, wires, support structures, tree trunks, soil, or other environmental features that visually or spatially coexist with the fruits in the captured scene.
[0048] In another embodiment, the image analysis module (320) utilizes computer vision models trained on annotated datasets in which the plurality of fruits is labelled distinctly from the background, enabling the processor (302) to perform pixel-level or point-level classification. Techniques employed may include, but are not limited to, semantic segmentation, instance segmentation, and object detection using convolutional neural networks (CNNs) or transformer-based architectures.
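The fruit/background separation of paragraphs [0046]-[0048] would in practice be performed by a trained segmentation network; the following Python sketch substitutes a deliberately simple colour-and-depth heuristic as a toy stand-in, purely to show the shape of the masking step. The red-dominance ratio and working range are assumed values.

```python
import numpy as np

def segment_fruits(rgb, depth, red_ratio=1.3, max_range_m=2.0):
    """Toy stand-in for a trained segmentation model: mark a pixel as
    'fruit' when the red channel dominates green and blue (a crude
    ripe-fruit colour cue) AND the point lies within the working
    depth range of the camera. Returns a boolean mask."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    colour_cue = (r > red_ratio * g) & (r > red_ratio * b)
    depth_cue = (depth > 0) & (depth < max_range_m)
    return colour_cue & depth_cue

# One red-dominant pixel in range, one greenish pixel:
rgb = np.array([[[200, 60, 50], [90, 120, 80]]], dtype=np.uint8)
depth = np.array([[1.2, 1.1]])
mask = segment_fruits(rgb, depth)
# mask[0, 0] is True (fruit-like); mask[0, 1] is False (background)
```

A CNN- or transformer-based segmenter, as the paragraph describes, would replace the heuristic with learned per-pixel class scores, but the downstream masking logic is the same.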
[0049] In one embodiment, the image analysis module (320) is configured to identify a plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map. This identification process is performed after the image analysis module (320) has distinguished the plurality of fruits from the corresponding background, ensuring that the recognition task focuses only on isolated fruit-like structures within the captured scene.
[0050] The three-dimensional map represents the spatial structure of the farming environment and includes both surface-level and volumetric data points that correspond to the physical arrangement of objects. Within this three-dimensional scene, the processor analyses a plurality of visual features that serve as distinguishing characteristics for the plurality of fruits. These visual features include, but are not limited to, shape, surface curvature, colour intensity distribution (if RGB is fused), texture, geometric symmetry, reflectivity, and spatial orientation.
[0051] In one example, identifying the plurality of fruits includes locating spherical or ellipsoidal objects with smooth and convex surfaces that correspond to known fruit geometries in the dataset.
[0052] In one embodiment, the image analysis module (320) is configured to recognize the plurality of fruits based on an artificial intelligence module (310), wherein the artificial intelligence module (310) is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm. The artificial intelligence module (310) is trained using a labelled dataset of fruit images, where each image is annotated with metadata such as fruit type, ripeness level, colour variation, and lighting condition. The labelled dataset includes, but is not limited to, images of fruits such as apples, oranges, mangoes, bananas, guavas, and pomegranates, captured under a wide range of environmental conditions, including direct sunlight, partial shade, overcast skies, and nighttime illumination with artificial lighting.
[0053] Recognition of the plurality of fruits includes comparing the features extracted from the identified fruit-like objects with the trained parameters of the artificial intelligence module (310), which returns a classification output indicating the likely fruit category. The recognition process can further include assigning a confidence score to each classification to quantify certainty under varying image quality and environmental noise.
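The confidence-scoring step of paragraph [0053] can be sketched as a softmax over the model's raw class scores. This is a generic illustration, not the disclosed model: the logit values and fruit labels below are hypothetical.

```python
import math

def classify_with_confidence(logits, labels):
    """Convert raw model scores (logits) into a probability distribution
    with a numerically stable softmax, then return the top class and its
    probability as the confidence score."""
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits from a fruit classifier:
label, conf = classify_with_confidence(
    [2.0, 0.1, -1.0], ["pomegranate", "apple", "mango"])
# label == "pomegranate", conf ~0.83
```

Thresholding this confidence value is one common way to reject low-certainty detections caused by poor image quality or environmental noise, as the paragraph suggests.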
[0054] In one embodiment, the image analysis module (320) is configured to estimate a size and a volume of each identified fruit based on the depth data. This process is initiated after the plurality of fruits has been recognized by the artificial intelligence module (310) and their positions localized within the three-dimensional map corresponding to the scene.
[0055] The size estimation of each identified fruit includes, but is not limited to, calculating parameters such as height, width, and depth, using the bounding box dimensions derived from the point cloud representation generated from the depth data.
[0056] The volume estimation involves applying shape-fitting or surface reconstruction algorithms to the three-dimensional geometry of each fruit. In one implementation, the fruit’s shape may be approximated by an ellipsoid, sphere, or other geometric primitive, based on its contour and curvature profile extracted from the depth data. The volume is then computed using standard volumetric equations corresponding to the selected shape model.
[0057] An example of the plurality of fruits whose size and volume may be estimated using this approach includes, but is not limited to, pomegranates, apples, mangoes, guavas, and oranges, each of which presents differing geometries and surface features.
[0058] In one embodiment, the image analysis module (320) is configured to determine a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm. This determination of the weight enables the system to accurately approximate the mass of each identified fruit without physical contact or weighing mechanisms, thereby facilitating non-invasive, real-time fruit grading.
[0059] The region-specific density profile in this embodiment refers to a predefined dataset or model stored in the memory (304) that contains average density values corresponding to various fruit types as cultivated in different geographical locations.
[0060] An example of the plurality of fruits for which weight can be determined in this manner includes, but is not limited to, mangoes, guavas, pomegranates, and papayas.
[0061] In another embodiment, the image analysis module (320) is configured to apply a pre-measured density profile corresponding to a geographical region, wherein the pre-measured density profile is utilized to correlate the estimated size and the volume of each of the plurality of fruits recognized with a corresponding weight.
[0062] The pre-measured density profile comprises region-specific fruit density data derived from empirical measurements collected over multiple harvest cycles within a defined agricultural zone. These density values reflect the typical mass-to-volume ratio of fruits grown in varying soil, climate, and cultivation conditions.
[0063] An example of a pre-measured density profile includes, but is not limited to, a tabulated dataset where a specific fruit type such as mangoes grown in Region A is assigned an average density of 1.02 g/cm³, while the same fruit type grown in Region B is associated with a density of 0.95 g/cm³ due to differences in irrigation, soil nutrition, and sunlight exposure.
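Mirroring the Region A / Region B mango example above, the weight determination can be sketched as a tabulated density lookup followed by a mass = volume × density calculation; the profile entries here are the illustrative figures from the preceding paragraph plus one assumed value:

```python
# Tabulated region-specific density profile (g/cm^3). The mango entries
# reproduce the example above; the pomegranate entry is an assumption.
DENSITY_PROFILE = {
    ("mango", "Region A"): 1.02,
    ("mango", "Region B"): 0.95,
    ("pomegranate", "Region A"): 1.05,
}

def estimate_weight(fruit_type, region, volume_cm3):
    """Approximate fruit mass in grams as volume times region-specific density."""
    density = DENSITY_PROFILE[(fruit_type, region)]
    return volume_cm3 * density

# The same 250 cm^3 mango weighs more in Region A than in Region B,
# reflecting the density differences noted above.
w_a = estimate_weight("mango", "Region A", 250.0)
w_b = estimate_weight("mango", "Region B", 250.0)
```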
[0064] In one embodiment, the classification module (335) is operatively coupled to the image analysis module (320), wherein the classification module (335) is configured to generate a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories. The classification module (335) utilises the weight of each identified fruit computed earlier to classify the fruits into defined grading tiers based on their individual mass and organizes them into a structured statistical output representing the overall yield composition.
[0065] The grade mix estimate, in this embodiment, refers to a computed aggregation that represents the proportion of the plurality of fruits falling into various predefined weight ranges, each associated with a grading level, for example Grade 1, Grade 2, and Grade 3.
[0066] For example, a fruit grading scheme includes, but is not limited to, Grade 1 for fruits weighing more than 250 grams, Grade 2 for fruits weighing between 150 grams and 250 grams, and Grade 3 for fruits weighing less than 150 grams.
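Using the example grading thresholds above (more than 250 g, 150–250 g, less than 150 g), the grade mix estimate can be sketched as a simple tally converted to batch percentages; the sample weights are hypothetical:

```python
from collections import Counter

def grade_mix_estimate(weights_g):
    """Classify each weight under the example scheme above and return the
    grade mix as percentages of the batch."""
    def grade(w):
        if w > 250:
            return "Grade 1"
        if w >= 150:          # assumption: the 150-250 g band is inclusive
            return "Grade 2"
        return "Grade 3"
    counts = Counter(grade(w) for w in weights_g)
    total = len(weights_g)
    return {g: 100.0 * n / total for g, n in counts.items()}

# Hypothetical batch of eight fruit weights in grams.
mix = grade_mix_estimate([300, 280, 200, 180, 120, 90, 260, 140])
```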
[0067] In another embodiment, the classification module (335) is configured to transmit the grade mix estimate corresponding to the plurality of fruits recognized to the user (118, FIG. 1). The grade mix estimate refers to a categorized assessment of the recognized fruits based on computed weight values and predefined grading thresholds, which are used to segment the fruits into a plurality of weight-based categories such as premium grade, mid-grade, and low-grade classifications.
[0068] An example of the transmission of the grade mix estimate includes, but is not limited to, sending a structured data packet or visual report containing category-wise fruit counts, average weights, and quality indicators via a wireless communication interface (e.g., Wi-Fi, Bluetooth, or GSM) to a user device (104, FIG. 1) such as a smartphone, tablet, or handheld terminal associated with the user (118, FIG. 1).
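The structured data packet mentioned above could, for instance, be serialised as JSON before being sent over the wireless interface; the field names below are illustrative assumptions, not mandated by the disclosure:

```python
import json
from datetime import datetime, timezone

def build_grade_packet(batch_id, mix_percent, avg_weight_g):
    """Assemble an illustrative grade-mix data packet for transmission
    to the user device (104, FIG. 1)."""
    return json.dumps({
        "batch_id": batch_id,                              # hypothetical field
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "grade_mix_percent": mix_percent,                  # category -> % of batch
        "average_weight_g": avg_weight_g,                  # batch average weight
    })

packet = build_grade_packet(
    "Batch-17",
    {"Grade 1": 40.0, "Grade 2": 35.0, "Grade 3": 25.0},
    212.4,
)
```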
[0069] In another embodiment, the classification module (335) is configured to sort the plurality of fruits recognized into the plurality of weight-based categories based on the determined weight. The weight of each fruit is determined by processing its estimated size and volume in conjunction with the region-specific density profile, and each of the plurality of fruits is then sorted into a corresponding weight-based category defined by predetermined threshold ranges.
[0070] An example of sorting the plurality of fruits includes, but is not limited to, assigning fruits into categories such as “Grade Premium” for fruits weighing above 200 grams, “Grade Standard” for those between 150 grams and 200 grams, and “Grade Process” for fruits weighing below 150 grams.
[0071] The system may further include engine(s). The engine(s) may be implemented as a combination of hardware and programming, for example, programmable instructions to implement a variety of functionalities of the engine(s). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the engine(s) may be executable instructions. Such instructions may be stored on a non-transitory machine-readable storage medium which may be coupled either directly with the system or indirectly (for example, through networked means). In an example, the engine(s) may include a processing resource, for example, either a single processor or a combination of multiple processors, to execute such instructions. In the present examples, the non-transitory machine-readable storage medium may store instructions that, when executed by the processing resource, implement engine(s). In other examples, the engine(s) may be implemented as electronic circuitry.
[0072] The engine(s) includes a feedback engine (345) and other engine(s) (345). The system (100) is adapted to cause the processor (302) to provide real-time feedback to a user (118, FIG. 1) operating a user device (104, FIG. 1) regarding the grade mix estimate. The grade mix estimate corresponds to the distribution of the plurality of fruits recognized across a plurality of weight-based categories, as determined by the classification module in conjunction with the image analysis module. Real-time feedback refers to the immediate or near-immediate transmission of this grade mix estimate to the user (118, FIG. 1), enabling the user (118, FIG. 1) to assess the grading output dynamically during or shortly after the fruit scanning and analysis process. The feedback may be delivered through a user interface (330) integrated into a mobile device, tablet, display unit, or control terminal that is communicatively coupled to the processor (302).
[0073] An example of real-time feedback includes, but is not limited to, a visual display on a handheld tablet that shows the proportion of fruits categorized into grades such as “premium,” “standard,” and “substandard” based on predefined weight thresholds. The user (118, FIG. 1) may be a farm operator, quality inspector, or system administrator responsible for monitoring grading accuracy and making informed decisions accordingly.
[0074] The other engine(s) may further implement functionalities that supplement functions performed by the system or any of the engine(s). Further, the system includes data. The data may include data that is either stored or generated as a result of functions implemented by any of the engine(s) or the system. It may be further noted that information stored and available in data may be utilized by the engine(s) for performing various functions by the system. In an example, data may include the depth data, the three-dimensional data, the plurality of fruits data. It may be noted that such examples of the various functions are only indicative. The present approaches may be applicable to other examples without deviating from the scope of the present subject matter.
[0075] In the present examples, the non-transitory machine-readable storage medium may store instructions that, when executed by the processing resource, implement the functionalities of the module(s). In such examples, the system (100) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions. In other examples of the present subject matter, the machine-readable storage medium may be located at a different location but accessible to the system (100) and the processor(s) (302).
[0076] The user interface (330) is configured to display the grade mix estimate corresponding to a distribution of the plurality of fruits recognized across the plurality of weight-based categories.
[0077] The grade mix estimate represents a categorized distribution of the plurality of fruits, segmented based on weight calculations performed by the image analysis module (320) and the classification module (335).
[0078] An example of a user interface (330) includes, but is not limited to, mobile application interfaces for smartphones, tablets, and so on.
[0079] In another embodiment, the user interface (330) is configured to generate a plurality of reports based on the grade mix estimate, wherein the plurality of reports comprises insights corresponding to the plurality of fruits’ quality and yield of a harvest in the farming environment. The user interface (330) receives processed grading data from the classification module (335) and organizes the data into structured formats for end-user interpretation and decision-making.
[0080] Consider a non-limiting example wherein a user “Y”, a farm supervisor, enters the orchard in the morning to initiate the automated fruit grading system (100) via a handheld user device (104, FIG. 1). A depth camera (115, FIG. 1) mounted on a farm vehicle begins capturing a plurality of farm videos in real-time under varying lighting conditions. Ambient lighting is moderately diffused due to cloud cover. The image analysis module (320) receives the depth data and begins generating a three-dimensional map of the orchard scene, identifying a cluster of ripe mangoes on one side of the tree canopy. The image analysis module (320) distinguishes the fruits from the background elements like leaves, stems, and sky, using trained segmentation algorithms. It further identifies the fruits by analysing visual features such as shape, texture, and surface reflectance from the 3D map. The artificial intelligence model (310) recognizes the mangoes and classifies them as “Alphonso variety,” associating them with a pre-labelled dataset and known harvesting patterns in that region.
[0081] The image analysis module (320) then estimates the size and volume of each mango and applies a pre-measured density profile for the geographical region (Konkan belt) to determine weight with high accuracy. The classification module (335) categorizes the fruits into multiple weight-based categories and generates a grade mix estimate. This grade mix estimate is transmitted to user “Y” via the user interface module (330), which displays a summary: “Batch #17: 40% Premium (>300g), 35% Medium (200–300g), 25% Small (<200g)”. The user interface module (330) also provides real-time insights such as estimated yield per tree and potential revenue based on market prices. Simultaneously, a query is entered by user “Y”: “What’s the ideal harvest time for this batch?”. The processor (302) sends the query to an expert system and responds: “Based on size progression and forecasted humidity, ideal harvest is in 3 days.” User “Y” approves the batch for selective harvesting and downloads a report for logistics planning, which is automatically stored in the farm’s cloud record system linked to the user profile.
[0082] FIG. 4 is a flow chart representing the steps involved in a method for grading fruits using three-dimensional mapping in accordance with an embodiment of the present disclosure. The method (400) includes capturing, by a depth camera, a plurality of farm videos in real-time under varying lighting conditions to generate a depth data, wherein the depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use in step 405.
[0083] The method (400) includes generating, by an image analysis module of a processor, a three-dimensional map of a scene based on the depth data in step 410.
[0084] The method (400) includes distinguishing, by the image analysis module of the processor, the plurality of fruits from a corresponding background pertaining to the scene in step 415.
[0085] The method (400) includes identifying, by the image analysis module of the processor, a plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map in step 420.
[0086] The method (400) includes recognizing, by the image analysis module of the processor, the plurality of fruits based on an artificial intelligence model, wherein the artificial intelligence model is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm in step 425.
[0087] The method (400) includes estimating, by the image analysis module of the processor, a size and a volume of each identified fruit based on the depth data in step 430.
[0088] The method (400) includes determining, by the image analysis module of the processor, a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm in step 435.
[0089] The method (400) includes generating, by a classification module of the processor, a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories in step 440.
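The method steps above can be composed into a single data flow; the following sketch uses trivial placeholder stages (assumptions standing in for the disclosed modules) to show how depth data progresses to a grade list:

```python
import numpy as np

# Placeholder stages for method 400; real implementations would wrap the
# depth camera, the AI model, and the classification module described above.
def build_3d_map(depth_data):                   # step 410
    return np.asarray(depth_data, dtype=float)

def segment_fruits(scene_map):                  # steps 415-420
    # Trivially treat the whole scene as one fruit's point cloud.
    return [scene_map]

def estimate_volume_cm3(points):                # step 430: ellipsoid model
    semi = (points.max(axis=0) - points.min(axis=0)) / 2.0
    return (4.0 / 3.0) * np.pi * float(np.prod(semi))

def grading_pipeline(depth_data, density_g_cm3):
    """Sketch of steps 410-440 for one frame of depth data."""
    scene = build_3d_map(depth_data)
    weights = [estimate_volume_cm3(f) * density_g_cm3      # step 435
               for f in segment_fruits(scene)]
    return ["Grade 1" if w > 250 else "Grade 2" if w >= 150 else "Grade 3"
            for w in weights]                              # step 440

# A fruit spanning a 10x10x10 cm box at unit density exceeds 250 g.
grades = grading_pipeline([[0, 0, 0], [10, 10, 10]], 1.0)
```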
[0090] Thus, various embodiments of the system and method for grading fruits using three-dimensional mapping provide several benefits. The depth camera (115) enables robust real-time depth capture under varying lighting conditions, ensuring accurate spatial data acquisition in diverse farming environments. The image analysis module (320) enhances object segmentation and identification by generating three-dimensional maps and applying advanced visual feature analysis, allowing reliable fruit detection even in cluttered or occluded scenes. The image analysis module (320) also facilitates precise estimation of size, volume, and weight through integration with regional density profiles, increasing grading accuracy.
Claims:1. A system for grading fruits using three-dimensional mapping, comprising:
a depth camera configured to capture a plurality of farm videos in real-time under varying lighting conditions to generate a depth data,
wherein the depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use;
a processor; and
a memory coupled to the processor, wherein the memory comprises instructions that, when executed by the processor, cause the processor to:
generate a three-dimensional map of a scene based on the depth data;
distinguish the plurality of fruits from a corresponding background pertaining to the scene;
identify a plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map;
recognize the plurality of fruits based on an artificial intelligence module, wherein the artificial intelligence module is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm;
estimate a size and a volume of each identified fruit based on the depth data;
determine a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm; and
generate a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories.
2. The system as claimed in claim 1, wherein the instructions further cause the processor to:
store the depth data, the three-dimensional map, and the plurality of fruits recognized in a database.

3. The system as claimed in claim 1, wherein the instructions further cause the processor to:
display the grade mix estimate corresponding to a distribution of the plurality of fruits recognized across the plurality of weight-based categories.

4. The system as claimed in claim 1, wherein the instructions further cause the processor to:
provide real-time feedback to a user operating a user device regarding the grade mix estimate.

5. The system as claimed in claim 1, wherein the instructions further cause the processor to:
apply a pre-measured density profile corresponding to a geographical region, wherein the pre-measured density profile is utilized to correlate the estimated size and the volume of each of the plurality of fruits recognized with a corresponding weight.

6. The system as claimed in claim 1, wherein the instructions further cause the processor to:
transmit the grade mix estimate corresponding to the plurality of fruits recognized to the user.

7. The system as claimed in claim 1, wherein the instructions further cause the processor to:
generate a plurality of reports based on the grade mix estimate, wherein the plurality of reports comprises insights corresponding to the plurality of fruits' quality and yield of a harvest in the farming environment.

8. The system as claimed in claim 1, wherein the instructions further cause the processor to:
sort the plurality of fruits recognized into the plurality of weight-based categories based on the weight determined.

9. A method for grading fruits using three-dimensional mapping, comprising:
capturing, by a depth camera, a plurality of farm videos in real-time under varying lighting conditions to generate a depth data, wherein the depth camera is configured to be removably attached via a dedicated attachment mechanism, the attachment mechanism being physically separate from the depth camera and adapted to maintain the depth camera in a fixed orientation during use;
generating, by an image analysis module of a processor, a three-dimensional map of a scene based on the depth data;
distinguishing, by the image analysis module of the processor, the plurality of fruits from a corresponding background pertaining to the scene;
identifying, by the image analysis module of the processor, a plurality of fruits within the scene by analysing multiple visual features from the three-dimensional map;
recognizing, by the image analysis module of the processor, the plurality of fruits based on an artificial intelligence model, wherein the artificial intelligence model is trained on a labelled dataset of fruit images to recognize the plurality of fruits under diverse environmental conditions in the farm;
estimating, by the image analysis module of the processor, a size and a volume of each identified fruit based on the depth data;
determining, by the image analysis module of the processor, a weight of each identified fruit by correlating the estimated size and the volume with a region-specific density profile that accounts for variations in fruit density specific to the geographic region of the farm; and
generating, by a classification module of the processor, a grade mix estimate corresponding to the distribution of the plurality of fruits across a plurality of weight-based categories.

Dated this 07th day of August 2025
Signature

Manish Kumar
Patent Agent (IN/PA-5059)
Agent for the Applicant
