Abstract: A system (100) and a method for optimized identification and measurement of characteristics of food products are provided. The present invention provides for analyzing an image of a food product and a marker received from an image acquisition device (110) based on one or more pre-determined parameters and performing pre-determined modifications in the image of the food product based on the marker. Further, a three-level filtering of the food product image is performed. Physical characteristics of the processed food product image are extracted based on trained Machine Learning (ML) models. The physical characteristics of the food product are compared with physical characteristics of a standard food product for determining similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics to obtain a comparison result. Lastly, the comparison result is compared with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
FORM 2
THE PATENTS ACT 1970
(39 OF 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION:
“SYSTEM AND METHOD FOR OPTIMIZED IDENTIFICATION AND MEASUREMENT OF CHARACTERISTICS OF FOOD PRODUCTS”
2. APPLICANT:
(a) NAME: Retranz Infolabs Pvt. Ltd.
(b) NATIONALITY: Indian
(c) ADDRESS: Godown D1, Indian Corporation Compound,
Location Diva - Anjur Village, Bhiwandi-421302, Maharashtra, India
3. PREAMBLE TO THE DESCRIPTION
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
Field of the invention
[0001] The present invention relates generally to the field of grain recognition, and more particularly to a system and a method for optimized identification and measurement of physical characteristics of food products.
Background of the invention
[0002] Millers, Farmers Producers Organizations (FPO), and traders, typically, face challenges in accurately identifying and measuring physical characteristics of food products for determining type, quality, and segregation of the food products. Largely, manual techniques are employed, such as, a screw gauge or a vernier caliper for measuring physical characteristics of food products, which are cumbersome, inefficient, and render inaccurate results. Also, conventionally, lab-based grain measurement techniques are used, which apart from being laborious and time-consuming, are expensive and results obtained are not reliable.
[0003] Also, some of the existing techniques provide inappropriate results if the food products are produced on a large scale. These techniques are not scalable, not portable, are not able to analyze all types of food products, are subject to human error and do not produce consistent results. Further, in existing techniques, use of high-resolution industrial cameras or flat-bed scanners for taking images of food products is expensive and, moreover, image processing is very slow with inconsistent and inaccurate results.
[0004] In light of the aforementioned drawbacks, there is a need for a system and a method which provides for optimized identification and measurement of physical characteristics of food products. There is a need for a system and a method which provides for identifying and measuring physical characteristics of all food products in an accurate and consistent manner without any human error. Also, there is a need for a system and a method which provides for identifying and measuring physical characteristics of different types of food products in an easily accessible and scalable manner. Yet further, there is a need for a system and a method which provides for a quick, efficient, and cost-effective identification and measurement of physical characteristics of food products.
Summary of the invention
[0005] In various embodiments of the present invention, a system for optimized identification and measurement of characteristics of food products is provided. The system comprises a memory storing program instructions, a processor executing instructions stored in the memory and a food product recognition engine executed by the processor. The food product recognition engine is configured to analyze an image of a food product and a marker received from an image acquisition device based on one or more pre-determined parameters including analyzing the image based on determination of a shape, color and features of the marker and perform pre-determined modifications in the image of the food product based on the marker. The food product recognition engine is configured to perform a three-level filtering of the food product image based on area, size limit and overlap of the food product present in the image. Further, the food product recognition engine is configured to extract one or more physical characteristics of the processed food product image based on trained Machine Learning (ML) models. The ML models are trained to compare the extracted physical characteristics with pre-determined physical characteristics to determine physical characteristics of the processed food product present in the image accurately. Further, the food product recognition engine is configured to compare the determined physical characteristics of the food product with physical characteristics of a standard food product for determining similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics to obtain a comparison result. Lastly, the food product recognition engine is configured to compare the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
[0006] In various embodiments of the present invention, a method for optimized identification and measurement of characteristics of food products is provided. The method is implemented by a memory
storing program instructions and a processor executing instructions stored in the memory. The method comprises analyzing an image of a food product and a marker received from an image acquisition device based on one or more pre-determined parameters including analyzing the image based on determination of a shape, color and features of the marker and performing pre-determined modifications in the image of the food product based on the marker. Further, the method comprises performing a three-level filtering of the food product image based on area, size limit and overlap of the food product present in the image. Further, the method comprises extracting one or more physical characteristics of the processed food product image based on trained Machine Learning (ML) models. The ML models are trained to compare the extracted physical characteristics with pre-determined physical characteristics to determine physical characteristics of the processed food product present in the image accurately. Further, the method comprises comparing the determined physical characteristics of the food product with physical characteristics of a standard food product for determining similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics to obtain a comparison result. Lastly, the method comprises comparing the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
Brief description of the accompanying drawings
[0007] The present invention is described by way of embodiments illustrated in the accompanying drawings wherein:
[0008] Fig. 1 is a detailed block diagram of a system for optimized identification and measurement of physical characteristics of food products, in accordance with an embodiment of the present invention;
[0009] Fig. 2 illustrates a screenshot of a Graphical User Interface (GUI) rendered on a user device for determining physical characteristics of food products, in accordance with an embodiment of the present invention;
[0010] Fig. 3 illustrates a screenshot of the GUI depicting different categories of food products for which physical characteristics are to be determined, in accordance with an embodiment of the present invention;
[0011] Fig. 4 illustrates a marker which is used as a reference object and Fig. 4A illustrates use of the marker as the reference object for capturing images of food products, in accordance with an embodiment of the present invention;
[0012] Fig. 4B illustrates a screenshot of the GUI depicting a captured image of a food product along with the marker, in accordance with an embodiment of the present invention;
[0013] Fig. 5 illustrates a screenshot of the GUI depicting blur removal from a food product image, in accordance with an embodiment of the present invention;
[0014] Fig. 6 illustrates a screenshot of the GUI depicting image thresholding of a food product image, in accordance with an embodiment of the present invention;
[0015] Figs. 7-7B illustrate a screenshot of the GUI depicting grain contour determination of a food product, in accordance with an embodiment of the present invention;
[0016] Figs. 8A-8B illustrate a screenshot of the GUI depicting extracted physical characteristics of grains, in accordance with an embodiment of the present invention;
[0017] Figs. 9A-9B illustrate a screenshot of the GUI depicting grain count determination, in accordance with an embodiment of the present invention;
[0018] Figs. 10A-10B illustrate a screenshot of the GUI depicting classification of different rice types, in accordance with an embodiment of the present invention;
[0019] Figs. 11A-11B illustrate a screenshot of the GUI depicting classification of different pulse types, in accordance with an embodiment of the present invention;
[0020] Fig. 12 illustrates a screenshot of the GUI depicting classification of W-320 grade of cashews, in accordance with an embodiment of the present invention;
[0021] Fig. 13 illustrates a screenshot of the GUI depicting computed HuMoments values for different cashew kernels and computed grade of the cashew kernels, in accordance with an embodiment of the present invention;
[0022] Figs. 14A-14C illustrate a screenshot of the GUI depicting a summary view of computed dimension types of grains, in accordance with an embodiment of the present invention;
[0023] Figs. 15 and 15A are a flowchart illustrating a method for optimized identification and measurement of physical characteristics of food products, in accordance with an embodiment of the present invention; and
[0024] Fig. 16 illustrates an exemplary computer system in which various embodiments of the present invention may be implemented.
Detailed description of the invention
[0025] The present invention discloses a system and a method for optimized identification and measurement of physical characteristics of food products. The present invention provides for a system and a method for efficient, accurate and cost-effective identification and measurement of physical characteristics of all food products such as grains, seeds, and spices. Further, the present invention provides for a system and a method for identification and measurement of physical characteristics of different types of food products in a consistent manner without any human intervention. Furthermore, the present invention provides for a system and a method for identification and measurement of physical characteristics of food products in an easily accessible and scalable manner. Yet further, the present invention provides for a system and a method for effectively determining quality of food products from a sample image of the food products.
[0026] The disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Exemplary embodiments herein are provided only for illustrative purposes and various modifications will be readily apparent to persons skilled in the art. The general principles defined herein may be applied to other embodiments and applications without departing from the scope of the invention. The terminology and phraseology used herein is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications, and equivalents consistent with the principles and features disclosed herein. For purposes of clarity, details relating to technical material that is known in the technical fields related to the invention have been briefly described or omitted so as not to unnecessarily obscure the present invention.
[0027] The present invention would now be discussed in context of embodiments as illustrated in the accompanying drawings.
[0028] Fig. 1 is a detailed block diagram of a system 100 for optimized identification and measurement of physical characteristics of food products, in accordance with various embodiments of the present invention.
[0029] Referring to Fig. 1, in an embodiment of the present invention, the system 100 comprises a food product recognition subsystem 102 (subsystem 102) and an image acquisition device 110. The image acquisition device 110 is connected to the subsystem 102 via a communication channel (not shown). The communication channel (not shown) may include, but is not limited to, a physical transmission medium, such as, a wire, or a logical connection over a multiplexed medium, such as, a radio channel in telecommunications and computer networking. Examples of a radio channel in telecommunications and computer networking may include, but are not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN) and a Wide Area Network (WAN).
[0030] In an embodiment of the present invention, the subsystem 102 is configured with a built-in-mechanism for carrying out optimized identification and measurement of physical characteristics of food products. The subsystem 102 is a self-optimizing and an intelligent system. The subsystem 102 provides accurate and precise measurement of physical characteristics of different types of food products based on a combination of calibration techniques, computer vision techniques (e.g., Representational State Transfer Application Programming Interfaces (REST APIs)), image processing techniques and machine learning techniques. Further, the subsystem 102 is configured to apply pre-processing and feature extraction techniques that enable accurate processing of different types of food products. Further, the subsystem 102 is configured to extract specific physical characteristics of food products and carry out statistical analysis for grading the food products.
[0031] In another embodiment of the present invention, the subsystem 102 may be implemented in a cloud computing architecture (e.g., Azure cloud) in which data, applications, services, and other resources are stored and delivered through shared datacenters. In an exemplary embodiment of the present invention, the functionalities of the subsystem 102 are delivered to a user as Software as a Service (SaaS) over a communication network.
[0032] In another embodiment of the present invention, the subsystem 102 may be implemented as a client-server architecture. In this embodiment of the present invention, a client terminal accesses a server hosting the subsystem 102 over a communication network. The client terminals may include, but are not limited to, a computer, a tablet, or any other wired or wireless terminal. The server may be a centralized or a decentralized server.
[0033] In an embodiment of the present invention, the subsystem 102 comprises a food product recognition engine 104 (engine 104), a processor 106 and a memory 108. The engine 104 executes a combination of computer vision techniques, image processing techniques and machine learning techniques for optimally identifying and measuring physical characteristics of food products. The engine 104 has multiple units, which work in conjunction with each other, and the units are operated via the processor 106 specifically programmed to execute instructions stored in the memory 108 for executing respective functionalities of the units of the engine 104, in accordance with various embodiments of the present invention.
[0034] In an embodiment of the present invention, the engine 104 comprises an image processing unit 112, a first characteristics extraction unit 114, a database 116, a comparison unit 118, a second characteristics extraction unit 120, a comparison result processing unit 122, a food product characteristics computation unit 124, a threshold comparison unit 126, and a feedback unit 128. The engine 104 communicates with a server 130 over the communication channel (not shown).
[0035] In operation, in an embodiment of the present invention, the image acquisition device 110 is configured to capture images of food products. Examples of food products include rice, wheat, pulses, dry fruits (e.g., cashews, almonds, etc.), spices, sugar, etc. The food products are collected and evenly spread on a background of a flat surface such that the food products do not form clusters or touch each other. Further, examples of physical characteristics associated with the food products may include, but are not limited to, length, width, color, and impurity. Examples of the background may include, but are not limited to, a sheet of paper or a sheet of cloth that is non-glossy and non-reflective. The background may be of black or white color having a smooth texture with visible textures or patterns of less than 1mm size. In an embodiment of the present invention, a reference object of a pre-defined size is placed along with
the food products on the background for capturing images of food products. In an exemplary embodiment of the present invention, the reference object is made up of a non-glossy material for accurate reproduction of reference colors and very low color loss/fading. In an exemplary embodiment of the present invention, the reference object has a minimum dimensional accuracy of 0.01 mm and a minimum positional accuracy of 0.01 mm. The reference object may include a marker of a pre-defined shape and dimension (e.g., a rectangular reference object, a circular reference object, an L-shaped reference object, a U-shaped reference object, etc.). The marker may contain pre-defined color references. As illustrated in Fig. 4, the marker has six color references and four ArUco® markers, in accordance with an exemplary embodiment of the present invention. The marker is used to compute camera pose estimation and camera calibration, which aids in camera angle validation. Advantageously, the marker aids in capturing images of food products by placing the image acquisition device 110 at different heights, as illustrated in Fig. 4A. A known size of the marker is used to calibrate distance between the image acquisition device 110 and the marker. The color of the marker is used to accurately measure color of the food products by adjusting for a variation in lighting conditions.
[0036] In an embodiment of the present invention, the image acquisition device 110 is associated with or installed in a user device (not shown). Examples of the user device include, but are not limited to, electronic devices such as a smartphone, a tablet, and a laptop. The image acquisition device 110 is in communication with the server 130 via the communication channel (not shown). In an exemplary embodiment of the present invention, the server 130 is an Amazon Web Services® Elastic Compute Cloud (AWS EC2) Instance server. In an embodiment of the present invention, an application is invoked on the image acquisition device 110. Upon invocation of the application, a Graphical User Interface (GUI) is rendered for selecting a category of food products, as illustrated in Fig. 3. After selection of the food product category, the image of the food product and the marker are captured via a camera associated with the image acquisition device 110 or via a camera feature associated with the application. The image is captured in proper lighting conditions and with no formation of shadow on the food products, as illustrated in Fig. 4B. One or more sensors associated with the image acquisition device 110 are used to determine orientation of the image acquisition device 110 for capturing an appropriate image.
[0037] In an exemplary embodiment of the present invention, a top view of the food product spread on the background is captured via the image acquisition device 110. The captured image is received by the image processing unit 112, which is configured to analyze the received image based on one or more pre-determined parameters. Firstly, the image processing unit 112 checks for the presence of an appropriate background on which the food product is spread. Further, in an exemplary embodiment of the present invention, the image processing unit 112 validates whether the camera angle is in a preferred range for capturing the image based on a cloud Application Programming Interface (API). In an embodiment of the present invention, the image is analyzed based on a determination of a shape, color and features of the marker. In the event the actual shape of the marker is rectangular, the camera angle validation is carried out by analyzing the distorted shape of the marker present in the image against the actual rectangular shape of the marker, and based on the analysis a model is generated to compute a camera angle deviation from an ideal camera angle orientation (i.e., the camera’s plane parallel to the plane of the platform containing the food product sample). In the event the actual shape of the marker is circular, the camera angle validation for the captured image is carried out based on the marker’s circularity. In the event the actual shape of the marker is elliptical, the camera angle validation for the captured image is carried out based on a ratio of the major and minor axes of the marker’s shape. In the event the actual shape of the marker is L-shaped or U-shaped, the camera angle validation for the captured image is carried out based on a transformation of the rectangular shape of the marker in the x-axis and y-axis. In an embodiment of the present invention, the captured image is validated based on generic constraints or food product specific constraints and properties of the captured image of the food products.
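By way of a non-limiting illustration, the sketch below shows one way the marker-based camera angle validation described above may be realized using OpenCV's ArUco module (classic API, OpenCV 4.6 or earlier); the marker dictionary, marker side length, and any acceptance threshold are assumptions for illustration and are not taken from this specification.

```python
# Illustrative sketch only; assumes OpenCV <= 4.6 ArUco API and a calibrated
# camera. Dictionary choice and marker length are placeholder assumptions.
import cv2
import numpy as np

def camera_tilt_degrees(image, camera_matrix, dist_coeffs, marker_len_mm=10.0):
    """Estimate how far the camera plane deviates from the platform plane."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict)
    if ids is None:
        raise ValueError("no ArUco markers detected; cannot validate angle")
    # Pose of the first detected marker relative to the camera.
    rvecs, _, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len_mm, camera_matrix, dist_coeffs)
    rotation, _ = cv2.Rodrigues(rvecs[0])
    # For an ideal top-down capture the marker normal points at the camera,
    # so the deviation of the normal's z-component from 1 gives the tilt.
    marker_normal = rotation @ np.array([0.0, 0.0, 1.0])
    return float(np.degrees(np.arccos(abs(marker_normal[2]))))

# A capture would be accepted only when the tilt falls inside the preferred
# range, e.g. below a few degrees (the threshold is an assumption).
```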
[0038] Secondly, the analysis entails carrying out pre-determined modifications in the captured image based on the marker. In an exemplary embodiment of the present invention, the pre-determined modifications include perspective correction and color correction of the image. In this exemplary embodiment, the shape of the marker is used to carry out a perspective correction of the image based on the actual shape of the marker and the shape of the marker in the image. In another exemplary embodiment of the present invention, color correction of the image is carried out using the known colors of the marker and the colors of the marker in the image. For example, a marker with a circular shape comprises four quadrants containing four colors, i.e., blue, green, red, and black. The rectangular shaped marker comprises six colors (e.g., red, yellow, green, blue, black, and grey) and four ArUco® markers (as illustrated in Fig. 4). The L-shaped marker comprises six colors. The U-shaped marker comprises eleven colors. In an embodiment of the present invention, the image processing unit 112 is configured to transform colors of the image of the food product based on the marker. In an exemplary embodiment of the present invention, the image processing unit 112 applies a linear regression technique and a comparative analysis of colors in different color channels of different color spaces to carry out the transformation. The image processing unit 112 determines a closest color using Red, Green, Blue (RGB) values for the food products present in the image. The image processing unit 112, for instance, determines the closest color by a name (e.g., gainsboro). In an embodiment of the present invention, the image processing unit 112 is configured to carry out color correction of the food product image based on a color correction model. In an exemplary embodiment of the present invention, the color correction model is a regression model. The color correction model is generated based on inputs including a source input and a reference input. The source input comprises a mean of pixel level color values for each color area extracted from the food product image after subtracting the ArUco® markers (reference object). The reference input comprises one or more known reference color values for each color area in the marker (reference object). In an exemplary embodiment of the present invention, the reference color values for the marker may be described as [[42.101, 53.378, 28.19], [81.257, -0.638, -0.335], [72.532, -23.709, 57.255], [40.02, 10.41, -45.964], [20.461, -0.079, -0.973], [43.139, -13.095, 21.905]]. The color correction model provides output in a standard RGB color space and further determines a distance between the two inputs (i.e., the source input and the reference input). In an exemplary embodiment of the present invention, the color correction model provides a 3x3 linear transformation on color values and performs color linearization for accurate calibration of the colors in the food product image. The color correction model is trained with sample food product images for carrying out efficient color correction. In an embodiment of the present invention, the trained color correction model is configured to compute a transformation matrix and a loss value. In an embodiment of the present invention, the color of the food product in the image is calibrated by the color correction model based on the transformation matrix. Further, the color correction model computes colors of the food products from the color corrected image for accurate representation of colors of the food products.
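As a hedged, non-limiting sketch of the color correction model described above, the code below fits a 3x3 linear transformation by least squares from measured marker patch colors to their known reference values and reports a squared-error loss; the fitting procedure and the clipping are illustrative assumptions rather than the exact patented model.

```python
# Minimal sketch: fit a 3x3 color transform (least squares) and apply it.
import numpy as np

def fit_color_correction(measured_rgb, reference_rgb):
    """measured_rgb/reference_rgb: (n_patches, 3) arrays of patch means."""
    measured = np.asarray(measured_rgb, dtype=float)
    reference = np.asarray(reference_rgb, dtype=float)
    # Solve measured @ M ~= reference for the 3x3 transformation matrix M.
    matrix, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    loss = float(np.sum((measured @ matrix - reference) ** 2))
    return matrix, loss

def apply_color_correction(image_rgb, matrix):
    """Apply the fitted transformation to every pixel of an 8-bit image."""
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3).astype(float)
    corrected = np.clip(flat @ matrix, 0, 255).astype(np.uint8)
    return corrected.reshape(h, w, 3)
```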
[0039] In an embodiment of the present invention, after the transformation, the image processing unit 112 determines color of the food product present in the captured image. In an exemplary embodiment of the present invention, the image processing unit 112 determines if the food product is translucent (e.g., sugar) or light colored or dark colored (e.g., pepper, urad dal, etc.). In an exemplary embodiment of the present invention, color of the translucent food product is determined by comparing the colors measured at a grain cluster level and at a separate grain level based on a histogram matching technique. In another embodiment of the present invention, color of the translucent product is determined at a cluster level and size is measured at a grain level. In yet another exemplary embodiment of the present invention, the dark color of the food product is determined by a white background validation, shadow removal and mask inversion.
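The following non-limiting sketch illustrates one plausible histogram matching comparison between cluster-level and grain-level color measurements for translucent products, as described above; the bin count and the correlation metric are assumptions.

```python
# Illustrative sketch: compare color histograms of a grain-cluster patch and
# a single-grain patch. Bin count and similarity metric are assumptions.
import cv2

def histogram_similarity(cluster_patch_bgr, grain_patch_bgr, bins=32):
    hists = []
    for patch in (cluster_patch_bgr, grain_patch_bgr):
        hist = cv2.calcHist([patch], [0, 1, 2], None,
                            [bins, bins, bins], [0, 256] * 3)
        cv2.normalize(hist, hist)
        hists.append(hist)
    # Correlation near 1.0 means cluster- and grain-level colors agree.
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
```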
[0040] In an embodiment of the present invention, the captured image of the food product is pre-processed by the image processing unit 112 by applying one or more image pre-processing techniques. In an exemplary embodiment of the present invention, the pre-processing technique includes resizing of the image of the food product by applying smart dynamic cropping and gray scaling. In another exemplary embodiment of the present invention, the pre-processing technique includes removing blur (e.g., Gaussian blur) from the image (as illustrated in Fig. 5) by applying noise filtration (e.g., bilateral filtering) to the image, applying gamma correction to the image, and applying image thresholding (e.g., Otsu’s thresholding, adaptive thresholding, etc.) to the image (as illustrated in Fig. 6). In yet another exemplary embodiment of the present invention, the pre-processing technique includes carrying out edge detection (e.g., canny edge detection) of the image.
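A condensed, non-limiting sketch of the pre-processing chain named above (noise filtration, gamma correction, thresholding, and edge detection) is shown below; all parameter values are illustrative assumptions.

```python
# Illustrative pre-processing sketch; parameter values are assumptions.
import cv2
import numpy as np

def preprocess(image_bgr, gamma=1.2):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Bilateral filtering removes noise while preserving grain edges.
    denoised = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    # Gamma correction applied through a lookup table.
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255
                      for i in range(256)]).astype(np.uint8)
    corrected = cv2.LUT(denoised, table)
    # Otsu's thresholding separates grains from the background.
    _, mask = cv2.threshold(corrected, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(corrected, 50, 150)
    return mask, edges
```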
[0041] In an embodiment of the present invention, after carrying out pre-processing of the image, the image processing unit 112 is configured to carry out three levels of filtering on the food product image. Firstly, the image processing unit 112 carries out area filtering of the food product present in the image by using the area property, which removes dust and noise from the background of the image. Secondly, the image processing unit 112 carries out a size limit filtering based on a size property of the food product. In an exemplary embodiment of the present invention, size property computation includes computing a maximum length and a maximum width of the food product present in the image. The size limit of the food product is computed from a large number of samples of each category of the food products. The size limit may include a small, a medium, and a large food product size. For instance, a small food product size includes food products of less than 5mm size limit. Examples of a 5mm size limit food product include millets, mustard seeds, etc. For the small food product size, processing is carried out by applying a canny edge mask technique and a bitwise AND operation. Medium food product size includes food products in a range of, for instance, between 5mm and 12mm. Examples of food products with a size limit between 5mm and 12mm include rice, wheat, pulses, etc. The size limit of the medium food product size is determined by applying a canny edge mask technique and a bitwise AND operation. Large food product size includes sizes greater than, for instance, 12mm. Examples of large food product sizes include almonds, cashews, etc. The size limit of the large food product size is determined by applying an image thresholding technique to the hue, saturation, and value channels, combining them with bitwise operations, and further carrying out grain level thresholding. Thirdly, the image processing unit 112 carries out an overlapping filtering of the food product present in the image.
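By way of a non-limiting sketch, the three levels of filtering described above may be approximated on detected contours as follows; the mm-per-pixel scale, the area floor, the size band, and the convexity-based overlap test are illustrative assumptions.

```python
# Illustrative sketch of area, size-limit, and overlap filtering on contours.
import cv2

def filter_contours(contours, mm_per_px, min_area_mm2=0.5,
                    size_band_mm=(5.0, 12.0)):
    kept = []
    for contour in contours:
        # Level 1: area filtering removes dust and noise specks.
        area_mm2 = cv2.contourArea(contour) * mm_per_px ** 2
        if area_mm2 < min_area_mm2:
            continue
        # Level 2: size-limit filtering against the category's size band,
        # using the rotated bounding box for maximum length.
        (_, _), (w_px, h_px), _ = cv2.minAreaRect(contour)
        length_mm = max(w_px, h_px) * mm_per_px
        if not size_band_mm[0] <= length_mm <= size_band_mm[1]:
            continue
        # Level 3: overlap filtering: markedly non-convex blobs are treated
        # as touching/overlapping grains and dropped (an assumption).
        hull_area = cv2.contourArea(cv2.convexHull(contour))
        if hull_area > 0 and cv2.contourArea(contour) / hull_area < 0.9:
            continue
        kept.append(contour)
    return kept
```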
[0042] In an embodiment of the present invention, the pre-processed image is processed by the image processing unit 112 in a pre-defined manner for carrying out a feature extraction process. The feature extraction process is carried out firstly by optimizing attributes, such as, contour or perimeter of the food product for a detailed analysis (e.g., count for cashews, pulses, wheat), as illustrated in Figs. 7-7B. Secondly, the feature extraction process is carried out by computing a pixel area of each food product in the image. Thirdly, the feature extraction process is carried out by converting the pixel area of each food product present in the image to a physical area in square millimeters (mm²). The feature extraction process identifies the food product as slender, medium, or bold based on an aspect ratio.
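The sketch below illustrates, under assumptions, the feature extraction step described above: contour and perimeter attributes, pixel area converted to mm² via a marker-derived scale, and the slender/medium/bold identification by aspect ratio (the cut-off values are assumptions, since such limits vary by commodity).

```python
# Illustrative sketch; aspect-ratio cut-offs are placeholder assumptions.
import cv2

def grain_features(contour, mm_per_px):
    area_mm2 = cv2.contourArea(contour) * mm_per_px ** 2
    perimeter_mm = cv2.arcLength(contour, True) * mm_per_px
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(contour)
    length_mm = max(w_px, h_px) * mm_per_px
    width_mm = min(w_px, h_px) * mm_per_px
    ratio = length_mm / width_mm if width_mm else 0.0
    # Assumed cut-offs: slender above 3.0, bold below 2.0, medium otherwise.
    shape = "slender" if ratio > 3.0 else "bold" if ratio < 2.0 else "medium"
    return {"area_mm2": area_mm2, "perimeter_mm": perimeter_mm,
            "length_mm": length_mm, "width_mm": width_mm,
            "aspect_ratio": ratio, "shape": shape}
```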
[0043] In an embodiment of the present invention, the first characteristics extraction unit 114 is configured to receive the processed image and extract one or more physical characteristics of the food product from the image by applying computer vision and classification techniques. The computer vision and classification techniques are applied along with Machine Learning (ML) models for effectively identifying physical characteristics of the food product. The first characteristics extraction unit 114 is configured with ML models, which are trained over a period of time with a large number of different types of images of food products for accurately extracting physical characteristics of the food product. In an exemplary embodiment of the present invention, the ML models employ a label encoder technique and a random forest classifier technique for extracting physical characteristics of the food product. The ML models are trained for extracting physical characteristics of the food product such as, length, width, color (e.g., in a hex value format), perimeter, area, shape, circularity of the food product, impurities, and type of the food product, as illustrated in Figs. 8A-8B. The ML models compare the extracted physical characteristics of the food product with pre-determined physical characteristics of the food product associated with the trained ML models and accurately determine physical characteristics of the food product present in the image. The comparison is carried out by optimizing color detection algorithms to use a customized color palette as a reference. In an exemplary embodiment of the present invention, the marker that is used for color comparison is optimized to 40mm x 25mm with six colors. In an embodiment of the present invention, the first characteristics extraction unit 114 is configured to apply a cloud-based Flask API for processing multiple images for the same samples of the food products.
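A minimal, non-limiting sketch of the label encoder and random forest classifier combination named above is given below; the feature columns and hyper-parameters are illustrative assumptions.

```python
# Illustrative sketch: encode variety labels, train a random forest, predict.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder

def train_variety_classifier(feature_rows, variety_names):
    """feature_rows: per-grain vectors such as [length, width, area,
    perimeter, circularity]; variety_names: matching label strings."""
    encoder = LabelEncoder()
    labels = encoder.fit_transform(variety_names)
    classifier = RandomForestClassifier(n_estimators=200, random_state=42)
    classifier.fit(feature_rows, labels)
    return classifier, encoder

def predict_variety(classifier, encoder, feature_row):
    return encoder.inverse_transform(classifier.predict([feature_row]))[0]
```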
[0044] In another embodiment of the present invention, the first characteristics extraction unit 114 identifies a count of the food product present in the image, as illustrated in Figs. 9A-9B. In yet another embodiment of the present invention, the first characteristics extraction unit 114 is configured to determine food product specific characteristics of the food product present in the image. In an example, if the food product type is rice, then chalkiness of the rice is determined. Chalkiness of rice is determined based on computing a percentage of the rice grain portion that is chalky. The chalky portion of rice grains is determined by measuring rice grain pixels in the image that are whitish as compared to the rest of the rice grain pixels in the image. Pixels that are concentrated in a region in the image are identified and noise is filtered out from the rice grain image. In another example, the marker is used to determine the color of the chalky portion of rice grains to measure the chalkiness percentage of rice grains accurately. In yet another embodiment of the present invention, the first characteristics extraction unit 114 is configured to determine chalkiness of the rice by applying a median color-based thresholding technique, morphological operations (e.g., erosion, dilation, and bitwise AND operations), and filtering of false positive chalkiness pixels.
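The following non-limiting sketch approximates the chalkiness measurement described above (median color-based thresholding, morphological clean-up, and false-positive filtering); the brightness offset and kernel size are assumptions.

```python
# Illustrative chalkiness sketch; offset and kernel size are assumptions.
import cv2
import numpy as np

def chalkiness_percent(gray_image, grain_mask, offset=30):
    grain_pixels = gray_image[grain_mask > 0]
    if grain_pixels.size == 0:
        return 0.0
    # Pixels markedly whiter than the grain's median are treated as chalky.
    threshold = np.median(grain_pixels) + offset
    chalky = ((gray_image > threshold) & (grain_mask > 0)).astype(np.uint8)
    # Morphological opening (erosion then dilation) removes isolated
    # false-positive chalkiness pixels.
    kernel = np.ones((3, 3), np.uint8)
    chalky = cv2.morphologyEx(chalky, cv2.MORPH_OPEN, kernel)
    return 100.0 * float(chalky.sum()) / float((grain_mask > 0).sum())
```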
[0045] In an embodiment of the present invention, the first characteristics extraction unit 114 applies the ML models to classify the food products by applying exploratory data analysis on the extracted characteristics. For example, the first characteristics extraction unit 114 classifies rice (as illustrated in Figs. 10A-10B) based on its different varieties such as basmati rice, kolam rice, etc., and further classifies the rice as steamed, boiled, or raw. In another example, the first characteristics extraction unit 114 classifies wheat based on its different varieties such as lokwan and sihori. In yet another example, the first characteristics extraction unit 114 classifies different varieties of pulses such as tur dal, masoor dal, chana dal, moong dal, urad dal, vatana, matki, chavali, etc., as illustrated in Figs. 11A and 11B. In another example, the first characteristics extraction unit 114 classifies pulses as whole unpolished (with skin), whole polished (without skin), unpolished split and polished split. In yet another example, the first characteristics extraction unit 114 classifies different varieties of dry fruits such as, almonds, cashews, raisins, pistachios, dates, fox nut, groundnut, etc. In another example, the first characteristics extraction unit 114 classifies different varieties of spices such as cardamom, black pepper, coriander seeds, cumin, mustard seeds, sesame seeds, etc. In an example, the first characteristics extraction unit 114 classifies different varieties of millets such as ragi, bajra, jawar, etc. In yet another example, the first characteristics extraction unit 114 is configured to classify different grades of cashews based on area of the cashew kernels, such as W-180, W-240, W-320, W-400, B, LWP, BB, and SWP, as illustrated in Fig. 12. Advantageously, cashew grading is carried out by a count based on area without any dependence on a weighing scale. In another example, the first characteristics extraction unit 114 is configured to classify different varieties of chana such as vishal, mosambi, kanta, kabuli, etc., and further determines their color, size, count, and impurities. In yet another example, the first characteristics extraction unit 114 is configured to classify different grades of walnut (without shell). In this example, in one instance, grading may relate to four types based on shades of color, i.e., very light, light, dark, and very dark. In another instance, grading may relate to three types based on shape, i.e., quarter (half of 1 wing), half (1 wing), and full (2 wings). In an embodiment of the present invention, the first characteristics extraction unit 114 is further configured to extract moisture percentage from the food product for computing the moisture percentage data of the food products by employing a moisture meter (not shown).
[0046] In an embodiment of the present invention, the first characteristics extraction unit 114 determines quality of the grains by applying the HuMoments technique for certain commodities, e.g., cashews, almonds, etc. For example, as illustrated in Fig. 13, if the food product is cashew, then the HuMoments value for un-broken cashew kernels is computed as 0.21 with a grade value identified as W-320. In another example, as illustrated in Fig. 13, for broken cashew kernels the HuMoments value is computed as 0.19 or 0.17 and the grade value is identified as broken. In another embodiment of the present invention, the first characteristics extraction unit 114 is configured to determine impurities in the food product by employing the ML models. Impurities may include, but are not limited to, damaged grains, discolored grains, broken grains, grains weevilled or damaged by insects, immature grains, and presence of foreign matter.
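As a hedged illustration of the HuMoments technique referenced above, the sketch below computes the first Hu moment of a kernel contour and applies a whole/broken cut-off; the use of the first moment alone and the 0.20 cut-off are assumptions inferred from the example values (0.21 whole versus 0.19/0.17 broken) and are not stated in this specification.

```python
# Illustrative sketch; the single-moment summary and cut-off are assumptions.
import cv2

def kernel_shape_value(contour):
    """Return the first (translation/scale/rotation-invariant) Hu moment."""
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    return float(hu[0])

def grade_cashew(contour, whole_cutoff=0.20):
    value = kernel_shape_value(contour)
    return ("un-broken", value) if value >= whole_cutoff else ("broken", value)
```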
[0047] In an embodiment of the present invention, the first characteristics extraction unit 114 transmits the extracted physical characteristics data associated with the food product image along with the moisture percentage data to the database 116 for storage. In an exemplary embodiment of the present invention, the database 116 operates as a MySQL and Firebase database. In an embodiment of the present invention, the data stored in the database 116 are employed for training the ML model. In an embodiment of the present invention, the second characteristics extraction unit 120 is configured to fetch standard food product physical characteristics along with moisture percentage data computed for the standard food product from the server 130 and process the characteristics for removing any blur and noise. In an embodiment of the present invention, the comparison unit 118 is configured to fetch the stored physical characteristics of the food product extracted from the food product image from the database 116 for comparison with standard food product physical characteristics processed by the second characteristics
extraction unit 120, which aids in efficiently determining the type of food product present in the processed food product image.
[0048] In an embodiment of the present invention, the comparison result processing unit 122 communicates with the comparison unit 118 to generate results of the comparison of the characteristics obtained from the processed food product image along with the moisture percentage data of the food product and the standard food product characteristics. The comparison results generated indicate similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics. In an embodiment of the present invention, the comparison result processing unit 122 is configured to communicate with the food product characteristics computation unit 124 for determining characteristics of the food products.
[0049] In an embodiment of the present invention, the food product characteristics computation unit 124 is configured to communicate with the threshold comparison unit 126 for comparing the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image. In an exemplary embodiment of the present invention, in the event it is determined that the comparison results are within a pre-defined threshold range value, then the food product characteristics computation unit 124 computes one or more parameter types for the food product associated with the processed food product image. The one or more parameter types may include, but are not limited to, computing statistical measurements of the food product, and descriptive measurements of the food product. The computed statistical measurements may include, but are not limited to, mean, median, range, and standard deviation. The computed descriptive measurements may include, but are not limited to, determining shape of the food product (e.g., long slender shape for rice), and determining color of the food product (e.g., “floral white” for color of the grains of rice). In an exemplary embodiment of the present invention, a summary view of computed dimensions of the food product is rendered via the GUI on the image acquisition device 110, as illustrated in Figs. 14A-14C. The summary view provides a generic summary of the food product and a specific summary of the food product (e.g., broken rice percentage). In an exemplary embodiment of the present invention, quality score and result prediction may be computed by the food product characteristics computation unit 124 for the food product based on the formulae as illustrated herein below:
For a given food product and a sample:

Score_property = func(I_property, S_property, C)

where Score_property is the score for the property of the food product (e.g., width of rice), I_property is the computed value of the input sample’s property (e.g., computed mean length), S_property is the standard value of the property from experiments and data analysis (e.g., standard length-width ratio), and C is the correction factor based on property and food product type.

Result_property = func(Score_property)

where Result_property is the accept, warn, or reject result for the property, and Score_property is the score for the property of the food product.
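The exact form of Score_property and Result_property is given in the specification’s formulae; the sketch below is merely one plausible instantiation consistent with the variable definitions above, namely a normalized deviation of the measured value from the standard, scaled by the correction factor, with assumed accept/warn thresholds.

```python
# Hypothetical scoring sketch consistent with the stated variables; the
# normalized-deviation form and the thresholds are assumptions, not the
# patented formulae.
def property_score(i_property, s_property, c=1.0):
    """Score near 1.0 when the measured value matches the standard value."""
    deviation = abs(i_property - s_property) / s_property
    return max(0.0, 1.0 - c * deviation)

def property_result(score, accept_at=0.8, warn_at=0.6):
    if score >= accept_at:
        return "accept"
    return "warn" if score >= warn_at else "reject"
```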
[0050] In another exemplary embodiment of the present invention, in the event it is determined that the comparison results are within the pre-defined threshold range value, the feedback unit 128 determines the food product to be of good quality and renders a good quality status via the GUI of the image acquisition device 110. In an exemplary embodiment of the present invention, an indicia representing saleability of the food product is determined based on the formulae illustrated herein below:

For a given food product and a sample:

Score_sample = func(PW_i, Score_property, n, C_sample, E_images)

where Score_sample is the score for a given sample of a food product, PW_i is the weightage for property i determined from a linear regression model, n is the total number of properties, Score_property is the score for the property of the food product, C_sample is a parameter which is a function of sample size and distribution, and E_images is the error correction based on the average error associated with the images analyzed for the sample.

Result_sample = func(Score_sample, Result_property, correction factor)

where Result_sample is the procurement decision, i.e., the accept, warn, or reject result for the sample, Score_sample is the score for the given sample of the commodity, Result_property is the accept, warn, or reject result for all the properties, and the correction factor is func(Type, Subtype, Product SKU).
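Likewise, the sketch below shows one plausible aggregation consistent with the Score_sample definitions above: a property-weighted average adjusted by the sample parameter C_sample and the image error correction E_images, with a property-level veto; the combination rule and thresholds are assumptions, not the patented formulae.

```python
# Hypothetical sample-level aggregation; the weighting form, veto rule, and
# thresholds are assumptions.
def sample_score(property_scores, property_weights, c_sample=1.0, e_images=0.0):
    """property_scores/property_weights: parallel sequences over n properties."""
    n = len(property_scores)
    weighted = sum(w * s for w, s in zip(property_weights, property_scores))
    return c_sample * (weighted / n) - e_images

def sample_result(score, property_results, accept_at=0.8, warn_at=0.6):
    # Any rejected property vetoes the sample (an assumption).
    if "reject" in property_results:
        return "reject"
    if score >= accept_at and "warn" not in property_results:
        return "accept"
    return "warn" if score >= warn_at else "reject"
```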
[0051] In an embodiment of the present invention, visual feedback data is provided by the feedback unit 128 via the GUI of the image acquisition device 110, including detected contours drawn on the input food product image and intermediate pre-processing of the food product images, such as removal of Gaussian blur. The feedback data aids in tracking quality of the food product tested with a sample during procurement and the quality of the same batch of product during sorting. Further, the feedback data aids in optimizing weightage of various physical characteristics of the food products and fine-tuning the ML model for effective measurement of physical characteristics of the food product and generating accurate results. Further, in an embodiment of the present invention, the feedback data from the feedback unit 128 is processed to provide a descriptive output for decision making, i.e., to accept, warn on, or reject the food product based on one or more index scores (i.e., overall and parameter level scores) computed for the food products.
[0052] Figs. 15 and 15A are a flowchart illustrating a method for optimized identification and measurement of physical characteristics of food products, in accordance with various embodiments of the present invention.
[0053] At step 1502, images of food products are captured, and the captured images are analyzed based on one or more pre-determined parameters. In an embodiment of the present invention, examples of food products include rice, wheat, pulses, dry fruits (e.g., cashews, almonds, etc.), spices, sugar, etc. The food products are collected and evenly spread on a background of a flat surface such that the food products do not form clusters or touch each other. Further, examples of physical characteristics associated with the food products may include, but are not limited to, length, width, color, and impurity. Examples of the background may include, but are not limited to, a sheet of paper or a sheet of cloth that is non-glossy and non-reflective. The background may be of black or white color having a smooth texture with visible textures or patterns of less than 1mm size. In an embodiment of the present invention, a reference object of a pre-defined size is placed along with the food products on the background for capturing images of food products. In an exemplary embodiment of the present invention, the reference object is made up of a non-glossy material for accurate reproduction of reference colors and very low color loss/fading. In an exemplary embodiment of the present invention, the reference object has a minimum dimensional accuracy of 0.01 mm and a minimum positional accuracy of 0.01 mm. The reference object may include a marker of a pre-defined shape and dimension (e.g., a rectangular reference object, a circular reference object, an L-shaped reference object, a U-shaped reference object, etc.). The marker may contain pre-defined color references. The marker has six color references and four ArUco® markers, in accordance with an exemplary embodiment of the present invention. The marker is used to compute camera pose estimation and camera calibration, which aids in camera angle validation. A known size of the marker is used to calibrate distance between an image acquisition device 110 and the marker. The color of the marker is used to accurately measure color of the food products by adjusting for a variation in lighting conditions.
[0054] In an embodiment of the present invention, an application is invoked on the image acquisition device 110. Upon invocation of the application, a Graphical User Interface (GUI) is rendered for selecting a category of food products. After selection of the food product category, the image of the food product and the marker are captured via a camera associated with the image acquisition device 110 or via a camera feature associated with the application. The image is captured in proper lighting conditions and with no formation of shadow on the food products. One or more sensors associated with the image acquisition
device 110 are used to determine orientation of the image acquisition device 110 for capturing an appropriate image.
[0055] In an exemplary embodiment of the present invention, a top view of the food product spread on the background is captured via the image acquisition device 110. The captured image is analyzed based on one or more pre-determined parameters. Firstly, the presence of an appropriate background on which the food product is spread is checked. Further, in an exemplary embodiment of the present invention, a validation is carried out to determine if the camera angle is in a preferred range for capturing the image based on a cloud Application Programming Interface (API). In an embodiment of the present invention, the image is analyzed based on a determination of a shape, color and features of the marker. In the event the actual shape of the marker is rectangular, the camera angle validation is carried out by analyzing the distorted shape of the marker present in the image against the actual rectangular shape of the marker, and based on the analysis a model is generated to compute a camera angle deviation from an ideal camera angle orientation (i.e., the camera’s plane parallel to the plane of the platform containing the food product sample). In the event the actual shape of the marker is circular, the camera angle validation for the captured image is carried out based on the marker’s circularity. In the event the actual shape of the marker is elliptical, the camera angle validation for the captured image is carried out based on a ratio of the major and minor axes of the marker’s shape. In the event the actual shape of the marker is L-shaped or U-shaped, the camera angle validation for the captured image is carried out based on a transformation of the rectangular shape of the marker in the x-axis and y-axis. In an embodiment of the present invention, the captured image is validated based on generic constraints or food product specific constraints and properties of the captured image of the food products.
[0056] Secondly, the analysis entails carrying out pre-determined modifications in the captured image based on the marker. In an exemplary embodiment of the present invention, the pre-determined modifications include perspective correction and color correction of the image. In this exemplary embodiment, the shape of the marker is used to carry out a perspective correction of the image based on the actual shape of the marker and the shape of the marker in the image. In another exemplary embodiment of the present invention, color correction of the image is carried out using the known colors of the marker and the colors of the marker in the image. For example, a marker with a circular shape comprises four quadrants containing four colors, i.e., blue, green, red, and black. The rectangular shaped marker comprises six colors (e.g., red, yellow, green, blue, black, and grey) and four ArUco® markers. The L-shaped marker comprises six colors. The U-shaped marker comprises eleven colors. In an embodiment of the present invention, colors of the image of the food product are transformed based on the marker. In an exemplary embodiment of the present invention, a linear regression technique and a comparative analysis of colors in different color channels of different color spaces are applied to carry out the transformation. A closest color using Red, Green, Blue (RGB) values for the food products present in the image is determined. The closest color is determined by a name (e.g., gainsboro). In an embodiment of the present invention, the color correction of the food product image is carried out based on generation of a color correction model. In an exemplary embodiment of the present invention, the color correction model is a regression model. The color correction model is generated based on a source input and a reference input. The source input comprises a mean of pixel level color values for each color area extracted from the food product image after subtracting the ArUco® markers (reference object). The reference input comprises one or more known reference color values for each color area in the marker (reference object). In an exemplary embodiment of the present invention, the reference color values for the marker may be described as [[42.101, 53.378, 28.19], [81.257, -0.638, -0.335], [72.532, -23.709, 57.255], [40.02, 10.41, -45.964], [20.461, -0.079, -0.973], [43.139, -13.095, 21.905]]. Further, the color correction model provides an output image in a standard RGB color space and further determines a distance between the two inputs (i.e., the source input and the reference input). In an exemplary embodiment of the present invention, the color correction model provides a 3x3 linear transformation on color values and performs color linearization for accurate calibration of the colors in the food product image. The color correction model is trained with sample food product images for carrying out efficient color correction. In an embodiment of the present invention, the trained color correction model is configured to compute a transformation matrix and a loss value. In an embodiment of the present invention, the color of the food product in the image is calibrated by the color correction model based on the transformation matrix.
Further, the color correction model computes colors of the food products from the color corrected image for accurate representation of colors of the food products.
[0057] At step 1504, color of the food product present in the captured image is determined. In an embodiment of the present invention, after the transformation, color of the food product present in the captured image is determined. In an exemplary embodiment of the present invention, it is determined if the food product is translucent (e.g., sugar) or light colored or dark colored (e.g., pepper, urad dal, etc.). In an exemplary embodiment of the present invention, color of the translucent food product is determined by comparing the colors measured at a grain cluster level and at a separate grain level based on a histogram matching technique. In another exemplary embodiment of the present invention, color of the translucent product is determined at a cluster level and size is measured at a grain level. In yet another exemplary embodiment of the present invention, the dark color of the food product is determined by a white background validation, shadow removal and mask inversion.
[0058] At step 1506, the captured image is pre-processed by applying one or more image pre-processing techniques. In an exemplary embodiment of the present invention, the pre-processing technique includes resizing of the image of the food product by applying smart dynamic cropping and gray scaling. In another exemplary embodiment of the present invention, the pre-processing technique includes removing blur (e.g., Gaussian blur) from the image by applying noise filtration (e.g., bilateral filtering) to the image, applying gamma correction to the image, and applying image thresholding (e.g., Otsu’s thresholding, adaptive thresholding, etc.) to the image. In yet another exemplary embodiment of the present invention, the pre-processing technique includes carrying out edge detection (e.g., canny edge detection) of the image.
[0059] In an embodiment of the present invention, after carrying out pre-processing of the image, three levels of filtering are carried out on the food product image. Firstly, area filtering of the food product present in the image is carried out by using the area property, which removes dust and noise from the background of the image. Secondly, a size limit filtering is carried out based on a size property of the food product. In an exemplary embodiment of the present invention, size property computation includes computing a maximum length and a maximum width of the food product present in the image. The size limit of the food product is computed from a large number of samples of each category of the food products. The size limit may include a small, a medium, and a large food product size. For instance, a small food product size includes food products of less than 5mm size limit. Examples of a 5mm size limit food product include millets, mustard seeds, etc. For the small food product size, processing is carried out by applying a canny edge mask technique and a bitwise AND operation. Medium food product size includes food products in a range of, for instance, between 5mm and 12mm. Examples of food products with a size limit between 5mm and 12mm include rice, wheat, pulses, etc. The size limit of the medium food product size is determined by applying a canny edge mask technique and a bitwise AND operation. Large food product size includes sizes greater than, for instance, 12mm. Examples of large food product sizes include almonds, cashews, etc. The size limit of the large food product size is determined by applying an image thresholding technique to the hue, saturation, and value channels, combining them with bitwise operations, and further carrying out grain level thresholding. Thirdly, an overlapping filtering of the food product present in the image is carried out.
[0060] At step 1508, the pre-processed image is processed in a pre-defined manner for carrying out a feature extraction process. In an embodiment of the present invention, the feature extraction process is carried out firstly by optimizing attributes, such as the contour or perimeter of the food product, for a detailed analysis (e.g., count for cashews, pulses, wheat). Secondly, the feature extraction process is carried out by computing a pixel area of each food product in the image. Thirdly, the feature extraction process is carried out by converting the pixel area of each food product present in the image to a physical area in square millimeters (mm²). The feature extraction process identifies the food product as slender, medium, or bold based on an aspect ratio.
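A sketch of the per-grain feature extraction under the same assumptions (mm_per_px comes from the marker calibration; the aspect-ratio cut-offs for slender/medium/bold are illustrative):

```python
import cv2

def grain_features(contour, mm_per_px):
    """Per-grain feature extraction sketch: perimeter, physical area in
    mm^2, and a slender/medium/bold label from the aspect ratio."""
    perimeter_mm = cv2.arcLength(contour, closed=True) * mm_per_px
    area_mm2 = cv2.contourArea(contour) * (mm_per_px ** 2)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    length, width = max(w, h), min(w, h)
    aspect = length / width if width else 0.0
    # Cut-offs are assumptions, loosely following grain-grading practice.
    shape = "slender" if aspect > 3.0 else "medium" if aspect > 2.0 else "bold"
    return {"perimeter_mm": perimeter_mm, "area_mm2": area_mm2,
            "aspect_ratio": aspect, "shape": shape}
```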
[0061] At step 1510, one or more physical characteristics of the food product are extracted from the image by applying computer vision and classification techniques along with Machine Learning (ML) models. The computer vision and classification techniques are applied along with the ML models for effectively identifying the physical characteristics of the food product. In an exemplary embodiment of the present invention, the ML models employ a label encoder technique and a random forest classifier technique for extracting the physical characteristics of the food product. The ML models are trained for extracting physical characteristics of the food product such as length, width, color (e.g., in a hex value format), perimeter, area, shape, circularity of the food product, impurities, and type of the food product. The ML model compares the extracted physical characteristics of the food product with pre-determined physical characteristics associated with the trained ML model and accurately determines the physical characteristics of the food product present in the image. The comparison is carried out by optimizing color detection algorithms to use a customized color palette as a reference. In an exemplary embodiment of the present invention, the marker that is used for color comparison is optimized to 40mm x 25mm with six colors. In an embodiment of the present invention, a cloud-based Flask API is applied for processing multiple images of the same samples of the food products.
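A minimal scikit-learn sketch of the label-encoder plus random-forest step; the feature columns (e.g., length, width, circularity) and class labels are illustrative stand-ins, not the patent's training data:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder

# Assumed toy features: [length_mm, width_mm, circularity] per grain.
features = [[5.9, 1.8, 0.82], [6.4, 2.0, 0.79], [4.1, 3.4, 0.91]]
labels = ["basmati", "basmati", "tur dal"]        # assumed classes

encoder = LabelEncoder()
y = encoder.fit_transform(labels)                 # strings -> integers

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, y)

# Predict the type of a new grain from its measured characteristics.
pred = clf.predict([[6.1, 1.9, 0.80]])
print(encoder.inverse_transform(pred))            # e.g. ['basmati']
```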
[0062] In another embodiment of the present invention, a count of the food product present in the image is identified. In yet another embodiment of the present invention, food product specific characteristics of the food product present in the image are determined. In an example, if the food product type is rice, then the chalkiness of the rice is determined. Chalkiness of rice is determined based on computing the percentage of the rice grain portion that is chalky. The chalky portion of rice grains is determined by measuring rice grain pixels in the image that are whitish as compared to the rest of the rice grain pixels in the image. Pixels that are concentrated in a region in the image are identified and noise is filtered out from the rice grain image. In another example, the marker is used to determine the color of the chalky portion of rice grains to measure the chalkiness percentage of rice grains accurately. In yet another embodiment of the present invention, chalkiness of the rice is determined by applying a median color-based thresholding technique, morphological operations (e.g., erosion, dilation, and bitwise AND operations), and filtering of false positive chalkiness pixels.
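A sketch of the median color-based thresholding with morphological clean-up described above, assuming OpenCV; the brightness offset above the median is an illustrative assumption:

```python
import cv2
import numpy as np

def chalkiness_percent(gray_grain, grain_mask):
    """Chalkiness sketch: pixels markedly whiter than the grain's median
    intensity are counted as chalky; erosion/dilation suppress isolated
    false-positive pixels. The +25 offset is an assumed tuning value."""
    grain_pixels = gray_grain[grain_mask > 0]
    median = float(np.median(grain_pixels))
    _, chalky = cv2.threshold(gray_grain, median + 25, 255,
                              cv2.THRESH_BINARY)
    chalky = cv2.bitwise_and(chalky, grain_mask)  # keep grain pixels only
    kernel = np.ones((3, 3), np.uint8)
    chalky = cv2.erode(chalky, kernel)   # drop speckle noise
    chalky = cv2.dilate(chalky, kernel)  # restore true chalky regions
    grain_px = max(cv2.countNonZero(grain_mask), 1)
    return 100.0 * cv2.countNonZero(chalky) / grain_px
```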
[0063] In an embodiment of the present invention, the ML models are applied to classify the food products by applying exploratory data analysis on the extracted characteristics. For example, rice is classified based on its different varieties, such as basmati rice, kolam rice, etc., and further the rice is classified as steamed, boiled, or raw. In another example, wheat is classified based on its different varieties, such as lokwan and sihori. In yet another example, different varieties of pulses are classified, such as tur dal, masoor dal, chana dal, mong dal, urad dal, vatana, matki, chavali, etc. In another example, pulses are classified as whole unpolished (with skin), whole polished (without skin), unpolished split, and polished split. In yet another example, different varieties of dry fruits are classified, such as almonds, cashews, raisins, pistachios, dates, fox nut, groundnut, etc. In another example, different varieties of spices are classified, such as cardamom, black pepper, coriander seeds, cumin, mustard seeds, sesame seeds, etc. In an example, different varieties of millets are classified, such as ragi, bajra, jawar, etc. In yet another example, different grades of cashews are classified based on the area of the cashew kernels, such as W-180, W-240, W-320, W-400, B, LWP, and BB, SWP. In another example, different varieties of chana are classified, such as vishal, mosambi, kanta, kabuli, etc., and their color, size, count, and impurities are further determined. In yet another example, different grades of walnut (without shell) are classified. In this example, in one instance, grading may relate to four types based on shades of color, i.e., very light, light, dark, and very dark. In another instance, grading may relate to three types based on shape, i.e., quarter (half of 1 wing), half (1 wing), and full (2 wings). In an embodiment of the present invention, moisture percentage is extracted from the food product for computing the moisture percentage data of the food products by employing a moisture meter (not shown).
[0064] In an embodiment of the present invention, quality of the grains is determined by applying the HuMoments technique for certain commodities, e.g., cashews, almonds, etc. For example, if the food product is cashew, then the HuMoments value for unbroken cashew kernels is computed as 0.21 with a grade value identified as W-320. In another example, for broken cashew kernels the HuMoments value is computed as 0.19 or 0.17 and the grade value is identified as broken. In another embodiment of the present invention, impurities in the food product are determined by employing the ML models. Impurities may
include, but are not limited to, damaged grains, discolored grains, broken grains, weevilled or damaged by insects, immature grain, and presence of foreign matter.
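For the Hu-moments step, a minimal OpenCV sketch is given below; the use of the first Hu moment alone and the 0.20 cut-off (between the ~0.21 unbroken and ~0.17-0.19 broken values quoted above) are assumptions for illustration:

```python
import cv2

def hu_shape_grade(grain_mask):
    """Shape-based grading sketch using Hu moments, as used for
    commodities such as cashews."""
    moments = cv2.moments(grain_mask, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # The first Hu moment is invariant to rotation, scale, and
    # translation; elongated unbroken kernels yield higher values than
    # rounder broken pieces.
    return "unbroken" if hu[0] >= 0.20 else "broken"
```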
[0065] In an embodiment of the present invention, the extracted physical characteristics data associated with the food product image, along with the moisture percentage data, is transmitted to a database 116 for storage. In an embodiment of the present invention, the data stored in the database 116 is employed for training the ML model. In an embodiment of the present invention, the standard food product physical characteristics are fetched, along with the moisture percentage data computed for the standard food product, from a server 130, and the characteristics are processed to remove any blur and noise. In an embodiment of the present invention, the stored physical characteristics of the food product extracted from the food product image are fetched from the database 116 for comparison with the processed standard food product physical characteristics, which aids in efficiently determining the type of food product present in the processed food product image.
[0066] At step 1512, results of the comparison between the characteristics obtained from the processed food product image, along with the moisture percentage data of the food product, and the standard food product characteristics are generated. In an embodiment of the present invention, the comparison results generated indicate similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics. In an embodiment of the present invention, the characteristics of the food products are determined.
[0067] In an embodiment of the present invention, the comparison result is compared with a pre-determined threshold range value for determining quality of the food product present in the processed food product image. In an exemplary embodiment of the present invention, in the event it is determined that the comparison results are within a pre-defined threshold range value, then one or more parameter types are computed for the food product associated with the processed food product image. The one or more parameter types may include, but are not limited to, computing statistical measurements of the food product, and descriptive measurements of the food product. The computed statistical measurements may include, but are not limited to, mean, median, range, and standard deviation. The computed descriptive measurements may include, but are not limited to, determining shape of the food product (e.g., long slender shape for rice), and determining color of the food product (e.g., “floral white” for color of the grains of rice). In an exemplary embodiment of the present invention, a summary view of computed dimensions of the food product is rendered via the GUI on the image acquisition device 110. The summary view provides a generic summary of the food product and a specific summary of the food product (e.g., broken rice percentage). In an exemplary embodiment of the present invention, quality score and result prediction may be computed for the food product based on the formulae as illustrated herein below:
For a given food product and a sample:
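The formula itself is not reproduced in the source text; one plausible LaTeX rendering, consistent with the variable definitions that follow and assuming a ratio-based score with assumed accept/warn thresholds t_a and t_w, is:

```latex
\mathrm{Score}_{\mathrm{property}} = C \cdot \frac{I_{\mathrm{property}}}{S_{\mathrm{property}}},
\qquad
\mathrm{Result}_{\mathrm{property}} =
\begin{cases}
\text{accept}, & \mathrm{Score}_{\mathrm{property}} \ge t_{a}\\
\text{warn},   & t_{w} \le \mathrm{Score}_{\mathrm{property}} < t_{a}\\
\text{reject}, & \mathrm{Score}_{\mathrm{property}} < t_{w}
\end{cases}
```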
where Score_property is the score for the property of the food product (e.g., width of rice); I_property is the computed value of the input sample's property (e.g., the computed mean length); S_property is the standard value of the property obtained from experiments and data analysis (e.g., the standard length-width ratio); and C is a correction factor based on the property and food product type.
where Result_property is the accept, warn, or reject result for the property, and Score_property is the score for the property of the food product.
[0068] In another exemplary embodiment of the present invention, in the event it is determined that
the comparison results are within the pre-defined threshold range value, then the food product is
determined to be of good quality and rendered via the GUI of the image acquisition device 110. In an
exemplary embodiment of the present invention, an indicia representing saleability of the food product is
determined based on the formulae, as illustrated herein below:
For a given food product and a sample:
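Again, the source does not reproduce the formula image; a weighted-sum reading consistent with the definitions below (the exact combination is an assumption) is:

```latex
\mathrm{Score}_{\mathrm{sample}} = C_{\mathrm{sample}} \cdot \sum_{i=1}^{n} PW_{i}\,\mathrm{Score}_{\mathrm{property},\,i} \; - \; E_{\mathrm{images}}
```

Result_sample would then follow by thresholding Score_sample, analogous to the property-level case, combined with the per-property results and the correction factor.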
where Score_sample is the score for a given sample of a food product; PW_i is the weightage for the i-th property, determined from a linear regression model; n is the total number of properties; Score_property is the score for the property of the food product; C_sample is a parameter which is a function of the sample size and distribution; and E_images is the error correction based on the average error associated with the images analyzed for the sample.
where Result_sample is the procurement decision, i.e., the accept, warn, or reject result for the sample; Score_sample is the score for the given sample of the commodity; Result_property is the accept, warn, or reject result for all the properties; and the correction factor is func(Type, Subtype, Product SKU).
[0069] At step 1514, visual feedback data is provided. In an embodiment of the present invention, visual feedback data is provided via the GUI of the image acquisition device 110, associated with detected contours drawn on an input food product image and intermediate pre-processing of the food product images, such as removal of Gaussian blur. The feedback data aids in tracking the quality of the food product tested with a sample during procurement and the quality of the same batch of product during sorting. Further, the feedback data aids in optimizing the weightage of various physical characteristics of the food products and fine-tuning the ML model for effective measurement of physical characteristics of the food product and generating accurate results. Further, in an embodiment of the present invention, the feedback data is processed to provide a descriptive output for decision making with respect to accept, warn, or reject the food product based on one or more index scores (i.e., overall and parameter level scores) computed for the food products.
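As an illustration of the visual feedback step, a minimal OpenCV sketch that draws the detected contours on the input image and overlays the accept/warn/reject decision; the colors, font, and placement are presentation assumptions, not specified by the source:

```python
import cv2

def render_feedback(image_bgr, contours, decision):
    """Draw detected grain contours and overlay the decision label."""
    out = image_bgr.copy()
    cv2.drawContours(out, contours, -1, (0, 255, 0), 2)  # green outlines
    color = {"accept": (0, 200, 0), "warn": (0, 165, 255),
             "reject": (0, 0, 255)}[decision]
    cv2.putText(out, decision.upper(), (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, color, 2)
    return out
```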
[0070] Advantageously, in accordance with various embodiments of the present invention, the present invention provides for efficient and accurate identification and measurement of physical characteristics of different varieties of food products. The present invention provides for computing physical dimensions of food products at a precision level of 0.01mm with an error of less than 5%. The present invention provides for accurate detection of small objects (grains) using an OpenCV-based computer vision technique. The present invention does not require fixing the distance between the food products and the camera, as a standard marker of known dimension is used. The present invention provides for using computer-vision based validation to effectively identify various lighting conditions, noise, and background texture variations associated with images of food products. Further, the present invention provides for validations to remove the image acquisition device's 110 camera angle induced errors. Furthermore, the present invention provides for using optimization techniques to maximize accuracy and minimize standard deviations in results obtained for the same samples of the food products, thereby reducing the effect of higher pixel-level sensitivity that affects the precision of the results. Yet further, the present invention provides for an efficient and cost-effective identification and measurement of physical characteristics of the food products without any human intervention.
[0071] FIG. 16 illustrates an exemplary system 1600 in which various embodiments of the present invention may be implemented. The computer system 1602 comprises a processor 1604 and a memory 1606. The processor 1604 executes program instructions and is a real processor. The computer system 1602 is not intended to suggest any limitation as to the scope of use or functionality of the described embodiments. For example, the computer system 1602 may include, but is not limited to, a programmed microprocessor, a micro-controller, a peripheral integrated circuit element, and other devices or arrangements of devices that are capable of implementing the steps that constitute the method of the present invention. In an embodiment of the present invention, the memory 1606 may store software for implementing various embodiments of the present invention. The computer system 1602 may have additional components. For example, the computer system 1602 includes one or more communication channels 1608, one or more input devices 1610, one or more output devices 1612, and storage 1614. An interconnection mechanism (not shown), such as a bus, controller, or network, interconnects the components of the computer system 1602. In various embodiments of the present invention, operating system software (not shown) provides an operating environment for various software executing in the computer system 1602 and manages the different functionalities of the components of the computer system 1602.
[0072] The communication channel(s) 1608 allow communication over a communication medium to various other computing entities. The communication medium carries information such as program instructions or other data over the communication media. The communication media include, but are not limited to, wired or wireless methodologies implemented with electrical, optical, RF, infrared, acoustic, microwave, Bluetooth, or other transmission media.
[0073] The input device(s) 1610 may include, but are not limited to, a keyboard, mouse, pen, joystick, trackball, voice device, scanning device, touch screen, or any other device that is capable of providing input to the computer system 1602. In an embodiment of the present invention, the input device(s) 1610 may be a sound card or a similar device that accepts audio input in analog or digital form. The output device(s) 1612 may include, but are not limited to, a user interface on a CRT or LCD, a printer, a speaker, a CD/DVD writer, or any other device that provides output from the computer system 1602.
[0074] The storage 1614 may include, but is not limited to, magnetic disks, magnetic tapes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and can be accessed by the computer system 1602. In various embodiments of the present invention, the storage 1614 contains program instructions for implementing the described embodiments.
[0075] The present invention may suitably be embodied as a computer program product for use with the computer system 1602. The method described herein is typically implemented as a computer program product, comprising a set of program instructions which is executed by the computer system 1602 or any other similar device. The set of program instructions may be a series of computer readable codes stored on a tangible medium, such as a computer readable storage medium (storage 1614), for example, a diskette, CD-ROM, ROM, flash drive, or hard disk, or transmittable to the computer system 1602, via a modem or other interface device, over a tangible medium, including but not limited to optical or analogue communication channel(s) 1608. The implementation of the invention as a computer program product may be in an intangible form using wireless techniques, including but not limited to microwave, infrared, Bluetooth, or other transmission techniques. These instructions can be preloaded into a system or recorded on a storage medium such as a CD-ROM, or made available for downloading over a network such as the internet or a mobile telephone network. The series of computer readable instructions may embody all or part of the functionality previously described herein.
[0076] The present invention may be implemented in numerous ways including as a system, a method, or a computer program product such as a computer readable storage medium or a computer network wherein programming instructions are communicated from a remote location.
[0077] While the exemplary embodiments of the present invention are described and illustrated herein, it will be appreciated that they are merely illustrative. It will be understood by those skilled in the art that various modifications in form and detail may be made therein without departing from the scope of the invention.
We claim:
1. A system (100) for optimized identification and measurement of characteristics of food products, the
system (100) comprising:
a memory (108) storing program instructions;
a processor (106) executing instructions stored in the memory (108); and
a food product recognition engine (104) executed by the processor (106) and configured to:
analyze an image of a food product and a marker received from an image acquisition device (110) based on one or more pre-determined parameters including analyzing the image based on determination of a shape, color and features of the marker and perform pre-determined modifications in the image of the food product based on the marker;
perform a three-level filtering of the food product image based on area, size limit and overlap of the food product present in the image;
extract one or more physical characteristics of the processed food product image based on trained Machine Learning (ML) models, wherein the ML models are trained to compare the extracted physical characteristics with pre-determined physical characteristics to determine physical characteristics of the processed food product present in the image accurately;
compare the determined physical characteristics of the food product with physical characteristics of a standard food product for determining similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics to obtain a comparison result; and
compare the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
2. The system (100) as claimed in claim 1, wherein the physical characteristics associated with the food products comprise length, width, color, and impurity.
3. The system (100) as claimed in claim 1, wherein a reference object is placed along with the food product on a background for capturing images of the food product, the reference object includes the marker of a pre-defined shape, dimension, and color references for determining color of the food product, and wherein the marker has six color references and four ArUco® markers.
4. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) receives the image from the image acquisition device (110) installed in a user device, and wherein an application is invoked on the image acquisition device (110) for rendering a Graphical User Interface (GUI) for selecting a category of the food product, and wherein after selection the image of the food product and the marker are captured via a camera associated with the image acquisition device (110) or via a camera feature associated with the application, and wherein one or more sensors associated with the image acquisition device (110) are used to determine orientation of the image acquisition device (110) for capturing the image.
5. The system (100) as claimed in claim 1, wherein the pre-determined parameters include checking the presence of an appropriate background on which the food product is spread, and validating, by an image processing unit (112) in the food product recognition engine (104), if a camera angle is in a preferred range for capturing the image via a cloud Application Programming Interface (API).
6. The system (100) as claimed in claim 5, wherein the image processing unit (112) performs the camera angle validation for the captured image, wherein in the event an actual shape of the marker is rectangular then the camera angle validation is carried out by analyzing a distorted shape of the marker present in the image against the actual rectangular shape of the marker, and based on the analysis a model is generated to compute a camera angle deviation from an ideal camera angle orientation, and wherein in the event the actual shape of the marker is circular then the camera angle validation is carried out based on the marker’s circularity, and wherein in the event the actual shape of the marker is elliptical then the camera angle validation for the captured image is carried out based on a ratio of a major and a minor axis of the marker’s shape, and wherein in the event the actual shape of the marker is L-shaped or U-shaped then the camera angle validation for the captured image is carried out based
on a transformation of a rectangle shape of the marker in an x-axis and y-axis, and wherein the captured image is validated based on generic constraints or food product specific constraints and properties of the captured image of the food product.
7. The system (100) as claimed in claim 1, wherein the predetermined modifications include carrying out a perspective correction and color correction of the image, and wherein the perspective correction of the image is carried out based on an actual shape of the marker and the shape of the marker in the image, and wherein the color correction of the image is carried out using the known colors of the marker and colors of the markers in the image, and wherein the marker is used to compute camera pose estimation and camera calibration, which aids in camera angle validation.
8. The system (100) as claimed in claim 7, wherein the image processing unit (112) transforms colors of the image of the food product based on the marker, and wherein the image processing unit (112) applies a linear regression technique and a comparative analysis of colors in different color channels of different color spaces to carry out the transformation.
9. The system (100) as claimed in claim 8, wherein the image processing unit (112) carries out the transformation by determining a closest color using Red, Green, Blue (RGB) values for the food product present in the image, and wherein the image processing unit (112) is configured to carry out color correction of the food product image by generating a color correction model.
10. The system (100) as claimed in claim 9, wherein the color correction model is computed based on inputs including a source input and a reference input, wherein the source input comprises an input color comprising a mean of pixel level color values for each color area after subtracting the ArUco® markers extracted from the food product image and the reference input comprises one or more known reference color values for each color area in the ArUco® markers, and wherein the color correction model provides output in a standard RGB color space and determines a distance between the two inputs.
11. The system (100) as claimed in claim 10, wherein the color correction model provides a 3x3 linear transformation on color values and performs color linearization for accurate calibration of the colors in the food product image, and wherein the color correction model is trained with sample food product images for carrying out efficient color correction.
12. The system (100) as claimed in claim 11, wherein the trained color correction model is configured to compute a transformation matrix and a loss value, and wherein the color of the food product in the image is calibrated by the color correction model based on the transformation matrix, and wherein the color correction model computes colors of the food products from the color corrected image for accurate representation of colors of the food products.
13. The system (100) as claimed in claim 12, wherein the image processing unit (112) is configured to determine color of the food product after the transformation, wherein the image processing unit (112) determines if the food product is translucent or light colored or dark colored, the color of the translucent food product is determined by comparing the colors measured at a grain cluster level and at a separate grain level of the food product image based on a histogram matching technique or color of the translucent product is determined at a cluster level and size is measured at a grain level, and wherein the dark color of the food product is determined by a white background validation, shadow removal and mask inversion of the food product image.
14. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) pre-processes the analyzed image by applying one or more image pre-processing techniques, the pre-processing includes resizing of the image of the food product by applying smart dynamic cropping and gray scaling, removing blur from the image by applying noise filtration to the image, applying gamma correction to the image, applying image thresholding to the image, and carrying out edge detection of the image.
15. The system (100) as claimed in claim 1, wherein the image processing unit (112) is configured to carry out the three-level filtering on the food product image, the three-level filtering includes carrying out area filtering of the food product present in the image by using area property and removing dust and noise from background of the image, carrying out a size limit filtering based on a size property of the food product present in the image, and carrying out an overlapping filtering of the food product present in the image.
16. The system (100) as claimed in claim 15, wherein the size property is determined by computing a maximum length and a maximum width of the food product present in the image, the size limit includes a small, a medium and a large food product size, and wherein for the smaller food product size and the medium food product size processing is carried out by applying a canny edge mask technique and bitwise AND operation and the size limit of the large food product size is determined by applying image thresholding technique for hue, saturation, value channels and combining them with bitwise operations, and carrying out grain level thresholding.
17. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) comprises a first characteristics extraction unit (114) for extracting the physical characteristics of the processed food product by optimizing attributes including contour or perimeter of the food product present in the image, by computing a pixel area of each food product in the image, and by converting pixel areas of each food product present in the image to a physical area in square millimeters (mm²), and identifying the food product as slender, medium, or bold based on an aspect ratio.
18. The system (100) as claimed in claim 1, wherein the ML models employ a label encoder technique and a random forest classifier technique for extracting the physical characteristics of the food product, and wherein the ML models are trained for extracting the physical characteristics of the food product including length, width, color, perimeter, area, shape, circularity of the food product, impurities, and type of the food product.
19. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) comprises a comparison unit (118) for carrying out the comparison by optimizing color detection algorithms to use a customized color palette as a reference.
20. The system (100) as claimed in claim 17, wherein the first characteristics extraction unit (114) is configured to identify a count of the food product present in the image.
21. The system (100) as claimed in claim 20, wherein the first characteristics extraction unit (114) is configured to determine specific characteristics of the food product present in the image, and wherein if it is determined that the food product type is rice then chalkiness of rice is determined based on computing the percentage of the rice grain portion that is chalky, and wherein the chalky portion of rice grains is determined by measuring rice grain pixels in the image that are whitish as compared to the rest of the rice grain pixels in the image, and wherein pixels that are concentrated in a region in the image are identified and noise is filtered out from the rice grain image.
22. The system (100) as claimed in claim 21, wherein the marker is used to determine the color of the chalky portion of rice grains to measure a chalkiness percentage of rice grains accurately.
23. The system (100) as claimed in claim 17, wherein the first characteristics extraction unit (114) applies the ML models to classify the food product by applying exploratory data analysis on the extracted physical characteristics, and wherein the first characteristics extraction unit (114) is configured to extract a moisture percentage from the food product for computing a moisture percentage data of the food product by employing a moisture meter.
24. The system (100) as claimed in claim 17, wherein the first characteristics extraction unit (114) determines quality of the grains by applying the HuMoments technique for certain food products.
25. The system (100) as claimed in claim 17, wherein the first characteristics extraction unit (114) is configured to determine impurities in the food product by employing the ML models, and wherein the impurities comprise damaged grains, discolored grains, broken grains, weevilled or damaged by insects, immature grain, and presence of foreign matter.
26. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) comprises a second characteristics extraction unit (120) executed by the processor (106) and configured to fetch the standard food product physical characteristics along with moisture percentage data computed for the standard food product from a server (130) and process the characteristics for removing any blur and noise, and wherein a comparison unit (118) in the food product recognition engine (104) is configured to fetch the stored physical characteristics of the food product extracted from the food product image from a database (116) for comparison with the standard food product physical characteristics processed by the second characteristics extraction unit (120).
27. The system (100) as claimed in claim 26, wherein the food product recognition engine (104) comprises a comparison result processing unit (122) executed by the processor (106) and configured to communicate with the comparison unit (118) to generate the comparison results of the characteristics obtained from the processed food product image along with the moisture percentage data of the food product and the standard food product characteristics.
28. The system (100) as claimed in claim 27, wherein the comparison result processing unit (122) is configured to communicate with a food product characteristics computation unit (124) for determining the physical characteristics of the food products, and wherein the food product characteristics computation unit (124) is configured to communicate with a threshold comparison unit (126) for comparing the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
29. The system (100) as claimed in claim 28, wherein in the event it is determined that the comparison results are within the pre-defined threshold range value, then the food product characteristics computation unit (124) computes one or more parameter types for the food product associated with the processed food product image, the one or more parameter types comprise computing statistical measurements of the food product, and descriptive measurements of the food product, the computed statistical measurements comprise mean, median, range, and standard deviation, and wherein the computed descriptive measurements comprise determining shape of the food product, and determining color of the food product.
30. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) comprises a feedback unit (128) configured to generate a summary view of computed dimensions of the food product for rendering via a Graphical User Interface (GUI) on the image acquisition device (110), and wherein the summary view provides a generic summary of the food product and a specific summary of the food product.
31. The system (100) as claimed in claim 29, wherein the food product recognition engine (104) comprises a feedback unit (128) that determines the food product to be of good quality in the event the comparison results are determined to be within the pre-defined threshold range value for rendering the good quality status via a GUI of the image acquisition device (110).
32. The system (100) as claimed in claim 1, wherein the food product recognition engine (104) comprises a feedback unit (128) executed by the processor (106) and configured to provide visual feedback data via a GUI of the image acquisition device (110) associated with detected contours drawn on an input food product image and carries out intermediate pre-processing of the food product image such as removal of Gaussian blur, and wherein the feedback data from the feedback unit (128) is processed to provide a descriptive output for decision making with respect to accept, warn, or reject the food product based on one or more index scores computed for the food products.
33. A method for optimized identification and measurement of characteristics of food products, the
method is implemented by a memory (108) storing program instructions and a processor (106)
executing instructions stored in the memory (108), the method comprises:
analyzing an image of a food product and a marker received from an image acquisition device (110) based on one or more pre-determined parameters including analyzing the image based on determination of a shape, color and features of the marker and performing pre-determined modifications in the image of the food product based on the marker;
performing a three-level filtering of the food product image based on area, size limit and overlap of the food product present in the image;
extracting one or more physical characteristics of the processed food product image based on trained Machine Learning (ML) models, wherein the ML models are trained to compare the extracted physical characteristics with pre-determined physical characteristics to determine physical characteristics of the processed food product present in the image accurately;
comparing the determined physical characteristics of the food product with physical characteristics of a standard food product for determining similarity or dissimilarity between the characteristics obtained from the processed food product image and the standard food product characteristics to obtain a comparison result; and
comparing the comparison result with a pre-determined threshold range value for determining quality of the food product present in the processed food product image.
34. The method as claimed in claim 33, wherein a reference object of a pre-defined size is placed along with the food product on a background for capturing images of the food product, and wherein the reference object includes a marker of a pre-defined shape, dimension, and pre-defined color references for measuring color of the food product, and wherein the marker has six color references and four ArUco® markers.
35. The method as claimed in claim 33, wherein an application is invoked in a user device for rendering a Graphical User Interface (GUI) for selecting a category of food products, and subsequent to the selection the image of the food product and the marker are captured via a camera or via a camera feature associated with the application.
36. The method as claimed in claim 33, wherein the pre-determined parameters include checking the presence of an appropriate background on which the food product is spread, and validating if a camera angle is in a preferred range for capturing the image via a cloud Application Programming Interface (API).
37. The method as claimed in claim 34, wherein in the event an actual shape of the marker is rectangular then the camera angle validation is carried out by analyzing a distorted shape of the marker present in the image against the actual rectangular shape of the marker, and based on the analysis a model is generated to compute a camera angle deviation from an ideal camera angle orientation, and wherein in the event the actual shape of the marker is circular then camera angle validation for the captured image is carried out based on the marker’s circularity, and wherein in the event the actual shape of the marker is elliptical then the camera angle validation for the captured image is carried out based on a ratio of a major and a minor axis of the marker’s shape, and wherein in the event the actual shape of the marker is L-shaped or U-shaped, then the camera angle validation for the captured image is carried out based on a transformation of a rectangle shape of the marker in an x-axis and y-axis, and wherein the captured image is validated based on generic constraints or food product specific constraints and properties of the captured image of the food products.
38. The method as claimed in claim 33, wherein the predetermined modifications include carrying out a perspective correction and color correction of the image and wherein the perspective correction of the image is carried out based on an actual shape of the marker and the shape of the marker in the image, and wherein the color correction of the image is carried out using the known colors of the marker and colors of the markers in the image, and wherein the marker is used to compute camera pose estimation and camera calibration, which aids in camera angle validation.
39. The method as claimed in claim 38, wherein colors of the image of the food product are transformed based on the marker, and wherein a linear regression technique and a comparative analysis of colors is applied in different color channels of different color spaces to carry out the transformation.
40. The method as claimed in claim 39, wherein a closest color is determined using Red, Green, Blue (RGB) values for the food product present in the image, and wherein the image processing unit (112) is configured to carry out color correction of the food product image by generating a color correction model.
41. The method as claimed in claim 40, wherein the color correction model is computed based on inputs including a source input and a reference input, wherein the source input comprises an input color comprising a mean of pixel level color values for each color area after subtracting an ArUco® marker extracted from the food product image and the reference input comprises one or more known reference color values for each color area in the ArUco® marker, and wherein the color correction model provides output in a standard RGB color space and determines a distance between the two inputs.
42. The method as claimed in claim 41, wherein the color correction model provides a 3x3 linear transformation on color values and performs color linearization for accurate calibration of the colors in the food product image, and wherein the color correction model is trained with sample food product images for carrying out efficient color correction.
43. The method as claimed in claim 42, wherein the trained color correction model is configured to compute a transformation matrix and a loss value, and wherein the color of the food product in the image is calibrated by the color correction model based on the transformation matrix, and wherein the color correction model computes colors of the food products from the color corrected image for accurate representation of colors of the food products.
44. The method as claimed in claim 43, wherein color of the food product is determined after the transformation, and wherein it is determined if the food product is translucent or light colored or dark colored, the color of the translucent food product is determined by comparing the colors measured at a grain cluster level and at a separate grain level of the food product image based on a histogram matching technique or color of the translucent product is determined at a cluster level and size is measured at a grain level, and wherein the dark color of the food product is determined by a white background validation, shadow removal and mask inversion of the food product image.
45. The method as claimed in claim 33, wherein the method comprises pre-processing the analyzed image by applying one or more image pre-processing techniques, the pre-processing includes resizing of the image of the food product by applying smart dynamic cropping and gray scaling, removing blur from the image by applying noise filtration to the image, applying gamma correction to the image, applying image thresholding to the image, and carrying out edge detection of the image.
46. The method as claimed in claim 33, wherein the three-level filtering for processing the food product image includes carrying out area filtering of the food product present in the image by using area property and removing dust and noise from the background of the image, carrying out a size limit filtering based on a size property of the food product present in the image, and carrying out an overlapping filtering of the food product present in the image.
47. The method as claimed in claim 46, wherein the size property is determined by computing a maximum length and a maximum width of the food product present in the image, the size limit includes a small, a medium and a large food product size, and wherein for the smaller food product size and the medium food product size processing is carried out by applying a canny edge mask technique and bitwise AND operation and the size limit of the large food product size is determined by applying image thresholding technique for hue, saturation, value channels and combining them with bitwise operations, and carrying out grain level thresholding.
48. The method as claimed in claim 33, wherein the extraction of the physical characteristics is carried out by optimizing attributes including contour or perimeter of the food product present in the image, by computing a pixel area of each food product in the image, and by converting pixel areas of each food product present in the image to a physical area in square millimeters (mm²), and wherein the feature extraction process identifies the food product as slender, medium, or bold based on an aspect ratio.
49. The method as claimed in claim 33, wherein the ML models employ a label encoder technique and a random forest classifier technique for extracting the physical characteristics of the food product, and wherein the ML models are trained for extracting the physical characteristics of the food product including length, width, color, perimeter, area, shape, circularity of the food product, impurities, and type of the food product.
50. The method as claimed in claim 33, wherein the comparison is carried out by optimizing color detection algorithms to use a customized color palette as a reference.
51. The method as claimed in claim 33, wherein food product specific characteristics of the food product present in the image are determined, and wherein if the food product type is determined to be rice then chalkiness of rice is determined based on computing the percentage of the rice grain portion that is chalky, and wherein the chalky portion of rice grains is determined by measuring rice grain pixels in the image that are whitish as compared to the rest of the rice grain pixels in the image, and wherein pixels that are concentrated in a region in the image are identified and noise is filtered out from the rice grain image.
52. The method as claimed in claim 33, wherein the ML models are applied to classify the food products by applying exploratory data analysis on the extracted characteristics, and wherein moisture percentage is extracted from the food product for computing a moisture percentage data of the food product by employing a moisture meter.
53. The method as claimed in claim 33, wherein impurities in the food product are determined by employing the ML models, and wherein the impurities comprise damaged grains, discolored grains, broken grains, weevilled or damaged by insects, immature grain, and presence of foreign matter.
54. The method as claimed in claim 52, wherein the standard food product physical characteristics are fetched along with moisture percentage data computed for the standard food product from a server (130) and the characteristics are processed for removing any blur and noise, and wherein the stored physical characteristics of the food product extracted from the food product image are fetched from a database (116) for comparison with the standard food product physical characteristics.
55. The method as claimed in claim 33, wherein in the event it is determined that the comparison results are within a pre-defined threshold range value then one or more parameter types for the food product associated with the processed food product image are computed, the one or more parameter types comprise computing statistical measurements of the food product, and descriptive measurements of the food product, and wherein the computed statistical measurements comprise mean, median, range, and standard deviation, and wherein the computed descriptive measurements comprise determining shape of the food product, and determining color of the food product.