Abstract: A method and system for quantifying concentration of an analyte through artificial intelligence based image interpretation is provided. The system includes an image acquisition module (115) to receive an image from a lateral flow assay captured by a user via a camera. The system includes an image processing module (120) to determine frames from the image to identify a first region of interest, apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest and detect a top band and a bottom band within the first region of interest that signify an area for quantification. The system includes a machine learning module (125) to calculate a ratio of the sums of the signal intensities and determine the concentration of the analyte based on a training set fed as an input to a linear regression model, thereby predicting the concentration value of the analyte. FIG. 1
Description:
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to lateral flow technology and more particularly to a method and a system to quantify concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation.
BACKGROUND
[0002] Lateral flow assays (LFAs) dominate non-glucose rapid testing in humans and other areas of application that require rapid generation of a test result, including veterinary diagnostics, agricultural testing, bio-warfare testing and food safety testing, as examples. Further, LFAs work on the principle of immunochromatography and are used to monitor biological interactions. The biological interactions produce results in the form of the presence or absence of lines that can be read or interpreted by the eye. The lines are usually represented by means of color generation via biosensors. Consequently, LFAs serve as biosensors to analyze the presence of an analyte. This technique offers several advantages: LFAs are easy to operate, eliminate the need for technical proficiency and are independent of any instrument to read the results. Owing to these advantages, LFAs have found widespread application in the form of point-of-care devices, where a user can perform a biochemical test in minutes to know the presence of the analyte.
[0003] Despite the mentioned advantages, the applicability of LFAs is limited to qualitative measurements, where a user can detect the presence or absence of an analyte. In particular, information on the concentration of the analyte is absent. Further, LFAs require assistance from quantitative profiling methods that in turn require technical personnel operating a massive instrument. Several methods have been developed to interpret the color generated in an LFA by means of various read-out devices and smartphones to harvest additional information. However, these methods are usually limited to validating the presence of the assay lines.
[0004] Quantification of an analyte from the assay lines on an LFA requires a precise measurement of the color intensity, which is complicated by environmental and hardware variability. These variables can be omitted to some extent by using a stand-alone reader unit; however, this compromises the applicability of the LFAs as point-of-care devices. Further, regardless of the method of measurement, the quantification of an analyte requires the color intensity to be correlated with a calibration curve to give a precise concentration. This calibration, besides the imaging variability, is not addressed by the methods attempting the quantification of the analyte, restricting the LFAs from exploring their complete potential as diagnostic devices.
[0005] Hence, there is a need for an improved method and system to quantify concentration of an analyte based on image interpretation using artificial intelligence to address the aforementioned issue(s).
BRIEF DESCRIPTION
[0006] In accordance with an embodiment of the present disclosure, a system for quantifying concentration of an analyte using artificial intelligence image interpretation is provided. The system includes a processing subsystem hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an image acquisition module configured to receive an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing. Further, the processing subsystem includes an image processing module operatively coupled to the image acquisition module, wherein the image processing module is configured to determine one or more frames from the image to identify a first region of interest, wherein the first region of interest signifies a potential area to be used for quantification. Further, the image processing module is configured to apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest. Furthermore, the image processing module is configured to detect a top band and a bottom band within the first region of interest to determine a second region of interest, wherein the second region of interest signifies an area for quantification. Furthermore, the processing subsystem includes a machine learning module operatively coupled to the image processing module, wherein the machine learning module is configured to calculate a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band.
Further, the machine learning module is configured to determine the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input from a database to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte. The processing subsystem includes a display module operatively coupled to the machine learning module, wherein the display module is configured to display the predicted concentration value of the analyte to the user.
[0007] In accordance with another embodiment of the present disclosure, a method for quantifying concentration of an analyte using artificial intelligence image interpretation is provided. The method includes receiving, by an image acquisition module of a processing subsystem, an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing. The method also includes determining, by the image processing module of the processing subsystem, one or more frames from the image to identify a first region of interest, wherein the first region of interest signifies a potential area to be used for quantification. Further, the method includes applying, by the image processing module of the processing subsystem, a noise reduction technique to eliminate one or more unwanted signals from the first region of interest. Furthermore, the method includes detecting, by the image processing module of the processing subsystem, a top band and a bottom band within the first region of interest to determine a second region of interest, wherein the second region of interest signifies an area for quantification. Moreover, the method includes calculating, by a machine learning module of the processing subsystem, a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band. The method also includes determining, by the machine learning module of the processing subsystem, the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input from a database to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte.
The method includes displaying, by a display module of the processing subsystem, the predicted concentration value of the analyte to the user.
[0008] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0009] FIG. 1 is a block diagram representation of system for quantifying concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation, in accordance with an embodiment of the present disclosure;
[00010] FIG. 2 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure; and
[00011] FIG. 3(a) and FIG. 3(b) illustrate a flow chart representing steps involved in a method for quantifying concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation, in accordance with an embodiment of the present disclosure.
[00012] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[00013] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[00014] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[00015] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[00016] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[00017] Embodiments of the present disclosure relate to a system and method to quantify concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation. The system includes a processing subsystem hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an image acquisition module configured to receive an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing. Further, the processing subsystem includes an image processing module operatively coupled to the image acquisition module, wherein the image processing module is configured to determine one or more frames from the image to identify a first region of interest, wherein the first region of interest signifies a potential area to be used for quantification. Further, the image processing module is configured to apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest. Furthermore, the image processing module is configured to detect a top band and a bottom band within the first region of interest to determine a second region of interest, wherein the second region of interest signifies an area for quantification. Furthermore, the processing subsystem includes a machine learning module operatively coupled to the image processing module, wherein the machine learning module is configured to calculate a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band.
Further, the machine learning module is configured to determine the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte. The detailed process to quantify concentration of an analyte from a lateral flow assay is further described in FIG. 1 onwards.
[00018] FIG. 1 is a block diagram representation of system (100) for quantifying concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation, in accordance with an embodiment of the present disclosure. The system (100) includes a processing subsystem (105) hosted on a server (108). In one embodiment, the server (108) may include a cloud server. In another embodiment, the server (108) may include a local server.
[00019] In another embodiment, parts of the server (108) may be a local server coupled to a user device (140). Examples of the user device (140) include, but are not limited to, a mobile phone, desktop computer, portable digital assistant (PDA), smart phone, tablet, ultra-book, netbook, laptop, multi-processor system, microprocessor-based or programmable consumer electronic system, or any other communication device. Specifically, the user device (140) is configured with a camera capable of capturing an image from the lateral flow assay. In a preferred embodiment, the camera is a phone camera.
[00020] The processing subsystem (105) is configured to execute on a network (110) to control bidirectional communications among a plurality of modules. In one example, the network (110) may be a private or public local area network (LAN) or Wide Area Network (WAN), such as the Internet. In another embodiment, the network (110) may include both wired and wireless communications according to one or more standards and/or via one or more transport mediums. In one example, the network (110) may include wireless communications according to one of the 802.11 or Bluetooth specification sets, or another standard or proprietary wireless communication protocol. In yet another embodiment, the network (110) may also include communications over a terrestrial cellular network, including, a global system for mobile communications (GSM), code division multiple access (CDMA), and/or enhanced data for global evolution (EDGE) network.
[00021] Further, the processing subsystem (105) includes an image acquisition module (115) configured to receive an image with corresponding guard rails from a lateral flow assay in response to a user (135) capturing the image via a camera configured on a user device (140). It must be noted that the camera is a primary input for subsequent processing of the image. Therefore, the user must ensure appropriate camera resolution, focus and lighting conditions to capture the image. In one embodiment, a user guidance may be provided to the user. Further, in another embodiment, auto focus of the camera may be enabled via programs configured in the user device (140), to capture the best resolution. Further, the user device (140) is configured with machine readable instructions (in other words, a mobile app) that guides the user to capture the image with its appropriate guard rails. Subsequently, the image is subjected to processing.
[00022] In one embodiment, the image is zoomed multiple times to arrive at the potential area to be used for quantification. The image is analyzed based on brightness gradient, absolute brightness, shadow, sharpness, absorbance, transmittance, contrast and combinations thereof. This ensures that the right region is used within the entire image that is initially captured. Edges are detected and boundaries are set to define the region of interest for quantifying. In one embodiment, appropriate masking is applied to filter out the regions for detection.
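By way of illustration, the brightness-gradient analysis and boundary setting described in the paragraph above can be sketched in a few lines of Python. This is a minimal, hypothetical stand-in (the function name, gradient threshold and synthetic strip image are invented for the example and are not part of the disclosed implementation):

```python
import numpy as np

def find_first_roi(image, grad_thresh=30.0):
    """Locate a candidate first region of interest in a grayscale image
    by thresholding the brightness gradient and taking the bounding box
    of high-gradient pixels (a simple stand-in for edge detection)."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)        # brightness-gradient strength
    mask = magnitude > grad_thresh      # mask out low-contrast regions
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                     # no edges detected
    return (ys.min(), ys.max(), xs.min(), xs.max())  # ROI bounding box

# Synthetic strip: uniform bright background with one dark band,
# mimicking an assay line inside a larger captured image
img = np.full((100, 60), 200.0)
img[40:50, 10:50] = 50.0                # dark band creates strong edges
roi = find_first_roi(img)
```

On the synthetic strip, the returned bounding box encloses the dark band, mimicking how edges are detected and boundaries set to define the region used for quantification.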
[0023] In another embodiment, the image processing module (120) is configured to validate the image, upon applying the noise reduction technique, for occurrence of one or more control and test lines. Further, the image processing module (120) is configured to apply a low pass filter and signal smoothing to accurately detect one or more peaks in the signal.
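As a hedged illustration, the low pass filtering and peak detection described above might look like the following numpy-only sketch. A moving average stands in for whatever low pass filter is actually used, and the synthetic intensity profile and threshold values are invented for the example:

```python
import numpy as np

def smooth(signal, window=5):
    """Low-pass filter via a moving average (a simple stand-in
    for any suitable smoothing kernel)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def find_peaks(signal, min_height=0.0):
    """Naive peak detection: a sample strictly higher than both
    neighbours and above a minimum height."""
    s = np.asarray(signal)
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) &
                   (s[1:-1] > min_height))[0] + 1
    return idx

# Synthetic intensity profile with two assay-line peaks plus noise
x = np.linspace(0, 1, 200)
rng = np.random.default_rng(0)
profile = (np.exp(-((x - 0.3) ** 2) / 0.001) +
           0.5 * np.exp(-((x - 0.7) ** 2) / 0.001) +
           0.05 * rng.standard_normal(200))
peaks = find_peaks(smooth(profile), min_height=0.2)
```

Smoothing before peak detection suppresses spurious local maxima from sensor noise, so only the control and test line positions survive the height threshold.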
[00024] The processing subsystem (105) includes an image processing module (120) configured to determine one or more frames from the image to identify a first region of interest wherein the first region of interest signifies a potential area to be used for quantification. The first region of interest represents only a small portion of the entire image. The image processing module (120) is also configured to apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest. Further, the image processing module (120) is configured to detect a top band and a bottom band within the first region of interest to determine a second region of interest wherein the second region of interest signifies an area for quantification.
[00025] In one embodiment, the top band and the bottom band are detected based on one or more peaks present towards a midpoint of the image.
[00026] The processing subsystem (105) includes a machine learning module (125) operatively coupled to the image processing module (120). The machine learning module (125) is configured to calculate a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band. Further, the machine learning module (125) is configured to determine the concentration of the analyte based on a training set generated from a plurality of images or samples, wherein the training set is fed as an input to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte. The samples may be, without limitation, a biological sample, a chemical sample or an environmental sample. Further, the training set is stored in a database (150).
[00027] In one embodiment, the predicted concentration value of the analyte is further validated based on a training set of a plurality of trial images, wherein the training set is fed as an input to a Lagrangian Support Vector Machine (LSVM).
[00028] The processing subsystem (105) includes a display module (130) operatively coupled to the machine learning module (125) wherein the display module (130) is configured to display the predicted concentration value of the analyte to the user (135).
[00029] FIG. 2 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure. The server (200) includes processor(s) (230), and memory (210) operatively coupled to the bus (220). The processor(s) (230), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[00030] The memory (210) includes several subsystems stored in the form of an executable program which instructs the processor (230) to perform the method steps illustrated in FIG. 3(a) and FIG. 3(b). The memory (210) includes the processing subsystem (105) of FIG. 1. The processing subsystem (105) further has the following modules: an image acquisition module (115), an image processing module (120), a machine learning module (125) and a display module (130).
[00031] The image acquisition module (115) is configured to receive an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing. Further, the processing subsystem (105) includes an image processing module (120) operatively coupled to the image acquisition module (115), wherein the image processing module (120) is configured to determine one or more frames from the image to identify a first region of interest, wherein the first region of interest signifies a potential area to be used for quantification. Further, the image processing module (120) is configured to apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest. Furthermore, the image processing module (120) is configured to detect a top band and a bottom band within the first region of interest to determine a second region of interest, wherein the second region of interest signifies an area for quantification. Furthermore, the processing subsystem (105) includes a machine learning module (125) operatively coupled to the image processing module (120), wherein the machine learning module (125) is configured to calculate a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band. Further, the machine learning module (125) is configured to determine the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte.
The processing subsystem (105) includes a display module (130) operatively coupled to the machine learning module (125) wherein the display module (130) is configured to display the predicted concentration value of the analyte to the user.
[00032] The bus (220) as used herein refers to an internal memory channel or computer network that is used to connect computer components and transfer data between them. The bus (220) includes a serial bus or a parallel bus, wherein the serial bus transmits data in bit-serial format and the parallel bus transmits data across multiple wires. The bus (220) as used herein may include, but is not limited to, a system bus, an internal bus, an external bus, an expansion bus, a frontside bus, a backside bus and the like.
[00033] FIG. 3(a) and FIG. 3(b) illustrate a flow chart representing steps involved in a method (300) for quantifying concentration of an analyte from a lateral flow assay through artificial intelligence based image interpretation, in accordance with an embodiment of the present disclosure. The method (300) includes receiving, by an image acquisition module of a processing subsystem, an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing in step 305.
[00034] In one embodiment, the image is captured using the auto-focus feature of the camera to ensure a high resolution of the image.
[00035] The method (300) includes determining, by the image processing module of the processing subsystem, one or more frames from the image to identify a first region of interest wherein the first region of interest signifies a potential area to be used for quantification in step 310. The first region of interest is identified by using edge detection of a preferred landmark (for instance, the location of a QR code, the existence of a lateral flow band and the like) and by setting boundaries.
[00036] Further, the image is zoomed a predetermined number of times to ensure that the best region is selected for quantification.
[00037] Further, the method (300) includes applying, by the image processing module of the processing subsystem, a noise reduction technique to eliminate one or more unwanted signals from the first region of interest in step 315. It must be noted that any suitable noise reduction technique may be used to ensure that the exact region of interest is isolated.
[00038] Furthermore, the method (300) includes detecting, by the image processing module of the processing subsystem, a top band and a bottom band within the first region of interest to determine a second region of interest wherein the second region of interest signifies an area for quantification in step 320.
[00039] Moreover, the method (300) includes calculating, by a machine learning module of the processing subsystem, a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band in step 325. The detection of the bands is performed via computer vision and signal processing. Typically, images are accompanied by noise that must be smoothed out; the bands are then located by identifying the peaks closest to the center of the image.
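The "closest peaks to the center" heuristic for band selection can be sketched as follows; the function name and example peak indices are hypothetical:

```python
import numpy as np

def bands_nearest_centre(peak_indices, signal_length):
    """Pick the two detected peaks closest to the image midpoint as
    the top and bottom bands, per the described heuristic; spurious
    peaks near the strip edges are thereby discarded."""
    centre = signal_length / 2
    ordered = sorted(peak_indices, key=lambda i: abs(i - centre))
    top, bottom = sorted(ordered[:2])   # top band has the smaller index
    return top, bottom

# Example: two spurious edge peaks plus two genuine band peaks
peaks = [5, 80, 120, 195]
top, bottom = bands_nearest_centre(peaks, 200)
```

Here the edge artefacts at indices 5 and 195 are rejected, leaving the two central peaks as the top and bottom bands of the second region of interest.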
[00040] Typically, the RGB image is converted into a corresponding LAB colour space. Specifically, a correlation using the LAB colour space is determined for the lateral flow assay. The ‘A’ channel allows determination of the red-green axis based on the gold nanoparticle conjugate used. This may be referred to as ‘channel separation’.
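The RGB-to-LAB conversion and ‘A’-channel separation described above can be sketched with a standard sRGB-to-CIELAB transform (D65 white point). This numpy-only version is a simplified illustration of the general technique, not the disclosed implementation:

```python
import numpy as np

def rgb_to_lab_a_channel(rgb):
    """Convert sRGB values (floats in [0, 1]) toward CIELAB and return
    the A (green-red) channel, which carries the reddish signal of a
    gold nanoparticle conjugate. Simplified D65 reference conversion."""
    rgb = np.asarray(rgb, dtype=float)
    # sRGB -> linear RGB (inverse gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (D65 matrix)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    # normalise by the D65 white point
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz),
                 7.787 * xyz + 16.0 / 116.0)
    return 500.0 * (f[..., 0] - f[..., 1])  # A: red-green axis

# A reddish pixel (assay line) scores high on A; grey scores near zero
red = rgb_to_lab_a_channel(np.array([[0.8, 0.2, 0.2]]))
grey = rgb_to_lab_a_channel(np.array([[0.5, 0.5, 0.5]]))
```

Because neutral background pixels land near zero on the A axis while the nanoparticle-stained lines score strongly positive, thresholding the A channel isolates the assay lines from the strip background.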
[00041] In addition, the method (300) includes determining, by the machine learning module of the processing subsystem, the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte in step 330.
[00042] Typically, the machine learning module is trained with a plurality of samples. The samples include, but are not limited to, blood, serum, saliva and urine.
[00043] The method (300) also includes displaying, by a display module of the processing subsystem, the determined concentration of the analyte to the user in step 335.
[00044] Various embodiments of the present disclosure as described above provide accuracy in quantifying results of the analyte through the artificial intelligence based image interpretation. The system and method resolve the aforementioned problems by channel separation, global thresholding, morphological operations, and edge and contour detection, thereby detecting the concentration of the analyte. The process involves separating the channels of the image and allowing for application of a mask. This helps to eliminate noise from the image and hide all the non-relevant portions of the image, thereby helping in quantification with a higher level of accuracy.
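The global thresholding and morphological operations summarized above can be illustrated with a minimal sketch; the 3x3 dilation, toy channel image and threshold value below are illustrative stand-ins for whatever masking the implementation actually uses:

```python
import numpy as np

def global_threshold(channel, thresh):
    """Global thresholding: keep pixels above a fixed intensity."""
    return channel > thresh

def dilate(mask):
    """3x3 binary dilation (a minimal morphological operation) that
    thickens foreground regions and bridges one-pixel gaps."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    return out

# Toy single-channel image: a faint band with a one-pixel gap
channel = np.zeros((9, 9))
channel[4, 2:4] = 1.0
channel[4, 5:7] = 1.0
mask = global_threshold(channel, 0.5)
closed = dilate(mask)       # dilation bridges the gap in the band
```

Thresholding the separated channel yields a binary mask of candidate line pixels, and the morphological pass repairs small breaks so that the band is detected as one contiguous region.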
[00045] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[00046] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[00047] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
Claims:
1. A system (100) for quantifying concentration of an analyte using artificial intelligence image interpretation comprising:
a processing subsystem (105) hosted on a server (108) and configured to execute on a network (110) to control bidirectional communications among a plurality of modules comprising:
an image acquisition module (115) configured to receive an image with corresponding guard rails from a lateral flow assay in response to a user (135) capturing the image via a camera configured on a user device (140), wherein the image upon receiving is subjected to processing;
an image processing module (120) operatively coupled to the image acquisition module (115), wherein the image processing module (120) is configured to:
determine one or more frames from the image to identify a first region of interest wherein the first region of interest signifies a potential area to be used for quantification; and
apply a noise reduction technique to eliminate one or more unwanted signals from the first region of interest;
detect a top band and a bottom band within the first region of interest to determine a second region of interest wherein the second region of interest signifies an area for quantification;
a machine learning module (125) operatively coupled to the image processing module (120), wherein the machine learning module (125) is configured to:
calculate a ratio of the sums of the signal intensities upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band;
determine the concentration of the analyte based on a training set generated from a plurality of trial images, wherein the training set is fed as an input from a database (150) to a linear regression model to subsequently train the linear regression model, thereby predicting the concentration value of the analyte; and
a display module (130) operatively coupled to the machine learning module (125) wherein the display module (130) is configured to display the predicted concentration value of the analyte to the user (135).
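The quantification and prediction steps recited in claim 1 can be illustrated with a minimal sketch. This is not the claimed implementation; the function names, the use of summed 1-D band intensity profiles as inputs, and the ordinary least-squares fit standing in for the claimed linear regression model are all assumptions for illustration.

```python
import numpy as np

def band_intensity_ratio(top_band, bottom_band):
    """Ratio of summed signal intensities of the two detected bands.

    `top_band` and `bottom_band` are hypothetical 1-D arrays of pixel
    intensities; the claim does not fix their representation.
    """
    return float(np.sum(top_band) / np.sum(bottom_band))

def fit_linear_model(ratios, concentrations):
    """Least-squares line fit of concentration against band-intensity
    ratio, standing in for the claimed trained linear regression model."""
    slope, intercept = np.polyfit(ratios, concentrations, deg=1)
    return slope, intercept

def predict_concentration(ratio, slope, intercept):
    """Predict an analyte concentration from a new band-intensity ratio."""
    return slope * ratio + intercept
```

In use, the training set of ratio/concentration pairs derived from the trial images would be passed to `fit_linear_model`, and each new assay image would contribute one ratio to `predict_concentration`.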
2. The system as claimed in claim 1 wherein the image is zoomed multiple times to arrive at the potential area to be used for quantification.
3. The system as claimed in claim 1 wherein the image processing module (120) is configured to validate the image, upon applying the noise reduction technique, for occurrence of one or more control and test lines.
4. The system as claimed in claim 1 wherein the image processing module (120) is configured to apply a low-pass filter and signal smoothing to accurately detect one or more peaks in the signal.
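The filtering and peak detection of claim 4 can be sketched as follows. A moving average is one simple low-pass/smoothing choice, and strict local maxima are one simple peak criterion; neither is mandated by the claim, and the function names are illustrative.

```python
import numpy as np

def smooth(signal, window=5):
    """Moving-average low-pass filter: one simple smoothing choice."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def find_peaks(signal, threshold=0.0):
    """Indices where the signal is a strict local maximum above a
    threshold, suppressing small noise-induced extrema."""
    s = np.asarray(signal)
    return [i for i in range(1, len(s) - 1)
            if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > threshold]
```

Applied to an intensity profile taken along the strip, the surviving peaks would correspond to candidate control and test lines.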
5. The system as claimed in claim 1 wherein the predicted concentration value of the analyte is further validated based on a training set of a plurality of trial images wherein the training set is fed as an input to a Lagrangian Support Vector Machine.
6. The system as claimed in claim 1 wherein the top band and the bottom band are detected based on one or more peaks present towards a midpoint of the image.
7. The system as claimed in claim 1 wherein the image captured is converted into a Lab colour space image to identify the first region of interest.
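The colour conversion of claim 7 is typically done with a library call such as OpenCV's `cv2.cvtColor(img, cv2.COLOR_RGB2LAB)`; the self-contained sketch below shows the underlying sRGB-to-CIELAB transform (D65 white point) so the claim's wording is concrete. The function name and the assumption of float sRGB inputs in [0, 1] are illustrative, not taken from the specification.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 reference white).

    The L channel isolates lightness, which is what band/line
    detection would typically threshold on.
    """
    rgb = np.asarray(rgb, dtype=float)
    # Inverse sRGB gamma (linearise the channel values).
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix.
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    # Normalise by the D65 reference white, then apply the CIELAB curve.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```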
8. The system as claimed in claim 1 wherein the image is received from an immunochromatographic assay functioning through one of a generation and change of color in response to a variation of an analyte in a tested sample.
9. A method (300) for quantifying concentration of an analyte using artificial intelligence image interpretation comprising:
receiving, by an image acquisition module of a processing subsystem, an image with corresponding guard rails from a lateral flow assay in response to a user capturing the image via a camera configured on a user device, wherein the image upon receiving is subjected to processing; (305)
determining, by the image processing module of the processing subsystem, one or more frames from the image to identify a first region of interest wherein the first region of interest signifies a potential area to be used for quantification; (310)
applying, by the image processing module of the processing subsystem, a noise reduction technique to eliminate one or more unwanted signals from the first region of interest; (315)
detecting, by the image processing module of the processing subsystem, a top band and a bottom band within the first region of interest to determine a second region of interest wherein the second region of interest signifies an area for quantification; (320)
calculating, by a machine learning module of the processing subsystem, a ratio of the sum of the signal intensities, upon receiving the second region of interest, wherein the signal intensities correspond to a plurality of signals within the top band and the bottom band; (325)
determining, by the machine learning module of the processing subsystem, the concentration of the analyte based on a training set generated from a plurality of trial images wherein the training set is fed as an input to a linear regression model to subsequently train the linear regression model thereby predicting the concentration value of the analyte; (330) and
displaying, by a display module of the processing subsystem, the determined concentration of the analyte to the user. (335)
Dated this 10th day of February 2023
Signature
Jinsu Abraham
Patent Agent (IN/PA-3267)
Agent for the Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202341008846-DRAWING [20-07-2023(online)].pdf | 2023-07-20 |
| 2 | 202341008846-FORM-8 [22-04-2025(online)].pdf | 2025-04-22 |
| 3 | 202341008846-STATEMENT OF UNDERTAKING (FORM 3) [10-02-2023(online)].pdf | 2023-02-10 |
| 4 | 202341008846-US(14)-HearingNotice-(HearingDate-21-04-2025).pdf | 2025-02-27 |
| 5 | 202341008846-FER_SER_REPLY [20-07-2023(online)].pdf | 2023-07-20 |
| 6 | 202341008846-REQUEST FOR EARLY PUBLICATION(FORM-9) [10-02-2023(online)].pdf | 2023-02-10 |
| 7 | 202341008846-US(14)-ExtendedHearingNotice-(HearingDate-26-05-2025)-1300.pdf | 2025-04-17 |
| 8 | 202341008846-Correspondence to notify the Controller [16-04-2025(online)].pdf | 2025-04-16 |
| 9 | 202341008846-FORM 3 [20-07-2023(online)].pdf | 2023-07-20 |
| 10 | 202341008846-PROOF OF RIGHT [10-02-2023(online)].pdf | 2023-02-10 |
| 11 | 202341008846-FORM-26 [16-04-2025(online)].pdf | 2025-04-16 |
| 12 | 202341008846-OTHERS [20-07-2023(online)].pdf | 2023-07-20 |
| 13 | 202341008846-POWER OF AUTHORITY [10-02-2023(online)].pdf | 2023-02-10 |
| 14 | 202341008846-MSME CERTIFICATE [10-02-2023(online)].pdf | 2023-02-10 |
| 15 | 202341008846-FER.pdf | 2023-05-17 |
| 16 | 202341008846-FORM28 [10-02-2023(online)].pdf | 2023-02-10 |
| 17 | 202341008846-FORM-26 [28-02-2023(online)].pdf | 2023-02-28 |
| 18 | 202341008846-COMPLETE SPECIFICATION [10-02-2023(online)].pdf | 2023-02-10 |
| 19 | 202341008846-FORM-9 [10-02-2023(online)].pdf | 2023-02-10 |
| 20 | 202341008846-DECLARATION OF INVENTORSHIP (FORM 5) [10-02-2023(online)].pdf | 2023-02-10 |
| 21 | 202341008846-DRAWINGS [10-02-2023(online)].pdf | 2023-02-10 |
| 22 | 202341008846-FORM FOR SMALL ENTITY [10-02-2023(online)].pdf | 2023-02-10 |
| 23 | 202341008846-EVIDENCE FOR REGISTRATION UNDER SSI [10-02-2023(online)].pdf | 2023-02-10 |
| 24 | 202341008846-FORM 18A [10-02-2023(online)].pdf | 2023-02-10 |
| 25 | 202341008846-FORM 1 [10-02-2023(online)].pdf | 2023-02-10 |
| 26 | 202341008846-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-02-2023(online)].pdf | 2023-02-10 |
| 27 | 202341008846-FORM FOR SMALL ENTITY(FORM-28) [10-02-2023(online)].pdf | 2023-02-10 |
| 28 | 202341008846-Correspondence to notify the Controller [22-05-2025(online)].pdf | 2025-05-22 |
| 29 | 202341008846-Written submissions and relevant documents [05-06-2025(online)].pdf | 2025-06-05 |
| 30 | SearchHistoryE_15-05-2023.pdf | |
| 31 | SS_202341008846AE_08-11-2024.pdf | |