
A System And A Method For Classification Of Retinal Images Through Empirical Measures And Machine Learning

Abstract: A system for classification of retinal images through empirical measures and machine learning comprises an ophthalmic digital fundus camera (3.1), a cloud-based server component (3.2), an image viewer comprising either an ophthalmic viewer PC (3.3) or an ophthalmic viewer mobile phone (3.4), and reporting modules (3.5), the reporting modules being PCs or mobile phones which are optionally suitably distributed over the site. A method for classification of retinal images is also disclosed. FIG. 3


Patent Information

Application #
Filing Date
17 January 2020
Publication Number
30/2021
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
mail@seenergi.com
Parent Application

Applicants

EKLAKSHYA ACADEMY LLP
C-LITE BUILDING, KLE TECHNOLOGICAL UNIVERSITY VIDYANAGAR, HUBBALLI, DHARWAD KARNATAKA 580031, INDIA

Inventors

1. MOHANACHANDRAN, POORNIMA
241, 6TH MAIN, MAITHRI LAYOUT, HOPE FARM, WHITEFIELD, BANGALORE 560066, INDIA
2. G., SREELEKHA
SNEHADEEPAM, CHOYIMADAM ROAD, CHATHAMANGALAM P.O., KOZHIKODE -673601, KERALA, INDIA
3. NISHA, K. L.
3-188, ANNAI SARATHA ILLAM, AZHAGAN PANAI VILAI, KATTATHURAI (P.O.), KANYAKUMARI DISTRICT, PIN: 629158, TAMIL NADU, INDIA
4. SATHIDEVI, P. S.
ASHTAPADI, 12TH MILE, NITC CAMPUS P.O., CALICUT - 673601, KERALA, INDIA
5. VINEKAR, ANAND S
28, NANDIDURG ROAD, BENSON TOWN, BANGALORE 560046, INDIA

Specification

FIELD OF THE INVENTION
The present invention relates in general to ophthalmology and retinal fundus image analysis and in particular to a system and a method for retinal image classification (clinical decision) through empirical measures and machine learning. The system and method have several applications including, but not limited to, implementation of various methods of screening and image classification, research/study of image features and their impact on image classification and training of imaging technicians, study of inter clinician variations in decisions, study of consistency of decisions of one clinician and training new ophthalmologists. Retinal fundus images are obtained using ophthalmic digital fundus cameras.
BACKGROUND AND PRIOR ART
Automated and semi-automated methods for image analysis and classification are known in the prior art. Generally, the prior art innovations are related to segmentation and feature extraction methods and accuracy improvements of the same to improve the end result of classification.
Many of the known methods use machine learning approaches and deep learning networks like Convolutional Neural Network (CNN) for classification with various types of retinal images like Optical Coherence Tomography (OCT), 3D retinal images and retinal fundus images.
The accuracy of any classification system depends on the features used for classification. Therefore, current approaches primarily try to mimic the decision of a clinician, i.e., discuss with the clinician to identify the list of features (observations) used in decision making, use image processing to extract the features as defined by the clinician, and try to improve the extraction method to achieve better than 99% accuracy. Several years of research may be needed to gain a 0.5% improvement in accuracy, which may or may not impact the final classification results.

This approach, as currently practised in the prior art, is limited by the difficulty of making each feature extraction accurate, as well as by challenges related to inter-clinician variance and variance with respect to ethnicity. In these existing approaches, adding a new feature to improve classification accuracy can be time consuming, and new systems need to be built to classify images of different ethnicities. Any change or improvement in the classification approach would require a large amount of new technology development, which is time consuming and costly.
The CNN-based methods for retinal image classification use a hierarchical end-to-end supervised learning approach where the features are extracted automatically by the CNN. These networks are typically based on "big data", and the accuracy of feature extraction and classification depends on the size of the training set used. While using such systems, the significance of individual features, or of measures extracted from features, cannot be assessed, because the features extracted are unknown.
Regarding the type of images used, this tool is designed to classify retinal fundus images taken by any fundus camera. In comparison, OCT gives a cross-sectional view of the retina with the details of the retina's distinct layers, whereas retinal fundus images show the vascular structure of the retina. Hence the analysis methods applicable to retinal fundus images cannot be replicated for OCT images. Also, imaging modalities like 3D retinal images and interferometric images of the retina are difficult to capture, especially where a large population is involved. For applications like screening, where large-scale imaging is required, retinal fundus imaging can be used and requires less skill than OCT imaging.
Thus, there is a need in the field for a method for retinal image classification which has fully automated segmentation and feature extraction, is versatile, supports study of inter clinician variations of classifications and does not involve addition of substantial new technology development. The present invention seeks to fulfil this need.

OBJECTS OF THE INVENTION
Accordingly, the primary object of the invention is to build a classifier for retinal fundus images which can be trained for different clinical scenarios including telemedicine and retinal image-based research. The invention describes a classifier for retinal images using machine learning based on combination of multiple empirical features extracted from the image and other clinical data.
Another object of the invention is to save time in developing image classifier algorithms by targeting the level of accuracy required for decision making, rather than aiming for 100% accuracy with respect to ground truth, and also to model the decision making of different clinicians.
No other invention can support creating decision models of, say, two clinicians and help analyse differences in approach, if any. No other invention can create a decision model of one specific clinician during different periods of time, to study his or her consistency with respect to decision making.
As this method uses machine learning, the classifier can be trained differently for a new image set or for a new approach to classification or based on decisions of different clinicians.
Another object of the invention is to provide decision modelling in different use cases. The model can be used for study of inter clinician variations in decisions, study of consistency of decisions of one clinician and training new ophthalmologists and technicians.
How the foregoing objects are achieved will be clear from the following description. In this context it is clarified that the description provided is non-limiting and is only by way of explanation.
SUMMARY OF THE INVENTION
Accordingly, the present invention provides a system for classification of retinal images through empirical measures and machine learning, comprising an ophthalmic digital fundus camera, a cloud-based server component, an image viewer comprising either an ophthalmic viewer PC or an ophthalmic viewer mobile phone, and a plurality of reporting modules, the reporting modules being PCs or mobile phones, which are suitably distributed over the site.
In accordance with preferred embodiments of the system as described hereinbefore:
-said ophthalmic digital fundus camera is a paediatric wide-angle retinal fundus camera for capturing raw retinal image pixels and is adapted to upload them to the cloud-based server (3.2), which serves as the database of images and patient data and contains visualization and decision support algorithms;
-said ophthalmic digital fundus camera is used to capture images of the retina to first produce an original image, which is treated by the colour enhancement feature of the invention to produce a colour enhanced image, which enhances the accuracy of the observed results.
The present invention also provides a method for classification of retinal images through empirical measures and machine learning, using the system as described hereinbefore, the method comprising the steps of:
- obtaining a plurality of features through a feature extraction stage;
- extracting measures from these features in a measures-extraction stage;
- forming a final decision score by combining the measures together after giving appropriate weightage to them;
- deciding the similarity of an image to a class through the decision score; and
- building a classifier decision model, the parameters of which model, namely features, measures and weights, can be extracted for further study of inter-clinician variations in decisions, study of consistency of decisions of one clinician, training new ophthalmologists and technicians, and for analysis.

In accordance with preferred embodiments of the method as described hereinbefore:
-the features are obtained from the raw retinal image pixels or by using any appropriate transformation;
-the features are non-image based and comprise the patient details stored along with the image;
-the measures denote the information obtained from the features and could be obtained from a single feature or a group of features;
-the method of obtaining the measure and the features on which it is calculated is dependent on the desired application;
-the classifier can be trained as new classification approaches evolve and decisions taken based on multiple sets of features extracted to a certain level of accuracy, which are then combined empirically using different weights;
-fully automated segmentation and feature extraction are provided, requiring minimum manual intervention;
-said measures are unique measures calculated from features like number of leaf nodes denoting the branching rate and vessel density denoting percentage of pixels belonging to blood vessels, in addition to the features of tortuosity and width.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The nature and scope of the present invention will be better understood from the accompanying drawings, which are by way of illustration of a preferred embodiment and not by way of any sort of limitation. In the accompanying drawings:
Figure 1 is an illustration of a classifier model according to the present invention.
Figure 2 shows a classifier model for plus disease classification.

Figure 3 shows the flow of activity according to the present invention.
Figure 4 shows visualization aids with colour enhancement and feature segmentation [e.g. vessel segmentation].
DETAILED DESCRIPTION OF THE INVENTION
Having described the main features of the invention above, a more detailed and non-limiting description of a preferred embodiment will be given in the following paragraphs with reference to the accompanying drawings.
All through the specification, including the claims, the technical terms and abbreviations are to be interpreted in the broadest sense of the respective terms and include all similar items in the field known by other terms, as may be clear to persons skilled in the art. Any restriction or limitation referred to in the specification is solely by way of example and for understanding the present invention.
This invention relates to image classification and decision analysis. It describes a classifier for retinal fundus images. The expected functioning of the invention is to differentiate retinal images into different classes based on the weighted measure calculated from multiple feature sets. So far, the methodology has been validated in paediatric retinal images. The approach uses machine learning for classification and hence the system can be trained to create different classes of retinal images based on use cases. The working principle is as follows:
The input data are the retinal fundus images and the associated clinical data. Some examples of features are blood vessels or the optic disc. The features can be grouped into different sets based on similar characteristics or can be used individually. A measure is calculated from a single feature or multiple features and directly relates to the information conveyed by the features. Examples of such measures are vesselness measures, diameter, tortuosity and measures derived from histograms. Several such measures are combined together with different weightage to form the decision score. The input image is classified into different classes based on certain thresholds obtained dynamically from the training data for a specific specificity and sensitivity target.
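
As an editorial illustration of this working principle only (the measure names, weights and training scores below are hypothetical, and the actual values in the invention are learnt per use case), the following Python sketch combines several measures with empirical weights into a decision score and compares it against a threshold derived dynamically from training scores for a chosen sensitivity target:

    import numpy as np

    def decision_score(measures, weights):
        # Weighted combination of empirical measures into a single decision score.
        return float(np.dot(weights, measures))

    def learn_threshold(train_scores, train_labels, target_sensitivity=1.0):
        # Choose a threshold from training scores so that at least `target_sensitivity`
        # of the positive (disease) training images score at or above it.
        positive_scores = np.sort(train_scores[train_labels == 1])
        allowed_misses = int(np.floor((1.0 - target_sensitivity) * len(positive_scores)))
        return positive_scores[allowed_misses]

    # Hypothetical example: three measures (e.g. tortuosity, vessel density, leaf-node count)
    # combined with empirically determined weights.
    weights = np.array([0.5, 0.3, 0.2])
    measures = np.array([0.82, 0.41, 0.67])
    score = decision_score(measures, weights)

    # Threshold derived dynamically from (hypothetical) training scores for 100% sensitivity.
    train_scores = np.array([0.35, 0.42, 0.58, 0.61, 0.77, 0.80])
    train_labels = np.array([0, 0, 0, 1, 1, 1])
    threshold = learn_threshold(train_scores, train_labels, target_sensitivity=1.0)

    print("score:", score, "->", "positive" if score >= threshold else "negative")
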
At the end, the classifier model with parameters is obtained which can be deployed for the specific use case or can be used for further research. As stated before, the model can be used for study of inter clinician variations in decisions, study of consistency of decisions of one clinician and training new ophthalmologists.
• Since the information from multiple features is combined to form the decision, dependency on the absolute accuracy of measurements with respect to ground truth is minimized. Getting to high levels of accuracy with specific clinical parameters requires alignment with the clinician, development of mathematical models and computation, which is time consuming and often not feasible. Years of research have been spent to improve specificity and accuracy by 0.1 or 0.2 percent.
• There are typically high levels of inter-clinician variation in classification, and hence a single classifier that works across all these scenarios is practically impossible to build, which results in different classifiers being built. The proposed classifier can be trained with any type of classification approach, and clinicians can even analyze differences in approaches.
• No other invention can support creating decision models of multiple clinicians and help analyze differences in approach, if any (a sketch of such a comparison follows this list).
• No other invention can create a decision model of one specific clinician during different periods of time, to study his or her consistency with respect to decision making.
• New measures, other than conventional measures, are extracted from the image feature set and can improve the accuracy of classification.
• Thresholds for classification are derived from the training data/images and are available for review and learning, unlike in a CNN-based approach.
• The classifier can be trained as new classification approaches evolve.

• The algorithms of the decision support system are implemented/coded to run on the server.
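
As one hedged illustration of comparing the decision models of two clinicians (not a prescribed implementation; the measure vectors, labels and the use of logistic regression are assumptions for the example), the sketch below fits a simple weighted decision model separately to each clinician's classifications of the same images and compares the learnt weightage:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical measure vectors for a set of images
    # (e.g. tortuosity, vessel density, leaf-node count).
    X = np.array([[0.2, 0.10, 0.3],
                  [0.8, 0.35, 0.7],
                  [0.5, 0.20, 0.4],
                  [0.9, 0.40, 0.8],
                  [0.3, 0.15, 0.2],
                  [0.7, 0.30, 0.6]])

    # Classifications (clinical decisions) of the same images by two clinicians.
    labels_clinician_a = np.array([0, 1, 0, 1, 0, 1])
    labels_clinician_b = np.array([0, 1, 1, 1, 0, 1])

    def fit_decision_model(X, y):
        # Fit a simple linear decision model; the coefficients act as the learnt weightage.
        model = LogisticRegression().fit(X, y)
        return model.coef_[0]

    weights_a = fit_decision_model(X, labels_clinician_a)
    weights_b = fit_decision_model(X, labels_clinician_b)

    # The difference in weightage indicates which measures the two clinicians rely on differently.
    print("clinician A weights:", weights_a)
    print("clinician B weights:", weights_b)
    print("difference:", weights_a - weights_b)
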
The invention also includes a system that can learn the different classes used for classification or grouping, making the grouping learnable rather than pre-defined. The system includes a digital fundus camera, software to upload images and patient data, a server component, viewers and reporting software, which generates/makes available the details of the empirical decision [or model of the classifier], which can be used in education and further research.
In this invention, decisions are taken based on multiple sets of features extracted to a certain level of accuracy, which are then combined empirically using different weights. The novel feature of this invention is the usage of empirically determined features and weightage for decision making.
• Existing – Manual/semi-automated segmentation [vessels and optic disc] and feature extraction
New – Fully automated segmentation and feature extraction
Generally, any classification system requires a segmentation stage (blood vessels, optic disc, etc.) followed by a feature extraction stage where the relevant features are extracted. Most of the existing methods use manual steps either in the segmentation stage or in the feature extraction stage to reduce computation errors and to improve accuracy. This process is time consuming and expert dependent. The novel feature of our invention is that it is fully automated and hence requires minimum manual intervention.
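
The specification does not prescribe a particular segmentation algorithm; as a hedged illustration of fully automated vessel segmentation, the sketch below applies a multi-scale vesselness (Frangi) filter from scikit-image to the green channel of a fundus image and thresholds the response automatically. The function and parameter choices are assumptions for the example only.

    import numpy as np
    from skimage import io, img_as_float
    from skimage.filters import frangi, threshold_otsu

    def segment_vessels(fundus_path):
        # Fully automated sketch: green channel -> Frangi vesselness -> Otsu threshold.
        rgb = img_as_float(io.imread(fundus_path))
        green = rgb[..., 1]  # vessels show highest contrast in the green channel
        vesselness = frangi(green, sigmas=range(1, 6), black_ridges=True)
        return vesselness > threshold_otsu(vesselness)

    # Usage (hypothetical file name):
    # vessel_mask = segment_vessels("fundus_image.png")
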
• Existing – Tortuosity as a measure, dilation as a measure
New – Features: leaf node count, vessel density
New – Measures: histogram-based measures of tortuosity and width
The present invention uses unique measures calculated from features like the number of leaf nodes (denoting the branching rate) and vessel density (the percentage of pixels belonging to blood vessels), in addition to the existing features of tortuosity and width. Additionally, a histogram is calculated from the tortuosity and width values obtained from the different vessel branches, to be used as a measure. The additional information obtained from these measures therefore helps in improving the accuracy of classification.
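
A minimal sketch of how such measures could be computed from a binary vessel mask is given below (assuming scikit-image and SciPy; the exact definitions used in the invention may differ): vessel density as the fraction of vessel pixels, leaf-node count as the number of skeleton endpoints, and a normalised histogram of per-branch values (e.g. tortuosity or width) as an additional measure vector.

    import numpy as np
    from scipy.ndimage import convolve
    from skimage.morphology import skeletonize

    def vessel_density(vessel_mask):
        # Percentage of image pixels belonging to blood vessels.
        return 100.0 * vessel_mask.sum() / vessel_mask.size

    def leaf_node_count(vessel_mask):
        # Number of skeleton endpoints (leaf nodes), indicating the branching rate.
        skeleton = skeletonize(vessel_mask.astype(bool))
        # Count 8-connected neighbours of each skeleton pixel; endpoints have exactly one.
        neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int),
                              mode="constant") - skeleton
        return int(np.sum(skeleton & (neighbours == 1)))

    def histogram_measure(values, bins=10, value_range=(0.0, 1.0)):
        # Normalised histogram of per-branch values (e.g. tortuosity or width).
        hist, _ = np.histogram(values, bins=bins, range=value_range)
        return hist / max(hist.sum(), 1)
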
• Existing – Using a CNN-based classifier (requires a large image data set; model of the classifier unknown)
New – Decision modelling
New – Parameter fusion: measures based on segmented features with weightage for classification
New – Image classifier parameters [features, measures and weightage] available for study and analysis
New – Flexible model which can be expanded or generalized without redesigning
The existing methods either train using classified image sets or use high-level classifiers like CNN to improve the accuracy. CNN-based methods require a large image dataset and extensive training, which is not achievable in certain use cases. Additionally, the parameters of the classifier model are not available for modification or further research. Our invention uses fusion of measures based on segmented features with weightage for classification. The measures can be of individual features, a combination of features or a different representation of the same feature, conveying various summarizations of the information. Since this ensures the best features for a given target, the classifier can be trained using a relatively smaller dataset. Further, a classifier decision model is built, and the parameters of the model (features, measures and weights) can be extracted for further study and analysis.
Reference is made to figure 1 which shows a model of a classifier (1.0) according to the present invention.

The main parts of this sub-assembly are: a feature extraction stage (1.1) where a plurality of features is obtained. The features may be obtained from the raw retinal image pixels or by using any appropriate transformation. The features may also be non-image based, comprising the patient details stored along with the image. Next, a measures-extraction stage (1.2) extracts measures from these features. The measures denote the information obtained from the features and can be obtained from a single feature or a group of features. The method of obtaining the measure and the features on which it is calculated depends on the desired application. Then the measures are combined together after giving appropriate weightage to them, and a final decision score (1.3) is formed. The decision score decides the similarity of an image to a particular class (1.4). The image is said to belong to a particular class if it has the highest decision score for that class.
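
As a hedged sketch of this decision stage only (assuming one weight vector per class; the actual per-class weights and measures in the invention are learnt from training data and are hypothetical here), the image can be assigned the class with the highest weighted decision score:

    import numpy as np

    def classify(measures, class_weights):
        # Assign the image to the class with the highest weighted decision score.
        scores = {name: float(np.dot(w, measures)) for name, w in class_weights.items()}
        best_class = max(scores, key=scores.get)
        return best_class, scores

    # Hypothetical example with three measures and two classes.
    class_weights = {
        "normal": np.array([0.1, 0.6, 0.3]),
        "disease": np.array([0.5, 0.2, 0.3]),
    }
    label, scores = classify(np.array([0.82, 0.41, 0.67]), class_weights)
    print(label, scores)
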
The method was implemented for developing a plus disease classifier (2.0) for preterm infants and is shown as a sample system in Fig. 2. The block diagram shows the new features (2.1), the measures (2.2) and the decision threshold (2.3) computed for the classifier, aiming at 100% sensitivity for a particular class (2.4).
Reference is now made to figure 3. It shows the workflow of activity according to the present invention.
An example of the workflow is as follows:
A physician classifies a set of 100 images, enters his classification [clinical decision] and the system creates a model of the decision.
The system includes an ophthalmic digital fundus camera (3.1), software to upload images and patient data, a cloud-based server component (3.2), image viewer incorporating either ophthalmic viewer PC (3.3) or ophthalmic viewer mobile phone (3.4) and reporting modules (3.5), which are PCs or mobile phones provided with suitable hardware and software components.
The ophthalmic digital fundus camera (3.1) is a paediatric wide-angle retinal fundus camera. However, it can be any fundus camera; the present invention is applicable to all types of fundus cameras, and the one illustrated (3.1) here is just an example.
It captures the raw retinal image pixels and uploads them to the cloud-based server (3.2). This server is the database of images and patient data. The server contains the visualization and decision support algorithms.
The system provides secured connectivity between the server (3.2) and the image viewer which is either a PC (3.3) or a mobile phone (3.4). Reporting modules (3.5), which are PCs connected to the server, may be suitably distributed over the site.
It may be mentioned here that the same flow diagram is applicable for adult retinal images also and this is within the scope of the invention.
Figure 4 shows a special feature of the present invention. The system contains visualization aids with colour enhancement and feature segmentation [e.g. vessel segmentation] for ease of determination of the results of observation.
As shown in figure 4, segmented vessels (4.1) are selected for imaging by the ophthalmic digital fundus camera (3.1). The original image (4.2) captured by the camera is shown on the left side of the figure. This image is treated by the colour enhancement feature of the present invention, which produces a colour enhanced image (4.3). A comparison of the original image (4.2) captured by the camera and the colour enhanced image (4.3) shows that the details are far more clearly visible in the colour enhanced image, which makes it much easier and convenient for the observer to record the results. Thus, the accuracy of the observed results is enhanced by this feature.
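
The specification does not name a specific enhancement algorithm; as a hedged illustration only, the sketch below applies contrast-limited adaptive histogram equalisation (CLAHE) from scikit-image to a fundus image, which is one common way of making vessel detail more visible to an observer. The file name and clip limit are assumptions for the example.

    from skimage import io, exposure, img_as_float

    def colour_enhance(fundus_path, clip_limit=0.02):
        # Colour-enhancement sketch: CLAHE applied to the fundus image to improve vessel visibility.
        original = img_as_float(io.imread(fundus_path))
        enhanced = exposure.equalize_adapthist(original, clip_limit=clip_limit)
        return original, enhanced

    # Usage (hypothetical file name):
    # original, enhanced = colour_enhance("fundus_image.png")
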
Advantages:
Some of the non-limiting advantages of the present invention are mentioned in the list below. Other advantages will be clear to a person skilled in the art from the description provided above.
i) Since the system is designed to be used with multiple features and measures, the dependency of the accuracy of the classification system on the absolute accuracy of a particular feature is reduced. Therefore, extensive research to improve the absolute accuracy of the feature extraction algorithms is not necessary. This is very important in a field where inter-clinician variation is high.
ii) Since the final decision is based on multiple features and various measures derived from them, the usage of the exact features as used by a clinician is not required. Also, image-related properties can be experimented with to determine whether or not they impact the decision.
iii) Any new feature to improve the classification performance can easily be added to the system without the need to redesign the system.
iv) Decision modelling helps in understanding inter-clinician variances.
v) The features extracted, the method for calculation of the measures and the number of classes can be varied depending on the application or use case, hence making the system suitable for training and research.
The present invention has been described with reference to some drawings and a preferred embodiment purely for the sake of understanding and not by way of any limitation and the present invention includes all legitimate developments within the scope of what has been described herein before and claimed in the appended claims.

We Claim:
1. A system for classification of retinal images through empirical measures and machine learning comprising an ophthalmic digital fundus camera (3.1), a cloud-based server component (3.2), an image viewer comprising either an ophthalmic viewer PC (3.3) or an ophthalmic viewer mobile phone (3.4) and a plurality of reporting modules (3.5), the reporting modules being PCs or mobile phones, which are suitably distributed over the site.
2. The system for classification of retinal images as claimed in claim 1, wherein said ophthalmic digital fundus camera (3.1) is a paediatric wide-angle retinal fundus camera for capturing raw retinal image pixels and is adapted to upload them to the cloud-based server (3.2), which serves as the database of images and patient data and contains visualization and decision support algorithms.
3. The system for classification of retinal images as claimed in claim 1, wherein said ophthalmic digital fundus camera (3.1) is used to capture images of the retina to first produce an original image (4.2), which is treated by the colour enhancement feature of the invention to produce a colour enhanced image (4.3), which enhances the accuracy of the observed results.
4. A method for classification of retinal images through empirical measures and machine learning, using the system as claimed in claims 1-3, the method comprising the steps of:
- obtaining a plurality of features through a feature extraction stage (1.1);
- extracting measures from these features in a measures-extraction stage (1.2);
- forming a final decision score (1.3) by combining the measures together after giving appropriate weightage to them;

- deciding the similarity of an image to a class (1.4) through the decision score; and
- building a classifier decision model, the parameters of which model, namely features, measures and weights, can be extracted for further study of inter-clinician variations in decisions, study of consistency of decisions of one clinician, training new ophthalmologists and technicians, and for analysis.

5. The method for classification of retinal images as claimed in claim 4, wherein the features are obtained from the raw retinal image pixels or by using any appropriate transformation.
6. The method for classification of retinal images as claimed in claim 4, wherein the features are non-image based and comprise the patient details stored along with the image.
7. The method for classification of retinal images as claimed in claim 4, wherein the measures denote the information obtained from the features and could be obtained from a single feature or a group of features.
8. The method for classification of retinal images as claimed in claim 4, wherein the method of obtaining the measure and the features on which it is calculated is dependent on the desired application.
9. The method for classification of retinal images as claimed in claim 4, wherein the classifier (1.0) can be trained as new classification approaches evolve and decisions taken based on multiple sets of features extracted to a certain level of accuracy, which are then combined empirically using different weights.
10. The method for classification of retinal images as claimed in claim 4, wherein fully automated segmentation and feature extraction are provided, requiring minimum manual intervention.

11. The method for classification of retinal images as claimed in claim 4, wherein said measures are unique measures calculated from features like number of leaf nodes denoting the branching rate and vessel density denoting percentage of pixels belonging to blood vessels, in addition to the features of tortuosity and width.
12. The method for classification of retinal images as claimed in claim 4, wherein fusion of measures is done, based on segmented features with weightage for classification and ensuring the best features for a given target, so that the classifier can be trained using a relatively smaller dataset.

Documents

Application Documents

# Name Date
1 202041002182-COMPLETE SPECIFICATION [17-01-2020(online)].pdf 2020-01-17
2 202041002182-DECLARATION OF INVENTORSHIP (FORM 5) [17-01-2020(online)].pdf 2020-01-17
3 202041002182-DRAWINGS [17-01-2020(online)].pdf 2020-01-17
4 202041002182-FIGURE OF ABSTRACT [17-01-2020(online)].jpg 2020-01-17
5 202041002182-FORM 1 [17-01-2020(online)].pdf 2020-01-17
6 202041002182-FORM 18 [17-01-2020(online)].pdf 2020-01-17
7 202041002182-STATEMENT OF UNDERTAKING (FORM 3) [17-01-2020(online)].pdf 2020-01-17
8 202041002182-FORM-26 [18-06-2020(online)].pdf 2020-06-18
9 202041002182-Proof of Right [18-06-2020(online)].pdf 2020-06-18
10 202041002182-FER.pdf 2021-10-18
11 202041002182-CLAIMS [04-03-2022(online)].pdf 2022-03-04
12 202041002182-COMPLETE SPECIFICATION [04-03-2022(online)].pdf 2022-03-04
13 202041002182-FER_SER_REPLY [04-03-2022(online)].pdf 2022-03-04
14 202041002182-OTHERS [04-03-2022(online)].pdf 2022-03-04
15 202041002182-Response to office action [05-07-2023(online)].pdf 2023-07-05
16 202041002182-US(14)-HearingNotice-(HearingDate-14-02-2024).pdf 2024-01-16
17 202041002182-US(14)-ExtendedHearingNotice-(HearingDate-14-02-2024).pdf 2024-01-16
18 202041002182-Correspondence to notify the Controller [01-02-2024(online)].pdf 2024-02-01
19 202041002182-US(14)-ExtendedHearingNotice-(HearingDate-15-03-2024).pdf 2024-02-09
20 202041002182-Correspondence to notify the Controller [11-03-2024(online)].pdf 2024-03-11
21 202041002182-Annexure [28-03-2024(online)].pdf 2024-03-28
22 202041002182-FORM FOR STARTUP [28-03-2024(online)].pdf 2024-03-28
23 202041002182-FORM-8 [28-03-2024(online)].pdf 2024-03-28
24 202041002182-OTHERS [28-03-2024(online)].pdf 2024-03-28
25 202041002182-Written submissions and relevant documents [28-03-2024(online)].pdf 2024-03-28
26 202041002182-Response to office action [03-03-2025(online)].pdf 2025-03-03
27 202041002182-Response to office action [21-04-2025(online)].pdf 2025-04-21
28 202041002182-Response to office action [30-04-2025(online)].pdf 2025-04-30
29 202041002182-Response to office action [24-06-2025(online)].pdf 2025-06-24
30 202041002182-Response to office action [24-07-2025(online)].pdf 2025-07-24

Search Strategy

1 searchstrategy_202041002182E_26-08-2021.pdf