
Retinal Fundus Progression Predictive System

Abstract: A computer implemented system 100 to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user is disclosed. The system 100 comprises a retinal fundus appearance prediction application 103 comprising: a processing means 103d to predict the retinal fundus appearance in the at least one future point-in-time using the input retinal fundus image of the user, comprising: determine a plurality of user-specific parameters for the input retinal fundus image at the initial point-in-time using a plurality of neural networks; calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks; and display the at least one future retinal fundus image for each of the at least one future point-in-time. (Figure 1)


Patent Information

Application #:
Filing Date: 09 March 2018
Publication Number: 37/2019
Publication Type: INA
Invention Field: BIO-MEDICAL ENGINEERING
Status:
Email: kraji@artelus.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2021-08-31
Renewal Date:

Applicants

Artificial Learning Systems India Pvt Ltd
Hansa Complex, 1665/A, Second Floor, 14th Main, 7th Sector, HSR Layout, Bengaluru, Karnataka 560102, India

Inventors

1. Rajarajeshwari Kodhandapani
No.139 2nd Cross, 7th Block, Koramangala, Bangalore 560095.
2. Pradeep Walia
6138 Boundary Road, Downers Grove, Illinois 60516.
3. Raja Raja Lakshmi
No.139 2nd Cross, 7th Block, Koramangala, Bangalore 560095, Karnataka, India

Specification

Claims:

We claim:
1. A computer implemented system 100 to predict a retinal fundus appearance of a user in at least one future point-in-time using an input retinal fundus image of the user, comprising:

at least one processor;
a non-transitory computer readable storage medium communicatively coupled to the at least one processor, the non-transitory computer readable storage medium configured to store a retinal fundus appearance prediction application 103, the at least one processor configured to execute the retinal fundus appearance prediction application 103; and
the retinal fundus appearance prediction application 103 comprising:
a graphical user interface 103e comprising a plurality of interactive elements 103f configured to enable capture and processing of the input retinal fundus image of the user via a user device 101a, 101b or 101c;
a reception means 103a adapted to receive an input retinal fundus image of a user, an initial point-in-time corresponding to the input retinal fundus image and a request for at least one future retinal fundus image of the user for at least one future point-in-time;
an interactive retinal fundus image rendering means 103b adapted to dynamically render the input retinal fundus image, wherein the dynamically rendered input retinal fundus image is configurably accessible on the graphical user interface 103e via the user device 101a, 101b or 101c using the interactive elements 103f;
a retinal fundus image capture means 103c adapted to capture the input retinal fundus image of the user based on the dynamically rendered input retinal fundus image; and
a processing means 103d adapted to predict the retinal fundus appearance in the at least one future point-in-time using the input retinal fundus image of the user, comprising:
determine a plurality of user-specific parameters for the input retinal fundus image at the initial point-in-time using a plurality of neural networks;
calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks; and
display the at least one future retinal fundus image for each of the at least one future point-in-time, wherein the at least one future retinal fundus image for a corresponding future point-in-time depicts the user-specific parameters in the input retinal fundus image at the corresponding future point-in-time.
2. The system 100 as claimed in claim 1, wherein the user-specific parameters comprise an age of the user, an ethnicity of the user, a lifestyle of the user, a geographical location of the user and a plurality of environmental conditions of the geographical location of the user.
3. The system 100 as claimed in claim 1, wherein the processing means 103d is further adapted to generate a time lapse video from a start point-in-time to an end point-in-time based on the input retinal fundus image.
4. The system 100 as claimed in claim 1, wherein the processing means 103d is adapted to predict one or more pathological conditions associated with each of the at least one future point-in-time based on the calculated user-specific parameters for the at least one future point-in-time.
5. A method to predict a retinal fundus appearance of a user in at least one future point-in-time using an input retinal fundus image of the user using a computer implemented system 100, comprising:
receiving an input retinal fundus image of a user, an initial point-in-time corresponding to the input retinal fundus image and a request for at least one future retinal fundus image of the user for at least one future point-in-time;
determining a plurality of user-specific parameters for the input retinal fundus image at the initial point-in-time using a plurality of neural networks;
calculating the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks; and
displaying the at least one future retinal fundus image for each of the at least one future point-in-time, wherein the at least one future retinal fundus image for a corresponding future point-in-time depicts the user-specific parameters in the input retinal fundus image at the corresponding future point-in-time.
6. The method as claimed in claim 5, wherein the user-specific parameters comprise an age of the user, an ethnicity of the user, a lifestyle of the user, a geographical location of the user and a plurality of environmental conditions of the geographical location.
7. The method as claimed in claim 5, further comprising generating a time lapse video from a start point-in-time to an end point-in-time based on the input retinal fundus image.
8. The method as claimed in claim 5, further comprising predicting one or more pathological conditions associated with each of the at least one future point-in-time based on the calculated user-specific parameters for the at least one future point-in-time.
Description:
FORM 2

THE PATENTS ACT, 1970

(39 of 1970)

COMPLETE SPECIFICATION

(See section 10; rule 13)

1. TITLE OF THE INVENTION

RETINAL FUNDUS PROGRESSION PREDICTIVE SYSTEM

2. APPLICANT:

a. Name: ARTIFICIAL LEARNING SYSTEMS INDIA PVT LTD

b. Nationality: INDIA

c. Address: 1665/A, 14th Main Rd, Sector 7, HSR Layout, Bengaluru,
Karnataka 560102, India.

Complete specification:

The following specification particularly describes the invention and the manner in which it is to be performed.


Technical field of the invention

[0001] The invention relates to an automated prediction of a retinal fundus appearance of a subject over time. More particularly, the invention relates to using a plurality of neural networks to automatically predict changes in a structure of the retinal fundus of the subject for at least one future time-point using a retinal fundus image of the subject.

Background of the invention

[0002] A retinal fundus of an individual is the eye’s interior surface, which provides indications of the individual’s overall health condition. The appearance of the retinal fundus of the individual changes as a function of time. The retinal fundus images of the individual over time provide insight into the retinal pathological changes associated with the individual. The retinal pathological changes have been shown to be associated with many diseases, including systemic diseases, for example, stroke, hypertension, diabetes, and cardiovascular diseases such as coronary heart disease and cerebral vascular disease. More efficient and individual-specific health management programs can be planned and developed by understanding the individual's retinal pathological changes in the retinal fundus images over time. This requires precise prediction of the retinal pathological changes in the individual's retinal fundus over time. Thus, an automated system providing predictive retinal pathological changes in the retinal fundus of the individual over time and assisting better management of the individual’s existing wellbeing practices is required.

Summary of invention

[0003] This summary is provided to introduce a selection of concepts in a simplified form that are further disclosed in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the claimed subject matter, nor is it intended for determining the scope of the claimed subject matter.

[0004] The present invention discloses a computer implemented system to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user. The system comprises at least one processor; a non-transitory computer readable storage medium communicatively coupled to the at least one processor, the non-transitory computer readable storage medium configured to store a retinal fundus appearance prediction application, the at least one processor configured to execute the retinal fundus appearance prediction application; and the retinal fundus appearance prediction application comprising: a graphical user interface comprising a plurality of interactive elements configured to enable capture and processing of the input retinal fundus image of the user via a user device; a reception means adapted to receive an input retinal fundus image of a user, an initial point-in-time corresponding to the input retinal fundus image and a request for at least one future retinal fundus image of the user for at least one future point-in-time; an interactive retinal fundus image rendering means adapted to dynamically render the input retinal fundus image, wherein the dynamically rendered input retinal fundus image is configurably accessible on the graphical user interface via the user device using the interactive elements; a retinal fundus image capture means adapted to capture the input retinal fundus image of the user based on the dynamically rendered input retinal fundus image; and a processing means adapted to predict the retinal fundus appearance in the at least one future point-in-time using the input retinal fundus image of the user, comprising: determine a plurality of user-specific parameters for the input retinal fundus image at the initial point-in-time using a plurality of neural networks; calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks; and display the at least one future retinal fundus image for each of the at least one future point-in-time, wherein the at least one future retinal fundus image for a corresponding future point-in-time depicts the user-specific parameters in the input retinal fundus image at the corresponding future point-in-time.

Brief description of the drawings

[0005] The present invention is described with reference to the accompanying figures. The accompanying figures, which are incorporated herein, are given by way of illustration only and form part of the specification, together with the description, to explain how to make and use the invention, in which,

[0006] Figure 1 illustrates a block diagram of a computer implemented system to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user in accordance with the invention;

[0007] Figure 2 exemplarily illustrates the architecture of a computer system employed by a retinal fundus appearance prediction application; and

[0008] Figure 3 illustrates a flowchart to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user in accordance with the invention.

Detailed description of the invention

[0009] Figure 1 illustrates a block diagram of a computer implemented system 100 to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user in accordance with the invention. The system 100 comprises at least one processor; a non-transitory computer readable storage medium communicatively coupled to the at least one processor, the non-transitory computer readable storage medium configured to store a retinal fundus appearance prediction application 103, the at least one processor configured to execute the retinal fundus appearance prediction application 103; and the retinal fundus appearance prediction application 103 comprising: a graphical user interface (GUI) 103e comprising a plurality of interactive elements 103f configured to enable capture and processing of the input retinal fundus image of the user via a user device; a reception means 103a adapted to receive an input retinal fundus image of a user, an initial point-in-time corresponding to the input retinal fundus image and a request for at least one future retinal fundus image of the user for at least one future point-in-time; an interactive retinal fundus image rendering means 103b adapted to dynamically render the input retinal fundus image, wherein the dynamically rendered input retinal fundus image is configurably accessible on the GUI 103e via the user device using the interactive elements 103f; a retinal fundus image capture means 103c adapted to capture the input retinal fundus image of the user based on the dynamically rendered input retinal fundus image; and a processing means 103d adapted to predict the retinal fundus appearance in the at least one future point-in-time using the input retinal fundus image of the user, comprising: determine a plurality of user-specific parameters for the input retinal fundus image at the initial point-in-time using a plurality of neural networks; calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks; and display the at least one future retinal fundus image for each of the at least one future point-in-time, wherein the at least one future retinal fundus image for a corresponding future point-in-time depicts the user-specific parameters in the input retinal fundus image at the corresponding future point-in-time.

[0010] As used herein, the term “user” refers to an individual receiving or registered to receive a prediction of the retinal fundus appearance in at least one future point-in-time using the user’s input retinal fundus image. The user is, for example, an individual undergoing a regular health checkup, an individual with a medical condition, for example, diabetes mellitus, an individual curious to know a future health condition based on the retinal fundus appearance in at least one future point-in-time, etc. As used herein, the term “input retinal fundus image” refers to a two-dimensional array of digital image data; however, this is merely illustrative and not limiting of the scope of the invention.

[0011] In an embodiment, the retinal fundus appearance prediction application 103 is a web application implemented on a web based platform, for example, a website hosted on a server or a setup of servers. For example, the retinal fundus appearance prediction application 103 is implemented on a web based platform, for example, a retinal fundus image processing platform 104 as illustrated in Figure 1.

[0012] The retinal fundus image processing platform 104 hosts the retinal fundus appearance prediction application 103. The retinal fundus appearance prediction application 103 is accessible to one or more user devices 101a, 101b or 101c. In an example, the user device 101a, 101b or 101c is accessible over a network 102 such as the internet, a mobile telecommunication network, a Wi-Fi® network of the Wireless Ethernet Compatibility Alliance, Inc., etc. The retinal fundus appearance prediction application 103 is accessible through browsers such as Internet Explorer® (IE) 8, IE 9, IE 10 and IE 11 of Microsoft Corporation, Safari® of Apple Inc., Mozilla® Firefox® of Mozilla Foundation, Chrome of Google, Inc., etc., and is compatible with technologies such as hypertext markup language 5 (HTML5), etc. The user device 101a, 101b or 101c is, for example, a personal computer, a laptop, a tablet computing device, a personal digital assistant, a client device, a web browser, etc.

[0013] In another embodiment, the retinal fundus appearance prediction application 103 is configured as a software application, for example, a mobile application downloadable by the user on the user device 101a, 101b or 101c, for example, a tablet computing device, a mobile phone, etc.

[0014] In an embodiment, an operator captures the input retinal fundus image of the user. As used herein, the term “operator” is an individual who operates the retinal fundus appearance prediction application 103 to capture the retinal fundus images of the user and generate a report describing the predictions of the retinal fundus appearance in the at least one future point-in-time using the captured input retinal fundus image of the user. In an embodiment, the user may also be the operator and thus, the terms “user” and “operator” may be used interchangeably herein.

[0015] The retinal fundus appearance prediction application 103 is accessible by the user device 101a, 101b or 101c via the GUI 103e provided by the retinal fundus appearance prediction application 103. In an example, the retinal fundus appearance prediction application 103 is accessible over the network 102. The network 102 is, for example, the internet, an intranet, a wireless network, a wired network, a Wi-Fi® network of the Wireless Ethernet Compatibility Alliance, Inc., a universal serial bus (USB) communication network, a ZigBee® network of ZigBee Alliance Corporation, a general packet radio service (GPRS) network, a global system for mobile (GSM) communications network, a code division multiple access (CDMA) network, a third generation (3G) mobile communication network, a fourth generation (4G) mobile communication network, a wide area network, a local area network, an internet connection network, an infrared communication network, etc., or any combination of these networks.

[0016] The retinal fundus appearance prediction application 103 comprises the GUI 103e comprising a plurality of interactive elements 103f configured to enable capture and processing of the input retinal fundus image via the user device 101a, 101b or 101c. As used herein, the term “interactive elements 103f” refers to interface components on the GUI 103e configured to perform a combination of processes, for example, retrieving the input received from the user, for example, the retinal fundus images of the user, and processes that enable real time user interactions, etc. The interactive elements 103f comprise, for example, clickable buttons, icons, check-boxes, etc.

The retinal fundus appearance prediction application 103 comprises the reception means 103a adapted to receive the input retinal fundus image of the user, the initial point-in-time corresponding to the input retinal fundus image and the request for at least one future retinal fundus image of the user for at least one future point-in-time. In an embodiment, the reception means 103a receives a plurality of input retinal fundus images of the user. As used herein, the term “point-in-time” refers to an instance of time. The term “initial point-in-time” refers to the instance of time when the input retinal fundus image of the user is captured using an image capturing device. As used herein, the term “image capturing device” refers to a camera for photographing the fundus of the user. In an example, the image capturing device is a Zeiss FF 450+ fundus camera comprising a charge-coupled device (CCD) photographic unit. In another example, the image capturing device is a smart phone with a camera capable of capturing the retinal fundus images of the user.

[0017] The reception means 103a also receives the request for the at least one future retinal fundus image of the user for the at least one future point-in-time. In an embodiment, the operator enters the request indicating the at least one future point-in-time, for example, a future date, a future month, a future year, etc., via the GUI 103e for which the user desires to view the at least one future retinal fundus images. In an example, the operator selects the desired at least one future point-in-time from a calendar, a drop-down menu, check-boxes, etc., displayed on the GUI 103e.

[0018] In an embodiment, the reception means 103a also receives information associated with the user from the user device, for example, 101a, 101b or 101c via the GUI 103e. The information associated with the user is, for example, personal details about the user, a gender of the user, hereditary disease details of the user, etc. In an example, the operator enters the information associated with the user.

[0019] The image capturing device is in communication with the retinal fundus appearance prediction application 103 via the network 102, for example, the internet, an intranet, a wireless network, a wired network, a Wi-Fi® network of the Wireless Ethernet Compatibility Alliance, Inc., a universal serial bus (USB) communication network, a ZigBee® network of ZigBee Alliance Corporation, a general packet radio service (GPRS) network, a global system for mobile (GSM) communications network, a code division multiple access (CDMA) network, a third generation (3G) mobile communication network, a fourth generation (4G) mobile communication network, a wide area network, a local area network, an internet connection network, an infrared communication network, etc., or any combination of these networks.

[0020] The retinal fundus appearance prediction application 103 accesses the image capturing device to receive the input retinal fundus image. The retinal fundus appearance prediction application 103 comprises a transmission means to request the image capturing device for a permission to control the activities of the image capturing device to capture the input retinal fundus image associated with the user. The image capturing device responds to the request received from the transmission means. The reception means 103a receives the response from the image capturing device.

[0021] In other words, the image capturing device permits the user of the retinal fundus appearance prediction application 103 to control the activities of the image capturing device via the interactive elements 103f of the GUI 103e. As used herein, the term “activities” refers to viewing a live mode of the retinal fundus of the user on a screen of the GUI 103e, focusing the field of view by zooming in or zooming out to observe the retinal fundus of the user, and capturing the input retinal fundus image of the user from the displayed live mode of the retinal fundus of the user.

[0022] Once the retinal fundus appearance prediction application 103 has the permission to control the activities of the image capturing device, the operator of the retinal fundus appearance prediction application 103 can view the input retinal fundus image of the user via the image capturing device on the screen of the GUI 103e. The interactive retinal fundus image rendering means 103b dynamically renders the input retinal fundus image on the GUI 103e. The dynamically rendered input retinal fundus image is configurably accessible on the GUI 103e via the user device 101a, 101b or 101c using the interactive elements 103f. The field of view of the image capturing device is displayed on a screen of the GUI 103e via the user device 101a, 101b or 101c. The operator can focus the field of view by zooming in or zooming out to observe the fundus of the user by using the interactive elements 103f via a user input device such as a mouse, a trackball, a joystick, etc.

[0023] The retinal fundus image capture means 103c is adapted to capture the input retinal fundus image of the user based on the dynamically rendered input retinal fundus image. In other words, the operator captures the retinal fundus image of the user from the displayed live mode of the fundus of the user using the interactive elements 103f of the GUI 103e via the user device 101a, 101b or 101c. As used herein, the term “live mode” refers to the seamless display of the fundus of the user in real time via the GUI 103e. The reception means 103a automatically considers the instance of time when the input retinal fundus image was captured by the operator as the initial point-in-time corresponding to the captured input retinal fundus image. In an embodiment, the operator manually enters the initial point-in-time for the captured input retinal fundus image using the interactive elements 103f of the GUI 103e.

[0024] In an embodiment, the input retinal fundus image is an already existing retinal fundus image of the user stored in the database 104a. The operator selects the already existing retinal fundus image of the user and enters the initial point-in-time corresponding to the already existing retinal fundus image via the GUI 103e.

[0025] The processing means 103d determines the user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks. The user-specific parameters comprise an age of the user, an ethnicity of the user, a lifestyle of the user, a geographical location of the user, and a plurality of environmental conditions of the geographical location. The lifestyle of the user comprises an occupation of the user, smoking habits of the user, drinking habits of the user, etc. The environmental conditions are, for example, smog conditions, automotive traffic, pollution conditions, etc. The geographical location is, for example, a city, a countryside location, etc.
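The following is a minimal illustrative sketch, in Python, of how the user-specific parameters listed above could be grouped into a single record; the field names and types are assumptions made for the illustration and are not prescribed by the invention.

from dataclasses import dataclass, field

@dataclass
class UserSpecificParameters:
    age: float                               # age of the user in years
    ethnicity: str                           # ethnicity of the user
    lifestyle: dict                          # e.g. {"occupation": ..., "smoking": ..., "drinking": ...}
    geographical_location: str               # e.g. "city" or "countryside"
    environmental_conditions: list = field(default_factory=list)  # e.g. ["smog", "traffic", "pollution"]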

[0026] As used herein, the term “neural networks” refers to a class of deep artificial neural networks that can be applied to analyzing visual imagery. The neural networks correspond to a specific model of an artificial neural network. The neural networks are trained over a training dataset of retinal fundus images to accomplish a function associated with the neural networks. Here, the term “function” of the neural networks comprises processing the input retinal fundus image to determine the user-specific parameters for the input retinal fundus image at the initial point-in-time and to calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time. In an embodiment, the system 100 uses a single neural network and trains the single neural network over the training dataset of retinal fundus images to accomplish the function.
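As an illustration only, the sketch below shows one possible form such a network could take: a small convolutional backbone with one output head per user-specific parameter (here an age regression head and an ethnicity classification head). The architecture, layer sizes and head names are assumptions made for the sketch and do not reflect the actual networks of the system 100.

import torch
import torch.nn as nn

class ParameterNet(nn.Module):
    """Maps one fundus image to estimates of several user-specific parameters."""
    def __init__(self, num_ethnicities=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.age_head = nn.Linear(32, 1)                       # regression head: age
        self.ethnicity_head = nn.Linear(32, num_ethnicities)   # classification head: ethnicity

    def forward(self, fundus_image):
        features = self.backbone(fundus_image)
        return {
            "age": self.age_head(features),
            "ethnicity": self.ethnicity_head(features),
        }

# Example: one 512x512 RGB fundus image -> dictionary of parameter estimates.
params = ParameterNet()(torch.rand(1, 3, 512, 512))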

[0027] The trained neural networks determine the user-specific parameters for the input retinal fundus image at the initial point-in-time. Further, the trained neural networks use the determined user-specific parameters for the input retinal fundus image at the initial point-in-time as a reference to calculate the user-specific parameters for each of the at least one future point-in-time. For example, the user selects two successive years as the two future points-in-time to view the retinal fundus appearance via the GUI 103e. The processing means 103d determines the user-specific parameters for the input retinal fundus image at the initial point-in-time using the trained neural networks. Further, the processing means 103d calculates the user-specific parameters for each of the two successive years using the trained neural networks based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time.
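A minimal sketch of how the parameters determined at the initial point-in-time could serve as the reference for each requested future point-in-time is given below; the recurrent projection step and the flat parameter vector are assumptions made for the illustration, not the invention's prescribed method.

import torch
import torch.nn as nn

class ParameterProjector(nn.Module):
    """Rolls the user-specific parameter vector forward to each future point-in-time."""
    def __init__(self, num_params=5, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.GRUCell(num_params + 1, hidden)   # +1 input for the elapsed time
        self.readout = nn.Linear(hidden, num_params)

    def forward(self, initial_params, future_offsets_years):
        h = torch.zeros(initial_params.size(0), self.hidden)
        params, outputs = initial_params, []
        for offset in future_offsets_years:               # e.g. [1.0, 2.0] for two successive years
            step = torch.cat([params, torch.full_like(params[:, :1], offset)], dim=1)
            h = self.cell(step, h)
            params = self.readout(h)
            outputs.append(params)                        # parameters at this future point-in-time
        return outputs

# Example: project a 5-value parameter vector one and two years ahead.
future_params = ParameterProjector()(torch.rand(1, 5), [1.0, 2.0])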

[0028] In an embodiment, the neural networks are trained to determine the user-specific parameters at the initial point-in-time using the training dataset. The retinal fundus images in the training dataset are manually annotated and categorized by an annotator for each of the user-specific parameters. The annotator collects the user-specific information for each individual’s retinal fundus image in the training dataset from the associated individual. For example, one of the user-specific parameters is the age of the user. The retinal fundus images in the training dataset are collected from several individuals. The retinal fundus images are manually annotated and labelled with respect to the age of the related individuals by the annotator. The neural networks learn to determine the age of the user based on the manually annotated and labelled retinal fundus images in the training dataset. Similarly, the annotator annotates and labels each of the retinal fundus images for each of the user-specific parameters. Here, the annotator is an individual who records the user-specific parameters for the retinal fundus images in the training dataset and creates a label for each of the retinal fundus images in the training dataset. Each of the retinal fundus images in the training dataset is associated with a label. The label of each of the retinal fundus images in the training dataset provides information about each of the user-specific parameters at the initial point-in-time.
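Purely as an illustration of the training step described above, the sketch below shows one annotated training record and a single supervised update, reusing the ParameterNet sketch from the earlier paragraph; the field names, label values and loss mix are hypothetical.

import torch
import torch.nn as nn

record = {
    "fundus_image": torch.rand(1, 3, 512, 512),   # one training retinal fundus image
    "age": torch.tensor([[63.0]]),                # age annotated by the annotator
    "ethnicity": torch.tensor([2]),               # ethnicity category index from the label
}

model = ParameterNet()                            # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

prediction = model(record["fundus_image"])
loss = nn.functional.mse_loss(prediction["age"], record["age"]) \
       + nn.functional.cross_entropy(prediction["ethnicity"], record["ethnicity"])
loss.backward()
optimizer.step()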

[0029] Similarly, the annotator further generates the label with an expected change in the user-specific parameters for a plurality of future points-in-time for each of the retinal fundus images in the training dataset. The neural networks are trained to learn the expected change in each of the user-specific parameters for each of the future points-in-time using the labels created for the retinal fundus images in the training dataset. In an example, the user-specific parameter is a familial trend associated with the user. The familial trend may be an indication of hereditary diseases. Consider, for example, that the familial trend of the user indicates a high tendency of developing diabetes. The annotator creates the label with the expected change in the familial trend of the user over a future period of time comprising multiple regular intervals of future points-in-time, such as every quarter of a year. As used herein, the term “expected change” is a variation in the appearance of the retinal fundus image of an individual over time. The appearance of the retinal fundus image in turn indicates a retinal anatomical change based on the user-specific parameters at the initial point-in-time. In an example, the expected change in the user-specific parameters of a patient demonstrates a progression of one or more diseases such as diabetic retinopathy, cataract, etc. In another example, the expected change in the user-specific parameters shows the retinal fundus evolution of a healthy child over a period of time.
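The sketch below illustrates, with hypothetical keys and values, the shape such an expected-change label could take when the annotator records the expected change at regular quarterly future points-in-time for the familial-trend example above.

expected_change_label = {
    "familial_trend": "high tendency of developing diabetes",
    "expected_change": {                      # one entry per quarterly future point-in-time (hypothetical values)
        "0.25 years": "no visible retinal anatomical change",
        "0.50 years": "scattered micro aneurysms in the temporal quadrant",
        "0.75 years": "early hard exudates near the macula",
        "1.00 years": "mild non-proliferative diabetic retinopathy",
    },
}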

[0030] In an embodiment, the retinal fundus appearance prediction application 103 receives the training dataset from one or more devices. The training dataset comprises a plurality of retinal fundus images. The retinal fundus images in the training dataset are referred to as training retinal fundus images. The device is, for example, the image capturing device such as a camera incorporated into a mobile device, a server, a network of personal computers, or simply a personal computer, a mainframe, a tablet computer, etc. The retinal fundus appearance prediction application 103 stores the training dataset in a database 104a of the system 100. The system 100 comprises the database 104a in communication with the retinal fundus appearance prediction application 103. The database 104a is also configured to store user profile information, user medical history, the training retinal fundus images of users, reports of the users, etc.

[0031] The trained neural networks output a candidate segment mask corresponding to the input retinal fundus image at the initial point-in-time depicting the determined user-specific parameters. The trained neural networks also output a candidate segment mask corresponding to the input retinal fundus image for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time.

[0032] As used herein, the term “candidate segment mask” refers to a map of the input retinal fundus image showing only the segments of the input retinal fundus image having potentially useful information while masking other segments that are potentially less useful in the input retinal fundus image with respect to a point-in-time. The segments with potentially useful information are the highlighted regions depicting the structure of the one or more candidate objects, with each candidate object category highlighted with the predetermined pixel intensity. The highlighted regions in turn display the determined user-specific parameters at the corresponding point-in-time. The candidate object is, for example, a pathology indicator, a retinal feature or the like. The pathology indicator indicates a retinal disease. The pathology indicator is, for example, a lesion like a venous beading, a venous loop, an intra retinal microvascular abnormality, an intra retinal hemorrhage, a micro aneurysm, a soft exudate (cotton-wool spots), a hard exudate, a vitreous/preretinal hemorrhage, neovascularization, a drusen or the like. The retinal disease is one of diabetic retinopathy, diabetic macular edema, glaucoma, coloboma, retinal tear, retinal detachment or the like. The candidate segment mask denotes the location and structure of each of the candidate objects with each type of candidate object highlighted with a predetermined pixel intensity.

[0033] Here, the “candidate segment mask” is a highlighted colored region indicating a structure of the candidate object. Each candidate object is identified by the candidate object category it belongs to and has a pre-defined color associated with it. The candidate object category defines a class or division of candidate objects, for example, lesions, regarded as having particular shared characteristics. That is, each candidate object category is identified with a specific pre-defined color.

[0034] This way, the candidate segment mask provides an insight into the one or more retinal diseases present in the input retinal fundus image along with a severity level of each of the retinal diseases. The candidate segment mask denotes the structure and location of the candidate objects in the input retinal fundus image with each type of candidate object highlighted with a predetermined pixel intensity. The candidate segment mask at a given point-in-time also provides an insight into the changes in a structure or appearance of the candidate objects. The structure and position of the candidate objects provide the retinal anatomical change at a future point-in-time with reference to the retinal anatomy at the initial point-in-time.

[0035] The predetermined pixel intensity indicates a distinct color to distinguish a candidate object category from others. The colors corresponding to each type of candidate object category are predetermined. The different candidate object categories to be identified are also predetermined by, for example, a medical practitioner. The candidate object category denotes a type of candidate object, for example, a pathology indicator such as a lesion, a retinal feature such as an optic disc, etc. In an example, the medical practitioner may decide the candidate object categories to be highlighted based on the essential candidate objects required for the easy diagnosis of retinal diseases. In another embodiment, unrecognizable lesions form another candidate object category and are allotted a different predetermined pixel intensity.
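A minimal sketch of how a candidate segment mask could be represented and rendered is shown below: a per-pixel label map in which each candidate object category is drawn with its predetermined pixel intensity. The category-to-colour table is hypothetical and only illustrates the idea.

import numpy as np

# Hypothetical mapping from candidate object category to its predetermined pixel intensity (colour).
CATEGORY_COLOURS = {
    1: (255, 0, 0),      # e.g. micro aneurysm
    2: (255, 255, 0),    # e.g. hard exudate
    3: (0, 255, 255),    # e.g. soft exudate (cotton-wool spot)
}

def render_candidate_segment_mask(label_map: np.ndarray) -> np.ndarray:
    """label_map[y, x] holds the candidate object category; 0 marks masked (less useful) segments."""
    overlay = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for category, colour in CATEGORY_COLOURS.items():
        overlay[label_map == category] = colour
    return overlay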

[0036] The processing means 103d of the retinal fundus appearance prediction application 103 further superimposes the candidate segment mask and the input retinal fundus image for the initial point-in-time and each of the at least one future point-in-time and displays the result on the GUI 103e. The processing means 103d displays the superimposed at least one future retinal fundus image for the initial point-in-time and for each of the at least one future point-in-time. The superimposed at least one future retinal fundus image corresponding to a future point-in-time depicts the user-specific parameters associated with the input retinal fundus image at the corresponding future point-in-time. The future retinal fundus image corresponding to a future point-in-time is the input retinal fundus image superimposed with the candidate segment mask generated by the processing means 103d for the corresponding future point-in-time.
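As an illustration of the superimposition step, the sketch below alpha-blends the rendered candidate segment mask onto the input retinal fundus image to produce the displayed future retinal fundus image; the blending factor is an arbitrary choice made for the sketch.

import numpy as np

def superimpose(fundus_rgb: np.ndarray, mask_rgb: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend the candidate segment mask over the fundus image; both are HxWx3 uint8 arrays."""
    highlighted = mask_rgb.any(axis=-1, keepdims=True)      # pixels belonging to a candidate object
    blended = np.where(highlighted,
                       (1.0 - alpha) * fundus_rgb + alpha * mask_rgb,
                       fundus_rgb)
    return blended.astype(np.uint8)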

[0037] In an embodiment, the retinal fundus appearance prediction application 103 generates a time lapse video showing a retinal fundus evolution from a start point-in-time to an end point-in-time. The start point-in-time and the end point-in-time are the time instances selected by the user for which the user desires to view the time lapse video. The retinal fundus appearance prediction application 103 provides options via the GUI 103e for the user to select the start point-in-time and the end point-in-time. The retinal fundus appearance prediction application 103 also provides options to select a “time interval”, for example, monthly, yearly, etc., for the predictive time lapse video based on the start point-in-time and the end point-in-time selected by the user. In an embodiment, the processing means 103d divides the time period between the start point-in-time and the end point-in-time into a plurality of time intervals. In an example, the time intervals are periodic and the number of time intervals is selected by the user. In an example, the retinal fundus appearance prediction application 103 provides a “clock icon” to allow the user to select the start point-in-time and the end point-in-time. The retinal fundus appearance prediction application 103 receives the input retinal fundus image of the user. The processing means 103d of the retinal fundus appearance prediction application 103 determines the user-specific parameters for the input retinal fundus image at the start point-in-time using the neural networks. Here, the start point-in-time is the initial point-in-time. The time intervals and the end point-in-time are the future points-in-time.

[0038] The processing means 103d calculates the user-specific parameters for each of the time intervals and the end point-in-time based on the determined user-specific parameters for the input retinal fundus image at the start point-in-time using the neural networks. The processing means 103d generates the candidate segment mask for the input retinal fundus image at the start point-in-time, each of the time intervals and the end point-in-time. The processing means 103d superimposes the generated candidate segment mask at each point-in-time and the input retinal fundus image. The processing means 103d then combines the superimposed images at the start point-in-time, each of the time intervals and the end point-in-time to generate the time lapse video. In other words, the retinal fundus appearance prediction application 103 develops retinal fundus evolutionary images to generate the time lapse video for the selected time duration between the start point-in-time and the end point-in-time. This allows the user to see the changes in the nature of the retinal fundus evolution by highlighting areas affected by abnormalities based on the user-specific parameters which are determined using the neural networks. In an example, the retinal fundus appearance prediction application 103 displays an animation (highlighting retinal features, retinal abnormalities like lesions, retinal disease indicators, etc.) to show changes in structure, if any, over time.
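The sketch below illustrates one way the time lapse video could be assembled: one superimposed frame per time interval between the start point-in-time and the end point-in-time, written out with the imageio library. The helper names (predict_mask_at, superimpose), the frame rate and the output path are assumptions made for the sketch.

import imageio.v2 as imageio

def make_time_lapse(fundus_rgb, start_year, end_year, step_years,
                    predict_mask_at, out_path="progression.mp4"):
    frames = []
    year = start_year
    while year <= end_year:
        mask_rgb = predict_mask_at(fundus_rgb, year)        # candidate segment mask at this point-in-time
        frames.append(superimpose(fundus_rgb, mask_rgb))    # overlay from the earlier sketch
        year += step_years
    imageio.mimwrite(out_path, frames, fps=2)               # e.g. two points-in-time per second
    return out_path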

[0039] In an embodiment, the retinal fundus appearance prediction application 103 allows disease risk prediction for diseases such as hypertension, stroke, diabetes, cardiovascular diseases including coronary heart disease and cerebral vascular disease, prematurity, papilledema, dementia, retinal diseases like glaucoma, diabetic retinopathy, etc., based on the input retinal fundus image of the user. The retinal fundus appearance prediction application 103 uses the neural networks to determine the user-specific parameters for the input retinal fundus image at the initial point-in-time and to calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters at the initial point-in-time. In other words, the user-specific parameters for each of the at least one future point-in-time provide a disease risk prediction for the input retinal fundus image using the determined user-specific parameters at the initial point-in-time.

[0040] In an example, the processing means 103d calculates a change in the retinal vessel patterns based on the calculated disease risk prediction for the at least one future point-in-time selected by the user using the neural networks. In another example, the processing means 103d uses the neural networks to predict a possibility of retinal diseases such as diabetic retinopathy (DR) and generates an abnormality pattern of the retinal fundus indicating a severity of DR for the at least one future point-in-time selected by the user. The existence of new vessels is a landmark for proliferative DR. The new vessels indicate an abnormality pattern which is marked in the superimposed candidate segment mask and the input retinal fundus image. Here, the neural networks are pre-trained to determine a location, a structure and a time instant for the development of the new vessels in a retinal fundus image based on the user-specific parameters. In another example, the retinal fundus appearance prediction application 103 grades the retinal pathological changes in the input retinal fundus image at the future points-in-time based on the user-specific parameters using the neural networks.

[0041] In an embodiment, the processing means 103d displays the at least one future retinal fundus image for each of the at least one future point-in-time in a convenient fashion such as a collapsed view or an expanded view via the GUI 103e. In an example, the future retinal fundus images are arranged in a timeline fashion which facilitates the user to observe changes over time. The retinal fundus appearance prediction application 103 also provides the operator/user an opportunity to view information with each of the at least one future retinal fundus image for each of the at least one future point-in-time. For example, any changes in the candidate objects such as a size of micro aneurysms associated with a future retinal fundus image may be provided as the information related to the future retinal fundus image. In another example, the retinal fundus appearance prediction application 103 provides zoom in or out options for the user to review the future retinal fundus images being displayed.

[0042] In an embodiment, suitable messages are provided to the user via a pop-up box displayed on the screen when the user selects a future retinal fundus image. The messages are an indication of a health condition of the retinal fundus of the user at the future point-in-time, an age detail of the user at the future point-in-time, a suggestion to alter the current lifestyle to prevent a future health condition, an eye type such as a left eye or a right eye, etc. These messages are based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time and the calculated user-specific parameters for each of the at least one future point-in-time.

[0043] The retinal fundus appearance prediction application 103 may also generate a report comprising the at least one future retinal fundus image for each of the at least one future point-in-time. The retinal fundus appearance prediction application 103 communicates the report to the user via an electronic mail. The report could also be stored in the database 104a of the system 100.

[0044] In an embodiment, the retinal fundus appearance prediction application 103 receives the input retinal fundus image corresponding to a left eye and a right eye of the user. The retinal fundus appearance prediction application 103 displays the future retinal fundus image corresponding to both the left eye and the right eye for each of the at least one future point-in-time using the neural networks. The processing means 103d analyses the differences between the future retinal fundus images corresponding to the left eye and the right eye for each of the at least one future point-in-time using the neural networks. The retinal fundus appearance prediction application 103 displays messages based on the analyzed differences between the future retinal fundus images corresponding to the left eye and the right eye for each of the at least one future point-in-time via the GUI 103e.

[0045] In an embodiment, the retinal fundus appearance prediction application 103 identifies retinal features located on the candidate segment mask of the input retinal fundus image for each of the at least one future point-in-time. This makes the further analysis of the input retinal fundus image easier for a medical practitioner in guiding the user towards a healthier life routine.

[0046] In an embodiment, the system 100 is in communication with a wearable device, for example, a simulator eyewear, via the network 102. The wearable device allows a wearer to directly experience the visual symptoms associated with the user-specific parameters at a selected future point-in-time. As used herein, the terms “wearer” and “user” are used interchangeably. The simulation allows the wearer to experience the visual symptoms and understand the influence of the user-specific parameters on the wearer’s vision at the selected future point-in-time.

[0047] In an example, the system 100 predicts that the user is subject to diabetic retinopathy at a future point-in-time chosen by the user. The system 100 calculates the user-specific parameters at the chosen future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks. The system 100 provides an option to the user to experience a simulation of visual symptoms of diabetic retinopathy at the chosen future point-in-time by authorizing the user to wear the wearable device which is in communication with the system 100. The system 100 thus enables the user to analyse and improve awareness of retinal eye diseases through the visual symptoms associated with the calculated user-specific parameters at the chosen future point-in-time.

[0048] Figure 2 exemplarily illustrates the architecture of a computer system 200 employed by the retinal fundus appearance prediction application 103. The retinal fundus appearance prediction application 103 of the computer implemented system 100 exemplarily illustrated in Figure 1 employs the architecture of the computer system 200 exemplarily illustrated in Figure 2. The computer system 200 is programmable using a high level computer programming language. The computer system 200 may be implemented using programmed and purposeful hardware.

[0049] The retinal fundus image processing platform 104 hosting the retinal fundus appearance prediction application 103 communicates with user devices, for example, 101a, 101b, 101c, etc., of a user registered with the retinal fundus appearance prediction application 103 via the network 102. The network 102 is, for example, the internet, a local area network, a wide area network, a wired network, a wireless network, a mobile communication network, etc. The computer system 200 comprises, for example, a processor 201, a memory unit 202 for storing programs and data, an input/output (I/O) controller 203, a network interface 204, a data bus 205, a display unit 206, input devices 207, fixed disks 208, removable disks 209, output devices 210, etc.

[0050] As used herein, the term “processor” refers to any one or more central processing unit (CPU) devices, microprocessors, an application specific integrated circuit (ASIC), computers, microcontrollers, digital signal processors, logic, an electronic circuit, a field-programmable gate array (FPGA), etc., or any combination thereof, capable of executing computer programs or a series of commands, instructions, or state transitions. The processor 201 may also be realized as a processor set comprising, for example, a math or graphics co-processor and a general purpose microprocessor. The processor 201 is selected, for example, from the Intel® processors such as the Itanium® microprocessor or the Pentium® processors, Advanced Micro Devices (AMD®) processors such as the Athlon® processor, MicroSPARC® processors, UltraSPARC® processors, hp® processors, International Business Machines (IBM®) processors, the MIPS® reduced instruction set computer (RISC) processors of MIPS Technologies, Inc., RISC based computer processors of ARM Holdings, etc. The computer implemented system 100 disclosed herein is not limited to a computer system 200 employing a processor 201 but may also employ a controller or a microcontroller.

[0051] The memory unit 202 is used for storing data, programs, and applications. The memory unit 202 is, for example, a random access memory (RAM) or any type of dynamic storage device that stores information for execution by the processor 201. The memory unit 202 also stores temporary variables and other intermediate information used during execution of the instructions by the processor 201. The computer system 200 further comprises a read only memory (ROM) or another type of static storage device that stores static information and instructions for the processor 201.

[0052] The I/O controller 203 controls input actions and output actions performed by the retinal fundus appearance prediction application 103. The network interface 204 enables connection of the computer system 200 to the network 102. For example, the retinal fundus image processing platform 104 hosting the retinal fundus appearance prediction application 103 connects to the network 102 via the network interface 204. The network interface 204 comprises, for example, one or more of a universal serial bus (USB) interface, a cable interface, an interface implementing Wi-Fi® of the Wireless Ethernet Compatibility Alliance, Inc., a FireWire® interface of Apple, Inc., an Ethernet interface, a digital subscriber line (DSL) interface, a token ring interface, a peripheral controller interconnect (PCI) interface, a local area network (LAN) interface, a wide area network (WAN) interface, interfaces using serial protocols, interfaces using parallel protocols, Ethernet communication interfaces, asynchronous transfer mode (ATM) interfaces, interfaces based on transmission control protocol (TCP)/internet protocol (IP), radio frequency (RF) technology, etc. The data bus 205 permits communications between the means/modules (103a, 103b, 103c, 103d, 103e and 103f) of the retinal fundus appearance prediction application 103.

[0053] The display unit 206, via the GUI 103e, displays information, display interfaces, interactive elements 103f such as drop down menus, text fields, checkboxes, text boxes, floating windows, hyperlinks, etc., for example, for allowing the user to enter inputs associated with the user. In an example, the display unit 206 comprises a liquid crystal display, a plasma display, etc. The input devices 207 are used for inputting data into the computer system 200. A user, for example, an operator, registered with the retinal fundus appearance prediction application 103 uses one or more of the input devices 207 of the user devices, for example, 101a, 101b, 101c, etc., to provide inputs to the retinal fundus appearance prediction application 103. For example, a user may enter a user’s profile information, the user’s medical history, etc., using the input devices 207. The input devices 207 are, for example, a keyboard such as an alphanumeric keyboard, a touch pad, a joystick, a computer mouse, a light pen, a physical button, a touch sensitive display device, a track ball, etc.

[0054] Computer applications and programs are used for operating the computer system 200. The programs are loaded onto the fixed disks 208 and into the memory unit 202 of the computer system 200 via the removable disks 209. In an embodiment, the computer applications and programs may be loaded directly via the network 102. The output devices 210 output the results of operations performed by the retinal fundus appearance prediction application 103.

[0055] The processor 201 executes an operating system, for example, the Linux® operating system, the Unix® operating system, any version of the Microsoft® Windows® operating system, the Mac OS of Apple Inc., the IBM® OS/2, VxWorks® of Wind River Systems, Palm OS®, the Solaris operating system, the Android operating system, Windows Phone™ operating system developed by Microsoft Corporation, the iOS operating system of Apple Inc., etc.

[0056] The computer system 200 employs the operating system for performing multiple tasks. The operating system is responsible for management and coordination of activities and sharing of resources of the computer system 200. The operating system employed on the computer system 200 recognizes, for example, inputs provided by the user using one of the input devices 207, the output display, files, and directories stored locally on the fixed disks 208. The operating system on the computer system 200 executes different programs using the processor 201. The processor 201 and the operating system together define a computer platform for which application programs in high level programming languages are written.

[0057] The processor 201 retrieves instructions for executing the modules (103a, 103b, 103c, 103d, 103e and 103f) of the retinal fundus appearance prediction application 103 from the memory unit 202. A program counter determines the location of the instructions in the memory unit 202. The program counter stores a number that identifies the current position in the program of each of the modules (103a, 103b, 103c, 103d, 103e and 103f) of the retinal fundus appearance prediction application 103. The instructions fetched by the processor 201 from the memory unit 202 after being processed are decoded. The instructions are stored in an instruction register in the processor 201. After processing and decoding, the processor 201 executes the instructions.

[0058] Figure 3 illustrates a flowchart to predict a retinal fundus appearance in at least one future point-in-time using an input retinal fundus image of a user in accordance with the invention. At step S1, the retinal fundus appearance prediction application 103 receives the input retinal fundus image of the user, the initial point-in-time corresponding to the input retinal fundus image and the request for at least one future retinal fundus image of the user for at least one future point-in-time.

[0059] The non-transitory computer readable storage medium is configured to store the retinal fundus appearance prediction application 103 and at least one processor is configured to execute the retinal fundus appearance prediction application 103. The retinal fundus appearance prediction application 103 is thus a part of the system 100 comprising the non-transitory computer readable storage medium communicatively coupled to the at least one processor. The retinal fundus appearance prediction application 103 comprises the GUI 103e comprising multiple interactive elements 103f configured to enable capture and processing of the retinal fundus image via the user device 101a, 101b or 101c. The reception means 103a is adapted to receive the input from the image capturing device. The input is the retinal fundus image of the user displayed in a live mode. In an embodiment, the retinal fundus appearance prediction application 103 is a web application implemented on a web based platform, for example, a website hosted on a server or a setup of servers.

[0060] The interactive retinal fundus image rendering means 103b dynamically renders the input. The dynamically rendered input is configurably accessible on the GUI 103e via the user device 101a, 101b or 101c using the interactive elements 103f. The retinal fundus image capture means 103c captures the retinal fundus image based on the dynamically rendered input.

[0061] At step S2, the processing means 103d is adapted to determine the user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks. At step S3, the processing means 103d is adapted to calculate the user-specific parameters for each of the at least one future point-in-time based on the determined user-specific parameters for the input retinal fundus image at the initial point-in-time using the neural networks. At step S4, the processing means 103d is adapted to display the at least one future retinal fundus image for each of the at least one future point-in-time. The at least one future retinal fundus image for a corresponding future point-in-time depicts the user-specific parameters in the input retinal fundus image at the corresponding future point-in-time.
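
Since paragraphs [0058] to [0061] describe steps S1 to S4 only at the level of a flowchart, the following sketch illustrates one way steps S2 and S3 could be realised with neural networks. The architectures, the number of user-specific parameters and the time-horizon encoding are placeholders chosen for this example; the specification does not disclose a specific model, and the synthesis of the future fundus image displayed in step S4 is only indicated by a comment.

```python
# Hypothetical PyTorch sketch of steps S2-S4. Model architectures, the
# parameter count and the horizon encoding are assumptions for illustration.
import torch
import torch.nn as nn

class ParameterEstimator(nn.Module):
    """Step S2: determine user-specific parameters from the input fundus image."""
    def __init__(self, num_params: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))

class ParameterProgressor(nn.Module):
    """Step S3: calculate the parameters for a given future point-in-time."""
    def __init__(self, num_params: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_params + 1, 64), nn.ReLU(),
                                 nn.Linear(64, num_params))

    def forward(self, params: torch.Tensor, years_ahead: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([params, years_ahead], dim=-1))

def predict_future_parameters(image: torch.Tensor, horizons_in_years):
    """Run step S2 once, then step S3 for each requested future point-in-time."""
    estimator, progressor = ParameterEstimator(), ParameterProgressor()
    initial_params = estimator(image)                              # step S2
    futures = {}
    for years in horizons_in_years:                                # step S3
        horizon = torch.full((image.shape[0], 1), float(years))
        futures[years] = progressor(initial_params, horizon)
    # Step S4 would render a future fundus image depicting each parameter set.
    return initial_params, futures

if __name__ == "__main__":
    dummy_image = torch.rand(1, 3, 256, 256)   # stand-in for a captured fundus image
    initial, future = predict_future_parameters(dummy_image, horizons_in_years=[1, 5])
    print(initial.shape, {k: v.shape for k, v in future.items()})
```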

[0062] The system 100 allows the user to become aware of harmful effects of an environment, a need for a therapeutic treatment, a change in a wellbeing therapy, etc., based on the predictions of the retinal fundus appearance in at least one future point-in-time. The system 100 acts as an important supporting tool in simulating and/or predicting one or more retinal diseases and/or a response to a therapy specific to the user by determining and calculating the user-specific parameters at the initial point-in-time and multiple future points-in-time. Since the system 100 uses pre-trained neural networks to determine the user-specific parameters at the initial point-in-time, the system 100 reduces the time consumed by a manual determination of the user-specific parameters at the initial point-in-time.

[0063] The present invention described above, although described in functional terms, may be configured to work in a network environment comprising a computer in communication with one or more devices. It will be readily apparent that the various methods, algorithms, and computer programs disclosed herein may be implemented on computer readable media appropriately programmed for general purpose computers and computing devices. As used herein, the term “computer readable media” refers to non-transitory computer readable media that participate in providing data, for example, instructions that may be read by a computer, a processor or a similar device. Non-transitory computer readable media comprise all computer readable media except for a transitory, propagating signal. Non-volatile media comprise, for example, optical discs, magnetic disks and other persistent memory. Volatile media comprise, for example, a dynamic random access memory (DRAM), which typically constitutes a main memory, a processor cache, a register memory, a random access memory (RAM), etc. Transmission media comprise, for example, coaxial cables, copper wire, fiber optic cables, modems, etc., including wires that constitute a system bus coupled to a processor. Common forms of computer readable media comprise, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, a Blu-ray Disc®, a magnetic medium, a compact disc-read only memory (CD-ROM), a digital versatile disc (DVD), any optical medium, a flash memory card, a laser disc, RAM, a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM), a flash memory, any other memory chip or cartridge, etc.

[0064] The database 104a is, for example, a structured query language (SQL) database or a not only SQL (NoSQL) database such as the Microsoft® SQL Server®, the Oracle® servers, the MySQL® database of MySQL AB Company, the MongoDB® of 10gen, Inc., the Neo4j graph database, the Cassandra database of the Apache Software Foundation, the HBase™ database of the Apache Software Foundation, etc. In an embodiment, the database 104a can also be a location on a file system. The database 104a is any storage area or medium that can be used for storing data and files. In another embodiment, the database 104a can be remotely accessed by the retinal fundus appearance prediction application 103 via the network 102. In another embodiment, the database 104a is configured as a cloud based database 104a implemented in a cloud computing environment, where computing resources are delivered as a service over the network 102, for example, the internet.
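
Paragraph [0064] leaves the choice of database open. As one hedged illustration, the records produced by the processing means 103d could be persisted in a SQL database using Python's built-in sqlite3 module; the table and column names below are hypothetical and are not taken from the specification.

```python
# Illustrative only: storing predicted user-specific parameters in a SQL
# database (104a) via sqlite3. Table and column names are assumptions.
import json
import sqlite3

conn = sqlite3.connect("fundus_predictions.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS predictions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id TEXT NOT NULL,
        initial_point_in_time TEXT NOT NULL,
        future_point_in_time TEXT NOT NULL,
        user_specific_parameters TEXT NOT NULL   -- JSON-encoded parameter vector
    )
""")

def store_prediction(user_id, initial_time, future_time, parameters):
    """Insert one predicted parameter set for one future point-in-time."""
    conn.execute(
        "INSERT INTO predictions (user_id, initial_point_in_time, "
        "future_point_in_time, user_specific_parameters) VALUES (?, ?, ?, ?)",
        (user_id, initial_time, future_time, json.dumps(parameters)),
    )
    conn.commit()

store_prediction("user-001", "2018-03-09", "2023-03-09", [0.12, 0.87, 0.05])
```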

[0065] The foregoing examples have been provided merely for the purpose of explanation and do not limit the present invention disclosed herein. While the invention has been described with reference to various embodiments, it is understood that the words are used for illustration and are not limiting. Those skilled in the art may effect numerous modifications and changes thereto without departing from the scope and spirit of the invention in its aspects.

Documents

Orders

Section Controller Decision Date
15 Aditya Venkateswara N C 2021-04-28
15 Aditya Venkateswara N C 2021-08-31

Application Documents

# Name Date
1 201841008839-FORM 13 [27-02-2025(online)].pdf 2025-02-27
1 201841008839-STATEMENT OF UNDERTAKING (FORM 3) [09-03-2018(online)].pdf 2018-03-09
2 201841008839-OTHERS [09-03-2018(online)].pdf 2018-03-09
2 201841008839-Correspondence to notify the Controller [21-02-2025(online)].pdf 2025-02-21
3 201841008839-FORM FOR SMALL ENTITY(FORM-28) [09-03-2018(online)].pdf 2018-03-09
3 201841008839-FORM 4 [12-06-2024(online)].pdf 2024-06-12
4 201841008839-FORM-27 [12-06-2024(online)].pdf 2024-06-12
4 201841008839-FORM 1 [09-03-2018(online)].pdf 2018-03-09
5 201841008839-FORM-15 [28-05-2024(online)].pdf 2024-05-28
5 201841008839-FIGURE OF ABSTRACT [09-03-2018(online)].jpg 2018-03-09
6 201841008839-POWER OF AUTHORITY [28-05-2024(online)].pdf 2024-05-28
6 201841008839-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-03-2018(online)].pdf 2018-03-09
7 201841008839-RELEVANT DOCUMENTS [10-03-2023(online)].pdf 2023-03-10
7 201841008839-DRAWINGS [09-03-2018(online)].pdf 2018-03-09
8 201841008839-RELEVANT DOCUMENTS [08-03-2023(online)].pdf 2023-03-08
8 201841008839-DECLARATION OF INVENTORSHIP (FORM 5) [09-03-2018(online)].pdf 2018-03-09
9 201841008839-RELEVANT DOCUMENTS [24-05-2022(online)].pdf 2022-05-24
9 201841008839-COMPLETE SPECIFICATION [09-03-2018(online)].pdf 2018-03-09
10 201841008839-ReviewPetition-HearingNotice-(HearingDate-16-08-2021).pdf 2021-10-17
10 Form1_Proof of Right_23-04-2018.pdf 2018-04-23
11 201841008839-US(14)-ExtendedHearingNotice-(HearingDate-13-01-2021).pdf 2021-10-17
11 Correspondence by Applicant_Form1_23-04-2018.pdf 2018-04-23
12 201841008839-STARTUP [13-02-2020(online)].pdf 2020-02-13
12 201841008839-US(14)-HearingNotice-(HearingDate-04-01-2021).pdf 2021-10-17
13 201841008839-FORM28 [13-02-2020(online)].pdf 2020-02-13
13 201841008839-IntimationOfGrant31-08-2021.pdf 2021-08-31
14 201841008839-FORM 18A [13-02-2020(online)].pdf 2020-02-13
14 201841008839-PatentCertificate31-08-2021.pdf 2021-08-31
15 201841008839-FER.pdf 2020-05-19
15 201841008839-Written submissions and relevant documents [16-08-2021(online)].pdf 2021-08-16
16 201841008839-CLAIMS [02-08-2021(online)].pdf 2021-08-02
16 201841008839-OTHERS [19-11-2020(online)].pdf 2020-11-19
17 201841008839-FER_SER_REPLY [02-08-2021(online)].pdf 2021-08-02
17 201841008839-FER_SER_REPLY [19-11-2020(online)].pdf 2020-11-19
18 201841008839-CLAIMS [19-11-2020(online)].pdf 2020-11-19
18 201841008839-OTHERS [02-08-2021(online)].pdf 2021-08-02
19 201841008839-Response to office action [02-08-2021(online)]-1.pdf 2021-08-02
19 201841008839-FORM-26 [11-12-2020(online)].pdf 2020-12-11
20 201841008839-Response to office action [02-08-2021(online)].pdf 2021-08-02
20 201841008839-Written submissions and relevant documents [28-01-2021(online)].pdf 2021-01-28
21 201841008839-Annexure [28-01-2021(online)].pdf 2021-01-28
21 201841008839-FORM-24 [29-04-2021(online)].pdf 2021-04-29
22 201841008839-RELEVANT DOCUMENTS [29-04-2021(online)].pdf 2021-04-29
44 201841008839-FORM-27 [18-09-2025(online)].pdf 2025-09-18
45 201841008839-FORM-27 [18-09-2025(online)]-1.pdf 2025-09-18

Search Strategy

1 2020-11-2316-30-12AE_23-11-2020.pdf
2 2020-02-1416-14-02_14-02-2020.pdf

ERegister / Renewals

3rd: 03 Sep 2021 (From 09/03/2020 - To 09/03/2021)
4th: 03 Sep 2021 (From 09/03/2021 - To 09/03/2022)
5th: 03 Sep 2021 (From 09/03/2022 - To 09/03/2023)
6th: 03 Sep 2021 (From 09/03/2023 - To 09/03/2024)
7th: 12 Jun 2024 (From 09/03/2024 - To 09/03/2025)
8th: 12 Jun 2024 (From 09/03/2025 - To 09/03/2026)
9th: 12 Jun 2024 (From 09/03/2026 - To 09/03/2027)
10th: 12 Jun 2024 (From 09/03/2027 - To 09/03/2028)