Abstract: The present disclosure provides a system and method for automated screening and diagnosis of retinal vascular diseases. The system includes one or more processing units configured to receive retinal images from one or more imaging sensors, extract and analyze the retinal images to generate a set of attributes, compare the set of attributes with reference values using a pre-trained programmable set of instructions, generate diagnostic reports corresponding to the retinal images, and transmit the images and the reports to storage and display devices. The proposed method is configured to pinpoint the specific locations, dimensions and other attributes of the affected regions of the retina, and to grade the severity of the disease. The proposed system is configured to facilitate generation of easy-to-read reports within a pre-determined time from the screening of the retinal images, and further facilitates storage and retrieval of the reports and retinal images by authorized users.
The present disclosure relates to the field of biomedical systems. In particular, the present disclosure provides a system and method for automated screening and diagnosis of retinal vascular diseases.
BACKGROUND
[0002] The background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.

[0003] The retina is a light-sensitive layer of tissue located at the posterior end of the eye, and is instrumental to vision. The retina contains rod and cone cells which receive light reflected from an object and convert the light signal into neural impulses; the neural signals are then transmitted to the brain for identification and recognition of the visualized object. Retinal vascular diseases occur when blockages, in the form of blood clots, form inside a retinal artery or vein or a branch thereof. A number of medical conditions such as atherosclerosis, hypertension, diabetes, high cholesterol, cardiovascular disorders and the like, mostly aggravated by an advanced age of the patient, increase the risk of retinal blockages, known as 'occlusions'. Apart from chronic systemic eye diseases, blockages and other anomalies can result in severe sight-threatening conditions like glaucoma and macular edema. An early diagnosis can prevent further complications and permanent loss of vision.
[0004] Existing solutions can include selection of pixels from regions of interest of retinal images and classifying them using supervised learning, followed by an automatic evaluation of the classified images to facilitate patients and healthcare professionals. Other solutions can include a double classifier for detecting whether a specific feature has crossed a threshold by a significant level. Another solution can include retinal disease identification by applying a deep learning method to extracted retinal images. Yet another solution can use a convolutional deep neural network for identification of retinal abnormalities, and then display the results on a graphical user interface and store the information on web-based servers. However, none of these solutions disclose drawing an inference about an exact disease from features extracted from the retinal images, or specifying the exact location, dimension and nature of the defect in the affected area of the retina.
[0005] Hence, there is a need in the art for a method and system for automatic screening and diagnosis of retinal defects that can overcome the above-mentioned problems of the prior art by providing a solution that facilitates exact identification of any or a combination of particular disease(s) and precise specification of the location, dimension and other features of the affected regions of the retina. The solution can facilitate quick generation of clinical reports aiding early prognosis, efficient storage, and secured retrieval of the clinical data by authorized personnel for future use.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies are as listed herein below.
[0007] It is an object of the present disclosure to provide a system and method
that facilitates performing operations on captured retinal images for diagnosis of
one or more retinal vascular diseases.
[0008] It is an object of the present disclosure to provide a system and method
that enables exact diagnosis of any or a combination of one or more retinal
diseases from a wide range of retinal defects.
[0009] It is an object of the present disclosure to provide a system and method
that automatically determines the exact location, dimension, nature and other
attributes of the affected regions of the retina with the help of an Artificial Neural
Network engine.
[0010] It is an object of the present disclosure to provide a system and method
that helps in automatically evaluating severity of the one or more detected retinal
diseases.
[0011] It is an object of the present disclosure to provide a system and method
that facilitates generating clinical transcripts pertaining to diagnosis of any or a
combination of one or more retinal diseases within a predetermined time starting
from the time of reception of the retinal images.
[0012] It is an object of the present disclosure to provide a system and method
that helps in storing a plurality of captured retinal images and corresponding
clinical reports associated with a patient for immediate and future access.
[0013] It is an object of the present disclosure to provide a system and method
that enables retrieval of stored retinal images and corresponding clinical reports
by verifying the authenticity of an authorized healthcare provider or user.
SUMMARY
[0014] The present disclosure relates to the field of biomedical systems. In particular, the present disclosure relates to a method and system for automated screening and diagnosis of retinal vascular diseases.
[0015] An aspect of the present disclosure pertains to a system for automated screening and diagnosis of retinal vascular diseases. The system may include one or more processing unit(s) configured to receive a first set of data packets pertaining to retinal images from one or more imaging sensors associated with one or more first entities. The one or more processing unit(s) may be configured to extract a second set of data packets from the first set of data packets and analyze the second set of data packets to generate a feature-set corresponding to the retinal anomalies. The one or more processing unit(s) may be further configured to execute on the said feature-set, a pre-trained programmable set of instructions residing in an associated memory, the set of instructions being generated based on a training dataset or a third set of data packets stored in a database operatively coupled to the one or more processing unit(s). The programmable set of instructions, upon execution, may trigger the one or more processing unit(s) to generate a fourth set of data packets corresponding to an inference or diagnosis of any or a combination of one or more retinal diseases, pertaining to the received retinal images.
[0016] In an aspect, the one or more processing unit(s) may be configured to transmit the first set of data packets or the retinal images, and the fourth set of data packets or diagnostic reports generated at the one or more processing unit(s), to one or more interactive display unit(s) for immediate examination and operations such as, but not limited to, downloading, saving and printing, or to a web-based server for future access.
[0017] In an aspect, the fourth set of data packets generated at the one or more processing unit(s) may be able to pinpoint specific location, dimension and other features of the affected regions of the retina to deduce an accurate diagnosis of any or a combination of one or more retinal diseases.
[0018] In an aspect, the system may be able to diagnose a wide variety of retinal diseases, and the one or more processing unit(s) may also be configured to grade the severity of the detected retinal diseases, that can help in prioritizing treatment for critical patients.
[0019] In an aspect, the fourth set of data packets or the clinical reports may be generated at the one or more processing unit(s) in an easy-to-read format that does not require intervention by an expert. The reports can be availed within a pre-determined time starting from the screening of said retinal images, which can help in early prognosis.
[0020] In an aspect, said images and clinical reports may be transmitted by the one or more processing unit(s) to one or more storage units so that the data can be retrieved in future at any time and from any place by an authorized user. The said data may further be used for healthcare, awareness and research purposes.

[0021] Another aspect of the present disclosure pertains to a method for automated screening and diagnosis of retinal vascular diseases. The method may include one or more processing unit(s) configured to receive a first set of data packets pertaining to retinal images from one or more imaging sensors associated with one or more first entities. The one or more processing unit(s) may be configured to extract a second set of data packets from the first set of data packets and analyze the second set of data packets to generate a feature-set corresponding to the retinal anomalies. The one or more processing unit(s) may be further
configured to execute on the said feature-set, a pre-trained programmable set of instructions residing in an associated memory, the set of instructions being generated based on a training dataset or a third set of data packets stored in a database operatively coupled to the one or more processing unit(s). The programmable set of instructions, upon execution, may trigger the one or more processing unit(s) to generate a fourth set of data packets corresponding to an inference or diagnosis of any or a combination of one or more retinal diseases, pertaining to the received retinal images.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0022] The accompanying drawings are included to provide a further
understanding of the present disclosure, and are incorporated in and constitute a
part of this specification. The drawings illustrate exemplary embodiments of the
present disclosure and, together with the description, serve to explain the
principles of the present disclosure.
[0023] The diagrams described herein are for illustration only, which thus are
not limitations of the present disclosure, and wherein:
[0024] FIG. 1 illustrates network architecture of proposed system for
automated screening and diagnosis of retinal vascular diseases, to elaborate upon
its working in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates exemplary functional components of processing unit
of proposed system for automated screening and diagnosis of retinal vascular
diseases, in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates exemplary block diagram of the proposed system for
automated screening and diagnosis of retinal vascular diseases, in accordance with
an embodiment of the present disclosure.
[0027] FIG. 4 illustrates an exemplary method for automated screening and
diagnosis of retinal vascular diseases, in accordance with an embodiment of the
present disclosure.
[0028] FIG. 5 illustrates an exemplary computer system in which or with which embodiments of the present invention may be utilized in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
[0029] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.

[0030] While embodiments of the present invention have been illustrated and described in the accompanying drawings, the embodiments are offered only in as much detail as to clearly communicate the disclosure and are not intended to limit the numerous equivalents, changes, variations, substitutions and modifications falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0031] The present disclosure relates generally to the field of biomedical systems. In particular, the present disclosure relates to a method and system for automated screening and diagnosis of retinal vascular diseases.

[0032] FIG. 1 illustrates the network architecture of the proposed system for automated screening and diagnosis of retinal vascular diseases, to elaborate upon its working in accordance with an embodiment of the present disclosure.

[0033] As illustrated in FIG. 1, an embodiment of the proposed system for automated screening and diagnosis of retinal vascular diseases (also referred to as system, herein) may include one or more processing units (102) (interchangeably referred to as processing unit (102), herein) configured to receive a first set of data packets in the form of retinal images from one or more imaging sensors (104) (interchangeably referred to as imaging sensors (104), herein), associated with one or more first entities (110a). The processing unit (102) may deduce retinal anomalies associated with the received retinal images and transmit said retinal images and deduced clinical reports to one or more display units (106) for immediate interactive monitoring by one or more second entities (110b), and further transmit
said retinal images and deduced clinical reports to one or more storage units or server (108) (interchangeably referred to as storage unit (108), herein), communicatively coupled with the processing unit (102). Said images and corresponding clinical reports may be retrieved securely in future from the storage unit (108) by one or more authorized third entities (110c). The imaging sensors (104), processing units (102), storage units (108) and one or more display units (106) may be communicatively coupled to each other by a network (112).

[0034] In an embodiment, one or more first entities (110a-1, 110a-2... 110a-N) (collectively referred to as patients (110a), and individually referred to as a patient (110a)) may be associated with one or more imaging sensors (104). The imaging sensors (104) may be operatively coupled to processing units (102), said processing units (102) being configured to receive the retinal images for detection of retinal anomalies. The imaging sensors (104) may include any or a combination of standard view and wide field fundus cameras, slit lamps, but not limited to the like, for capturing photographs of the retina, optic nerve, macula, blood vessels and the like.
[0035] In an embodiment, said retinal images captured by imaging sensors (104) may be received in machine readable format by processing units (102) for extraction, analysis, classification and diagnosis of retinal diseases and may be further transmitted to one or more display units (106) for immediate monitoring by one or more second entities (110b).
[0036] In an embodiment, the one or more second entities (110b-1, 110b-2... 110b-N) (collectively referred to as operators (110b), and individually referred to as an operator (110b)) may perform operations such as examining, downloading, saving, printing and the like on said analyzed retinal images and their corresponding clinical reports generated at processing unit(s) (102), which may be used to prioritize patients based on the severity of their diseases.
[0037] In an embodiment, said retinal images and their corresponding clinical reports generated after extraction, analysis, classification and diagnosis at processing units (102) may be transmitted to one or more storage units (108), which may include but are not limited to local server (108a) for immediate
retrieval by operators (110b), and cloud server (108b) for future retrieval by one or more third entities (110c). In an embodiment, the one or more third entities (110c-1, 110c-2... 110c-N) (collectively referred to as users (110c), and individually referred to as a user (110c)) may include patients, operators, physicians, researchers and the like, who may use said retinal images and clinical reports for referring to the clinical history of patients, holding awareness programs to facilitate early prognosis of retinal diseases, carrying out further research on improving diagnosis and treatment plans, and the like. The users may be communicatively coupled to the storage units (108) through one or more user devices (114).
[0038] In an illustrative embodiment, processing units (102) may be communicatively coupled to storage units (108) through a secured communication network (112) that may be configured to include any or a combination of a Wireless Local Area Network (WLAN), Wide Area Network (WAN), Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), a cellular communication module, and the like. In an embodiment, processing units (102) may be communicatively coupled to imaging sensors (104) and one or more display units (106) through the communication network (112) that may be configured as any or a combination of Wi-Fi, Bluetooth, Li-Fi, Zigbee and the like. In another embodiment, the communication network (112) may be a wireless network, a wired network or a combination thereof that may be implemented as one of the different types of networks, such as an Intranet, Local Area Network (LAN), Wide Area Network (WAN), the Internet, and the like. Further, the communication network (112) may either be a dedicated network or a shared network. A shared network represents an association of the different types of networks that may use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP) and the like.

[0039] In an embodiment, processing units (102) may be implemented using any or a combination of hardware components and software components such as a computing system, a computing device, a network device, an Android device and
the like, that may be configured to operate with any operating system, including but not limited to, Android™, iOS™, Windows, Linux and the like.

[0040] FIG. 2 illustrates exemplary functional components of processing units of the proposed system for automated screening and diagnosis of retinal vascular diseases, in accordance with an embodiment of the present disclosure.

[0041] FIG. 3 illustrates an exemplary block diagram of the proposed system for automated screening and diagnosis of retinal vascular diseases, in accordance with an embodiment of the present disclosure.
[0042] As illustrated in FIG. 2, an embodiment of processing units (102) of said system (300) may include one or more processor(s) (202) (interchangeably referred to as processors (202), herein). The processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, processors (202) are configured to fetch and execute computer-readable instructions stored in a memory (204) associated with processing units (102). The memory (204) may store one or more computer-readable instructions or routines, which may be fetched and executed to generate and share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the likes.
[0043] In an embodiment, processing units (102) may also include an interface(s) (206) that may provide a communication pathway between the processors (202) and the network (112). The interface(s) (206) may also provide a communication pathway between processors (202) and other functional components of processing units (102) including but not limited to, memory (204) and a database (212).
[0044] In an embodiment, a processing engine (210) associated with processors (202) may include an extraction unit (214) configured to extract a second set of data packets pertaining to the received first set of data packets (retinal images). The extracted second set of data packets may be analyzed by an
analyzing unit (216) associated with the processing engine (210) to generate a plurality of feature-sets pertaining to the types of retinal anomalies. The feature-sets generated by analyzing the second set of data packets may be compared with reference values of different morphological attributes of pre-defined areas of the retina for identifying the corresponding retinal anomalies. The reference values may be received by processors (202) as a third set of data packets from the one or more storage units (108). The extraction and analysis may include operations such as, but not limited to, preprocessing, image enhancement, image segmentation, morphological operations, color normalization, edge enhancement, binarization, filtering and feature extraction.
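By way of a non-limiting illustration, the binarization and feature-extraction operations above can be sketched in a few lines. The function name, the fixed threshold and the choice of attributes (area, centroid, bounding extent) are hypothetical stand-ins, not the disclosed implementation:

```python
import numpy as np

def extract_vessel_features(image, threshold=0.5):
    """Illustrative extraction of morphological attributes from a
    grayscale retinal image given as a 2-D array of intensities.
    The threshold and the returned attributes are hypothetical."""
    # Normalization: rescale intensities to [0, 1].
    lo, hi = image.min(), image.max()
    normalized = (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

    # Binarization: segment candidate lesion/vessel pixels.
    mask = normalized > threshold

    # Morphological attributes of the segmented region.
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {"area": 0, "centroid": None, "extent": None}
    return {
        "area": int(ys.size),                              # pixel count
        "centroid": (float(ys.mean()), float(xs.mean())),  # location
        "extent": (int(np.ptp(ys)) + 1, int(np.ptp(xs)) + 1),  # bounding size
    }
```

A real pipeline would perform these steps on color fundus photographs with dedicated image-processing routines; the sketch only shows how attributes such as location and dimension fall out of a segmentation mask.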
[0045] In an embodiment, the processing engine (210) may include a classification and deduction unit (218) that may be configured to execute a pre-trained programmable set of instructions residing in the memory (204), the instructions upon execution classifying the extracted and analyzed second set of data packets to deduce a fourth set of data packets pertaining to the diagnosis of diseases corresponding to the received first set of data packets. In an embodiment, training of the programmable set of instructions may be achieved by a third set of data packets or training dataset, received at processor(s) (202) from the storage units (108), and the third set of data packets may be configured as reference inputs to the classification and deduction unit (218). The processing engine (210) may be further configured in the form of an Artificial Neural Network, such as, but not limited to, a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN), for executing the classification of the second set of data packets and deducing the diagnosis.
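The classification step can be illustrated with a minimal feed-forward network sketch. The layer sizes, weights and softmax read-out here are hypothetical; in the disclosure the trained parameters would come from the pre-trained programmable set of instructions residing in the memory (204):

```python
import numpy as np

def classify_features(features, w_hidden, b_hidden, w_out, b_out):
    """Map a feature vector to per-disease probabilities with one
    hidden ReLU layer and a softmax read-out. Shapes (hypothetical):
    features (d,), w_hidden (d, h), b_hidden (h,), w_out (h, c),
    b_out (c,) for c candidate diagnoses."""
    hidden = np.maximum(0.0, features @ w_hidden + b_hidden)  # ReLU layer
    logits = hidden @ w_out + b_out
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()
```

The highest-probability entry would then drive the deduced diagnosis; an actual CNN/DNN would of course operate on image patches with many such layers.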
[0046] In an embodiment, the processing engine (210) may generate a fourth set of data packets associated with the deductions or diagnosis. The diagnosis may include information such as, but not limited to, the exact location of blood clots in the blood vessels, which may be in one or more veins or arteries or branches thereof, the size of the blood clots, the location and severity of tortuosity in the blood vessels, the nature of abnormalities of the optic disc, and aberrations of the micro vessels of the retina from their normal state. The fourth set of data packets or the clinical reports may
further include inferences drawn upon the grade and severity of the diagnosed anomalies, which for example may be any or a combination of Bonnet signs, Salus signs, micro-aneurysms, hemorrhages, exudates, bifurcation, macular edema, glaucoma and the like.
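As a hedged illustration of grading, a simple rule that maps counts of detected anomalies to a coarse severity grade might look as follows; the anomaly names come from the paragraph above, while the thresholds and grade labels are invented for the example:

```python
def grade_severity(findings):
    """Map counts of detected anomalies to a coarse severity grade.
    The keys mirror anomalies named in the disclosure; the numeric
    thresholds and grade labels are invented for illustration."""
    total = sum(findings.get(key, 0)
                for key in ("microaneurysms", "hemorrhages", "exudates"))
    if total == 0:
        return "no apparent disease"
    if total <= 5:
        return "mild"
    if total <= 15:
        return "moderate"
    return "severe"
```

A grade computed this way is the sort of signal that could let operators prioritize treatment for critical patients, as described in paragraph [0018].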
[0047] In an embodiment, the fourth set of data packets or clinical reports pertaining to retinal images received as the first set of data packets may be generated at the classification and deduction unit (218) within a pre-determined time starting from reception of the first set of data packets. The said reports may be available in an easy-to-read format (for example, text, tables, visuals, figures) which may not require examination and interpretation by an expert healthcare professional. Fast availability of reports may encourage early prognosis before the concerned disease becomes more complicated.
[0048] In an embodiment, the processing engine (210) may be implemented as a combination of hardware and programmable instructions to implement the functionalities such as extracting, analyzing, classifying the second set of data packets from said retinal images and generating clinical diagnosis in the form of the fourth set of data packets. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine (210) may be processor executable programmable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine (210) may include a machine-readable storage medium (for example, the memory (204) ) for storing the executable programmable instructions and processing resource (for example, processors(202)), to execute such instructions. In an example, the machine-readable storage medium may be a part of processing units (102) or may be physically separate but accessible to processors (202). In other examples, processing engine (210) may be implemented by standalone electronic circuitry assembled along with other functional components (for example, memory (204), interfaces (206), database (212)) of processing units (102).
[0049] In an embodiment, processing units (102) may include database (212) that may be communicatively coupled to storage units (108) and may contain
either stored or generated information resulting from functionalities implemented by any of the components of the processing engine (210). In an aspect, stored information on database (212) may be the third set of data packets that are received by the classification and deduction unit (218) as reference values configured to train the programmable set of instructions executed by processors (202). In another aspect, generated information may comprise retinal images and corresponding clinical diagnoses produced by the classification and deduction unit (218).
[0050] In an embodiment, processing engine (210) may include an updating and training unit (220) that may be configured to re-train the programmable set of instructions residing in the memory (204), which upon execution by the Artificial Neural Network (ANN) may generate clinical diagnoses (interchangeably referred to as diagnostic reports or the fourth set of data packets, herein) of retinal diseases at the classification and deduction unit (218). Upon availability of new retinal images and corresponding clinical diagnoses, the updating and training unit (220) may also be configured to update the database (212) and the training and test dataset residing in storage units (108). In an embodiment, the updating and training unit (220) may be configured to repeat updating and training of the training and test dataset at a pre-determined time interval, such as, but not limited to, one hour or one day, based on the frequency, variety and overall disease coverage associated with the first set of data packets.
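The re-training trigger described above might be sketched as a simple predicate combining the pre-determined interval with the volume of newly available labeled data; the function name, parameters and defaults are illustrative only:

```python
from datetime import datetime, timedelta

def should_retrain(last_trained, now, new_samples,
                   interval=timedelta(hours=1), min_samples=10):
    """Return True when the updating and training unit (220) should
    re-train: either the pre-determined interval has elapsed, or
    enough new labeled images have accumulated. Defaults are
    illustrative, mirroring the 'one hour' example in the text."""
    interval_elapsed = (now - last_trained) >= interval
    enough_new_data = new_samples >= min_samples
    return interval_elapsed or enough_new_data
```

Gating on data volume as well as elapsed time reflects the text's note that re-training frequency depends on the variety and coverage of incoming images.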
[0051] In an embodiment, processing units (102) may be communicatively coupled with storage units (108) including a cloud storage (108b) for storing information like but not limited to the retinal images, the corresponding diagnostic reports and the test and training dataset. The cloud storage (108b) may include a network service configured to be controlled by a Software Defined Network (SDN) controller, where the SDN may provide secured access to the stored information by separating the control and data planes and allowing access of said information only to an authorized user (110c).
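The authorized-access check that the storage controller might apply before releasing stored images and reports can be sketched as below. The token scheme (an HMAC over the user identifier) is purely illustrative and not part of the disclosure; a real deployment would rely on a vetted authentication and authorization stack:

```python
import hashlib
import hmac

def is_authorized(user_id, token, secret_key, allowed_users):
    """Release stored images/reports only to an allowed user (110c)
    whose token matches an HMAC of their identifier. Illustrative
    sketch only; names and the token scheme are hypothetical."""
    if user_id not in allowed_users:
        return False
    expected = hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking token prefixes via timing.
    return hmac.compare_digest(expected, token)
```

In the SDN arrangement described above, a check of this kind would sit on the control plane, deciding whether the data plane may serve the requested records.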
[0052] As illustrated in FIG. 3, an embodiment of the system (300) may include processing units (102), imaging sensors (104), storage units (108), one or
more display units (106) and one or more user devices (114). The processing units (102) receive a first set of data packets pertaining to the retinal images of the patients (110a) from the imaging sensors (104), where the imaging sensors (104) can include any or a combination of a slit camera, fundus camera, and the like, and execute a set of instructions residing in memory (204) associated with the processing units (102), which, upon execution on the first set of data packets, are configured to generate a clinical diagnosis of the retinal images. In an embodiment, the system (300) is configured to transmit the retinal images and the clinical diagnosis to display units (106), configured to be accessed immediately by the operators (110b), and further to storage units (108), configured to be accessed by authorized users (110c) through one or more user devices (114) in future.
[0053] In an embodiment, display units (106) may include a computing device, computer, laptop, smartphone, tablet and the like, configured to display the retinal images and corresponding diagnostic reports through a graphical user interface that may be accessed by operators (110b), including but not limited to clinical operators, healthcare service providers and paramedical staff, trained to perform operations such as examining, downloading, saving, printing and the like on the retinal images and corresponding diagnostic reports. In another embodiment, one or more user devices (114) may include a computing device, computer, laptop, smartphone, tablet and the like. User devices (114) may be accessed by users (110c) including but not limited to patients, clinical operators, healthcare service providers, physicians and researchers, who may use said retinal images and diagnostic reports for referring to the clinical history of patients, holding awareness programs to facilitate early prognosis of retinal diseases, carrying out further research on improving diagnosis and treatment plans, and the like.

[0054] FIG. 4 illustrates an exemplary method for automated screening and diagnosis of retinal vascular diseases, in accordance with an embodiment of the present disclosure.
[0055] In an embodiment, FIG. 4 illustrates a method (400) for automated screening and diagnosis of retinal anomalies. The method (400) may include a
step (402) of receiving a first set of data packets from imaging sensors (104),
wherein the first set of data packets pertain to one or more posterior images of eye
associated with a patient (110a).
[0056] In an embodiment, the method (400) may include a step (404) of
extracting a second set of data packets from the received first set of data packets
pertaining to one or more morphological attributes of a predefined area of retina
associated with a patient (110a).
[0057] In an embodiment, the method (400) may include a step (406) of
receiving a third set of data packets associated with reference values pertaining to
morphological attributes of the pre-defined area of retina, from the storage units
(108).
[0058] In an embodiment, the method (400) may include a step (408) of
analyzing the attributes of the extracted second set of data packets, in comparison
with the reference values associated with the received third set of data packets, to
deduce a diagnosis pertaining to the first set of data packets.
[0059] In an embodiment, the method (400) may include a step (410) of
generating a fourth set of data packets, pertaining to the deduced clinical
diagnosis associated with the first set of data packets and transmitting the first and
the fourth set of data packets to one or more storage units (108) and one or more
display units (106), wherein the one or more storage units (108) and one or more
display units (106) may be accessed by one or more second entities or operator
(110b) and one or more third entities or user (110c) over a secured pathway.
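The five steps (402)-(410) above can be summarized as a single pipeline; each step is passed in as a callable stand-in so the sketch mirrors the flow of the method without assuming any particular implementation of the units:

```python
def run_screening(receive_images, extract_features, load_references,
                  compare, generate_report, transmit):
    """Orchestrate method steps (402)-(410); every argument is a
    caller-supplied callable standing in for the corresponding unit."""
    first = receive_images()             # step 402: retinal images
    second = extract_features(first)     # step 404: morphological attributes
    third = load_references()            # step 406: reference values
    diagnosis = compare(second, third)   # step 408: compare with references
    fourth = generate_report(diagnosis)  # step 410: clinical report
    transmit(first, fourth)              # step 410: to storage and display
    return fourth
```

The stand-in callables make the data-packet flow explicit: the fourth set of data packets is derived from the first via the second and third, then transmitted alongside the original images.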
[0060] FIG. 5 illustrates an exemplary computer system in which or with
which embodiments of the present invention may be utilized in accordance with
embodiments of the present disclosure.
[0061] As shown in FIG. 5, the computer system includes an external storage
device (510), a bus (520), a main memory (530), a read only memory (540), a
mass storage device (550), a communication port (560), and a processor (570). A
person skilled in the art will appreciate that the computer system may include more
than one processor and communication port. Examples of the processor (570)
include, but are not limited to, Intel® Itanium® or Itanium 2 processor(s),
AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system-on-a-chip processors, or other future processors. The processor (570) may include various modules associated with embodiments of the present invention. The communication port (560) may be any of an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) may be chosen depending on the network to which the computer system connects, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any other network.
[0062] In an embodiment, the memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for the processor (570). The mass storage (550) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g., those available from Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000); one or more optical discs; or Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[0063] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the software system.
[0064] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system. Other operator and administrative interfaces may be provided through network connections connected through the communication port (560). The external storage device (510) may be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), or Digital Video Disk - Read Only Memory (DVD-ROM). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0065] As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document, the terms "coupled to" and "coupled with" are also used to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
[0066] The terms, descriptions, figures and operational sequences used herein are set forth by way of illustration only. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims and their equivalents in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
[0067] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill
in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[0068] The present disclosure provides for a system and method that
facilitates performing operations on captured retinal images for diagnosis of one
or more retinal vascular diseases.
[0069] The present disclosure provides for a system and method that enables
exact diagnosis of any or a combination of one or more retinal diseases from a
wide range of retinal defects.
[0070] The present disclosure provides for a system and method that
automatically determines the exact location, dimension, nature and other attributes of
affected regions of the retina with the help of an Artificial Neural Network engine.
[0071] The present disclosure provides for a system and method that helps in
automatically evaluating the severity of the one or more detected retinal diseases.
[0072] The present disclosure provides for a system and method that
facilitates generating clinical transcripts pertaining to diagnosis of any or a
combination of one or more retinal diseases within a predetermined time starting
from the time of reception of the retinal images.
[0073] The present disclosure provides for a system and method that helps in
storing a plurality of captured retinal images and corresponding clinical reports
associated with a patient for immediate and future access.
[0074] The present disclosure provides for a system and method that allows
retrieval of stored retinal images and corresponding clinical reports only after
verifying the authenticity of an authorized healthcare provider or user.
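The authorized-retrieval idea of paragraph [0074] can be illustrated with a toy token check. This is a minimal sketch under stated assumptions: the HMAC scheme, the secret key, and the user and patient identifiers are all invented for exposition and do not reflect the disclosed mechanism.

```python
import hmac
import hashlib

# Assumption: a shared secret for demonstration only; a real deployment
# would use managed credentials, not a hard-coded key.
SECRET = b"demo-secret"

def access_token(user_id: str) -> str:
    """Issue a token bound to a user identifier (illustrative scheme)."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def retrieve_report(store: dict, user_id: str, token: str, patient_id: str):
    """Return a stored report only if the caller's token verifies."""
    if not hmac.compare_digest(token, access_token(user_id)):
        raise PermissionError("unauthenticated user")
    return store.get(patient_id)

# Hypothetical stored report keyed by patient identifier.
store = {"P001": {"diagnosis": "BRVO", "severity": "moderate"}}
ok = retrieve_report(store, "dr_rao", access_token("dr_rao"), "P001")
```

An invalid token raises `PermissionError`, so unverified callers never reach the stored images or reports.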
We Claim:
1. A system for automated screening and diagnosis of retinal anomalies, said system comprising:
a processing unit with one or more processors coupled with a memory, the memory storing instructions which, when executed, cause the one or more processors to:
receive a first set of data packets from imaging sensors, wherein the first set of data packets pertains to one or more posterior images of an eye associated with one or more first entities;
extract a second set of data packets from the received first set of data packets, pertaining to one or more morphological attributes of a predefined area of the retina associated with the one or more first entities;
receive a third set of data packets, associated with reference values pertaining to the morphological attributes of the pre-defined area of the retina, from the storage units;
analyze the attributes of the extracted second set of data packets in comparison with the reference values associated with the received third set of data packets, to deduce a diagnosis pertaining to the first set of data packets;
generate a fourth set of data packets pertaining to the deduced clinical diagnosis associated with the first set of data packets, and transmit the first and the fourth sets of data packets to one or more storage units and one or more display units, wherein the one or more storage units and display units are accessed by one or more second entities and one or more third entities over a secured pathway.
2. The system as claimed in claim 1, wherein the first set of data packets is collected by the one or more imaging sensors, configured to capture one or more images of the retina of an eye, including any or a combination of the pre-defined area of the retina, blood vessels, the optic disc and the macula.
3. The system as claimed in claim 1, wherein the one or more morphological attributes of the second set of data packets include accurate quantitative information regarding the type of abnormality in the retina, the location and dimension of blockages in blood vessels, and measures of aberration of micro-vessels from the normal range.
4. The system as claimed in claim 1, wherein the one or more processors are configured to generate the fourth set of data packets, pertaining to the deduced clinical diagnosis, within a pre-determined time period in response to the received first set of data packets, for facilitating observation, downloading and printing by the one or more second entities and the one or more third entities.
5. The system as claimed in claim 1, wherein the fourth set of data packets, generated at the one or more processors, comprises an exact diagnosis, grading and determination of severity of retinal diseases pertaining to the first set of data packets associated with the one or more first entities, in a pre-defined format.
6. A method for automated screening and diagnosis of retinal anomalies, said method comprising the steps of:
receiving a first set of data packets from imaging sensors, wherein the first set of data packets pertains to one or more posterior images of an eye associated with one or more first entities;
extracting a second set of data packets from the received first set of data packets, pertaining to one or more morphological attributes of a predefined area of the retina associated with the one or more first entities;
receiving a third set of data packets, associated with reference values pertaining to the morphological attributes of the pre-defined area of the retina, from the storage units;
analyzing the attributes of the extracted second set of data packets in comparison with the reference values associated with the received third set of data packets, to deduce a diagnosis pertaining to the first set of data packets;
generating a fourth set of data packets pertaining to the deduced clinical diagnosis associated with the first set of data packets, and transmitting the first and the fourth sets of data packets to one or more storage units and one or more display units, wherein the one or more storage units and display units are accessed by one or more second entities and one or more third entities over a secured pathway.
| Section | Controller | Decision Date |
|---|---|---|
| (Section 15) | Ram Shankar Sahu | 2025-05-21 |
| (Section 15) | Ram Shankar Sahu | 2025-05-25 |
| # | Name | Date |
|---|---|---|
| 1 | 202111018990-STATEMENT OF UNDERTAKING (FORM 3) [24-04-2021(online)].pdf | 2021-04-24 |
| 2 | 202111018990-POWER OF AUTHORITY [24-04-2021(online)].pdf | 2021-04-24 |
| 3 | 202111018990-FORM FOR STARTUP [24-04-2021(online)].pdf | 2021-04-24 |
| 4 | 202111018990-FORM FOR SMALL ENTITY(FORM-28) [24-04-2021(online)].pdf | 2021-04-24 |
| 5 | 202111018990-FORM 1 [24-04-2021(online)].pdf | 2021-04-24 |
| 6 | 202111018990-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-04-2021(online)].pdf | 2021-04-24 |
| 7 | 202111018990-EVIDENCE FOR REGISTRATION UNDER SSI [24-04-2021(online)].pdf | 2021-04-24 |
| 8 | 202111018990-DRAWINGS [24-04-2021(online)].pdf | 2021-04-24 |
| 9 | 202111018990-DECLARATION OF INVENTORSHIP (FORM 5) [24-04-2021(online)].pdf | 2021-04-24 |
| 10 | 202111018990-COMPLETE SPECIFICATION [24-04-2021(online)].pdf | 2021-04-24 |
| 11 | 202111018990-Proof of Right [08-05-2021(online)].pdf | 2021-05-08 |
| 12 | 202111018990-FORM 18 [07-01-2023(online)].pdf | 2023-01-07 |
| 13 | 202111018990-FER.pdf | 2023-09-06 |
| 14 | 202111018990-FER_SER_REPLY [06-03-2024(online)].pdf | 2024-03-06 |
| 15 | 202111018990-ENDORSEMENT BY INVENTORS [06-03-2024(online)].pdf | 2024-03-06 |
| 16 | 202111018990-DRAWING [06-03-2024(online)].pdf | 2024-03-06 |
| 17 | 202111018990-CORRESPONDENCE [06-03-2024(online)].pdf | 2024-03-06 |
| 18 | 202111018990-COMPLETE SPECIFICATION [06-03-2024(online)].pdf | 2024-03-06 |
| 19 | 202111018990-CLAIMS [06-03-2024(online)].pdf | 2024-03-06 |
| 20 | 202111018990-ABSTRACT [06-03-2024(online)].pdf | 2024-03-06 |
| 21 | 202111018990-US(14)-HearingNotice-(HearingDate-18-09-2024).pdf | 2024-09-02 |
| 22 | 202111018990-FORM-26 [13-09-2024(online)].pdf | 2024-09-13 |
| 23 | 202111018990-Correspondence to notify the Controller [13-09-2024(online)].pdf | 2024-09-13 |
| 24 | 202111018990-US(14)-ExtendedHearingNotice-(HearingDate-23-09-2024)-1200.pdf | 2024-09-20 |
| 25 | 202111018990-Correspondence to notify the Controller [20-09-2024(online)].pdf | 2024-09-20 |
| 26 | 202111018990-US(14)-ExtendedHearingNotice-(HearingDate-23-09-2024)-1430.pdf | 2024-09-23 |
| 27 | 202111018990-Written submissions and relevant documents [03-10-2024(online)].pdf | 2024-10-03 |
| 28 | 202111018990-Annexure [03-10-2024(online)].pdf | 2024-10-03 |
| 29 | 202111018990-US(14)-ExtendedHearingNotice-(HearingDate-19-02-2025)-1200.pdf | 2025-01-27 |
| 30 | 202111018990-Correspondence to notify the Controller [14-02-2025(online)].pdf | 2025-02-14 |
| 31 | 202111018990-Written submissions and relevant documents [06-03-2025(online)].pdf | 2025-03-06 |
| 32 | 202111018990-US(14)-ExtendedHearingNotice-(HearingDate-03-04-2025)-1630.pdf | 2025-03-17 |
| 33 | 202111018990-Correspondence to notify the Controller [02-04-2025(online)].pdf | 2025-04-02 |
| # | Name |
|---|---|
| 1 | Searchstrategy202111018990E_05-09-2023.pdf |