Abstract: RETINAL DISORDER DETECTION SYSTEM AND METHOD THEREOF ABSTRACT A retinal disorder detection system (100) is disclosed. The system (100) comprises an image acquisition unit (106) adapted to receive retinal images from a computing device (102). A processing unit (108) to preprocess the retinal images; extract features from the pre-processed retinal images using a customized Convolutional Neural Network (CNN) (110) integrated with a fog computing environment (112) and cloud-based services (114); capture hierarchical features and spatial features from the extracted features; classify the hierarchical features and the spatial features based on a retinal images dataset (116); and evaluate a presence of retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification. The system (100) performs initial retinal disorder detection locally, minimizing latency and enabling immediate diagnostic feedback essential for timely medical intervention. Claims: 10, Figures: 4
Description:
BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to a disease detection system and particularly to a retinal disorder detection system.
Description of Related Art
[002] Retinal disorder remains one of the leading causes of vision impairment and blindness among working-age individuals worldwide. The condition arises due to damage to the blood vessels in the retina, typically as a complication of diabetes. Routine screening and early diagnosis play a crucial role in preventing vision loss, but limitations in accessibility and efficiency continue to hinder timely intervention.
[003] Moreover, conventional screening methods for retinal disorder rely on techniques such as fundus photography, where retinal images are captured and analyzed by trained specialists. These procedures demand expensive equipment, skilled personnel, and considerable processing time, often limiting their application in under-resourced or rural areas. The dependency on centralized diagnostic infrastructure delays clinical decision-making, especially in time-sensitive cases.
[004] Artificial intelligence (AI) based systems have emerged as promising tools to assist in the detection and classification of retinal disorder. Many of these solutions depend on cloud-based models that require high-speed internet for transferring large image datasets to remote servers for analysis. However, this reliance on external processing units often introduces latency, raising concerns over response time, data privacy, and the feasibility of deployment in locations with unstable connectivity.
[005] There is thus a need for an improved and advanced retinal disorder detection system that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a retinal disorder detection system. The system comprises an image acquisition unit adapted to receive retinal images from a computing device. The system further comprises a processing unit in communication with the image acquisition unit. The processing unit is configured to preprocess the retinal images received by the image acquisition unit; extract features from the pre-processed retinal images using a customized Convolutional Neural Network (CNN) integrated with a fog computing environment and cloud-based services; and capture hierarchical features and spatial features from the extracted features by implementation of filters and Rectified Linear Unit (ReLU) activation functions. The processing unit is further configured to classify the hierarchical features and the spatial features based on a retinal images dataset; and evaluate a presence of retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification. The log-SoftMax activation comprises fully connected layers and a dropout layer to reduce overfitting of the classified spatial features.
[007] Embodiments in accordance with the present invention further provide a method for detecting retinal disorder using a retinal disorder detection system. The method comprises the steps of receiving retinal images from a computing device; preprocessing the retinal images using an image processor; extracting features from the pre-processed retinal images using a customized Convolutional Neural Network (CNN) integrated with a fog computing environment and cloud-based services; capturing hierarchical features and spatial features from the extracted features by implementation of filters and Rectified Linear Unit (ReLU) activation functions; classifying the hierarchical features and the spatial features based on a retinal images dataset; and evaluating a presence of retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification. The log-SoftMax activation comprises fully connected layers and a dropout layer to reduce overfitting of the classified spatial features.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a retinal disorder detection system.
[009] Next, embodiments of the present application may provide a retinal disorder detection system that performs initial retinal disorder detection locally, minimizing latency and enabling immediate diagnostic feedback essential for timely medical intervention.
[0010] Next, embodiments of the present application may provide a retinal disorder detection system that minimizes reliance on high-speed internet connections, making it more suitable for deployment in rural or low-bandwidth environments.
[0011] Next, embodiments of the present application may provide a retinal disorder detection system that is optimized specifically for retinal disorder detection, improving classification performance and reducing false positives and negatives.
[0012] Next, embodiments of the present application may provide a retinal disorder detection system that manages secure data transmission so that patient information remains protected throughout the processing pipeline, aligning with healthcare data compliance standards.
[0013] Next, embodiments of the present application may provide a retinal disorder detection system that allows flexible scaling and integration with various healthcare infrastructures.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1A illustrates a schematic block diagram of a retinal disorder detection system, according to an embodiment of the present invention;
[0018] FIG. 1B illustrates an exemplary scenario of the system for detecting the retinal disorder, according to an embodiment of the present invention;
[0019] FIG. 2 illustrates a block diagram of a processing unit, according to an embodiment of the present invention; and
[0020] FIG. 3 depicts a flowchart of a method for detecting retinal disorder using a retinal disorder detection system, according to an embodiment of the present invention.
[0021] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0022] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0023] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0024] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0025] FIG. 1A illustrates a schematic block diagram of a retinal disorder detection system 100 (hereinafter referred to as the system 100), according to an embodiment of the present invention. The system 100 may be adapted to receive retinal images. Further, the system 100 may be adapted to detect a presence of the retinal disorder in the received retinal images. Moreover, the system 100 may classify and evaluate a stage of the detected retinal disorder in the retinal images. The retinal disorder may be, but not limited to, a retinal tear, a retinal detachment, a macular pucker, an epiretinal membrane, a macular hole, a macular degeneration, a retinitis, a retinitis pigmentosa, and so forth. In a preferred embodiment of the present invention, the retinal disorder may be a diabetic retinal disease. Embodiments of the present invention are intended to include or otherwise cover any type of the retinal disorder, including known, related art, and/or later developed technologies.
[0026] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance processing speed and efficiency. The system 100 may comprise a computing device 102, a computer application 104, an image acquisition unit 106, a processing unit 108, a customized Convolutional Neural Network (CNN) 110, a fog computing environment 112, a cloud-based service 114, and a retinal images dataset 116. In an embodiment of the present invention, the hardware components of the system 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0027] In an embodiment of the present invention, the computing device 102 may be adapted to upload the retinal images to the system 100. The retinal images uploaded may be captured under various conditions such as, but not limited to, different angles, disproportionate lighting, several rotations, and so forth to ensure diversity. In an embodiment of the present invention, a resolution of the retinal images may be in a range from 200 pixels by 200 pixels to 300 pixels by 300 pixels. In a preferred embodiment of the present invention, the resolution of the retinal images may be 225 pixels by 225 pixels. Embodiments of the present invention are intended to include or otherwise cover any resolution of the retinal images.
[0028] The computing device 102 may be, but not limited to, a camera, a laptop, a mobile, a flood illuminator, an infrared emitter, a Raspberry Pi, a smartphone, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computing device 102, including known, related art, and/or later developed technologies.
[0029] The computing device 102 may comprise the computer application 104. The computer application 104 may be adapted to display a presence of retinal disorder or an absence of the retinal disorder and a confidence level of the retinal disorder. The computer application 104 may be, but not limited to, a web application, a standalone application, an Unstructured Supplementary Service Data (USSD) application, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computer application 104, including known, related art, and/or later developed technologies.
[0030] In an embodiment of the present invention, the image acquisition unit 106 may be adapted to receive the retinal images from the computing device 102.
[0031] The processing unit 108 may further be configured to execute computer-executable instructions to generate an output relating to the system 100. According to embodiments of the present invention, the processing unit 108 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 108 including known, related art, and/or later developed technologies. In an embodiment of the present invention, the processing unit 108 may further be explained in conjunction with FIG. 2.
[0032] FIG. 1B illustrates an exemplary scenario of the system 100 for detecting the retinal disorder, according to an embodiment of the present invention. In an embodiment of the present invention, the computing device 102 and the image acquisition unit 106 may interact to exchange the retinal images. The retinal images may further be transmitted to the processing unit 108. The processing unit 108 may interact with the fog computing environment 112. In an embodiment of the present invention, the fog computing environment 112 may comprise components such as, but not limited to, a cloud controller, a service director, a protection supervisor, an information director, a service observer, the customized Convolutional Neural Network (CNN) 110, and so forth. Embodiments of the present invention are intended to include or otherwise cover any components in the fog computing environment 112, including known, related art, and/or later developed technologies.
[0033] Further, the fog computing environment 112 may communicate with the cloud-based service 114. The cloud-based service 114 may be, but not limited to, a Microsoft Azure cloud server, an Amazon AWS cloud server, a Google Compute Engine (GCE) cloud server, an Amazon Elastic Compute Cloud (EC2) cloud server, and so forth. In a preferred embodiment of the present invention, the cloud-based service 114 may be an Amazon Web Services (AWS) platform. Embodiments of the present invention are intended to include or otherwise cover any type of the cloud-based service 114, including known, related art, and/or later developed technologies. The interaction of the fog computing environment 112 and the cloud-based service 114 may enable the system 100 to evaluate a presence of the retinal disorder in the retinal images.
[0034] FIG. 2 illustrates a block diagram of the processing unit 108 of the system 100, according to an embodiment of the present invention. The processing unit 108 may comprise the computer-executable instructions in form of programming modules such as a data receiving module 200, a data preprocessing module 202, a data extraction module 204, a data classification module 206, and a data evaluation module 208.
[0035] In an embodiment of the present invention, the data receiving module 200 may be configured to receive the retinal images from the computing device 102. The data receiving module 200 may be configured to transmit the retinal images to the data preprocessing module 202.
[0036] The data preprocessing module 202 may be activated upon receipt of the retinal images from the data receiving module 200. In an embodiment of the present invention, the data preprocessing module 202 may be configured to preprocess the retinal images. The preprocessing of the retinal images may comprise transformations with random horizontal and vertical flips, rotations of up to 30 degrees, conversion into PyTorch tensors, and so forth. Embodiments of the present invention are intended to include or otherwise cover any measures, including known, related art, and/or later developed technologies, for preprocessing of the retinal images. The data preprocessing module 202 may be configured to transmit the preprocessed retinal images to the data extraction module 204.
[0037] The data extraction module 204 may be activated upon receipt of the preprocessed retinal images from the data preprocessing module 202. In an embodiment of the present invention, the data extraction module 204 may be configured to extract features from the pre-processed retinal images. The features may be extracted by utilization of the customized Convolutional Neural Network (CNN) 110. The customized Convolutional Neural Network (CNN) 110 may comprise four convolutional layers and two fully connected layers. The customized Convolutional Neural Network (CNN) 110 may be integrated with the fog computing environment 112 and the cloud-based service 114.
[0038] In an embodiment of the present invention, the data extraction module 204 may be configured to capture hierarchical features and spatial features from the extracted features. The hierarchical features and the spatial features may be captured by implementation of filters and Rectified Linear Unit (ReLU) activation functions. The Rectified Linear Unit (ReLU) activation functions may provide nonlinearity to the network, and may be followed by max-pooling layers. The provided nonlinearity, along with the max-pooling layers, may enable extraction of the hierarchical features and the spatial features from the pre-processed retinal images. The number of filters may be in a range from 8 to 64.
[0039] The provision of nonlinearity may ensure that successive layers do not collapse into a single linear transformation, preserving the expressive capacity of the network. The max-pooling layers may be adapted to reduce dimensionality of the hierarchical features and the spatial features. The dimensionality may be reduced by reducing a number of pixels in the hierarchical features and the spatial features. Along with reduction of dimensionality, the max-pooling layers may provide a weightage to the pixels in the hierarchical features and the spatial features. The weightage may be provided on the basis of factors such as, but not limited to, application of filters, clarity, condensation, pixelation, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the factors, including known, related art, and/or later developed technologies, for the provision of weightage in the hierarchical features and the spatial features.
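The feature-extraction stage described in the preceding paragraphs can be sketched in PyTorch as four convolutional layers whose filter counts grow from 8 to 64, each followed by a ReLU nonlinearity and a 2x2 max-pooling layer. The kernel sizes, padding, and exact layer layout below are hypothetical choices for illustration; the specification fixes only the layer count and the 8-to-64 filter range:

```python
import torch
import torch.nn as nn

class RetinalFeatureExtractor(nn.Module):
    """Sketch of the convolutional portion of the customized CNN 110:
    four conv layers (8 -> 16 -> 32 -> 64 filters), each with ReLU
    nonlinearity and 2x2 max-pooling for dimensionality reduction."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),  nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

# A 225x225 input is halved by each pooling stage: 225 -> 112 -> 56 -> 28 -> 14.
feats = RetinalFeatureExtractor()(torch.randn(1, 3, 225, 225))
```

Each max-pooling stage halves the spatial resolution, which is the dimensionality reduction (fewer pixels per feature map) that paragraph [0039] describes.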
[0040] The data extraction module 204 may be configured to transmit the extracted hierarchical features and the extracted spatial features to the data classification module 206.
[0041] In an embodiment of the present invention, the data classification module 206 may be activated upon receipt of the extracted hierarchical features and the extracted spatial features from the data extraction module 204. In an embodiment of the present invention, the data classification module 206 may be configured to classify the hierarchical features and the spatial features based on the retinal images dataset 116. The retinal images dataset 116 may comprise a set of pre-classified retinal images exhibiting the retinal disorder. Upon classification of the hierarchical features and the spatial features, the data classification module 206 may be configured to subsequently transmit an activation signal to the data evaluation module 208.
[0042] The data evaluation module 208 may be activated upon receipt of the activation signal from the data classification module 206. In an embodiment of the present invention, the data evaluation module 208 may be configured to evaluate the presence of the retinal disorder in the retinal images. The data evaluation module 208 may be configured to evaluate the presence of the retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification. The log-SoftMax activation may comprise fully connected layers and a dropout layer to reduce overfitting of the classified spatial features. The binary classification may be, but not limited to, the presence of the retinal disorder and/or an absence of the retinal disorder. Further, the data evaluation module 208 may be configured to evaluate a confidence level upon evaluation of the presence of the retinal disorder in the retinal images.
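The classification head described above can be sketched as two fully connected layers with an intervening dropout layer, culminating in a log-SoftMax over the two classes (presence or absence of the retinal disorder). The hidden width of 128, the dropout probability of 0.5, and the flattened input size are hypothetical; the specification fixes only two fully connected layers, a dropout layer, and the log-SoftMax output:

```python
import torch
import torch.nn as nn

# Sketch of the log-SoftMax evaluation head: two fully connected layers
# with a dropout layer to reduce overfitting, ending in log-probabilities
# over two classes. Input size 64*14*14 assumes the illustrative
# feature-extractor output; all sizes are assumptions.
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 14 * 14, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 2),
    nn.LogSoftmax(dim=1),
)

head.eval()  # disable dropout at evaluation time
with torch.no_grad():
    log_probs = head(torch.randn(1, 64, 14, 14))

# Exponentiating the log-SoftMax output recovers class probabilities;
# the larger one serves as the confidence level reported by module 208.
confidence, label = log_probs.exp().max(dim=1)
```

During training, a log-SoftMax output is conventionally paired with a negative log-likelihood loss (`nn.NLLLoss`), which is one common rationale for emitting log-probabilities rather than raw probabilities.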
[0043] The data evaluation module 208 may be configured to generate recommendations based on the presence of the retinal disorder and the confidence level. The recommendations may be, but not limited to, a treatment, a therapy, a surgery, a medication, preventive measures, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the recommendations, including known, related art, and/or later developed technologies.
[0044] FIG. 3 depicts a flowchart of a method 300 for detecting retinal disorder using the system 100, according to an embodiment of the present invention.
[0045] At step 302, the system 100 may receive the retinal images from the computing device 102.
[0046] At step 304, the system 100 may preprocess the retinal images.
[0047] At step 306, the system 100 may extract the features from the pre-processed retinal images using the customized Convolutional Neural Network (CNN) 110 integrated with the fog computing environment 112 and the cloud-based service 114.
[0048] At step 308, the system 100 may capture the hierarchical features and the spatial features from the extracted features by implementation of filters and Rectified Linear Unit (ReLU) activation functions.
[0049] At step 310, the system 100 may classify the hierarchical features and the spatial features based on the retinal images dataset 116.
[0050] At step 312, the system 100 may evaluate the presence of the retinal disorder by culminating the classified hierarchical features and the classified spatial features in the log-SoftMax activation for binary classification.
[0051] At step 314, the system 100 may evaluate the confidence level upon evaluation of the presence of the retinal disorder in the retinal images.
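The flow of steps 302 through 314 can be summarized as a single detection routine; the function below is a minimal sketch with a stand-in model, and the convention that class index 1 denotes the presence of the disorder is an assumption, not taken from the specification:

```python
import torch
import torch.nn as nn

def detect_retinal_disorder(image: torch.Tensor, model: nn.Module):
    """Sketch of method 300: run the (preprocessed) retinal image through
    a model ending in log-SoftMax (steps 306-312), then derive the
    presence flag and confidence level (step 314)."""
    model.eval()
    with torch.no_grad():
        log_probs = model(image.unsqueeze(0))  # add batch dimension
    probs = log_probs.exp()
    confidence, cls = probs.max(dim=1)
    present = bool(cls.item() == 1)  # assumption: class 1 = disorder present
    return present, float(confidence)

# Hypothetical stand-in for the customized CNN 110, for illustration only:
# global average pooling, one linear layer, and a log-SoftMax output.
model = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(3, 2), nn.LogSoftmax(dim=1),
)
present, conf = detect_retinal_disorder(torch.randn(3, 225, 225), model)
```

The returned pair maps directly onto what the computer application 104 displays: the presence or absence of the retinal disorder and the associated confidence level.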
[0052] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0053] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
CLAIMS
I/We Claim:
1. A retinal disorder detection system (100), comprising:
an image acquisition unit (106) adapted to receive retinal images from a computing device (102); and
a processing unit (108) in communication with the image acquisition unit (106), characterized in that the processing unit (108) is configured to:
preprocess the retinal images received by the image acquisition unit (106);
extract features from the pre-processed retinal images using a customized Convolutional Neural Network (CNN) (110) integrated with a fog computing environment (112) and cloud-based services (114);
capture hierarchical features and spatial features from the extracted features by implementation of filters and Rectified Linear Unit (ReLU) activation functions;
classify the hierarchical features and the spatial features based on a retinal images dataset (116); and
evaluate a presence of retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification, wherein the log-SoftMax activation comprises fully connected layers and a dropout layer to reduce overfitting of the classified spatial features.
2. The system (100) as claimed in claim 1, wherein the processing unit (108) is configured to evaluate a confidence level upon evaluation of the presence of retinal disorder in the retinal images.
3. The system (100) as claimed in claim 1, wherein a resolution of the retinal images is in a range from 200 pixels by 200 pixels to 300 pixels by 300 pixels.
4. The system (100) as claimed in claim 1, wherein the preprocessing of the retinal images comprises transformations with random horizontal and vertical flips, rotations of up to 30 degrees, conversion into PyTorch tensors, or a combination thereof.
5. The system (100) as claimed in claim 1, wherein the customized Convolutional Neural Network (CNN) (110) comprises four convolutional layers and two fully connected layers.
6. The system (100) as claimed in Claim 1, wherein the filters are in a range from 8 to 64.
7. The system (100) as claimed in claim 1, wherein the Rectified Linear Unit (ReLU) activation functions provide nonlinearity to a network and max-pooling layers to extract the hierarchical features and the spatial features from the extracted features from the pre-processed retinal images.
8. The system (100) as claimed in claim 1, wherein the binary classification is selected from a presence of the retinal disorder, or an absence of the retinal disorder.
9. A method (300) for detecting retinal disorder using a retinal disorder detection system, the method (300) characterized by steps of:
receiving retinal images from a computing device (102);
preprocessing the retinal images using an image processor (108);
extracting features from the pre-processed retinal images using a customized Convolutional Neural Network (CNN) (110) integrated with a fog computing environment (112) and cloud-based services (114);
capturing hierarchical features and spatial features from the extracted features by implementation of filters and Rectified Linear Unit (ReLU) activation functions;
classifying the hierarchical features and the spatial features based on a retinal images dataset (116); and
evaluating a presence of retinal disorder by culminating the classified hierarchical features and the classified spatial features in a log-SoftMax activation for binary classification, wherein the log-SoftMax activation comprises fully connected layers and a dropout layer to reduce overfitting of the classified spatial features.
10. The method (300) as claimed in claim 9, wherein the binary classification is selected from a presence of the retinal disorder, or an absence of the retinal disorder.
Date: April 22, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202541039356-STATEMENT OF UNDERTAKING (FORM 3) [24-04-2025(online)].pdf | 2025-04-24 |
| 2 | 202541039356-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-04-2025(online)].pdf | 2025-04-24 |
| 3 | 202541039356-POWER OF AUTHORITY [24-04-2025(online)].pdf | 2025-04-24 |
| 4 | 202541039356-OTHERS [24-04-2025(online)].pdf | 2025-04-24 |
| 5 | 202541039356-FORM-9 [24-04-2025(online)].pdf | 2025-04-24 |
| 6 | 202541039356-FORM FOR SMALL ENTITY(FORM-28) [24-04-2025(online)].pdf | 2025-04-24 |
| 7 | 202541039356-FORM 1 [24-04-2025(online)].pdf | 2025-04-24 |
| 8 | 202541039356-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-04-2025(online)].pdf | 2025-04-24 |
| 9 | 202541039356-EDUCATIONAL INSTITUTION(S) [24-04-2025(online)].pdf | 2025-04-24 |
| 10 | 202541039356-DRAWINGS [24-04-2025(online)].pdf | 2025-04-24 |
| 11 | 202541039356-DECLARATION OF INVENTORSHIP (FORM 5) [24-04-2025(online)].pdf | 2025-04-24 |
| 12 | 202541039356-COMPLETE SPECIFICATION [24-04-2025(online)].pdf | 2025-04-24 |