Abstract: AUTOMATED SYSTEM FOR DETECTING AND CLASSIFYING DIABETIC RETINOPATHY (DR) IN RETINAL FUNDUS IMAGES
An automated system (100) for detecting and classifying diabetic retinopathy (DR) in retinal fundus images is disclosed. The system (100) comprises: an image acquisition unit (104) to receive the retinal fundus images from a computing device (102); and a processing unit (106) configured to: preprocess the retinal fundus images received by the image acquisition unit (104) using an image processor (108); extract features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer; classify the extracted features based on a retinal fundus images dataset; and evaluate a stage of diabetic retinopathy (DR) based on the extracted features and classification results, wherein the stage is determined according to a multi-stage classification of disease. The system (100) minimizes the need for extensive manual screenings, making eye care cost-effective and accessible.
Claims: 10, Figures: 6
Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to a system for detecting and classifying diabetic retinopathy (DR) and particularly to an automated system for detecting and classifying diabetic retinopathy (DR) in retinal fundus images.
Description of Related Art
[002] Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness worldwide, primarily affecting individuals with prolonged diabetes. It results from damage to the retinal blood vessels, leading to progressive vision loss if left undetected or untreated. Traditionally, DR diagnosis relies on manual examination of fundus images by trained ophthalmologists, which can be time-consuming, subjective, and dependent on specialized expertise. As the prevalence of diabetes increases globally, there is a growing need for more efficient and accessible screening methods to enable early detection and timely intervention.
[003] Advancements in artificial intelligence (AI) and deep learning have significantly contributed to medical imaging analysis, offering automated solutions for disease detection and classification. Convolutional Neural Networks (CNNs) have demonstrated remarkable accuracy in identifying patterns in medical images, making them a promising tool for DR detection. Existing approaches leverage pre-trained CNN models on large datasets, fine-tuned to classify the severity of DR in retinal images. However, challenges such as overfitting, computational complexity, and reliance on extensive labeled datasets hinder their widespread clinical implementation. Improving model efficiency, robustness, and adaptability remains a key focus in this domain.
[004] Efforts have also been made to integrate AI-driven DR detection into telemedicine platforms, enhancing accessibility in remote and resource-limited regions. Smartphone-based retinal imaging and cloud-based diagnostic tools have shown potential to reduce screening costs and improve early intervention rates. Despite these advancements, current solutions still face limitations in accuracy, generalization across diverse populations, and real-time performance optimization. There is a continuous demand for enhanced methodologies that balance diagnostic precision with computational efficiency, ensuring scalable and reliable DR screening in diverse clinical settings.
[005] There is thus a need for an improved and advanced automated system for detecting and classifying diabetic retinopathy (DR) in retinal fundus images that addresses the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide an automated system for detecting and classifying diabetic retinopathy (DR) in retinal fundus images. The system comprises an image acquisition unit adapted to receive the retinal fundus images from a computing device. The system further comprises a processing unit in communication with the image acquisition unit. The processing unit is configured to: preprocess the retinal fundus images received by the image acquisition unit using an image processor; extract features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer; classify the extracted features based on a retinal fundus images dataset; and evaluate a stage of diabetic retinopathy (DR) based on the extracted features and classification results, wherein the stage is determined according to a multi-stage classification of disease.
[007] Embodiments in accordance with the present invention further provide a method for detecting and classifying diabetic retinopathy (DR) in retinal fundus images. The method comprises the steps of: receiving, by an image acquisition unit, retinal fundus images from a computing device; preprocessing the retinal fundus images received by the image acquisition unit using an image processor; extracting features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer; classifying the extracted features based on a retinal fundus images dataset; and evaluating a stage of diabetic retinopathy (DR) based on the extracted features and classification results, wherein the stage is determined according to a multi-stage classification of disease.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide an automated system for detecting and classifying diabetic retinopathy (DR) in retinal fundus images.
[009] Next, embodiments of the present application may provide a system for detecting and classifying diabetic retinopathy (DR) that enhances the accuracy of Diabetic Retinopathy (DR) detection by leveraging deep learning techniques, reducing human error and subjectivity in diagnosis.
[0010] Next, embodiments of the present application may provide a system for detecting and classifying diabetic retinopathy (DR) that enables early-stage identification of DR, allowing for prompt medical intervention and reducing the risk of vision loss in diabetic patients.
[0011] Next, embodiments of the present application may provide a system for detecting and classifying diabetic retinopathy (DR) that provides faster results, optimizing the screening process in healthcare settings.
[0012] Next, embodiments of the present application may provide a system for detecting and classifying diabetic retinopathy (DR) that can be integrated into telemedicine platforms, enabling remote DR screening in underserved areas where ophthalmologists may not be readily available.
[0013] Next, embodiments of the present application may provide a system for detecting and classifying diabetic retinopathy (DR) that minimizes the need for extensive manual screenings, making eye care more cost-effective and accessible, particularly in regions with limited medical resources.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1 illustrates a block diagram of an automated system for detecting and classifying diabetic retinopathy (DR) in retinal fundus images, according to an embodiment of the present invention;
[0018] FIG. 2A illustrates a Visual Geometry Group 16 (VGG16) map, according to an embodiment of the present invention;
[0019] FIG. 2B illustrates a Visual Geometry Group 16 (VGG16) architecture with a modified top layer, according to an embodiment of the present invention;
[0020] FIG. 2C illustrates a data flow diagram, according to an embodiment of the present invention;
[0021] FIG. 2D illustrates different stages of the diabetic retinopathy (DR), according to an embodiment of the present invention; and
[0022] FIG. 3 depicts a flowchart of a method for detecting and classifying diabetic retinopathy (DR) in retinal fundus image, according to an embodiment of the present invention.
[0023] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0024] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0025] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0026] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0027] FIG. 1 illustrates a block diagram of an automated system 100 (hereinafter referred to as the system 100) for detecting and classifying diabetic retinopathy (DR) in retinal fundus images, according to an embodiment of the present invention. The system 100 may be adapted to receive the retinal fundus images. Further, the system 100 may be adapted to detect a presence of the diabetic retinopathy (DR) in the received retinal fundus images. Moreover, the system 100 may classify and evaluate a stage of the detected diabetic retinopathy (DR) in the retinal fundus images.
[0028] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance processing speed and efficiency. For example, the system 100 may comprise a computing device 102, an image acquisition unit 104, a processing unit 106, and an image processor 108. In an embodiment of the present invention, the hardware components of the system 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0029] In an embodiment of the present invention, the computing device 102 may be adapted to upload the retinal fundus images to the system 100. The computing device 102 may be, but is not limited to, a laptop, a mobile phone, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computing device 102, including known, related art, and/or later developed technologies.
[0030] In an embodiment of the present invention, the image acquisition unit 104 may be adapted to receive the retinal fundus images from the computing device 102. The image acquisition unit 104 may process the received retinal fundus images to ensure compatibility with the system 100. This processing may include, but is not limited to, image format conversion, resolution adjustment, noise reduction, and enhancement for improved analysis. In an embodiment of the present invention, the image acquisition unit 104 may further validate the received retinal fundus images to ensure their integrity and completeness. This validation may involve checking for image clarity, proper illumination, and absence of artifacts that may hinder accurate diagnosis. If any discrepancies or quality issues are detected, the system 100 may generate an alert or request a re-upload of the images.
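The validation step above can be sketched with simple grayscale statistics. The following minimal Python check is illustrative only — the thresholds (`min_mean`, `max_mean`, `min_std`) and the choice of statistics are assumptions for illustration, not values taken from the specification.

```python
def validate_fundus_image(pixels, min_mean=30.0, max_mean=225.0, min_std=15.0):
    """Return True if grayscale pixel statistics suggest a usable image.

    `pixels` is a flat list of 0-255 grayscale values; the thresholds are
    illustrative placeholders, not values from the specification.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    # Reject images that are too dark, washed out, or nearly uniform
    # (a very low standard deviation suggests a blank or occluded frame).
    return min_mean <= mean <= max_mean and std >= min_std
```

A failing check would correspond to the alert or re-upload request described above.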
[0031] Once validated, the image acquisition unit 104 may transmit the retinal fundus images to an image processing module for further analysis. The image processing module may utilize machine learning algorithms, artificial intelligence techniques, or predefined computational models to analyze the images for potential abnormalities, such as diabetic retinopathy, macular degeneration, or other retinal diseases.
[0032] In an embodiment of the present invention, the processing unit 106 may be in communication with the image acquisition unit 104. The processing unit 106 may be configured to preprocess the retinal fundus images received by the image acquisition unit 104 using an image processor 108. The processing unit 106 may be configured to extract features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer. The hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model may be pre-trained on ImageNet.
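As one illustrative sketch of the additional dense layers and ReLU activation placed on top of the VGG16 backbone, the NumPy forward pass below assumes a 512-dimensional pooled feature vector, a 128-unit hidden layer, and 5 output classes — all of these sizes are assumptions for illustration, not dimensions stated in the specification.

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: zero out negative activations.
    return np.maximum(0.0, x)

def dense(x, w, b):
    # Fully connected layer: affine transform of the input.
    return x @ w + b

# Illustrative head: a stand-in 512-d VGG16 feature vector is passed
# through a dense + ReLU hidden layer, then projected to 5 class logits.
rng = np.random.default_rng(0)
features = rng.standard_normal(512)          # stand-in for VGG16 output
w1, b1 = rng.standard_normal((512, 128)), np.zeros(128)
w2, b2 = rng.standard_normal((128, 5)), np.zeros(5)
hidden = relu(dense(features, w1, b1))
logits = dense(hidden, w2, b2)
```

In practice the backbone weights would come from the ImageNet-pretrained model and only the added head would be trained from scratch.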
[0033] The processing unit 106 may be configured to classify the extracted features based on a retinal fundus images dataset. The processing unit 106 may be configured to evaluate the stage of diabetic retinopathy (DR) based on the extracted features and classification results. The stage is determined according to a multi-stage classification of disease. The multi-stage classification of the disease may be, but not limited to, No Diabetic Retinopathy (No DR), Mild Non-Proliferative Diabetic Retinopathy (Mild NPDR), Moderate Non-Proliferative Diabetic Retinopathy (Moderate NPDR), Severe Non-Proliferative Diabetic Retinopathy (Severe NPDR), Proliferative Diabetic Retinopathy (PDR), and so forth. Embodiments of the present invention are intended to include or otherwise cover any stage of diabetic retinopathy (DR), including known, related art, and/or later developed technologies. The multi-stage classification of the disease may further be illustrated in FIG. 2D.
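Mapping the classifier's per-class scores to one of the named stages can be sketched as a simple argmax over the label list; the score vector in the usage note is hypothetical.

```python
# The five stages named in the specification, in assumed severity order.
DR_STAGES = ["No DR", "Mild NPDR", "Moderate NPDR", "Severe NPDR", "PDR"]

def stage_from_scores(scores):
    """Map per-class scores (e.g. a softmax output) to a stage label."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return DR_STAGES[best]
```

For example, a hypothetical score vector `[0.1, 0.05, 0.7, 0.1, 0.05]` would be reported as "Moderate NPDR".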
[0034] The processing unit 106 may be configured to include early stopping criteria based on validation loss monitoring to prevent overfitting of the hybrid convolutional neural network (CNN) model. The processing unit 106 may be configured to utilize ensemble learning techniques to improve the reliability of the classification results. The processing unit 106 may be configured to generate a diagnostic report based on the classification results.
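A minimal, framework-independent sketch of validation-loss early stopping follows; the `patience` and `min_delta` hyperparameters are illustrative assumptions, not values from the specification.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to tolerate without improvement
        self.min_delta = min_delta  # minimum decrease that counts as improvement
        self.best = float("inf")
        self.stale = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

Called once per epoch with the current validation loss, this halts training as soon as the loss has stagnated for `patience` consecutive epochs.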
[0035] The processing unit 106 may be configured to integrate attention mechanisms, transformer-based models, and so forth to improve feature extraction and classification performance. The processing unit 106 may be configured to generate classification results using a semi-supervised learning approach incorporating Generative Adversarial Networks (GANs) to leverage labeled and unlabeled data.
[0036] In an embodiment of the present invention, the image processor 108 may be configured to preprocess the received retinal fundus images by preprocessing techniques such as, but not limited to, Gaussian filtering, histogram equalization, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the preprocessing techniques, including known, related art, and/or later developed technologies.
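Histogram equalization, one of the preprocessing techniques named above, can be sketched in NumPy as the standard cumulative-distribution remap for an 8-bit grayscale image; this is a generic textbook implementation offered for illustration, not the specification's exact pipeline.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize a 2-D uint8 grayscale image (assumed non-constant)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Standard CDF remap: the darkest occupied bin maps to 0, the last to 255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

Spreading the intensity distribution this way tends to make retinal vasculature and lesions more visible before feature extraction.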
[0037] FIG. 2A illustrates a Visual Geometry Group 16 (VGG16) map 200, according to an embodiment of the present invention. The Visual Geometry Group 16 (VGG16) map 200 may depict an input-to-output flow with pooling and dense layers. The map 200 highlights the hierarchical structure of convolutional layers, which progressively extract features from the input image. The pooling layers may reduce the spatial dimensions, and the dense layers may facilitate classification. This architecture is particularly effective for image recognition tasks due to its depth and ability to capture intricate patterns.
[0038] FIG. 2B illustrates a Visual Geometry Group 16 (VGG16) architecture 202 with a modified top layer, according to an embodiment of the present invention. The Visual Geometry Group 16 (VGG16) architecture may depict a combination of a dropout layer, a dense layer, and a sigmoid layer. The dropout layer is introduced to prevent overfitting by randomly deactivating neurons during training. The dense layer processes the flattened features, and the sigmoid layer provides a probabilistic output for binary classification. This modification enhances the model's robustness and accuracy in handling medical imaging data.
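The modified top layer described above (dropout, then a dense layer, then a sigmoid output) can be sketched in NumPy as below; the inverted-dropout scaling and the layer shapes are illustrative assumptions, not details given in the specification.

```python
import numpy as np

def sigmoid(z):
    # Squash logits to (0, 1) for a probabilistic binary output.
    return 1.0 / (1.0 + np.exp(-z))

def top_layer(features, w, b, drop_rate=0.5, training=False, rng=None):
    """Dropout -> dense -> sigmoid, sketching the modified VGG16 top of FIG. 2B."""
    x = features
    if training:
        # Inverted dropout: randomly zero units and rescale the survivors
        # so the expected activation is unchanged at inference time.
        rng = rng or np.random.default_rng()
        mask = rng.random(x.shape) >= drop_rate
        x = x * mask / (1.0 - drop_rate)
    return sigmoid(x @ w + b)
```

At inference (`training=False`) the dropout branch is skipped, matching the usual train/eval distinction.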
[0039] FIG. 2C illustrates a data flow diagram 204 of the system 100, according to an embodiment of the present invention. The system 100 may receive the retinal fundus images from the computing device 102. Further, the system 100 may resize the retinal fundus images while maintaining an aspect ratio of the images. Further, the system 100 may crop the retinal fundus images. Later, the system 100 may normalize and augment the retinal fundus images. Normalization ensures consistent pixel intensity ranges, while augmentation techniques such as rotation, flipping, and scaling improve the model's generalization. These preprocessing steps are critical for enhancing the quality and diversity of the training dataset.
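The aspect-ratio-preserving resize in the data flow above can be sketched as computing new dimensions so that the shorter side matches a target size; the shorter-side convention and the `target` value are assumptions for illustration, not parameters given in the specification.

```python
def resized_dims(width, height, target):
    """New (width, height) so the shorter side equals `target`, aspect ratio kept."""
    if width <= height:
        return target, round(height * target / width)
    return round(width * target / height), target
```

For example, a hypothetical 400x300 capture resized with `target=150` becomes 200x150; a subsequent center crop would then yield a square input for the network.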
[0040] FIG. 2D illustrates different stages of the diabetic retinopathy (DR) 206, according to an embodiment of the present invention. The stages range from mild non-proliferative DR, characterized by microaneurysms, to severe proliferative DR, marked by abnormal blood vessel growth. Intermediate stages include moderate non-proliferative DR, with intraretinal hemorrhages, and severe non-proliferative DR, exhibiting venous beading and intraretinal microvascular abnormalities. Understanding these stages is crucial for accurate diagnosis and timely intervention to prevent vision loss.
[0041] FIG. 3 depicts a flowchart of a method 300 for detecting and classifying the diabetic retinopathy (DR) in the retinal fundus image, according to an embodiment of the present invention.
[0042] At step 302, the system 100 may receive the retinal fundus images from the computing device 102.
[0043] At step 304, the system 100 may pre-process the retinal fundus images received by the image acquisition unit 104 using the image processor 108.
[0044] At step 306, the system 100 may extract features from the pre-processed retinal fundus images using the hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with the additional dense layers and the Rectified Linear Unit (ReLU) layer.
[0045] At step 308, the system 100 may classify the extracted features based on the retinal fundus images dataset.
[0046] At step 310, the system 100 may evaluate the stage of the diabetic retinopathy (DR) based on the extracted features and the classification results. The stage may be determined according to a multi-stage classification of the disease.
[0047] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0048] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
CLAIMS
I/We Claim:
1. An automated system (100) for detecting and classifying diabetic retinopathy (DR) in retinal fundus images, comprising:
an image acquisition unit (104) adapted to receive the retinal fundus images from a computing device (102);
a processing unit (106) in communication with the image acquisition unit (104), characterized in that the processing unit (106) is configured to:
preprocess the retinal fundus images received by the image acquisition unit (104) using an image processor (108);
extract features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer;
classify the extracted features based on a retinal fundus images dataset; and
evaluate a stage of diabetic retinopathy (DR) based on the extracted features and classification results, wherein the stage is determined according to a multi-stage classification of disease.
2. The system (100) as claimed in claim 1, wherein the image processor (108) is configured to preprocess the received retinal fundus images by preprocessing techniques selected from Gaussian filtering, histogram equalization, or a combination thereof.
3. The system (100) as claimed in Claim 1, wherein the processing unit (106) is configured to include early stopping criteria based on validation loss monitoring to prevent overfitting of the hybrid convolutional neural network (CNN) model.
4. The system (100) as claimed in Claim 1, wherein the processing unit (106) is configured to utilize ensemble learning techniques to improve reliability of the classification results.
5. The system (100) as claimed in Claim 1, wherein the processing unit (106) is configured to generate a diagnostic report based on the classification results.
6. The system (100) as claimed in claim 1, wherein the processing unit (106) is configured to integrate attention mechanisms, transformer-based models, or a combination thereof to improve feature extraction and classification performance.
7. The system (100) as claimed in claim 1, wherein the multi-stage classification of the disease is selected from No Diabetic Retinopathy (No DR), Mild Non-Proliferative Diabetic Retinopathy (Mild NPDR), Moderate Non-Proliferative Diabetic Retinopathy (Moderate NPDR), Severe Non-Proliferative Diabetic Retinopathy (Severe NPDR), Proliferative Diabetic Retinopathy (PDR), or a combination thereof.
8. The system (100) as claimed in Claim 1, wherein the processing unit (106) is configured to generate classification results using a semi-supervised learning approach incorporating Generative Adversarial Networks (GANs) to leverage labeled and unlabeled data.
9. The system (100) as claimed in Claim 1, wherein the hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model is pretrained on ImageNet.
10. A method (300) for detecting and classifying diabetic retinopathy (DR) in retinal fundus images, the method (300) comprising:
receiving, by an image acquisition unit (104), retinal fundus images from a computing device (102);
preprocessing the retinal fundus images received by the image acquisition unit (104) using an image processor (108);
extracting features from the pre-processed retinal fundus images using a hybrid Visual Geometry Group 16 (VGG16) convolutional neural network (CNN) model with additional dense layers and a Rectified Linear Unit (ReLU) layer;
classifying the extracted features based on a retinal fundus images dataset; and
evaluating a stage of diabetic retinopathy (DR) based on the extracted features and classification results, wherein the stage is determined according to a multi-stage classification of disease.
Date: March 10, 2025
Place: Noida
Dr. Keerti Gupta
Agent for the Applicant
(IN/PA-1529)
| # | Name | Date |
|---|---|---|
| 1 | 202541021590-STATEMENT OF UNDERTAKING (FORM 3) [11-03-2025(online)].pdf | 2025-03-11 |
| 2 | 202541021590-REQUEST FOR EARLY PUBLICATION(FORM-9) [11-03-2025(online)].pdf | 2025-03-11 |
| 3 | 202541021590-POWER OF AUTHORITY [11-03-2025(online)].pdf | 2025-03-11 |
| 4 | 202541021590-OTHERS [11-03-2025(online)].pdf | 2025-03-11 |
| 5 | 202541021590-FORM-9 [11-03-2025(online)].pdf | 2025-03-11 |
| 6 | 202541021590-FORM FOR SMALL ENTITY(FORM-28) [11-03-2025(online)].pdf | 2025-03-11 |
| 7 | 202541021590-FORM 1 [11-03-2025(online)].pdf | 2025-03-11 |
| 8 | 202541021590-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [11-03-2025(online)].pdf | 2025-03-11 |
| 9 | 202541021590-EDUCATIONAL INSTITUTION(S) [11-03-2025(online)].pdf | 2025-03-11 |
| 10 | 202541021590-DRAWINGS [11-03-2025(online)].pdf | 2025-03-11 |
| 11 | 202541021590-DECLARATION OF INVENTORSHIP (FORM 5) [11-03-2025(online)].pdf | 2025-03-11 |
| 12 | 202541021590-COMPLETE SPECIFICATION [11-03-2025(online)].pdf | 2025-03-11 |