
Hybrid SIFT-CNN Framework for Diabetic Retinopathy Detection: Advanced Feature Fusion and Deep Learning-Based Exudate Segmentation

Abstract: The present invention presents a novel Hybrid SIFT-CNN Framework for the detection and severity classification of diabetic retinopathy (DR) using retinal fundus images. The proposed method integrates handcrafted features extracted through the Scale-Invariant Feature Transform (SIFT) with deep features learned via Convolutional Neural Networks (CNNs), leveraging the strengths of both approaches. While SIFT effectively captures fine structural details such as vessel edges and lesions, it lacks contextual depth, which is compensated by the hierarchical representation learning of CNNs. A feature fusion strategy combines these complementary features to enhance classification performance by capturing both local and global patterns. Additionally, the framework incorporates a deep learning-based segmentation module, using networks such as U-Net or FCN, for accurate detection of exudates, a critical biomarker for DR progression. The system is designed to provide interpretable outputs to support ophthalmologists in diagnosis and treatment planning. The framework will be evaluated on publicly available datasets such as DIARETDB1, Messidor, and IDRiD using metrics including accuracy, sensitivity, specificity, and F1-score. Experimental results are expected to demonstrate improved accuracy and reduced false positives in DR detection.


Patent Information

Application #
Filing Date
24 May 2025
Publication Number
22/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. CHILUKURI RAJITHA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR P. PRAVEEN
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description: FIELD OF THE INVENTION
This invention relates to a Hybrid SIFT-CNN Framework for Diabetic Retinopathy Detection: Advanced Feature Fusion and Deep Learning-Based Exudate Segmentation.
BACKGROUND OF THE INVENTION
Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness among diabetic patients worldwide. Early detection and accurate classification of DR are crucial for effective intervention and treatment, significantly reducing the risk of severe vision loss. However, existing automated DR detection systems face challenges such as poor feature extraction, misclassification due to complex lesion patterns, and inefficient segmentation of pathological regions like exudates.
Standard feature extraction techniques such as the Scale-Invariant Feature Transform (SIFT) provide reliable shape descriptors that are invariant to scale, rotation, and illumination changes, but they miss broader image context. Deep learning methods, especially Convolutional Neural Networks (CNNs), perform better at hierarchical feature representation; however, they may not pay enough attention to the fine micro-level features that are critical for DR diagnosis. Existing models are further hindered by the lack of an efficient feature fusion approach, resulting in lower classification performance.
To overcome these drawbacks, this work proposes a Hybrid SIFT-CNN Framework for Diabetic Retinopathy Detection that combines SIFT descriptors of the image with CNN features. Moreover, to accurately detect and delineate exudates, a significant biomarker of DR progression, the framework includes a Deep Learning-Based Exudate Segmentation module. The proposed framework leverages both conventional and deep learning approaches to enhance accuracy, minimize the false positive rate, and segment pathological structures properly.
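As a rough illustration only, and not the claimed implementation, the hybrid pipeline described above can be sketched as follows. The three extractor functions (`extract_sift_features`, `extract_cnn_features`, `segment_exudates`) are hypothetical placeholders standing in for the actual SIFT descriptor extraction, CNN forward pass, and segmentation network.

```python
# Hypothetical sketch of the hybrid DR pipeline. The three extractors are
# placeholders only; they stand in for real SIFT, CNN, and U-Net/FCN stages.

def extract_sift_features(image):
    # Placeholder: a real system would compute SIFT descriptors (e.g. via
    # OpenCV) and pool them into a fixed-length local-feature vector.
    return [sum(row) / len(row) for row in image]

def extract_cnn_features(image):
    # Placeholder: a real system would take deep features from a CNN backbone.
    return [max(row) for row in image]

def segment_exudates(image, threshold=0.8):
    # Placeholder: a real system would run a U-Net/FCN; here, a naive
    # intensity threshold produces a binary exudate mask.
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def hybrid_dr_pipeline(image):
    """Fuse local (SIFT-like) and deep (CNN-like) features, plus a mask."""
    fused = extract_sift_features(image) + extract_cnn_features(image)
    mask = segment_exudates(image)
    return fused, mask

image = [[0.1, 0.9], [0.4, 0.2]]  # toy 2x2 "fundus image"
fused, mask = hybrid_dr_pipeline(image)
print(len(fused))  # 4: two local + two deep features
print(mask)        # [[0, 1], [0, 0]]
```

The point of the sketch is the data flow: two feature paths concatenated for classification, plus a separate segmentation output, mirroring the framework's architecture.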
The proposed system will be tested on real DR datasets, and its performance will be measured using parameters including accuracy, sensitivity, specificity, and F1-score. The goal of the present work is to design a DR detection model with higher accuracy and interpretability that can help ophthalmologists make more accurate diagnoses and plan treatment in a timely manner.
Current solutions for diabetic retinopathy detection include AI-powered ophthalmology applications such as EyeArt, RetMarker, and IDx-DR, which utilize deep learning algorithms for automated screening and diagnosis. Additionally, hospitals and diagnostic centers employ computer-aided diagnosis (CAD) systems and fundus imaging software to assist ophthalmologists in detecting retinal abnormalities.
However, these solutions often rely on conventional feature extraction techniques, leading to suboptimal segmentation and classification of retinal lesions. The absence of optimized hybrid frameworks that integrate Scale-Invariant Feature Transform (SIFT) and Convolutional Neural Networks (CNNs) for enhanced feature fusion limits the accuracy of existing commercial applications. Furthermore, the lack of robust exudate segmentation techniques affects their real-world effectiveness in diabetic retinopathy screening.
Most diabetic retinopathy detection models struggle with poor feature extraction and segmentation accuracy, leading to unreliable results. Insufficient preprocessing causes noise retention, poor contrast, and improper exudate segmentation. Traditional feature extraction methods fail to capture complex retinal structures, especially in low-resolution or variably illuminated images. The lack of hybrid feature fusion limits adaptability, and high false positive/negative rates reduce reliability. The Hybrid SIFT-CNN framework addresses these issues through advanced feature fusion and deep learning-based exudate segmentation, enhancing accuracy and robustness.
• The proposed Hybrid SIFT-CNN Framework combines the SIFT approach for extracting local features with CNNs, and demonstrates enhanced accuracy in exudate detection compared to methods based solely on handcrafted features or solely on CNNs.
• It presents a new feature fusion method that boosts discriminatory power and handles the variability of retinal images affected by differing illumination and noise better than traditional strategies based on raw pixel content.
• It increases computational efficiency compared to pure deep learning models by employing SIFT to extract key features before feeding them to the CNN architecture for further processing.
• It is much more effective at detecting exudates with weak appearance and low contrast, unlike prior methods that are highly sensitive to noise or struggle with small exudates.
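The fusion step referred to in the advantages above can be illustrated with a minimal sketch: L2-normalizing each feature vector before concatenation so that neither the handcrafted nor the deep features dominate by magnitude. The vectors below are toy values, and normalize-then-concatenate is one common fusion choice, not necessarily the exact strategy of the invention.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm (zero vectors are returned unchanged)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def fuse_features(sift_vec, cnn_vec):
    """Concatenate the normalized handcrafted and deep feature vectors."""
    return l2_normalize(sift_vec) + l2_normalize(cnn_vec)

sift_vec = [3.0, 4.0]        # toy SIFT-derived descriptor
cnn_vec = [0.0, 5.0, 12.0]   # toy CNN embedding
fused = fuse_features(sift_vec, cnn_vec)
print(fused)  # [0.6, 0.8, 0.0, 0.384..., 0.923...]
```

After normalization each sub-vector has unit length, so a downstream classifier sees both feature families on a comparable scale regardless of their original magnitudes.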
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The present invention outlines a novel feature fusion method that combines SIFT and CNNs, referred to as the Hybrid SIFT-CNN Framework, with the potential to considerably enhance the detection of diabetic retinopathy. The SIFT-based handcrafted feature extraction stage effectively preserves significant structures such as vessel edges, lesions, and microaneurysms, providing robustness against lighting variations and differences in scale and orientation. However, SIFT does not capture the broader context of image regions, which is addressed by adding a CNN-based deep feature learning module. CNNs learn hierarchical representations of the image, enabling the model to differentiate between normal and pathological conditions in fundus images. Both approaches are integrated in the proposed framework through a feature fusion strategy that combines the SIFT features with the deep features of the CNN. This integration of features drawn from two different levels improves discrimination and incorporates both local and global contextual patterns, which can improve classification. Moreover, a deep learning-based exudate segmentation module is included in the framework to accurately segment exudates, a significant biomarker of DR progression, using an advanced segmentation network such as U-Net or a Fully Convolutional Network (FCN). The segmentation module enables accurate identification of lesions and allows DR severity to be graded more effectively. The proposed system is intended to classify fundus images into DR severity levels so that the results are easy for an ophthalmologist to interpret and use in diagnosis and treatment planning.
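Segmentation quality for exudate masks is commonly reported with overlap measures such as the Dice coefficient; the snippet below is an illustrative computation on toy binary masks, as the specification does not commit to a particular segmentation metric.

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total > 0 else 1.0

pred = [1, 1, 0, 0, 1]  # toy predicted exudate mask (flattened)
true = [1, 0, 0, 1, 1]  # toy ground-truth mask (flattened)
print(dice_coefficient(pred, true))  # 2*2 / (3+3) = 0.666...
```

A Dice score near 1 indicates the predicted exudate regions closely match the annotated ones, which is the behaviour the segmentation module aims for.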
For evaluation, the proposed framework will use publicly accessible DR datasets such as DIARETDB1, Messidor, and IDRiD, with performance measures including accuracy, sensitivity, specificity, and F1-score. Using both SIFT features and CNN features in the proposed Hybrid SIFT-CNN Framework is expected to increase accuracy and reduce false positives in the diagnosis of diabetic retinopathy.
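The four evaluation metrics named above all derive from the confusion-matrix counts. As a self-contained sketch (the counts in the example are invented, not experimental results):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, and F1 from counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return accuracy, sensitivity, specificity, f1

# Invented example counts, e.g. from a DR-vs-no-DR validation split.
acc, sens, spec, f1 = classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(round(acc, 3), round(sens, 3), round(spec, 3), round(f1, 3))
# 0.85 0.8 0.9 0.842
```

Sensitivity matters most for screening (missed DR cases are costly), while specificity controls the false positives the framework aims to reduce.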
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention outlines a novel feature fusion method that combines SIFT and CNNs, referred to as the Hybrid SIFT-CNN Framework, with the potential to considerably enhance the detection of diabetic retinopathy. The SIFT-based handcrafted feature extraction stage effectively preserves significant structures such as vessel edges, lesions, and microaneurysms, providing robustness against lighting variations and differences in scale and orientation. However, SIFT does not capture the broader context of image regions, which is addressed by adding a CNN-based deep feature learning module. CNNs learn hierarchical representations of the image, enabling the model to differentiate between normal and pathological conditions in fundus images. Both approaches are integrated in the proposed framework through a feature fusion strategy that combines the SIFT features with the deep features of the CNN. This integration of features drawn from two different levels improves discrimination and incorporates both local and global contextual patterns, which can improve classification. Moreover, a deep learning-based exudate segmentation module is included in the framework to accurately segment exudates, a significant biomarker of DR progression, using an advanced segmentation network such as U-Net or a Fully Convolutional Network (FCN). The segmentation module enables accurate identification of lesions and allows DR severity to be graded more effectively. The proposed system is intended to classify fundus images into DR severity levels so that the results are easy for an ophthalmologist to interpret and use in diagnosis and treatment planning.
For evaluation, the proposed framework will use publicly accessible DR datasets such as DIARETDB1, Messidor, and IDRiD, with performance measures including accuracy, sensitivity, specificity, and F1-score. Using both SIFT features and CNN features in the proposed Hybrid SIFT-CNN Framework is expected to increase accuracy and reduce false positives in the diagnosis of diabetic retinopathy.
NOVELTY:
The proposed diabetic retinopathy diagnosis framework, called the Hybrid SIFT-CNN Framework, is based on the efficacy of SIFT for extracting local features and of CNNs for segmenting exudates, together with an improved feature fusion process, so as to increase effectiveness across various types of retinal images.
Claims: 1. A system for detecting diabetic retinopathy, comprising: a SIFT-based feature extraction module; a CNN-based deep feature extraction module; a feature fusion strategy; and a deep learning-based exudate segmentation module.
2. The system as claimed in claim 1, wherein the feature fusion module is configured to combine the SIFT-based features and the CNN-based deep features into a unified feature representation.
3. The system as claimed in claim 1, wherein the classification module is configured to classify the fused feature representation into one of a plurality of diabetic retinopathy severity levels.
4. The system as claimed in claim 1, wherein the segmentation module comprises a deep learning-based network selected from the group consisting of a U-Net and a Fully Convolutional Network (FCN), and wherein the segmentation module is configured to identify and delineate exudates in the retinal fundus image.
5. The system as claimed in claim 1, wherein the classification output and segmented lesion information are configured to be presented in a format interpretable by a medical practitioner for the diagnosis and treatment planning of diabetic retinopathy.

Documents

Application Documents

# Name Date
1 202541050031-STATEMENT OF UNDERTAKING (FORM 3) [24-05-2025(online)].pdf 2025-05-24
2 202541050031-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-05-2025(online)].pdf 2025-05-24
3 202541050031-POWER OF AUTHORITY [24-05-2025(online)].pdf 2025-05-24
4 202541050031-FORM-9 [24-05-2025(online)].pdf 2025-05-24
5 202541050031-FORM FOR SMALL ENTITY(FORM-28) [24-05-2025(online)].pdf 2025-05-24
6 202541050031-FORM 1 [24-05-2025(online)].pdf 2025-05-24
7 202541050031-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-05-2025(online)].pdf 2025-05-24
8 202541050031-EVIDENCE FOR REGISTRATION UNDER SSI [24-05-2025(online)].pdf 2025-05-24
9 202541050031-EDUCATIONAL INSTITUTION(S) [24-05-2025(online)].pdf 2025-05-24
10 202541050031-DRAWINGS [24-05-2025(online)].pdf 2025-05-24
11 202541050031-DECLARATION OF INVENTORSHIP (FORM 5) [24-05-2025(online)].pdf 2025-05-24
12 202541050031-COMPLETE SPECIFICATION [24-05-2025(online)].pdf 2025-05-24