
AI-Powered IoT Framework for Real-Time Detection, Monitoring, and Doctor-Assisted Diagnosis of Diabetic Retinopathy

Abstract: The present invention discloses an AI-powered IoT framework for real-time detection, monitoring, and doctor-assisted diagnosis of diabetic retinopathy (DR). The system collects raw retinal image data from various hospitals, which is then subjected to preprocessing techniques such as histogram equalization and RGB-to-grayscale conversion to enhance image quality and consistency. Augmentation techniques including rotation, flipping, zooming, and brightness adjustments are applied to improve model generalization and reliability. A Fast Region-based Convolutional Neural Network (Fast RCNN) is employed for feature extraction, lesion detection, and classification of DR into five stages: No DR, Mild DR, Moderate DR, Severe DR, and Proliferative DR. The classified data is stored in a cloud-based IoT framework, enabling real-time access through doctor web portals and patient mobile applications. The system also includes performance validation using sensitivity, specificity, F1-score, precision-recall, and AUC-ROC metrics with k-fold cross-validation. This integrated AI-IoT framework facilitates early diagnosis, continuous monitoring, and remote medical care with proactive insights and alerts for effective diabetic eye health management.


Patent Information

Application # 202541053268
Filing Date
02 June 2025
Publication Number
24/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. MR. M. VEERANNA
1-8-110, KAKATHIYANAGAR, YELLANDU, BHADRADRI KOTHAGUDEM, TELANGANA
2. DR. CH. RAJENDRA PRASAD
55-3-160/1, ROAD NO-3, RUDHRAMADEVI COLONY, BHEEMARAM, WARANGAL, TELANGANA
3. DR. R. SHASHANK
35-5-321, VIJAYAGANAPATHY NAGAR, OPP.KU II GATE, HANAMKONDA, TELANGANA

Specification

Description: FIELD OF THE INVENTION
This invention relates to an AI-powered IoT framework for real-time detection, monitoring, and doctor-assisted diagnosis of diabetic retinopathy (DR).
BACKGROUND OF THE INVENTION
Diabetic Retinopathy (DR) is a leading cause of vision loss, so effective treatment depends on timely diagnosis followed by ongoing evaluation. Conventional screening methods are expensive, time-consuming, and largely inaccessible in remote locations. Combining AI with IoT devices enables timely, automated DR diagnosis and continuous patient monitoring, which reduces the workload of medical professionals and improves patient outcomes through continuous assessment of retinal health.
EXISTING SOLUTIONS / PRIOR ART/RELATED APPLICATIONS & PATENTS
Telemedicine-based DR screening; NVIDIA Clara FL; Google Health multi-modal AI systems; Google DeepMind; Eyenuk EyeArt AI system; IDx-DR; Google ARDA; deep-learning models for DR prediction; RetinaVue.
The current solutions fall short of the required standards in the following areas:
• Explainable AI: To trust outcomes, clinicians require transparent decision-making processes (such as heatmaps of retinal abnormalities).
• Federated Learning: Models are trained on heterogeneous, decentralized datasets without compromising patient privacy.
• Edge Computing: Processing data locally on Internet of Things devices to lessen dependency on cloud infrastructure.
• Affordable Hardware: Developing robust, affordable retinal imaging instruments for environments with limited resources.
Feature | Proposed Solution | Previous Solutions
Enhanced Pre-processing | Uses advanced histogram equalization and adaptive color-space conversion. | Limited pre-processing techniques, often basic grayscale conversion.
Lesion Detection | Employs Fast RCNN for automated and precise lesion detection. | Relies on handcrafted feature extraction, leading to variability.
Robust Data Augmentation | Incorporates Gaussian noise, contrast enhancement, and multi-angle transformations. | Limited augmentation, reducing robustness to real-world variation.
Improved Classification Performance | Leverages domain adaptation and federated learning for better generalization. | Lacks adaptation strategies, leading to dataset bias and reduced accuracy.
Optimized Computational Efficiency | Uses Faster RCNN with region proposal networks for reduced overhead. | Traditional CNN-based architectures require more computation.

SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The block diagram of the proposed AI-powered IoT framework for real-time detection and monitoring of DR is illustrated in Fig. 1. The system collects raw data, passes it through a pre-processing unit and an augmentation stage, and trains on the resulting dataset using a Fast Region-based Convolutional Neural Network (Fast RCNN) for classification. The classified results are stored in a cloud-based IoT framework, which doctors access through a web portal and patients access through a mobile application.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The proposed system follows six major steps. First, raw retinal image data is collected from various hospitals. The collected data contains noise and images of inconsistent size, so preprocessing is applied to resize the images to a uniform resolution and remove the noise. Preprocessing plays a crucial role in preparing retinal images for diabetic retinopathy (DR) detection by enhancing image quality and ensuring uniformity: histogram equalization improves contrast by redistributing pixel intensity values, and the images are then converted from RGB to grayscale. In the next stage, augmentation, techniques such as rotation, flipping, zooming, and brightness adjustments create variations in the retinal images, reducing overfitting and improving model generalization. Augmentation ensures that the deep learning (DL) model learns to recognize lesions under various lighting conditions, orientations, and resolutions, thereby enhancing its reliability in real-world scenarios.
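The preprocessing and augmentation steps above can be sketched with plain NumPy (a minimal illustration, assuming non-constant 8-bit images; production systems would typically use an image library such as OpenCV):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    # Luminance-weighted RGB-to-grayscale conversion (ITU-R BT.601 weights).
    return (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114).astype(np.uint8)

def histogram_equalization(gray: np.ndarray) -> np.ndarray:
    # Redistribute pixel intensities so the cumulative histogram becomes
    # approximately linear (assumes the image is not a single flat value).
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[gray]

def augment(gray: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Simple augmentations: 90-degree rotations, horizontal flip, brightness shift.
    img = np.rot90(gray, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    shift = int(rng.integers(-30, 31))
    return np.clip(img.astype(int) + shift, 0, 255).astype(np.uint8)
```

Each training image would pass through `to_grayscale` and `histogram_equalization` once, while `augment` is applied repeatedly with different random draws to generate the variations described above.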
After preprocessing and augmentation, the images are passed to automated dataset analysis for lesion detection and evaluation. This step verifies that all major features, including microaneurysms, hemorrhages, and exudates, remain perceptible in the labeled, processed images used for evaluation. Testing and validation determine model performance and generalization: the dataset is divided into three sections, for training, validation, and testing, so that model accuracy can be assessed with sensitivity and specificity metrics. Robustness is ensured through k-fold cross-validation and similar techniques, and the model's effectiveness is measured with performance metrics including F1-score, AUC-ROC, and precision-recall.
The fourth step utilizes a base model for feature extraction together with a Fast Region-based Convolutional Neural Network (Fast RCNN) architecture, an object detection framework based on DL. The feature vectors generated from the feature maps extracted by Fast RCNN serve as inputs for DR classification. This network has demonstrated an effective ability to discover lesion-related patterns with clear classification accuracy, and the model employs this architectural design to achieve excellent detection accuracy in DR conditions. Fast RCNN extracts features in a single forward pass, decreasing overall computational requirements. A Region Proposal Network (RPN) is integrated, as in the Faster RCNN framework, to enhance the efficiency of lesion detection. Together, these architectural approaches enable the DR classification model to detect and classify retinal abnormalities with both high speed and high precision.
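The two-stage propose-then-classify flow can be illustrated structurally as below. This is a toy sketch of the pipeline shape only, not the actual network: `score_region` is a hypothetical stand-in for the shared-feature classification head, and `proposals` stand in for RPN output.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixel coordinates

@dataclass
class Detection:
    box: Box
    label: str
    score: float

def detect_lesions(proposals: List[Box],
                   score_region: Callable[[Box], Tuple[str, float]],
                   threshold: float = 0.5) -> List[Detection]:
    # Fast RCNN scores every proposed region using features computed once in a
    # shared forward pass; here each proposal is classified independently and
    # only confident detections are kept.
    detections = []
    for box in proposals:
        label, score = score_region(box)
        if score >= threshold:
            detections.append(Detection(box, label, score))
    return detections
```

A real implementation would replace `score_region` with the trained CNN head and generate `proposals` from the RPN rather than receiving them as input.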
In the fifth step, based on extracted features, the system classifies the DR into five stages: 1. No DR 2. Mild DR 3. Moderate DR 4. Severe DR 5. Proliferative DR
• No DR refers to healthy retinal images with no signs of DR.
• Mild DR is characterized by the presence of a few microaneurysms without significant leakage.
• Moderate DR exhibits increased microaneurysms, hemorrhages, and hard exudates.
• Severe DR involves extensive hemorrhages, venous beading, and significant leakage, posing a high risk of vision impairment.
• Proliferative DR is the most advanced stage, marked by neovascularization, fibrosis, and severe retinal damage, often leading to blindness.
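The stage descriptions above can be sketched as a mapping from detected lesion types to an overall grade (a simplified illustrative rule following the bullet list, not a clinical grading protocol; the lesion label strings are hypothetical):

```python
DR_STAGES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

def grade_dr(lesion_labels) -> str:
    # Map the set of detected lesion types to an overall DR stage, checking
    # the most advanced findings first.
    labels = set(lesion_labels)
    if "neovascularization" in labels or "fibrosis" in labels:
        return DR_STAGES[4]  # Proliferative DR
    if "venous_beading" in labels or "extensive_hemorrhage" in labels:
        return DR_STAGES[3]  # Severe DR
    if "hemorrhage" in labels or "hard_exudate" in labels:
        return DR_STAGES[2]  # Moderate DR
    if "microaneurysm" in labels:
        return DR_STAGES[1]  # Mild DR
    return DR_STAGES[0]      # No DR: no lesions detected
```

In the proposed system this decision is learned by the Fast RCNN classifier rather than hand-coded; the function only makes the stage ordering explicit.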
Finally, the Doctor/Patient App is an intuitive interface that allows the AI-powered IoT device to monitor DR in real time. It also provides doctors with AI-powered retinal examination through web access, as shown in Fig. 1, together with a severity rating and predictive insights for quick action. Patients receive proactive health advice, risk assessments, and customized reports through the mobile app. The IoT platform enables remote medical care with streamlined data exchange and automatic disease-progression alerts. This state-of-the-art medical tool gives patients and doctors real-time access to data-driven diabetic eye health management, making AI diagnosis and patient treatment more interconnected.
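The data exchange between the screening device and the cloud platform could take the shape below (a minimal sketch; the field names and alert rule are illustrative assumptions, not a fixed schema of the invention):

```python
import json
from datetime import datetime, timezone

# Stages that should trigger an automatic doctor notification (assumed rule).
ALERT_STAGES = {"Severe DR", "Proliferative DR"}

def build_report(patient_id: str, stage: str, confidence: float) -> str:
    # Assemble the JSON payload a screening device might push to the
    # cloud-based IoT framework for the doctor portal and patient app.
    payload = {
        "patient_id": patient_id,
        "stage": stage,
        "confidence": round(confidence, 3),
        "alert": stage in ALERT_STAGES,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

The cloud side would persist each report and, when `alert` is true, push a disease-progression warning to the doctor's web portal and the patient's mobile app.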
NOVELTY:
A distinguishing feature of the AI-powered IoT framework for real-time detection and monitoring of DR is its seamless combination of IoT-enabled real-time monitoring with AI-powered DL. It differs from conventional screening techniques by providing continuous, automatic, and remote detection, supporting early diagnosis and preventive action. Enhanced by IoT connectivity, cloud-based analytics and real-time patient monitoring are guaranteed, with adaptive AI models for better diagnosis.
Claims:
1. An AI-powered IoT framework for real-time detection and monitoring of diabetic retinopathy (DR), comprising:
• a data acquisition unit configured to collect retinal image data from hospital databases or screening devices;
• a preprocessing unit configured to perform histogram equalization, noise removal, and RGB to grayscale conversion to enhance image quality and uniformity;
• an augmentation module applying image rotation, flipping, zooming, and brightness adjustments to improve model generalization;
• a deep learning model based on Fast Region-based Convolutional Neural Network (Fast RCNN) architecture for lesion detection and DR classification;
• a cloud-based IoT platform for storing classified results and enabling access through doctor web portals and patient mobile applications.
2. The framework as claimed in claim 1, wherein the Fast RCNN model is configured to extract feature maps from input images and classify DR into five stages: No DR, Mild DR, Moderate DR, Severe DR, and Proliferative DR.
3. The framework as claimed in claim 1, wherein model performance is evaluated using metrics including sensitivity, specificity, F1-score, precision-recall, and AUC-ROC, and validated using k-fold cross-validation techniques.
4. The framework as claimed in claim 1, wherein a Region Proposal Network (RPN) is integrated with the Fast RCNN architecture to improve lesion localization accuracy and computational efficiency in real-time scenarios.
5. The framework as claimed in claim 1, further comprising a web and mobile application interface for providing real-time DR monitoring, AI-assisted diagnostic reports, severity predictions, customized patient health advice, and alert-based communication between patients and medical professionals.

Documents

Application Documents

# Name Date
1 202541053268-STATEMENT OF UNDERTAKING (FORM 3) [02-06-2025(online)].pdf 2025-06-02
2 202541053268-REQUEST FOR EARLY PUBLICATION(FORM-9) [02-06-2025(online)].pdf 2025-06-02
3 202541053268-POWER OF AUTHORITY [02-06-2025(online)].pdf 2025-06-02
4 202541053268-FORM-9 [02-06-2025(online)].pdf 2025-06-02
5 202541053268-FORM FOR SMALL ENTITY(FORM-28) [02-06-2025(online)].pdf 2025-06-02
6 202541053268-FORM 1 [02-06-2025(online)].pdf 2025-06-02
7 202541053268-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-06-2025(online)].pdf 2025-06-02
8 202541053268-EVIDENCE FOR REGISTRATION UNDER SSI [02-06-2025(online)].pdf 2025-06-02
9 202541053268-EDUCATIONAL INSTITUTION(S) [02-06-2025(online)].pdf 2025-06-02
10 202541053268-DECLARATION OF INVENTORSHIP (FORM 5) [02-06-2025(online)].pdf 2025-06-02
11 202541053268-COMPLETE SPECIFICATION [02-06-2025(online)].pdf 2025-06-02