Abstract: FEDERATED LEARNING FRAMEWORK FOR MEDICAL IMAGE CLASSIFICATION ABSTRACT A federated learning framework (100) for medical image classification is disclosed. The framework (100) comprises an image acquisition unit (104), installed at individual healthcare institutions, adapted to receive medical images from a computing device (102), and a processing unit (106) configured to: fetch the medical images from the image acquisition unit (104); embed model weights and feature representations in the medical images, wherein the model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer; execute a hybrid deep convolutional neural network (CNN) architecture for medical image classification, wherein the hybrid deep convolutional neural network (CNN) comprises convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, or a combination thereof; and optimize the classified medical image with dropout layers to prevent overfitting. Claims: 10, Figures: 3
Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to medical image classification and particularly to a federated learning framework for medical image classification.
Description of Related Art
[002] Neurodegenerative diseases such as Alzheimer’s and Parkinson’s significantly impact millions worldwide, leading to progressive cognitive decline and motor impairments. The increasing prevalence of these disorders underscores the need for early and accurate diagnosis, which plays a crucial role in patient management and treatment. Traditional diagnostic methods, including clinical assessments and imaging techniques such as Magnetic Resonance Imaging (MRI), often require significant expertise and resources. Moreover, these methods may not always detect early-stage disease, limiting timely intervention.
[003] Machine learning and deep learning models have shown promise in the diagnosis of neurodegenerative diseases, leveraging medical imaging and patient data to identify patterns indicative of disease progression. However, existing AI-driven diagnostic approaches face several challenges. Many models rely on centralized datasets that are difficult to compile due to strict data privacy regulations and variability across healthcare institutions. Additionally, models trained on specific datasets may struggle to generalize across different patient demographics, imaging protocols, or geographic regions, leading to inconsistencies in diagnostic performance.
[004] Beyond data accessibility and generalization issues, high-performance AI models often require extensive computational resources, making them less feasible for widespread adoption in resource-limited healthcare settings. Furthermore, privacy concerns and regulatory restrictions hinder collaboration between healthcare institutions, limiting the scalability of AI-driven diagnostic frameworks. Addressing these challenges is essential for advancing neurodegenerative disease diagnostics, enabling more accurate, accessible, and privacy-preserving solutions for global healthcare systems.
[005] There is thus a need for an improved and advanced federated learning framework for medical image classification that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a federated learning framework for medical image classification. The framework comprises an image acquisition unit, installed at individual healthcare institutions, adapted to receive medical images from a computing device. The framework further comprises a processing unit in communication with the image acquisition unit. The processing unit is configured to fetch the medical images from the image acquisition unit; embed model weights and feature representations in the medical images, wherein the model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer; execute a hybrid deep convolutional neural network (CNN) architecture for medical image classification, wherein the hybrid deep convolutional neural network (CNN) comprises convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, or a combination thereof; and optimize the classified medical image with dropout layers to prevent overfitting.
[007] Embodiments in accordance with the present invention further provide a method for medical image classification using a federated learning framework. The method comprises the steps of fetching medical images from an image acquisition unit; embedding model weights and feature representations in the medical images, wherein the model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer; executing a hybrid deep convolutional neural network (CNN) architecture for medical image classification, wherein the hybrid deep convolutional neural network (CNN) comprises convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, or a combination thereof; and optimizing the classified medical image with dropout layers to prevent overfitting.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a federated learning framework for medical image classification.
[009] Next, embodiments of the present application may provide a federated learning framework that enables collaborative training across multiple healthcare institutions without sharing sensitive patient data, ensuring compliance with privacy regulations.
[0010] Next, embodiments of the present application may provide a federated learning framework that allows the model to learn from diverse data sources, enhancing its ability to generalize across different patient populations and imaging protocols.
[0011] Next, embodiments of the present application may provide a federated learning framework that reduces the need for high computational resources at a central location, making the solution accessible to smaller healthcare facilities with limited infrastructure.
[0012] Next, embodiments of the present application may provide a federated learning framework that improves feature extraction from medical images, leading to a higher accuracy rate (0.79) compared to existing AI-based models.
[0013] Next, embodiments of the present application may provide a federated learning framework that supports multi-institutional collaboration, allowing seamless integration across hospitals and research centers, which helps in continuously improving the model’s performance with new data.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1 illustrates a block diagram of a federated learning framework for medical image classification, according to an embodiment of the present invention;
[0018] FIG. 2A illustrates a graph representing a receiver operating characteristic (ROC) curve, according to an embodiment of the present invention;
[0019] FIG. 2B illustrates a graph representing a precision recall curve, according to an embodiment of the present invention; and
[0020] FIG. 3 depicts a flowchart of a method for medical image classification using a federated learning framework, according to an embodiment of the present invention.
[0021] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0022] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0023] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0024] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0025] FIG. 1 illustrates a block diagram of a federated learning framework 100 (hereinafter referred to as the framework 100) for medical image classification, according to an embodiment of the present invention. The framework 100 may be adapted to receive medical images. Further, the framework 100 may be adapted to detect a presence of neurodegenerative disease in the received medical images. Moreover, the framework 100 may classify and evaluate a stage of the detected neurodegenerative disease in the medical images.
[0026] According to the embodiments of the present invention, the framework 100 may incorporate non-limiting hardware components to enhance processing speed and efficiency. For example, the framework 100 may comprise a computing device 102, an image acquisition unit 104, and a processing unit 106. In an embodiment of the present invention, the hardware components of the framework 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0027] In an embodiment of the present invention, the computing device 102 may be adapted to upload the medical images to the framework 100. The computing device 102 may be, but not limited to, a laptop, a mobile, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computing device 102, including known, related art, and/or later developed technologies.
[0028] In an embodiment of the present invention, the image acquisition unit 104 may be adapted to receive the medical images from the computing device 102.
[0029] In an embodiment of the present invention, the processing unit 106 may be in communication with the image acquisition unit 104. The processing unit 106 may be configured to fetch the medical images from the image acquisition unit 104. The processing unit 106 may be configured to embed model weights and feature representations in the medical images. The model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer.
[0030] The processing unit 106 may be configured to execute a hybrid deep convolutional neural network (CNN) architecture for medical image classification. The hybrid deep convolutional neural network (CNN) may comprise convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, and so forth. The hybrid deep convolutional neural network (CNN) architecture may be optimized for neurodegenerative disease detection, including Alzheimer’s and Parkinson’s disease, through advanced feature extraction and classification techniques.
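By way of a non-limiting illustrative sketch, the dimensionality reduction performed by the max-pooling layers described above can be traced through a stack of convolution, batch normalization, and max-pooling blocks for a 224 x 224 input. The block count, kernel sizes, strides, and padding below are illustrative assumptions, not values fixed by this disclosure.

```python
# Illustrative sketch: trace feature-map sizes through a hybrid CNN stack
# (conv -> batch-norm -> max-pool blocks, then fully connected layers).
# Layer counts and kernel/stride/padding values are assumptions.

def conv2d_out(size, kernel=3, stride=1, padding=1):
    """Spatial size after a square convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    """Spatial size after max-pooling (dimensionality reduction)."""
    return (size - kernel) // stride + 1

def trace_hybrid_cnn(input_size=224, blocks=3):
    """Return the spatial size after each conv/batch-norm/max-pool block.

    Batch normalization standardizes features without changing the spatial
    size, so only the conv and pool stages affect the trace.
    """
    sizes = [input_size]
    size = input_size
    for _ in range(blocks):
        size = conv2d_out(size)   # feature extraction; size preserved (pad=1)
        size = maxpool_out(size)  # halves each spatial dimension
        sizes.append(size)
    return sizes

# 224 -> 112 -> 56 -> 28 before the fully connected classification layers.
print(trace_hybrid_cnn())  # [224, 112, 56, 28]
```

The fully connected layers would then flatten the final 28 x 28 feature maps into a vector for classification.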
[0031] The processing unit 106 may be configured to optimize the classified medical image with dropout layers to prevent overfitting. The medical images may be optimized using an Adam optimizer. The optimization techniques may include, but are not limited to, image transformations such as image rotation, image zooming, image flipping, and so forth. In a preferred embodiment of the present invention, the medical images may be cropped at 224 pixels by 224 pixels. The framework 100 may support an adaptive learning rate to fine-tune performance over time. The learning rate may be 0.0001. Embodiments of the present invention are intended to include or otherwise cover any learning rate.
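The Adam optimizer and the 0.0001 learning rate referenced above may be sketched in a minimal, framework-free form. The quadratic objective, the starting value, and the step count are illustrative assumptions; the remaining hyperparameters are Adam's commonly used defaults.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.0001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and
    its square (v), bias correction, then the adaptive parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)       # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)       # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative objective f(theta) = (theta - 1)^2, gradient 2 * (theta - 1).
theta, m, v = 1.5, 0.0, 0.0
for t in range(1, 20001):
    grad = 2.0 * (theta - 1.0)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(round(theta, 2))  # close to the minimum at 1.0
```

Because the effective step is scaled by the gradient moments, Adam behaves like an adaptive learning rate, consistent with the fine-tuning behavior described above.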
[0032] The processing unit 106 may be configured to aggregate learned parameters from multiple local models across distributed healthcare institutions and to evaluate a global model performance using key metrics such as, but not limited to, an accuracy, a precision, a recall, a receiver operating characteristic (ROC-AUC) curve, and so forth. The processing unit 106 may be configured to carry out a decentralized model evaluation to enable secure, scalable, and robust medical image classification without centralizing patient data.
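Aggregation of learned parameters from multiple local models, as described above, is commonly realized by a weighted (federated) average. A minimal sketch follows, in which the weight vectors are plain lists and the per-institution sample counts are illustrative assumptions; only the learned parameters travel, never the raw medical images.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of per-institution model weights (FedAvg-style).

    client_weights: one weight vector per local model.
    client_sizes: local training-sample count per institution, used to
    weight each contribution to the global model.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Three hypothetical hospitals with different data volumes.
hospitals = [[0.2, 0.4], [0.6, 0.0], [0.4, 0.8]]
sizes = [100, 300, 100]
global_w = federated_average(hospitals, sizes)
print([round(w, 2) for w in global_w])  # [0.48, 0.24]
```

The institution with the most samples contributes most strongly, while no institution ever transmits its underlying images.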
[0033] FIG. 2A illustrates a graph 200 representing a receiver operating characteristic (ROC) curve, according to an embodiment of the present invention. The receiver operating characteristic (ROC) curve may be a graphical representation that illustrates a performance of the framework 100 across various threshold settings, plotting a true positive rate against a false positive rate. The receiver operating characteristic (ROC) curve may evaluate an ability of the framework 100 to distinguish between two classes. The two classes may be a presence of the neurodegenerative disease and an absence of the neurodegenerative disease. Further, the area under the curve (AUC) may represent a probability that the framework 100, if given a randomly chosen positive example and a randomly chosen negative example, will rank the randomly chosen positive example higher than the randomly chosen negative example.
[0034] In an embodiment of the present invention, an x-axis of the graph 200 may represent the false positive rate. Further, a y-axis of the graph 200 may represent the true positive rate. As the graph 200 may depict, the framework 100 may have distributed classes, namely class 0, class 1, and class 2, with area under the curve (AUC) values of 0.91, 0.88, and 1.00 respectively.
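The ranking interpretation of the area under the curve (AUC) described above can be computed directly: the AUC equals the fraction of positive-negative score pairs in which the positive example is ranked higher (ties counting one half). A minimal sketch, with hypothetical scores:

```python
def auc_by_ranking(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive example is
    scored higher than a randomly chosen negative example."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties split the credit
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical per-class scores: 7 of the 9 pairs are ranked correctly.
print(auc_by_ranking([0.9, 0.8, 0.7], [0.4, 0.3, 0.85]))
```

A perfectly separated class, such as the class 2 reported above, would score 1.0 under this pairwise count.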
[0035] FIG. 2B illustrates a graph 202 representing a precision recall curve, according to an embodiment of the present invention. The precision recall curve may be a visual representation of a trade-off between a precision and a recall in the framework 100. The precision may refer to a proportion of correct positive predictions (True Positives) out of all positive predictions made by the framework 100 (True Positives + False Positives). The precision may be calculated by an equation (1):
Precision = True Positives / (True Positives + False Positives) --- (1)
[0036] The recall may refer to a ratio of the true positives to a total of actual positives (True Positives + False Negatives). The recall may be calculated by an equation (2):
Recall = True Positives / (True Positives + False Negatives) --- (2)
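Equations (1) and (2) can be evaluated directly from confusion-matrix counts; a minimal sketch with hypothetical counts follows.

```python
def precision(tp, fp):
    """Equation (1): fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Equation (2): fraction of actual positives that are recovered."""
    return tp / (tp + fn)

# Hypothetical counts for one disease class.
tp, fp, fn = 80, 20, 20
print(precision(tp, fp), recall(tp, fn))  # 0.8 0.8
```

The precision recall curve of FIG. 2B is traced by recomputing these two quantities as the classification threshold is swept.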
[0037] In an embodiment of the present invention, an x-axis of the graph 202 may represent the recall. Further, a y-axis of the graph 202 may represent the precision. As the graph 202 may depict, the framework 100 may have distributed classes, namely class 0, class 1, and class 2, with area under the curve (AUC) values for the precision recall curve of 0.77, 0.79, and 1.00 respectively.
[0038] In an embodiment of the present invention, an Alzheimer’s disease may have a precision value of 0.72, a recall value of 0.83, a F1 value of 0.77, and a support value of 376. In an embodiment of the present invention, a healthy brain may have the precision value of 0.74, the recall value of 0.71, the F1 value of 0.72, and the support value of 405. In an embodiment of the present invention, a Parkinson’s disease may have the precision value of 0.7, the recall value of 0.80, the F1 value of 0.79, and the support value of 359.
[0039] In an embodiment of the present invention, a mean average may have the precision value of 0.80, the recall value of 0.79, the F1 value of 0.80, and the support value of 1140. In an embodiment of the present invention, a weighted average may have the precision value of 0.80, the recall value of 0.79, the F1 value of 0.79, and the support value of 1140. In an embodiment of the present invention, an accuracy may have the F1 value of 0.79, and the support value of 1140.
[0040] FIG. 3 depicts a flowchart of a method 300 for the medical image classification using the framework 100, according to an embodiment of the present invention.
[0041] At step 302, the framework 100 may fetch the medical images from the image acquisition unit 104.
[0042] At step 304, the framework 100 may embed the model weights and the feature representations in the medical images.
[0043] At step 306, the framework 100 may execute the hybrid deep convolutional neural network (CNN) architecture for the medical image classification.
[0044] At step 308, the framework 100 may optimize the classified medical image with dropout layers to prevent overfitting.
[0045] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0046] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims: CLAIMS
I/We Claim:
1. A federated learning framework (100) for medical image classification, the framework (100) comprising:
an image acquisition unit (104), installed at individual health care institutions, adapted to receive the medical images from a computing device (102);
a processing unit (106) in communication with the image acquisition unit (104), characterized in that the processing unit (106) is configured to:
fetch the medical images from the image acquisition unit (104);
embed model weights and feature representations in the medical images, wherein the model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer;
execute a hybrid deep convolutional neural network (CNN) architecture for medical image classification, wherein the hybrid deep convolutional neural network (CNN) comprises convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, or a combination thereof; and
optimize the classified medical image with dropout layers to prevent overfitting.
2. The framework (100) as claimed in claim 1, wherein the optimization includes techniques selected from image transformations, image rotation, image zooming, image flipping, or a combination thereof.
3. The framework (100) as claimed in claim 1, wherein the hybrid deep convolutional neural network (CNN) architecture is optimized for neurodegenerative disease detection, including Alzheimer’s and Parkinson’s disease, through advanced feature extraction and classification techniques.
4. The framework (100) as claimed in claim 1, wherein the medical images are optimized using an Adam optimizer.
5. The framework (100) as claimed in claim 1, wherein the processing unit (106) aggregates learned parameters from multiple local models across distributed healthcare institutions and evaluates a global model performance using key metrics selected from an accuracy, a precision, a recall, a receiver operating characteristic (ROC-AUC) curve, or a combination thereof.
6. The framework (100) as claimed in claim 1, wherein the processing unit (106) is configured to carry out a decentralized model evaluation to enable secure, scalable, and robust medical image classification without centralizing patient data.
7. A method (300) for medical image classification using a federated learning framework (100), the method (300) is characterized by steps of:
fetching medical images from an image acquisition unit (104);
embedding model weights and feature representations in the medical images, wherein the model weights and the feature representations ensure privacy preservation by preventing raw medical data transfer;
executing a hybrid deep convolutional neural network (CNN) architecture for medical image classification, wherein the hybrid deep convolutional neural network (CNN) comprises convolutional layers for feature extraction, batch normalization layers for feature standardization, max-pooling layers for dimensionality reduction, fully connected layers for classification, or a combination thereof; and
optimizing the classified medical image with dropout layers to prevent overfitting.
8. The method (300) as claimed in claim 7, wherein the hybrid deep convolutional neural network (CNN) architecture is optimized for neurodegenerative disease detection, including Alzheimer’s and Parkinson’s disease, through advanced feature extraction and classification techniques.
9. The method (300) as claimed in claim 7, wherein the medical images are optimized using an Adam optimizer.
10. The method (300) as claimed in claim 7, wherein the optimization includes techniques selected from image transformations, image rotation, image zooming, image flipping, or a combination thereof.
Date: March 13, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202541023380-STATEMENT OF UNDERTAKING (FORM 3) [17-03-2025(online)].pdf | 2025-03-17 |
| 2 | 202541023380-REQUEST FOR EARLY PUBLICATION(FORM-9) [17-03-2025(online)].pdf | 2025-03-17 |
| 3 | 202541023380-POWER OF AUTHORITY [17-03-2025(online)].pdf | 2025-03-17 |
| 4 | 202541023380-OTHERS [17-03-2025(online)].pdf | 2025-03-17 |
| 5 | 202541023380-FORM-9 [17-03-2025(online)].pdf | 2025-03-17 |
| 6 | 202541023380-FORM FOR SMALL ENTITY(FORM-28) [17-03-2025(online)].pdf | 2025-03-17 |
| 7 | 202541023380-FORM 1 [17-03-2025(online)].pdf | 2025-03-17 |
| 8 | 202541023380-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [17-03-2025(online)].pdf | 2025-03-17 |
| 9 | 202541023380-EDUCATIONAL INSTITUTION(S) [17-03-2025(online)].pdf | 2025-03-17 |
| 10 | 202541023380-DRAWINGS [17-03-2025(online)].pdf | 2025-03-17 |
| 11 | 202541023380-DECLARATION OF INVENTORSHIP (FORM 5) [17-03-2025(online)].pdf | 2025-03-17 |
| 12 | 202541023380-COMPLETE SPECIFICATION [17-03-2025(online)].pdf | 2025-03-17 |
| 13 | 202541023380-Proof of Right [21-05-2025(online)].pdf | 2025-05-21 |