
System And Method For Early Prediction Of Brain Tumors Using Deep Machine Learning Model

Abstract: The present invention discloses a deep learning-based system for early prediction of brain tumors using MRI scans. The system employs a CNN architecture optimized for high accuracy, computational efficiency, and clinical integration. Preprocessing techniques include image normalization, segmentation, and data augmentation to improve model reliability. The training framework incorporates dataset partitioning, hyperparameter tuning, and performance evaluation using ROC curves and F1-score metrics. The system achieves 99% accuracy while maintaining low computational overhead, making it suitable for real-time applications in resource-limited clinical environments. Additionally, visualization techniques such as activation heatmaps enhance model interpretability, supporting transparent medical decision-making. The invention bridges the gap between AI advancements and practical healthcare applications, providing an efficient and scalable solution for brain tumor detection.


Patent Information

Application #
Filing Date
03 March 2025
Publication Number
11/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. YALLA ANITHA REDDY
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. R. S. DUBEY
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:FIELD OF THE INVENTION
This invention relates to a system and method for early prediction of brain tumors using a deep machine learning model.
BACKGROUND OF THE INVENTION
Brain tumors are among the most fatal medical conditions, affecting more than 300,000 people worldwide each year. Prompt diagnosis is essential to improve both therapy outcomes and patient survival. Current diagnostic approaches rely on manual feature selection and conventional techniques, leaving them susceptible to errors and extended processing times. Recent advances in artificial intelligence and deep learning now enable automated tumor detection in MRI scans with improved accuracy.
Deep learning models, especially Convolutional Neural Networks (CNNs), have been widely applied to analyze MRI images. CNNs outperform traditional methods because they automatically extract hierarchical features directly from imaging data. However, existing implementations suffer from the following limitations. One approach performed tumor detection in MRI scans using pre-trained CNNs fine-tuned with the Manta Ray Foraging Optimization algorithm; because it relied on large pre-trained models, it achieved high precision but at a substantial computational cost that prohibited real-time application [1]. Another used MobileNetV3 and other transfer-learning frameworks to classify tumors from MRI, achieving 99.75% accuracy but failing to generalize to external datasets due to overfitting on the input database [2].
Brain tumor segmentation has also been approached with transfer learning and YOLO (You Only Look Once), an advanced object-detection algorithm. In one study, YOLOv5 and YOLOv7 were used for brain tumor detection and classification; although the models attained high sensitivity and specificity, they required substantial computational resources for both training and inference [3]. An improved ResNet architecture was employed to address the vanishing-gradient problem in deep neural networks; while it improved segmentation accuracy, it was too computationally intensive for deployment in resource-constrained environments [4].
Preprocessing techniques have also been investigated to enhance MRI image quality and model robustness. One study combined MRI scans with additional filters and data augmentation to identify pituitary gland tumors; the technique improved accuracy but was computationally expensive [5]. Another work applied involutional neural networks (InvNets) to medical image processing tasks; InvNets achieved higher accuracy with fewer parameters than CNNs but were unsuitable for practical clinical workflows because of their extreme fine-tuning requirements [6].
Several studies have addressed multi-class classification of tumor types. An EfficientNet-B4-based architecture achieved 99.33% accuracy, but it required many layers and consumed large computational power, limiting its utility in real-time diagnostic systems [7]. Similarly, the Gray Level Co-occurrence Matrix (GLCM) was used for feature extraction to increase segmentation accuracy in CNN models; this method proved promising but required substantial preprocessing effort [8].
Patent US20210000567A1: An invention on ‘Deep Learning System for Medical Image Classification’ discloses a generic CNN for tumor detection but lacks specificity in preprocessing and fails to address dataset variability.
Patent EP3451234B1: Another invention on ‘Method for Brain Tumor Segmentation Using Transfer Learning’ utilizes pre-trained models like ResNet but suffers from high computational costs and overfitting on small datasets.
Patent CN113963134A: Similarly, an invention on ‘Medical Image Analysis Using Hybrid Neural Networks’ combines CNNs and RNNs but does not optimize preprocessing or validation for clinical reliability.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
Several important limitations exist in prior research in this domain. Data imbalance is a major issue: many studies did not handle class imbalance in their datasets, which biases predictions. Existing models also generalize poorly and cannot easily adapt to changes in MRI acquisition protocols. In addition, training complex architectures and pre-trained models incurs high computational costs. Finally, robust preprocessing techniques that effectively handle noise and artifacts in MRI scans are lacking, decreasing the overall reliability of the models.
This invention provides a CNN-based system that addresses the limitations of prior art by:
1. Employing robust data preprocessing techniques to normalize and segment MRI images for accurate analysis.
2. Utilizing data augmentation methods to enhance model generalization and mitigate overfitting.
3. Incorporating an optimized CNN architecture to improve feature extraction and classification accuracy.
4. Achieving high performance metrics, including 99% accuracy, sensitivity, and precision, validated on balanced datasets.
5. Supporting clinical integration with low computational requirements and compatibility with resource-constrained environments.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention comprises an end-to-end deep learning-based diagnostic system for brain tumor detection, featuring advanced preprocessing, optimized CNN architecture, and rigorous evaluation methodologies.
The dataset includes 4,600 MRI images covering axial, coronal, and sagittal planes. Preprocessing ensures consistency in input dimensions, with all images resized to 256 × 256 pixels. Intensity normalization converts pixel values to a uniform scale, reducing acquisition-related variability. Segmentation isolates brain tissue, removing irrelevant regions to enhance model focus on tumor-specific features.
The CNN architecture follows a structured hierarchical approach. The input layer processes preprocessed MRI scans, standardizing them before feature extraction. Convolutional layers apply multiple filters to capture spatial patterns and tumor-specific characteristics. Pooling layers reduce spatial dimensions while preserving key information, optimizing computational efficiency. Dense layers further process extracted features, classifying images as healthy or tumor-infected.
The model undergoes training using a 70:20:10 dataset split for training, validation, and testing. Hyperparameter tuning techniques, including early stopping and learning rate adjustments, optimize model convergence and prevent overfitting. Performance evaluation incorporates Receiver Operating Characteristic (ROC) curves, confusion matrices, and F1-score metrics, ensuring a comprehensive assessment of classification reliability.
A key aspect of the invention is its low computational cost compared to alternative deep learning models such as YOLOv7 and EfficientNet. The proposed architecture reduces inference time while maintaining high detection accuracy, making it suitable for real-time applications in clinical settings.
The invention further integrates explainability features, using activation heatmaps and convolutional feature maps to highlight tumor-specific regions in MRI scans. This enhances transparency and supports clinical decision-making by providing visual evidence of tumor classification.
The system is designed for seamless clinical integration, incorporating an intuitive user interface for medical practitioners. The model’s adaptability allows for deployment in hospitals, diagnostic centers, and telemedicine platforms, bridging the gap between research advancements and practical healthcare applications.
1. Dataset:
The system processes data from 4,600 MRI images, including both healthy and tumor patient samples, across axial, coronal, and sagittal plane perspectives. Preprocessing resizes images to 256 × 256 pixel resolution and normalizes pixel intensity values to a scale of 0 to 1.
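The resize-and-normalize step above can be sketched in a few lines of numpy. This is an illustrative sketch only, not the patent's actual implementation: a real pipeline would typically use cv2 or PIL for proper interpolation, and the function names here are hypothetical.

```python
import numpy as np

def resize_nearest(img, size=(256, 256)):
    """Nearest-neighbour resize to a fixed grid (a production pipeline
    would use cv2.resize or PIL for smoother interpolation)."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

def normalize_intensity(img):
    """Min-max scale pixel values to [0, 1] to reduce scanner-dependent
    intensity variation."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

scan = np.random.randint(0, 4096, size=(320, 288))  # synthetic MRI-like slice
x = normalize_intensity(resize_nearest(scan))
print(x.shape)  # (256, 256), values in [0, 1]
```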
2. Preprocessing:
Preprocessing involves:
• Segmentation to isolate brain tissues.
• Data augmentation techniques, including flipping, rotation, and brightness modulation, to enlarge the training data and prevent overfitting.
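The augmentations listed above (flipping, rotation, brightness modulation) can be sketched with numpy. This is an illustrative sketch under the assumption of normalized [0, 1] inputs, not the patent's actual augmentation code:

```python
import numpy as np

def augment(img, rng):
    """Randomly flip, rotate by a multiple of 90 degrees, and modulate
    brightness -- the augmentations named in the specification."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)             # horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))  # 0/90/180/270 degree rotation
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
    return img

rng = np.random.default_rng(0)
base = rng.random((256, 256))                  # one normalized slice
batch = [augment(base, rng) for _ in range(4)] # four augmented variants
print(len(batch), batch[0].shape)
```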
3. CNN Architecture:
The CNN model consists of the following components:
• Input Layer: This layer receives the pre-processed MRI images; non-brain regions are removed and pixel intensity is normalized to lower acquisition-based variability.
• Convolutional Layers: These layers extract spatial features using various filters.
• Pooling Layers: These layers reduce spatial dimensions without losing vital characteristics of the data.
• Dense Layers: These layers perform pattern identification and classification.
• Output Layer: This layer produces the binary classification (healthy/tumor).
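The core convolution-then-pooling mechanism of the layers above can be illustrated with a single-channel numpy forward pass. This is a pedagogical sketch (one hand-written edge filter, one 2×2 max-pool), not the trained architecture of the invention:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (single channel) -- the spatial
    feature-extraction step performed by a convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, k=2):
    """2x2 max pooling: halves spatial dimensions while keeping the
    strongest activation in each window."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = np.random.rand(256, 256)
edge_kernel = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])  # Sobel-style filter
feat = np.maximum(conv2d(img, edge_kernel), 0.0)  # ReLU activation
pooled = max_pool(feat)
print(feat.shape, pooled.shape)  # (254, 254) -> (127, 127)
```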
4. Training and Validation
The dataset is partitioned into training, validation, and testing sets in a 70:20:10 ratio. To overcome overfitting, early stopping and model checkpointing are employed during training.
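The 70:20:10 partition and a patience-based early-stopping check can be sketched as follows; the split sizes use the 4,600-image dataset from the specification, while the `patience` value and `early_stop` helper are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4600                                   # dataset size from the specification
idx = rng.permutation(n)                   # shuffle before partitioning
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]           # remaining 10%

def early_stop(val_losses, patience=3):
    """Stop when validation loss has not improved for `patience` epochs."""
    best = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best >= patience

print(len(train_idx), len(val_idx), len(test_idx))  # 3220 920 460
```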
5. Performance Metrics
The model yields 99% accuracy, 99% precision, 99% sensitivity (recall), high specificity reflecting a high detection rate for healthy cases, and an F1-score that harmonizes precision and recall for robust performance evaluation.
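These metrics all derive from the binary confusion matrix. The sketch below shows the standard formulas; the counts are purely illustrative (chosen to yield roughly 99% on a 460-image test set) and are not the patent's reported data:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, sensitivity (recall), specificity, and F1-score
    computed from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # sensitivity
    specificity = tn / (tn + fp)      # detection rate for healthy cases
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Hypothetical counts for a 460-image test set
acc, prec, rec, spec, f1 = metrics(tp=228, fp=2, fn=2, tn=228)
print(acc, prec, rec, spec, f1)  # all approximately 0.99
```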
6. Visualization
The system's ability to detect tumor-specific areas is verified through convolutional-layer feature maps combined with activation heatmaps, preparing the platform for explainable clinical use.
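An activation heatmap of the kind described above can be sketched as a weighted combination of convolutional feature maps (the idea behind Grad-CAM-style visualizations). The shapes and weights below are synthetic assumptions for illustration, not the invention's actual visualization module:

```python
import numpy as np

def activation_heatmap(feature_maps, weights):
    """Weighted sum of convolutional feature maps, ReLU-rectified and
    normalised to [0, 1]; high values mark regions the model attends to.
    `feature_maps`: (channels, H, W); `weights`: one value per channel."""
    cam = np.tensordot(weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                         # keep positive evidence
    return cam / (cam.max() + 1e-8)

rng = np.random.default_rng(1)
fmaps = rng.random((8, 16, 16))  # 8 synthetic channels of 16x16 activations
w = rng.random(8)                # synthetic channel-importance weights
heat = activation_heatmap(fmaps, w)
print(heat.shape)  # (16, 16), ready to upsample and overlay on the MRI scan
```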
References
1) Aljohani, M., Bahgat, W. M., Balaha, H. M., AbdulAzeem, Y., El-Abd, M., Badawy, M., & Elhosseini, M. A. (2024). An automated metaheuristic-optimized approach for diagnosing and classifying brain tumors based on a convolutional neural network. Results in Engineering, 23, 102459. DOI: 10.1016/j.rineng.2024.102459
2) Mathivanan, S. K., Sonaimuthu, S., Murugesan, S., Rajadurai, H., Shivahare, B. D., & Shah, M. A. (2024). Employing deep learning and transfer learning for accurate brain tumor detection. Scientific Reports, 14(1), 7232. DOI: 10.1038/s41598-024-57970-7
3) Almufareh, M. F., Imran, M., Khan, A., Humayun, M., & Asim, M. (2024). Automated brain tumor segmentation and classification in MRI using YOLO-based deep learning. IEEE Access, 12, 16189–16207. DOI: 10.1109/ACCESS.2024.3359418
4) Aggarwal, M., Tiwari, A. K., Sarathi, M. P., & Bijalwan, A. (2023). An early detection and segmentation of brain tumor using deep neural network. BMC Medical Informatics and Decision Making, 23(1), 78. DOI: 10.1186/s12911-023-02174-8
5) Abdusalomov, A. B., Mukhiddinov, M., & Whangbo, T. K. (2023). Brain tumor detection based on deep learning approaches and magnetic resonance imaging. Cancers, 15(16), 4172. DOI: 10.3390/cancers15164172
6) Asiri, A. A., Shaf, A., Ali, T., Zafar, M., Pasha, M. A., Irfan, M., Alqahtani, S., Alghamdi, A. J., Alghamdi, A. H., Alshamrani, A. F. A., Aleylyani, M., & Alamri, S. (2023). Enhancing brain tumor diagnosis: Transitioning from convolutional neural network to involutional neural network. IEEE Access, 11, 123080–123095. DOI: 10.1109/ACCESS.2023.3326421
7) Preetha, R., Priyadarsini, M. J. P., & Nisha, J. S. (2024). Automated brain tumor detection from magnetic resonance images using fine-tuned EfficientNet-B4 convolutional neural network. IEEE Access, 12, 112181–112195. DOI: 10.1109/ACCESS.2024.3442979
8) Sharma, M., & Miglani, N. (2020). Automated brain tumor segmentation in MRI images using deep learning: Overview, challenges, and future. In Studies in Big Data (pp. 347–383). Springer International Publishing.
9) Patent US20210000567A1
10) Patent EP3451234B1
11) Patent CN113963134A
Claims:
1. A deep learning-based system for brain tumor detection, comprising:
i. A preprocessing module for MRI image normalization, segmentation, and augmentation;
ii. A CNN architecture optimized for spatial feature extraction and classification;
iii. A training pipeline with dataset partitioning and hyperparameter tuning;
iv. A visualization module for explainability and model interpretability.
2. The system as claimed in claim 1, wherein the preprocessing module standardizes MRI images to 256 × 256 pixels and normalizes pixel intensity values.
3. The system as claimed in claim 1, wherein the CNN architecture consists of convolutional layers, pooling layers, and fully connected dense layers.
4. The system as claimed in claim 1, wherein data augmentation techniques, including flipping, rotation, and brightness modulation, enhance model generalization.
5. The system as claimed in claim 1, wherein the training pipeline includes early stopping mechanisms to prevent overfitting.
6. The system as claimed in claim 1, wherein performance evaluation metrics include ROC curves, confusion matrices, and F1-score assessments.
7. The system as claimed in claim 1, wherein activation heatmaps and feature visualization techniques enhance explainability.
8. The system as claimed in claim 1, wherein the system is designed for real-time clinical applications with minimal computational overhead.
9. The system as claimed in claim 1, wherein the model supports integration with medical diagnostic platforms for automated tumor screening.
10. The system as claimed in claim 1, wherein the system ensures adaptability to resource-constrained environments through optimized computational efficiency.

Documents

Application Documents

# Name Date
1 202541018659-STATEMENT OF UNDERTAKING (FORM 3) [03-03-2025(online)].pdf 2025-03-03
2 202541018659-REQUEST FOR EARLY PUBLICATION(FORM-9) [03-03-2025(online)].pdf 2025-03-03
3 202541018659-POWER OF AUTHORITY [03-03-2025(online)].pdf 2025-03-03
4 202541018659-FORM-9 [03-03-2025(online)].pdf 2025-03-03
5 202541018659-FORM FOR SMALL ENTITY(FORM-28) [03-03-2025(online)].pdf 2025-03-03
6 202541018659-FORM 1 [03-03-2025(online)].pdf 2025-03-03
7 202541018659-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [03-03-2025(online)].pdf 2025-03-03
8 202541018659-EVIDENCE FOR REGISTRATION UNDER SSI [03-03-2025(online)].pdf 2025-03-03
9 202541018659-EDUCATIONAL INSTITUTION(S) [03-03-2025(online)].pdf 2025-03-03
10 202541018659-DECLARATION OF INVENTORSHIP (FORM 5) [03-03-2025(online)].pdf 2025-03-03
11 202541018659-COMPLETE SPECIFICATION [03-03-2025(online)].pdf 2025-03-03