Abstract: SYSTEM AND METHOD FOR MULTI-CLASS BRAIN TUMOR CLASSIFICATION USING DEEP TRANSFER LEARNING WITH MOBILENETV2
A system for multi-class brain tumor classification comprises: a data preprocessing module for MRI images; a data augmentation module for generating augmented images; a MobileNetV2-based deep learning model for classifying brain tumor types; and a performance evaluation module for assessing classification accuracy. The brain tumor types include glioma, meningioma, pituitary tumor, and non-tumor. A method for multi-class brain tumor classification comprises the steps of: preprocessing MRI images to enhance quality; augmenting the dataset with transformations; classifying the preprocessed images using MobileNetV2; and evaluating the model using precision, recall, F1-score, and AUC-ROC.
Description: FIELD OF THE INVENTION
The present invention relates to systems and methods for medical image analysis, with specific application to brain tumor detection through the integration of deep learning and transfer learning. The method employs MobileNetV2, a deep convolutional neural network, to achieve multi-class brain tumor classification.
BACKGROUND OF THE INVENTION
Brain tumors occur in several types, including gliomas, meningiomas, and pituitary tumors. Early identification and accurate diagnosis of these tumors are essential for treatment planning and patient survival. MRI is the primary diagnostic instrument for brain tumor detection, yet manual reading of scans is time-consuming and prone to interpretation errors. Convolutional neural networks (CNNs), among the recent deep learning breakthroughs, substantially improve automatic tumor classification. However, challenges remain, including difficulty in precisely identifying specific tumor types, the risk of overfitting, and poor generalization to new imaging data.
Current solutions have not fully exploited pre-trained models for MRI image classification. Existing systems employ basic CNN networks or outdated transfer learning techniques, and these methods fail to reach the accuracy standards required in medical applications. A deep learning model based on MobileNetV2 advances the accuracy and generalization achievable for multi-class brain tumor identification.
• US Patent 10,669,141 B2: This patent describes an automated deep learning system for brain tumor detection, aimed specifically at glioma classification. It does not employ the transfer learning techniques or the comprehensive preprocessing pipeline of the present invention.
• US Patent 10,710,922 B2: This patent establishes a procedure for detecting brain tumors using convolutional neural networks. Its main objective is tumor segmentation rather than classification. The present invention instead puts forward a multi-class classification model that reaches high precision through the use of MobileNetV2.
• US Patent 9,885,341 B2: This patent incorporates CNN-based medical image classification, yet does not include MobileNetV2 or the data augmentation principles that are key elements of the present invention's classifier design.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
A system and method based on deep transfer learning with MobileNetV2 performs multi-class brain tumor classification. The system accepts patient MRI scans, which undergo preprocessing for image enhancement before MobileNetV2 performs the classification.
Key features include:
• Data preprocessing for MRI images to ensure consistency and high quality.
• Data augmentation techniques, such as rotation and flipping, to improve model robustness.
• Transfer learning using MobileNetV2, a lightweight and efficient model pre-trained on large-scale datasets.
• Regularization techniques to prevent overfitting and enhance model generalization.
• Evaluation of the model on a dataset of 5,712 MRI images and performance analysis on a test set of 1,311 images.
The model reliably identifies glioma, meningioma, pituitary tumor, and non-tumor conditions, with an overall accuracy of 99% and a perfect 100% accuracy on the "no tumor" class.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Brain tumors, characterized by uncontrolled cell proliferation, pose severe health risks and may lead to organ failure and eventual mortality. Tumor classification from magnetic resonance imaging (MRI) depends heavily on advanced computational methods because manual interpretation remains highly complicated. Deep learning methods have yielded major advances in brain tumor classification but still struggle to reach the highest possible accuracy. The present work provides an advanced deep learning framework for brain tumor diagnosis, specifically identifying glioma, meningioma, and pituitary tumors, to strengthen diagnostic precision. Trained on a dataset of 5,712 MRI images, the model reaches 99% accuracy on training and validation data. Through data augmentation, MobileNetV2 transfer learning, and regularization strategies, the model attains better generalization. On the test set, it correctly discerned the four categories: glioma (98.33%), meningioma (96.40%), no tumor (100.00%), and pituitary tumor (99.67%). It achieved a precision of 0.9872, recall of 0.9870, F1-score of 0.980, and AUC-ROC score of 1.0, validating its effectiveness. The adapted deep learning approach thus offers strong potential for accurate early-stage brain tumor detection.
Figure 1: Workflow of Proposed Methodology
• Data Preprocessing: MRI images are first preprocessed using standard techniques to standardize image size, enhance contrast, and remove noise.
• Data Augmentation: Techniques like rotations, flips, and zoom are used to augment the dataset, increasing the diversity of training data.
• MobileNetV2 Transfer Learning: The preprocessed and augmented images are passed through MobileNetV2, fine-tuned for the classification of glioma, meningioma, pituitary tumors, and non-tumor.
• Model Evaluation: The model's performance is evaluated using various metrics such as accuracy, precision, recall, F1-score, and AUC-ROC.
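As an illustration of the augmentation step above, the following minimal sketch generates rotated, flipped, and centre-zoomed variants of a 2-D MRI slice using only NumPy. It is an assumption of this description, not the claimed module: the function name `augment` and the 80% zoom factor are hypothetical choices.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple augmented variants of a 2-D MRI slice:
    90-degree rotations, horizontal/vertical flips, and a centre zoom."""
    variants = [
        np.rot90(image, k=1),   # 90-degree rotation
        np.rot90(image, k=3),   # 270-degree rotation
        np.fliplr(image),       # horizontal flip
        np.flipud(image),       # vertical flip
    ]
    # Centre "zoom": crop the middle 80% and rescale back to the original
    # size with nearest-neighbour indexing (no external dependencies).
    h, w = image.shape
    mh, mw = int(h * 0.1), int(w * 0.1)
    crop = image[mh:h - mh, mw:w - mw]
    rows = np.arange(h) * crop.shape[0] // h
    cols = np.arange(w) * crop.shape[1] // w
    variants.append(crop[np.ix_(rows, cols)])
    return variants

img = np.arange(100, dtype=float).reshape(10, 10)
augmented = augment(img)
print(len(augmented))  # 5 variants per input image
```

In a real pipeline these variants would be produced on the fly during training so the model never sees the same batch twice.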
Figure 2: Architecture of Deep Learning Model
An architecture of a deep learning model consists of:
1. Input Image: The model receives an input image, typically an MRI scan, for processing.
2. Conv: An initial convolutional layer extracts basic features such as edges, textures, and patterns from the input image.
3. Bottleneck Residual Blocks (1 to 17): MobileNetV2 contains 17 bottleneck residual blocks, the key architectural element of the network. These blocks allow the model to learn progressively more abstract features while maintaining accuracy at reduced computational cost. Each block typically contains:
o Depthwise Separable Convolutions: These layers split the convolution operation into two parts, depthwise filtering followed by a pointwise combination of the outputs, which decreases the parameter count and boosts model efficiency.
o ReLU6: This activation function introduces non-linearity into the network; its output is capped at 6, which aids numerical stability and training speed.
4. Conv1x1 Layers: Additional convolutional layers with 1x1 kernels perform further feature processing after the bottleneck residual blocks.
5. Add: A residual connection adds a block's input to its output, which assists gradient flow and enhances convergence during training.
6. Fully Connected Layer: The feature maps from the convolutional layers are flattened and passed to a fully connected layer, which produces the multi-class outputs identifying the brain tumor types.
The MobileNetV2 architecture reduces computational requirements while preserving accuracy, making it well suited to multi-class brain tumor classification in medical imaging applications.
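The efficiency of the depthwise separable design described above can be made concrete with a small illustrative calculation. The sketch below (hypothetical helper names, not the claimed implementation) counts the weights of a standard convolution versus a depthwise separable one, and defines the ReLU6 activation used throughout MobileNetV2.

```python
import numpy as np

def relu6(x):
    """ReLU6 activation used in MobileNetV2: clamps values to [0, 6]."""
    return np.minimum(np.maximum(x, 0.0), 6.0)

def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filtering (one filter per input channel)
    followed by a 1x1 pointwise combination across channels."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18432 weights
sep = depthwise_separable_params(3, 32, 64)  # 2336 weights
print(std, sep, round(std / sep, 1))         # roughly 7.9x fewer weights
```

The roughly eight-fold reduction in this toy layer is what lets the full network run at low computational cost while the residual connections preserve accuracy.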
Figure 3: Confusion Matrix for a multi-class brain tumor classification system
The model correctly classified 295 Glioma cases as Glioma, 295 Meningioma cases as Meningioma, 405 No Tumor cases as No Tumor, and 299 Pituitary tumor cases as Pituitary.
The large diagonal values show that most predictions match the true tumor types, indicating highly precise classification. The few misclassifications occur mainly between Glioma and Meningioma, and between Meningioma and Pituitary. Nevertheless, the high diagonal values demonstrate robust performance.
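The per-class accuracies reported earlier can be recovered from such a confusion matrix as per-class recall. In the sketch below, the diagonal counts come from Figure 3, but the off-diagonal entries are hypothetical placeholders chosen only to make the rows sum to plausible class sizes; they are not the actual misclassification counts.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Precision, recall and F1 per class from a confusion matrix whose
    rows are true labels and columns are predicted labels."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # column sums = predicted counts
    recall = tp / cm.sum(axis=1)      # row sums = true counts
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Rows/columns ordered: Glioma, Meningioma, No Tumor, Pituitary.
# Diagonal from Figure 3; off-diagonal entries are illustrative only.
cm = np.array([
    [295,   5,   0,   0],
    [  6, 295,   0,   5],
    [  0,   0, 405,   0],
    [  0,   1,   0, 299],
])
prec, rec, f1 = per_class_metrics(cm)
print(np.round(rec, 4))
```

With these row sums, the recall for Glioma (295/300), No Tumor (405/405), and Pituitary (299/300) reproduces the 98.33%, 100.00%, and 99.67% figures stated above.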
Figure 4: Receiver Operating Characteristic (ROC) Curve
The ROC curve graphically displays how a classifier performs across all possible decision thresholds. The y-axis shows the True Positive Rate (TPR, also called recall), while the x-axis shows the False Positive Rate (FPR), the proportion of actual negative instances falsely classified as positive by the model.
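The FPR/TPR definitions above lead directly to the AUC-ROC metric. The following sketch (hypothetical function names, plain Python) sweeps thresholds to trace the ROC curve and integrates it with the trapezoidal rule; a perfectly separable toy example yields an AUC of 1.0, consistent with the score reported for the model.

```python
def roc_points(scores, labels):
    """Sweep thresholds over the scores and return (FPR, TPR) points."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in [float("inf")] + sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Perfectly separable scores: every positive outranks every negative.
print(auc(roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])))  # 1.0
```

For the multi-class case, one ROC curve would typically be computed per tumor class in a one-vs-rest fashion and the resulting AUC values averaged.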
Figure 5: Accuracy and Loss Curves
The Train Accuracy curve shows training accuracy at each epoch; its steady rise indicates that the model is learning. The Validation Accuracy curve shows accuracy on validation data across epochs; in a well-optimized model it closely tracks the training accuracy, demonstrating good ability to predict new data. Both curves rise to high accuracy, and the slight separation between them indicates minimal overfitting. The Train Loss curve tracks loss on the training data; its decrease over training shows the model learning to minimize its errors on the training examples. The Validation Loss curve shows loss on the validation data; its descent in step with the training loss confirms good generalization. Both loss curves drop substantially across epochs, signifying improving model performance.
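One simple way to quantify the "slight separation" between the curves is the mean gap between training and validation accuracy over the final epochs. The sketch below is illustrative only: the history values are hypothetical numbers shaped like the curves in Figure 5, not the model's actual training log.

```python
def accuracy_gap(train_acc, val_acc):
    """Mean absolute gap between training and validation accuracy over
    the final epochs; a small gap suggests minimal overfitting."""
    tail = min(3, len(train_acc))
    gaps = [abs(t - v) for t, v in zip(train_acc[-tail:], val_acc[-tail:])]
    return sum(gaps) / tail

# Hypothetical per-epoch history values, for illustration only.
train_acc = [0.80, 0.92, 0.97, 0.99, 0.99]
val_acc   = [0.78, 0.90, 0.96, 0.98, 0.99]
print(round(accuracy_gap(train_acc, val_acc), 3))  # 0.007
```

A gap this small, alongside validation loss falling in step with training loss, is the pattern described above for a well-generalizing model.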
Claims:
1. A system for multi-class brain tumor classification, comprising:
o A data preprocessing module for MRI images;
o A data augmentation module for generating augmented images;
o A MobileNetV2-based deep learning model for classifying brain tumor types;
o A performance evaluation module for assessing classification accuracy.
2. The system of claim 1, wherein the brain tumor types include glioma, meningioma, pituitary tumor, and non-tumor.
3. A method for multi-class brain tumor classification, comprising the steps of:
o Preprocessing MRI images to enhance quality;
o Augmenting the dataset with transformations;
o Using MobileNetV2 to classify the preprocessed images;
o Evaluating the model using precision, recall, F1-score, and AUC-ROC.
| # | Name | Date |
|---|---|---|
| 1 | 202541042646-STATEMENT OF UNDERTAKING (FORM 3) [02-05-2025(online)].pdf | 2025-05-02 |
| 2 | 202541042646-REQUEST FOR EARLY PUBLICATION(FORM-9) [02-05-2025(online)].pdf | 2025-05-02 |
| 3 | 202541042646-POWER OF AUTHORITY [02-05-2025(online)].pdf | 2025-05-02 |
| 4 | 202541042646-FORM-9 [02-05-2025(online)].pdf | 2025-05-02 |
| 5 | 202541042646-FORM FOR SMALL ENTITY(FORM-28) [02-05-2025(online)].pdf | 2025-05-02 |
| 6 | 202541042646-FORM 1 [02-05-2025(online)].pdf | 2025-05-02 |
| 7 | 202541042646-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-05-2025(online)].pdf | 2025-05-02 |
| 8 | 202541042646-EVIDENCE FOR REGISTRATION UNDER SSI [02-05-2025(online)].pdf | 2025-05-02 |
| 9 | 202541042646-EDUCATIONAL INSTITUTION(S) [02-05-2025(online)].pdf | 2025-05-02 |
| 10 | 202541042646-DRAWINGS [02-05-2025(online)].pdf | 2025-05-02 |
| 11 | 202541042646-DECLARATION OF INVENTORSHIP (FORM 5) [02-05-2025(online)].pdf | 2025-05-02 |
| 12 | 202541042646-COMPLETE SPECIFICATION [02-05-2025(online)].pdf | 2025-05-02 |