
Enhanced MRI Scan Data Analysis for Brain Tumor Classification Using Advanced Preprocessing and SeparableConvNet

Abstract: A method and system for analyzing brain MRI scan data are disclosed. The method involves receiving a brain MRI scan image, preprocessing the image through data augmentation techniques such as Rescaling, RandomFlip, RandomRotation, and RandomZoom, normalizing the augmented images to a standard scale, and classifying the normalized images using a customized SeparableConvNet model. The model is capable of distinguishing between various brain tumor categories including glioma, meningioma, pituitary abnormality, or identifying the absence of a tumor. The system comprises a data preprocessing module for augmentation and normalization, a SeparableConvNet module for tumor classification, an interpretability module utilizing LIME to explain the classifications, and a user interface for displaying results and explanations.


Patent Information

Application #
Filing Date
26 April 2024
Publication Number
23/2024
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
Parent Application

Applicants

MARWADI UNIVERSITY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
GOVANA VETRIMANI MOODELY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
S. M. IHTASHAM HOSSAIN AMIREE
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
RAVIKUMAR R N
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
DR. SUSHIL KUMAR SINGH
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Inventors

1. MS. GOVANA VETRIMANI MOODELY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
2. S. M. IHTASHAM HOSSAIN AMIREE
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
3. RAVIKUMAR R N
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
4. DR. SUSHIL KUMAR SINGH
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Specification

Description:

ENHANCED MRI SCAN DATA ANALYSIS FOR BRAIN TUMOR CLASSIFICATION USING ADVANCED PREPROCESSING AND SEPARABLECONVNET

Field of the Invention

The technical field relates to the analysis of brain MRI scans using advanced computer vision and machine learning techniques to classify brain tumors with improved accuracy and reliability.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
The analysis of MRI scan data, particularly for the identification and classification of brain tumors, represents a significant area of focus within medical imaging technology. The utilization of MRI scans offers a non-invasive approach for obtaining detailed images of the brain, facilitating the early detection and classification of tumors. Despite the advancements in MRI technology, the interpretation of scan data poses challenges, necessitating sophisticated analysis methods.
Data augmentation techniques, including Rescaling, RandomFlip, RandomRotation, and RandomZoom, play a crucial role in the preprocessing of brain MRI scan images. These techniques enhance the robustness of the analysis by introducing variability into the training data, thereby improving the model's ability to generalize across different imaging conditions. However, the effectiveness of data augmentation is contingent on the appropriate selection and application of these techniques, posing a challenge in maintaining the balance between augmenting the data and preserving its clinical relevance.
Normalization of augmented images to a standard scale constitutes another vital step in the preprocessing phase. This process ensures that the images fed into the classification model adhere to a uniform scale, enhancing the model's ability to recognize patterns and features indicative of specific tumor types. The accuracy of the normalization process directly influences the reliability of the subsequent classification stage, highlighting the need for precise and consistent normalization methods.
The classification of normalized images using a customized SeparableConvNet model marks a significant advancement in the capability to distinguish between various brain tumor categories. The SeparableConvNet model, through its efficient architecture, enables the detailed analysis of brain MRI scans, facilitating the accurate classification of tumors such as glioma, meningioma, pituitary abnormality, or the identification of no tumor presence. The customization of the model to specifically address the challenges of brain tumor classification underscores the importance of tailored neural network solutions in medical imaging analysis.
Despite these advancements, the process of brain tumor classification from MRI scans faces obstacles related to the variability in tumor appearance, the quality of the MRI scans, and the inherent limitations of the classification models. These challenges underscore the need for continuous improvement in the methodologies used for data preprocessing, normalization, and classification.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and techniques for the accurate classification of brain tumors from MRI scans.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an aspect, the present disclosure aims to provide a method for analyzing MRI scan data, which includes receiving a brain MRI scan image, preprocessing the image via data augmentation techniques such as Rescaling, RandomFlip, RandomRotation, and RandomZoom, normalizing these images to a uniform scale, classifying them with a custom SeparableConvNet model that identifies different brain tumor categories, and finally outputting the classification indicative of specific tumor types or the absence of a tumor. Furthermore, the method encompasses applying Local Interpretable Model-agnostic Explanations (LIME) to offer interpretability of the model's decision-making, enabling visualization of contributing factors for the classification, thus enhancing understanding of predictive elements.
In another aspect, the disclosure presents a system for MRI scan data analysis, comprising a data preprocessing module for image augmentation and normalization, a SeparableConvNet module for tumor classification, an interpretability module utilizing LIME for explanation of classifications, and a user interface for result display and LIME explanation adjustments. The data preprocessing module includes functions for rescaling, brightness and contrast adjustment, and noise injection to simulate diverse imaging conditions. The interpretability module is designed to highlight significant regions in the MRI images contributing to the classification, while the user interface offers functionalities for adjusting LIME parameters and exporting results for medical review.

Brief Description of the Drawings

The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a method (100) for analysis of MRI scan data, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a block diagram of the system (200) for the analysis of MRI scan data, in accordance with the embodiments of the present disclosure.
FIG. 3 represents the architectural design of the SeparableConvNet Tumor Classifier for processing input images for tumor classification, in accordance with an embodiment of the present disclosure.
FIG. 4 describes the workflow of the system utilizing the SeparableConvNet for tumor classification, in accordance with an embodiment of the present disclosure.

Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a method (100) for analysis of MRI scan data, in accordance with the embodiments of the present disclosure. The term "method for analysis of MRI scan data" as used throughout the present disclosure relates to a sequence of operations designed to process magnetic resonance imaging (MRI) scans for the purpose of identifying and classifying brain tumors. In method (100), step (102) initiates with the reception of a brain MRI scan image. Upon receiving an input of a brain MRI scan image, the system is configured to accept digital images in various formats generated by MRI scanners. These images serve as the basis for further processing and analysis aimed at tumor identification. In step (104), the method involves preprocessing the received image through data augmentation techniques including at least one of Rescaling, RandomFlip, RandomRotation, and RandomZoom. The data augmentation techniques applied serve to enhance the dataset with varied orientations and scales, thereby augmenting the robustness of the model against variations in new images. The step (104) preprocessing ensures that the model is trained on a dataset that closely mimics the diversity found in real-world imaging scenarios. Following the augmentation, step (106) entails normalizing the augmented images to a standard scale. Normalization as described herein involves adjusting the pixel values of images to fit within a specific range, ensuring uniformity across the dataset. The step (106) is critical for maintaining consistency in image quality and is fundamental for the accurate analysis by machine learning models. The method (100) further comprises step (108) classifying the normalized images using a customized SeparableConvNet model designed to distinguish between brain tumor categories.
The SeparableConvNet model, a form of convolutional neural network tailored for this application, is optimized to identify distinctive features within the brain MRI scans that correspond to various types of tumors such as glioma, meningioma, pituitary abnormality, or the absence of a tumor. This customization allows for high specificity and sensitivity in tumor classification. Additionally, the method (100) includes outputting a classification result indicative of at least one of glioma, meningioma, pituitary abnormality, or no tumor in step (110). The outputted classification result is presented in a manner that facilitates the interpretation of the MRI scan data, providing valuable insights for medical professionals in the diagnosis and treatment planning process.
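Although the specification discloses no source code, the final classification output of step (110) can be illustrated with a minimal, self-contained sketch. A softmax over the final layer's activations yields a probability per category, and the highest-probability category is reported. The logit values and helper names below are hypothetical and are not part of the disclosed system:

```python
import math

CATEGORIES = ["glioma", "meningioma", "pituitary", "no_tumor"]

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, 0.3, -1.0, 0.5]  # hypothetical final-layer activations
probs = softmax(logits)
predicted = CATEGORIES[probs.index(max(probs))]
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities form a distribution
```

In a deployed system, `probs` would also supply the per-category confidence score that accompanies the displayed classification result.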
In an embodiment, the method (100) for analysis of MRI scan data further comprises applying Local Interpretable Model-agnostic Explanations (LIME) to the classification results to provide interpretability of the model's decision-making process. The incorporation of LIME into the method represents an advancement in elucidating how the customized SeparableConvNet model arrives at specific classifications for brain tumors. This addition enables stakeholders, particularly medical practitioners, to gain insights into the predictive accuracy and reliability of the system. By leveraging LIME, the method breaks down the classification decision into understandable components, highlighting the importance of certain features or regions within the MRI scan that influence the model's output. This process not only enhances trust in the system's diagnostic capabilities but also supports the clinical decision-making process by offering explanations that align with medical knowledge and intuition. The application of LIME is instrumental in addressing potential concerns regarding the "black box" nature of deep learning models, thus bridging the gap between complex machine learning algorithms and practical clinical applications. The method's integration of interpretability tools like LIME signifies a significant step towards the deployment of AI in healthcare, where transparency and understanding are paramount.
In another embodiment, the method (100) of applying Local Interpretable Model-agnostic Explanations (LIME) as described in claim 2 further enables the visualization of contributing factors for the classification results. This enhancement allows for an in-depth understanding of the model's predictive factors, offering a granular view into the aspects of the MRI scan that are most instrumental in determining the presence and type of brain tumor. The application of LIME facilitates a detailed analysis whereby individual pixels or regions within the MRI scan contributing to the classification decision are highlighted, providing users with a visual representation of the model's rationale. This visualization is crucial for medical professionals as it allows for the comparison of the AI-generated insights with their clinical expertise, thereby validating the classification results. Furthermore, this embodiment supports educational purposes, aiding in the training of medical professionals by illustrating typical and atypical features associated with different brain tumors. The ability to visualize contributing factors enhances the method's utility by improving interpretability, fostering greater confidence in the AI's diagnostic recommendations, and enabling a collaborative approach to patient care where machine intelligence and human expertise complement each other. This level of interpretability and visualization represents a forward leap in making complex machine learning models accessible and actionable in clinical settings.
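The core mechanism behind LIME-style interpretability is perturb-and-measure: regions of the input are occluded, and the resulting change in the model's output indicates each region's influence. Full LIME additionally fits a local linear surrogate over many random perturbations; the sketch below, using a toy one-dimensional "image" and a toy scoring function (both hypothetical, not from the disclosure), shows only the core idea:

```python
# Toy "model": scores an image by summing the pixels of region B only,
# so a faithful explanation should rank region B above region A.
def model_score(img):
    return sum(img[4:])  # region B = last 4 pixels

image = [1, 1, 1, 1, 5, 5, 5, 5]  # 8 "pixels" split into two regions
regions = {"A": range(0, 4), "B": range(4, 8)}

# Perturb each region (zero it out) and measure how much the model's
# score drops; a larger drop marks a more influential region.
base = model_score(image)
influence = {}
for name, idx in regions.items():
    perturbed = [0 if i in idx else p for i, p in enumerate(image)]
    influence[name] = base - model_score(perturbed)

assert influence["B"] > influence["A"]
```

On real MRI scans the regions would be superpixels, and the highest-influence superpixels are the ones highlighted in the visual explanation presented to the clinician.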
The term "system for analysis of MRI scan data" as used throughout the present disclosure relates to a comprehensive apparatus designed to process and analyze magnetic resonance imaging (MRI) scans, particularly for identifying and classifying brain tumors. This system is engineered to enhance the accuracy and interpretability of brain tumor diagnostics.
The term "data preprocessing module" as used throughout the present disclosure relates to a component of the system dedicated to the initial handling and preparation of MRI scan images. This module is configured to perform augmentation and normalization of brain MRI scan images, thereby improving the quality and variability of the dataset fed into the classification module. Augmentation techniques such as rescaling, flipping, rotating, and zooming are applied to simulate various imaging conditions, while normalization adjusts the images to a standard scale for consistent analysis.
The term "SeparableConvNet module" as used throughout the present disclosure refers to a specialized neural network architecture within the system, trained to classify brain tumors into distinct categories. This module leverages a custom SeparableConvolutional Neural Network (SeparableConvNet) designed for high efficiency and accuracy in image classification tasks. The SeparableConvNet module differentiates between various brain tumor types, including glioma, meningioma, pituitary abnormalities, and the absence of a tumor, based on patterns discerned from the processed MRI scans.
The term "interpretability module" as used throughout the present disclosure denotes a system component that employs Local Interpretable Model-agnostic Explanations (LIME) to elucidate the classifications provided by the SeparableConvNet module. This module facilitates the understanding of how specific features within the MRI scan images contribute to the classification outcomes, thus offering insights into the decision-making process of the neural network. The interpretability module is crucial for validating the model's accuracy and for providing explanations that are accessible to medical professionals.
The term "user interface" as used throughout the present disclosure describes the graphical or textual interface through which users interact with the system. This interface is designed for displaying the classification results and corresponding LIME explanations in an intuitive and easily interpretable manner. The user interface enables healthcare practitioners to review the diagnostic outputs, understand the rationale behind the classifications, and make informed decisions regarding patient care.
Optionally, the system may include additional features such as the capability to adjust preprocessing parameters, training procedures for the SeparableConvNet module based on new data, and customization options for the presentation of results and explanations in the user interface.
FIG. 2 illustrates a block diagram of the system (200) for the analysis of MRI scan data, in accordance with the embodiments of the present disclosure. In the depicted embodiment, the system (200) comprises four primary modules. The data preprocessing module (202) is configured to augment and normalize brain MRI scan images, employing various techniques such as rescaling, random flipping, random rotation, and random zooming. Such augmentation is crucial for enhancing the robustness of the image dataset, which aids in the subsequent classification process. Adjacent to the data preprocessing module (202), the SeparableConvNet module (204) is trained to classify brain tumors into distinct categories based on the processed images. Such classification is performed with high precision and accuracy, enabling the identification of various brain tumor types such as glioma, meningioma, pituitary abnormalities, or the absence of a tumor. The interpretability module (206), situated within the system (200), utilizes Local Interpretable Model-agnostic Explanations (LIME) to provide interpretability of the classifications made by the SeparableConvNet module (204). Such interpretability is essential for understanding the decision-making process behind the classification results. Additionally, the user interface (208) is designed to display the classification results along with the corresponding LIME explanations. Such a user interface (208) facilitates the interaction between the user and the system (200), providing a means to visualize and interpret the analysis outcome.
In an embodiment, the system (200) for analysis of MRI scan data incorporates a data preprocessing module (202) that utilizes a rescaling function to adjust the pixel values of the MRI scan images to a specified range prior to augmentation. This rescaling function is crucial for standardizing the input images, ensuring that they all possess a consistent scale of pixel values, which is fundamental for accurate analysis and classification by the SeparableConvNet module (204). The adjustment of pixel values to a specific range addresses the variability inherent in MRI scans due to differing scanner settings and patient conditions. By normalizing the scale of pixel values across all images, the system enhances the model’s ability to detect subtle differences and similarities in the images, leading to improved accuracy in tumor classification. This preparatory step is essential for maintaining the integrity and comparability of the images fed into the system, thereby optimizing the performance of the subsequent data augmentation and classification processes. The rescaling function exemplifies the system's commitment to precision in the initial stages of image processing, laying a strong foundation for the accurate diagnosis of brain tumors.
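The rescaling operation described above amounts to a linear map of 8-bit pixel values into a fixed target range. A minimal sketch follows; the helper name and default range are illustrative assumptions, not part of the disclosure:

```python
def rescale_pixels(img, lo=0.0, hi=1.0, max_val=255):
    """Linearly map 8-bit pixel values into [lo, hi] (hypothetical helper)."""
    scale = (hi - lo) / max_val
    return [[lo + p * scale for p in row] for row in img]

img = [[0, 128, 255], [64, 32, 16]]
out = rescale_pixels(img)
# The extremes of the 8-bit range land exactly on the target bounds.
assert out[0][0] == 0.0 and out[0][2] == 1.0
assert all(0.0 <= p <= 1.0 for row in out for p in row)
```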
In another embodiment, the data preprocessing module (202) of the system (200) is further designed to adjust the brightness and contrast of MRI scan images to simulate variations in imaging conditions. This feature addresses one of the significant challenges in medical imaging analysis - the diversity in image quality and appearance due to different imaging parameters and conditions under which scans are conducted. By simulating these variations, the system ensures that the SeparableConvNet module (204) is exposed to a wide range of imaging scenarios during training, thereby enhancing its ability to generalize and accurately classify tumors across diverse real-world conditions. The adjustment to brightness and contrast is a sophisticated technique that mimics the variability encountered in clinical settings, ensuring that the model is robust and reliable under various conditions. This embodiment highlights the system’s capability to preprocess images in a manner that closely aligns with the practical challenges faced in medical imaging analysis, thereby improving the diagnostic utility of the system.
In yet another embodiment, the system's data preprocessing module (202) incorporates a noise injection function to simulate various image acquisition artifacts commonly encountered in MRI scans. The inclusion of noise injection as a preprocessing step is pivotal for preparing the system (200) to handle images with imperfections, which are typical in clinical environments due to scanner limitations, patient movement, or other factors affecting scan quality. By introducing controlled amounts of noise into the training images, the system cultivates a level of robustness in the SeparableConvNet module (204), enabling it to maintain high accuracy in tumor classification despite the presence of artifacts. This approach ensures that the model is well-equipped to interpret and analyze images that mirror the complexity and imperfection of real-world MRI scans, thus enhancing the reliability and applicability of the system in clinical settings. The noise injection function demonstrates the system’s comprehensive approach to data preprocessing, aiming to equip the model with the resilience needed for accurate diagnosis in the face of variable scan quality.
In an embodiment, the interpretability module (206) utilizing Local Interpretable Model-agnostic Explanations (LIME) within the system (200) is configured to highlight regions of interest in the MRI scan images that contribute most significantly to the classification decision. This functionality of the interpretability module (206) is critical for providing clarity on how the SeparableConvNet module (204) determines the presence and type of brain tumors. By visually identifying and emphasizing the areas within the scans that are pivotal to the model’s classification outcomes, the system fosters a deeper understanding and trust in the AI-driven diagnostic process among medical professionals. The ability to pinpoint and visualize these regions not only aids in validating the model's decisions but also offers valuable insights into the characteristics and indicators of various tumor types. This embodiment underscores the system’s dedication to transparency and interpretability in medical imaging analysis, bridging the gap between advanced AI technologies and clinical practice.
In another embodiment, the user interface (208) of the system (200) is designed to allow user input to adjust the parameters of the LIME explanation, thereby accommodating the needs and preferences of different users. This adjustable interface enables users to modify aspects such as the number of features displayed or the complexity of the explanation model, tailoring the interpretability of the classification results to suit individual requirements. By providing this level of customization, the system ensures that the insights derived from the interpretability module (206) are accessible and meaningful to a broad spectrum of users, from experts seeking detailed analysis to clinicians requiring simplified explanations. This feature highlights the system’s flexibility and user-centric design, emphasizing its capability to serve as a versatile tool in the diagnostic process, adaptable to the diverse needs of the medical community.
In a further embodiment, the user interface (208) of the system (200) includes a feature for exporting the classification results and corresponding LIME explanations into a report format suitable for medical review. This functionality enables the seamless integration of AI-driven diagnostic insights into existing medical workflows, facilitating collaboration and decision-making among healthcare professionals. The export feature ensures that the diagnostic outputs generated by the system, along with the accompanying interpretability explanations, are presented in a format that aligns with established clinical practices, thereby enhancing their utility and acceptance within medical institutions. By streamlining the process of sharing diagnostic findings, the system promotes efficiency and transparency in patient care, ultimately contributing to improved outcomes and patient satisfaction. This embodiment underscores the system’s commitment to facilitating the translation of AI-driven insights into actionable clinical interventions, thereby advancing the practice of medical imaging analysis.
FIG. 3 represents the architectural design of the SeparableConvNet Tumor Classifier for processing input images for tumor classification, in accordance with an embodiment of the present disclosure. The input consists of medical images with a resolution of 180x180 pixels. The process starts with Data Augmentation, where the input data is augmented to enhance the model's ability to generalize by creating modified versions of the images. In the augmentation step, an image can undergo various alterations such as rotations, flips, zooms, or other transformations. The augmented images then undergo normalization, a technique to standardize the pixel values across the dataset, typically to have a mean of zero and a standard deviation of one. Following normalization, the images are passed through multiple layers, specifically SeparableConv2D layers, which are a type of convolutional layer that operates on individual channels of the input, making them computationally efficient. Each of these layers is typically followed by a normalization layer, which ensures that the activations do not become too high or too low, leading to faster convergence during training. The architecture includes pooling layers, which reduce the dimensionality of the data while retaining important features, making the network less sensitive to the exact location of features in the input images. The output from the SeparableConv2D layers is then flattened into a single vector in the Flatten Layer. This flattened vector is passed through a Dropout Layer, which randomly sets a fraction of the input units to zero at each update during training, helping to prevent overfitting by making the neural network more robust. After dropout, the data is passed through a Dense Layer, a type of neural network layer in which every unit is connected to every unit in the previous layer, thus learning high-level features from the data.
Finally, the output layer consists of multiple units, each corresponding to a category of tumor that the network can classify. The network uses a softmax function to output a probability distribution over the different tumor categories.
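The computational efficiency of the SeparableConv2D layers described above can be made concrete with a simple parameter count: a depthwise separable convolution replaces one k x k x C_in x C_out kernel with a per-channel k x k depthwise stage plus a 1x1 pointwise stage. The layer sizes below (a 3x3 kernel mapping 64 channels to 128) are illustrative assumptions, not values taken from the disclosure:

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1x1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)   # 73,728 weights
sep = separable_conv_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
assert sep < std / 8  # roughly an 8x reduction at these layer sizes
```

This reduction in weights (and correspondingly in multiply-accumulate operations) is what makes the separable architecture attractive for detailed image analysis at modest computational cost.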
FIG. 4 describes the workflow of the system utilizing the SeparableConvNet for tumor classification, in accordance with an embodiment of the present disclosure. The workflow begins with the input of a medical image with dimensions specified as 180x180 pixels. The image is then augmented through various techniques such as random flipping, rotation, and zooming to increase the diversity of the dataset and improve the model's robustness. The augmented images are then normalized to ensure a consistent scale across the dataset, which is a crucial step before they are fed into the neural network. The normalized images go through the SeparableConvNet module, which comprises layers designed to extract features and patterns that are indicative of the presence and type of brain tumor. The output from the SeparableConvNet module leads to the classification results. This output segment displays four categories: Pituitary, Meningioma, Glioma, and NoTumor. Each category shows a medical image with a highlighted region indicating the model's prediction of the tumor location along with a confidence score. This demonstrates the model's ability not only to classify the type of tumor but also to localize it within the brain imagery. Finally, the workflow incorporates interpretation with LIME (Local Interpretable Model-agnostic Explanations), a technique for explaining the predictions of the model. LIME helps in understanding which features influence the prediction by highlighting them, as shown in the images below the classification results. Here, actual labels are compared against predicted labels, and the interpretive focus is shown with highlighted sections on brain scans, providing insights into the model's decision-making process. This aspect of the system is crucial for validating the model's predictions and for offering explanations that can be understood by medical professionals.
In an embodiment, the disclosed system enables detection of brain tumors, utilizing a SeparableConvNet architecture for precise classification and providing an interpretive framework within healthcare diagnostics. The system is trained on a diverse dataset of 7,023 human brain MRI scans from well-known sources, achieving a high test accuracy of 96.64% through advanced preprocessing techniques such as normalization and augmentation. The system utilizes Local Interpretable Model-agnostic Explanations (LIME), which increases the transparency of the Convolutional Neural Network (CNN) predictions. The system allows medical professionals to understand the model's reasoning, fostering trust and enabling informed decision-making. The customized SeparableConvNet model analyzes brain MRI scans and classifies them into categories such as glioma, meningioma, pituitary abnormality, or no tumor presence. In the preprocessing stage, each image undergoes various image processing steps such as rescaling, random flipping, rotation, and zooming to prepare the data for the model, ensuring robustness and reliability. Compared to existing methodologies, the system provides several advantages: it achieves higher accuracy in detecting brain tumors, which is critical for improving patient outcomes and reducing diagnostic errors; it enables personalized patient care; and it supports healthcare professionals in their decision-making processes. The classification aids in accurate diagnosis and treatment planning, ultimately leading to better patient care.
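The intuition behind LIME's role here can be conveyed with a miniature occlusion experiment: perturb regions of the input and observe how much the prediction changes. This sketch is only that intuition, not the actual LIME algorithm (which fits a local linear surrogate model over superpixel perturbations); the toy model, image size, and patch size below are assumptions for illustration.

```python
import numpy as np

def occlusion_importance(image, predict, patch=6):
    """Mask square patches of the image and record how much the model's
    score drops. A larger drop means the patch mattered more."""
    H, W = image.shape
    base = predict(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            masked = image.copy()
            masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
            heat[i, j] = base - predict(masked)   # importance of this patch
    return heat

# Stand-in "model": scores the mean intensity of the top-left quadrant,
# so that region should dominate the resulting importance map.
def toy_predict(img):
    return float(img[:12, :12].mean())

img = np.ones((24, 24))
heat = occlusion_importance(img, toy_predict)   # 4x4 importance map
```

In the full system, the highlighted regions LIME produces serve the same purpose as this importance map: showing a clinician which parts of the scan drove the classification.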
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein, relates to a random access memory, a read only memory, and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with the various aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims

I/We claim:

1. A method (100) for analysis of MRI scan data, the method comprising:
a. receiving an input of a brain MRI scan image;
b. preprocessing the received image through data augmentation techniques including at least one of: Rescaling, RandomFlip, RandomRotation, and RandomZoom;
c. normalizing the augmented images to a standard scale;
d. classifying the normalized images using a customized SeparableConvNet model designed to distinguish between brain tumor categories; and
e. outputting a classification result indicative of at least one of: glioma, meningioma, pituitary abnormality, or no tumor.
2. The method (100) of claim 1, further comprising applying Local Interpretable Model-agnostic Explanations (LIME) to the classification results to provide interpretability of the model's decision-making process.
3. The method (100) of claim 2, wherein the application of LIME enables visualization of contributing factors for the classification results, thereby allowing for an understanding of the model's predictive factors.
4. A system (200) for analysis of MRI scan data, the system comprising:
a. a data preprocessing module (202) configured to perform augmentation and normalization of brain MRI scan images;
b. a SeparableConvNet module (204) trained to classify brain tumors into distinct categories;
c. an interpretability module (206) utilizing LIME to explain the classifications provided by the SeparableConvNet module (204); and
d. a user interface (208) for displaying the classification results and corresponding LIME explanations.
5. The system (200) of claim 4, wherein the data preprocessing module (202) utilizes a rescaling function to adjust the pixel values of the MRI scan images to a specified range prior to augmentation.
6. The system (200) of claim 4, wherein the data preprocessing module (202) is configured to adjust brightness and contrast to simulate variations in imaging conditions.
7. The system (200) of claim 4, wherein the data preprocessing module (202) utilizes a noise injection function to simulate various image acquisition artifacts.
8. The system (200) of claim 4, wherein the interpretability module (206) utilizing LIME is configured to highlight regions of interest in the MRI scan images that contribute most significantly to the classification decision.
9. The system (200) of claim 4, wherein the user interface (208) is configured to allow user input to adjust the parameters of the LIME explanation, including but not limited to the number of features to display and the complexity of the explanation model.
10. The system (200) of claim 4, wherein the user interface (208) includes a feature for exporting the classification results and corresponding LIME explanations into a report format suitable for medical review.

ENHANCED MRI SCAN DATA ANALYSIS FOR BRAIN TUMOR CLASSIFICATION USING ADVANCED PREPROCESSING AND SEPARABLECONVNET

A method and system for analyzing brain MRI scan data are disclosed. The method involves receiving a brain MRI scan image, preprocessing the image through data augmentation techniques such as Rescaling, RandomFlip, RandomRotation, and RandomZoom, normalizing the augmented images to a standard scale, and classifying the normalized images using a customized SeparableConvNet model. The model is capable of distinguishing between various brain tumor categories including glioma, meningioma, pituitary abnormality, or identifying the absence of a tumor. The system comprises a data preprocessing module for augmentation and normalization, a SeparableConvNet module for tumor classification, an interpretability module utilizing LIME to explain the classifications, and a user interface for displaying results and explanations.

Drawings
FIG. 1
FIG. 2
FIG. 3
FIG. 4

Documents

Application Documents

# Name Date
1 202421033178-OTHERS [26-04-2024(online)].pdf 2024-04-26
2 202421033178-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf 2024-04-26
3 202421033178-FORM 1 [26-04-2024(online)].pdf 2024-04-26
4 202421033178-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf 2024-04-26
5 202421033178-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf 2024-04-26
6 202421033178-DRAWINGS [26-04-2024(online)].pdf 2024-04-26
7 202421033178-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf 2024-04-26
8 202421033178-COMPLETE SPECIFICATION [26-04-2024(online)].pdf 2024-04-26
9 202421033178-FORM-9 [07-05-2024(online)].pdf 2024-05-07
10 202421033178-FORM 18 [08-05-2024(online)].pdf 2024-05-08
11 202421033178-FORM-26 [12-05-2024(online)].pdf 2024-05-12
12 202421033178-FORM 3 [13-06-2024(online)].pdf 2024-06-13
13 202421033178-RELEVANT DOCUMENTS [17-04-2025(online)].pdf 2025-04-17
14 202421033178-POA [17-04-2025(online)].pdf 2025-04-17
15 202421033178-FORM 13 [17-04-2025(online)].pdf 2025-04-17