
System For Classifying Brain Tumors In Mri Images

Abstract: A system for classifying brain tumors in MRI images, comprising a feature extraction unit employing a CNN based on ResNet50 or VGG16 to process spatial features from MRI images; a dependency capture unit utilizing a Swin Transformer or Vision Transformer to capture long-range dependencies; a feature fusion unit that integrates their outputs to improve classification; an interpretability module using LIME to highlight critical image regions, aiding radiologists in understanding classification results; and an integration unit that connects the system to a picture archiving and communication system (PACS) and sends the classification results and explanations to the PACS for use by radiologists and hospitals.


Patent Information

Application #
Filing Date
13 August 2025
Publication Number
35/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR University
Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.

Inventors

1. Gunde Mounika
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
2. Dr. Sreedhar Kollem
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.
3. Dr. Srinivas Samala
SR University, Ananthasagar, Hasanparthy (PO), Warangal-506371, Telangana, India.

Specification

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to a system for classifying brain tumors in MRI images that extracts both spatial and contextual features for accurate diagnosis and provides interpretable visual outputs to support clinical decisions, enhancing diagnostic confidence and giving radiologists timely access to critical imaging insights.

BACKGROUND OF THE INVENTION

[0002] Brain tumor diagnosis through MRI imaging is important in modern medical practice, as early and accurate classification significantly impacts treatment planning and patient outcomes. Diagnostic approaches rely heavily on radiologists' expertise, which is subjective and time-consuming, particularly when interpreting complex or subtle abnormalities. While earlier automated systems were developed to assist in tumor classification, many face limitations such as poor extraction of critical spatial features, inability to capture long-range dependencies within the images, lack of interpretability, and insufficient integration with clinical workflows. An efficient diagnostic system is therefore needed to support radiologists in making more accurate and explainable brain tumor diagnoses.

[0003] The traditional process of classifying brain tumors in MRI images relies heavily on manual interpretation by radiologists, which presents several challenges and limitations. The approach is time-consuming and subject to inter-observer variability, as diagnostic accuracy depends greatly on the experience and expertise of individual practitioners. Subtle differences in tumor appearance, overlapping features among tumor types, and variations in image quality lead to inconsistent or incorrect diagnoses. Furthermore, traditional computer-aided diagnostic systems often rely on handcrafted features and shallow learning techniques, which lack the capability to fully capture the complex patterns and structures present in medical images. These systems focus on local features and fail to account for broader spatial context, limiting their effectiveness in analyzing tumors that span or affect multiple brain regions. These challenges underscore the urgent need for a more accurate, explainable, and seamlessly integrated solution for brain tumor classification in MRI images.

[0004] JP2023124767A relates to a system for automatically distinguishing brain tumor types. An image outputting device outputs at least three brain images captured from the position of a brain tumor. A server computing device pre-stores a plurality of distinguishing pathways corresponding to different types of brain tumors. The server computing device includes an image receiving module, an image front-end processing module, a data comparison module, and a distinguishing module. The image receiving module receives the brain images. The image front-end processing module processes each brain image to obtain corresponding processed images. The data comparison module compares each brain image and the processed images with the distinguishing pathways to obtain at least three comparison results. The distinguishing module statistically analyzes the comparison results to obtain a final result.

[0005] CN117437493B relates to a brain tumor MRI image classification method and system combining first-order and second-order characteristics, and relates to the technical field of medical image analysis, wherein the method comprises the following steps: acquiring a brain tumor MRI image to be classified, and preprocessing the brain tumor MRI image; taking a pre-trained ResNet18 network as a main network, and extracting an initial feature vector of the preprocessed brain tumor MRI image by using the main network; based on the initial feature vector, obtaining a first-order feature vector of the brain tumor MRI image through global average pooling, and obtaining a second-order feature vector of the brain tumor MRI image through covariance pooling; based on the first-order feature vector and the second-order feature vector, respectively performing category prediction, and then adding and fusing the two category prediction results to obtain a final classification result. The invention summarizes global features and finer local structures of image data by combining first-order and second-order features, thereby realizing higher-precision brain tumor MRI image classification.

[0006] Conventionally, many systems have been developed for classifying brain tumors in MRI images; however, the systems mentioned in the prior art have limitations pertaining to spatial-contextual analysis, lack of explainability, and absence of seamless data exchange with hospital infrastructure to enhance diagnostic accuracy and clinical usability.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a system capable of extracting spatial features from MRI images, combining multiple levels of image information into a unified output for more reliable diagnostic outcomes in clinical environments, and integrating smoothly with hospital infrastructure to improve diagnostic precision and support effective clinical application.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a system that is capable of accurately classifying tumors in MRI images for early detection, while providing interpretable results for clinical use.

[0010] Another object of the present invention is to develop a system that is capable of understanding spatial and contextual relationships in medical images, facilitating the detection of subtle or dispersed abnormalities.

[0011] Another object of the present invention is to develop a system that is capable of combining multiple levels of image information into a unified output that strengthens classification performance, ensuring more reliable diagnostic outcomes in clinical environments.

[0012] Yet another object of the present invention is to develop a system that is capable of providing diagnostic results and insights accessible to healthcare professionals or in hospitals.

[0013] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0014] The present invention relates to a system for classifying brain tumors in MRI images that combines detailed spatial features with long-range dependency analysis for improved classification accuracy and provides interpretable visual explanations to support radiologists in healthcare.

[0015] According to an embodiment of the present invention, a system for classifying brain tumors in MRI images comprises a feature extraction unit associated with the system, which uses a convolutional neural network (CNN) based on ResNet50 or VGG16 to extract spatial features from MRI images uploaded by a concerned person through a computing unit wirelessly linked with the system; a dependency capture unit that uses a transformer to capture long-range dependencies in the MRI images; and a feature fusion unit that combines the spatial features from the CNN and the long-range dependencies from the transformer to improve classification.

[0016] According to another embodiment of the present invention, the system further comprises an interpretability module that uses local interpretable model-agnostic explanations (LIME) to show which parts of the MRI images affect the classification and highlights critical regions in the MRI images to explain the tumor classification to radiologists, and an integration unit that connects the system to a picture archiving and communication system (PACS) and sends the classification results and explanations to the PACS for use by radiologists or in hospitals.

[0017] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates a flow chart depicting a system for classifying brain tumors in MRI images.

DETAILED DESCRIPTION OF THE INVENTION

[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0020] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0021] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0022] The present invention relates to a system for classifying brain tumors in MRI images that analyzes both spatial and long-range features to improve diagnostic accuracy through combined processing, offers visual interpretability to aid radiologists, and integrates with hospital infrastructure to ensure seamless access and reliable decision support in clinical practice.

[0023] Figure 1 illustrates a flow chart depicting a system for classifying brain tumors in MRI images.

[0024] The system disclosed herein comprises a feature extraction unit to analyze MRI images for tumor classification with high precision. This unit employs a convolutional neural network (CNN) architecture based on either ResNet50 or VGG16, which are well-established deep learning models known for their exceptional performance in image recognition tasks. Upon receiving MRI scans uploaded by a concerned person via a computing unit that communicates wirelessly with the system, the feature extraction unit initiates preprocessing to standardize the image data. The CNN then processes these images through multiple convolutional layers, where each layer detects increasingly complex patterns. ResNet50, with its deep residual learning framework, efficiently addresses the vanishing gradient problem, enabling the extraction of deeper, more abstract features. As the MRI images pass through the CNN, spatial features such as edges, textures, and anatomical structures relevant to potential tumor regions are extracted and encoded into a dense feature representation.

[0025] ResNet50 uses deep residual learning to effectively train a 50-layer convolutional neural network without the performance degradation that occurs in very deep networks. ResNet50 introduces shortcut connections that bypass one or more layers, allowing the network to learn residual functions, essentially the difference between the input and output of the layers, rather than direct mappings. These residual connections help maintain gradient flow during backpropagation, making it easier to train deeper networks.

[0026] The architecture includes an initial convolutional layer, followed by four stages of residual blocks composed of convolutional layers with batch normalization and ReLU activation. Each block uses a 1x1 convolution to reduce dimensions, a 3x3 convolution to process features, and another 1x1 convolution to restore dimensions. These blocks are stacked to learn increasingly abstract representations, enabling ResNet50 to extract complex spatial features from input images such as MRI scans with high accuracy for precise tumor localization and classification in medical diagnostics.
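The residual principle described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not code from the specification: the `transform` callable stands in for the block's 1x1, 3x3, 1x1 convolution stack, and plain lists stand in for tensors.

```python
# Illustrative sketch of the residual connection y = ReLU(F(x) + x)
# used by ResNet50's bottleneck blocks. `transform` is a stand-in for
# the block's convolution/batch-norm layers; real blocks use tensors.

def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """Add the block's learned transform F(x) back onto its input x."""
    fx = transform(x)                       # stand-in for conv layers
    assert len(fx) == len(x), "shortcut needs matching dimensions"
    return relu([a + b for a, b in zip(fx, x)])   # y = ReLU(F(x) + x)

# Even if F collapses to zero (e.g. untrained layers), the shortcut
# still carries the input through, which is what preserves gradient flow.
out = residual_block([1.0, -2.0, 3.0], lambda v: [0.0] * len(v))
print(out)  # [1.0, 0.0, 3.0]
```

The key design point is that the block only has to learn the residual F(x), so a "do nothing" block is trivially representable, which is why stacking 50 layers does not degrade training.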

[0027] A communication module establishes a wireless connection between the system and the computing unit accessed by the user to upload MRI images into the system. The computing unit mentioned herein includes, but is not limited to, a smartphone, tablet, or laptop comprising a processor that stores and processes data and retrieves the output data for display on the computing unit. The communication module used herein includes, but is not limited to, a Wi-Fi (Wireless Fidelity) module, a Bluetooth module, or a GSM (Global System for Mobile Communication) module.

[0028] The communication module used herein is preferably a Wi-Fi module, a hardware component that enables the computing unit to connect wirelessly with the system. The Wi-Fi module works by utilizing radio waves to transmit and receive data over short distances. Its core functionality relies on the IEEE 802.11 standards, which define the protocols for wireless local area networking (WLAN). Once connected, the module allows data to be sent and received as data packets.

[0029] A dependency capture unit to capture long-range dependencies in the MRI images using a transformer-based architecture, specifically a Swin Transformer or Vision Transformer (ViT). The Swin Transformer uses a hierarchical structure with shifted windows, allowing local attention to be computed efficiently within non-overlapping regions, while also enabling cross-window connections to build a global understanding over multiple layers. This hierarchical and shifted design makes the Swin Transformer more scalable and effective for high-resolution images like MRI scans. As the image progresses through the transformer layers, rich feature representations are generated that capture both fine-grained details and broader spatial relationships critical for accurate tumor analysis.
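The long-range dependency capture described above rests on self-attention. The following is a simplified pure-Python sketch (an illustration, not the patented implementation): each "patch" is a tiny feature vector, and scaled dot-product attention lets every patch draw information from every other patch regardless of distance.

```python
import math

# Toy scaled dot-product self-attention over image "patches". Real
# transformers use learned query/key/value projections and multi-head
# attention; here each patch attends with its raw features.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(patches):
    """Mix every patch with all others, weighted by dot-product similarity."""
    d = len(patches[0])
    out = []
    for q in patches:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in patches]            # similarity to every patch
        weights = softmax(scores)              # attention distribution
        out.append([sum(w * v[j] for w, v in zip(weights, patches))
                    for j in range(d)])        # weighted mix of all patches
    return out

mixed = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The Swin variant restricts this computation to shifted local windows and merges patches hierarchically, which is what keeps the cost tractable on high-resolution MRI slices.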

[0030] A feature fusion unit to combine spatial features extracted by the CNN with the long-range dependencies captured by the transformer, for enhancing the overall classification accuracy. Spatial features from the CNN provide detailed local information such as edges, textures and tumor boundaries, while the transformer contributes global contextual insights across distant regions of the MRI image. The fusion unit aligns and merges these complementary feature sets using concatenation or element-wise addition followed by normalization and fully connected layers to integrate the information effectively. As a result, the classification model receives a richer and more informative input, leading to more accurate and robust tumor detection and categorization.
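The concatenation-plus-fully-connected fusion described above can be sketched as follows. The vectors and weights are toy assumptions for illustration; in the actual system both feature sets come from the trained CNN and transformer, and the projection weights are learned.

```python
# Illustrative fusion step: concatenate the CNN's spatial-feature vector
# with the transformer's context vector, then apply a fully connected
# layer (one output per weight row). Weights here are fixed toys.

def concat_fuse(cnn_features, transformer_features, weights, bias):
    fused = cnn_features + transformer_features       # concatenation
    assert len(weights[0]) == len(fused)
    return [sum(w * f for w, f in zip(row, fused)) + b
            for row, b in zip(weights, bias)]         # dense projection

cnn_vec = [0.2, 0.8]            # e.g. edge/texture responses (assumed)
ctx_vec = [0.5, 0.1]            # e.g. global context responses (assumed)
W = [[1.0, 0.0, 1.0, 0.0],      # toy 2x4 projection back to 2 dims
     [0.0, 1.0, 0.0, 1.0]]
b = [0.0, 0.0]
fused_out = concat_fuse(cnn_vec, ctx_vec, W, b)
print(fused_out)   # approximately [0.7, 0.9]
```

Element-wise addition is the other option the unit mentions; it requires both feature vectors to share one dimensionality, whereas concatenation lets the dense layer learn how much to trust each source.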

[0031] An interpretability module to enhance the transparency of the system by employing Local Interpretable Model-Agnostic Explanations (LIME) to explain which regions of an MRI image most significantly influence the tumor classification decision. LIME works by perturbing the input image such as slightly modifying or masking different regions and observing how these changes affect the model’s prediction, then builds a simplified interpretable model around the instance being analyzed to approximate the complex classifier's behavior locally. This allows LIME to assign importance scores to different image regions based on their impact on the output, effectively highlighting the most influential areas. These highlighted regions are presented visually on the MRI image, allowing radiologists to see which parts of the scan were most relevant to the system's decision.
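The perturb-and-observe idea behind LIME can be shown with a minimal occlusion toy. This is a simplification for illustration, not the LIME library itself: each region of the image is masked in turn, the classifier is re-run, and the drop in the prediction scores that region's importance.

```python
# Toy perturbation-based importance scoring in the spirit of LIME:
# occlude one region at a time and measure how much the tumor score
# drops. The region with the largest drop is the most influential one.

def region_importance(regions, classify):
    baseline = classify(regions)
    scores = []
    for i in range(len(regions)):
        masked = regions[:i] + [0.0] + regions[i + 1:]   # occlude region i
        scores.append(baseline - classify(masked))        # prediction drop
    return scores

# Hypothetical "classifier": the tumor score depends mostly on region 1.
classify = lambda r: 0.1 * r[0] + 0.8 * r[1] + 0.1 * r[2]
scores = region_importance([1.0, 1.0, 1.0], classify)
print(scores.index(max(scores)))  # 1: region 1 dominates the decision
```

Full LIME additionally fits a sparse linear surrogate model over many random perturbations rather than single occlusions, but the importance scores it overlays on the MRI arise from this same sensitivity principle.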

[0032] An integration unit acts as the communication bridge between the system and the Picture Archiving and Communication System (PACS), ensuring seamless incorporation into clinical workflows. Once the MRI images are processed and classified, leveraging CNNs for spatial features, transformers for long-range dependencies, and LIME for interpretability, the integration unit packages the final outputs, including the classification results and visual explanation maps, into a format compatible with PACS standards, then securely transmits this data to the PACS, where it is accessed by radiologists alongside traditional imaging studies. This unit supports hospital IT protocols, ensuring compliance with data privacy, security, and interoperability requirements. By embedding diagnostic insights directly into the existing imaging infrastructure, the integration unit facilitates real-time clinical use, allowing radiologists to review supported tumor assessments.

[0033] The present invention works best in the following manner. The system disclosed herein comprises the feature extraction unit, which employs the CNN based on ResNet50 or VGG16 to process MRI images uploaded wirelessly by the concerned person through the computing unit. The CNN extracts spatial features such as edges, textures, and localized tumor structures. These spatial features are then passed to the dependency capture unit, which utilizes the transformer, preferably a Swin Transformer or Vision Transformer, to model long-range dependencies within the MRI image, capturing complex contextual relationships across distant regions. The feature fusion unit combines the spatial features from the CNN and the features from the transformer to produce a unified, enriched feature representation that improves tumor classification performance. The interpretability module applies Local Interpretable Model-Agnostic Explanations (LIME) to identify and highlight the most influential regions in the MRI images, providing visual explanations of the classification output to support radiologist decision-making. The integration unit ensures seamless connection of the system to the Picture Archiving and Communication System (PACS) used in hospitals, enabling secure transfer of classification results and explanations for use by radiologists or in hospitals.

[0034] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.

Claims:

1) A system for classifying brain tumors in MRI images, comprising:

i) a feature extraction unit associated with the system, that uses a convolutional neural network (CNN) to extract spatial features from MRI images uploaded by a concerned person through a computing unit wirelessly linked with the system;

ii) a dependency capture unit that uses a transformer to capture long-range dependencies in the MRI images;

iii) a feature fusion unit that combines the spatial features from the CNN and the long-range dependencies from the transformer to improve classification;

iv) an interpretability module that uses local interpretable model-agnostic explanations (LIME) to show which parts of the MRI images affect the classification; and

v) an integration unit that connects the system to a picture archiving and communication system (PACS) for use in hospitals.

2) The system as claimed in claim 1, wherein the CNN in the feature extraction unit is based on ResNet50 or VGG16 to process MRI images.

3) The system as claimed in claim 1, wherein the interpretability module highlights critical regions in the MRI images to explain the tumor classification to radiologists.

4) The system as claimed in claim 1, wherein the integration unit sends the classification results and explanations to the PACS for use by radiologists.

5) The system as claimed in claim 1, wherein the transformer in the dependency capture unit is preferably a Swin Transformer or Vision Transformer to capture image dependencies.

Documents

Application Documents

# Name Date
1 202541077336-STATEMENT OF UNDERTAKING (FORM 3) [13-08-2025(online)].pdf 2025-08-13
2 202541077336-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-08-2025(online)].pdf 2025-08-13
3 202541077336-PROOF OF RIGHT [13-08-2025(online)].pdf 2025-08-13
4 202541077336-POWER OF AUTHORITY [13-08-2025(online)].pdf 2025-08-13
5 202541077336-FORM-9 [13-08-2025(online)].pdf 2025-08-13
6 202541077336-FORM FOR SMALL ENTITY(FORM-28) [13-08-2025(online)].pdf 2025-08-13
7 202541077336-FORM 1 [13-08-2025(online)].pdf 2025-08-13
8 202541077336-FIGURE OF ABSTRACT [13-08-2025(online)].pdf 2025-08-13
9 202541077336-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-08-2025(online)].pdf 2025-08-13
10 202541077336-EVIDENCE FOR REGISTRATION UNDER SSI [13-08-2025(online)].pdf 2025-08-13
11 202541077336-EDUCATIONAL INSTITUTION(S) [13-08-2025(online)].pdf 2025-08-13
12 202541077336-DRAWINGS [13-08-2025(online)].pdf 2025-08-13
13 202541077336-DECLARATION OF INVENTORSHIP (FORM 5) [13-08-2025(online)].pdf 2025-08-13
14 202541077336-COMPLETE SPECIFICATION [13-08-2025(online)].pdf 2025-08-13