
An Automated Deep Learning System for Accurate Detection and Segmentation of Brain Tumours from MRI Scans

Abstract: An automated system and method for detecting and segmenting brain tumours from magnetic resonance imaging (MRI) scans are disclosed. The invention receives multi-modal MRI inputs, pre-processes them to standardise and enhance quality, and applies a convolutional neural network with attention-guided segmentation to delineate tumour boundaries accurately. A classification module assigns a tumour stage or grade based on extracted features. The system outputs a segmented tumour mask and stage prediction within seconds via a user-friendly interface for clinicians. Transfer learning and data augmentation enhance robustness across diverse tumour types and patient demographics. The fully automated pipeline eliminates manual intervention, reduces diagnostic time, and delivers consistent, high-accuracy results suitable for real-time clinical environments. By integrating advanced deep learning with secure deployment options, the invention supports improved diagnosis, treatment planning and monitoring of brain tumours in both specialised and resource-constrained settings.


Patent Information

Application #
Filing Date
23 September 2025
Publication Number
43/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. MRUTHYUNJAYA MENDU
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. SURESH KUMAR MANDALA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF THE INVENTION
The invention relates to medical image analysis, specifically to a system and method for automated detection and segmentation of brain tumours from magnetic resonance imaging (MRI) scans using deep learning. It concerns a fully automated framework combining convolutional neural networks, transfer learning and attention-based segmentation for high-precision tumour delineation and stage classification.
BACKGROUND OF THE INVENTION
Brain tumours pose a serious health risk and require timely and accurate diagnosis for effective treatment. Magnetic Resonance Imaging (MRI) is widely used for detecting and analysing brain tumours, but manual interpretation of MRI scans is time-consuming, subject to inter-observer variability, and often lacks consistency, especially in early or ambiguous cases. Furthermore, tumour boundaries can be complex and vary greatly in shape, size, and intensity, making accurate segmentation difficult. Existing traditional and semi-automated techniques frequently fall short in terms of accuracy, scalability, and real-time applicability. Therefore, there is a critical need to develop an automated deep learning framework capable of accurately detecting and segmenting brain tumours from MRI scans. Such a system should enhance diagnostic efficiency, reduce human error, and support clinicians in making precise treatment decisions, ultimately improving patient care and outcomes.
US2024090822A1: A method of diagnosing a neurodegenerative disorder (ND) in a patient comprising: (a) obtaining MRI image(s) of the patient's brain, (b) using the MRI image(s) of the patient's brain to segment sub-cortical structures associated with the ND into sub-regions, based on structural connectivity to cortical sub-regions, (c) extracting one or more MRI features from each of the sub-regions generated by the segmentation, and (d) using one or more machine learning techniques to classify the patient as being ND positive or ND negative based on comparisons of the one or more MRI features to at least one training data set that includes MRI features of each of the sub-regions generated by the segmentation of known ND positive controls and MRI features of each of the sub-regions generated by the segmentation of ND negative controls, thereby diagnosing the ND. Also disclosed are computer-based or cloud-based systems to diagnose an ND in a subject.
US12154239B2: An augmented reality system and method, comprising: a memory configured to store 3D medical scans comprising an image of a tumor and an angiogram; an output port configured to present a signal for presentation of an augmented reality display to a user; at least one camera, configured to capture images of a physiological object from a perspective; at least one processor, configured to: implement a first neural network trained to automatically segment the tumor; implement a second neural network to segment vasculature in proximity to the tumor; implement a third neural network to recognize a physiological object in the captured images; and generate an augmented reality display of the physiological object, tumor and vasculature based on the captured images, the segmented tumor and the segmented vasculature, compensated for changes in the perspective.
Manual interpretation of MRI scans for brain tumour detection is time-consuming, inconsistent and prone to inter-observer variability. Existing semi-automated systems lack robustness across tumour types and modalities, and most commercial tools provide limited automation without precise boundary segmentation or stage classification. This invention addresses these shortcomings by providing a fully automated, end-to-end deep learning pipeline capable of accurately detecting, segmenting and classifying brain tumours from MRI scans, reducing diagnostic time, improving consistency and supporting clinicians in treatment planning.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended for determining the scope of the invention.
The invention provides an automated deep learning framework for brain tumour analysis from MRI data. The framework receives multi-modal MRI inputs, performs pre-processing and data augmentation to enhance robustness, and then applies convolutional neural networks and attention-guided segmentation modules to localise and delineate tumour regions.
The system outputs segmented tumour masks and predicted stage classifications in real time via a user-friendly interface for clinicians. Transfer learning and large annotated datasets are employed to improve generalisation across diverse tumour types and patient demographics.
By automating both detection and segmentation, the invention eliminates the need for manual or semi-automated workflows and delivers consistent, high-accuracy results suitable for real-world clinical environments, especially in resource-constrained settings.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The proposed invention consists of an automated deep learning framework for brain tumour detection and segmentation from MRI scans. The framework uses state-of-the-art convolutional neural networks (CNNs) and advanced transfer learning techniques to extract meaningful features from high-resolution MRI images.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of “first”, “second”, “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defined by “first” and “second” may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The proposed invention consists of an automated deep learning framework for brain tumour detection and segmentation from MRI scans. The framework uses state-of-the-art convolutional neural networks (CNNs) and advanced transfer learning techniques to extract meaningful features from high-resolution MRI images.
The system is built to perform two key tasks:
(1) Localisation and segmentation of tumour regions with precision.
(2) Stage classification based on the shape, size and tissue characteristics of the tumour.
The model is trained on annotated MRI datasets, with data augmentation and pre-processing used to improve robustness and generalisation. The invention also integrates a user-friendly interface through which clinicians can upload MRI scans and, within seconds, receive automated results such as the segmented tumour region and stage prediction. The invention substantially shortens diagnostic time, reduces the amount of manual intervention during tumour analysis, and maintains both high accuracy and consistency. Owing to its scalable and interpretable architecture, the solution is deployable in real-time clinical environments, especially in resource-constrained settings where advanced radiological analysis by an expert may be scarce.
A deep learning-based brain tumour detection system is proposed that combines multi-level feature extraction and attention-based segmentation techniques to detect and delineate tumours from MRI scans accurately, outperforming conventional models.
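Segmentation accuracy of the kind claimed above is conventionally quantified with the Dice similarity coefficient, which measures overlap between a predicted tumour mask and an expert annotation. A minimal sketch using numpy (an illustrative metric, not part of the claimed system itself):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|), the standard
    overlap metric for comparing a predicted tumour mask against
    an expert-annotated ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 8x8 masks: two 4x4 squares offset by one pixel (9-pixel overlap).
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 3:7] = True
score = dice(pred, truth)   # 2*9 / (16+16) = 0.5625
```

A perfect segmentation scores 1.0; values above roughly 0.9 are typically considered strong for whole-tumour delineation.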
The invention comprises an input interface allowing clinicians to upload multi-modal MRI scans such as T1, T2 and FLAIR sequences.
A pre-processing module standardises input images by performing skull stripping, intensity normalisation, alignment and noise reduction. This ensures consistent data quality and improves downstream performance.
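The intensity-normalisation step of the pre-processing module can be sketched as follows; this is a minimal illustration using numpy, assuming skull stripping has already zeroed out non-brain voxels (the z-score scheme is a common choice, not necessarily the exact one claimed):

```python
import numpy as np

def normalise_intensity(slice_2d: np.ndarray) -> np.ndarray:
    """Z-score intensity normalisation computed over non-zero (brain)
    voxels only, so background left by skull stripping stays at zero."""
    brain = slice_2d[slice_2d > 0]
    mu, sigma = brain.mean(), brain.std() + 1e-8
    out = slice_2d.copy().astype(np.float64)
    out[slice_2d > 0] = (brain - mu) / sigma
    return out

# Toy 4x4 "slice": zeros represent stripped skull/background.
mri = np.array([[0, 0, 10, 12],
                [0, 8, 11, 13],
                [0, 9, 10, 14],
                [0, 0, 0, 0]], dtype=float)
norm = normalise_intensity(mri)
```

After normalisation the brain voxels have zero mean and unit variance, which stabilises training across scanners with different intensity ranges.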
Data augmentation techniques such as rotation, scaling and contrast adjustment are applied to increase the robustness of the model to variations in patient orientation and scanner parameters.
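A minimal augmentation sketch in numpy, illustrating the rotation and contrast-adjustment transforms mentioned above (90-degree rotations and a simple gain factor are simplifications; a real pipeline would use arbitrary-angle rotation and scaling as well):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Randomly rotate the slice (90-degree steps, for simplicity) and
    jitter contrast, mimicking variation in patient orientation and
    scanner parameters."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    gain = rng.uniform(0.8, 1.2)            # contrast adjustment
    return np.clip(img * gain, 0.0, 1.0)

img = rng.random((8, 8))                    # toy normalised slice
batch = [augment(img) for _ in range(4)]    # four augmented variants
```

Each variant keeps the same spatial extent and a valid [0, 1] intensity range, so augmented samples can be fed to the network unchanged.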
A convolutional neural network backbone extracts hierarchical features from the MRI images. These features capture both low-level texture patterns and high-level anatomical structures relevant to tumour identification.
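The idea of hierarchical feature extraction can be illustrated with two stacked convolution layers in plain numpy (fixed hand-picked kernels stand in for learned CNN filters; this is a didactic sketch, not the claimed backbone):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D cross-correlation via sliding windows."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

relu = lambda x: np.maximum(x, 0.0)

# Layer 1: edge detector (low-level texture); layer 2: smoothing
# (a higher-level, spatially pooled response).
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
smooth = np.ones((3, 3)) / 9.0

img = np.random.default_rng(1).random((16, 16))
f1 = relu(conv2d(img, edge))     # 14x14 low-level feature map
f2 = relu(conv2d(f1, smooth))    # 12x12 higher-level feature map
```

Successive layers shrink the spatial extent while aggregating context, which is how a trained backbone moves from texture to anatomy-scale features.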
An attention-guided segmentation module focuses the network on salient tumour regions, improving boundary delineation and reducing false positives.
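One common form of attention gating can be sketched as follows, assuming numpy and a simple additive gate (the specific gating architecture claimed may differ):

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features: np.ndarray, gating: np.ndarray) -> np.ndarray:
    """Additive attention gate: the gating signal yields per-pixel
    weights in (0, 1) that re-scale the feature map, suppressing
    background responses and emphasising salient (tumour-like) regions."""
    alpha = sigmoid(features + gating)   # attention coefficients
    return features * alpha

rng = np.random.default_rng(2)
feat = rng.normal(size=(8, 8))   # toy feature map
gate = rng.normal(size=(8, 8))   # toy gating signal from a coarser layer
attended = attention_gate(feat, gate)
```

Because every coefficient lies strictly between 0 and 1, the gate can only attenuate features, which is what drives the reduction in false positives.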
Transfer learning from large public MRI datasets accelerates training and enhances performance on limited clinical data.
The framework supports multi-modal input, enabling fusion of information from different MRI sequences for more accurate segmentation and classification.
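Early fusion of the MRI sequences can be sketched as simple channel stacking in numpy (one of several possible fusion strategies; the claim does not fix a particular one):

```python
import numpy as np

def fuse_modalities(t1: np.ndarray, t2: np.ndarray,
                    flair: np.ndarray) -> np.ndarray:
    """Early fusion: stack co-registered T1, T2 and FLAIR slices along
    a channel axis so the network sees all sequences jointly."""
    for m in (t2, flair):
        assert m.shape == t1.shape, "modalities must be co-registered"
    return np.stack([t1, t2, flair], axis=0)   # (C, H, W)

rng = np.random.default_rng(3)
t1, t2, flair = (rng.random((32, 32)) for _ in range(3))
volume = fuse_modalities(t1, t2, flair)        # shape (3, 32, 32)
```

The fused tensor has one channel per sequence, letting the first convolutional layer learn cross-modality combinations directly.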
A classification head analyses extracted features to assign a tumour stage or grade based on shape, size and tissue characteristics.
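The kind of shape and size descriptors such a head would consume can be sketched from a binary mask in numpy; the fixed area cut-off below is purely illustrative, since real grading would come from the trained classifier, not hand-set thresholds:

```python
import numpy as np

def mask_features(mask: np.ndarray, voxel_mm2: float = 1.0) -> dict:
    """Simple shape/size descriptors from a binary tumour mask:
    area (in mm^2) and a compactness score (1.0 for a perfect disc)."""
    area = mask.sum() * voxel_mm2
    # Interior pixels have all four 4-connected neighbours set;
    # boundary pixels approximate the perimeter.
    padded = np.pad(mask, 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] &
                padded[2:, 1:-1] & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    compactness = 4 * np.pi * area / (perimeter ** 2 + 1e-8)
    return {"area_mm2": float(area), "compactness": float(compactness)}

def stage_from_features(feats: dict) -> str:
    """Illustrative thresholding only (hypothetical 50 mm^2 cut-off)."""
    return "advanced" if feats["area_mm2"] > 50 else "early"

mask = np.zeros((16, 16), dtype=bool)
mask[4:10, 4:10] = True            # toy 36-pixel square "tumour"
feats = mask_features(mask)
stage = stage_from_features(feats)
```

In the claimed system these descriptors would be combined with learned tissue-texture features before the stage or grade is assigned.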
The output of the model includes a binary mask of the segmented tumour region overlaid on the original MRI scan and a textual report summarising tumour location, size and predicted stage.
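The overlay-and-report step can be sketched in numpy as follows (a minimal illustration; the real interface would render DICOM-aware graphics and a fuller report):

```python
import numpy as np

def overlay_and_report(scan: np.ndarray, mask: np.ndarray):
    """Overlay a binary tumour mask on a greyscale slice as an RGB
    image (tumour tinted red) and emit a one-line textual summary."""
    rgb = np.repeat(scan[..., None], 3, axis=-1)   # grey -> RGB
    rgb[mask] = [1.0, 0.0, 0.0]                    # tumour region in red
    ys, xs = np.nonzero(mask)
    report = (f"tumour pixels: {mask.sum()}, "
              f"centroid: ({ys.mean():.1f}, {xs.mean():.1f})")
    return rgb, report

scan = np.random.default_rng(4).random((16, 16))   # toy normalised slice
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
img, report = overlay_and_report(scan, mask)
```

The overlay preserves the original anatomy outside the mask, so the clinician sees the predicted boundary in its anatomical context alongside the summary text.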
A graphical user interface presents results to the clinician in an interpretable format within seconds of upload.
Security and privacy measures ensure that patient data are encrypted and handled in compliance with medical data standards.
The architecture is scalable, allowing deployment on hospital servers, cloud infrastructure or edge devices in remote clinics.
Automated monitoring and periodic re-training with new annotated data maintain high accuracy as tumour imaging patterns evolve.
The system reduces diagnostic time from minutes or hours to seconds and standardises output across different operators and institutions.
Applications include primary diagnosis, treatment planning, longitudinal monitoring of tumour progression, and integration into radiology workflow systems.
By combining advanced deep learning with a user-friendly interface, the invention brings high-end neuro-oncology analysis to settings without specialised radiologists.
BEST METHOD OF WORKING
The preferred embodiment deploys the deep learning framework on a secure server accessible via a web-based interface. Multi-modal MRI scans uploaded by clinicians are pre-processed automatically and fed into the trained convolutional-attention segmentation network. The model outputs a segmented tumour mask and stage classification within seconds. Periodic re-training with new annotated data ensures continued accuracy. This configuration achieves high precision and speed without manual intervention, making it suitable for integration into clinical workflows.
Claims:
1. A system for automated detection and segmentation of brain tumours comprising:
an input module configured to receive multi-modal magnetic resonance imaging scans;
a pre-processing module configured to normalise, align and denoise the scans and perform data augmentation;
a convolutional neural network backbone configured to extract hierarchical image features;
an attention-guided segmentation module configured to delineate tumour boundaries from the extracted features;
a classification module configured to assign a tumour stage based on shape, size and tissue characteristics; and
an output interface configured to present segmented tumour regions and stage classification to a user.
2. The system as claimed in claim 1, wherein the pre-processing module performs skull stripping, intensity normalisation and alignment to standardise input data.
3. The system as claimed in claim 1, wherein the convolutional neural network processes multi-modal MRI inputs to improve segmentation accuracy.
4. The system as claimed in claim 1, wherein the attention-guided segmentation module highlights salient tumour regions to refine boundary delineation.
5. The system as claimed in claim 1, wherein the classification module predicts tumour stage or grade from extracted features.
6. A method for automated detection and segmentation of brain tumours comprising:
receiving multi-modal magnetic resonance imaging scans;
pre-processing the scans to normalise and enhance image quality;
extracting hierarchical features using a convolutional neural network;
applying an attention-guided segmentation module to delineate tumour boundaries;
classifying the tumour stage based on extracted features; and
outputting a segmented tumour mask and stage classification to a user interface.
7. The method as claimed in claim 6, wherein the pre-processing includes skull stripping, intensity normalisation, alignment and data augmentation.
8. The method as claimed in claim 6, wherein multi-modal MRI inputs are fused to improve detection and segmentation accuracy.
9. The method as claimed in claim 6, wherein the attention-guided segmentation module refines boundary detection by focusing on salient image regions.
10. The method as claimed in claim 6, wherein the output comprises an overlaid tumour mask on the original MRI scan and a textual report summarising tumour characteristics.

Documents

Application Documents

# Name Date
1 202541090644-STATEMENT OF UNDERTAKING (FORM 3) [23-09-2025(online)].pdf 2025-09-23
2 202541090644-REQUEST FOR EARLY PUBLICATION(FORM-9) [23-09-2025(online)].pdf 2025-09-23
3 202541090644-POWER OF AUTHORITY [23-09-2025(online)].pdf 2025-09-23
4 202541090644-FORM-9 [23-09-2025(online)].pdf 2025-09-23
5 202541090644-FORM FOR SMALL ENTITY(FORM-28) [23-09-2025(online)].pdf 2025-09-23
6 202541090644-FORM 1 [23-09-2025(online)].pdf 2025-09-23
7 202541090644-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [23-09-2025(online)].pdf 2025-09-23
8 202541090644-EVIDENCE FOR REGISTRATION UNDER SSI [23-09-2025(online)].pdf 2025-09-23
9 202541090644-EDUCATIONAL INSTITUTION(S) [23-09-2025(online)].pdf 2025-09-23
10 202541090644-DRAWINGS [23-09-2025(online)].pdf 2025-09-23
11 202541090644-DECLARATION OF INVENTORSHIP (FORM 5) [23-09-2025(online)].pdf 2025-09-23
12 202541090644-COMPLETE SPECIFICATION [23-09-2025(online)].pdf 2025-09-23