
A Hybrid Deep Learning System For Brain Tumor Classification

Abstract: Disclosed herein is a hybrid deep learning system for brain tumor classification (100) comprising a multi-scale CNN module (102) configured to receive a magnetic resonance imaging (MRI) input. The system also includes a DenseNet-169 backbone network (104) configured to perform hierarchical feature extraction through dense connectivity. The system also includes a plurality of gated channel units (GCUs) (106) configured to dynamically reweight feature maps across channels. The system also includes a feature fusion mechanism (108) configured to integrate multi-scale local features with attention-weighted global features for robust tumor representation. The system also includes a classification head (110) configured to classify the tumor into one of a plurality of tumor categories. The system also includes a computing unit (112) configured to execute the hybrid model to provide automated brain tumor classification with enhanced accuracy, robustness, and generalization across heterogeneous MRI datasets.


Patent Information

Application #:
Filing Date: 07 October 2025
Publication Number: 46/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. CHERUKU MURALIKRISHNA
RESEARCH SCHOLAR, DEPARTMENT OF COMPUTER SCIENCE & ARTIFICIAL INTELLIGENCE, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. SHANKER CHANDRE
ASSISTANT PROFESSOR, DEPARTMENT OF COMPUTER SCIENCE & ARTIFICIAL INTELLIGENCE, SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF DISCLOSURE
[0001] The present disclosure relates generally to the field of medical imaging and computer-aided diagnosis. More specifically, it pertains to a hybrid deep learning system for brain tumor classification.
BACKGROUND OF THE DISCLOSURE
[0002] Brain tumors represent one of the most critical health concerns in modern medicine, posing significant threats to human life due to their unpredictable growth, diverse histological subtypes, and high mortality rates. The early and accurate detection of brain tumors has consistently remained a priority for clinicians, radiologists, and researchers in the medical imaging community. Brain tumors can be broadly categorized into benign and malignant types, and among malignant tumors, gliomas, astrocytomas, meningiomas, and glioblastomas are some of the most studied. These tumors vary greatly in morphology, aggressiveness, and progression speed, making diagnosis and treatment planning highly challenging. Magnetic Resonance Imaging (MRI) is the most widely used non-invasive imaging modality for visualizing brain tumors, offering superior soft tissue contrast and multi-parametric imaging capabilities. Despite advances in MRI technology, interpreting images manually remains time-consuming, subjective, and prone to inter-observer variability, leading to the increasing reliance on computational intelligence for tumor detection and classification.
[0003] In the early years of medical image analysis, radiologists primarily relied on handcrafted feature extraction techniques. These included textural features, intensity-based features, and shape descriptors designed to capture the tumor’s appearance. Traditional machine learning algorithms such as support vector machines (SVM), k-nearest neighbors (KNN), random forests, and logistic regression were employed to classify tumor regions based on these extracted features. While these approaches showed promise, they suffered from limitations associated with manual feature engineering, where the effectiveness of the model depended heavily on the quality and relevance of the selected features. Furthermore, handcrafted features lacked the representational power to capture the multi-scale complexity inherent in brain tumor imaging data. As a result, misclassifications frequently occurred, particularly in cases involving tumors with irregular boundaries or overlapping intensity values with surrounding tissues.
[0004] The field of deep learning revolutionized computer vision by introducing algorithms capable of automatically learning hierarchical feature representations directly from raw data. Convolutional Neural Networks (CNNs), inspired by the visual processing mechanisms of the human brain, demonstrated remarkable success in tasks such as object recognition, natural image classification, and speech processing. The application of CNNs to medical imaging opened new pathways for accurate, robust, and scalable tumor classification. Unlike traditional approaches, CNNs eliminated the need for handcrafted features by automatically learning both low-level and high-level image patterns through stacked convolutional and pooling layers. This ability significantly improved the reliability of automated tumor analysis, leading to a surge in research focused on CNN-based models for MRI brain tumor classification.
[0005] Over time, numerous CNN architectures were adapted for medical image analysis. Basic CNN models initially showed considerable improvements compared to classical machine learning methods. However, as the complexity of tumor classification tasks increased, researchers explored more advanced architectures. Each of these architectures contributed unique improvements. VGGNet introduced deeper networks with smaller convolutional filters, ResNet employed residual connections to address the vanishing gradient problem, InceptionNet enabled multi-scale feature extraction using parallel convolutional filters of varying sizes, and DenseNet introduced dense connectivity patterns that encouraged feature reuse and improved gradient flow. These architectures played pivotal roles in improving classification accuracy and reducing overfitting in limited medical datasets.
[0006] One of the major challenges in brain tumor classification is the availability of annotated datasets. Unlike natural image datasets such as ImageNet, which consist of millions of labeled images, medical datasets are typically smaller due to patient privacy concerns, high annotation costs, and ethical restrictions. The scarcity of data often leads to overfitting when training deep neural networks, thereby limiting their generalization capability. To mitigate these issues, researchers adopted strategies such as transfer learning, data augmentation, and synthetic data generation using generative adversarial networks (GANs). Transfer learning, in particular, became a widely used approach where models pre-trained on large-scale natural image datasets were fine-tuned on smaller medical datasets. This technique leveraged the general image recognition knowledge of pre-trained networks and adapted it to the specific domain of tumor classification, producing improved results even with limited data.
[0007] Another critical aspect in brain tumor classification research is the recognition of multi-scale information. Tumors often exhibit variations in size, shape, and texture across different patients and tumor types. Small tumors may present subtle differences compared to surrounding tissues, while larger tumors may involve heterogeneous regions with necrosis, edema, or irregular margins. Capturing these variations requires models capable of analyzing information at multiple scales simultaneously. Multi-scale feature extraction became a central theme in CNN-based approaches, where convolutional filters of different sizes were used to capture both fine-grained local details and broader contextual patterns. Incorporating multi-scale information improved the robustness of tumor classification systems, particularly in distinguishing between similar-looking tumor subtypes.
[0008] Alongside multi-scale analysis, gated mechanisms gained attention in deep learning research for their ability to selectively filter and prioritize important information. Inspired by recurrent neural networks and attention mechanisms, gating units allow models to emphasize relevant features while suppressing irrelevant or redundant information. In the context of brain tumor classification, gated models improved interpretability and performance by focusing on tumor regions rather than being distracted by background or healthy brain tissue. DenseNet architectures, with their dense connectivity patterns, proved highly effective in medical imaging tasks because they encouraged feature reuse, reduced parameter requirements, and maintained stronger gradient flow in deep networks. Enhancing DenseNet with gating mechanisms further boosted its ability to capture relevant tumor-specific features while minimizing noise.
[0009] In recent years, hybrid deep learning models have emerged as a promising solution to overcome the limitations of single architectures. Hybridization involves combining multiple architectures or techniques to leverage their complementary strengths. For brain tumor classification, hybrid models often integrate CNN-based feature extraction with advanced architectures such as DenseNet, ResNet, or InceptionNet to improve accuracy and robustness. Additionally, ensemble learning methods, which combine predictions from multiple models, have also been applied to further enhance classification performance. These hybrid strategies reflect the growing consensus that no single architecture can fully capture the complexity of brain tumor imaging data, and combining multiple methods is often more effective.
[0010] Beyond model design, explainability and interpretability have become pressing concerns in the medical community. While deep learning models can achieve high accuracy, their black-box nature raises concerns about trust and accountability in clinical practice. To address this, researchers introduced visualization techniques such as class activation maps (CAM) and gradient-weighted CAM (Grad-CAM), which highlight regions of the image that influence the model’s classification decision. These visualization tools provide radiologists with valuable insights into the decision-making process, thereby increasing confidence in automated tumor classification systems. Ensuring explainability is crucial for clinical adoption, as physicians require not only accurate predictions but also transparent reasoning to support diagnosis and treatment planning.
[0011] Parallel to algorithmic developments, advances in hardware and computational resources have fueled progress in deep learning for medical imaging. The availability of high-performance GPUs, TPUs, and cloud-based platforms has made it feasible to train complex models on large datasets. Moreover, open-source deep learning frameworks such as TensorFlow, PyTorch, and Keras have democratized access to powerful modeling tools, enabling rapid prototyping and experimentation. These resources have accelerated the pace of innovation, allowing researchers worldwide to contribute to the development of improved tumor classification models.
[0012] In addition to classification, brain tumor research encompasses related tasks such as segmentation, detection, and grading. Tumor segmentation involves delineating the exact boundaries of the tumor region, which is essential for surgical planning and treatment monitoring. Detection focuses on identifying the presence of tumors, while grading seeks to assess their severity or malignancy level. Although these tasks differ in scope, they are interconnected, and advances in classification often benefit other areas. For instance, multi-scale CNN architectures developed for classification have been adapted for segmentation, and DenseNet-based models have been applied to both detection and grading. This cross-pollination of methods highlights the interconnected nature of medical imaging research.
[0013] Despite these advancements, challenges remain in developing universally reliable brain tumor classification systems. Variability in imaging protocols across hospitals, scanner manufacturers, and acquisition parameters introduces inconsistencies in MRI datasets. These domain shifts can degrade model performance when applied to data from different sources. Efforts to address these issues include domain adaptation techniques, federated learning frameworks, and harmonization methods to standardize imaging data. Furthermore, clinical adoption requires models that are not only accurate but also efficient, interpretable, and robust to noise, artifacts, and real-world variability.
[0014] Ethical and regulatory considerations also play a significant role in shaping the development of deep learning systems for brain tumor classification. Patient privacy, data security, and informed consent are paramount concerns in medical research. The deployment of AI-driven systems in healthcare settings must comply with strict regulations, including guidelines from agencies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA). Additionally, issues of bias and fairness in AI models must be addressed to ensure equitable healthcare outcomes across diverse populations. Researchers are increasingly emphasizing the importance of ethical AI, advocating for transparent, fair, and responsible deployment of deep learning systems in medicine.
[0015] The intersection of deep learning and brain tumor classification continues to evolve, driven by advances in algorithms, computational resources, and collaborative research efforts. International challenges and benchmark competitions, such as the Brain Tumor Segmentation (BraTS) challenge, have played a pivotal role in promoting innovation by providing standardized datasets and evaluation metrics. These initiatives foster collaboration among researchers, enabling comparative studies and accelerating the development of state-of-the-art models. The outcomes of such competitions consistently highlight the potential of hybrid and multi-scale approaches, which continue to push the boundaries of what is possible in automated tumor classification.
[0016] Thus, in light of the above-stated discussion, there exists a need for a hybrid deep learning system for brain tumor classification.
SUMMARY OF THE DISCLOSURE
[0017] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0018] According to illustrative embodiments, the present disclosure focuses on a hybrid deep learning system for brain tumor classification which overcomes the above-mentioned disadvantages or at least provides users with a useful or commercial choice.
[0019] An objective of the present disclosure is to enhance tumor feature extraction across multiple spatial scales, ensuring that both fine-grained local structures and global contextual patterns in MRI scans are effectively captured for improved diagnostic accuracy.
[0020] Another objective of the present disclosure is to design a hybrid deep learning framework that integrates multi-scale convolutional neural networks (CNNs) and gated DenseNet-169 for accurate and efficient classification of brain tumors into categories such as glioma, meningioma, and pituitary tumor.
[0021] Another objective of the present disclosure is to incorporate gating mechanisms within DenseNet-169 for prioritizing highly informative feature channels, thereby reducing the impact of irrelevant or redundant features and improving classification reliability.
[0022] Another objective of the present disclosure is to minimize diagnostic errors arising from intra-observer variability by providing a consistent and automated classification system that reduces dependency on manual interpretation by radiologists.
[0023] Another objective of the present disclosure is to address overfitting issues associated with small and imbalanced MRI datasets by introducing feature-level attention and regularization strategies, ensuring better generalization to diverse patient populations.
[0024] Another objective of the present disclosure is to improve robustness of tumor classification models across varying tumor shapes, sizes, textures, and growth patterns, thereby enhancing the system’s adaptability to real-world clinical scenarios.
[0025] Another objective of the present disclosure is to optimize computational efficiency of the hybrid model by leveraging DenseNet’s feature reuse and multi-scale CNN’s hierarchical learning, reducing redundancy and ensuring faster training and inference.
[0026] Another objective of the present disclosure is to validate the clinical relevance of the proposed system through performance evaluation on benchmark brain tumor MRI datasets, ensuring that the model achieves higher sensitivity, specificity, and accuracy than conventional CNN-based methods.
[0027] Another objective of the present disclosure is to enable explainable AI in medical imaging by incorporating visualization techniques such as heatmaps or class activation maps, thereby assisting radiologists in understanding the system’s decision-making process.
[0028] Yet another objective of the present disclosure is to contribute toward precision medicine by developing an intelligent, generalizable, and scalable brain tumor classification framework that can support personalized prognosis and treatment planning in neurosurgical and oncological care.
[0029] In light of the above, a hybrid deep learning system for brain tumor classification comprises a multi-scale CNN module configured to receive a magnetic resonance imaging (MRI) input and process the input using a plurality of parallel convolutional filters of different kernel sizes. The system also includes a DenseNet-169 backbone network configured to perform hierarchical feature extraction through dense connectivity for feature reuse and mitigation of vanishing gradients. The system also includes a plurality of gated channel units (GCUs) configured to dynamically reweight feature maps across channels by selectively enhancing informative channels and suppressing non-relevant channels. The system also includes a feature fusion mechanism configured to integrate multi-scale local features with attention-weighted global features for robust tumor representation across varying morphologies, sizes, and textures. The system also includes a classification head configured to classify the tumor into one of a plurality of tumor categories including glioma, meningioma, and pituitary tumor. The system also includes a computing unit configured to execute the hybrid model to provide automated brain tumor classification with enhanced accuracy, robustness, and generalization across heterogeneous MRI datasets.
[0030] In one embodiment, the multi-scale CNN module comprises parallel convolutional filters for capturing fine, medium, and coarse-grained tumor features.
[0031] In one embodiment, the gated channel units (GCUs) are strategically placed between dense blocks of DenseNet-169 to dynamically reweight intermediate feature maps during the forward pass.
[0032] In one embodiment, the DenseNet-169 backbone is pre-trained on a large-scale medical imaging dataset to leverage transfer learning for improved feature extraction and generalization.
[0033] In one embodiment, the feature fusion mechanism concatenates multi-scale CNN features with attention-enhanced DenseNet features before passing them to the classification head.
[0034] In one embodiment, the classification head includes dropout regularization to reduce overfitting on small or imbalanced MRI datasets.
[0035] In one embodiment, the computing unit generates visualization outputs, such as heatmaps or class activation maps, for interpretability of tumor classification results.
[0036] In one embodiment, the system is configured to classify tumors with varying sizes, shapes, textures, and intensities to improve robustness across heterogeneous patient MRI data.
[0037] In one embodiment, the system achieves computational efficiency by utilizing DenseNet’s feature reuse and the lightweight design of GCUs to maintain low inference time.
[0038] In one embodiment, the system is adapted to integrate with clinical decision support systems to provide automated, reliable, and reproducible brain tumor diagnoses.
[0039] These and other advantages will be apparent from the present application of the embodiments described herein.
[0040] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0041] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0042] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0043] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawings, in which:
[0044] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a hybrid deep learning system for brain tumor classification, in accordance with an exemplary embodiment of the present disclosure;
[0045] FIG. 2 illustrates a flowchart showing the working of a hybrid deep learning system for brain tumor classification, in accordance with an exemplary embodiment of the present disclosure.
[0046] Like reference numerals refer to like parts throughout the description of the several views of the drawings.
[0047] In the hybrid deep learning system for brain tumor classification, like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0048] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0049] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0050] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0051] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0052] The terms “having”, “comprising”, “including”, and variations thereof signify the presence of a component.
[0053] Reference is now made to FIG. 1 and FIG. 2 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a flowchart outlining the sequential steps involved in a hybrid deep learning system for brain tumor classification, in accordance with an exemplary embodiment of the present disclosure.
[0054] A hybrid deep learning system for brain tumor classification 100 comprises a multi-scale CNN module 102 configured to receive a magnetic resonance imaging (MRI) input and process the input using a plurality of parallel convolutional filters of different kernel sizes. The multi-scale CNN module 102 comprises parallel convolutional filters for capturing fine, medium, and coarse-grained tumor features.
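The multi-scale module described above can be sketched as follows. The specification only requires a plurality of parallel convolutional filters of different kernel sizes; the particular choices below (single-channel input, kernel sizes 3, 5, and 7 for fine, medium, and coarse features, 32 filters per branch) are illustrative assumptions, not the claimed design.

```python
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    """Parallel convolution branches with different kernel sizes.

    Kernel sizes 3/5/7 and 32 filters per branch are illustrative
    assumptions; padding k // 2 keeps the spatial size unchanged so
    the branch outputs can be concatenated channel-wise.
    """
    def __init__(self, in_channels=1, filters_per_branch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, filters_per_branch, k, padding=k // 2),
                nn.BatchNorm2d(filters_per_branch),
                nn.ReLU(inplace=True),
            )
            for k in (3, 5, 7)  # fine, medium, coarse receptive fields
        ])

    def forward(self, x):
        # Concatenate branch outputs along the channel axis
        return torch.cat([b(x) for b in self.branches], dim=1)

# A single-channel 224x224 MRI slice yields 3 x 32 = 96 fused channels
x = torch.randn(1, 1, 224, 224)
out = MultiScaleCNN()(x)
print(out.shape)  # torch.Size([1, 96, 224, 224])
```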
[0055] The system also includes a DenseNet-169 backbone network 104 configured to perform hierarchical feature extraction through dense connectivity for feature reuse and mitigation of vanishing gradients. The DenseNet-169 backbone 104 is pre-trained on a large-scale medical imaging dataset to leverage transfer learning for improved feature extraction and generalization.
[0056] The system also includes a plurality of gated channel units (GCUs) 106 configured to dynamically reweight feature maps across channels by selectively enhancing informative channels and suppressing non-relevant channels. The gated channel units (GCUs) 106 are strategically placed between dense blocks of DenseNet-169 to dynamically reweight intermediate feature maps during the forward pass.
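The specification does not define the internals of a gated channel unit; one plausible realization, shown below as an assumption, follows the squeeze-and-excitation pattern: a global pooling statistic per channel, a small bottleneck, and a sigmoid gate that rescales each channel in (0, 1).

```python
import torch
import torch.nn as nn

class GatedChannelUnit(nn.Module):
    """Squeeze-and-excitation-style channel gate (one possible GCU design).

    Global average pooling summarizes each channel, a bottleneck MLP
    (implemented with 1x1 convolutions) produces one sigmoid weight per
    channel, and the input is rescaled channel-wise. The reduction
    ratio of 16 is an illustrative assumption.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze
            nn.Conv2d(channels, channels // reduction, 1), # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # weight in (0, 1)
        )

    def forward(self, x):
        # Enhance informative channels, suppress non-relevant ones
        return x * self.gate(x)

x = torch.randn(2, 64, 28, 28)
y = GatedChannelUnit(64)(x)
print(y.shape)  # torch.Size([2, 64, 28, 28]) — shape preserved, channels rescaled
```

Because the unit only rescales channels, it can be dropped between any two dense blocks without changing tensor shapes, which is what allows the placement described in paragraph [0056].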
[0057] The system also includes a feature fusion mechanism 108 configured to integrate multi-scale local features with attention-weighted global features for robust tumor representation across varying morphologies, sizes, and textures. The feature fusion mechanism 108 concatenates multi-scale CNN features with attention-enhanced DenseNet features before inputting to the classification head.
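The fusion step can be sketched as a concatenation of pooled feature vectors. Global average pooling is an assumption introduced here so that tensors of different spatial sizes can be joined; the specification only requires that multi-scale CNN features and attention-enhanced DenseNet features be concatenated before the classification head.

```python
import torch
import torch.nn as nn

# Collapse each feature map to a per-channel scalar so that the two
# streams, which have different spatial resolutions, can be concatenated.
pool = nn.AdaptiveAvgPool2d(1)

local_feats = torch.randn(1, 96, 224, 224)   # from the multi-scale CNN module
global_feats = torch.randn(1, 1664, 7, 7)    # from the gated DenseNet-169

fused = torch.cat([pool(local_feats).flatten(1),
                   pool(global_feats).flatten(1)], dim=1)
print(fused.shape)  # torch.Size([1, 1760])
```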
[0058] The system also includes a classification head 110 configured to classify the tumor into one of a plurality of tumor categories including glioma, meningioma, and pituitary tumor. The classification head 110 includes dropout regularization to reduce overfitting on small or imbalanced MRI datasets.
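A minimal classification head consistent with this paragraph is sketched below; the 1760-dimensional input, 256-unit hidden layer, and 0.5 dropout rate are illustrative assumptions, while the three output classes and dropout regularization come from the specification.

```python
import torch
import torch.nn as nn

# Classification head over the fused feature vector.
head = nn.Sequential(
    nn.Linear(1760, 256),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),    # regularization against small/imbalanced datasets
    nn.Linear(256, 3),    # glioma, meningioma, pituitary tumor
)

logits = head(torch.randn(4, 1760))
probs = logits.softmax(dim=1)  # per-class probabilities
print(probs.shape)  # torch.Size([4, 3]); each row sums to 1
```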
[0059] The system also includes a computing unit 112 configured to execute the hybrid model to provide automated brain tumor classification with enhanced accuracy, robustness, and generalization across heterogeneous MRI datasets. The computing unit 112 generates visualization outputs, such as heatmaps or class activation maps, for interpretability of tumor classification results.
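One way the computing unit could produce the heatmaps mentioned above is a Grad-CAM-style map: feature maps from a late convolutional layer are weighted by the spatially averaged gradient of the target class score. The toy model below is a placeholder standing in for the hybrid network, used only to show the mechanics.

```python
import torch
import torch.nn as nn

# Toy stand-in for the hybrid network: one conv layer and a linear head.
conv = nn.Conv2d(1, 8, 3, padding=1)
head = nn.Linear(8, 3)

x = torch.randn(1, 1, 32, 32)
feats = conv(x)
feats.retain_grad()                           # keep gradients of a non-leaf tensor
score = head(feats.mean(dim=(2, 3)))[0, 1]    # score of the target class (index 1)
score.backward()

weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance
cam = torch.relu((weights * feats).sum(dim=1))       # class activation heatmap
print(cam.shape)  # torch.Size([1, 32, 32])
```

In practice the map would be upsampled to the MRI resolution and overlaid on the scan so a radiologist can see which region drove the classification.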
[0060] The system is configured to classify tumors with varying sizes, shapes, textures, and intensities to improve robustness across heterogeneous patient MRI data. The system achieves computational efficiency by utilizing DenseNet’s feature reuse and the lightweight design of GCUs to maintain low inference time. The system is adapted to integrate with clinical decision support systems to provide automated, reliable, and reproducible brain tumor diagnoses.
[0061] FIG. 1 illustrates a flowchart outlining the sequential steps involved in a hybrid deep learning system for brain tumor classification.
[0062] At 102, the MRI input is directed to the multi-scale convolutional neural network (CNN) module. This module is designed to process the input using a series of parallel convolutional filters with varying kernel sizes, enabling the extraction of features at different scales. The use of multiple kernel sizes allows the system to capture both fine-grained local patterns and broader contextual structures within the MRI scans, which is crucial for accurately identifying tumor characteristics that vary widely in shape, size, and texture.
[0063] At 104, the extracted features are passed into the DenseNet-169 backbone network. DenseNet is particularly advantageous due to its dense connectivity pattern, where each layer receives inputs from all preceding layers. This design encourages extensive feature reuse, enhances the flow of gradients throughout the network, and effectively mitigates the issue of vanishing gradients. By leveraging this architecture, the system is able to perform hierarchical feature extraction, learning progressively abstract and discriminative representations of the tumor regions. This robust backbone ensures that both low-level and high-level information is preserved and utilized, providing a strong foundation for subsequent processing.
[0064] At 106, once the hierarchical features are obtained, they are refined by a plurality of gated channel units (GCUs). The GCUs act as adaptive filters that dynamically reweight the feature maps across channels. Instead of treating all channels equally, they assign higher importance to informative channels while suppressing those that carry redundant or less relevant information. This selective enhancement ensures that the model focuses on the most diagnostically significant cues, thereby improving sensitivity to subtle tumor-related features. The integration of GCUs helps in reducing noise and improving the discriminative capacity of the system, especially in challenging cases with complex tumor morphologies.
[0065] At 108, the refined features are then processed through a feature fusion mechanism. At this stage, multi-scale local features obtained from the CNN module are integrated with the attention-weighted global features refined by the GCUs. The fusion mechanism ensures that both localized structural details and global contextual information are jointly considered when forming the tumor representation. Such integration is essential for achieving robustness across heterogeneous tumor presentations, since tumors may differ not only in their fine details but also in their broader spatial patterns. The resulting fused representation provides a comprehensive characterization of the tumor, enabling the system to handle a wide spectrum of morphological, textural, and size variations in brain tumors.
[0066] At 110, the fused representation is forwarded to the classification head, which is designed to categorize the tumor into one of several classes, namely glioma, meningioma, or pituitary tumor. This classification process relies on the rich, multi-level, and selectively enhanced features accumulated throughout the system. The classification head produces an automated diagnostic output, which can serve as a reliable decision-support tool for radiologists and clinicians.
[0067] At 112, the overall workflow is executed by a computing unit, which ensures efficient model computation and deployment across diverse MRI datasets. The hybrid design, combining multi-scale feature extraction, dense connectivity, adaptive channel attention, and feature fusion, enables the system to deliver enhanced accuracy, robustness, and generalization, making it a valuable asset in clinical settings for brain tumor diagnosis.
[0068] FIG. 2 illustrates a flowchart showing the working of a hybrid deep learning system for brain tumor classification.
[0069] The process begins with the input of MRI scans, which serve as the raw data for analysis. These medical images often contain complex structural information of the brain, including the presence and morphology of tumors. Since tumors can differ significantly in size, texture, and shape, it is critical to extract features that capture these variations comprehensively.
[0070] The next stage in the flowchart is the multi-scale CNN feature extractor, which applies multiple convolutional filters of varying kernel sizes to the MRI scans. This step allows the system to capture both local fine-grained details and larger contextual patterns. For instance, small kernels may focus on edges, textures, and small tumor regions, while larger kernels capture broader structural patterns across the brain. By applying these filters in parallel, the system builds a rich representation of the input, ensuring that no diagnostically important detail is overlooked.
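The parallel application of different kernel sizes can be illustrated with a naive "same"-padded 2-D convolution on a toy image. This is a minimal sketch, assuming simple averaging kernels; real filters would be learned, and the 6x6 grid merely stands in for an MRI slice.

```python
def conv2d_same(img, kernel):
    """Naive 'same'-padded 2-D convolution (cross-correlation) on lists."""
    n, k = len(img), len(kernel)
    pad = k // 2
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for di in range(k):
                for dj in range(k):
                    r, c = i + di - pad, j + dj - pad
                    if 0 <= r < n and 0 <= c < n:      # zero padding
                        acc += img[r][c] * kernel[di][dj]
            out[i][j] = acc
    return out

img = [[float(6 * i + j) for j in range(6)] for i in range(6)]  # toy 6x6 slice
fine = conv2d_same(img, [[1 / 9] * 3 for _ in range(3)])        # 3x3: local texture
coarse = conv2d_same(img, [[1 / 25] * 5 for _ in range(5)])     # 5x5: broader context
```

Because both branches use "same" padding, their output maps have identical spatial size, which is what allows them to be combined in the next stage.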
[0071] Once the multi-scale features are extracted, they are concatenated to form a unified feature representation. This concatenation step integrates the information obtained from different kernel scales, creating a feature space that incorporates diverse levels of detail. The combined representation improves the system’s ability to handle tumors with heterogeneous appearances and ensures that both local and global features contribute to the classification process.
[0072] After feature concatenation, the data is passed to a DenseNet-169 backbone network enhanced with gated channel units (GCUs). DenseNet is particularly effective because of its dense connectivity mechanism, in which each layer receives the feature maps of all preceding layers as input. This connectivity facilitates efficient gradient flow, reduces the risk of vanishing gradients, and promotes feature reuse, which makes the network both deeper and more efficient. The inclusion of GCUs further strengthens this stage by dynamically reweighting the channels of the feature maps. Instead of assigning equal importance to all channels, GCUs highlight the most informative ones while suppressing redundant or irrelevant features. This selective enhancement sharpens the model’s focus on tumor-specific patterns, making it more robust against noise and irrelevant variations in MRI data.
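The dense connectivity pattern can be sketched in miniature: each layer consumes the concatenation of the block input and every earlier layer's output. This toy one-unit-per-layer version only illustrates the feature-reuse wiring, not DenseNet-169 itself.

```python
def dense_block(x, num_layers):
    """Toy dense connectivity: each layer consumes the concatenation of
    the block input and every earlier layer's output (1-D stand-in)."""
    features = [x]                                   # list of feature vectors
    for _ in range(num_layers):
        inp = [v for f in features for v in f]       # concatenate: feature reuse
        new = [max(sum(inp) / len(inp), 0.0)]        # one-unit "layer" with ReLU
        features.append(new)
    return [v for f in features for v in f]          # block output: all features

out = dense_block([1.0, 1.0, 1.0, 1.0], num_layers=3)
```

Each added layer sees everything computed before it, which is why gradients flow easily and earlier features never have to be relearned.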
[0073] The features extracted and refined by the DenseNet-GCU mechanism are then processed through global average pooling (GAP). GAP compresses the feature maps into a more compact representation by averaging each channel across its spatial dimensions. This reduces the complexity of the feature representation while retaining the most salient global information. The use of GAP not only lowers the risk of overfitting but also ensures that the resulting feature vector is well-suited for classification.
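Global average pooling itself is a one-line reduction, shown here on hypothetical toy feature maps (two channels of 2x2 values standing in for the real C x H x W tensors).

```python
# Hypothetical refined feature maps: C channels of H x W values.
feature_maps = [
    [[1.0, 3.0], [5.0, 7.0]],    # channel 0
    [[2.0, 2.0], [2.0, 2.0]],    # channel 1
]

# Global average pooling: each channel collapses to its spatial mean,
# turning C x H x W maps into a compact length-C vector.
pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
          for ch in feature_maps]
```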
[0074] The pooled features are then directed to a fully connected layer, which acts as a decision-making module. At this stage, the network interprets the extracted features and transforms them into a format that aligns with the classification task. This dense layer aggregates the abstracted features into meaningful representations that can differentiate between different types of brain tumors.
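The fully connected layer reduces to a matrix-vector product, one row of weights per tumor class. The weight and bias values below are hypothetical placeholders; in practice they would be learned from labeled MRI data.

```python
# GAP output (length-C vector) and a hypothetical weight matrix with one
# row per tumor class; real weights would be learned during training.
pooled = [4.0, 2.0]
weights = [[0.5, 0.1],     # glioma row
           [0.2, 0.3],     # meningioma row
           [0.1, 0.1]]     # pituitary row
bias = [0.0, 0.0, 0.0]

# Dense layer: one score (logit) per class.
logits = [sum(w * p for w, p in zip(row, pooled)) + b
          for row, b in zip(weights, bias)]
```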
[0075] Finally, the output passes through a softmax classification layer, where the tumor is categorized into one of several clinically relevant classes such as glioma, meningioma, or pituitary tumor. The softmax function converts the outputs into probability distributions, providing not only the predicted class but also a confidence score for each possible category. This final stage ensures that the system delivers a clear and interpretable diagnostic output.
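The softmax step can be sketched directly; the logit values are hypothetical, but the function is the standard numerically stable formulation, and its output is the per-class probability distribution described above.

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["glioma", "meningioma", "pituitary"]
logits = [2.0, 0.5, -1.0]              # hypothetical classification-head scores
probs = softmax(logits)
predicted = classes[probs.index(max(probs))]
```

The probabilities sum to one, so the largest entry serves both as the predicted class and as a confidence score for the clinician.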
[0076] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0077] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0078] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0079] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0080] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. A hybrid deep learning system for brain tumor classification (100) comprising:
a multi-scale CNN module (102) configured to receive a magnetic resonance imaging (MRI) input and process the input using a plurality of parallel convolutional filters of different kernel sizes;
a DenseNet-169 backbone network (104) configured to perform hierarchical feature extraction through dense connectivity for feature reuse and mitigation of vanishing gradients;
a plurality of gated channel units (GCUs) (106) configured to dynamically reweight feature maps across channels by selectively enhancing informative channels and suppressing non-relevant channels;
a feature fusion mechanism (108) configured to integrate multi-scale local features with attention-weighted global features for robust tumor representation across varying morphologies, sizes, and textures;
a classification head (110) configured to classify the tumor into one of a plurality of tumor categories including glioma, meningioma, and pituitary tumor; and
a computing unit (112) configured to execute the hybrid model to provide automated brain tumor classification with enhanced accuracy, robustness, and generalization across heterogeneous MRI datasets.
2. The system (100) as claimed in claim 1, wherein the multi-scale CNN module (102) comprises parallel convolutional filters for capturing fine, medium, and coarse-grained tumor features.
3. The system (100) as claimed in claim 1, wherein the gated channel units (GCUs) (106) are strategically placed between dense blocks of DenseNet-169 to dynamically reweight intermediate feature maps during the forward pass.
4. The system (100) as claimed in claim 1, wherein the DenseNet-169 backbone (104) is pre-trained on a large-scale medical imaging dataset to leverage transfer learning for improved feature extraction and generalization.
5. The system (100) as claimed in claim 1, wherein the feature fusion mechanism (108) concatenates multi-scale CNN features with attention-enhanced DenseNet features before inputting to the classification head.
6. The system (100) as claimed in claim 1, wherein the classification head (110) includes dropout regularization to reduce overfitting on small or imbalanced MRI datasets.
7. The system (100) as claimed in claim 1, wherein the computing unit (112) generates visualization outputs, such as heatmaps or class activation maps, for interpretability of tumor classification results.
8. The system (100) as claimed in claim 1, wherein the system is configured to classify tumors with varying sizes, shapes, textures, and intensities to improve robustness across heterogeneous patient MRI data.
9. The system (100) as claimed in claim 1, wherein the system achieves computational efficiency by utilizing DenseNet’s feature reuse and the lightweight design of GCUs to maintain low inference time.
10. The system (100) as claimed in claim 1, wherein the system is adapted to integrate with clinical decision support systems to provide automated, reliable, and reproducible brain tumor diagnoses.

Application Documents

# Name Date
1 202541096538-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2025(online)].pdf 2025-10-07
2 202541096538-REQUEST FOR EARLY PUBLICATION(FORM-9) [07-10-2025(online)].pdf 2025-10-07
3 202541096538-POWER OF AUTHORITY [07-10-2025(online)].pdf 2025-10-07
4 202541096538-FORM-9 [07-10-2025(online)].pdf 2025-10-07
5 202541096538-FORM FOR SMALL ENTITY(FORM-28) [07-10-2025(online)].pdf 2025-10-07
6 202541096538-FORM 1 [07-10-2025(online)].pdf 2025-10-07
7 202541096538-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [07-10-2025(online)].pdf 2025-10-07
8 202541096538-DRAWINGS [07-10-2025(online)].pdf 2025-10-07
9 202541096538-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2025(online)].pdf 2025-10-07
10 202541096538-COMPLETE SPECIFICATION [07-10-2025(online)].pdf 2025-10-07
11 202541096538-Proof of Right [30-10-2025(online)].pdf 2025-10-30