Abstract: An intracranial (brain) tumor is an abnormal growth of cells within the brain, some of which may develop into brain cancer. Brain cancer is a leading cause of death each year. Early detection of a brain tumor increases the likelihood of successful treatment and survival. Most current tumor detection and identification techniques rely on a neuro-specialist's interpretation of images, which takes time and is susceptible to human error. The objective of the invention is to develop a robust CNN (Convolutional Neural Network) model that can accurately determine from brain MRI scan images whether a person has a tumor and, if so, the type of tumor. This model will assist doctors in making speedy diagnoses. 6 Claims & 1 Figure
Description: Field of Invention
The present invention relates to the early detection of brain tumors using a CNN (Convolutional Neural Network) model, which can increase the likelihood of successful treatment and survival.
The Objectives of this Invention
The objective is to develop a robust CNN (Convolutional Neural Network) model that can accurately determine from brain MRI scan images whether a person has a tumor and, if so, the type of tumor. This model will assist doctors in making speedy diagnoses.
Background of the Invention
In CN2020/1119044494B, an artificial-intelligence-based stroke evaluation instrument is described that comprises: an image-acquisition unit that acquires a flat-scan computed tomography image of the brain of at least one patient; an image preprocessing unit that preprocesses the flat-scan image and determines whether the patient is bleeding; an image processing unit that normalizes the transformed image, segments it using a preset mask structure, and extracts a region of interest; and a decision-making unit configured to use the extracted region of interest to identify whether there is an abnormality in the large cerebral blood vessels of the at least one patient. In US2019/0340762A1, a method for tracking abnormalities on a patient's skin comprises: obtaining first image information from an image capture device, the information including a first skin portion; determining a first skin mask corresponding to the first skin portion; locating one or more abnormalities on a first body part within the first skin mask; repeatedly capturing second image data from the image capture device, with the second body region contained within a second skin portion; and comparing the first and second body regions of the first and second image data to match the skin masks of the two images. In US2023/0018956A1, the steps for determining the cancer status of biological tissue are as follows: acquiring a Raman spectrum indicating the biological tissue's radiation response; capturing the Raman spectrum with a fiber-optic probe of a fiber-optic Raman spectral analysis framework; entering the Raman spectrum into a boosted-tree classification algorithm; and using the boosted-tree classification algorithm to compare the captured Raman spectrum in real time with reference data and determine the tissue's cancer status based on the difference between the two. The reference data was previously established from a set of reference Raman spectra demonstrating the Raman spectroscopic responses of reference tissue specimens.
Researchers have proposed various machine learning models to identify brain tumors from large volumes of collected medical MRI imaging data. These models employ feature extraction, feature selection, dimensionality reduction and classification techniques, with the majority focused on binary identification. Kharrat et al. proposed a binary classification approach for brain images using a Support Vector Machine (SVM) and Genetic Algorithm (GA) [24]; in this experiment, features were extracted using the Spatial Gray Level Dependency Matrix (SGLDM) method. Another study by Bahadure et al. used Berkeley Wavelet Transformation (BWT) in combination with SVM to segment and classify normal and abnormal brain tissues [25], achieving a prediction accuracy of 96.5% on 135 images.
Prior work on intracranial tumor detection using Convolutional Neural Networks (CNNs) includes the following. One line of work explores deep learning techniques, including CNNs, for brain tumor detection and segmentation; it discusses the challenges of medical image analysis and presents a method for brain tumor segmentation using a 3D CNN architecture. Other authors propose a CNN-based approach for brain tumor detection and classification, focusing on classifying tumor types from MRI images using transfer learning from pre-trained models such as VGG-16 and Inception-v3.
Another approach performs brain tumor segmentation and survival prediction using CNNs, highlighting the challenges faced in the BRATS (Multimodal Brain Tumor Segmentation) challenge and showcasing the utility of deep learning models in this context. DeepMedic is a CNN architecture designed for medical image analysis tasks such as brain tumor segmentation; its authors emphasize the importance of capturing spatial context and combining multi-scale information in medical images. A cascaded CNN architecture for brain tumor segmentation addresses the challenge of dealing with different tumor sizes and shapes by employing anisotropic convolutions in the network. While not specifically about tumors, other work focuses on the detection of intracranial hemorrhages using 3D CNNs, demonstrating the potential of deep learning for detecting critical conditions within the brain. A further approach based on the U-Net architecture performs automatic brain tumor detection and segmentation; U-Net is a widely used architecture for medical image segmentation tasks. Other authors propose a CNN-based method for brain tumor segmentation in MRI images, experimenting with different CNN architectures and evaluating their performance on public datasets. The 3D U-Net architecture for volumetric medical image segmentation addresses the challenge of limited annotated data by leveraging a sparse annotation strategy. Finally, brain tumor classification using deep neural networks with mixed feature learning, which combines different types of features extracted from MRI images, has demonstrated improved classification accuracy.
Summary of the Invention
The proposed invention will help doctors save time and capital. We propose Intracranial Tumor Detection using a Convolutional Neural Network, which will effectively determine whether a person has a tumor. The use of a CNN makes the feature extraction process easier compared to other techniques, and image augmentation is used to overcome the problem of overfitting.
Detailed Description of the Invention
Intracranial tumor detection using Convolutional Neural Networks (CNNs) involves the application of deep learning techniques to accurately identify and classify tumors within brain images. The proposed method proceeds as follows:
First, obtain a dataset of brain MRI or CT scan images with labeled tumor regions. Normalize and rescale the images to a consistent size, and augment the dataset with techniques such as rotation, flipping, and scaling to improve model generalization. Split the dataset into training, validation, and test sets; a common split is 70% for training, 15% for validation, and 15% for testing. Next, design a CNN architecture for tumor detection. A common choice is to use a pre-trained CNN as a feature extractor (e.g., VGG, ResNet, or Inception), followed by additional layers for classification: global average pooling to reduce the spatial dimensions while retaining important features, and fully connected layers for classification. Initialize the pre-trained CNN with weights from a model trained on a large dataset such as ImageNet, and freeze its layers to retain the learned features. Later, unfreeze some of the deeper layers of the pre-trained CNN to fine-tune the model on the specific tumor detection task, using a lower learning rate to prevent drastic changes to the pre-trained weights. Choose an appropriate loss function for binary classification (tumor vs. non-tumor), such as binary cross-entropy, and use an optimizer like Adam or RMSprop for gradient descent. A sketch of such a model definition is given below.
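As an illustration of the architecture described above, the following Python sketch builds a binary tumor classifier on top of a frozen, ImageNet-initialized VGG16 backbone. The use of TensorFlow/Keras, the input size, the layer widths, and the learning rates are illustrative assumptions, not requirements of the invention.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input resolution for the MRI slices

# Pre-trained feature extractor initialized with ImageNet weights.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the backbone to retain learned features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # reduce spatial dimensions
    layers.Dense(128, activation="relu"),   # fully connected classifier
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # tumor probability
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",   # binary: tumor vs. non-tumor
              metrics=["accuracy"])

# Optional fine-tuning (normally done after an initial training pass):
# make the backbone trainable, keep all but the last few layers frozen,
# and recompile with a lower learning rate so the pre-trained weights
# change only gradually.
base.trainable = True
for layer in base.layers[:-4]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
```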
Train the model on the training dataset while monitoring the validation loss and accuracy, and implement techniques such as early stopping and learning rate scheduling to prevent overfitting. Evaluate the trained model on the test dataset to assess its performance, calculating metrics such as accuracy, precision, recall, F1-score, and area under the ROC curve (AUC-ROC). Apply thresholding to the model's output probabilities to generate binary predictions (tumor present or absent), and perform morphological operations (e.g., dilation, erosion) to refine the predicted tumor regions. Visualize the model's predictions overlaid on the original brain images to assess the accuracy of tumor detection. Once satisfied with the model's performance, deploy it in a clinical or research setting for real-time tumor detection, ensuring compliance with medical regulations and ethical considerations. It is important to note that medical image analysis requires careful consideration of ethical and regulatory guidelines; collaboration with medical professionals and domain experts is essential to ensure the accuracy, safety, and reliability of the proposed method. A training and evaluation sketch follows.
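The training and evaluation steps above might look as follows in the same TensorFlow/Keras setting. Here `model` is the classifier from the previous sketch, and `train_ds`, `val_ds`, and `test_ds` are hypothetical tf.data pipelines yielding (image, label) batches, with `test_ds` assumed to preserve sample order.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

callbacks = [
    # Stop when validation loss stops improving and keep the best weights.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    # Halve the learning rate when validation loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=2),
]
model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)

# Evaluate on the held-out test set (assumes test_ds is not shuffled).
y_prob = model.predict(test_ds).ravel()
y_true = np.concatenate([y.numpy() for _, y in test_ds])
y_pred = (y_prob >= 0.5).astype(int)   # threshold the output probabilities

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_prob))
```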
Deep learning is a branch of machine learning that makes use of neural networks. These neural networks aim to mimic how the human brain functions by enabling a system to "learn" from large amounts of data; some of the data pre-processing generally involved in machine learning is eliminated with deep learning. A CNN is a form of artificial neural network used in deep learning that is generally applied to image recognition and classification. A CNN model can be viewed as the combination of two components: a feature extraction component and a classification component. The layers that perform feature extraction are the convolution and pooling layers; after feature extraction, the fully connected layers together with an activation function perform the classification. Whenever we train a neural network, we need to preprocess the images. Image preprocessing generally includes resizing, data augmentation and similar operations, and it can accelerate both model training and inference. The images collected for training may not all be of the same size, or they may be very large; image preprocessing adjusts all images to the same size and reduces the size of large images, which shortens training time and improves performance. Image pre-processing also aims to improve the picture data by reducing unwanted noise or enhancing specific visual properties that are important for subsequent processing and analysis tasks. A minimal sketch of the two CNN components and a resizing step is shown below.
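The following is a minimal sketch of the two components described above, together with a simple resize-and-rescale preprocessing step. It assumes TensorFlow/Keras, and the 150x150 input size and layer widths are illustrative assumptions only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_simple_cnn(input_shape=(150, 150, 3)):
    """Small CNN: convolution/pooling for feature extraction,
    fully connected layers for classification."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Feature extraction component: convolution + pooling layers.
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Classification component: fully connected layers + activation.
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def preprocess(image):
    """Resize every image to a common size and scale pixels to [0, 1]."""
    image = tf.image.resize(image, (150, 150))
    return tf.cast(image, tf.float32) / 255.0
```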
Training any neural network requires an enormous amount of input data so that the model can acquire as many features as possible and avoid overfitting. However, obtaining a large volume of data is not always possible. Therefore, we employ image augmentation, which helps grow the dataset; in image augmentation we generally apply rotation, resizing, flipping and other techniques, as in the sketch below.
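A possible augmentation sketch, again assuming TensorFlow/Keras; the specific transforms and their parameter values are illustrative assumptions rather than prescribed by the invention.

```python
import tensorflow as tf
from tensorflow.keras import layers

# On-the-fly augmentation: random flips, small rotations and zooms
# enlarge the effective training set and reduce overfitting.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),   # up to +/-10% of a full turn (~36 degrees)
    layers.RandomZoom(0.1),       # random zoom stands in for rescaling
])

# Typical use inside a (hypothetical) tf.data training pipeline:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```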
CNN-based intracranial tumor detection offers the following advantages:
Improved Accuracy: CNNs have demonstrated remarkable accuracy in detecting intracranial tumors from medical images such as MRI and CT scans. They can learn intricate patterns and features that are challenging for traditional methods to capture.
Automated Detection: CNNs enable the automation of tumor detection, reducing the need for manual intervention and accelerating the diagnostic process, which is crucial for timely diagnosis and treatment planning.
Segmentation: CNNs excel at segmenting tumor regions within medical images. They can accurately delineate tumor boundaries, aiding treatment planning and follow-up assessments.
Multi-Modal Integration: Many studies have shown that CNNs can effectively integrate information from different imaging modalities, such as T1-weighted, T2-weighted, and contrast-enhanced MRI images, enhancing the overall accuracy of tumor detection.
Real-time Detection: CNNs can process medical images in real time or near real time, making them suitable for emergency situations and critical care scenarios.
Early Detection: The sensitivity of CNNs allows early detection of small tumors that might be overlooked in manual analysis or less sophisticated automated methods.
Reduced False Positives: By learning from large datasets, CNNs can reduce the rate of false positive detections, minimizing unnecessary follow-up tests or procedures.
Robustness to Variability: CNNs can handle variations in tumor size, shape, location, and patient demographics, making them adaptable across different clinical scenarios.
Reduced Inter-Observer Variability: Automated tumor detection using CNNs can reduce the variability that arises from differences in expertise among human observers.
Clinical Validation: Several studies have reported successful clinical validation of CNN-based tumor detection systems, demonstrating their potential for real-world application and integration into clinical workflows.
Generalizability: CNNs trained on diverse datasets can generalize well to new, unseen cases, potentially assisting radiologists in identifying rare or unusual tumor presentations.
Integration with PACS: CNN-based systems can be integrated with Picture Archiving and Communication Systems (PACS) commonly used in healthcare institutions, facilitating seamless adoption into existing infrastructure.
Personalized Medicine: By accurately detecting and segmenting tumors, CNNs contribute to personalized treatment planning and monitoring of disease progression.
It's important to note that while CNNs have shown promising results in intracranial tumor detection, they are not without limitations. The quality of the training data, the potential for model bias, interpretability of results, and ethical considerations are among the factors that researchers and clinicians need to carefully consider when developing and deploying such systems.
6 Claims & 1 Figure
Brief Description of the Drawing
The figure illustrates an exemplary embodiment of the invention.
Figure 1: System architecture of the proposed method.
Claims: The scope of the invention is defined by the following claims:
1. A system/method for the identification of tumors using deep learning algorithms, said system/method comprising the steps of:
a) When the system starts up, data is gathered from MRI datasets (1); the data is then processed into training and testing datasets (2).
b) The developed system will function based on a loss function (4) and image classification feature selection (5).
c) Based on these data, different architectures will be applied to analyze the test data (6) and analyze the result (7).
2. As per claim 1, to begin with, a dataset is necessary in order to build any deep learning model. The 'Brain-Tumor-Classification' dataset will be used for brain tumour identification. The dataset is divided into 4 folders: "no_tumor", representing non-tumorous images, and three folders named "glioma_tumor", "meningioma_tumor" and "pituitary_tumor", representing different types of tumors.
3. As mentioned in claim 1, the training, testing and validation datasets will now be created from the dataset. Here, 30% of the dataset will be used for testing and validation and 70% will be used for training.
4. As per claim 1, the testing and training phases of the design are separated, and the training phase begins first. To enhance the dataset and enable the model to extract as many features as feasible, the training dataset must first be preprocessed using data augmentation.
5. As mentioned in claim 1, a model will then be created and trained using the preprocessed training dataset and the CNN algorithm. The model will attempt to learn which features in this training dataset correspond to a tumor.
6. As per claim 1, the loss function is used to assess how effectively the neural network models the trained dataset. A loss function compares the target and predicted output values to determine how well the neural network mimics the training data; if the loss is low, the model is performing well.
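For reference, the binary cross-entropy loss mentioned above is conventionally written as below, where N is the number of training samples, y_i the ground-truth label (1 for tumor, 0 for non-tumor) and ŷ_i the predicted tumor probability; this is the standard textbook formulation, not a formulation specific to the invention.

```latex
\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[\, y_i \log(\hat{y}_i) + (1 - y_i)\,\log(1 - \hat{y}_i) \,\Bigr]
```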