
Integrated Multi Modal Image Fusion And Tumor Classification Application For Enhanced Diagnosis

Abstract: The accurate diagnosis and localization of lung tumors are pivotal for tailored treatment strategies and enhanced patient outcomes. This study introduces a MATLAB app that harnesses the power of multi-modal medical imaging, integrating magnetic resonance imaging (MRI) and computed tomography (CT) scans. Through Convolutional Neural Networks (CNNs) and Principal Component Analysis, this research enhances the precision of lung tumor diagnosis. The fusion of MRI and CT images using multimodal medical image fusion techniques provides a comprehensive insight into lung tumors, capturing diverse attributes such as anatomical details and perfusion patterns. This integration facilitates clearer visualization of tumor boundaries and surrounding tissues. Subsequent classification of lung tumor types contributes to personalized treatment strategies. This holistic approach optimizes lung tumor diagnosis and medical decisions, enabling tailored therapeutic interventions. The integration of innovative techniques within the MATLAB app showcases the potential to redefine lung tumor diagnosis standards, ultimately improving accuracy, efficiency, and clinical outcomes. 5 Claims and 3 Figures


Patent Information

Application #
Filing Date
06 November 2023
Publication Number
51/2023
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Parent Application

Applicants

MLR Institute of Technology
Hyderabad

Inventors

1. Dr. K. Sai Prasad
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Hyderabad
2. Ms. Meghana
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Hyderabad
3. Ms.L.Bhargavi
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Hyderabad
4. Ms. C.Sai Shreeya
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Hyderabad

Specification

Description:
FIELD OF THE INVENTION
The proposed invention presents an innovative approach to lung tumor diagnosis by creating a MATLAB-based application that seamlessly integrates multi-modal medical image fusion with automated tumor classification. This model improves diagnostic accuracy by enhancing the visibility of lung tumors through fusion and providing a comprehensive classification into distinct tumor types.
OBJECTIVE OF THIS INVENTION
The primary objective of this invention is to significantly enhance the accuracy of lung tumor diagnosis through the seamless integration of multi-modal medical image fusion and automated tumor classification. This is accomplished by combining two distinct types of medical images, specifically Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images, using Convolutional Neural Networks (CNNs). The invention also seeks to streamline the diagnostic workflow by offering both image fusion and tumor classification within a single MATLAB-based application. This integrated approach reduces the need for clinicians to analyze images separately, saving time and expediting the diagnostic process. By leveraging the CNNs' feature-extraction capabilities, relevant features from both types of images are effectively captured and blended to create new composite images. The resulting fused images provide a comprehensive and detailed representation of the lung's condition, particularly in cases involving tumors. The invention thereby aims to enhance clinical decision-making by providing clinicians with more informative images along with automated tumor typing.

Background of the Invention
Medical imaging, like ultrasound and CT scans, is crucial for disease detection, especially tumor detection. However, manual diagnosis is time-consuming and error-prone, especially in remote areas with fewer experts. AI and deep learning can help, but challenges remain due to noisy images and limited information from single modes. For instance, ultrasound is quick but struggles with small tumors, while X-rays are sensitive but have their own limitations. The solution lies in combining AI and multiple imaging modes, like CT and MRI, to improve accuracy and early detection, promising a revolution in medical diagnostics.
For instance, CN109035160B relates to improving how we analyze medical images. It focuses on merging different types of images to get a better understanding of medical conditions. The main goal is to solve problems like blurry images and difficulty telling tissues apart. The process involves taking two types of images, cleaning them up, and combining them. This combination provides more detailed information about the tissues from different angles. To do this, the images go through various steps like cleaning, breaking down into smaller parts, and then merging them. While CN109035160B generally aims to enhance the quality and information content of medical images, the concept of multimodal medical image fusion for lung tumor detection specifically focuses on using fusion techniques to improve the accuracy of detecting tumors within the lungs.
Similarly, the CN106203327B invention is about a system and method for spotting lung tumors using convolutional neural networks. It takes a bunch of images and processes them to find tumor features. It then uses a chart to figure out if there's a high chance of a tumor being present. This helps in better identifying tumors in images of the lungs. It just identified the tumors but did not classify them.
CN112750097B aims to enhance the blending of various medical image types by combining several Convolutional Neural Networks (CNNs) with a fuzzy neural network. In multi-modal medical imaging, the objective is to improve the texture, details, and crispness of crucial areas. The method has two primary steps. First, a collection of Gabor-based CNNs (G-CNNs) is built: different Gabor filters produce various representations of the same region in CT and MR images, and separate CNNs are trained on these representations to form the set of G-CNNs. Second, a fuzzy neural network merges the outputs of the G-CNNs to produce the final fused image.
The CN110322423B invention introduces a method to improve the detection of small targets in images by combining infrared and visible light images. It uses a fusion model to create a new image that combines the strengths of both types of images. This helps in detecting small targets accurately, which is important for various applications. The method can overcome the limitations of using only an infrared sensor and is valuable for improving detection results in practical applications.
The embodiments of US9922272B2 introduce a sophisticated method for enhancing the fusion of different types of medical images through deep learning techniques. The objective is to improve the accuracy of medical diagnosis and treatment planning by effectively merging information from diverse imaging modalities. The process involves aligning and training neural networks using two sets of medical images, which allows the creation of a specialized similarity metric for the modalities being fused. This metric captures the relationships between image patches, enhancing the quality of fusion. The resulting approach can be highly valuable for medical practitioners, aiding in more accurate and comprehensive diagnosis and treatment strategies.
In the context of detecting lung tumors using multi-modal medical imaging, there is a need for a better diagnostic method. Current methods lack comprehensive accuracy and do not integrate different imaging sources. To address this, a new approach is proposed that combines Convolutional Neural Networks with multiple imaging modalities. This approach aims to enhance lung tumor detection accuracy and provide precise treatment guidance. The model is inspired by existing advancements in medical imaging and aims to revolutionize lung tumor diagnosis.
Summary of the Invention
This invention provides a MATLAB-based application that integrates multi-modal medical image fusion with automated lung tumor classification. MRI and CT images of the lungs are first pre-processed and passed through parallel Convolutional Neural Networks to extract high-level features from each modality independently. Principal Component Analysis reduces the dimensionality of these features, and Discrete Wavelet Transform decomposition with a contrast-based fusion rule combines the corresponding coefficients from both modalities; an Inverse Discrete Wavelet Transform then reconstructs a single fused image. The fused image offers a more comprehensive view of the tumor and surrounding tissue than either modality alone. A further CNN-based classifier labels detected tumors as benign or malignant and, for malignant cases, distinguishes small-cell from non-small-cell lung tumors. By combining fusion and classification in one application, the invention streamlines the diagnostic workflow and supports tailored treatment decisions.
Brief Description of Drawings
The invention will be described in detail with reference to the exemplary embodiments shown in the figures, wherein:
Figure 1: Flow diagram representing the step-by-step workflow of the proposed method, offering a visual overview of achieving the desired final output
Figure 2: Flow diagram representing the working of Convolutional Neural Network for the proposed method
Figure 3: Flow diagram representing the architecture of the proposed model
Detailed Description of the Invention
The following description presents a model in the field of medical imaging for improved lung tumor diagnosis. The invention involves the integration of Convolutional Neural Networks (CNNs) and Principal Component Analysis (PCA) to analyze lung tumor images captured using Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans. The following paragraphs provide a comprehensive analysis of the invention's technical underpinnings.
In the context of this model, fusion denotes the combining of data from different imaging modalities, each representing a specific imaging technique. One such modality is Magnetic Resonance Imaging (MRI), which uses radio waves to create a view of bones, tissues, and organs for an in-depth look, while another is Computed Tomography (CT), which uses x-rays to create multiple cross-sections of the body and can detect unknown masses that may be identified as tumors. As different modalities contain interrelated information, the goal is to combine their strengths to obtain an overall improved image. The combination of two anatomical modalities like Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) is often used for diagnosis, as they depict the inner structure of the body with different contrast for different tissues.
At first, the MRI and CT images are pre-processed to ensure uniformity in terms of dimensions, resolution, and other factors as mentioned in Figure 1. It is done to ensure that the images are compatible with the parallel CNNs as they are designed as separate neural network architectures, with identical or similar structures. These architectures consist of multiple layers, including convolutional layers for feature extraction, pooling layers for downsampling, and fully connected layers for classification. Each CNN processes the MRI and CT images independently, convolving filters across the images to detect various features such as edges, textures, and patterns. The convolutional layers capture hierarchical features, recognizing simple patterns in earlier layers and complex structures in deeper layers. After convolution, pooling layers downsample the feature maps, reducing their dimensions. This process helps retain important information while reducing computational complexity. Max pooling is used to select the maximum value from a group of neighboring pixels in the feature map.
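The pre-processing and pooling steps above can be sketched as follows. Note that the invention is a MATLAB application; this is a purely illustrative pure-Python sketch with hypothetical helper names, showing nearest-neighbour resizing to a common grid, intensity normalization, and 2x2 max pooling.

```python
# Illustrative sketch (not the patented MATLAB implementation): resize both
# modalities to a common shape, normalize intensities to [0, 1], and apply
# 2x2 max pooling as used after the convolutional layers. Stdlib only.

def resize_nearest(img, out_h, out_w):
    """Resize a 2-D list of pixel intensities with nearest-neighbour sampling."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def normalize(img):
    """Scale intensities linearly into [0, 1] so MRI and CT are comparable."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[(v - lo) / span for v in row] for row in img]

def max_pool_2x2(fmap):
    """Downsample a feature map by keeping the max of each 2x2 block."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

mri = [[0, 50], [100, 200]]                 # toy 2x2 MRI slice
ct = [[10, 20, 30, 40], [50, 60, 70, 80],   # toy 4x4 CT slice
      [90, 100, 110, 120], [130, 140, 150, 160]]

# Bring both modalities to the same 2x2 grid, then normalize each.
ct_small = resize_nearest(ct, 2, 2)
mri_n, ct_n = normalize(mri), normalize(ct_small)
pooled = max_pool_2x2([[1, 3], [2, 4]])     # keeps the 2x2 block's maximum
```

A real pipeline would use proper interpolation and per-dataset intensity windowing; the point here is only that both modalities end up with identical dimensions and a shared intensity range before entering the parallel CNNs.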
The pooled and downsampled features are flattened into a one-dimensional vector and passed through fully connected layers. These layers perform high-level feature extraction and classification, transforming the learned features into a format suitable for classification tasks. The final layer of each CNN is the output layer, which produces feature vectors that encapsulate high-level information about the images.
Furthermore, Principal Component Analysis (PCA) is used to transform the feature vectors into a new set of vectors of reduced dimension, known as principal components. The principal components are ranked by the amount of variance they account for, and those that account for the most variance are retained.
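The PCA step can be illustrated with a toy sketch. This is not the patented code (which a MATLAB implementation would handle with built-in routines); it is a minimal pure-Python demonstration of finding the direction of maximum variance by power iteration and projecting the centred features onto it.

```python
# Toy PCA sketch (illustrative only): centre the feature vectors, estimate the
# covariance matrix, find the dominant principal component by power iteration,
# and project the data onto it.

def pca_top_component(vectors, iters=200):
    """Return (unit eigenvector of largest variance, centred data)."""
    n, d = len(vectors), len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(d)]
    centred = [[v[j] - means[j] for j in range(d)] for v in vectors]
    # Sample covariance matrix (d x d).
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1)
            for j in range(d)] for i in range(d)]
    # Power iteration converges to the eigenvector with the largest
    # eigenvalue, i.e. the direction accounting for the most variance.
    w = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w, centred

# Hypothetical features that vary almost entirely along the first axis.
data = [[1.0, 0.1], [2.0, 0.0], [3.0, -0.1], [4.0, 0.0]]
component, centred = pca_top_component(data)
# Projecting onto the top component gives the reduced 1-D representation.
projections = [sum(c * w for c, w in zip(row, component)) for row in centred]
```

Keeping only the top few components discards directions of low variance, which is exactly the dimensionality reduction the description refers to.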
Further processing involves subjecting PCA-transformed feature vectors to Discrete Wavelet Transform (DWT) decomposition, breaking information into frequency sub-bands. Corresponding coefficients from both modalities fuse at each level, reconstructing a comprehensive image using Inverse Discrete Wavelet Transform. This integration leverages PCA and DWT, enhancing representation by combining details for interpretability. The "Contrast-Based" strategy gains relevance for lung tumor detection in multimodal fusion after PCA, enhancing tumor visibility against tissues. Increased contrast aids accurate localization, boundary definition, and differentiation, which are vital for diagnosis and treatment. By uniting PCA and contrast-driven fusion, this method improves interpretability and precision, elevating lung tumor detection's effectiveness in medical imaging.
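The decompose-fuse-reconstruct cycle above can be sketched in one dimension. This is a hypothetical illustration, not the patented fusion rule: one level of the Haar DWT, averaging of approximation coefficients, a max-magnitude rule for detail coefficients (analogous in spirit to the contrast-based strategy), and reconstruction via the inverse transform.

```python
# Illustrative 1-D Haar DWT fusion sketch (hypothetical rule, stdlib only).

def haar_dwt(signal):
    """One Haar level: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse Haar transform: exactly reconstructs the original signal."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(sig_a, sig_b):
    """Average approximations; keep the larger-magnitude detail coefficient,
    so the stronger edge (higher local contrast) from either modality survives."""
    a1, d1 = haar_dwt(sig_a)
    a2, d2 = haar_dwt(sig_b)
    fa = [(x + y) / 2 for x, y in zip(a1, a2)]
    fd = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return haar_idwt(fa, fd)

mri_row = [4.0, 2.0, 6.0, 6.0]   # strong edge between the first two samples
ct_row = [4.0, 4.0, 2.0, 8.0]    # strong edge between the last two samples
fused_row = fuse(mri_row, ct_row)  # retains the sharper edge from each source
```

In 2-D the same idea applies per sub-band (horizontal, vertical, diagonal details) over multiple decomposition levels; the max-magnitude rule is what lets the fused image preserve the most contrasted structures from both MRI and CT.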
Lastly, a CNN built with TensorFlow is adapted for the fused features. It includes convolutional, pooling, and fully connected layers to analyze and extract patterns. Fused features are processed in the CNN's convolutional layers to reveal higher-level features.
After convolution, pooling layers reduce the dimensions of the feature maps, with max pooling retaining the most prominent features within each region. The flattened pooled features then pass through fully connected layers, which learn representations that form the basis for classification. The last layer produces probabilities for the 'benign' and 'malignant' classes. Softmax activation transforms the raw outputs into class probabilities, indicating the network's confidence in classifying tumors.
For interpretation, a threshold is applied to the output probabilities: if the probability of malignancy exceeds 0.5, the tumor is classified as malignant; otherwise, it is classified as benign. Once a tumor is identified as malignant via the Softmax function, the next step is to differentiate between small-cell and non-small-cell lung tumors. Using Convolutional Neural Networks (CNNs), the fused image data undergoes further feature extraction to discern the patterns characteristic of each tumor type. The CNN's classification layers interpret these patterns to categorize tumors as either small-cell or non-small-cell lung tumors. This CNN-driven approach improves precision in tumor classification, enabling tailored medical strategies and better-informed clinical decisions.
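The softmax-and-threshold step described above can be sketched as follows. The logits and threshold below are made-up illustrative numbers, not outputs of the patented network.

```python
# Sketch of the two-class decision head (illustrative only): softmax over the
# benign/malignant logits, then a 0.5 threshold on the malignant probability.
import math

def softmax(logits):
    """Convert raw network outputs into class probabilities summing to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=0.5):
    """Label a tumor 'malignant' if its softmax probability exceeds threshold."""
    p_benign, p_malignant = softmax(logits)
    label = "malignant" if p_malignant > threshold else "benign"
    return label, p_malignant

label, p = classify([0.2, 1.9])          # hypothetical [benign, malignant] logits
```

The 0.5 threshold makes the decision equivalent to picking the larger of the two probabilities; a clinical deployment could shift the threshold to trade sensitivity against specificity.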

Advantages of the proposed method:
The proposed model fuses the modalities of CT, which allows for the preservation of high spatial resolution due to their excellence in capturing anatomical details, and MRI, which excels in soft tissue contrast, resulting in a more precise representation of lung and tumor boundaries.
By combining various imaging techniques, we tap into a wealth of information about tumor behavior, perfusion patterns, and structural attributes. This holistic data interpretation aids in grasping the tumor's behavior and response to treatment.
The fusion of different images ensures pinpoint localization of the tumors within the lung. This precision is vital for surgical planning, guiding surgeons to minimize damage to healthy tissue while intervening effectively.
Multimodal fusion's ability to detect subtle tissue changes in the early stages enables timely diagnosis and meticulous treatment planning, enhancing the efficacy of therapeutic interventions. With diverse lung tumors responding uniquely to treatments like surgery, chemotherapy, and radiation, multimodal fusion offers insights into the tumor's diverse characteristics. This knowledge assists in personalizing treatment approaches to align with specific tumor attributes and patient needs.
The user-friendly MATLAB-based application interface makes advanced diagnostic tools more accessible to a wider range of medical professionals, fostering the adoption of sophisticated techniques in clinical practice.
This model also holds the potential to improve clinician efficiency through a data-driven approach that enables objective tracking of tumor progression and treatment response, via quantitative analysis of tumor parameters such as size, shape, and volume made possible through multimodal fusion.
Claims:
The scope of the invention is defined by the following claims:
1. A model that uses MATLAB to create a multimodal medical image fusion application for enhanced diagnosis of lung tumors, comprising:
a) Images of different modalities are obtained as input and features are extracted from each independently, with simultaneous dimensionality reduction applied to the derived features to create reduced-dimension representations.
b) The features of both modality images are combined to create a fused image representation, in which lung tumors are detected and further identified as benign or malignant.
2. As per claim 1, wherein the first input modality image is either an MRI-scanned image or a CT-scanned image and the second input modality image is of the alternate modality.
3. As per claim 1, wherein the decomposition process uses the Discrete Wavelet Transform (DWT), and the decomposed images are processed by parallel Convolutional Neural Networks (CNNs) independently to extract high-level features.
4. As per claim 1, wherein Principal Component Analysis (PCA) is applied simultaneously to the features of both modality images to obtain reduced-dimensional representations, and the transformed features of the first modality image are fused with the transformed features of the second modality image using the Inverse Discrete Wavelet Transform.
5. As per claim 1, wherein the MATLAB tool is used for creating an application for integrated multimodal image fusion.

Documents

Application Documents

# Name Date
1 202341075643-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-11-2023(online)].pdf 2023-11-06
2 202341075643-FORM-9 [06-11-2023(online)].pdf 2023-11-06
3 202341075643-FORM FOR STARTUP [06-11-2023(online)].pdf 2023-11-06
4 202341075643-FORM FOR SMALL ENTITY(FORM-28) [06-11-2023(online)].pdf 2023-11-06
5 202341075643-FORM 1 [06-11-2023(online)].pdf 2023-11-06
6 202341075643-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-11-2023(online)].pdf 2023-11-06
7 202341075643-EDUCATIONAL INSTITUTION(S) [06-11-2023(online)].pdf 2023-11-06
8 202341075643-DRAWINGS [06-11-2023(online)].pdf 2023-11-06
9 202341075643-COMPLETE SPECIFICATION [06-11-2023(online)].pdf 2023-11-06