Abstract: A system for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user 102 using a machine learning model 110 is provided. The system 100 includes an input data source 104 and a server 108. The server 108 (i) receives MRI data from the input data source 104; (ii) pre-processes the MRI data to obtain pre-processed MRI data; (iii) segments the pre-processed MRI data into segments using a cortical and a sub-cortical segmentation method; (iv) determines cortical and sub-cortical volumes of the brain and a first output data; and (v) evaluates the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change, as a third output, in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. FIG. 1
DESC:Technical Field
[0001] The embodiments herein generally relate to automated malignant or non-malignant growth analysis in a brain, and more particularly, to a system and method for evaluating a malignant or non-malignant growth in the brain from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model.
Description of the Related Art
[0002] A comprehensive brain tumor analysis is a complete analysis of brain tumor images (e.g., structural magnetic resonance imaging (sMRI) scans) including tumor detection, classification, and segmentation; volumetric analysis of tumor growth; longitudinal analysis; radiomics analysis; preoperative and postoperative analysis; etc., in order to extract useful information about the tumors in all aspects (e.g., tumor boundaries and segmentations, quantitative estimates of tumor growth, textural information, and other tumor metrics such as tumor grade, tumor type, direction of growth, tumor morphology, etc.). The comprehensive analysis of brain tumor images is a vital step towards accurate and/or speedy diagnosis and effective treatment planning for brain tumors.
[0003] Existing brain tumor analysis systems perform only one or two of segmentation, type and/or grade classification, longitudinal analysis, radiomics analysis, and pre-post operative tumor analysis, and provide limited information. Hence, an effective system is still needed for a holistic approach to diagnosis and treatment planning of brain tumors.
[0004] Moreover, the existing brain tumor analysis systems perform the analysis on the tumor using only one or two modalities of sMRI. For example, the existing brain tumor analysis systems use a T1 contrast weighted (T1cw) MRI image and a fluid-attenuated inversion recovery (FLAIR) MRI image for tumor delineation and segmentation. This segmentation analysis provides tumor core delineation, but not other useful segmentation information such as peritumoral edema, necrotic tumor regions, etc.
[0005] In addition to the analysis of brain tumor images, treatment planning further depends on patient-specific information, including demographic and clinical information of patients. Hence, it is essential to consider this patient-specific information while planning treatment, especially personalized therapy. The existing systems fail to consider this patient-specific information in their analysis, which limits personalized treatment planning.
[0006] Therefore, there arises a need to address the aforementioned technical drawbacks in existing systems for analyzing MRI data to evaluate a malignant or non-malignant growth in a brain.
SUMMARY
[0007] In view of the foregoing, an embodiment herein provides a system for evaluating a malignant or non-malignant growth in a brain by estimating a change in cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model. The system includes an imaging device and a server. The imaging device includes at least one of a camera, a screen, or an image-capturing device. The imaging device obtains MRI data of the user that comprises at least one scan data. The server acquires the MRI data of the user from the imaging device and processes the MRI data using the machine learning model. The server includes a memory that stores a database and a processor that is configured to execute the machine learning model and is configured to (i) automatically pre-process the MRI data to obtain pre-processed MRI data; (ii) segment the pre-processed MRI data into segments using a first cortical and sub-cortical segmentation method, the segments including at least one of cortical and sub-cortical areas of the brain; (iii) determine cortical and sub-cortical volumes of the brain and a first output data by refining boundaries of the segments using a second cortical and sub-cortical segmentation method, the first output data including at least one of a thickness of the brain from different regions, segmentation boundaries, and cortical surface delineations; (iv) predict, using the machine learning model, a malignant or non-malignant growth and a second output data based on the pre-processed MRI data, the machine learning model being trained by correlating historical MRI data and historical pre-processed MRI data with historical malignant or non-malignant growths of the brain and historical second outputs, the second output data comprising at least one of delineation of malignant or non-malignant growth core areas, differential segmentation and/or delineation of malignant or non-malignant growth and malignant or non-malignant growth core areas, multi-differential segmentation and/or delineation of malignant or non-malignant growth, a peritumoral edema, a necrotic malignant or non-malignant growth, or a malignant or non-malignant growth core; and (v) evaluate the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change, as a third output, in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth, the third output including at least one of a volume of the malignant or non-malignant growth, the malignant or non-malignant growth's infiltration, and a rate of the malignant or non-malignant growth.
[0008] In some embodiments, the processor is configured to quantify a fourth output data by extracting a spatial distribution of pixel interrelationships in the pre-processed MRI data, signal intensities of the pre-processed MRI data, and one or more radiomics features from the pre-processed MRI data. The fourth output data includes at least one of textural information of the malignant or non-malignant growth, a direction of growth, morphology of the malignant or non-malignant growth, a volume of the malignant or non-malignant growth, and the malignant or non-malignant growth's infiltration.
[0009] In some embodiments, the processor is configured to compare, using a normative model, the first output data and the second output data based on a group-level distribution (i) to assess where metrics of the user fall in a healthy population curve and (ii) to estimate the change in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to obtain a fifth output data. The fifth output data includes data on percentile volume differences compared to a healthy population.
[0010] In some embodiments, the processor is configured to classify the malignant or non-malignant growth into at least one type of the malignant or non-malignant growth and at least one grade of the malignant or non-malignant growth, and to assess a genotypic nature of the malignant or non-malignant growth using the one or more radiomics features.
[0011] In some embodiments, the processor is configured to compare the pre-processed MRI data taken at a pre-operative stage of the user and the pre-processed MRI data taken at a post-operative stage to provide changes in a tissue map of the user as a sixth output data.
[0012] In some embodiments, the processor is configured to generate a comprehensive report using outputs of malignant or non-malignant growth layers, the grade of the malignant or non-malignant growth, a position of the malignant or non-malignant growth, a tissue type, and probability maps.
[0013] In some embodiments, the processor is configured to perform image preprocessing and noise reduction using statistical methods, and the machine learning model is trained by correlating historical users, historical pre-processed MRI data, and historical MRI data with historical malignant or non-malignant growth core areas, historical malignant or non-malignant growth types, historical malignant or non-malignant growth grades, historical genotypic natures of the malignant or non-malignant growth, historical user profiles, and historical survival analytics.
[0014] In some embodiments, the processor is configured to obtain the pre-processed MRI data by (i) converting the MRI data from at least one of a digital imaging and communications in medicine (DICOM) format or a nearly raw raster data (NRRD) format to a Neuroimaging Informatics Technology Initiative (NIfTI) format and generating corresponding header information as a JSON sidecar; (ii) removing scanner noise associated with an MRI machine, motion, image acquisition, and the DICOM and NIfTI formats of the MRI data to obtain noise-free MRI data; (iii) correcting motion, orientation, an origin, and angles in X, Y, Z coordinates for motion artefacts that occurred between each captured 2D slice of the noise-free MRI data to obtain motion-corrected MRI data; (iv) performing non-parametric, non-uniform intensity normalization on the motion-corrected MRI data to remove inhomogeneous MRI scanner noise and inhomogeneous artefacts to obtain a first corrected MRI data; (v) performing bias-correction on noises due to low-frequency and smooth signals on the first corrected MRI data to obtain a second corrected MRI data; (vi) scaling and normalizing intensities for voxels specified by a brain mask across the second corrected MRI data to obtain normalized MRI data; (vii) determining a brain area MRI data by removing a neck area from the normalized MRI data; (viii) computing a nonlinear transformation of the brain area MRI data to align with an MRI template using a non-linear volumetric registration; (ix) segmenting at least one of a white matter (WM), a gray matter (GM), or a cerebrospinal fluid (CSF) from the nonlinear transformation of the brain area MRI data; (x) performing registration across the nonlinear transformation of the brain area MRI data that is segmented to a Montreal Neurological Institute (MNI) coordinate space or a native/original space; (xi) performing skull stripping of the nonlinear transformation of the brain area MRI data that is registered to the MNI space or the native/original space; (xii) resampling the nonlinear transformation of the brain area MRI data that is skull stripped to obtain a homogeneous voxel size (approximately 1 mm³ isotropic); (xiii) performing spatial normalization of the nonlinear transformation of the brain area MRI data that is resampled from a user-specific space to the MNI space; (xiv) performing a cortical registration of the nonlinear transformation of the brain area MRI data that is spatially normalized from the user-specific space to the MNI space; (xv) transforming the MNI space back to the user-specific space for user-specific analysis; (xvi) covering a cortical surface of the nonlinear transformation of the brain area MRI data with triangles to fill up a hemisphere (tessellation); (xvii) reconstructing, using a cortical surface reconstruction, any missing voxels from the gray matter area of the cortical surface to obtain reconstructed MRI data; (xviii) performing spatial smoothing by filtering high frequencies from a frequency domain to increase the signal-to-noise ratio of the reconstructed MRI data; and (xix) aligning the reconstructed MRI data by a position of the anterior and posterior commissures.
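Step (vi) above, scaling and normalizing intensities for voxels specified by a brain mask, can be sketched as a z-score normalization over in-mask voxels. The following is an illustrative example only, not the claimed implementation; the function name and the synthetic volume and mask are assumptions:

```python
import numpy as np

def normalize_intensities(volume, brain_mask):
    """Scale and normalize voxel intensities inside a brain mask (step (vi)).

    volume     : 3D numpy array of MRI intensities
    brain_mask : boolean 3D numpy array, True for brain voxels
    Returns a volume with brain voxels z-scored; background voxels are zeroed.
    """
    in_mask = volume[brain_mask]
    mean, std = in_mask.mean(), in_mask.std()
    normalized = np.zeros_like(volume, dtype=np.float64)
    normalized[brain_mask] = (volume[brain_mask] - mean) / std
    return normalized

# Synthetic example: an 8x8x8 volume with a central brain region
vol = np.random.default_rng(0).normal(100.0, 15.0, size=(8, 8, 8))
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:6, 2:6, 2:6] = True
out = normalize_intensities(vol, mask)
```

After normalization, the in-mask intensities have zero mean and unit variance, which makes intensities comparable across scanners and subjects before segmentation.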
[0015] In some embodiments, the scan data comprises at least one T1 weighted magnetic resonance imaging (MRI) image or a resting-state functional MRI image in a predefined format, the predefined format of the scan data comprising at least one of the DICOM format or the NIfTI format.
[0016] In one aspect, a method for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model is provided. The method includes obtaining MRI data of the user that includes at least one scan data. The imaging device includes at least one of a camera, a screen, or an image-capturing device. The scan data includes at least one T1 weighted magnetic resonance imaging (MRI) image or a resting-state functional MRI image in a predefined format. The predefined format of the scan data includes at least one of a digital imaging and communications in medicine (DICOM) format or a neuroimaging informatics technology initiative (NIfTI) format. The method includes automatically pre-processing, by a server, the MRI data to obtain pre-processed MRI data. The method includes segmenting the pre-processed MRI data into segments using a first cortical and sub-cortical segmentation method. The segments include at least one of cortical and sub-cortical areas of the brain. The method includes determining cortical and sub-cortical volumes of the brain and a first output data by refining boundaries of the segments using a second cortical and sub-cortical segmentation method. The first output data includes at least one of a thickness of the brain from different regions, segmentation boundaries, and cortical surface delineations. The method includes evaluating the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change, as a third output, in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. The third output includes at least one of a volume of the malignant or non-malignant growth, the malignant or non-malignant growth's infiltration, and a rate of the malignant or non-malignant growth.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0018] FIG. 1 is a block diagram of a system for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model according to some embodiments herein;
[0019] FIG. 2 is a block diagram of a server of FIG. 1 according to some embodiments herein;
[0020] FIG. 3 is a block diagram of a preprocessing module of the server of FIG. 2 according to some embodiments herein;
[0021] FIG. 4 is an exemplary comprehensive report that is generated by the server according to some embodiments herein;
[0022] FIGS. 5A-5B are exemplary views of a comprehensive report generated by the server of FIG. 2 according to some embodiments herein;
[0023] FIG. 5C is an exemplary view that depicts a second output data according to some embodiments herein;
[0024] FIG. 5D is an exemplary view that depicts a fourth output data according to some embodiments herein;
[0025] FIG. 5E is an exemplary view that depicts a third output data according to some embodiments herein;
[0026] FIG. 5F is an exemplary view that depicts a sixth output data according to some embodiments herein;
[0027] FIG. 6 is a flow diagram that illustrates a method for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model according to some embodiments herein; and
[0028] FIG. 7 is a schematic diagram of a computer architecture in accordance with the embodiments herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0029] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0030] As mentioned, there is a need for analyzing the MRI data to evaluate a malignant or non-malignant growth in the brain. Embodiments herein provide a system and method for evaluating the malignant or non-malignant growth in the brain by estimating a change in cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
[0031] FIG. 1 is a block diagram of a system 100 for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user 102 using a machine learning model 110 according to some embodiments herein. The system 100 includes the user 102, an input data source 104, a network 106, and a server 108 that includes a machine learning model 110.
[0032] The server 108 includes a device processor and a non-transitory computer-readable storage medium storing one or more sequences of instructions, which when executed by the device processor cause the evaluation of a malignant or non-malignant growth in the brain associated with the user 102 using the machine learning model 110. The server 108 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, an electronic notebook, or a smartphone. In some embodiments, the system 100 may include an application that may be installed on Android-based devices, Windows-based devices, or devices running any such mobile operating system for evaluating the brain tumor using the machine learning model 110.
[0033] The server 108 is configured to connect with the input data source 104 and receive MRI data of the user 102 from the input data source 104 through the network 106. The network 106 may be a wireless network, a wired network, or a combination of a wired network and a wireless network. In some embodiments, the network 106 is the Internet.
[0034] The input data source 104 may be an imaging modality, electronic medical records (EMRs), electronic health records (EHRs), patient registries, disease registries, a user device, any medical modality, the Internet, a website, or any database from which the MRI data associated with the user 102 is obtained. The imaging modality may be a Magnetic Resonance Imaging (MRI) scanner. The user device may be, without limitation, a mobile phone, a handheld device, a Kindle, an electronic notebook, a music player, a Personal Digital Assistant (PDA), a tablet, a desktop computer, a laptop, a smartphone, a smart watch, an IoT (Internet of Things) device, a connected vehicle, and the like. The user 102 may be a patient.
[0035] The MRI data may include scan data. The scan data may include structural MRI scans, clinical information of the user 102, and demographic information of the user 102. The structural MRI scans may include a T1 weighted image (T1w), a T2 weighted image (T2w), a T1 contrast weighted image (T1cw), and a Fluid-attenuated inversion recovery (FLAIR) image of the brain of the user 102. The structural MRI scans may be in at least one of DICOM format (.dcm), Nearly Raw Raster Data (NRRD) format, NIfTI format, .tck format, .obj format, or other imaging or neuroimaging formats. The clinical information may include present or prior disease (comorbidities), relapse, rehabilitation, number of days the patient survived after surgery, frequency of chemotherapy and/or radiotherapy, post- and pre-surgery drug or medication regime, dietary changes and/or restrictions, one or more blood parameters, surgical procedure and extent of surgery, a rating of the functional status of the user 102 before and after surgery, and the like. The demographic data of the user 102 may include age, gender, marital status, family size, ethnicity, income, population data, education of the user 102, and the like.
[0036] The server 108 is further configured to pre-process the MRI data to obtain pre-processed MRI data. The server 108 generates one or more pre-processed input data by performing at least one pre-processing step for ease of quality assurance (QA) and quality control (QC). The server 108 may use one or more techniques known in the art for performing the one or more pre-processing steps. The server 108 is further configured to segment the pre-processed MRI data into segments using a first cortical and sub-cortical segmentation method. The segments include cortical and sub-cortical areas of the brain.
[0037] The cortical segments may be, for example, regions of the cerebral cortex, such as the frontal lobe, parietal lobe, temporal lobe, and occipital lobe. The sub-cortical segments may be, for example, structures located below the cerebral cortex, such as the thalamus, basal ganglia, hippocampus, and brainstem.
[0038] The server 108 is configured to determine cortical and sub-cortical volumes of the brain and a first output data by refining the boundaries of the segments using a second cortical and sub-cortical segmentation method. The first output data may include a thickness of the brain from different regions, segmentation boundaries, and cortical surface delineations.
[0039] The server 108 is configured to predict a malignant or non-malignant growth and a second output data based on the pre-processed MRI data using the machine learning model 110. The machine learning model 110 is trained by correlating historical MRI data and historical pre-processed MRI data with historical malignant or non-malignant growths of the brain and historical second outputs. The second output data includes at least one of delineation of malignant or non-malignant growth core areas, differential segmentation and/or delineation of malignant or non-malignant growth and malignant or non-malignant growth core areas, multi-differential segmentation and/or delineation of malignant or non-malignant growth, a peritumoral edema, a necrotic malignant or non-malignant growth, or a malignant or non-malignant growth core.
[0040] The server 108 is configured to evaluate the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth (for example, metrics on volumes). The server 108 is configured to estimate a change as a third output in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. The third output comprises at least one of a volume of the malignant or non-malignant growth, the malignant or non-malignant growth's infiltration, and a rate of the malignant or non-malignant growth. The change may be tracked longitudinally in cortical and tumor areas.
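The longitudinal volume comparison described above can be illustrated with a minimal change computation over two timepoints. The function name, voxel size, and scan interval below are hypothetical; this is a sketch, not the claimed evaluation method:

```python
import numpy as np

def growth_metrics(seg_t0, seg_t1, voxel_volume_mm3=1.0, interval_days=90.0):
    """Estimate longitudinal tumor change from two binary segmentation masks.

    seg_t0, seg_t1 : boolean 3D arrays marking tumor voxels at two timepoints
    Returns the volume at each timepoint and a simple growth rate (mm^3/day).
    """
    v0 = float(seg_t0.sum()) * voxel_volume_mm3
    v1 = float(seg_t1.sum()) * voxel_volume_mm3
    return {
        "volume_t0_mm3": v0,
        "volume_t1_mm3": v1,
        "growth_rate_mm3_per_day": (v1 - v0) / interval_days,
    }

# Synthetic example: tumor grows from 100 to 130 voxels over 90 days
t0 = np.zeros((10, 10, 10), dtype=bool)
t0[:4, :5, :5] = True                      # 100 tumor voxels at baseline
t1 = t0.copy()
t1[4, :6, :5] = True                       # 30 additional voxels at follow-up
metrics = growth_metrics(t0, t1)
```

In practice the two masks would come from the segmentation outputs at the pre- and post-scan timepoints, co-registered to a common space before voxel counting.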
[0041] The server 108 is configured to quantify a fourth output data by extracting a spatial distribution of pixel interrelationships in the pre-processed MRI data, signal intensities of the pre-processed MRI data, and one or more radiomics features from the pre-processed MRI data. The fourth output data includes at least one of textural information about the malignant or non-malignant growth, a direction of growth, morphology of the malignant or non-malignant growth, a volume of the malignant or non-malignant growth, and the malignant or non-malignant growth's infiltration.
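One common way to capture the "spatial distribution of pixel interrelationships" mentioned above is a gray-level co-occurrence matrix (GLCM), a standard radiomics texture descriptor. The sketch below computes two classic GLCM statistics (contrast and energy) for horizontally adjacent pixels; it is illustrative only, and the embodiment does not prescribe these exact features:

```python
import numpy as np

def glcm_features(image, levels=4):
    """Contrast and energy from a gray-level co-occurrence matrix (GLCM).

    Pairs are horizontally adjacent pixels; intensities are quantized
    into `levels` gray bins before counting co-occurrences.
    """
    bins = np.linspace(image.min(), image.max(), levels + 1)
    q = np.clip(np.digitize(image, bins) - 1, 0, levels - 1)
    cm = np.zeros((levels, levels))
    np.add.at(cm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count neighbor pairs
    p = cm / cm.sum()                                        # joint probabilities
    i, j = np.indices((levels, levels))
    return {"contrast": float((p * (i - j) ** 2).sum()),
            "energy": float((p ** 2).sum())}

# Small synthetic slice: a dark region next to a bright region
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
feats = glcm_features(img)
```

High contrast indicates frequent large intensity jumps between neighbors (heterogeneous texture), while high energy indicates uniform texture; production radiomics pipelines typically compute many such statistics over several offsets and directions.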
[0042] The server 108 is configured to compare the first and second output data using a normative model based on a group-level distribution. The server 108 is configured to assess where metrics of the user 102 fall in a healthy population curve. The server 108 is configured to estimate the change in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to obtain a fifth output data. The fifth output data may include data on percentile volume differences compared to a healthy population.
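Assessing where a user's metric falls in a healthy population curve, as described above, reduces to a percentile lookup against a reference distribution. A minimal sketch follows, using an empirical percentile and synthetic reference data; a real normative model would also condition on covariates such as age and gender:

```python
import numpy as np

def normative_percentile(user_value, healthy_values):
    """Empirical percentile of a user's regional volume within a healthy cohort."""
    healthy = np.asarray(healthy_values, dtype=float)
    # Fraction of healthy subjects with a smaller value, as a percentage
    return 100.0 * float((healthy < user_value).mean())

# Synthetic healthy-population volumes (mm^3) for one region, 100 subjects
healthy = np.linspace(4000.0, 6000.0, 100)
pct = normative_percentile(5000.0, healthy)
```

A user volume far into the tails of the healthy curve (e.g., below the 5th or above the 95th percentile) would be flagged in the fifth output data as a percentile volume difference relative to the healthy population.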
[0043] The server 108is configured to compare the pre-processed MRI data taken in a pre-operative stage of the user 102 and the pre-processed MRI data taken in a post-operative stage to provide changes in a tissue map of the user 102 as a sixth output data. The sixth output data may include assessment data on the precision of the surgical intervention and how it has affected the brain tissue surrounding the tumor, tissue maps showing the changes, and data on tumor recurrence.
[0044] The server 108 is configured to classify the malignant or non-malignant growth into at least one type of the malignant or non-malignant growth and at least one grade of the malignant or non-malignant growth to obtain a seventh output data. The server 108 is configured to assess a genotypic nature of the malignant or non-malignant growth using one or more radiomics features. The seventh output data may include malignant or non-malignant growth boundaries, data on malignant or non-malignant growth classification and grading of the malignant or non-malignant growth, and the genotypic nature of the malignant or non-malignant growth.
[0045] The machine learning model 110 may be a deep learning model. The machine learning model 110 may be a neural network model, a supervised learning model, or an unsupervised learning model. In some embodiments, the machine learning model 110 learns about a specific brain illness and a consistent deviation of features from one or more input data (data points). The features may be associated with brain aberration. The machine learning model 110 may use at least one of supervised learning approaches or unsupervised learning approaches to learn the data points across different users to improve the accuracy of the one or more analyses.
[0046] In some embodiments, the server 108 provides an eighth output data by obtaining and analysing one or more features from the one or more input data. The eighth output data may be patient profiling metrics. The one or more input data may include clinical information and demographic data associated with the user 102. The patient profiling metrics may be used for medication prescription, estimating the likely course of surgery and planning, survival analysis, and the like.
[0047] The server 108 is configured to generate a comprehensive report about the brain tumor using outputs of malignant or non-malignant growth layers, the grade of the malignant or non-malignant growth, a position of the malignant or non-malignant growth, a tissue type, and probability maps (including the first output data to the eighth output data), and to display the report on a display screen.
[0048] FIG. 2 is a block diagram of the server 108 of FIG. 1 according to some embodiments herein. The server 108 includes a database 200, the machine learning model 110, an MRI data receiving module 202, a pre-processing module 204, a first cortical and sub-cortical segmentation module 206, a second cortical and sub-cortical segmentation module 208, and a malignant or non-malignant growth evaluation module 210.
[0049] The MRI data receiving module 202 connects with the input data source 104 and receives one or more input data from the input data source 104. The one or more input data may include structural MRI scans, clinical information of the user 102, and demographic information of the user 102. The structural MRI scans may include a T1 weighted image (T1w), a T2 weighted image (T2w), a T1 contrast weighted image (T1cw), and a Fluid-attenuated inversion recovery (FLAIR) image of the brain of the user 102. The structural MRI scans may be in at least one of DICOM format (.dcm), Nearly Raw Raster Data (NRRD) format, NIfTI format, .tck format, .obj format, or other imaging or neuroimaging formats. The structural MRI scans may be captured as 2D slices. The one or more input data may be stored in the database 200.
[0050] The preprocessing module 204 automatically pre-processes the MRI data to obtain pre-processed MRI data by (i) converting the MRI data from at least one of a digital imaging and communications in medicine (DICOM) format or a nearly raw raster data (NRRD) format to a Neuroimaging Informatics Technology Initiative (NIfTI) format and generating corresponding header information as a JSON sidecar; (ii) removing scanner noise associated with an MRI machine, motion, image acquisition, and the DICOM and NIfTI formats of the MRI data to obtain noise-free MRI data; (iii) correcting motion, orientation, an origin, and angles in X, Y, Z coordinates for motion artefacts that occurred between each captured 2D slice of the noise-free MRI data to obtain motion-corrected MRI data; (iv) performing non-parametric, non-uniform intensity normalization on the motion-corrected MRI data to remove inhomogeneous MRI scanner noise and inhomogeneous artefacts to obtain a first corrected MRI data; (v) performing bias-correction on noises due to low-frequency and smooth signals on the first corrected MRI data to obtain a second corrected MRI data; (vi) scaling and normalizing intensities for voxels specified by a brain mask across the second corrected MRI data to obtain normalized MRI data; (vii) determining a brain area MRI data by removing a neck area from the normalized MRI data; (viii) computing a nonlinear transformation of the brain area MRI data to align with an MRI template using a non-linear volumetric registration; (ix) segmenting at least one of a white matter (WM), a gray matter (GM), or a cerebrospinal fluid (CSF) from the nonlinear transformation of the brain area MRI data; (x) performing registration across the nonlinear transformation of the brain area MRI data that is segmented to a Montreal Neurological Institute (MNI) coordinate space or a native/original space; (xi) performing skull stripping of the nonlinear transformation of the brain area MRI data that is registered to the MNI space or the native/original space; (xii) resampling the nonlinear transformation of the brain area MRI data that is skull stripped to obtain a homogeneous voxel size (approximately 1 mm³ isotropic); (xiii) performing spatial normalization of the nonlinear transformation of the brain area MRI data that is resampled from a user-specific space to the MNI space; (xiv) performing a cortical registration of the nonlinear transformation of the brain area MRI data that is spatially normalized from the user-specific space to the MNI space; (xv) transforming the MNI space back to the user-specific space for user-specific analysis; (xvi) covering a cortical surface of the nonlinear transformation of the brain area MRI data with triangles to fill up a hemisphere (tessellation); (xvii) reconstructing, using a cortical surface reconstruction, any missing voxels from the gray matter area of the cortical surface to obtain reconstructed MRI data; (xviii) performing spatial smoothing by filtering high frequencies from a frequency domain to increase the signal-to-noise ratio of the reconstructed MRI data; and (xix) aligning the reconstructed MRI data by a position of the anterior and posterior commissures.
[0051] The first cortical and sub-cortical segmentation module 206 is configured to segment the pre-processed MRI data into segments using a first cortical and sub-cortical segmentation method. The first cortical and sub-cortical segmentation approach may be an atlas-based cortical and sub-cortical segmentation approach.
[0052] The second cortical and sub-cortical segmentation module 208 determines cortical and sub-cortical volumes of the brain and a first output data by refining the boundaries of the segments using a second cortical and sub-cortical segmentation method. The second cortical and sub-cortical segmentation approach may use the machine learning model 110 to derive the first output data. The first output data may include a thickness from different regions of the brain, improved segmentation boundaries, and cortical surface delineations. The first output data may be used in at least one of volumetric analysis, building normative models, and identifying tumor infiltration and reach. The first output data may be stored in the database 200. The second cortical and sub-cortical segmentation module 208 uses pre-defined regions of interest (ROIs) from atlases for segmentation of the different cortical and sub-cortical areas to derive the first output data. The atlases may be selected from at least one of the Glasser atlas, the Destrieux atlas, the Desikan-Killiany atlas, and the like.
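Once the cortical and sub-cortical areas are labelled against an atlas, per-region volumes reduce to counting voxels per region ID and multiplying by the voxel volume. A minimal sketch under that assumption (the function name and the background-is-zero convention are illustrative):

```python
import numpy as np

def regional_volumes(label_map, voxel_size_mm3=1.0):
    """Compute per-region volumes (mm^3) from an atlas-based label map.

    label_map: integer array where each voxel holds an atlas region ID
    (0 is assumed to be background and is skipped).
    Returns a dict mapping region ID -> volume in mm^3.
    """
    ids, counts = np.unique(label_map, return_counts=True)
    return {int(i): float(c) * voxel_size_mm3
            for i, c in zip(ids, counts) if i != 0}
```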
[0053] The machine learning model 110 predicts a malignant or non-malignant growth and a second output data based on the pre-processed MRI data. The second output data may include the delineation of GD-enhancing malignant or non-malignant growth and non-enhancing malignant or non-malignant growth core areas, differential segmentation and/or delineation of malignant or non-malignant growth and malignant or non-malignant growth core areas, multi-differential segmentation and/or delineation of malignant or non-malignant growth, a peritumoral edema, a necrotic malignant or non-malignant growth, or a malignant or non-malignant growth core. The second output data may be stored in the database 200.
[0054] The machine learning model 110 is trained by correlating historical MRI data, historical pre-processed MRI data with historical malignant or non-malignant growths of the brain, and historical second outputs. For this, the machine learning model 110 may be trained on different MRI modalities. For example, the delineation of non-enhancing malignant or non-malignant growth core areas is derived when inputting a T1cw image and a FLAIR image into the machine learning model 110. The differential segmentation and/or delineation of GD-enhancing malignant or non-malignant growth and non-enhancing malignant or non-malignant growth core areas are derived based on a T1w image, a T1cw image, and a FLAIR image. The multi-differential segmentation and/or delineation of GD-enhancing malignant or non-malignant growth, the peritumoral edema, the necrotic malignant or non-malignant growth, and the non-enhancing malignant or non-malignant growth core are derived based on a T1w image, a T1cw image, a T2w image, and a FLAIR image.
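The modality combinations above imply co-registered volumes supplied to the model as one multi-channel input tensor. A minimal, illustrative sketch (the function name and channel ordering are assumptions, not part of the specification):

```python
import numpy as np

def stack_modalities(*volumes):
    """Stack co-registered MRI modalities (e.g. T1w, T1cw, T2w, FLAIR) into a
    single multi-channel tensor of shape (channels, D, H, W) for model input."""
    shapes = {v.shape for v in volumes}
    # All modalities must already be registered to the same voxel grid.
    assert len(shapes) == 1, "modalities must share one voxel grid"
    return np.stack(volumes, axis=0)
```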
[0055] The machine learning model 110 may be at least one of the models based on statistical approaches such as water-shedding, histogram analysis, and wavelet transform; convolutional neural networks (CNNs); transformer-based models; UNETs based on CNNs; recursive neural networks (RNNs); and the like. The machine learning model 110 may provide a holistic view of the malignant or non-malignant growth boundaries and segmentations.
[0056] The malignant or non-malignant growth evaluation module 210 evaluates the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. The malignant or non-malignant growth evaluation module 210 estimates a change as a third output in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. The third output data may include a volume of the malignant or non-malignant growth, the malignant or non-malignant growth’s infiltration, and a rate of the malignant or non-malignant growth. The third output data may be stored in the database 200. The malignant or non-malignant growth evaluation module 210 quantifies a fourth output data by extracting a spatial distribution of pixel interrelationships in the pre-processed MRI data, signal intensities of the pre-processed MRI data, and one or more radiomics features from the pre-processed MRI data. The one or more radiomics features may include first-order features, shape features (2D and 3D), gray level co-occurrence matrix (GLCM) features, gray level size zone matrix (GLSZM) features, gray level run length matrix (GLRLM) features, neighboring gray-tone difference matrix (NGTDM) features, and gray level dependence matrix (GLDM) features. The fourth output data includes at least one of textural information about the malignant or non-malignant growth, a direction of growth, a morphology of the malignant or non-malignant growth, a volume of the malignant or non-malignant growth, and the malignant or non-malignant growth’s infiltration. The fourth output data may be stored in the database 200. The malignant or non-malignant growth evaluation module 210 may use at least one of mathematical approaches, statistical approaches, and the machine learning model 110 for extraction of the features and quantification of the fourth output data.
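Of the radiomics families listed, the GLCM family is representative: a co-occurrence matrix is accumulated for one pixel offset, normalized to a joint probability, and reduced to scalar features such as contrast. A small, illustrative NumPy sketch, not the module's actual implementation:

```python
import numpy as np

def glcm_contrast(image, levels, offset=(0, 1)):
    """GLCM contrast feature for a single pixel offset.

    image: 2-D array of integer gray levels in [0, levels).
    Returns sum_ij P(i, j) * (i - j)**2, where P is the normalized
    co-occurrence matrix for the given (dy, dx) offset.
    """
    dy, dx = offset
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    # Accumulate co-occurrences of gray levels at the given offset.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[image[y, x], image[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability
    i, j = np.indices(glcm.shape)
    return float((glcm * (i - j) ** 2).sum())
```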
[0057] The malignant or non-malignant growth evaluation module 210 compares the first and second output data using a normative model based on a group-level distribution. The server 108 is configured to assess where metrics of the user 102 fall in a healthy population curve. The malignant or non-malignant growth evaluation module 210 is configured to estimate the change in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to obtain a fifth output data. The fifth output data may include data on percentile volume differences compared to a healthy population. The fifth output data may be stored in the database 200. The malignant or non-malignant growth evaluation module 210 may use patient population data, unsupervised approaches, and normative modeling to automatically arrive at better user-specific predictors.
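The placement of a user's metric on a healthy population curve can be sketched as an empirical percentile rank against a normative cohort. This is a simplification of full normative modelling; names are illustrative:

```python
import numpy as np

def normative_percentile(patient_value, healthy_values):
    """Percentile rank of a patient's regional metric against a healthy cohort.

    Returns the percentage of healthy subjects with a strictly smaller value,
    i.e. where the user's metric falls on the healthy population curve.
    """
    healthy = np.asarray(healthy_values, dtype=np.float64)
    return float((healthy < patient_value).mean() * 100.0)
```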
[0058] The malignant or non-malignant growth evaluation module 210 obtains a fifth output data from the one or more pre-processed input data. The radiomics analysis module 214 extracts the spatial distribution of pixel interrelationships and signal intensities from the one or more pre-processed input data to quantify the fifth output data.
[0059] The malignant or non-malignant growth evaluation module 210 compares the pre-processed MRI data taken during a pre-operative stage of the user 102 and the pre-processed MRI data taken during a post-operative stage to provide changes in a tissue map of the user 102 as a sixth output data. The sixth output data may include assessment data on the precision of the surgical intervention and how it has affected the brain tissue surrounding the tumor, tissue maps showing the changes, and data on tumor recurrence. The malignant or non-malignant growth evaluation module 210 may use the Glasser ROI template atlas as a reference to provide the changes in the tissue map. The malignant or non-malignant growth evaluation module 210 further classifies between tumor recurrence as opposed to radiation and/or chemotherapy-based necrosis of the brain tissue by analysing the one or more pre-processed input data taken during the post-operative stage. The sixth output data may be stored in the database 200. The malignant or non-malignant growth evaluation module 210 may use the machine learning model 110 to classify tumor recurrences. The machine learning model 110 may be a deep neural network (DNN) model.
[0060] The malignant or non-malignant growth evaluation module 210 classifies the malignant or non-malignant growth into at least one type of the malignant or non-malignant growth and at least one grade of the malignant or non-malignant growth. The malignant or non-malignant growth evaluation module 210 is configured to assess a genotypic nature of the malignant or non-malignant growth using the one or more radiomics features as a seventh output. The type of the malignant or non-malignant growth may include high-grade glioma (HGG), low-grade glioma (LGG), non-malignant brain tumor, meningioma, pituitary adenoma, brain metastasis, and acoustic neuroma. The grade of the malignant or non-malignant growth includes grades 1, 2, 3, and 4 defined by the World Health Organization (WHO) standards. The seventh output data may include tumor boundaries, data on the classification and grading of the tumor, and the genotypic nature of the tumor. The seventh output data may be stored in the database 200.
[0061] The machine learning model 110 may use (i) supervised learning approaches such as deep learning models (such as convolutional neural networks) and machine learning tools (such as logistic regression, support vector machines, and the like) to arrive at different classifications; and (ii) unsupervised clustering approaches such as k-means clustering and t-distributed stochastic neighbour embedding (t-SNE) algorithms to automatically arrive at malignant or non-malignant growth groups for better holistic classification. The malignant or non-malignant growth evaluation module 210 may use radiomics analysis to give descriptions of the tumor such as tumor size, shape, location, and morphometry.
[0062] The malignant or non-malignant growth evaluation module 210 may assess the validity of the machine learning model 110 using metrics such as accuracy, sensitivity, specificity, F1 score, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic curve (AUC), and area under the precision-recall curve (AUPRC). The malignant or non-malignant growth evaluation module 210 may use interpretable modeling for the machine learning model 110 to give interpretable inferences that provide validity for the machine learning model 110. The interpretable modeling may include generating probability maps.
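Most of the listed validity metrics derive directly from the binary confusion matrix; a minimal sketch of their computation (illustrative, not the module's implementation; AUC and AUPRC additionally require ranked scores and are omitted here):

```python
def classification_metrics(tp, fp, tn, fn):
    """Validity metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}
```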
[0063] The malignant or non-malignant growth evaluation module 210 may generate synthetic data from the same class distribution and use the synthetic data for training the machine learning model 110 to rectify class imbalance issues and improve the precision of tumor classification. The malignant or non-malignant growth evaluation module 210 may use at least one of statistical approaches or artificial intelligence (AI) approaches to generate the synthetic data. The artificial intelligence (AI) approaches may include generation using generative adversarial networks (GANs).
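As a far simpler stand-in for GAN-based synthesis, class imbalance can also be rectified by randomly oversampling the minority classes until all classes match the majority count. The sketch below is illustrative only; the specification contemplates statistical or GAN-based generation:

```python
import numpy as np

def oversample_minority(features, labels, rng=None):
    """Balance classes by duplicating minority-class rows (with replacement)
    until every class matches the majority class count."""
    if rng is None:
        rng = np.random.default_rng(0)
    features, labels = np.asarray(features), np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_x, out_y = [features], [labels]
    for cls, count in zip(classes, counts):
        if count < target:
            idx = rng.choice(np.flatnonzero(labels == cls), target - count)
            out_x.append(features[idx])
            out_y.append(labels[idx])
    return np.concatenate(out_x), np.concatenate(out_y)
```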
[0064] The machine learning model 110 may be optimized by enabling learning about a specific brain illness and a consistent deviation of features from the one or more input data (i.e., the one or more data points). The features may be associated with brain aberration. The machine learning model 110 may use at least one of supervised learning approaches or unsupervised learning approaches to learn the one or more data points across different users to improve the accuracy of the one or more analyses (e.g., tumor delineation, segmentation, and classification).
[0065] The malignant or non-malignant growth evaluation module 210 obtains one or more features from one or more input data to provide an eighth output data. The one or more input data may include clinical information and demographic data associated with the user 102. The eighth output data may include patient profiling metrics and may be stored in the database 200. The patient profiling metrics may be used for medication prescription, estimating the likely course of surgery and planning, survival analysis, and the like.
[0066] The malignant or non-malignant growth evaluation module 210 generates a comprehensive report about the brain tumor using outputs of malignant or non-malignant growth layers, the grade of the malignant or non-malignant growth, the position of the malignant or non-malignant growth, a tissue type, and probability maps, and displays the report on a display screen. The comprehensive report may be a probability map of the malignant or non-malignant growths. The comprehensive report may include different colors to specify segmentation masks of different areas of the malignant or non-malignant growth. The comprehensive report may be viewed using a 3D MRI viewer that allows visualizing the 3D shape and size of the malignant or non-malignant growth regions for further assessment.
[0067] FIG. 3 is a block diagram of a preprocessing module of the server of FIG. 2 according to some embodiments herein. The pre-processing module 204 includes a dicom format converting module 302, a scanner noise removing module 304, a motion correction module 306, a first corrected MRI data obtaining module 308, a bias correction module 310, a scaling and normalizing module 312, a brain area determining module 314, a non-linear transformation computing module 316, a white matter, grey matter segmenting module 318, a registration module 320, a skull stripping module 322, a resampling module 324, a spatial normalization module 326, a cortical registration module 328, an MNI space transformation module 330, a cortical surface covering module 332, a cortical surface reconstruction module 334, and a spatial smoothing module 336.
[0068] The dicom format converting module 302 converts the MRI data from at least one of a digital imaging and communications in medicine (DICOM) format or a nearly raw raster data (NRRD) format to a Neuroimaging Informatics Technology Initiative (NIfTI) format and generates corresponding header information as a JSON sidecar. The scanner noise removing module 304 removes scanner noise associated with an MRI machine, motion, image acquisition, the DICOM, and the NIfTI format of the MRI data to obtain noise-free MRI data. The scanner noise removing module 304 may use at least one of bilateral filtering or block-matching and 3D filtering for removing the standard scanner noise.
[0069] The motion correction module 306 corrects motion, orientation, origin, and angles in X, Y, and Z coordinates for motion artifacts that occurred between each captured 2D slice of the noise-free MRI data to obtain motion-corrected MRI data. The first corrected MRI data obtaining module 308 performs non-parametric, non-uniform intensity normalization on the motion-corrected MRI data to remove inhomogeneous MRI scanner noise and inhomogeneous artifacts to obtain a first corrected MRI data.
[0070] The one or more input data may be corrupted by at least one of low-frequency signals or very smooth signals from old MRI scanners. The bias correction module 310 performs bias correction on noises due to low-frequency and smooth signals on the first corrected MRI data to obtain a second corrected MRI data. The scaling and normalizing module 312 scales and normalizes intensities for voxels specified by a brain mask across the second corrected MRI data to obtain normalized MRI data. This pre-processing step may scale down the artifacts in the one or more input data and help in better pre-processing of the one or more input data. The brain area determining module 314 determines brain area MRI data by removing a neck area from the normalized MRI data.
[0071] The non-linear transformation computing module 316 computes a nonlinear transformation of the brain area MRI data to align with an MRI template using non-linear volumetric registration. The white matter, grey matter segmenting module 318 segments at least one of a white matter (WM), a gray matter (GM), a cerebrospinal fluid (CSF), an intracranial volume (ICV), a skull and scalp area, and others from the nonlinear transformation of the brain area MRI data. The registration module 320 performs registration across the nonlinear transformation of the brain area MRI data (such as T1w, T1cw, T2, and FLAIR) that is segmented to a Montreal Neurological Institute (MNI) co-ordinate space or a native/original space.
[0072] The skull stripping module 322 performs skull stripping of the nonlinear transformation of the brain area MRI data that is registered to the MNI space or the native/original space. The skull stripping module 322 may use the machine learning model 110 to strip the skull from the MRI data. The machine learning model 110 may be a deep learning model known in the art. The skull stripping may improve the robustness of the registration to one or more MRI data and MNI normalization.
[0073] The resampling module 324 resamples the nonlinear transformation of the brain area MRI data that is skull stripped for obtaining a homogeneous voxel size of the nonlinear transformation of the brain area MRI data (approximately 1 mm^3 isotropy for a homogeneous voxel size). The spatial normalization module 326 performs spatial normalization of the nonlinear transformation of the brain area MRI data that is resampled from a user-specific space to the MNI space.
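The resampling to an approximately 1 mm^3 isotropic grid can be sketched as nearest-neighbour index mapping from the target grid back to the source grid. This is a simplification; production pipelines typically use trilinear or higher-order interpolation, and the names below are illustrative:

```python
import numpy as np

def resample_isotropic(volume, voxel_size, target=1.0):
    """Nearest-neighbour resampling of a volume to (approximately) isotropic
    voxels of `target` mm, given the input spacing (dz, dy, dx) in mm."""
    new_shape = [max(1, int(round(n * s / target)))
                 for n, s in zip(volume.shape, voxel_size)]
    # Map each output index back to the nearest source index per axis.
    idx = [np.minimum((np.arange(m) * n / m).astype(int), n - 1)
           for m, n in zip(new_shape, volume.shape)]
    return volume[np.ix_(*idx)]
```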
[0074] The cortical registration module 328 performs a cortical registration of the nonlinear transformation of the brain area MRI data that is spatially normalized from the user-specific space to the MNI space. The MNI space transformation module 330 transforms the MNI space back to the user-specific space for user-specific analysis. The cortical surface covering module 332 covers a cortical surface of the nonlinear transformation of the brain area MRI data with triangles to fill up a hemisphere (tessellation).
[0075] The cortical surface reconstruction module 334 reconstructs, using a cortical surface reconstruction, any missing voxels from the gray matter area of the cortical surface to obtain reconstructed MRI data. The spatial smoothing module 336 performs spatial smoothing by filtering high frequencies from a frequency domain to increase the signal-to-noise ratio of the reconstructed MRI data. In spatial smoothing, the smallest-scale changes among voxels may be removed. The spatial smoothing module 336 aligns the reconstructed MRI data by a position of the anterior and posterior commissures. In spatial normalization, one or more MRI data may be translated onto a common shape and size (i.e., the MNI template) to compare the user’s brain with another user’s.
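The spatial smoothing described here is commonly realized as separable Gaussian filtering, which attenuates the highest spatial frequencies while preserving larger-scale structure. A minimal NumPy sketch (illustrative; it zero-pads at the volume borders):

```python
import numpy as np

def gaussian_smooth(volume, sigma=1.0, radius=3):
    """Separable Gaussian smoothing of a 3-D volume.

    A normalized 1-D Gaussian kernel is convolved along each axis in turn,
    suppressing the smallest-scale intensity changes among voxels.
    """
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()  # preserve total intensity away from borders
    out = volume.astype(np.float64)
    for axis in range(out.ndim):
        out = np.apply_along_axis(
            lambda line: np.convolve(line, kernel, mode="same"), axis, out)
    return out
```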
[0076] Thus, the pre-processing module 204 generates one or more pre-processed input data for ease of quality assurance (QA) and quality control (QC). The pre-processing module 204 may use one or more techniques known in the art for one or more pre-processes such as motion correction, non-parametric non-uniform intensity normalization, bias-correction, intensity normalization, non-linear volumetric registration, neck removal, segmentation of WM, GM, CSF, etc., registration of multimodal MRI, spatial normalization, cortical registration, user-specific spatial transformation, tessellation, cortical surface reconstruction, and spatial smoothing. One or more pre-processed input data may be stored in the database 200.
[0077] FIG. 4 is an exemplary comprehensive report 400 that is generated by the server 108 according to some embodiments herein. The exemplary comprehensive report 400 includes an output of malignant/non-malignant growth layers 402, a volume of segments 404, a grade of the malignant/non-malignant growth 406, textural information of the malignant/non-malignant growth 408, a position of the malignant/non-malignant growth 410, and tissue probability maps 412. The output of malignant/non-malignant growth layers 402 includes the first output to the eighth output. The volume of segments 404 includes cortical and sub-cortical areas of the brain. The grade of the malignant/non-malignant growth 406 includes grades 1, 2, 3, and 4 defined by the World Health Organization (WHO) standards. The textural information of the malignant/non-malignant growth 408 includes the texture of the malignant/non-malignant growth. The texture of the malignant/non-malignant growth may be smooth, heterogeneous, cystic, necrotic, enhancing rim, hemorrhagic, infiltrative, fibrillary, etc. The position of the malignant/non-malignant growth 410 indicates a position of the growth. The position may be the brain stem, pineal region, cerebellum, frontal lobe, temporal lobe, etc. The tissue probability maps 412 include probability maps of a tissue type being present in a specific region of the brain. The tissue probability maps may be tumor probability maps, gray matter probability maps, white matter probability maps, etc.
[0078] FIGS. 5A-5B are exemplary views of a comprehensive report generated by the server 108 of FIG. 2 according to some embodiments herein. FIG. 5A is an exemplary view that depicts a fifth output data derived by the malignant or non-malignant growth evaluation module 210 of FIG. 2 according to some embodiments herein. The exemplary views at 502A, 502B depict the fifth output data that shows percentile volume differences compared to the healthy population.
[0079] FIG. 5B is an exemplary view that depicts a seventh output data derived by the malignant or non-malignant growth evaluation module 210 of FIG. 2 according to some embodiments herein. The exemplary view depicts the seventh output data that shows the detected tumor boundaries at views 506A-F and provides a corresponding probability map of the tumors at views 508A-F. The numeral 510 depicts a classification label indicating the type of tumors. The classification label may include high-grade glioma (HGG), low-grade glioma (LGG), meningioma (MEN), pituitary adenoma (PA), brain metastasis (METS), and acoustic neuroma (AN).
[0080] FIG. 5C is an exemplary view that depicts a second output data according to some embodiments herein. The exemplary view depicts the second output data that shows the delineation of different areas of the tumor at view 512 and color-specified segmentation masks of different areas of the tumor at view 514. The numerals 516A, 516B, 516C, and 516D denote the peritumoral edema, the GD-enhancing tumor, the non-enhancing tumor core, and the necrotic tumor, respectively. The 3D MRI viewer allows one to visualize the 3D shape and size of the malignant or non-malignant growth regions for further assessment.
[0081] FIG. 5D is an exemplary view that depicts a fourth output data according to some embodiments herein. The exemplary view at 518 depicts a tumor shape of the tumors and the exemplary view at 520 depicts metrics associated with at least the tumor shape, size, cortical volumes, and morphometry with corresponding values.
[0082] FIG. 5E is an exemplary view that depicts a third output data according to some embodiments herein. The exemplary views at 522A-D depict T1w, T1cw, T2w, and T2-FLAIR MRI images overlaid with ground truth (GT) at timepoint 1 (i.e., day 1), respectively. The exemplary views at 524A-D depict the corresponding images overlaid with GT at timepoint 2 (i.e., 282 days after timepoint 1). The numerals 526A, 526B, 526C, and 526D denote the peritumoral edema (ED), the non-enhancing tumor core (NE), the necrotic tumor core (NC), and the enhancing tumor (ET), respectively. The exemplary view shows that the ET 526D, the NC 526C, and other surrounding tissues of the tumor at timepoint 2 increase over the elapsed time when compared with timepoint 1.
[0083] FIG. 5F is an exemplary view that depicts a sixth output data according to some embodiments herein. The sixth output data shows changes in the tissue map using the Glasser ROI template atlas as a reference.
[0084] FIG. 6 is a flow diagram that illustrates a method for evaluating a malignant or non-malignant growth in a brain from multi-modal structural magnetic resonance imaging (MRI) data of a user using a machine learning model according to some embodiments herein. At step 602, the method includes obtaining an MRI data of the user 102 that includes at least one scan data. The imaging device 104 includes at least one of a camera or a screen. The scan data includes at least one of a T1-weighted magnetic resonance imaging (MRI) image or a resting-state functional MRI image in a predefined format. The predefined format of the scan data includes at least one of a digital imaging and communications in medicine (DICOM) format or a neuroimaging informatics technology initiative (NIfTI) format. At step 604, the method includes automatically pre-processing, by the server 108, the MRI data to obtain pre-processed MRI data. At step 606, the method includes segmenting the pre-processed MRI data into segments using a first cortical and a sub-cortical segmentation method. The segments include at least one of the cortical and sub-cortical areas of the brain.
[0085] At step 608, the method includes determining cortical and sub-cortical volumes of the brain and a first output data by refining boundaries of the segments using a second cortical and sub-cortical segmentation method. The first output data include at least one of a thickness of the brain from different regions, segmentation boundaries, and cortical surface delineations.
[0086] At step 612, the method includes evaluating the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change as a third output in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth. The third output includes at least one of a volume of the malignant or non-malignant growth, a malignant or non-malignant growth infiltration, and a rate of the malignant or non-malignant growth.
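The third output's change and rate estimates between two timepoints can be sketched as simple differences over the elapsed time. The names and units below are illustrative, assuming volumes in mm^3 and the interval in days:

```python
def growth_change(volume_t1, volume_t2, days_elapsed):
    """Longitudinal change between two timepoints: absolute change,
    percentage change, and an average growth rate in mm^3 per day."""
    change = volume_t2 - volume_t1
    return {
        "change_mm3": change,
        "change_pct": 100.0 * change / volume_t1,
        "rate_mm3_per_day": change / days_elapsed,
    }
```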
[0087] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 7, with reference to FIGS. 1 through 6. This schematic drawing illustrates a hardware configuration of a server 108/computer system/computing device in accordance with the embodiments herein. The system includes at least one processing device CPU 10 that may be interconnected via system bus 15 to various devices such as a random-access memory (RAM) 12, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 58 and program storage devices 50 that are readable by the system. The system can read the inventive instructions on the program storage devices 50 and follow these instructions to execute the methodology of the embodiments herein. The system further includes a user interface adapter 22 that connects a keyboard 28, mouse 50, speaker 52, microphone 55, and/or other user interface devices such as a touch screen device (not shown) to the bus 15 to gather user input. Additionally, a communication adapter 20 connects the bus 15 to a data processing network 52, and a display adapter 25 connects the bus 15 to a display device 26, which provides a graphical user interface (GUI) 56 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0088] The system 100 performs the comprehensive analysis on the brain tumor and generates the comprehensive report on the brain tumor associated with the user 102. Hence, the system 100 with the comprehensive report allows for a holistic approach to diagnosis and treatment planning of brain tumors. As the system 100 reports the analytic data in all aspects about the tumors (e.g., tumor boundaries and segmentations, quantitative estimation results on tumor growth, textural information, and other tumor metrics such as tumor grade, tumor type, the direction of growth, tumor morphology, etc.), the system 100 ensures an accurate and/or speedy diagnosis and treatment planning for the brain tumors.
[0089] Moreover, the system 100 performs the one or more analyses on the tumor using one or more modalities of sMRI such as T1w, T2w, T1cw, and FLAIR to obtain information on the tumor in all aspects. The one or more analyses include (i) a cortical and sub-cortical segmentation to derive a first output data; (ii) a tumor delineation and segmentation to derive a second output data; (iii) a longitudinal analysis to obtain a third output data; (iv) a normative modeling to obtain a fourth output data; (v) a radiomics analysis to quantify a fifth output data; (vi) a tumor classification and grading to obtain a sixth output data; and (vii) a preoperative and postoperative tumor analysis to obtain a seventh output data.
[0090] For example, the segmentation analysis with all four modalities (T1w, T2w, T1cw, and FLAIR) provides additional segmentation information such as the peritumoral edema, the necrotic tumor, etc., in addition to the tumor core delineation. Further, the system 100 performs the patient profiling using the demographic and past clinical information of the user 102, and the normative modeling using the past patient population data. Both these approaches help in better patient personalization and surgical planning.
[0091] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the appended claims.
CLAIMS:
I/We Claim:
1. A system for evaluating a malignant or non-malignant growth in a brain by estimating a change in cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth in the brain from multi-modal structural magnetic resonance imaging (MRI) data of a user (102) using a machine learning model (110), the system comprising:
an imaging device (104) that comprises at least one of a camera, a screen, or an image capturing device, wherein the imaging device (104) obtains an MRI data of the user (102) that comprises at least one scan data;
a server (108) that acquires the MRI data of the user (102) from the imaging device (104) and processes the MRI data using the machine learning model (110), wherein the server (108) comprises:
a memory that stores a database;
a processor that is configured to execute the machine learning model (110) and is configured to,
automatically pre-process the MRI data to obtain pre-processed MRI data;
characterized in that,
segment, using a first cortical and a sub-cortical segmentation method, the pre-processed MRI data into segments, wherein the segments comprise at least one of cortical and sub-cortical areas of the brain;
determine cortical and sub-cortical volumes of the brain and a first output data by refining, using a second cortical and the sub-cortical segmentation method, boundaries of the segments, wherein the first output data comprise at least one of a thickness of the brain from different regions, segmentation boundaries, and cortical surface delineations;
predict, using the machine learning model (110), a malignant or non-malignant growth and a second output data based on the pre-processed MRI data, wherein the machine learning model (110) is trained by correlating historical MRI data, historical pre-processed MRI data with historical malignant or non-malignant growths of the brain, and historical second outputs, wherein the second output data comprises at least one of delineation of malignant or non-malignant growth core areas, differential segmentation and/or delineation of malignant or non-malignant growth and malignant or non-malignant growth core areas, multi-differential segmentation and/or delineation of malignant or non-malignant growth, a peritumoral edema, a necrotic malignant or non-malignant growth, or a malignant or non-malignant growth core; and
evaluate the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change as a third output in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth, wherein the third output comprises at least one of a volume of the malignant or non-malignant growth, a malignant or non-malignant growth’s infiltration, and a rate of the malignant or non-malignant growth.
2. The system as claimed in claim 1, wherein the processor is configured to quantify a fourth output data by extracting a spatial distribution of pixel interrelationships in the pre-processed MRI data, signal intensities of the pre-processed MRI data, and one or more radiomics features from the pre-processed MRI data, wherein the fourth output data comprises at least one of textural information of the malignant or non-malignant growth, a direction of growth, a morphology of the malignant or non-malignant growth, a volume of the malignant or non-malignant growth, and the malignant or non-malignant growth's infiltration.
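The "spatial distribution of pixel interrelationships" of claim 2 is the basis of classical texture radiomics. As a minimal sketch only, a grey-level co-occurrence matrix (GLCM) and one derived texture metric (contrast) could be computed as below; the function names and the single-offset simplification are illustrative assumptions, not the claimed method.

```python
import numpy as np

def cooccurrence(img: np.ndarray, levels: int, dy: int = 0, dx: int = 1) -> np.ndarray:
    """Count neighbouring grey-level pairs at offset (dy, dx): a minimal GLCM."""
    glcm = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    return glcm

def contrast(glcm: np.ndarray) -> float:
    """GLCM contrast: intensity differences weighted by pair probability."""
    p = glcm / glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(p * (i - j) ** 2))
```

A production radiomics pipeline would aggregate many such matrices over offsets and directions; this sketch shows only the core pixel-interrelationship counting.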
3. The system as claimed in claim 1, wherein the processor is configured to compare, using a normative model, the first output data and the second output data based on a group-level distribution (i) to assess where metrics of the user (102) fall in a healthy population curve and (ii) to estimate the change in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to obtain a fifth output data, wherein the fifth output data comprises data on percentile volume differences compared to a healthy population.
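The normative comparison of claim 3, placing a user's metric on a healthy-population curve as a percentile, reduces to an empirical-percentile computation. The sketch below assumes a simple sample-based normative model; real normative models typically also adjust for age, sex, and scanner.

```python
import numpy as np

def percentile_in_population(value: float, healthy_volumes: np.ndarray) -> float:
    """Empirical percentile of a subject metric (e.g. a regional volume)
    within a sample drawn from a healthy population."""
    return float(np.mean(healthy_volumes <= value) * 100.0)
```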
4. The system as claimed in claim 1, wherein the processor is configured to classify the malignant or non-malignant growth into at least one type of the malignant or non-malignant growth and at least one grade of the malignant or non-malignant growth, and assess a genotypic nature of the malignant or non-malignant growth using one or more radiomics features.
5. The system as claimed in claim 1, wherein the processor is configured to compare the pre-processed MRI data taken at a pre-operative stage of the user (102) and the pre-processed MRI data taken at a post-operative stage to provide changes in a tissue map of the user (102) as a sixth output data.
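The pre-operative versus post-operative comparison of claim 5 can be sketched as a voxel-wise change map, as below. The thresholding scheme and the -1/0/+1 encoding are illustrative assumptions; registered, preprocessed volumes are taken as given.

```python
import numpy as np

def tissue_change_map(pre: np.ndarray, post: np.ndarray, threshold: float) -> np.ndarray:
    """Mark voxels whose intensity changed by more than `threshold`
    between stages: -1 = decrease (e.g. resected tissue), +1 = increase,
    0 = unchanged."""
    diff = post.astype(float) - pre.astype(float)
    change = np.zeros(diff.shape, dtype=int)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return change
```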
6. The system as claimed in claim 1, wherein the processor is configured to generate a comprehensive report using outputs of malignant or non-malignant growth layers, the grade of the malignant or non-malignant growth, a position of the malignant or non-malignant growth, a tissue type, and probability maps.
7. The system as claimed in claim 1, wherein the processor is configured to perform an image preprocessing and noise reduction using statistical methods, and the machine learning model (110) is trained by correlating historical users, historical pre-processed MRI data, and historical MRI data with historical malignant or non-malignant growth core areas, historical malignant or non-malignant growth types, historical malignant or non-malignant growth grades, historical genotypic natures of the malignant or non-malignant growth, historical user profiles, and historical survival analytics.
8. The system as claimed in claim 1, wherein the processor is configured to obtain the pre-processed MRI data by
(i) converting the MRI data from at least one of a digital imaging and communications in medicine (DICOM) format or a nearly raw raster data (NRRD) format to a neuroimaging informatics technology initiative (NIfTI) format and generating corresponding header information;
(ii) removing a scanner noise associated with an MRI machine, motion, image acquisition, the DICOM and the NIfTI format of the MRI data to obtain a noise-free MRI data;
(iii) correcting motion, orientation, an origin, and angles in X, Y, Z coordinates for motion artefacts occurring between each captured 2D slice of the noise-free MRI data to obtain a motion corrected MRI data;
(iv) performing non-parametric, non-uniform intensity normalization on the motion corrected MRI data to remove inhomogeneous MRI scanner noise and inhomogeneous artefacts to obtain a first corrected MRI data;
(v) performing bias-correction on noises due to low-frequency and smooth signals on the first corrected MRI data to obtain a second corrected MRI data;
(vi) scaling and normalizing intensities for voxels specified by a brain mask across the second corrected MRI data to obtain a normalized MRI data;
(vii) determining a brain area MRI data by removing a neck area from the normalized MRI data;
(viii) computing a nonlinear transformation of the brain area MRI data to align with a MRI template using a non-linear volumetric registration;
(ix) segmenting at least one of a white matter (WM), a gray matter (GM), a cerebrospinal fluid (CSF), from the nonlinear transformation of the brain area MRI data;
(x) performing registration across the nonlinear transformation of the brain area MRI data that is segmented to a Montreal Neurological Institute (MNI) coordinate space or a native/original space;
(xi) performing skull stripping of the nonlinear transformation of the brain area MRI data that is registered to the MNI space or the native/original space;
(xii) resampling the nonlinear transformation of the brain area MRI data that is skull stripped for obtaining a homogeneous voxel size of the nonlinear transformation of the brain area MRI data;
(xiii) performing spatial normalization of the nonlinear transformation of the brain area MRI data that is resampled from a user-specific space to the MNI space;
(xiv) performing a cortical registration of the nonlinear transformation of the brain area MRI data that is spatially normalized from the user-specific space to the MNI space;
(xv) transforming the MNI space back to the user-specific space for user-specific analysis;
(xvi) covering a cortical surface of the nonlinear transformation of the brain area MRI data with triangles to fill up a hemisphere;
(xvii) reconstructing, using a cortical surface reconstruction, any missing voxels from the gray matter area of the cortical surface to obtain a reconstructed MRI data;
(xviii) performing spatial smoothing by filtering high frequencies from a frequency domain to increase the signal-to-noise ratio of the reconstructed MRI data; and
(xix) aligning the reconstructed MRI data by a position of anterior and posterior commissures.
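Among the preprocessing steps (i)-(xix) above, step (vi) — scaling and normalising intensities for voxels specified by a brain mask — can be sketched as follows. This is a minimal illustration assuming z-score normalisation within the mask, with voxels outside the mask zeroed; the actual system may use a different scaling scheme.

```python
import numpy as np

def normalize_in_mask(volume: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Z-score intensities only for voxels inside the brain mask
    (cf. step (vi)); voxels outside the mask are set to zero."""
    out = np.zeros(volume.shape, dtype=float)
    vox = volume[brain_mask].astype(float)
    out[brain_mask] = (vox - vox.mean()) / vox.std()
    return out
```

In a full pipeline this would run after bias correction (steps (iv)-(v)) and before registration to the MNI space; here the mask is simply assumed to be available.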
9. The system as claimed in claim 1, wherein the scan data comprises at least one of a T1-weighted magnetic resonance imaging (MRI) image or a resting-state functional MRI image in a predefined format, wherein the predefined format of the scan data comprises at least one of the DICOM format or the NIfTI format.
10. A processor-implemented method for evaluating a malignant or non-malignant growth in a brain by estimating a change in cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth in the brain from multi-modal structural magnetic resonance imaging (MRI) data of a user (102) using a machine learning model, the method comprising:
obtaining, by an imaging device (104) that comprises at least one of a camera or a screen, an MRI data of the user (102) that comprises at least one of scan data, wherein the scan data comprises at least one of a T1-weighted magnetic resonance imaging (MRI) image or a resting-state functional MRI image in a predefined format, wherein the predefined format of the scan data comprises at least one of a digital imaging and communications in medicine (DICOM) format or a neuroimaging informatics technology initiative (NIfTI) format;
automatically pre-processing, by a server (108), the MRI data to obtain pre-processed MRI data;
segmenting, using a first cortical and a sub-cortical segmentation method, the pre-processed MRI data into segments, wherein the segments comprise at least one of cortical and sub-cortical areas of the brain;
determining cortical and sub-cortical volumes of the brain and a first output data by refining, using a second cortical and sub-cortical segmentation method, boundaries of the segments, wherein the first output data comprises at least one of a thickness of the brain at different regions, segmentation boundaries, and cortical surface delineations;
predicting, using a machine learning model, a malignant or non-malignant growth and a second output data based on the pre-processed MRI data, wherein the machine learning model is trained by correlating historical MRI data and historical pre-processed MRI data with historical malignant or non-malignant growths of the brain and historical second outputs, wherein the second output data comprises at least one of delineation of malignant or non-malignant growth core areas, differential segmentation and/or delineation of malignant or non-malignant growth and malignant or non-malignant growth core areas, multi differential segmentation and/or delineation of malignant or non-malignant growth, a peritumoral edema, a necrotic malignant or non-malignant growth, or a malignant or non-malignant growth core; and
evaluating the malignant or non-malignant growth in the brain by comparing the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth to estimate a change as a third output in the cortical and sub-cortical volumes of the brain and the malignant or non-malignant growth, wherein the third output comprises at least one of a volume of the malignant or non-malignant growth, the malignant or non-malignant growth's infiltration, and a rate of the malignant or non-malignant growth.
Dated this 01st June, 2023
Signature of the Patent Agent
(ARJUN KARTHIK BALA)
IN/PA-1021
Agent for Applicant.
| # | Name | Date |
|---|---|---|
| 1 | 202241031535-STATEMENT OF UNDERTAKING (FORM 3) [01-06-2022(online)].pdf | 2022-06-01 |
| 2 | 202241031535-PROVISIONAL SPECIFICATION [01-06-2022(online)].pdf | 2022-06-01 |
| 3 | 202241031535-PROOF OF RIGHT [01-06-2022(online)].pdf | 2022-06-01 |
| 4 | 202241031535-POWER OF AUTHORITY [01-06-2022(online)].pdf | 2022-06-01 |
| 5 | 202241031535-FORM FOR STARTUP [01-06-2022(online)].pdf | 2022-06-01 |
| 6 | 202241031535-FORM FOR SMALL ENTITY(FORM-28) [01-06-2022(online)].pdf | 2022-06-01 |
| 7 | 202241031535-FORM 1 [01-06-2022(online)].pdf | 2022-06-01 |
| 8 | 202241031535-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-06-2022(online)].pdf | 2022-06-01 |
| 9 | 202241031535-EVIDENCE FOR REGISTRATION UNDER SSI [01-06-2022(online)].pdf | 2022-06-01 |
| 10 | 202241031535-DRAWINGS [01-06-2022(online)].pdf | 2022-06-01 |
| 11 | 202241031535-DRAWING [01-06-2023(online)].pdf | 2023-06-01 |
| 12 | 202241031535-CORRESPONDENCE-OTHERS [01-06-2023(online)].pdf | 2023-06-01 |
| 13 | 202241031535-COMPLETE SPECIFICATION [01-06-2023(online)].pdf | 2023-06-01 |
| 14 | 202241031535-Request Letter-Correspondence [28-06-2023(online)].pdf | 2023-06-28 |
| 15 | 202241031535-Power of Attorney [28-06-2023(online)].pdf | 2023-06-28 |
| 16 | 202241031535-FORM28 [28-06-2023(online)].pdf | 2023-06-28 |
| 17 | 202241031535-Form 1 (Submitted on date of filing) [28-06-2023(online)].pdf | 2023-06-28 |
| 18 | 202241031535-Covering Letter [28-06-2023(online)].pdf | 2023-06-28 |
| 19 | 202241031535-FORM-9 [21-09-2023(online)].pdf | 2023-09-21 |
| 20 | 202241031535-STARTUP [26-09-2023(online)].pdf | 2023-09-26 |
| 21 | 202241031535-FORM28 [26-09-2023(online)].pdf | 2023-09-26 |
| 22 | 202241031535-FORM 18A [26-09-2023(online)].pdf | 2023-09-26 |
| 23 | 202241031535-FER.pdf | 2023-11-16 |
| 24 | 202241031535-FORM 3 [15-12-2023(online)].pdf | 2023-12-15 |
| 25 | 202241031535-FORM 3 [22-02-2024(online)].pdf | 2024-02-22 |
| 26 | 202241031535-OTHERS [15-05-2024(online)].pdf | 2024-05-15 |
| 27 | 202241031535-FER_SER_REPLY [15-05-2024(online)].pdf | 2024-05-15 |
| 28 | 202241031535-CORRESPONDENCE [15-05-2024(online)].pdf | 2024-05-15 |
| 29 | 202241031535-COMPLETE SPECIFICATION [15-05-2024(online)].pdf | 2024-05-15 |
| 30 | 202241031535-CLAIMS [15-05-2024(online)].pdf | 2024-05-15 |
| 31 | 202241031535-US(14)-HearingNotice-(HearingDate-04-08-2025).pdf | 2025-06-30 |
| 32 | 202241031535-Correspondence to notify the Controller [16-07-2025(online)].pdf | 2025-07-16 |
| 33 | 202241031535-Correspondence to notify the Controller [28-07-2025(online)].pdf | 2025-07-28 |
| 34 | 202241031535-Annexure [28-07-2025(online)].pdf | 2025-07-28 |
| 35 | 202241031535-Written submissions and relevant documents [14-08-2025(online)].pdf | 2025-08-14 |