
System and Method for Generating Three-Dimensional (3D) Visualization of Magnetic Resonance Imaging (MRI) Data

Abstract: A system and method for automatically generating a three-dimensional (3D) visualization report of MRI data using a machine learning model 112 are provided. An imaging device 110 obtains an input file of the subject 102 and communicates it to the 3D visualizing server 108 through a network 106. The input file includes scan data. The 3D visualizing server 108 converts the predefined format of the scan data into an object format file or a sliced format file. The 3D visualizing server 108 generates a 3D model of the subject. The 3D visualizing server 108 trains, using data analysis pipelines, the machine learning model 112. The 3D visualizing server 108 determines features by analyzing the 3D model. The 3D visualizing server 108 visualizes, using an expert device 104, the features that are determined. The 3D visualizing server 108 generates a 3D visualization report of the subject. FIG. 1


Patent Information

Application #
Filing Date
28 February 2021
Publication Number
30/2022
Publication Type
INA
Invention Field
PHYSICS
Status
Email
ipo@myipstrategy.com
Parent Application
Patent Number
Legal Status
Grant Date
2023-09-06
Renewal Date

Applicants

BRAINSIGHT TECHNOLOGY PRIVATE LIMITED
640, 14TH CROSS, JP NAGAR, 2ND PHASE, BANGALORE-560078

Inventors

1. Rimjhim Agrawal
640, 14TH CROSS, JP NAGAR, 2ND PHASE, BANGALORE-560078
2. Dilip Rajeswari
126/A, 2nd floor, 1st G-Cross, 4th Block, 2nd Phase, Banashankari 3rd Stage, Bangalore, Karnataka, India - 560085
3. Abhay Shankar Jha
Behind Srivastav Building, Bada Gamharia, Gamharia, Jharkhand, India- 832108

Specification

SYSTEM AND METHOD FOR GENERATING THREE-DIMENSIONAL (3D) VISUALIZATION OF MAGNETIC RESONANCE IMAGING (MRI) DATA
Technical Field
[0001] The embodiments herein generally relate to three-dimensional (3D) visualization, and more particularly to a system and method for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model.
Description of the Related Art
[0002] Image visualization involves the presentation of image datasets. Three-dimensional (3D) visualization is a part of a computerized environment that provides a method of presenting results. 3D visualization affects users more than its 2D counterpart, which provides less information. With considerable technological advancement, 3D visualization is now used widely in healthcare applications.
[0003] The healthcare segment is an inevitable part of the world economy. Visualization has become critical for understanding and reporting neuroimaging data. The complexities of brain geometry and its variation from one individual to another, combined with the increasing number of imaging modalities and representations required to fully characterize its structure and function, make visualization a formidable challenge. Existing systems for viewing clinical and research neuroimaging data result in significant loss of data. Existing systems assess neuroimaging data with human intervention, increasing the scope for errors in the assessment. Also, a human-intervened assessment may consume more time and is not real-time.
[0004] Existing systems are confined to considering a few imaging modalities for the representation of neuroimaging data. Considering only a few imaging modalities may result in less interpretation of the neuroimaging data, which results in a lack of elaborate analysis. Thus, it may leave health experts confused about how to move forward with treatments.
Existing systems are confined to visualizing a particular part or portion of the brain based on the few modalities provided. This results in less scope for understanding the other portions of the brain that are connected to that particular part or portion.
[0005] Accordingly, there remains a need for a more efficient system and method for mitigating and/or overcoming drawbacks associated with current methods.
SUMMARY
[0006] In view of the foregoing, an embodiment herein provides a system for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model that enables evaluation of one or more health conditions of a subject. The system includes an imaging device. The imaging device includes at least one of a camera or a screen. The imaging device obtains an input file of the subject. In some embodiments, the input file includes at least one scan data and one or more attributes. In some embodiments, the scan data includes at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, diffusion-weighted imaging (DWI) and corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image. In some embodiments, the scan data is in a predefined format. In some embodiments, the predefined format of the scan data includes at least one of the digital imaging and communications in medicine (DICOM) format or the neuroimaging informatics technology initiative (NIfTI) format. The system includes a 3D visualizing server. The 3D visualizing server acquires the input file of the subject from the imaging device and processes, using the machine learning model (112), the input file of the subject. The 3D visualizing server includes a memory that stores a database and a processor that executes the machine learning model and is configured to (i) convert the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject; (ii) generate a 3D model of the subject using the at least one of the object format file or the sliced format file, where the 3D model includes corresponding region of interest (ROI) data of functional MRI and structural MRI of the subject; (iii) train, using one or more data analysis pipelines, a machine learning model by providing one or more historical 3D files of the subject and one or more features associated with the one or more historical 3D files of subjects as training data to obtain a trained machine learning model; (iv) determine, using the trained machine learning model, a plurality of features by analyzing the 3D model of the subject using one or more attributes, where the one or more attributes include at least one of region of interest (ROI) connectivity, activity, volumetric data, white matter tract atlas data, or optimal brain ROIs estimation for stimulation; (v) visualize, using an expert device, one or more features that are determined using the trained machine learning model in the 3D model of the subject; and (vi) generate a 3D visualization report of the subject based on the visualized one or more features in the 3D model of the subject that enables evaluation of one or more health conditions of the subject, where the 3D visualization report includes at least one of a volumetric change, ROI regional and connectivity changes, white matter tracts, differential tumor segmentation, or a grade of tumor.
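The six processing steps above can be sketched as a minimal pipeline skeleton. This is an illustrative assumption only: all function names, the slicing axis, and the voxel-count feature are invented stand-ins, not the server's actual implementation.

```python
import numpy as np

def convert_scan(scan_volume):
    """Pre-process a raw 3D scan array into a 'sliced format':
    here, simply an ordered list of 2D axial slices (illustrative
    stand-in for the object/sliced format conversion)."""
    return [scan_volume[:, :, z] for z in range(scan_volume.shape[2])]

def generate_3d_model(slices):
    """Re-assemble slices into a 3D model array, with a placeholder
    for the ROI data of the functional and structural MRI."""
    volume = np.stack(slices, axis=2)
    return {"volume": volume, "roi_data": {}}

def determine_features(model, attributes):
    """Derive simple per-attribute feature values from the 3D model
    (a trained model would normally produce these)."""
    vol = model["volume"]
    features = {}
    if "volumetric data" in attributes:
        features["total_volume_voxels"] = int((vol > 0).sum())
    return features

def generate_report(features):
    """Collect the determined features into a flat report dictionary."""
    return {"report": features}

# Usage: a toy 4x4x3 "scan" with one nonzero voxel per slice.
scan = np.zeros((4, 4, 3))
scan[0, 0, :] = 1.0
model = generate_3d_model(convert_scan(scan))
report = generate_report(determine_features(model, ["volumetric data"]))
print(report)  # {'report': {'total_volume_voxels': 3}}
```

The skeleton only fixes the order of operations (convert, model, determine, report); each body would be replaced by the corresponding pipeline described in the embodiments.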
[0007] In some embodiments, the one or more data analysis pipelines include a resting state functional MRI (RS fMRI) data analysis pipeline, a structural MRI data analysis pipeline, a brain tumor segmentation and surgical planning data analysis pipeline, and a personalized brain simulation and stimulation data analysis pipeline.
[0008] In some embodiments, the RS fMRI data analysis pipeline provides regional activity value changes in specified ROIs and functional connectivity value changes between ROIs to the machine learning model. The structural MRI data analysis pipeline provides at least one of a percentage decrease/increase of brain volume, cortical surface area metrics, or cortical thickness to the machine learning model. The brain tumor segmentation and surgical planning data analysis pipeline provides at least one of a differential tumor grade, prognosis analysis, segmentation masks, or corresponding DTI white matter tracts to the machine learning model. The personalized brain simulation and stimulation data analysis pipeline provides precise locations for brain stimulation to the machine learning model.
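The functional connectivity value changes that the RS fMRI pipeline supplies could be computed, for instance, as Pearson correlations between ROI time series; this is one common definition, used here as an assumption since the specification does not name the metric.

```python
import numpy as np

def functional_connectivity(roi_a, roi_b):
    """Pearson correlation between two ROI time series -- one common
    definition of functional connectivity (an illustrative choice;
    the pipeline's actual metric is not specified)."""
    return float(np.corrcoef(roi_a, roi_b)[0, 1])

def connectivity_change(baseline, followup):
    """Value change between two sessions, as fed to the model."""
    return followup - baseline

# Two perfectly linearly related toy time series correlate at 1.0.
t = np.arange(10, dtype=float)
fc = functional_connectivity(t, 2 * t + 1)
print(round(fc, 6))                              # 1.0
print(round(connectivity_change(0.4, 0.7), 2))   # 0.3
```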
[0009] In some embodiments, the one or more features include a parcellation 3D select, a brain volume percentage change, regional and functional connectivity, a select/deselect ROI, dynamic causal modeling, a differential segmentation, a 3D tumor grading, tumor metrics, 3D DTI white matter tracts, recommendations, a DTI fiber tracking visualization, a 3D surgical planning, a longitudinal comparison of white matter hyperintensities (WMH), longitudinal volumetric comparisons, longitudinal brain atrophy tracking, personalized brain simulation and stimulation areas, and 3D reporting.
[0010] In some embodiments, the parcellation 3D select visualizes at least one of predefined atlases/parcellations. The at least one of predefined atlases/parcellations includes the Glasser atlas, HCP-MMP1 atlas, Automated Anatomical Labelling (AAL), Harvard-Oxford cortical/subcortical atlases, Yeo, Desikan-Killiany atlas, or Destrieux atlas.
[0011] In some embodiments, the processor is configured to provide one or more interactive features of the 3D visualization report through the expert device. In some embodiments, the one or more interactive features include a glass brain, a 3D interactive spin, a zoom in and zoom out, 3D sagittal coronal transverse views, and a slice view.
[0012] In one aspect, a processor-implemented method for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model that enables evaluation of one or more health conditions of a subject is provided. The method includes obtaining the input file of a subject from an imaging device that includes at least one of a camera or a screen. In some embodiments, the input file includes at least one scan data and a plurality of attributes. In some embodiments, the scan data includes at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, a diffusion-weighted imaging (DWI) image and its corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image. The method includes converting the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject. The method includes generating a 3D model of the subject using the at least one of the object format file or the sliced format file. In some embodiments, the 3D model includes the corresponding region of interest (ROI) data of functional MRI and structural MRI of the subject. The method includes training, using one or more data analysis pipelines, a machine learning model by providing a plurality of historical 3D files of the subject and a plurality of features associated with the plurality of historical 3D files of subjects as training data to obtain a trained machine learning model. The method includes determining, using the trained machine learning model, a plurality of features by analyzing the 3D model of the subject using the one or more attributes. In some embodiments, the one or more attributes include at least one of region of interest (ROI) connectivity, activity, volumetric data, white matter tract atlas data, or optimal brain ROIs estimation for stimulation. The method includes visualizing, using an expert device, one or more features that are determined using the trained machine learning model in the 3D model of the subject. The method includes generating a 3D visualization report of the subject based on the visualized one or more features in the 3D model of the subject that enables evaluation of one or more health conditions of the subject. In some embodiments, the 3D visualization report includes at least one of a volumetric change, ROI regional and connectivity changes, white matter tracts, differential tumor segmentation, or a grade of tumor.
[0013] The system and/or method allows healthcare experts to access patient profiles from any location and device for psychiatric, neuropsychiatric, mental illness, neurological, and neuro-psychotic disorders. The system or method may help the experts, through an easy-to-use interface and 3D visualization, to understand changes in the brain in a better and easier manner. The system enables understanding of the regional activity and connectivity in the brain with psychiatric disorders. This understanding helps the health experts in diagnosing brain cancers, and the prognosis helps in surgical procedures through optimal surgical planning. Brain visualization also helps health experts use brain stimulation to help individuals with brain disorders.
[0014] These and other aspects of the embodiments herein will be better appreciated
and understood when considered in conjunction with the following description and the
accompanying drawings. It should be understood, however, that the following descriptions,
while indicating preferred embodiments and numerous specific details thereof, are given by
way of illustration and not of limitation. Many changes and modifications may be made within
the scope of the embodiments herein without departing from the spirit thereof, and the
embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The embodiments herein will be better understood from the following detailed
description with reference to the drawings, in which:
[0016] FIG. 1 is a system for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model that enables evaluation of one or more health conditions of the subject according to some embodiments herein;
[0017] FIG. 2 is a block diagram of a 3D visualizing server of FIG. 1 according to some
embodiments herein;
[0018] FIG. 3 is a block diagram of a visualizing features module of the 3D visualizing
server of FIG. 2 according to some embodiments herein;
[0019] FIG. 4 is a block diagram of an interactive module of the 3D visualizing server
of FIG. 2 according to some embodiments herein;
[0020] FIG. 5A illustrates exemplary representations of one or more interactive
features of the 3D visualization report according to some embodiments herein;
[0021] FIG. 5B illustrates exemplary representations of one or more features of the 3D
visualization report according to some embodiments herein;
[0022] FIGS. 6A and 6B are flow diagrams that illustrate a method for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model that enables evaluation of one or more health conditions of the subject according to some embodiments herein; and
[0023] FIG. 7 is a schematic diagram of a computer architecture in accordance with
the embodiments herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0024] The embodiments herein and the various features and advantageous details
thereof are explained more fully with reference to the non-limiting embodiments that are
illustrated in the accompanying drawings and detailed in the following description.
Descriptions of well-known components and processing techniques are omitted so as to not
unnecessarily obscure the embodiments herein. The examples used herein are intended merely
to facilitate an understanding of ways in which the embodiments herein may be practiced and
to further enable those of skill in the art to practice the embodiments herein. Accordingly, the
examples should not be construed as limiting the scope of the embodiments herein.
[0025] As mentioned, there is a need for a system that generates three-dimensional
(3D) visualization of magnetic resonance imaging (MRI) data using a machine learning model
in an expert device. Referring now to the drawings, and more particularly to FIGS. 1 through
7, where similar reference characters denote corresponding features consistently throughout
the figures, preferred embodiments are shown.
[0026] FIG. 1 is a system for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model 112 that enables evaluation of one or more health conditions of the subject 102 according to some embodiments herein. The system 100 includes an expert device 104, a network 106, a 3D visualizing server 108, an imaging device 110, and a machine learning model 112. In some embodiments, the system 100 includes an Android application package (APK), an iOS App Store Package (IPA), or any such application package that is installed in the expert device 104 of the subject 102. In some embodiments, the expert device 104, without limitation, is selected from a mobile phone, a Personal Digital Assistant (PDA), a tablet, a desktop computer, or a laptop computer. In some embodiments, the system 100 includes an application that may be installed on Android-based devices, Windows-based devices, or devices with any such mobile operating systems.
[0027] The expert device 104 obtains an input file of the subject 102 and communicates with the 3D visualizing server 108 through the network 106. In some embodiments, the network 106, without limitation, is selected from a wired network or a wireless network such as Bluetooth, Wi-Fi, ZigBee, cloud, or any other communication network. The imaging device 110 includes at least one of a camera or a screen. The imaging device 110 obtains an input file of the subject 102. The input file includes at least one scan data and one or more attributes. In some embodiments, the scan data includes at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, diffusion-weighted imaging (DWI) and its corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image. In some embodiments, the scan data of the subject 102 is in a DICOM format. In some embodiments, the 3D visualizing server 108 uses the data of a person in the DICOM format; for example, the data of the person includes a number of attributes, such as a name of the person, an ID of the person, etc. In some embodiments, the T1-weighted structural MRI (T1 sMRI) image is a basic pulse sequence in MRI and depicts differences in signal based upon the intrinsic T1 relaxation time of various tissues. In some embodiments, the T2-weighted structural MRI (T2 sMRI) image provides good contrast between gray matter and white matter, and between cerebrospinal fluid and brain tissue. In some embodiments, the fluid-attenuated inversion recovery (FLAIR) image is an MRI sequence with an inversion recovery set to null fluids. In some embodiments, the diffusion-weighted imaging (DWI) and corresponding diffusion tensor imaging (DTI) scans measure the random Brownian motion of water molecules within a voxel of tissue. In some embodiments, the resting state functional MRI (RS fMRI) image provides brain mapping to evaluate regional interactions that occur in a resting or task-negative state, when no explicit task is being performed.
[0028] The 3D visualizing server 108 converts the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject 102. In some embodiments, the pre-processing is applied using Python, Linux, or shell scripts. The 3D visualizing server 108 generates a 3D model of the subject 102 using the at least one of the object format file or the sliced format file. In some embodiments, the object file represents control data with a machine-independent format, and the format provides common identification and interpretation of the object. In some embodiments, the sliced format file includes a format for a sliced file that is generated by adjusting model parameters based on requirements. The 3D visualizing server 108 stores the 3D model of the subject 102 in variable values. In some embodiments, the 3D model includes corresponding ROI data of the fMRI and sMRI. The machine learning model 112 is trained by providing historical 3D files of the subject 102 and features associated with the historical 3D files of subjects as training data to obtain a trained machine learning model. The machine learning model 112 is trained using one or more data analysis pipelines. The one or more data analysis pipelines include a resting state functional MRI data analysis pipeline, a structural MRI data analysis pipeline, a brain tumor segmentation and surgical planning data analysis pipeline, and a personalized brain simulation and stimulation data analysis pipeline.
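The conversion into a sliced format file described above could, under one simple assumption, amount to splitting the scan volume into ordered 2D slices. A real server would first load the DICOM/NIfTI data into an array (e.g. with pydicom or NiBabel); the slicing function below is a minimal numpy sketch, not the server's actual format.

```python
import numpy as np

def to_sliced_format(volume, axis=2):
    """Split a 3D scan array into an ordered list of 2D slices --
    a minimal stand-in for the 'sliced format file' of the text.
    The axis choice (axial by default) is an illustrative assumption."""
    volume = np.asarray(volume)
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

# A 2x2x4 toy volume yields four axial slices of shape 2x2.
vol = np.arange(16).reshape(2, 2, 4)
slices = to_sliced_format(vol)
print(len(slices), slices[0].shape)  # 4 (2, 2)
```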
[0029] The 3D visualizing server 108 determines one or more features by analyzing the 3D model of the subject 102 using one or more attributes. In some embodiments, the 3D model of the subject 102 is analyzed in 3D Unity and/or Dassault 3D visualization. In some embodiments, the one or more attributes include at least one of region of interest (ROI) connectivity, activity, volumetric data, white matter tract atlas data, or optimal brain ROIs estimation for stimulation.
[0030] The 3D visualizing server 108 visualizes one or more features that are determined using the trained machine learning model in the 3D model of the subject 102 using an expert device 104. The one or more features include a parcellation 3D select, a brain volume percentage change, regional and functional connectivity, a select/deselect ROI, dynamic causal modeling, a differential segmentation, a 3D tumor grading, tumor metrics, 3D DTI white matter tracts, recommendations, a DTI fiber tracking visualization, a 3D surgical planning, a longitudinal comparison of white matter hyperintensities (WMH), longitudinal volumetric comparisons, longitudinal brain atrophy tracking, personalized brain stimulation areas, and 3D reporting. The parcellation 3D select allows the subject 102 to visualize 14-20 different predefined atlases/parcellations of the brain. In some embodiments, the different predefined atlases/parcellations are the Glasser atlas, HCP-MMP1 atlas, Automated Anatomical Labelling (AAL), Harvard-Oxford cortical/subcortical atlases, Yeo, Desikan-Killiany atlas, Destrieux atlas, etc. In some embodiments, each parcellation portrays a varying number and granularity of regions of interest (ROIs) in the brain. The brain volume percentage change displays the percentage change in the volume of the ROI of the brain. In some embodiments, the percentage change in volume of the ROI of the brain may be, for example, a 15% gray matter volume decrease in the Left Precentral gyrus of the subject with schizophrenia as compared to healthy controls and/or a bipolar group. The regional and functional connectivity displays values of increased/decreased regional activity of an ROI and functional connectivity between ROIs. In some embodiments, the regional and functional connectivity values are mapped onto the corresponding ROI and portrayed to the subject 102. The select/deselect ROI allows the subject 102 to select the ROI and access volumetric and/or regional activity and connectivity changes and/or other changes associated with that ROI. The dynamic causal modeling runs brain simulations by combining data from multiple modalities and simulating dynamic causal modeling on a virtual brain model. In some embodiments, the virtual brain model is simulated using Virtual Brain and/or Dassault Systèmes' 3D visualization and simulation. The differential segmentation provides the subject 102 with differential masks and/or whole segmentation masks for the brain tumor. In some embodiments, the differential masks include edema, necrotic tumor core, advancing tumor, etc. In some embodiments, the segmentation masks overlay on the corresponding multi-modal MRI scans to visualize the accurate delineation of tumors from healthy brain tissue. The 3D tumor grading allows the subject 102 to view the differential grade of the tumor based on tumor shape, size, and place. The tumor metrics provide the nature of the tumor. In some embodiments, the nature of the tumor includes tumor grade, tumor shape, shape variation, tumor coordinates, etc. The 3D DTI white matter tracts derive white matter tracts (DTI) and overlay tumor segmentation maps and multi-modal MRI scans to arrive at precise white matter structures. The recommendations allow the subject 102 to view all the potential top aberrations/abnormalities of brain regions. The DTI fiber tracking visualization allows the subject 102 to access the white matter architecture of the individual with a brain tumor and other neuropsychiatric or brain disorders. The 3D surgical planning overlays maps, masks, and other outputs from different imaging pipelines. The longitudinal comparison of WMH allows the subject 102 to track the progression of white matter tract deterioration, infection, stroke, etc. The longitudinal volumetric comparisons compare brain volumes of different ROIs based on different atlases/parcellations across time. The longitudinal brain atrophy tracking quantifies the rate of brain atrophy and how that relates to patient symptoms and illness progression. The personalized brain stimulation areas provide the ability to pinpoint optimal brain foci, which, when stimulated using brain stimulation approaches, may result in optimal illness/disease symptom alleviation.
[0031] The 3D visualizing server 108 generates the 3D visualization report of the subject 102 based on the visualized one or more features that enable evaluation of one or more health conditions of the subject 102. The 3D visualization report includes a volumetric change, ROI regional and connectivity changes, segmentation maps, brain stimulation foci, etc. In some embodiments, the 3D visualization report provides interactive files for estimation, diagnosis, and prognosis based on pre-surgical planning, diagnosis, longitudinal brain changes, etc.
[0032] The 3D visualizing server 108 includes one or more interactive features: a glass brain, a 3D interactive spin, a zoom in and zoom out, 3D sagittal coronal transverse views, a slice view, and a parcellation 3D select. The glass brain allows the subject 102 to view the whole brain and/or different regions of interest (ROI) defined by the atlases/parcellations of the MRI scan, and to interactively select the required brain area and/or ROI for analysis and metrics. In some embodiments, the glass represents the transparency feature of the 3D visualization. The 3D interactive spin allows the subject 102 to freely interact, move left-right, up-down, and rotate the glass or solid 3D brain in the X, Y, and Z coordinate axes. The zoom in and zoom out allows the subject 102 to narrow down on a region, or move back the camera point of view (POV), by scrolling a mouse wheel (where clockwise and anticlockwise rotation of the mouse wheel is multiplied by a constant POV change which decides how slowly or quickly it is zoomed in or out), touch screen interactivity (where the subjects use their fingers to zoom in or out), and a keyboard (a key to zoom in and a key to zoom out by a constant POV change). The 3D sagittal coronal transverse views allow the subject 102 to interact using the three standard brain orientations - sagittal (X-axis), coronal (Y-axis), and transverse/axial (Z-axis) views of the brain. The slice view allows the subject 102 to view different slices in/out of the MRI scan through a mouse scroll option, touch screen interaction, or keyboard keys. The parcellation 3D select allows the subject 102 to select a specified region of the brain using the region name, or in the interactive 3D space.
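The mouse-wheel zoom described above (wheel rotation multiplied by a constant POV change) can be sketched as a small update function. The step size and clamping range are illustrative assumptions; the specification does not give the constants.

```python
def zoom_pov(current_pov, wheel_steps, pov_step=5.0,
             min_pov=10.0, max_pov=120.0):
    """Update the camera point of view (POV) from mouse-wheel input:
    the wheel rotation is multiplied by a constant POV change, as in
    the text. Positive steps (scroll forward) zoom in by narrowing
    the POV; the result is clamped to a plausible range. All
    constants here are invented for illustration."""
    new_pov = current_pov - wheel_steps * pov_step
    return max(min_pov, min(max_pov, new_pov))

print(zoom_pov(60.0, 2))    # 50.0  (two clicks forward -> zoom in)
print(zoom_pov(60.0, -30))  # 120.0 (clamped at the widest view)
```

The keyboard zoom keys of the text would call the same function with a fixed step of +1 or -1 per key press.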
[0033] In some embodiments, the RS fMRI data analysis pipeline provides regional activity value changes in specified ROIs and functional connectivity value changes between ROIs. The structural MRI data analysis pipeline provides structural inputs of different ROIs such as percentage decrease/increase of brain volume, cortical surface area metrics, cortical thickness, etc. The brain tumor segmentation and the surgical planning data analysis pipeline provide differential tumor grade, prognosis analysis, segmentation masks, corresponding DTI white matter tracts, etc. The personalized brain stimulation data analysis pipeline provides precise locations for brain stimulation for individuals with neuropsychiatric disorders and other brain disorders.
[0034] The 3D visualizing server 108 visualizes one or more features of the 3D visualization. The one or more features include a parcellation 3D select, a brain volume percentage change, regional and functional connectivity, a select/deselect ROI, dynamic causal modeling, a differential segmentation, a 3D tumor grading, tumor metrics, 3D DTI white matter tracts, recommendations, a DTI fiber tracking visualization, a 3D surgical planning, a longitudinal comparison of white matter hyperintensities (WMH), longitudinal volumetric comparisons, longitudinal brain atrophy tracking, personalized brain stimulation areas, and 3D reporting. The parcellation 3D select may visualize 14-20 different predefined atlases/parcellations of the brain. In some embodiments, the different predefined atlases/parcellations are Automated Anatomical Labelling (AAL), Harvard-Oxford cortical/subcortical atlases, Yeo, Desikan-Killiany atlas, Destrieux atlas, etc. In some embodiments, each parcellation portrays a varying number and granularity of regions of interest (ROIs) in the brain. The brain volume percentage change may display the percentage change in the volume of an ROI of the brain. In some embodiments, the percentage change in volume of an ROI of the brain may be, for example, a 15% gray matter volume decrease in the Left Precentral gyrus of the subject 102 with schizophrenia as compared to healthy controls and/or a bipolar group. The regional and functional connectivity may display values of increased/decreased regional activity of an ROI and functional connectivity between ROIs. In some embodiments, the regional and functional connectivity values may be mapped onto the corresponding ROI and portrayed to the subject. The select/deselect ROI may select an ROI and access volumetric and/or regional activity and connectivity changes and/or other changes associated with that ROI. The dynamic causal modeling may run brain simulations by combining data from multiple modalities and simulating dynamic causal modeling on a virtual brain model. In some embodiments, the virtual brain model is simulated using Virtual Brain and/or Dassault Systèmes' 3D visualization and simulation.
[0035] The differential segmentation may provide the subject 102 with differential masks and/or whole segmentation masks for a brain tumor. In some embodiments, the differential masks include edema, necrotic tumor core, advancing tumor, etc. In some embodiments, the segmentation masks overlay the corresponding multi-modal MRI scans to visualize the accurate delineation of tumors from healthy brain tissue. The 3D tumor grading may allow the subject 102 to view the differential grade of the tumor based on tumor shape, size, and location. The tumor metrics provide the nature of the tumor. In some embodiments, the nature of the tumor includes tumor grade, tumor shape, shape variation, tumor coordinates, etc. The 3D DTI white matter tracts derive white matter tracts from diffusion tensor imaging (DTI). The derived 3D DTI white matter tracts overlay tumor segmentation maps and multi-modal MRI scans to arrive at precise white matter structures. The recommendations allow the subject 102 to view all the potential top aberrations/abnormalities of brain regions based on the 3D file after analyzing one or more data analysis pipelines using 3D Unity and/or Dassault 3D visualization. The DTI fiber tracking visualization allows the subject 102 to access the white matter architecture of the individual with a brain tumor and other neuropsychiatric or brain disorders. The 3D surgical planning may overlay maps, masks, and other outputs from different imaging pipelines. The longitudinal comparison of white matter hyperintensities (WMH) tracks the progression of white matter tract deterioration, infection, stroke, etc. The longitudinal volumetric comparisons may compare brain volumes of different ROIs based on different atlases/parcellations across time. The longitudinal brain atrophy tracking may quantify the rate of brain atrophy and how that relates to patient symptoms and illness progression. The personalized brain stimulation areas may provide the ability to pinpoint optimal brain foci which, when stimulated using brain stimulation approaches, result in optimal illness/disease symptom alleviation. The 3D reporting provides a 3D report that specifies the volumetric change, ROI regional and connectivity changes, segmentation maps, brain stimulation foci, etc. In some embodiments, the 3D report provides interactive files for estimation, diagnosis, and prognosis based on pre-surgical planning, diagnosis, longitudinal brain changes, etc.
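The longitudinal brain atrophy tracking quantifies a rate of volume loss over time. One way to express such a rate, sketched here with hypothetical names and values (the specification does not fix a formula), is percent baseline volume lost per year:

```python
# Illustrative sketch (hypothetical): an annualized atrophy rate between
# two longitudinal scans of the same ROI. Positive values indicate atrophy.

def annualized_atrophy_rate(vol_baseline, vol_followup, years_between):
    """Percent of baseline ROI volume lost per year between two scans."""
    if vol_baseline <= 0 or years_between <= 0:
        raise ValueError("baseline volume and scan interval must be positive")
    return (vol_baseline - vol_followup) / vol_baseline / years_between * 100.0

# An ROI shrinking from 1000 mm^3 to 950 mm^3 over two years:
rate = annualized_atrophy_rate(1000.0, 950.0, 2.0)
```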
[0036] The 3D visualizing server 108 includes one or more interactive features: a glass brain, a 3D interactive spin, a zoom in and zoom out, 3D sagittal coronal transverse views, a slice view, and a parcellation 3D select. The glass brain allows the subject 102 to view the whole brain and/or different regions of interest (ROIs) defined by the atlases/parcellations of the MRI scan, and interactively select the required brain area and/or ROI for analysis and metrics. In some embodiments, the glass brain represents a transparency feature of the 3D visualization. The 3D interactive spin allows the subject 102 to freely interact, move left-right, up-down, and rotate the glass or solid 3D brain about the X, Y, and Z coordinate axes. The zoom in and zoom out allows the subject 102 to narrow down on a region, or move back the camera point of view (POV), by scrolling a mouse wheel (where clockwise and anticlockwise rotation of the mouse wheel is multiplied by a constant POV change, which decides how slowly or quickly the view zooms in or out), touch screen interactivity (where the subjects use their fingers to zoom in or out), or a keyboard (a key to zoom in and a key to zoom out by a constant POV change). In some embodiments, the zoom in and zoom out helps the subject 102 to visualize the required brain area and/or ROI clearly. The 3D sagittal coronal transverse views allow the subject 102 to interact using the three standard brain orientations - sagittal (X-axis), coronal (Y-axis), and transverse/axial (Z-axis) views of the brain. The slice view allows the subject 102 to view different slices in/out of the MRI scan through a mouse scroll option, touch screen interaction, or keyboard keys. The parcellation 3D select allows the subject 102 to select a specified region of the brain using the region name, or in the interactive 3D space.
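The zoom behaviour above (wheel ticks multiplied by a constant POV change, then bounded) can be sketched as follows; the step size and distance limits are illustrative assumptions, not values from the specification:

```python
# Hedged sketch of the zoom in/zoom out feature: each wheel tick is
# multiplied by a constant POV change and the camera distance is clamped.

ZOOM_STEP = 0.5                  # assumed constant POV change per wheel tick
MIN_DIST, MAX_DIST = 1.0, 50.0   # assumed camera distance limits

def apply_zoom(camera_distance, wheel_ticks):
    """Positive (clockwise) ticks zoom in, negative (anticlockwise) ticks
    zoom out; the result is clamped to the valid camera range."""
    new_distance = camera_distance - wheel_ticks * ZOOM_STEP
    return max(MIN_DIST, min(MAX_DIST, new_distance))
```

A keyboard key or touch gesture would feed the same function with a fixed tick count per event.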
[0037] FIG. 2 is a block diagram of the 3D visualizing server 108 of FIG. 1 according to some embodiments herein. The 3D visualizing server 108 includes a database 202, an input file obtaining module 204, a format converting module 206, a 3D model generating module 208, a features determining module 210, a visualizing features module 212, a 3D visualization report generating module 214, an interactive module 216, and a machine learning model 112. The input file obtaining module 204 obtains an input file of a subject 102 from an imaging device 110 that includes at least one of a camera or a screen. The input file includes at least one scan data and one or more attributes. In some embodiments, the scan data includes at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, diffusion-weighted imaging (DWI) and its corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image. In some embodiments, the scan data of the subject 102 is in a DICOM format. The format converting module 206 converts the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject 102. The 3D model generating module 208 generates a 3D model of the subject 102 using the at least one of the object format file or the sliced format file. In some embodiments, the 3D model includes the corresponding region of interest (ROI) data of functional MRI and structural MRI of the subject 102. The machine learning model 112 is trained by providing historical 3D files of the subject 102 and features associated with the historical 3D files of the subjects as training data to obtain a trained machine learning model. The machine learning model 112 is trained using one or more data analysis pipelines. The one or more data analysis pipelines include a resting state functional MRI data analysis pipeline, a structural MRI data analysis pipeline, a brain tumor segmentation and surgical planning data analysis pipeline, and a personalized brain simulation and stimulation data analysis pipeline.
[0038] The features determining module 210 determines one or more features by analyzing the 3D model of the subject 102 using one or more attributes. In some embodiments, the one or more attributes include at least one of region of interest (ROI) connectivity, activity, volumetric data, white matter tract atlas data, or optimal brain ROI estimation for stimulation.
[0039] The visualizing features module 212 visualizes, in the 3D model of the subject 102 using an expert device 104, one or more features that are determined using the trained machine learning model. The one or more features include a parcellation 3D select, a brain volume percentage change, regional and functional connectivity, a select/deselect ROI, a dynamic causal modeling, a differential segmentation, a 3D tumor grading, tumor metrics, 3D DTI white matter tracts, recommendations, a DTI fiber tracking visualization, a 3D surgical planning, a longitudinal comparison of white matter hyperintensities (WMH), longitudinal volumetric comparisons, a longitudinal brain atrophy tracking, personalized brain stimulation areas, and a 3D reporting. The parcellation 3D select allows the subject 102 to visualize 14-20 different predefined atlases/parcellations of the brain. In some embodiments, the different predefined atlases/parcellations are the Glasser atlas, the HCP-MMP1 atlas, Automated Anatomical Labelling (AAL), Harvard-Oxford cortical/subcortical atlases, Yeo, the Desikan-Killiany atlas, the Destrieux atlas, etc.
[0040] The 3D visualization report generating module 214 generates the 3D visualization report of the subject 102 based on the visualized one or more features, which enables evaluation of one or more health conditions of the subject 102. The 3D visualization report includes a volumetric change, ROI regional and connectivity changes, segmentation maps, brain stimulation foci, etc. In some embodiments, the 3D visualization report provides interactive files for estimation, diagnosis, and prognosis based on pre-surgical planning, diagnosis, longitudinal brain changes, etc.
[0041] The interactive module 216 includes one or more interactive features: a glass brain, a 3D interactive spin, a zoom in and zoom out, 3D sagittal coronal transverse views, a slice view, and a parcellation 3D select. The glass brain allows the subject 102 to view the whole brain and/or different regions of interest (ROIs) defined by the atlases/parcellations of the MRI scan, and interactively select the required brain area and/or ROI for analysis and metrics. In some embodiments, the glass brain represents the transparency feature of the 3D visualization. The 3D interactive spin allows the subject 102 to freely interact, move left-right, up-down, and rotate the glass or solid 3D brain about the X, Y, and Z coordinate axes. The zoom in and zoom out allows the subject 102 to narrow down on a region, or move back the camera point of view (POV), by scrolling a mouse wheel (where clockwise and anticlockwise rotation of the mouse wheel is multiplied by a constant POV change, which decides how slowly or quickly the view zooms in or out), touch screen interactivity (where the subjects use their fingers to zoom in or out), or a keyboard (a key to zoom in and a key to zoom out by a constant POV change). The 3D sagittal coronal transverse views allow the subject 102 to interact using the three standard brain orientations - sagittal (X-axis), coronal (Y-axis), and transverse/axial (Z-axis) views of the brain. The slice view allows the subject 102 to view different slices in/out of the MRI scan through a mouse scroll option, touch screen interaction, or keyboard keys. The parcellation 3D select allows the subject 102 to select a specified region of the brain using the region name, or in the interactive 3D space.
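The slice view navigation above amounts to stepping a slice index by a scroll, touch, or key delta while staying inside the scan. A sketch with hypothetical names (the module's internals are not specified):

```python
# Hedged sketch of slice view navigation: move through a scan's slices by
# an input delta, clamped to the valid index range.

def next_slice_index(current, delta, n_slices):
    """Move `delta` slices from `current`, staying within [0, n_slices - 1]."""
    if n_slices <= 0:
        raise ValueError("scan must contain at least one slice")
    return max(0, min(n_slices - 1, current + delta))
```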
[0042] FIG. 3 is a block diagram of the visualizing features module 212 of the 3D visualizing server 108 of FIG. 2 according to some embodiments herein. The visualizing features module 212 includes one or more features for visualizing the 3D visualization report. The one or more features include a parcellation 3D select 302, a brain volume percentage change 304, a regional and functional connectivity 306, a select/deselect ROI 308, a dynamic causal modeling 310, a differential segmentation 312, a 3D tumor grading 314, tumor metrics 316, 3D DTI white matter tracts 318, recommendations 320, a DTI fiber tracking visualization 322, a 3D surgical planning 324, a longitudinal comparison of white matter hyperintensities (WMH) 326, longitudinal volumetric comparisons 328, a longitudinal brain atrophy tracking 330, personalized brain stimulation areas 332, and a 3D reporting 334. The parcellation 3D select 302 allows the subject 102 to visualize 14-20 different predefined atlases/parcellations of the brain. In some embodiments, the different predefined atlases/parcellations are Automated Anatomical Labelling (AAL), Harvard-Oxford cortical/subcortical atlases, Yeo, the Desikan-Killiany atlas, the Destrieux atlas, etc. In some embodiments, each parcellation portrays a varying number and granularity of regions of interest (ROIs) in the brain. The brain volume percentage change 304 displays the percentage change in the volume of an ROI of the brain. In some embodiments, the percentage change in the volume of an ROI of the brain may be, for example, the 15% gray matter volume decrease in the Left Precentral gyrus of the subject with schizophrenia as compared to healthy controls and/or a bipolar group. The regional and functional connectivity 306 displays values of increased/decreased regional activity of an ROI and functional connectivity between ROIs. In some embodiments, the regional and functional connectivity values are mapped onto the corresponding ROI and portrayed to the expert. The select/deselect ROI 308 allows the subject 102 to select an ROI and access volumetric and/or regional activity and connectivity changes and/or other changes associated with that ROI. The dynamic causal modeling 310 runs brain simulations by combining data from multiple modalities and simulating dynamic causal modeling on a virtual brain model. In some embodiments, the virtual brain model is simulated using the Virtual Brain and/or Dassault Systèmes' 3D visualization and simulation. The differential segmentation 312 provides the subject 102 with differential masks and/or whole segmentation masks for a brain tumor. In some embodiments, the differential masks include edema, necrotic tumor core, advancing tumor, etc. In some embodiments, the segmentation masks overlay the corresponding multi-modal MRI scans to visualize the accurate delineation of tumors from healthy brain tissue. The 3D tumor grading 314 allows the expert to view the differential grade of the tumor based on tumor shape, size, and location. The tumor metrics 316 provide the nature of the tumor. In some embodiments, the nature of the tumor includes tumor grade, tumor shape, shape variation, tumor coordinates, etc. The 3D DTI white matter tracts 318 derive white matter tracts from diffusion tensor imaging (DTI) and overlay tumor segmentation maps and multi-modal MRI scans to arrive at precise white matter structures. The recommendations 320 allow the expert to view all the potential top aberrations/abnormalities of brain regions. The DTI fiber tracking visualization 322 allows the expert to access the white matter architecture of the individual with a brain tumor and other neuropsychiatric or brain disorders. The 3D surgical planning 324 overlays maps, masks, and other outputs from different imaging pipelines. The longitudinal comparison of WMH 326 tracks the progression of white matter tract deterioration, infection, stroke, etc. The longitudinal volumetric comparisons 328 compare brain volumes of different ROIs based on different atlases/parcellations across time. The longitudinal brain atrophy tracking 330 quantifies the rate of brain atrophy and how that relates to patient symptoms and illness progression. The personalized brain stimulation areas 332 provide the ability to pinpoint optimal brain foci which, when stimulated using brain stimulation approaches, result in optimal illness/disease symptom alleviation. The 3D reporting 334 provides a 3D report that specifies the volumetric change, ROI regional and connectivity changes, segmentation maps, brain stimulation foci, etc. In some embodiments, the 3D report provides interactive files for estimation, diagnosis, and prognosis based on pre-surgical planning, diagnosis, longitudinal brain changes, etc.
[0043] FIG. 4 is a block diagram of the interactive module 216 of the 3D visualizing server 108 of FIG. 2 according to some embodiments herein. The interactive module 216 includes a glass brain 402, a 3D interactive spin 404, a zoom in and zoom out 406, 3D sagittal coronal transverse views 408, a slice view 410, and a parcellation 3D select 412. The glass brain 402 allows the subject 102 to view the whole brain and/or different regions of interest (ROIs) defined by the atlases/parcellations of the MRI scan, and interactively select the required brain area and/or ROI for analysis and metrics. In some embodiments, the glass brain represents the transparency feature of the 3D visualization. The 3D interactive spin 404 allows the expert to freely interact, move left-right, up-down, and rotate the glass or solid 3D brain about the X, Y, and Z coordinate axes. The zoom in and zoom out 406 allows the expert to narrow down on a region, or move back the camera point of view (POV), by scrolling a mouse wheel (where clockwise and anticlockwise rotation of the mouse wheel is multiplied by a constant POV change, which decides how slowly or quickly the view zooms in or out), touch screen interactivity (where the users use their fingers to zoom in or out), or a keyboard (a key to zoom in and a key to zoom out by a constant POV change). The 3D sagittal coronal transverse views 408 allow the expert to interact using the three standard brain orientations - sagittal (X-axis), coronal (Y-axis), and transverse/axial (Z-axis) views of the brain. The slice view 410 allows the subject 102 to view different slices in/out of the MRI scan through a mouse scroll option, touch screen interaction, or keyboard keys. The parcellation 3D select 412 allows the expert to select a specified region of the brain using the region name, or in the interactive 3D space.
[0044] FIG. 5A illustrates exemplary representations of one or more interactive features of the 3D visualization report according to some embodiments herein. The exemplary representations of one or more interactive features of the visualization and reporting system 100 include a 3D glass brain model 502, a 3D interactive spin 504A-D, a zoom in and zoom out 506, and 3D sagittal coronal transverse views 508. The glass brain 502 allows the expert to view the whole brain and/or different regions of interest (ROIs) defined by the atlases/parcellations of the MRI scan, and interactively select the required brain area and/or ROI for analysis and metrics. The 3D interactive spin 504A-D allows the expert to freely interact, move left-right, up-down, and rotate the glass or solid 3D brain about the X, Y, and Z coordinate axes. The zoom in and zoom out 506 allows the expert to narrow down on a region, or move back the camera point of view (POV), by scrolling a mouse wheel (where clockwise and anticlockwise rotation of the mouse wheel is multiplied by a constant POV change, which decides how slowly or quickly the view zooms in or out), touch screen interactivity (where the users use their fingers to zoom in or out), or a keyboard (a key to zoom in and a key to zoom out by a constant POV change). The 3D sagittal coronal transverse views 508 allow the subject 102 to interact using the three standard brain orientations - sagittal (X-axis), coronal (Y-axis), and transverse/axial (Z-axis) views of the brain.
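The 3D interactive spin amounts to applying rotations to the model's vertices. A one-axis sketch (rotation about Z only; a full implementation would compose rotations about all three axes, and the function name is hypothetical):

```python
# Illustrative sketch of the 3D interactive spin: rotating a brain-model
# vertex about the Z axis by a given angle in degrees.

import math

def rotate_z(point, degrees):
    """Rotate a 3D point (x, y, z) about the Z axis by `degrees`."""
    x, y, z = point
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)
```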
[0045] FIG. 5B illustrates exemplary representations of one or more features of the 3D visualization report according to some embodiments herein. The exemplary representations of one or more features of the 3D visualization generated by the visualization and reporting system 100 include regional features 510 and a functional connectivity 512.
[0046] FIGS. 6A and 6B are flow diagrams that illustrate a method for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model 112 that enables evaluation of one or more health conditions of the subject 102 according to some embodiments herein. At step 602, the method includes the step of obtaining the input file of a subject from an imaging device that includes at least one of a camera or a screen. In some embodiments, the input file includes at least one scan data and a plurality of attributes. In some embodiments, the scan data includes at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, a diffusion-weighted imaging (DWI) image and its corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image. At step 604, the method includes the step of converting the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject. At step 606, the method includes the step of generating a 3D model of the subject using the at least one of the object format file or the sliced format file. In some embodiments, the 3D file includes corresponding region of interest (ROI) data of functional MRI and structural MRI of the subject. At step 608, the method includes the step of training, using one or more data analysis pipelines, a machine learning model by providing a plurality of historical 3D files of the subject and a plurality of features associated with the plurality of historical 3D files of the subjects as training data to obtain a trained machine learning model. At step 610, the method includes the step of determining, using the trained machine learning model, a plurality of features by analyzing the 3D model of the subject using the one or more attributes. In some embodiments, the one or more attributes include at least one of region of interest (ROI) connectivity, activity, volumetric data, white matter tract atlas data, or optimal brain ROI estimation for stimulation. At step 612, the method includes the step of visualizing, using an expert device, one or more features that are determined using the trained machine learning model in the 3D model of the subject. At step 614, the method includes the step of generating a 3D visualization report of the subject based on the visualized one or more features in the 3D model of the subject that enable evaluation of one or more health conditions of the subject. In some embodiments, the 3D visualization report includes at least one of a volumetric change, ROI regional and connectivity changes, white matter tracts, differential tumor segmentation, or a grade of tumor.
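The step ordering of the method can be sketched as a chain of calls. Every function body below is a hypothetical stub (the real modules 204-214 perform substantially more work); only the flow of data between steps is taken from the method:

```python
# High-level sketch of the method of FIGS. 6A-6B as chained stub steps.

def convert_format(scan):                    # step 604: DICOM/NIfTI -> 3D file
    return {"format": "object", "data": scan}

def generate_3d_model(converted):            # step 606: build the 3D model
    return {"model": converted["data"]}

def determine_features(model, attributes):   # step 610: analyze the model
    return {attr: "determined from 3D model" for attr in attributes}

def generate_3d_report(input_file):          # steps 602-614 chained together
    converted = convert_format(input_file["scan_data"])
    model = generate_3d_model(converted)
    features = determine_features(model, input_file["attributes"])
    return {"subject": input_file["subject_id"], "features": features}

report = generate_3d_report(
    {"subject_id": 102, "scan_data": "T1 sMRI", "attributes": ["ROI connectivity"]})
```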
[0047] In an exemplary embodiment, automated 3D visualization and reporting of regions of interest (ROIs) of the brain based on different atlases/parcellations is provided by (i) extracting the specified region-of-interest maps from an existing open-sourced atlas/parcellation, which may be in NIfTI and/or DICOM format, with each color map corresponding to a different ROI; (ii) separating each region into different NIfTI files so that it can be processed into 3D format files; (iii) displaying all regions that are now converted to 3D format files in 3D space; (iv) loading the 3D files in 3D space, stacking each ROI extracted from the MRI scan with fine-tuned transparency of each region such that maximum information can be extracted from the visuals; and (v) enabling ROI-based real-time interactivity and ROI-specific brain metrics (volumetric change, ROI regional and connectivity changes, etc.) for 3D reporting.
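Steps (i)-(ii) above separate each atlas label into its own region before conversion. A hedged sketch of that separation, with plain nested lists standing in for NIfTI voxel data (the real pipeline operates on NIfTI files via Python and shell scripts):

```python
# Illustrative sketch: mapping each nonzero atlas label in a voxel label
# volume to the set of voxel coordinates it covers, so each ROI can be
# written out and converted to a 3D file separately.

def split_rois(label_volume):
    """Map each nonzero atlas label to its voxel coordinates (z, y, x)."""
    masks = {}
    for z, sl in enumerate(label_volume):
        for y, row in enumerate(sl):
            for x, label in enumerate(row):
                if label:  # label 0 is background
                    masks.setdefault(label, set()).add((z, y, x))
    return masks

# A tiny 1x2x2 "atlas" with two ROIs (labels 1 and 2):
masks = split_rois([[[1, 1], [0, 2]]])
```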
[0048] In an exemplary embodiment, an automated resting-state fMRI and structural MRI analysis 3D visualization and reporting pipeline includes the following process: (i) outputs from the RS fMRI and sMRI processing and analysis pipeline, in the form of DICOM and/or NIfTI and/or tabular data (consisting of ROI connectivity, activity, volumetric data, etc.), are entered into the 3D visualization and reporting pipeline; (ii) the input DICOM and/or NIfTI format scans (of sMRI, fMRI, etc.) are prepped for the 3D visualization pipeline, processed, and converted from DICOM/NIfTI to 3D files using Python and Linux/shell scripts, and the corresponding ROI data of the fMRI and sMRI pipeline is stored in variable values for further analysis and use; (iii) the 3D files are rendered in 3D Unity and/or Dassault 3D visualization software, and the data from the ROI values are given as inputs to enable the 3D visualization features and displayed to the medical practitioners; (iv) a 3D visualization report is generated specifying the volumetric change, ROI regional and connectivity changes, etc. to the user as a .pdf or interactive format files for estimation, diagnosis, and prognosis.
[0049] In an exemplary embodiment, an automated 3D surgical planning visualization pipeline includes (i) outputs from the brain tumor segmentation and surgical planning pipeline, in the form of DICOM and/or NIfTI (of sMRI, fMRI, DTI, etc.) and/or tabular data (consisting of ROI connectivity, activity, volumetric data, white matter tract atlas data, etc.), which are entered into the 3D visualization and reporting pipeline; (ii) the input DICOM and/or NIfTI format scans (of sMRI, fMRI, segmentation masks, etc.) are prepped for the 3D visualization pipeline. The scans are processed and converted from DICOM/NIfTI to 3D files using Python and Linux/shell scripts. The corresponding ROI data of the fMRI and sMRI pipeline is stored in variable values for further analysis and use; (iii) these 3D files are later rendered in 3D Unity and/or Dassault 3D visualization and simulation software. The data from the ROI values are given as inputs to enable the 3D visualization features, and displayed to provide real-time 3D visualization, interactive features, and brain simulation; (iv) finally, a 3D visualization report and in silico simulation is generated specifying the volumetric change, white matter tracts, differential tumor segmentation, tumor grade, etc. to the user in an interactive format for optimal surgical planning estimation and prognosis.
[0050] In an exemplary embodiment, an automated 3D in silico simulation for brain stimulation planning and visualization pipeline includes (i) outputs from the personalized brain stimulation using in silico simulation pipeline, in the form of DICOM and/or NIfTI (of sMRI, fMRI, DTI, etc.) and/or tabular data (consisting of ROI connectivity, activity, volumetric data, white matter tract atlas data, optimal brain ROI estimation for stimulation, etc.), which are entered into the 3D visualization and reporting pipeline; (ii) the input DICOM and/or NIfTI format scans (of sMRI, fMRI, segmentation masks, etc.) are prepped for the 3D visualization/simulation pipeline. The scans are processed and converted from DICOM/NIfTI to 3D files using Python and Linux/shell scripts. The corresponding ROI data of the fMRI and sMRI pipeline is stored in variable values for further analysis and use; (iii) these 3D files are later rendered and simulated (using dynamic causal modelling) in 3D Unity, the Virtual Brain, and/or Dassault 3D visualization and simulation software. The data from the ROI values are given as inputs to enable the 3D visualization and simulation features.
[0051] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 7, with reference to FIGS. 1 through 6. This schematic drawing illustrates a hardware configuration of the 3D visualizing server 108/computer system/computing device in accordance with the embodiments herein. The system includes at least one processing device CPU 10 that may be interconnected via a system bus 15 to various devices such as a random-access memory (RAM) 12, a read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 58 and program storage devices 50 that are readable by the system. The system can read the inventive instructions on the program storage devices 50 and follow these instructions to execute the methodology of the embodiments herein. The system further includes a user interface adapter 22 that connects a keyboard 28, a mouse 50, a speaker 52, a microphone 55, and/or other user interface devices such as a touch screen device (not shown) to the bus 15 to gather user input. Additionally, a communication adapter 20 connects the bus 15 to a data processing network 52, and a display adapter 25 connects the bus 15 to a display device 26, which provides a graphical user interface (GUI) 56 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0052] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the appended claims.

CLAIMS:
I/We Claim:
1. A system (100) for generating a three-dimensional (3D) visualization report of magnetic resonance imaging (MRI) data using a machine learning model (112) that enables to evaluate a plurality of health conditions of a subject (102), the system (100) comprising:
an imaging device (110) that comprises at least one of a camera, or a screen, wherein the imaging device (110) obtains an input file of the subject (102) that comprises at least one scan data and a plurality of attributes, wherein the scan data comprises at least one of a T1-weighted structural MRI (T1 sMRI) image, a T2-weighted structural MRI (T2 sMRI) image, a T1-weighted structural contrast MRI (T1wc MRI) image, a fluid-attenuated inversion recovery (FLAIR) image, diffusion-weighted imaging (DWI) and corresponding diffusion tensor imaging (DTI) scans, or a resting state functional MRI (RS fMRI) image, wherein the scan data is in a predefined format, wherein the predefined format of the scan data comprises at least one of digital imaging and communications in medicine (DICOM) format or neuroimaging informatics technology initiative (NIfTI) format;
a 3D visualizing server (108) that acquires the input file of the subject (102) from the imaging device (110), and processes, using the machine learning model (112), the input file, wherein the 3D visualizing server (108) comprises:
a memory that stores a database and the machine learning model (112);
a processor that is configured to execute the machine learning model (112) and is configured to,
convert the predefined format of the scan data into at least one of an object format file or a sliced format file by pre-processing the predefined format of the scan data of the subject (102);
characterized in that,

Documents

Application Documents

# Name Date
1 202041037529-STATEMENT OF UNDERTAKING (FORM 3) [31-08-2020(online)].pdf 2020-08-31
2 202041037529-PROVISIONAL SPECIFICATION [31-08-2020(online)].pdf 2020-08-31
3 202041037529-PROOF OF RIGHT [31-08-2020(online)].pdf 2020-08-31
4 202041037529-POWER OF AUTHORITY [31-08-2020(online)].pdf 2020-08-31
5 202041037529-FORM FOR STARTUP [31-08-2020(online)].pdf 2020-08-31
6 202041037529-FORM FOR SMALL ENTITY(FORM-28) [31-08-2020(online)].pdf 2020-08-31
7 202041037529-FORM 1 [31-08-2020(online)].pdf 2020-08-31
8 202041037529-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-08-2020(online)].pdf 2020-08-31
9 202041037529-EVIDENCE FOR REGISTRATION UNDER SSI [31-08-2020(online)].pdf 2020-08-31
10 202041037529-DRAWINGS [31-08-2020(online)].pdf 2020-08-31
11 202041037529-PostDating-(19-07-2021)-(E-6-195-2021-CHE).pdf 2021-07-19
12 202041037529-APPLICATIONFORPOSTDATING [19-07-2021(online)].pdf 2021-07-19
13 202041037529-DRAWING [01-03-2022(online)].pdf 2022-03-01
14 202041037529-CORRESPONDENCE-OTHERS [01-03-2022(online)].pdf 2022-03-01
15 202041037529-COMPLETE SPECIFICATION [01-03-2022(online)].pdf 2022-03-01
16 202041037529-FORM-9 [22-07-2022(online)].pdf 2022-07-22
17 202041037529-STARTUP [25-07-2022(online)].pdf 2022-07-25
18 202041037529-FORM28 [25-07-2022(online)].pdf 2022-07-25
19 202041037529-FORM 18A [25-07-2022(online)].pdf 2022-07-25
20 202041037529-FER.pdf 2022-09-28
21 202041037529-OTHERS [28-03-2023(online)].pdf 2023-03-28
22 202041037529-FER_SER_REPLY [28-03-2023(online)].pdf 2023-03-28
23 202041037529-CORRESPONDENCE [28-03-2023(online)].pdf 2023-03-28
24 202041037529-COMPLETE SPECIFICATION [28-03-2023(online)].pdf 2023-03-28
25 202041037529-CLAIMS [28-03-2023(online)].pdf 2023-03-28
26 202041037529-PatentCertificate06-09-2023.pdf 2023-09-06
27 202041037529-IntimationOfGrant06-09-2023.pdf 2023-09-06

Search Strategy

1 SearchHistory-202041037529E_02-08-2022.pdf

ERegister / Renewals

3rd: 09 Nov 2023

From 28/02/2023 - To 28/02/2024

4th: 09 Nov 2023

From 28/02/2024 - To 28/02/2025

5th: 18 Jan 2025

From 28/02/2025 - To 28/02/2026

6th: 18 Jan 2025

From 28/02/2026 - To 28/02/2027

7th: 18 Jan 2025

From 28/02/2027 - To 28/02/2028

8th: 18 Jan 2025

From 28/02/2028 - To 28/02/2029

9th: 18 Jan 2025

From 28/02/2029 - To 28/02/2030

10th: 18 Jan 2025

From 28/02/2030 - To 28/02/2031