Abstract: A method (300) for real-time analysis of dynamic data generated by an imaging modality during a scanning procedure includes receiving (302) dynamic data (114) corresponding to a subject. Furthermore, the method (300) includes receiving (304) a deep learning model configured to process the dynamic data (114), where the deep learning model is based on a neural network trained using previously acquired dynamic data. The method (300) includes processing (306) in real-time the dynamic data (114) using the deep learning model to generate at least one feature value representative of a medical status of the subject. Additionally, the method (300) includes automatically generating (308) a medical recommendation (132) corresponding to the medical status based on the at least one feature value. The method (300) also includes presenting (310) the medical recommendation (132) via a display unit (130) to facilitate provision of medical care to the subject.
Claims:
1. A method (300) for real-time analysis of dynamic data generated by an imaging modality during a scanning procedure, the method (300) comprising:
receiving (302), via a data acquisition unit (118), dynamic data (114) corresponding to a subject;
receiving (304), via the data acquisition unit (118), a deep learning model configured to process the dynamic data (114), wherein the deep learning model is based on a neural network trained using previously acquired dynamic data, and wherein the deep learning model is configured to determine at least one of a functional status of an anatomical region, a physiological status of a region of interest, and a clinical parameter for generating a clinical decision;
processing (306), in real-time, via a deep learning unit (120), the dynamic data (114) using the deep learning model to generate at least one feature value representative of a medical status of the subject;
automatically generating (308), via a recommendation unit (122), a medical recommendation (132) corresponding to the medical status based on the at least one feature value; and
presenting (310), via a display unit (130), the medical recommendation (132) to facilitate provision of medical care to the subject.
2. The method (300) as claimed in claim 1, wherein receiving the dynamic data (302) comprises receiving at least one of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data, dynamic susceptibility weighted magnetic resonance imaging data, multiple diffusion sensitivity factor (b-value) diffusion weighted imaging (multi-b-value-DWI) data, vector valued dynamic data at a single voxel, computed tomography dynamic data, ultrasound dynamic data, a dynamic contrast enhanced (DCE) gadolinium time-series, dynamic positron emission tomography (PET) data, intravoxel incoherent motion (IVIM) data, or combinations thereof.
3. The method (300) as claimed in claim 1, further comprising:
assessing, in real-time, sufficiency of the received dynamic data (114) for generating the medical recommendation (132); and
terminating the scanning procedure based on the assessed sufficiency of the received dynamic data (114).
4. The method (300) as claimed in claim 1, further comprising generating the deep learning model based on the previously acquired dynamic data, and wherein generating the deep learning model comprises training a Recurrent Neural Network (RNN) based on the previously acquired dynamic data.
5. The method (300) as claimed in claim 1, wherein generating the medical recommendation (132) comprises processing the dynamic data (306) without use of an analytic model, and wherein the analytic model is representative of one or more of a physical model, a physiological model, an anatomical model, and a chemical model corresponding to the dynamic data (114).
6. The method (300) as claimed in claim 1, further comprising generating a risk image (212) based on a plurality of parameters representative of a malignancy status corresponding to a plurality of locations in the region of interest.
7. The method (300) as claimed in claim 6, wherein generating the medical recommendation (132) comprises superimposing the dynamic data (114) on the risk image (212) to generate a composite image.
8. The method (300) as claimed in claim 1, wherein processing the dynamic data (306) comprises:
generating a shape of a perfusion curve corresponding to a portion in the region of interest; and
classifying the shape of the perfusion curve to determine the at least one feature value.
9. The method (300) as claimed in claim 1, wherein processing the dynamic data (306) comprises generating a binary value representative of a normal condition of a portion in the region of interest.
10. The method (300) as claimed in claim 1, wherein automatically generating (308) the medical recommendation (132) comprises:
identifying a medical record from the previously acquired dynamic data based on the at least one feature value; and
generating at least one of a medical treatment option and an expected medical outcome based on the medical record.
11. A system for real-time analysis of dynamic data (114) generated by an imaging modality during a scanning procedure, the system comprising:
a database unit (116) configured to store a plurality of deep learning models corresponding to a plurality of medical statuses, wherein the plurality of deep learning models is based on a neural network trained using previously acquired dynamic data;
a data acquisition unit (118) configured to:
receive dynamic data (114) corresponding to a subject;
receive a deep learning model of the plurality of deep learning models, wherein the deep learning model is configured to determine at least one of a functional status of an anatomical region, a physiological status of a region of interest, and a clinical parameter for generating a clinical decision;
a deep learning unit (120) communicatively coupled to the data acquisition unit (118) and configured to process, in real-time, the dynamic data (114) using the deep learning model to generate at least one feature value representative of a medical status of the subject;
a recommendation unit (122) communicatively coupled to the deep learning unit (120) and configured to automatically generate a medical recommendation (132) corresponding to the medical status based on the at least one feature value; and
a display unit (130) communicatively coupled to the recommendation unit (122) and configured to present the medical recommendation (132) to facilitate provision of medical care to the subject.
12. The system as claimed in claim 11, wherein the dynamic data (114) comprises at least one of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data, dynamic susceptibility weighted magnetic resonance imaging data, multiple diffusion sensitivity factor (b-value) diffusion weighted imaging (multi-b-value-DWI) data, vector valued dynamic data at a single voxel, computed tomography dynamic data, ultrasound dynamic data, dynamic contrast enhanced (DCE) gadolinium time-series data, dynamic positron emission tomography (PET) data, intravoxel incoherent motion (IVIM) data, or combinations thereof.
13. The system as claimed in claim 11, wherein the recommendation unit (122) is further configured to:
assess, in real-time, sufficiency of the received dynamic data (114) for generating the medical recommendation (132); and
terminate the scanning procedure based on the assessed sufficiency of the received dynamic data (114).
14. The system as claimed in claim 11, wherein the deep learning unit (120) is further configured to generate the plurality of deep learning models based on previously acquired dynamic data.
15. The system as claimed in claim 11, wherein the recommendation unit (122) is configured to process the dynamic data (114) without use of an analytic model, and wherein the analytic model is representative of one or more of a physical model, a physiological model, an anatomical model, and a chemical model corresponding to the dynamic data (114).
16. The system as claimed in claim 11, wherein the deep learning unit (120) is configured to generate a risk image (212) based on a plurality of parameters representative of a malignancy status at a plurality of locations in the region of interest.
17. The system as claimed in claim 16, wherein the recommendation unit (122) is configured to superimpose the dynamic data (114) on the risk image (212) to generate a composite image for display on the display unit (130).
18. The system as claimed in claim 11, wherein the deep learning unit (120) is configured to:
generate a shape of a perfusion curve corresponding to a portion in the region of interest; and
classify the shape of the perfusion curve to determine the at least one feature value.
19. The system as claimed in claim 11, wherein the deep learning unit (120) is configured to process the dynamic data (114) to generate a binary value representative of a normal condition of a portion in the region of interest.
20. The system as claimed in claim 11, wherein the recommendation unit (122) is configured to:
identify a medical record from the previously acquired dynamic data (114) based on the at least one feature value; and
generate at least one of a medical treatment option and an expected medical outcome based on the medical record.
Description:
BACKGROUND
[0001] Embodiments of the present specification relate generally to processing and evaluation of dynamic data, and more particularly relate to systems and methods for identifying a medical status using dynamic data acquired during a scanning procedure.
[0002] Medical imaging data is increasingly being used for diagnosis and treatment of health conditions such as, but not limited to, cancer conditions and artery diseases. Imaging techniques such as dynamic computed tomography (CT) and magnetic resonance imaging (MRI) generate large volumes of medical images having valuable diagnostic information. However, these images must be reviewed by medical professionals to derive useful diagnostic information, and such investigation is a laborious and time-consuming process.
[0003] Visual inspection of dynamic data in emerging applications is not practical due to the large number of voxels required to represent increased dynamic resolution. Conventionally, the dynamic data is quantified by one or more parameters using either a physics-based model or a physiological model. The parameters are processed to identify a pathology or ‘at-risk’ areas and to derive therapeutic decisions or guide surgical intervention. Often, the quantification is not standardized, and the resulting parameters exhibit high variability and poor consistency. Moreover, in many cases, the quantification requires intensive computational effort and may need to be performed offline. The burden of analyzing the parameters and determining a medical status falls on the medical professionals.
[0004] Automatic segmentation and analysis of medical image volumes is a promising and valuable tool used by the medical professionals for providing effective treatment plans for the patients. Machine learning techniques are frequently used to perform automatic segmentation and analysis of medical image volumes. However, conventional machine learning techniques require clinically relevant information to be extracted from the dynamic data. Conventionally, the clinically relevant information is manually generated by the medical professionals. The need for user intervention to generate training data sets invariably brings in subjectivity and quality issues.
[0005] Recently, deep learning techniques have been increasingly used for processing dynamic data in various medical applications. These techniques are well suited to machine learning tasks directed to systems that are extremely complex in terms of non-linearity modelling and computational requirements.
BRIEF DESCRIPTION
[0006] In accordance with one aspect of the present specification, a method for real-time analysis of dynamic data generated by an imaging modality during a scanning procedure is disclosed. The method includes receiving in real-time, via a data acquisition unit, dynamic data corresponding to a subject. Moreover, the method includes receiving, via the data acquisition unit, a deep learning model configured to process the dynamic data, where the deep learning model is based on a neural network trained using previously acquired dynamic data, and where the deep learning model is configured to determine at least one of a functional status of an anatomical region, a physiological status of a region of interest, and a clinical parameter for generating a clinical decision. Furthermore, the method includes processing in real-time, via a deep learning unit, the dynamic data using the deep learning model to generate at least one feature value representative of a medical status of the subject. In addition, the method includes automatically generating, via a recommendation unit, a medical recommendation corresponding to the medical status based on the at least one feature value. The method also includes presenting, via a display unit, the medical recommendation to facilitate provision of medical care to the subject.
[0007] In accordance with another aspect of the present specification, a system for real-time analysis of dynamic data generated by an imaging modality during a scanning procedure is disclosed. The system includes a database unit configured to store a plurality of deep learning models corresponding to a plurality of medical statuses, where the plurality of deep learning models is based on a neural network trained using previously acquired dynamic data. Moreover, the system includes a data acquisition unit configured to receive dynamic data corresponding to a subject. Additionally, the data acquisition unit is configured to receive a deep learning model of the plurality of deep learning models, where the deep learning model is configured to determine at least one of a functional status of an anatomical region, a physiological status of a region of interest, and a clinical parameter for generating a clinical decision. The system also includes a deep learning unit communicatively coupled to the data acquisition unit and configured to process, in real-time, the dynamic data using the deep learning model to generate at least one feature value representative of a medical status of the subject. Moreover, the system includes a recommendation unit communicatively coupled to the deep learning unit and configured to automatically generate a medical recommendation corresponding to the medical status based on the at least one feature value. The system also includes a display unit communicatively coupled to the recommendation unit and configured to present the medical recommendation to facilitate provision of medical care to the subject.
DRAWINGS
[0008] These and other features and aspects of embodiments of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0009] FIG. 1 is a diagrammatic illustration of a system for evaluating dynamic data acquired during a scanning procedure, in accordance with aspects of the present specification;
[0010] FIG. 2 is a schematic diagram illustrating a workflow for processing dynamic data by the system of FIG. 1, in accordance with aspects of the present specification;
[0011] FIG. 3 is a flow chart of a method for evaluating dynamic data, in accordance with aspects of the present specification;
[0012] FIGs. 4A-4D illustrate dynamic susceptibility MRI perfusion data used for generating a deep learning model to detect an ischemic stroke, in accordance with aspects of the present specification;
[0013] FIGs. 5A-5B illustrate a Diffusion Weighted Imaging (DWI) representation used for generating a deep learning model to detect a tumor condition, in accordance with aspects of the present specification;
[0014] FIGs. 6A-6E are graphs representing dynamic contrast enhanced MRI (DCE-MRI) data used for generating a deep learning model to detect tumor aggressiveness, in accordance with aspects of the present specification;
[0015] FIG. 7A is an image representative of brain glioma DCE data used for generating a deep learning model, in accordance with aspects of the present specification;
[0016] FIGs. 7B-7C are graphs representing tumor and non-tumor regions corresponding to FIG. 7A, in accordance with aspects of the present specification;
[0017] FIGs. 8A-8D are images illustrating effectiveness of dynamic data processing in differentiating high blood volume regions between tumor and blood vessels, in accordance with aspects of the present specification; and
[0018] FIGs. 9A-9B are graphical representations that illustrate consistency of data features generated by dynamic data processing, in accordance with aspects of the present specification.
DETAILED DESCRIPTION
[0019] As will be described in detail hereinafter, systems and methods for real-time analysis of dynamic data are presented. More particularly, systems and methods for identifying, in real-time, medical status of a subject such as a patient using the dynamic data acquired during a scanning procedure are presented.
[0020] The term ‘dynamic data’ refers to data representative of time varying physiological characteristics corresponding to an organ, a blood vessel, or a fluid movement within the body of a subject. The term ‘real-time’ refers to a time window corresponding to the rate of data generation, acquisition, and/or processing. The phrase ‘receiving in real-time’ refers to receiving data within a specified time duration from the instant of data generation, and the phrase ‘processing in real-time’ refers to processing the received data within a specified time duration. The term ‘medical status’ refers to a medical condition of a subject and, in particular, the medical condition of an anatomical organ of interest or a region of interest in the subject. The term ‘medical status’ may also refer to one of a functional status or a physiological status of the region of interest. The medical status is represented by one or more feature values referred to herein as ‘at least one feature value’. It may be noted that in some embodiments, it may be sufficient to use one feature value to represent the medical status. In other embodiments, a plurality of feature values may be used to represent the medical status. The terms ‘at least one feature value,’ ‘one or more feature values,’ and ‘a plurality of feature values’ are used equivalently and interchangeably. The at least one feature value may be a clinical parameter required for generating a clinical decision.
[0021] FIG. 1 is a diagrammatic illustration of a medical imaging modality 100 having a dynamic imaging system 104 for real-time analysis of dynamic data acquired during a scanning procedure, in accordance with aspects of the present specification. In one embodiment, the medical imaging modality 100 includes the dynamic imaging system 104 and an image data acquisition system 106. It may be noted that although the example of FIG. 1 depicts the dynamic imaging system 104 as a standalone unit, in certain embodiments, the dynamic imaging system 104 may be a part of the image data acquisition system 106 or vice versa.
[0022] In one embodiment, the image data acquisition system 106 may be a computed tomography (CT) imaging system configured to generate CT image data. In another embodiment, the image data acquisition system 106 is a magnetic resonance imaging (MRI) system configured to generate MRI data. In yet another embodiment, the image data acquisition system 106 is an ultrasound imaging system configured to generate ultrasound image data. In certain other embodiments, the image data acquisition system 106 may be a positron emission tomography (PET) imaging system or an X-ray imaging system configured to respectively generate PET image data or X-ray image data. It may be noted that the image data acquisition system 106 is not limited to any class of imaging equipment and may include any other imaging system.
[0023] The image data acquisition system 106 includes a gantry 102 configured to receive a patient table 110, perform imaging of a subject 112 such as a patient, and generate dynamic data 114 in real-time. In one embodiment, the dynamic data 114 is represented in a digital imaging and communications in medicine (DICOM) standard format.
[0024] Further, the dynamic imaging system 104 is configured to receive the dynamic data 114 from the image data acquisition system 106 and process the dynamic data 114 to generate, in real-time, at least one feature value 108. The feature value 108 is representative of a medical status of the subject 112. In one embodiment, the feature value 108 includes a plurality of parameters corresponding to a plurality of locations in a region of interest (ROI) in the subject 112. The plurality of parameters may be representative of a physiological condition such as, but not limited to, a malignancy condition and an arterial disease condition corresponding to the plurality of locations in the ROI in the subject 112. The dynamic imaging system 104 may further be configured to generate, in real-time, a risk image representative of a health condition of the ROI based on the plurality of parameters. The dynamic imaging system 104 may also be configured to visualize the risk image on a console such as a display unit 130.
[0025] In one embodiment, the dynamic data 114 is time varying image data. It may be noted that the dynamic data 114 may be represented as two-dimensional (2D) image data, three-dimensional (3D) image data, or four-dimensional (4D) image data. It may be noted that the 2D image is formed by a plurality of pixels arranged in a plane. Each of the plurality of pixels is represented as a real value representative of a grey value or as a triplet of real values representative of color values. Similarly, the 3D image may be formed by a plurality of voxels arranged in a volume. Each of the plurality of voxels may be represented as a vector of real values representative of a grey (color) value and an opacity value. A plurality of feature values corresponding to the dynamic data 114 may also be similarly represented as a 2D image, a 3D image, or a 4D image. In one embodiment, the dynamic data may also be representative of vector valued dynamic data. It may be noted that the vector valued dynamic data includes time-series dynamic data. In one example, the vector valued dynamic data in a time-variant system may be dynamic data indexed by time. In another example, the vector valued dynamic data in a diffusion experiment may be dynamic data indexed by a diffusion sensitization factor. The time-series dynamic data is a special case of the vector valued dynamic data with time as the index parameter.
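By way of a non-limiting illustration, the following Python sketch shows how such vector valued dynamic data may be organized in memory; the array shapes, the time-point count, and the b-values are illustrative assumptions rather than elements of the present specification.

```python
import numpy as np

# Illustrative sketch: 4D dynamic data as an array indexed by (time, slice, row, column).
# All shapes below are assumptions chosen only for demonstration.
n_time, n_slices, n_rows, n_cols = 40, 20, 128, 128
dynamic_data = np.zeros((n_time, n_slices, n_rows, n_cols), dtype=np.float32)

# The vector valued dynamic data at a single voxel is the 1D signal obtained by
# holding the spatial index fixed and varying the index parameter (here, time).
voxel_signal = dynamic_data[:, 10, 64, 64]  # time-series at one voxel

# In a diffusion experiment, the index parameter is the diffusion sensitization
# factor (b-value) rather than time.
b_values = np.array([0, 50, 100, 400, 800], dtype=np.float32)
dwi_data = np.zeros((len(b_values), n_slices, n_rows, n_cols), dtype=np.float32)
```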
[0026] Based on the image data acquisition system 106, the dynamic data 114 may include one of the X-ray image data, CT image data, MRI data, ultrasound image data, and the like. In certain other embodiments, the dynamic data 114 may also include dynamic susceptibility magnetic resonance imaging (MRI) perfusion data, dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data, diffusion weighted imaging magnetic resonance imaging (DWI-MRI) data, and perfusion weighted imaging magnetic resonance imaging (PWI-MRI) data. In further embodiments, the dynamic data 114 may include dynamic susceptibility weighted magnetic resonance imaging data, multiple diffusion sensitivity factor (b-value) diffusion weighted imaging (multi-b-value-DWI) data, vector valued dynamic data at a single voxel, computed tomography dynamic data, ultrasound dynamic data, dynamic contrast enhanced (DCE) gadolinium time-series data, dynamic PET data, and intravoxel incoherent motion (IVIM) data.
[0027] In a presently contemplated configuration, the dynamic imaging system 104 includes a database unit 116, a data acquisition unit 118, a deep learning unit 120, a recommendation unit 122, a processor unit 124, and a memory unit 126. The various units in the dynamic imaging system 104 are interconnected with each other through a communications bus 128.
[0028] The database unit 116 is configured to store a plurality of deep learning models. The deep learning models are in turn representative of non-linear models that are customized to aid in the determination of the medical status of the subject 112. The medical status includes, but is not limited to, a presence of a cancerous tissue, a coronary block, and other such disease conditions. In one embodiment, the deep learning models are generated offline based on historical or previously acquired dynamic datasets corresponding to the medical status of the subject 112. The historical dynamic datasets refer to previously acquired dynamic data and include medical records having dynamic data corresponding to previously performed scanning procedures. For example, the historical dynamic datasets may include images corresponding to a particular patient that have been acquired over a time period. The historical dynamic datasets further include known historical medical status and one or more historical parameters corresponding to the dynamic image data. The historical dynamic datasets may also include medical treatment options and corresponding medical outcome conditions.
[0029] In one embodiment, the deep learning models stored in the database unit 116 include, but are not limited to, neural network models, convolutional network models, and recursive neural network models. The deep learning model includes a processing scheme corresponding to the dynamic image data 114 of FIG. 1. In particular, the processing scheme is tailored/customized based on a desired format of the dynamic image data and includes a plurality of model parameters and processing steps. In general, the deep learning models are generated based on a subset of historical dynamic image datasets corresponding to a medical condition using training techniques. The trained deep learning models are configured to detect a medical status of the ROI.
[0030] Furthermore, in one embodiment, each deep learning model may be configured to detect a given medical status of the subject 112. By way of example, one deep learning model of the plurality of deep learning models is configured to detect an ischemic stroke using dynamic susceptibility MRI perfusion data corresponding to the brain region. Similarly, another deep learning model is configured to detect tumor aggressiveness using diffusion weighted imaging data corresponding to the prostate region. In a similar manner, yet another deep learning model is configured to detect neurological tumors based on DCE-MRI data corresponding to the brain region having neuroglia (neuron supporting tissues). Also, one of the deep learning models is configured to differentiate a non-tumor region from a tumor region based on bolus concentration curves corresponding to an ROI. Similarly, a deep learning model may be configured to determine tumor probability values associated with blood vessels using DCE curves corresponding to the blood vessels. In some embodiments, one of the deep learning models may be configured to generate a numerical value representative of the medical status. In other embodiments, the deep learning model is configured to generate a diagnosis corresponding to the medical status. It should be noted herein that one or more deep learning models may also be configured to suggest a treatment option, probability of success corresponding to the treatment option, and other such parameters associated with the medical status.
[0031] The dynamic imaging system 104 further includes the data acquisition unit 118 communicatively coupled to the image data acquisition system 106 and configured to receive, in real-time, the dynamic data 114 corresponding to the subject 112 generated during the scanning procedure. The data acquisition unit 118 is further configured to receive a deep learning model from the database unit 116. In one embodiment, one of the plurality of deep learning models is selected based on a type of received dynamic data 114 and a type of medical status to be determined. To that end, in some embodiments, the data acquisition unit 118 is configured to receive user preferences that aid in the selection of a corresponding deep learning model. Further, the data acquisition unit 118 is configured to select one of the plurality of deep learning models from the database unit 116 based on the received user preferences. In other embodiments, the processor unit 124 may be configured to automatically select one or more deep learning models from the plurality of deep learning models without user intervention. Moreover, in one embodiment, the deep learning model is generated based on a neural network trained using previously acquired dynamic data. Further, the deep learning model is configured to determine at least one of a functional status of an anatomical region, a physiological status of a region of interest, and a clinical parameter for generating a clinical decision.
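A minimal sketch of such model selection is shown below, assuming the stored models are indexed by the type of dynamic data and the medical status to be determined; the registry keys, file paths, and the select_model helper are hypothetical illustrations, not a defined interface of the database unit 116.

```python
# Hypothetical registry mapping (dynamic data type, medical status) to a stored
# model; entries and paths are illustrative assumptions only.
MODEL_REGISTRY = {
    ("DSC-MRI", "ischemic_stroke"): "models/dsc_stroke_rnn.pt",
    ("DWI", "tumor_aggressiveness"): "models/dwi_prostate_rnn.pt",
    ("DCE-MRI", "neuro_tumor"): "models/dce_glioma_gru.pt",
}

def select_model(data_type: str, medical_status: str) -> str:
    """Return the stored model matching the received dynamic data type and the
    medical status to be determined; user preferences could narrow the choice."""
    return MODEL_REGISTRY[(data_type, medical_status)]
```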
[0032] Moreover, in certain embodiments, the database unit 116 is also configured to store the dynamic data 114 generated by the gantry 102. It may be noted that in one embodiment, the database unit 116 stores the dynamic data 114 in the DICOM format. In one embodiment, the database unit 116 employs a relational database management system (RDBMS) to support archiving of images that are reconstructed using the dynamic data 114. In another embodiment, the database unit 116 employs scalable, schema-less ‘not only structured query language’ (NoSQL) databases such as, but not limited to, Cassandra, CouchDB, MongoDB, BigTable, Redis, and HBase.
[0033] The dynamic imaging system 104 also includes a deep learning unit 120 communicatively coupled to the data acquisition unit 118 and configured to process the dynamic data 114 in real-time. More particularly, the deep learning unit 120 is configured to process the dynamic data 114 using the selected deep learning model to generate at least one feature value 108 representative of a medical status of the subject 112. The deep learning unit 120 is configured to use the processing scheme specified by the selected deep learning model and process the dynamic data 114 for generating the feature value 108. In addition, the deep learning unit 120 is also configured to generate one or more deep learning models based on the historical dynamic datasets. In one embodiment, the deep learning unit 120 includes one or more processors, software code, and other hardware elements such as, but not limited to, custom-made circuitry and off-the-shelf components.
[0034] In one embodiment, the feature value 108 may be a normalized numerical value representative of a probability of a medical status corresponding to a portion of the ROI. Non-limiting examples of the medical status include a malignancy condition of a tissue and a hemorrhage condition of a blood vessel. In another embodiment, the feature value 108 may be a binary value representative of a Bayesian category. Some examples of the Bayesian categories may include tissue categories such as a normal condition and a malignant condition of tissues within the ROI.
[0035] Further, the deep learning unit 120 is configured to generate a plurality of parameters corresponding to a plurality of spatial locations in the ROI. In one embodiment, the plurality of parameters is determined based on the feature value 108. In one embodiment, the parameters may be representative of a malignancy status of tissue at the plurality of locations in the ROI.
[0036] Moreover, the deep learning unit 120 is configured to generate a risk image based on these parameters. In one embodiment, these parameters may have corresponding values that are representative of probability values. The probability values in turn may be representative of a likelihood of malignancy of tissues in the ROI. In another embodiment, the parameters may have a binary value. In this example, a “0” parameter value represents a normal condition of a portion of the ROI and a “1” parameter value represents an abnormal condition of the portion of the ROI.
[0037] Additionally, in certain embodiments, the risk image is formed as a 2D image having pixel values selected from the plurality of parameter values. It may be noted that if the plurality of parameters includes time varying parameters, the risk image is formed as a time varying 2D image. In another embodiment, the risk image may be formed as a 3D image having a plurality of voxel values selected from the plurality of parameter values derived from a 3D dynamic image dataset.
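The following sketch illustrates one way a risk image may be assembled from the plurality of parameter values, assuming one probability-like parameter per pixel; the image dimensions and the 0.5 cut point are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: per-location parameter values reshaped into a risk image.
n_rows, n_cols = 128, 128
# Stand-in for the per-location malignancy likelihoods produced by the model.
parameters = np.random.rand(n_rows * n_cols).astype(np.float32)

# Probability-valued risk image: each pixel holds a likelihood of malignancy.
risk_image = parameters.reshape(n_rows, n_cols)

# Binary risk image: 0 marks a normal portion, 1 marks an abnormal portion.
binary_risk_image = (risk_image > 0.5).astype(np.uint8)
```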
[0038] Further, in one embodiment, the deep learning unit 120 may be configured to process an image in the dynamic data 114 using a recurrent neural network (RNN). In another embodiment, the deep learning unit 120 may be configured to process an image in the dynamic data 114 using a recursive network. In yet another embodiment, the deep learning unit 120 may be configured to process an image in the dynamic data 114 using a convolutional neural network (CNN). The CNN may include a plurality of convolution stages and each convolution stage may include a convolution layer, an activation layer, and a pooling layer that are operatively coupled in a cascading manner. Each of the convolution layer, the activation layer, and the pooling layer includes a corresponding plurality of layer parameters. The number of stages and the number of layer parameters may be selected to model at least one feature value 108 of the dynamic data 114.
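A minimal sketch of such a cascaded CNN, written here in PyTorch, is given below; the channel counts, the two-stage depth, and the pooled regression head are illustrative assumptions rather than the model prescribed by the specification.

```python
import torch
import torch.nn as nn

# Sketch of the cascaded convolution stages described above: each stage
# operatively couples a convolution layer, an activation layer, and a
# pooling layer. Stage count and channel counts are illustrative.
class DynamicDataCNN(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # stage 1
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),           # stage 2
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Map the pooled feature maps to a single feature value per image.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.stages(x))

feature_value = DynamicDataCNN()(torch.randn(1, 1, 64, 64))  # one 64x64 image
```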
[0039] As previously noted, the deep learning unit 120 is also configured to generate the deep learning models based on the historical dynamic datasets. Specifically, the deep learning unit 120 is configured to select a neural network structure. The selected neural network structure is trained using training image data from the historical dynamic datasets. The training image data includes a plurality of training images and at least one feature value corresponding to each of the plurality of training images. Moreover, the training image data may be selected corresponding to a specific medical condition. The at least one feature value corresponding to the training image data may be useful in diagnosing the specific medical condition.
[0040] The dynamic imaging system 104 also includes the recommendation unit 122 communicatively coupled to the deep learning unit 120 and configured to generate a medical recommendation 132 corresponding to the medical status based on the feature value 108. In one embodiment, the medical recommendation 132 may be based on a diagnosis of a disease condition. In this example, the recommendation unit 122 is configured to generate a diagnosis of the disease condition, if any. In particular, in one example, the recommendation unit 122 is configured to generate the diagnosis of the disease condition by comparing the feature value 108 with a pre-determined threshold value. Some examples of the medical recommendation 132 include an additional scan, a blood test, a biopsy, and the like. In another example, the medical recommendation 132 may be a drug prescription for a diagnosed disease/medical condition.
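A minimal sketch of such threshold-based diagnosis generation follows, assuming a normalized feature value in [0, 1]; the thresholds and recommendation strings are hypothetical illustrations, not prescribed outputs of the recommendation unit 122.

```python
# Illustrative sketch: compare the feature value with pre-determined thresholds
# to generate a medical recommendation. Cut points are assumptions.
def recommend(feature_value: float, threshold: float = 0.7) -> str:
    if feature_value >= threshold:
        return "Suspicious finding: recommend biopsy and additional scan"
    if feature_value >= 0.4:
        return "Indeterminate finding: recommend follow-up imaging"
    return "No actionable finding"
```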
[0041] In another embodiment, the recommendation unit 122 is configured to identify one or more medical records from the historical dynamic dataset based on the feature value 108. In one embodiment, the one or more medical records are identified based on a similarity between the feature value 108 and a corresponding historical feature value. Further, the recommendation unit 122 is configured to generate the medical recommendation 132 in real-time. The medical recommendation 132 includes treatment options and expected medical outcomes based on the one or more medical records.
[0042] In accordance with aspects of the present specification, the recommendation unit 122 may include one or more expert systems configured to generate the medical recommendation based on the at least one feature value 108. Moreover, the expert systems may be configured to learn decision rules based on feedback provided by medical professionals. By way of example, the medical professionals receive the medical recommendation generated by the recommendation unit 122 and compare the recommendation with their assessment to provide feedback to the expert systems.
[0043] In one embodiment, the recommendation unit 122 is further configured to assess, in real-time, sufficiency of the received dynamic data for generating the medical recommendation. Further, the recommendation unit 122 is also configured to terminate the scanning procedure based on the assessed sufficiency of the received dynamic data. Specifically, in one example, the recommendation unit 122 may terminate the scanning when the received dynamic data volume exceeds a determined volume of dynamic data. In certain embodiments, the determined volume of dynamic data may be provided by a user. In this example, the sufficiency of the received dynamic data is determined based on the volume of received dynamic data 114. In another example, the recommendation unit 122 may terminate the scanning when the recommendation generated based on the received dynamic data is able to accurately represent the medical status of the subject. In this example, the sufficiency is determined based on the quality of the medical recommendation 132. In one embodiment, the quality of the medical recommendation 132 is determined by an algorithm using a quality metric value derived from the received dynamic data 114.
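The following sketch illustrates the two sufficiency rules described above, a user-provided data-volume budget and a quality-metric threshold; the quality_metric callable is a hypothetical stand-in for the algorithm that derives a quality metric value from the received dynamic data.

```python
from typing import Callable, List
import numpy as np

# Illustrative sketch of real-time sufficiency assessment for early termination
# of the scanning procedure. Both stopping rules are assumptions from the text.
def should_terminate(frames: List[np.ndarray], max_frames: int,
                     quality_metric: Callable[[List[np.ndarray]], float],
                     min_quality: float) -> bool:
    if len(frames) >= max_frames:                 # sufficiency by received data volume
        return True
    return quality_metric(frames) >= min_quality  # sufficiency by recommendation quality
```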
[0044] It may be noted that the recommendation unit 122 is configured to process the dynamic data 114 using only a pre-configured deep learning model and without use of an analytic model. The term ‘analytic model’ used herein refers to one or more of a physical model, a physiological model, an anatomical model, and a chemical model corresponding to the dynamic data 114. As will be appreciated, presently available systems use analytical models to generate corresponding recommendations. These analytical models generate the recommendations based on physical principles representative of the behavior of the physical system, physiological understanding of a physiological process, and anatomical information corresponding to an anatomy. However, the analytical models may not accurately represent the physical process and may not be able to capture the peculiarities of the dynamic data under consideration. The shortcomings of the presently available systems are circumvented by avoiding use of analytic models and via use of a deep learning model to generate the recommendation.
[0045] The display unit 130 is communicatively coupled to the dynamic imaging system 104. Further, the display unit 130 is configured to visualize the at least one feature value 108 to a user. Also, the display unit 130 is configured to present the medical recommendation 132 to the user in real-time, thereby facilitating provision of medical care to the subject 112. The medical recommendation 132 is used for providing medical care to the subject 112 in real-time. For example, during a CT scanning procedure, the dynamic imaging system 104 may generate a composite image by superimposing the dynamic data 114 on the risk image. In some embodiments, the risk image may be a binary image. The dynamic imaging system 104 may display the composite image on the display unit 130 in real-time. The medical professional performing the scanning procedure may adjust and/or modify the scanning procedure based on the displayed composite image. By way of example, the medical professionals may, in real-time, perform additional imaging of desired regions without discontinuing the ongoing scanning procedure.
[0046] The processor unit 124 includes at least one of a general-purpose computer, a graphics processing unit (GPU), a digital signal processor, and a controller. In other embodiments, the processor unit 124 includes a customized processor element such as, but not limited to, an application-specific integrated circuit (ASIC) and a field-programmable gate array (FPGA). The processor unit 124 may be further configured to receive commands and/or parameters from an operator via a console that has a keyboard or a mouse or any other input device for generating the at least one feature value 108. In certain embodiments, the processor unit 124 may be configured to automatically aid in the generation of the at least one feature value 108. In some embodiments, the processor unit 124 may be configured to aid one or more of the data acquisition unit 118, the deep learning unit 120, and the recommendation unit 122 in performing their respective functions. The processor unit 124 may include more than one processor co-operatively working with each other for performing intended functionalities. The processor unit 124 is further configured to store (retrieve) contents into (from) the memory unit 126. In one embodiment, the processor unit 124 is configured to initiate and control the functionality of one or more of the data acquisition unit 118, the deep learning unit 120, and the recommendation unit 122.
[0047] In one embodiment, the processor unit 124 is communicatively coupled to the data acquisition unit 118 and configured to assist the data acquisition unit 118 in receiving historical dynamic datasets from the memory unit 126. The processor unit 124 is further configured to assist the deep learning unit 120 in generating the plurality of deep learning models based on the historical dynamic datasets. In one embodiment, the deep learning unit 120 is configured to generate the deep learning model. Generation of the deep learning model by the deep learning unit 120 entails selecting a neural network structure and training a plurality of deep learning model parameters corresponding to the selected neural network structure based on previously acquired dynamic data. Specifically, generating each deep learning model includes training a Recurrent Neural Network (RNN) based on the previously acquired dynamic data. Subsequent to the training phase, the deep learning model is stored in the database unit 116.
[0048] In one embodiment, the memory unit 126 is a random-access memory (RAM), read only memory (ROM), flash memory, or any other type of computer readable memory accessible by at least one of the data acquisition unit 118, the deep learning unit 120, and the recommendation unit 122. Also, in certain embodiments, the memory unit 126 may be a non-transitory computer readable medium encoded with a program having a plurality of instructions to instruct at least one of the data acquisition unit 118, the deep learning unit 120, and the recommendation unit 122 to perform a sequence of steps to generate at least one feature value 108. The program may further instruct the recommendation unit 122 to generate a medical recommendation 132 corresponding to the medical status based on the at least one feature value 108. Further, the program also instructs the recommendation unit 122 to present the medical recommendation 132 via the display unit 130 such that a user may use the recommendation 132 for providing medical care to the subject 112.
[0049] The medical imaging modality 100 as described hereinabove provides a robust technique for enhancing the processing of the dynamic data to provide, in real-time, clinically relevant information to the medical professional/clinician. The clinically relevant information may include an “at-risk status of the tissue” for the pathology being studied. Based on the information provided in real-time by the medical imaging modality 100, the clinician may further analyze the data corresponding to the risk areas using advanced data collection methods or offline quantification tools. Additionally, the medical imaging modality 100 provides deep learning models for processing different types of dynamic data to provide real-time information based on the continuously acquired and reconstructed dynamic data, while circumventing the need for any offline processing. Consequently, the impact of imaging systems that include the exemplary dynamic imaging system 104, not only in routine scanning but also in other medical care suites such as surgery planning and guidance, treatment delivery, and radiation planning, may be advantageously improved.
[0050] FIG. 2 is a schematic diagram illustrating a workflow 200 for processing dynamic data by the system 100 of FIG. 1, in accordance with aspects of the present specification. As depicted in FIG. 2, an image data acquisition system 202 such as the image data acquisition system 106 of FIG. 1 is configured to generate dynamic data 204 in real-time. The workflow 200 of FIG. 2 is described with reference to the components of FIG. 1.
[0051] As indicated by block 206, an anatomical image of the dynamic data 204 is processed by a deep learning model to generate at least one feature value 208 that is representative of a medical status of the subject 112. The deep learning model is one of a plurality of deep learning models stored in a data repository such as the database unit 116. As previously noted, the medical imaging modality 100 is configured to generate the deep learning models and store the deep learning models in the database unit 116. The real-time dynamic data 204 is processed by the deep learning model to generate one or more feature values 208 corresponding to an ROI in the subject 112. Block 206 may be performed by the deep learning unit 120.
[0052] Furthermore, at block 210, a risk image 212 is generated, in real-time, based on these feature values 208. The deep learning unit 120 may be employed to generate the risk image 212. In one embodiment, the risk image 212 is representative of a malignancy status of tissues at a plurality of locations in the ROI. In another embodiment, the risk image 212 is representative of progress of a disease at the plurality of locations in the ROI. The risk image 212 is visualized, in real-time, on a display device such as the display unit 130 of FIG. 1. In one embodiment, the deep learning unit 120 may be used to generate the risk image 212 using a plurality of parameters corresponding to a plurality of spatial locations in the ROI. These parameters are determined based on the feature value 108. Further, a composite image is generated by superimposing the risk image 212 on the anatomical image and is displayed on the display unit 130. The risk image 212 may include a quantification of a medical status, categorization of tissue regions, progress of a disease, and other such indicators of an underlying medical status of the subject 112.
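One possible realization of the composite image is a color overlay of the risk image on the anatomical image, as sketched below; the red-channel encoding and the blending weight are illustrative choices, not a prescribed visualization.

```python
import numpy as np

# Illustrative sketch: superimpose a risk image on the anatomical image by
# color-mapping the risk values and alpha-blending the two images.
def composite(anatomical: np.ndarray, risk: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """anatomical and risk are 2D float arrays normalized to [0, 1]."""
    rgb = np.stack([anatomical] * 3, axis=-1)  # grayscale anatomy as RGB
    overlay = np.zeros_like(rgb)
    overlay[..., 0] = risk                     # risk shown in the red channel
    return (1.0 - alpha) * rgb + alpha * overlay
```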
[0053] Additionally, as indicated by block 214, the medical imaging modality 100 is configured to generate a medical recommendation 132 based on the feature values 208. Some non-limiting examples of the medical recommendation 132 include surgery planning 216, therapy delivery 218, report generation 220, and other treatment options. In one example, the feature values 208 may be used for an effective surgery planning 216. In another example, the feature values 208 may be used for therapy delivery 218. In yet another example, the feature values 208 may be used for recording and reporting of the medical status 220. The recommendation unit 122 of FIG. 1 is used for generating the medical recommendation 132.
[0054] FIG. 3 is a flow chart of a method 300 for real-time analysis of dynamic data, in accordance with aspects of the present specification. The method is described with reference to the components of FIGs. 1-2. The method 300 includes receiving, in real-time, dynamic data 114 corresponding to the subject 112, as indicated by step 302. The dynamic data 114 is generated by an imaging modality such as the image data acquisition system 106 during a scanning procedure. The dynamic imaging system 104 and in particular, the data acquisition unit 118 is configured to receive the dynamic data 114 from the image data acquisition system 106. The dynamic data 114 includes at least one of DCE-MRI data, Dynamic Contrast Enhancement Diffusion Weighted Imaging data (DCE-DWI), computed tomography dynamic data, ultrasound dynamic data or combinations thereof. In another embodiment, step 302 entails receiving DCE gadolinium time-series data.
[0055] Further, the method 300 includes receiving a deep learning model configured to process the received dynamic data, as indicated by step 304. In certain embodiments, the received dynamic data may include vector valued dynamic data. The deep learning model is based on a neural network trained using previously acquired dynamic data.
[0056] In one embodiment, the deep learning model is one of a plurality of deep learning models generated by the deep learning unit 120. Generation of the deep learning models is presented in steps 312-316. Referring now to step 312, historical dynamic data is received from the memory unit 126. Subsequently, at step 314, a plurality of deep learning models is built by training a neural network using the historical dynamic data. Some non-limiting examples of the deep learning models include a Recurrent Neural Network (RNN) model, a recursive deep learning network model, and a convolutional neural network model. In accordance with aspects of the present specification, a set of medical cases corresponding to a medical status is used to generate each of the plurality of deep learning models. Each medical case in the set includes input data and corresponding desired output data. The set of medical cases is used to train the plurality of deep learning parameters of each of the plurality of deep learning models. Furthermore, at step 316, the plurality of deep learning models is stored in the memory unit 126 to be used for real-time processing of the dynamic data 114.
[0057] Turning now to step 306, in one embodiment, the dynamic data is processed in real-time via the deep learning model. In particular, the dynamic data 114 is processed by the deep learning model to generate at least one feature value 108 that is representative of a medical status of the subject 112. In addition, the dynamic data 204 is also processed to generate a risk image such as the risk image 212 corresponding to a plurality of locations in the ROI based on a plurality of feature values 208. The risk image 212 in turn includes a plurality of risk probability values representative of a malignancy status of tissues at the plurality of locations in the ROI. In another embodiment, at step 306, the dynamic data 114 may be processed to generate a shape of a perfusion curve corresponding to a portion in the ROI. In addition, the shape of the perfusion curve is classified to identify tissue categories. It may be noted that the method 300 is not restricted to the generation of a perfusion curve and is applicable to any other dynamic curve corresponding to a portion of the ROI.
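To make the shape-classification step concrete, the following sketch classifies a perfusion curve with a simple hand-written rule based on the bolus-induced signal dip illustrated in FIGs. 4C-4D; in the method 300 this classification is performed by the trained deep learning model, and the 20% cut point here is purely an illustrative assumption.

```python
import numpy as np

# Illustrative stand-in for deep-learning-based perfusion-curve classification:
# normal tissue shows a pronounced early-bolus valley, while infarcted tissue
# shows a relatively flat curve.
def classify_perfusion_curve(curve: np.ndarray) -> str:
    """curve: 1D signal-intensity time-series for one portion of the ROI."""
    dip = curve[0] - curve.min()  # depth of the bolus-induced signal valley
    return "normal" if dip > 0.2 * curve[0] else "infarcted"
```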
[0058] Moreover, at step 308, a medical recommendation 132 corresponding to the medical status is automatically generated based on the at least one feature value 108. In one embodiment, the medical recommendation 132 may be generated based on historical data. The recommendation unit 122 is used to generate the medical recommendation 132. To that end, one or more medical cases that are substantially similar to the medical status represented by the at least one feature value 108 may be identified and/or retrieved from a medical database. These substantially similar cases may be used to generate the medical recommendation 132. In this example, the medical recommendation 132 may include treatment options and expected medical outcomes that are identified based on the similar cases. In another embodiment, the recommendation unit 122 is employed to superimpose the dynamic data 114 over the risk image 212 to generate a composite image. The composite image may be compared with a predefined template image to generate the medical recommendation 132. The risk image 212 may also be provided to the medical professionals to aid in facilitating a decision regarding the treatment options.
[0059] At step 310, the medical recommendation is presented to the medical professional, in real-time. In one embodiment, the medical recommendation may be visualized on the display unit 130. The medical recommendation may be used by the medical professional for providing medical care to the subject 112 in real-time. In one embodiment, providing the medical care may include performing at least one of a biopsy planning, a surgery guidance, a therapy planning, a treatment delivery, a radiation planning, a data acquisition task from the ROI, or combinations thereof.
[0060] FIGs. 4A-4D illustrate dynamic data such as historical dynamic data used to generate a deep learning model, in accordance with aspects of the present specification. The dynamic data presented in FIG. 4A includes MRI diffusion data and FIGs. 4B-4D depict dynamic susceptibility MRI perfusion data.
[0061] FIG. 4A is an image 400 generated using diffusion weighted imaging (DWI) data corresponding to a stroke affected brain region. The image 400 includes a bright region 402 that is representative of infarcted tissues affected by an ischemic stroke. FIG. 4B is an image 404 generated using perfusion weighted image (PWI) data corresponding to the stroke affected brain image data illustrated in FIG. 4A. The image 404 includes a first region 406 representative of healthy tissues and a second region 408 representative of infarcted tissues affected by stroke.
[0062] FIG. 4C is a graph 410 of perfusion curves 412 having an x-axis 414 representative of a bolus time index and a y-axis 416 representative of signal strength in arbitrary units (AU). The perfusion curves 412 correspond to the first region 406 of FIG. 4B. As depicted in FIG. 4C, the perfusion curves 412 include a valley region 418. The valley region 418 corresponds to an early bolus and is generally representative of normal tissues.
[0063] FIG. 4D is a graph 420 of perfusion curves 422 corresponding to infarcted tissue due to a stroke. The graph 420 includes an x-axis 424 representative of a bolus time index and a y-axis 426 representative of signal strength in arbitrary units (AU). As depicted in FIG. 4D, the perfusion curves 422 of the graph 420 correspond to the second region 408 of FIG. 4B. The perfusion curves 422 are relatively flat in a region 428. This region 428 is generally representative of an infarcted region.
[0064] In one embodiment, pixels of the images 400, 404 may be used for training a deep learning model. In another embodiment, a plurality of samples from the perfusion curves 412, 422 is used for training the deep learning model. In one embodiment, the deep learning model may be trained to provide a feature value that is representative of a categorization of a tissue as normal or malignant. In another embodiment, the deep learning model may be trained to provide a feature value that is representative of a degree of malignancy of a tissue. The trained deep learning model is capable of detecting an ischemic stroke condition in the ROI. In addition, the trained deep learning model may be used to process a dynamic data set acquired from a new patient to generate at least one feature value representative of a medical condition of the new subject.
[0065] FIGs. 5A and 5B illustrate Diffusion Weighted Imaging (DWI) representations of image data used to generate a deep learning model to detect a tumor condition, in accordance with aspects of the present specification. In particular, FIG. 5A is an image 500 in a DWI format and is representative of diffusion of water molecules in a region of interest. The image 500 includes a first region 502 that is representative of tumor tissues and a second region 504 that is representative of normal tissues. It may be observed that the first region 502 is brighter compared to the second region 504, thereby enabling visual inspection of tumor regions.
[0066] FIG. 5B is a graph 506 having a first plurality of diffusion curves 512, 514 that is generally representative of tumor tissues and a second plurality of diffusion curves 516 that is representative of normal tissues. Reference numeral 508 represents an x-axis of b-factor values, and reference numeral 510 represents a y-axis of signal strength values. It may be seen that the first plurality of diffusion curves 512, 514 exhibits greater variations compared to the closely spaced second plurality of diffusion curves 516. In one embodiment, the diffusion curves of FIG. 5B corresponding to a plurality of subjects may be used for training a deep learning model. In one embodiment, the deep learning model may be configured to provide a feature value representative of categorization of a tissue as normal or malignant. In another embodiment, the deep learning model may be configured to provide a feature value representative of a degree of malignancy of a tissue.
[0067] FIGs. 6A-6E respectively illustrate graphs 600, 602, 604, 606, 608 of dynamic contrast enhanced (DCE-MRI) data corresponding to an ROI in a subject and used to generate a deep learning model to detect tumor aggressiveness, in accordance with aspects of the present specification. In FIGs. 6A-6E, reference numeral 610 generally represents an x-axis with time units and reference numeral 612 represents a y-axis with signal intensity units. The graphs 600, 602, 604, 606, 608 represent the contrast response (referred to herein as ‘enhancement’) of different types of tissue cells to a contrast agent.
[0068] The graph 600 of FIG. 6A includes a flat curve having no enhancement and is representative of normal tissue. Also, the graph 602 of FIG. 6B includes a substantially linear curve having a slow sustained enhancement and is representative of a slow-growing tumor type.
[0069] Similarly, the graph 604 depicted in FIG. 6C includes a first piecewise linear curve. In particular, a first portion of the piecewise linear curve has a rapid initial enhancement, while a second portion of the piecewise linear curve has a sustained late enhancement. The graph 604 is representative of a mildly aggressive type of tumor tissue. Moreover, the graph 606 illustrated in FIG. 6D includes a second piecewise linear curve. In the example of FIG. 6D, a first portion of the piecewise linear curve has a rapid initial enhancement, while a second portion of the piecewise linear curve has a stable late enhancement. This type of graph is representative of an aggressive type of tumor tissue. The graph 608 of FIG. 6E includes a third piecewise linear curve. In the example of FIG. 6E, a first portion of the piecewise linear curve has a rapid initial enhancement, while a second portion of the piecewise linear curve has a decreasing late enhancement. This type of graph is representative of the most aggressive tumor type.
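The five enhancement patterns of FIGs. 6A-6E can be expressed as simple piecewise-linear functions. The sketch below generates illustrative archetype curves; the breakpoint and slope values are assumptions chosen only to reproduce the qualitative shapes described above:

```python
import numpy as np

def enhancement_curve(kind, t):
    """Piecewise-linear DCE enhancement archetypes, per FIGs. 6A-6E.

    't' is a normalized time axis in [0, 1]. The breakpoint and slopes
    are illustrative values, not taken from the specification.
    """
    t_break = 0.2  # assumed end of the initial (wash-in) phase
    early = np.minimum(t, t_break) / t_break           # rises to 1 by t_break
    late = np.maximum(t - t_break, 0.0) / (1.0 - t_break)
    if kind == "normal":            # flat curve, no enhancement
        return np.zeros_like(t)
    if kind == "slow":              # slow sustained (linear) enhancement
        return 0.5 * t
    if kind == "mild":              # rapid wash-in, sustained late rise
        return early + 0.3 * late
    if kind == "aggressive":        # rapid wash-in, stable late plateau
        return early
    if kind == "most_aggressive":   # rapid wash-in, decreasing late phase
        return early - 0.4 * late
    raise ValueError(kind)

t = np.linspace(0.0, 1.0, 50)
curves = {k: enhancement_curve(k, t)
          for k in ("normal", "slow", "mild", "aggressive", "most_aggressive")}
```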
[0070] In one embodiment, a plurality of contrast response curves, such as the curves of the graphs 600-608 of FIGs. 6A-6E, corresponding to a plurality of subjects may be used for training a deep learning model. In one embodiment, the deep learning model may be configured to provide a discrete feature value that categorizes a tissue as normal tissue or as slow-growing, mildly aggressive, aggressive, or very aggressive malignant tissue. In another embodiment, the deep learning model may be configured to provide a continuous, real-valued feature value representative of the malignancy of the tissue. In one example, the feature value may take any fractional value between zero and one, with the extreme values respectively representing a normal tissue and a malignant tissue. By way of example, a value of “0” may be indicative of normal tissue and a value of “1” may be indicative of the most aggressive type of malignant tissue.
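In code, the two output embodiments differ only in the final layer of the network. The following minimal PyTorch sketch shows a five-way classification head alongside a continuous malignancy-score head; the hidden size and layer choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

hidden_size = 100  # illustrative; matches the GRU width mentioned below

# Discrete embodiment: five tissue categories
# (normal, slow-growing, mildly aggressive, aggressive, very aggressive).
class_head = nn.Linear(hidden_size, 5)  # pair with a cross-entropy loss

# Continuous embodiment: a single malignancy score in (0, 1),
# where 0 ~ normal tissue and 1 ~ the most aggressive malignant tissue.
score_head = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())

features = torch.randn(8, hidden_size)  # e.g. final hidden states of an RNN
class_logits = class_head(features)     # shape (8, 5)
malignancy = score_head(features)       # shape (8, 1), values in (0, 1)
```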
[0071] FIG. 7A is an image 700 that is generated using a DCE-MRI dataset corresponding to a brain glioma. In accordance with aspects of the present specification, the DCE-MRI dataset corresponding to the brain glioma is used for generating a deep learning model. The image 700 includes a normal region 702 and a tumor-like region 704. In one example, the training data for generating the deep learning model includes four DCE-MRI datasets, where each dataset has 120,000 voxels. The training datasets correspond to a plurality of neuro-tumor cases. The normal and tumor-like voxels are labelled by a trained radiologist. The deep learning model for use in the detection of malignancy is trained using the labelled training datasets. The trained deep learning model is tested using 14 datasets having tumor regions and normal regions. During the training phase and the testing phase, DCE gadolinium concentration time-series data is processed by the deep learning model to generate one of a ‘normal’ label and a ‘tumor’ label at a voxel level. In this embodiment, the deep learning model corresponds to gated recurrent units (GRUs), a variant of the recurrent neural network (RNN). The GRU network has 100 hidden units and is trained using an adaptive moment estimation (ADAM) optimizer with a high dropout rate of about 60%.
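A minimal PyTorch sketch of a GRU classifier of this kind is shown below. The 100 hidden units, approximately 60% dropout, ADAM optimizer, and binary ‘normal’/‘tumor’ output follow the description above; the dropout placement, learning rate, batch size, and synthetic data are assumptions:

```python
import torch
import torch.nn as nn

class VoxelGRUClassifier(nn.Module):
    """Classifies a per-voxel concentration time-series as normal or tumor."""

    def __init__(self, hidden_size=100, dropout=0.6):
        super().__init__()
        # Each time step carries one scalar concentration value.
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.dropout = nn.Dropout(dropout)          # assumed placement of the ~60% dropout
        self.head = nn.Linear(hidden_size, 2)       # logits for 'normal' vs 'tumor'

    def forward(self, x):                           # x: (batch, T, 1)
        _, h_n = self.gru(x)                        # h_n: (1, batch, hidden)
        return self.head(self.dropout(h_n[-1]))    # logits: (batch, 2)

model = VoxelGRUClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate assumed
criterion = nn.CrossEntropyLoss()

# One illustrative training step on synthetic curves (T = 40 bolus phases).
curves = torch.randn(32, 40, 1)       # per-voxel gadolinium concentration series
labels = torch.randint(0, 2, (32,))   # radiologist voxel labels (synthetic here)
optimizer.zero_grad()
loss = criterion(model(curves), labels)
loss.backward()
optimizer.step()
```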
[0072] FIG. 7B is a graph 706 that is representative of a plurality of DCE gadolinium concentration curves corresponding to a plurality of tumor tissues identified by the trained deep learning model. The graph 706 includes an x-axis 708 representative of a bolus phase number and a y-axis 710 representative of concentration values of tissues in response to a contrast agent. FIG. 7C is a graph 712 representative of a plurality of DCE gadolinium concentration curves corresponding to a plurality of normal tissues identified by the trained deep learning model. The graph 712 includes an x-axis 714 representative of a bolus phase number and a y-axis 716 representative of concentration values of tissues in response to the contrast agent. It may be observed that predicted curves corresponding to the tumor tissues in the graph 706 are distinguishable from the predicted curves corresponding to the normal tissues in the graph 712. Use of the DCE gadolinium concentration curves as described hereinabove corroborates the validity of the trained deep learning model.
[0073] FIGs. 8A, 8B, 8C and 8D are images illustrating the effectiveness of a deep learning model in identifying blood vessels, in accordance with aspects of the present specification. FIG. 8A is an image 800 corresponding to the last bolus phase of a DCE imaging procedure. The image 800 indicates blood vessels 802, such as veins, that have low tumor probability values. A deep learning model is used to identify the probability values corresponding to the blood vessels 802.
[0074] FIG. 8B is an image 804 representative of a tumor probability map generated by the deep learning model used to identify the probability values corresponding to the blood vessels 802 of FIG. 8A. In the image 804, dark pixels are representative of blood vessels.
[0075] Also, FIG. 8C is an image 806 representative of a volume transfer constant (Ktrans) map generated by conventional physiological modelling techniques. FIG. 8D is an image 808 representative of an extravascular extracellular volume (Ve) map corresponding to the image 806 of FIG. 8C.
[0076] It may be noted that while the use of the exemplary deep learning model results in a clear delineation of the blood vessels in the form of dark colored pixels in the image 804, the corresponding regions in FIG. 8C and FIG. 8D include both light colored pixels and dark colored pixels. Accordingly, FIGs. 8A and 8B demonstrate that, compared to the conventional techniques of FIGs. 8C and 8D, the deep learning model is more suitable for differentiating blood vessels from tumor regions, even though both are high blood volume areas.
[0077] FIGs. 9A and 9B illustrate a comparison of data features generated by a deep learning technique and by conventional techniques. FIG. 9A is a graph 900 representative of tumor probability values for a plurality of subjects determined by a deep learning model, in accordance with aspects of the present specification. The graph 900 includes an x-axis 902 representative of a subject identity code and a y-axis 904 representative of the tumor probability determined by the deep learning model. In this example, the graph 900 is a bar chart having comparable tumor probability values across the plurality of subjects. The comparable tumor probability values generated by the deep learning model aid in the selection of a uniform cut-off threshold for patient stratification in group studies. Such uniform threshold values provide an opportunity to automate the analysis of medical images and diagnosis.
[0078] FIG. 9B is a graph 906 representative of tumor probability values corresponding to a plurality of subjects determined by a conventional physiological model. The graph 906 is a bar chart having an x-axis 908 representative of a subject identity code and a y-axis 910 representative of the tumor probability determined by the conventional physiological model. As depicted in FIG. 9B, the bar chart exhibits large variations in the Ktrans-derived tumor probability values across the plurality of subjects. This variability of the Ktrans values does not lend itself to processing with a uniform cut-off threshold.
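The stratification step enabled by the comparable probabilities of FIG. 9A reduces to applying one fixed cut-off to every subject, as in the following minimal sketch; the threshold value and the example probabilities are illustrative assumptions:

```python
# Uniform cut-off stratification across subjects. Both the threshold and the
# per-subject probabilities below are illustrative, not values from FIG. 9A.
THRESHOLD = 0.5

subject_probability = {
    "S01": 0.82, "S02": 0.78, "S03": 0.15, "S04": 0.91, "S05": 0.22,
}
stratified = {sid: ("tumor" if p >= THRESHOLD else "normal")
              for sid, p in subject_probability.items()}
print(stratified)
```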
[0079] The systems and methods for evaluating dynamic data presented hereinabove enable processing of dynamic data using a deep learning model in real-time. Moreover, the systems and methods facilitate automated diagnosis of the underlying medical status in real-time, thereby avoiding repeated patient visits for scanning procedures. Additionally, use of the systems and methods provides a consistent diagnosis of the medical status, recommends treatment options, and provides an expected outcome of the treatment in real-time. Use of the systems and methods presented hereinabove replaces the subjective visual assessment step with a deep learning methodology that learns the shape and other characteristics of dynamic data and guides therapy or further scanning. By way of example, if a tumor is delineated using the systems and methods presented herein, the medical professional may request MR spectroscopy in that region to obtain a metabolic profile corresponding to suspicious areas during the same visit, without having to recall the patient for a subsequent scan at a later time, thereby helping the medical professional make informed decisions in an expeditious manner.
[0080] It is to be understood that not necessarily all such objects or advantages described above may be achieved in accordance with any particular embodiment. Thus, for example, those skilled in the art will recognize that the systems and techniques described herein may be embodied or carried out in a manner that achieves or improves one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
[0081] While the technology has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the specification is not limited to such disclosed embodiments. Rather, the technology can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the claims. Additionally, while various embodiments of the technology have been described, it is to be understood that aspects of the specification may include only some of the described embodiments. Accordingly, the specification is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.