Abstract: Use of deep-learning methods for estimation of quantitative perfusion maps has shown substantially improved performance, but such methods miss patient-specific information while making estimations. A few more techniques for denoising sinograms to enable low-dose imaging have been introduced, but some required a mapping between low-dose and high-dose images and some faced quality concerns. The present application provides methods and systems for generating enhanced perfusion maps using low-dose computed tomography (LD-CT) data. The system first performs self-supervised denoising of sinogram data by leveraging the statistical independence of noise in the measurement space using a self-supervised deep neural network (SS-DNN). The denoised sinogram data is further utilized to generate clean LD-CT images. Secondly, the system uses the denoising capability of another DNN, trained at a specific noise level, for generating perfusion maps based on the clean LD-CT images. Thirdly, the system uses the clean LD-CT images with yet another trained DNN for performing further enhancement of the perfusion maps. [To be published with FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of the invention:
AUTOMATED METHODS AND SYSTEMS FOR GENERATING ENHANCED PERFUSION MAPS USING LOW-DOSE COMPUTED
TOMOGRAPHY DATA
Applicant
Tata Consultancy Services Limited A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application claims priority from Indian provisional application no. 202121043509, filed on September 24, 2021. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD [002] The disclosure herein generally relates to medical imaging, and, more particularly, to automated methods and systems for generating enhanced perfusion maps using low-dose computed tomography data.
BACKGROUND
[003] Neurodegeneration is a process in which nerve cells/neurons that are present in the brain may lose function over time and ultimately die. As neurons are considered the building blocks of the human nervous system, the dying of the neurons may lead to the occurrence of neurodegenerative diseases (NDD), such as Parkinson's disease, Huntington's disease, dementia, and the like. The neurodegenerative diseases can result from several underlying causes, such as neurodegeneration due to accumulation of protein plaques, and injuries or tumors in the brain.
[004] X-ray computed tomography (CT) is an in vivo non-invasive method used for obtaining anatomical details of a human body, including the brain. CT generally involves use of ionizing radiation for generating tomographic images that may contain detailed information about the body part. Specifically, in CT-based perfusion imaging, which is a clinically established imaging method for detection of stroke, a clinical decision is reached via computation of perfusion maps from a set of dynamic contrast enhanced CT images. The series of CT scans that are acquired for computation of perfusion maps, such as the cerebral blood flow (CBF) map and other parameters like cerebral blood volume (CBV), mean transit time (MTT), and time to peak (TTP) that help in early detection of NDD, may sometimes result in higher deposition of ionizing radiation in patients. The deposition of such high doses of radiation is a cause of concern specifically for populations that are at a higher risk of radiation exposure, e.g., pregnant women, children, and young adults.
[005] To reduce the amount of radiation exposure, some techniques are available that use a lower dose of ionizing radiation for generating perfusion maps. However, perfusion maps obtained using low-dose techniques suffer from a poor signal-to-noise ratio. Further, some existing techniques rely on denoising the low-dose CT data (images) followed by conventional regularized deconvolution to enhance the quality of perfusion maps, as obtaining denoised perfusion maps is a must for improving the quality.
[006] Recently, deep neural networks (DNNs) have been used for learning a mapping between the perfusion maps obtained at low dose and the corresponding maps obtained at standard dose. However, DNN based methods are not robust to practical variations that are seen in real-world applications, as they risk missing critical information, which is important for real-world applications such as stroke imaging.
[007] Additionally, the techniques that are available are usually performed manually and lack end-to-end automation, which further increases the map generation time and also poses an obstacle to creating portable CT scanners.
SUMMARY [008] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for generating enhanced perfusion maps using low-dose computed tomography data. The method comprises receiving, by a perfusion map generation system (PMGS) via one or more hardware processors, low-dose computed tomography (LD-CT) sinogram data, the LD-CT sinogram data comprising one or more sinograms; performing, by the PMGS via the one or more hardware processors, denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN), the denoised LD-CT sinogram data comprising one or more denoised sinograms; creating, by the PMGS via the one or more hardware processors, one or more LD-
CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms; generating, by the PMGS via the one or more hardware processors, a perfusion image based on the one or more LD-CT images using an iterative framework, wherein the iterative framework comprises a trained second DNN; and enhancing, by the PMGS via the one or more hardware processors, the perfusion image to obtain an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using a trained third DNN.
[009] In another aspect, there is provided a perfusion map generation system (PMGS) for generating enhanced perfusion maps using low-dose computed tomography data. The system comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive low-dose computed tomography (LD-CT) sinogram data, the LD-CT sinogram data comprising one or more sinograms; perform denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN), the denoised LD-CT sinogram data comprising one or more denoised sinograms; create one or more LD-CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms; generate a perfusion image based on the one or more LD-CT images using an iterative framework, wherein the iterative framework comprises a trained second DNN; and enhance the perfusion image to obtain an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using a trained third DNN.
[010] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause a method for generating enhanced perfusion maps using low-dose computed tomography data. The method comprises receiving, by a perfusion map generation system (PMGS) via one or more hardware processors, low-dose computed
tomography (LD-CT) sinogram data, the LD-CT sinogram data comprising one or more sinograms; performing, by the PMGS via the one or more hardware processors, denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN), the denoised LD-CT sinogram data comprising one or more denoised sinograms; creating, by the PMGS via the one or more hardware processors, one or more LD-CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms; generating, by the PMGS via the one or more hardware processors, a perfusion image based on the one or more LD-CT images using an iterative framework, wherein the iterative framework comprises a trained second DNN; and enhancing, by the PMGS via the one or more hardware processors, the perfusion image to obtain an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using a trained third DNN.
[011] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[012] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[013] FIG. 1 illustrates an exemplary block diagram of a system for generating enhanced perfusion maps using low-dose computed tomography (LD-CT) data, in accordance with an embodiment of the present disclosure.
[014] FIG. 2 illustrates an exemplary flow diagram of a method for generating enhanced perfusion maps based on LD-CT data using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
[015] FIG. 3 illustrates a schematic block diagram representation of a sinogram denoising process followed for obtaining denoised sinogram data using a
trained first deep neural network (DNN), in accordance with an embodiment of the present disclosure.
[016] FIG. 4 illustrates a schematic representation of the iterative framework used for generating the sample perfusion image based on the one or more sample LD-CT images, in accordance with an embodiment of the present disclosure.
[017] FIG. 5 illustrates a schematic representation of a training process for training a second DNN to generate the enhanced perfusion image, in accordance with an embodiment of the present disclosure.
[018] FIG. 6 is a pictorial representation illustrating cerebral blood flow (CBF) maps and corresponding error maps obtained by applying a plurality of perfusion map generation techniques, in accordance with an embodiment of the present disclosure.
[019] FIG. 7 is a pictorial representation illustrating CBF maps obtained by applying the plurality of perfusion map generation techniques, in accordance with another embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [020] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[021] Nowadays, acute ischemic stroke (AIS) (a type of NDD) is becoming a common cause of mortality and morbidity all over the world. Computed tomography perfusion (CTP) imaging is a widely established technique for clinical assessment and treatment of time-critical medical conditions, such as AIS, as it enables fast diagnosis of hemodynamic parameters, such as cerebral blood volume (CBV), mean transit time (MTT), and time to peak (TTP), that help in early detection of the AIS. Basically, CTP imaging helps in acquiring information about areas of the brain that are adequately/inadequately perfused with blood and information associated with blood flow in the brain. Typically, CTP scanning involves a scanning session in which a plurality of consecutive CT scans is taken after a bolus injection of a contrast agent. The amount of radiation to which the patient is exposed during the scanning session is a cause of concern as it can be harmful for the patient, thus compromising the safety of the patient. To handle this concern, many clinics are adopting the as low as reasonably achievable (ALARA) principle, in which low-dose CTP imaging is performed by reducing the X-ray tube current for diagnosis and prognosis of multiple cerebrovascular pathologies. However, lowering the dose reduces the signal-to-noise ratio (SNR) of the sinogram data and consequently affects the quality of the reconstructed CT images, thus leading to low-quality estimated perfusion maps.
[022] In the recent past, several techniques have been introduced that focus on reducing ionizing radiations without compromising the quality of the CT images. For example, in a denoising technique by Wu, D. et al. (e.g., refer “Self-supervised dynamic CT perfusion image denoising with deep neural networks. IEEE Trans Radiat Plasma Med Sci (2020)”), authors attempted enhancement of perfusion map by denoising the CT perfusion images followed by truncated singular value decomposition for improving CBF map estimation. Few other authors focused on estimating the perfusion maps directly from the 4-D spatiotemporal contrast-enhanced CT data using model-based approaches.
[023] Recently, some techniques based on deep neural network (DNN) methods have been introduced in which a mapping between the perfusion maps obtained from LD-CT and those obtained using standard-dose CT (SD-CT) images is learned. In the DNN method, noise is added in the image domain of SD-CT images using a noise model for simulating the LD-CT images. However, the corruption in the signal typically happens in the sinogram space, where the statistical properties of the noise can be characterized as, for example, independent and identically distributed (i.i.d.) Gaussian or Poisson noise. Thus, the available techniques for estimating improved CTP images from LD-CT data can be categorized as: (i) regularized deconvolution
to obtain CBF maps from noisy LD-CT data; (ii) denoising/enhancing the LD-CT images to obtain an estimate of the SD-CT images followed by application of TSVD algorithm to obtain improved CBF maps; and (iii) post-deconvolution denoising/enhancement of the CBF maps obtained from LD-CT images to estimate perfusion maps at standard dose.
[024] Though deep-learning methods have shown substantially improved performance for estimating the quantitative perfusion maps as compared to other learning-based methods, they do not model the convolution-based forward process and are likely to miss patient specific information while making estimations. This makes them susceptible to degradation in performance when presented with input data that deviates from the data used for training the DNN.
[025] Further, several self-supervised as well as unsupervised DNNs have been proposed for performing denoising of natural images, which eliminate the need for training data. The proposed DNNs may require a plurality of instances of the noisy image for training the self-supervised as well as the unsupervised DNNs. Thereafter, a technique was introduced in which denoising can be achieved with just the noisy image alone using the theory of J-invariant functions, based on an assumption that the noise is elementwise statistically independent. However, this is a concern for tomographic inverse problems, as the property of elementwise statistical independence can be assumed only in the projection domain. A few more techniques for denoising the sinogram to enable low-dose imaging have been explored in several prior arts. However, the effect of noise on the anatomical image is still to be explored.
[026] Embodiments of the present disclosure overcome the above-mentioned disadvantages, such as mapping between the low-dose and high-dose images, quality concerns, no consideration of patient specific information, dependence on training data, etc., by providing automated systems and methods for generating enhanced perfusion maps using LD-CT data. More specifically, the systems and methods of the present disclosure follow a three-step process for generating enhanced perfusion maps. Firstly, the system performs self-supervised denoising of the sinogram data by leveraging statistical independence of noise in a
measurement space using a self-supervised deep neural network (SS-DNN) (or a first trained DNN). The denoised sinogram data is further utilized to generate clean LD-CT images. Secondly, the system uses the denoising capability of another DNN (or a second trained DNN) trained at a specific noise level for generating perfusion maps based on the clean LD-CT images. Thirdly, the system uses the clean LD-CT images with yet another trained DNN (or a third trained DNN) for performing further enhancement of the perfusion maps.
[027] In the present disclosure, the system and method eliminate effects of noise on the perfusion image by providing a LD-CT based perfusion map generation technique in the form of an automated system (explained in detail with reference to FIG. 1) that relies more on patient specific information and is agnostic to changes made in the data domain, such as X-ray dose, change in SNR of the data due to variation in tube current, change in the number of acquired CT frames, etc., for generating clearer perfusion maps.
[028] Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[029] FIG. 1 illustrates an exemplary block diagram of a perfusion map generation system (PMGS) 100 for generating enhanced perfusion maps using low-dose computed tomography (LD-CT) data, in accordance with an embodiment of the present disclosure. In an embodiment, the PMGS 100 may also be referred as a system and may be interchangeably used herein. In some embodiments, the system 100 is embodied as a cloud-based and/or SaaS-based (software as a service) architecture. In some embodiments, the system 100 may be implemented in a server system. In some embodiments, the system 100 may be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, and the like.
[030] In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and
one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[031] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[032] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 can be stored in the memory 102, wherein the database 108 may comprise, but is not limited to, inputs received from one or more client devices (e.g., CT scanners, computing devices, and the like) such as sinogram data. In an embodiment, the memory 102 may store information pertaining to number of iterations, a pre-defined threshold, a pre-defined convergence threshold, configuration of one or more trained DNN(s), one or more analytical algorithms, loss function calculation formula and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step
performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[033] FIG. 2, with reference to FIG. 1, illustrates an exemplary flow diagram of a method 200 for generating enhanced perfusion maps based on LD-CT data using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components of the system 100 as depicted in FIG. 1, and the flow diagram.
[034] In an embodiment of the present disclosure, at step 202, the one or more hardware processors 104 comprised in the system 100 receive low-dose computed tomography (LD-CT) sinogram data. The LD-CT sinogram data includes one or more sinograms. In an embodiment, a sinogram can be referred to as a two-dimensional representation of an X-ray CT scan captured using a CT scanner device.
[035] In an embodiment, the LD-CT sinogram data representing T dynamic sinograms may be represented by {y_t, t = 1, 2, …, T}. The noisy sinogram domain measurements can be modelled as y_t,i = (ℛx_t)_i + n_t,i, where ℛ represents the CT forward operator, x_t denotes a single CT image, n_t,i denotes the i.i.d. noise, and y_t,i denotes the i-th element of the sinogram y_t.
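The measurement model above can be sketched numerically. The sketch below is illustrative only: a small random matrix stands in for the CT forward operator ℛ (a real system would use a Radon transform over the scan geometry), and the sizes and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the CT forward operator R; a real system would use a
# Radon transform over the acquisition geometry (this matrix is an assumption).
n_pixels, n_detectors, T = 64, 96, 5
R = rng.random((n_detectors, n_pixels))

# T dynamic images x_t and their noisy measurements y_t = R x_t + n_t,
# with elementwise i.i.d. Gaussian noise added in the sinogram domain.
x = rng.random((T, n_pixels))
sigma = 0.1
noise = rng.normal(0.0, sigma, size=(T, n_detectors))
y = x @ R.T + noise  # one noisy sinogram per time frame
```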
[036] At step 204 of the present disclosure, the one or more hardware processors 104 of the system 100 perform denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN) (hereinafter, also referred as a self-supervised deep neural network (SS-DNN)). The step 204 of the present disclosure can be better understood by way of following description.
[037] It is assumed that the noise in the sinogram data is independently identically distributed (i.i.d). A plurality of denoising algorithms have been proposed in the literature to denoise a signal in a self-supervised or unsupervised manner. As mentioned above, these denoising algorithms assume elementwise statistical independence for the noise present in image domain. So, in the general context of tomographic inverse problems, although the noise in the sinogram domain is i.i.d, the reconstruction of CT images based on the noisy sinogram data using techniques such as filtered back projection (FBP), causes the noise to be correlated across pixels. Thus, element-wise statistical independence (i.i.d. noise) can be modeled only in the sinogram domain and not in the image domain. So, to obtain a set of denoised CT images, the system first performs self-supervised denoising in the sinogram domain itself.
[038] In a prior technique by Batson, J., and Royer, L., referred to as 'Noise2Self', the authors claim that self-supervised denoising can be achieved by leveraging 𝒥-invariance. A function f is said to be 𝒥-invariant if, for each subset J ∈ 𝒥, the value of f(x) restricted to J does not depend on the elements of x within J. The technique relies only on a single instance of the image and does not necessitate any additional data. The assumptions considered while implementing the technique are: (i) elementwise statistical independence of noise and (ii) spatial correlation of the true underlying signal in a neighborhood. The same strategy is used in the present disclosure to obtain a J-invariant version of a denoising DNN that is referred to as the self-supervised DNN (SS-DNN).
[039] Thereafter, the hardware processors 104 perform self-supervised training of the obtained SS-DNN (i.e., the first DNN) to perform denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data. The training of the SS-DNN is performed in a plurality of iterations using the received LD-CT sinogram data. Firstly, as part of a first iteration, a subset region 'J' is identified in the LD-CT sinogram data. The subset region 'J' is identified at an inference stage. Then, a statistical parameter for one or more pixels found in one or more neighborhood regions of the subset region 'J' is calculated. In an embodiment, the one or more neighborhood regions refer to one or more regions that are found in the surrounding of the identified subset region 'J'. Examples of the statistical parameter include the mean, median, etc. Further, a mask 'M' is applied over the subset region 'J' to obtain masked sinogram data. The mask M replaces one or more pixels present in the subset region 'J' with the calculated statistical parameter to obtain the masked sinogram data. Thereafter, the masked sinogram data is passed to the first DNN. The first DNN uses the received LD-CT sinogram data (i.e., the noisy sinogram data) as reference for training the first DNN in an online fashion based on the masked sinogram data received as an input. Once the first iteration of self-supervised training is done, another region is identified as the subset region in the LD-CT sinogram data and the whole process is repeated until all subset regions are covered in the LD-CT sinogram data. The first DNN obtained after going through the plurality of iterations of self-supervised training is referred to as the trained first DNN (i.e., the trained SS-DNN).
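The masking step described above can be sketched as follows. The neighbourhood statistic (here the mean, excluding the centre pixel), the array sizes, and the subset choice are illustrative assumptions; the DNN training loop itself is omitted.

```python
import numpy as np

def mask_subset(sinogram, subset_idx, radius=1):
    """Replace each pixel in subset J by the mean of its neighbourhood,
    excluding the pixel itself (Noise2Self-style J-invariant masking sketch)."""
    masked = sinogram.copy()
    h, w = sinogram.shape
    for (i, j) in subset_idx:
        i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
        j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
        patch = sinogram[i0:i1, j0:j1]
        # exclude the centre pixel so the replacement is independent of it
        masked[i, j] = (patch.sum() - sinogram[i, j]) / (patch.size - 1)
    return masked

rng = np.random.default_rng(1)
sino = rng.random((8, 8))          # toy noisy sinogram (assumption)
J = [(2, 2), (5, 5)]               # subset region identified for this iteration
masked = mask_subset(sino, J)
# the masked sinogram is fed to the DNN; the training loss is evaluated
# only at the pixels in J against the original noisy values
```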
[040] Once the trained SS-DNN is available, the hardware processors 104 provide the received LD-CT sinogram data as an input to the trained SS-DNN that performs denoising of LD-CT sinogram data to output denoised LD-CT sinogram data. In an embodiment, the denoised LD-CT sinogram data includes one or more denoised sinograms i.e., for T dynamic sinograms, T denoised sinograms are obtained.
[041] In an embodiment of the present disclosure, at step 206, the one or more hardware processors 104 of the system 100 create one or more LD-CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms. Examples of the fast analytical algorithm include a filtered back-projection (FBP) algorithm, an algebraic reconstruction technique, an iterative back-projection algorithm, etc. It is to be understood by a person having ordinary skill in the art or person skilled in the art that implementation of such filtered back-projection algorithm shall not be construed as limiting the scope of the present disclosure. The T denoised sinograms obtained at step 204 are used for construction of the one or more LD-CT images (hereinafter, also referred to as a set of dynamic LD-CT data) using the fast analytical algorithm. Suppose x_t refers to a single CT image with appropriate spatial and temporal index subscripts; then the set of T dynamic LD-CT data will be referred to as {x_t, t = 1, 2, …, T}.
[042] The output of this step is cleaner LD-CT images (or also referred as clean LD-CT images) when compared with the LD-CT images obtained using techniques existing in the arts as denoised sinograms are used for the construction of the LD-CT images. The created clean LD-CT images may further help in obtaining cleaner perfusion image/map.
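As a minimal stand-in for this reconstruction step, the sketch below inverts a toy linear forward operator by least squares; a real pipeline would apply filtered back-projection (or another of the listed algorithms) to each denoised sinogram. All names and sizes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_detectors = 64, 96
R = rng.random((n_detectors, n_pixels))  # toy forward operator (assumption)

x_true = rng.random(n_pixels)
y_clean = R @ x_true  # a denoised sinogram, here simulated noise-free

# Analytic reconstruction stand-in: least-squares inversion of the operator.
# A real pipeline would run FBP over each of the T denoised sinograms.
x_rec, *_ = np.linalg.lstsq(R, y_clean, rcond=None)
# for a noiseless, overdetermined system this recovers the image
```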
[043] At step 208 of the present disclosure, the one or more hardware processors 104 of the system 100 generate a perfusion image based on the one or more LD-CT images (clean LD-CT images) using an iterative framework, such as the alternating direction method of multipliers (ADMM). The iterative framework includes a trained second deep neural network (DNN). The iterative framework uses one or more DNN based regularizers and a deconvolution-based data fidelity term (explained with reference to FIG. 4) along with the trained second DNN for generating the perfusion image based on the one or more LD-CT images created at step 206. In one embodiment, the perfusion image is a blood flow image. Examples of the blood flow image include, but are not limited to, cerebral blood flow (CBF) maps, cerebral blood volume (CBV) maps, mean transit time (MTT) maps, etc.
[044] Further, the step of generating the perfusion image based on the one or more LD-CT images using an iterative framework is preceded by: performing training of a second DNN to obtain the trained second DNN. For performing training of the second DNN to obtain the trained second DNN, the one or more hardware processors 104 of the system 100 first determine an arterial input function (AIF) from training data. The training data includes one or more sample standard dose computed tomography (SD-CT) images obtained at standard dose tube current and one or more sample LD-CT images created corresponding to the one or more sample SD-CT images.
[045] In an embodiment, the one or more sample LD-CT images are generated by simulation. Specifically, the one or more sample LD-CT images are generated by adding noise with a suitable standard deviation to the corresponding sample SD-CT images. For example, it is assumed that I_SD and I_LD represent the tube currents employed for collecting sample standard-dose and low-dose CT data, respectively. Similarly, let σ_SD and σ_LD represent the noise standard deviations corresponding to I_SD and I_LD, respectively. Then, to generate a set of sample LD-CT images given the sample SD-CT images received as part of the training data, the standard deviation of the noise to be added is given by the relation σ_a = √(σ_LD² − σ_SD²).
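The noise-addition relation can be sketched as a small helper. The assumption that noise variance scales inversely with tube current underlies the sketch, and the function name and example values are illustrative, not taken from the disclosure.

```python
import math

def added_noise_std(sigma_sd, i_sd, i_ld):
    """Std of Gaussian noise added to SD-CT images to simulate LD-CT images,
    assuming noise variance scales inversely with tube current (assumption):
    sigma_ld = sigma_sd * sqrt(i_sd / i_ld), sigma_a = sqrt(sigma_ld^2 - sigma_sd^2)."""
    sigma_ld = sigma_sd * math.sqrt(i_sd / i_ld)
    return math.sqrt(sigma_ld ** 2 - sigma_sd ** 2)

# e.g. simulating a quarter-dose scan from a standard-dose scan
sigma_a = added_noise_std(sigma_sd=10.0, i_sd=200.0, i_ld=50.0)
```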
[046] Then, the hardware processors 104 define a matrix to represent the AIF. Thereafter, the hardware processors 104 compute a contrast measurement corresponding to each sample LD-CT image of the one or more sample LD-CT images to obtain a set of contrast measurements. Further, the hardware processors 104 determine a tissue residue function based, at least in part on, the set of contrast measurements and the matrix using a convolution based forward model.
[047] For example, it is assumed that A(n) represents the arterial input function (AIF) determined for the set of N sample LD-CT data, n = 1, 2, …, N. The processors 104 define a matrix A to represent the AIF. Let r_v denote the unknown tissue residue function within a small region v, so r_v(n) denotes the amount of residual contrast in vascular structures. Then, the contrast measurements obtained corresponding to the one or more sample LD-CT images can be represented by the convolution based forward model as c_v(n) = (A ⊛ r_v)(n), where c_v and r_v are vectors denoting the measured contrast and the unknown residue function, respectively. In the discrete setting, the model becomes c_v = A r_v.
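The discrete convolution forward model can be sketched by building a lower-triangular Toeplitz matrix from AIF samples. The specific AIF values and residue function below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def aif_matrix(aif, dt=1.0):
    """Lower-triangular Toeplitz matrix A with A[i, k] = aif[i - k], so that
    A @ r_v computes the discrete convolution of the AIF with the residue
    function (illustrative construction)."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    return dt * A

aif = np.array([0.0, 1.0, 3.0, 2.0, 0.5])   # assumed AIF samples
r_v = np.exp(-0.5 * np.arange(5))           # assumed tissue residue function
c_v = aif_matrix(aif) @ r_v                 # forward model: c_v = A r_v
```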
[048] The hardware processors 104 then generate a first blood flow image based, at least in part, on the tissue residue function, the set of contrast measurements and the matrix. Further, the hardware processors 104 compute a temporal average of one or more sample LD-CT images. The hardware processors 104 then use the generated first blood flow image and the temporal average of the one or more sample LD-CT images as an input to initiate an iterative process to obtain a sample perfusion image i.e., a sample blood flow image. In an embodiment,
the sample blood flow image is a sample CBF map created corresponding to the one or more sample LD-CT images.
[049] So, the estimate of r_v is obtained based on the convolution based forward model as data likelihood and a suitable regularization function R, yielding the objective function:
r̂_v = arg min over r_v of ‖c_v − A·r_v‖² + λ·R(r_v)    (Eq. 1)
where λ is the regularization parameter of the DNN based regularizer.
[050] The iterative process works until a scalar value obtained for the sample blood flow image is less than a pre-defined threshold. In one embodiment, an administrator (or a user/operator) managing the system 100 may define a value for the pre-defined threshold.
[051] In an embodiment, the iterative process includes a plurality of steps to be performed in iteration for obtaining the sample blood flow image corresponding to the one or more sample LD-CT images. So, firstly, as part of a first iteration, the hardware processors 104 augment the first blood flow image and the temporal average of the one or more sample LD-CT images to obtain an augmented blood flow image. Then, the hardware processors 104 extract one or more patches from each of the augmented blood flow image and the computed temporal average of the one or more sample LD-CT images. Thereafter, the hardware processors 104 provide the one or more patches to the DNN to obtain an intermediate blood flow image. Further, the hardware processors 104 compute a difference between the intermediate blood flow image and a standard dose blood flow image to obtain a scalar value for the intermediate blood flow image. In an embodiment, the standard dose blood flow image is generated based on the one or more sample SD-CT images present corresponding to the one or more LD-CT images from which the intermediate blood flow image is obtained. Basically, the standard dose blood flow image is generated based on the one or more sample SD-CT images that are received as part of the training data. The hardware processors 104 then update the first blood flow image as the intermediate blood flow image,
until the scalar value obtained for the intermediate blood flow image is less than the pre-defined threshold.
[052] Basically, the system 100 uses the iterative process with one or more DNN based regularizers to solve the optimization problem mentioned in Equation 1. The update equations of the iterative framework for solving the optimization problem can be written as follows:
r_v^(k+1) = arg min over r_v of ‖c_v − A·r_v‖² + (ρ/2)·‖r_v − S^(k) + η^(k)‖²    (Eq. 2)
S^(k+1) = arg min over S of λ·R(S) + (ρ/2)·‖r_v^(k+1) − S + η^(k)‖²    (Eq. 3)
η^(k+1) = η^(k) + r_v^(k+1) − S^(k+1)    (Eq. 4)
where, η is the scaled Lagrangian multiplier,
S is the variable splitting term used to convert the unconstrained problem in Eq. 1 into a constrained problem, and
Eq. 3 corresponds to the proximal map of the regularizer function and acts as a denoiser.
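For illustration only, the update equations may be sketched as a plug-and-play iteration in which a simple smoothing filter stands in for the trained DNN denoiser (the penalty parameter rho, the filter, and all names are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def smooth(x):
    """Stand-in denoiser: 3-tap moving average (replaces the DNN)."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

def pnp_admm(A, c, rho=1.0, iters=20):
    """Plug-and-play iteration for min ||c - A r||^2 + lambda R(r):
    the r-update solves the quadratic data term (Eq. 2), the S-update
    applies the denoiser as the proximal map of R (Eq. 3), and eta is
    the scaled Lagrangian multiplier (Eq. 4)."""
    n = A.shape[1]
    r = np.zeros(n)
    S = np.zeros(n)
    eta = np.zeros(n)
    H = A.T @ A + rho * np.eye(n)
    for _ in range(iters):
        r = np.linalg.solve(H, A.T @ c + rho * (S - eta))  # Eq. 2
        S = smooth(r + eta)                                # Eq. 3
        eta = eta + r - S                                  # Eq. 4
    return r

A = np.eye(4)
c = np.ones(4)
r = pnp_admm(A, c)
```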
[053] Once the scalar value obtained for the intermediate blood flow image is found to be less than the pre-defined threshold, the intermediate blood flow image is identified as the sample blood flow image. In an embodiment, the sample blood flow image is the sample CBF map created corresponding to the one or more sample LD-CT images. The sample CBF map obtained at this step is a noisy sample LD-CBF map/image.
[054] Once the trained second DNN is available, the hardware processors 104 provide the one or more LD-CT images as an input to the trained second DNN that provides the perfusion image as an output. The perfusion image obtained from the trained second DNN is a low-dose noisy blood flow image i.e., noisy LD-CBF image. Though the amount of noise is considerably reduced in the generated CBF image, the scope of further enhancement still exists at this step.
[055] In an embodiment, at step 210, the one or more hardware processors 104 comprised in the system 100 enhance the perfusion image obtained from the trained second DNN to obtain an enhanced perfusion image based, at least in part,
on the perfusion image and the one or more LD-CT images using a trained third DNN.
[056] The perfusion image obtained at step 208 corresponding to the one or more LD-CT images is further enhanced using the trained third DNN. The one or more LD-CT images and the obtained perfusion image are provided as an input to the trained third DNN that provides the enhanced perfusion image as an output. Further, the step of enhancing the perfusion image obtained from the trained second DNN to create an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using the trained third DNN is preceded by: performing training of a third DNN to obtain the trained third DNN.
[057] For performing training of the third DNN to obtain the trained third DNN, the one or more hardware processors 104 of the system 100 provide the one or more sample LD-CT images to the third DNN to obtain a first output blood flow image. Several steps are then performed in iteration(s) until the third DNN is trained to provide an enhanced sample blood flow image corresponding to the sample blood flow image.
[058] In an embodiment, the hardware processors 104, as part of first iteration, compare the first output blood flow image with a second output blood flow image to obtain an error value. The second output blood flow image is obtained by providing the first output blood flow image as an input to the third DNN. Then, the hardware processors 104 determine whether the obtained error value is less than a pre-defined convergence threshold. Upon determining that the error value is less than the pre-defined convergence threshold, the hardware processors 104 assume that the third DNN is the trained third DNN and identify the second output blood flow image as the enhanced sample blood flow image obtained corresponding to the sample blood flow image. In an embodiment, the enhanced sample blood flow image is a denoised sample LD-CBF image.
[059] In case the error value is determined to be not less than the pre-defined convergence threshold, the hardware processors 104 compute a loss function based on the second output blood flow image and the sample blood flow image using a pre-defined loss function calculation formula. In an embodiment, the loss function calculation formula is defined as:
LF = (CBF (second output blood flow image) − CBF (sample blood flow image)).
[060] Once the loss function is available, the hardware processors 104 update one or more weights of the third DNN based on the computed loss function. Thereafter, the hardware processors 104 identify the second output blood flow image as the first output blood flow image i.e., the second output blood flow image is then provided as an input to the third DNN and the above steps are iteratively performed until the error value obtained for the third DNN output is less than the pre-defined convergence threshold. In an embodiment, the convergence threshold is defined as ‘0.001’. The convergence threshold may be pre-configured or empirically determined, in one embodiment of the present disclosure.
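For illustration, the convergence-checked training loop described above may be sketched with a toy single-weight model standing in for the third DNN (the gradient update, learning rate, and names are illustrative; the disclosed loop compares the outputs of successive passes through the network, with the threshold 0.001 as stated):

```python
import numpy as np

def train_until_converged(ld_images, target_cbf, lr=0.1, threshold=1e-3,
                          max_iters=500):
    """Toy stand-in for the third-DNN training loop: a single scalar
    weight w plays the role of the network parameters.  Each iteration
    updates w from the loss and compares successive outputs; training
    stops once that difference (the error value) falls below the
    pre-defined convergence threshold."""
    w = 0.0
    prev_out = w * ld_images
    out = prev_out
    for _ in range(max_iters):
        grad = 2.0 * ((w * ld_images - target_cbf) * ld_images).mean()
        w -= lr * grad                       # weight update from the loss
        out = w * ld_images
        if np.abs(out - prev_out).mean() < threshold:
            break                            # converged
        prev_out = out
    return w, out

ld = np.ones((8, 8))            # toy LD-CBF input
target = 2.0 * np.ones((8, 8))  # toy SD-CBF target
w, cbf = train_until_converged(ld, target)
```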
[061] Once the trained third DNN is available, the hardware processors 104 provide the one or more LD-CT images and the perfusion image obtained at step 208 as an input to the trained third DNN to obtain an enhanced perfusion image. In an embodiment, the enhanced perfusion image is a denoised CBF map obtained corresponding to the noisy LD-CT sinogram data received at step 202.
[062] For example, consider that the noisy sample LD-CBF image obtained at step 208 is r_noisy; then the estimated denoised sample LD-CBF image can be given by:
r̂ = f_Θ(r_noisy)
where f represents the DNN and Θ represents the optimal CNN parameters. [063] The optimal CNN parameters are obtained by solving the optimization problem defined as:
Θ* = arg min over Θ of ‖f_Θ(r_noisy) − r_SD‖²
where r_SD denotes the corresponding standard dose blood flow image.
[064] FIG. 3, with reference to FIGS. 1 through 2, illustrates a schematic block diagram representation 300 of a sinogram denoising process followed for
obtaining denoised sinogram data using the trained self-supervised deep neural network (SS-DNN), in accordance with an embodiment of the present disclosure.
[065] As seen in the FIG. 3, low-dose sinogram data (noisy sinogram data) is provided to the trained SS-DNN as an input and the trained SS-DNN outputs the denoised sinogram data. The process of training the SS-DNN for performing denoising of the sinogram data is already explained with reference to FIG. 2 and is not herein explained for the sake of brevity.
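For illustration, the masking step used in the self-supervised training of the SS-DNN may be sketched as follows, where pixels of the subset region are replaced by the mean of their neighbours (the statistical parameter); the neighbourhood radius and function name are illustrative assumptions:

```python
import numpy as np

def mask_subset(sinogram, rows, cols, radius=1):
    """Replace each pixel of the subset region with the mean of its
    neighbourhood (excluding the pixel itself), producing the masked
    sinogram that is fed to the network, while the original sinogram
    serves as the training reference."""
    masked = sinogram.copy()
    h, w = sinogram.shape
    for r, c in zip(rows, cols):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        patch = sinogram[r0:r1, c0:c1]
        neigh_sum = patch.sum() - sinogram[r, c]
        masked[r, c] = neigh_sum / (patch.size - 1)
    return masked

sino = np.zeros((3, 3))
sino[1, 1] = 9.0
masked = mask_subset(sino, rows=[1], cols=[1])
```

The self-supervised loss would then be evaluated only at the masked positions, so the network cannot trivially copy its input.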
[066] FIG. 4, with reference to FIGS. 1 through 3, illustrates a schematic representation of the iterative framework 400 used for generating the sample perfusion image based on the one or more sample LD-CT images, in accordance with an embodiment of the present disclosure.
[067] As already discussed with reference to FIG. 2, a contrast measurement corresponding to each sample LD-CT image of the one or more sample LD-CT images is computed to obtain a set of contrast measurements represented as c_v. Further, the deconvolution-based data fidelity term is determined based on the set of contrast measurements and the tissue residue function using the equation:
‖c_v − A·r_v‖².
[068] For better understanding, it is assumed that Y_LD represents the sample CBF map obtained for the sample LD-CT images. Similarly, let Y_SD represent the sample CBF map obtained from the sample SD-CT images, and let X̄_LD denote the mean of the sample LD-CT images (temporal average) along the temporal direction. The second DNN f_Θ is then used for the regression task, where Θ represents the network parameters and I represents the pair of input images (Y_LD, X̄_LD). In an embodiment, the used second DNN may follow a residual learning approach and this DNN accepts a multi-channel (two-channel) input. The regressor learns a mapping between the input pair of images and the output image Y_SD.
[069] In an embodiment, a patch-based training is performed by extracting one or more patches from each of the temporal average of the sample LD-CT images and Y_LD such that the noise characteristics are captured within the one or more extracted patches. In one embodiment, a patch size of 40×40 is used for performing the patch-based training.
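For illustration, the extraction of 40×40 patches for the patch-based training may be sketched as follows (the stride and function name are illustrative assumptions; a stride smaller than the patch size would yield overlapping patches):

```python
import numpy as np

def extract_patches(image, patch=40, stride=40):
    """Tile an image into patch x patch blocks for patch-based
    training of the second DNN."""
    h, w = image.shape
    out = [image[r:r + patch, c:c + patch]
           for r in range(0, h - patch + 1, stride)
           for c in range(0, w - patch + 1, stride)]
    return np.stack(out)

# A 120 x 120 image yields a 3 x 3 tiling of 40 x 40 patches.
patches = extract_patches(np.zeros((120, 120)))
```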
[070] In at least one example embodiment, the second DNN comprises 17 layers with one or more convolutional layers, rectified linear units as the activation function, and batch-normalization for stabilizing the patch-based training. At each iteration performed by the iterative framework, as the noise in the reconstructed image (r_v) keeps reducing, the system 100 employs a separate denoiser D_q(I_q, σ_q), where I_q denotes the input at the q-th iteration and σ_q denotes the corresponding noise standard deviation for the q-th iteration, as using a separate denoiser for each iteration gives better performance and faster convergence.
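For illustration, the selection of a separate denoiser per iteration, keyed by the noise standard deviation estimated for that iteration, may be sketched as follows (the discrete noise levels and the nearest-level selection rule are illustrative assumptions):

```python
import numpy as np

def pick_denoiser(denoisers, sigma_q):
    """Select the denoiser trained at the noise level closest to the
    noise standard deviation estimated for the q-th iteration."""
    levels = np.array(sorted(denoisers))
    nearest = levels[np.argmin(np.abs(levels - sigma_q))]
    return denoisers[float(nearest)]

# Hypothetical bank of denoisers keyed by their training noise level;
# identity functions stand in for the trained networks.
bank = {5.0: lambda x: x, 15.0: lambda x: x, 25.0: lambda x: x}
chosen = pick_denoiser(bank, sigma_q=13.2)   # nearest level is 15.0
```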
[071] FIG. 5, with reference to FIGS. 1 through 4, illustrates a schematic representation 500 of a training process for training the third DNN to generate the enhanced perfusion image, in accordance with an embodiment of the present disclosure.
[072] As seen in the FIG. 5, the one or more sample LD-CT images created corresponding to the one or more denoised LD-CT sinograms and the sample perfusion image i.e., sample blood flow image generated corresponding to the one or more denoised sinograms are provided as an input to the third DNN. In an embodiment, the third DNN is a combination of one or more 1×1 convolution layers and a U-Net architecture. In one example embodiment, the U-Net architecture consists of one or more convolutional layers with one or more max pooling layers in between them.
[073] Upon receiving the one or more sample LD-CT images and the sample perfusion image, the third DNN creates a first output blood flow image based on the one or more sample LD-CT images. The third DNN then performs an iterative enhancement process to obtain the enhanced sample perfusion image using the created first output blood flow image and the sample perfusion image. The iterative enhancement process is already explained with respect to FIG. 2 and is not herein explained again for the sake of brevity.
[074] FIG. 6 is a pictorial representation illustrating cerebral blood flow (CBF) maps and corresponding error maps obtained by applying a plurality of
perfusion map generation techniques, in accordance with an embodiment of the present disclosure.
[075] As can be seen from FIG. 6, the error map (b4) created using the technique disclosed in the present disclosure is nearly identical to the zero-error reference map (b5).
[076] FIG. 7 is a pictorial representation illustrating CTP images and the corresponding CBF maps obtained by applying the plurality of perfusion map generation techniques using LD-CT data, in accordance with an embodiment of the present disclosure.
[077] As can be seen from FIG. 7, row (a) presents the CT frame corresponding to ceil (T/2). A tensor total variation (TTV) method estimates the CBF image directly from the LD-CT images and does not produce an intermediate denoised CT slice, and thus it is presented as an empty image. Among the CTP images (row (a)), a Noise2Self in image space (N2S-IS) method shows a slight improvement over the LD-CTP image, but as can be seen, the method also fails to remove the majority of the noise present in the LD-CTP image. Whereas the image reconstructed from the denoised sinogram (using SS-DNN) shows smoother reconstructions and a superior recovery of structure and contrast while retaining important anatomical edges with substantially reduced noise when compared with (a3).
[078] Further, in the case of the CBF images shown in row (b), the CBF image (b2) generated using the TTV method suffers from patchy artifacts that are characteristic of image-gradient based methods. The CBF image (b3) generated using the N2S-IS method uses the TSVD deconvolution method and improves over the TTV method. However, the technique disclosed in the present disclosure, which uses the CTP images shown in (a4), produces the CBF map (b4) that appears closest to the SD-CBF maps (b5) compared to the other methods.
[079] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do
not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[080] As mentioned above, the amount of radiation to which the patient is exposed during a scanning session performed for the diagnosis of life-threatening neurodegenerative disease is a cause of concern as it can be harmful for the patient, thus compromising the safety of the patient. The only solution to reduce the risk for the patient is to reduce the amount of radiation while performing the scanning session. However, reducing the radiation amount leads to an increase in the noise present in the sinogram data captured during the scanning session, which ultimately leads to noisy perfusion maps. Various techniques are available in the art for reducing the noise in sinogram space, but those available techniques suffer from one or more disadvantages, such as the need for a mapping between the low-dose and high-dose images, quality concerns, no consideration of patient specific information, dependence on training data etc., as discussed previously. To overcome the disadvantages, embodiments of the present disclosure provide automated systems and methods for generating enhanced perfusion images using low-dose computed tomography data. More specifically, the system first denoises the noisy sinogram data to obtain the denoised sinogram data using a trained DNN. The system then creates the LD-CT images corresponding to the denoised sinogram data. The LD-CT images are further used to generate a perfusion map using another trained DNN, and then the generated perfusion map is further enhanced using yet another trained DNN. In performing the above method, the present disclosure ensures that it is an end-to-end automated process, which further reduces the CBF map generation time and the human errors that occur due to variation in assessment, which ultimately helps in early detection of the NDD. Further, the use of a low dose in the method of the present disclosure reduces the risk of harmful radiation to which a patient is exposed during a scanning session.
[081] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a
server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[082] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[083] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words
“comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[084] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[085] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method, comprising:
receiving, by a perfusion map generation system (PMGS) via one or more hardware processors, low-dose computed tomography (LD-CT) sinogram data, the LD-CT sinogram data comprising one or more sinograms (202);
performing, by the PMGS via the one or more hardware processors, denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN), the denoised LD-CT sinogram data comprising one or more denoised sinograms (204);
creating, by the PMGS via the one or more hardware processors, one or more LD-CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms (206);
generating, by the PMGS via the one or more hardware processors, a perfusion image based on the one or more LD-CT images using an iterative framework, wherein the iterative framework comprises a trained second DNN (208); and
enhancing, by the PMGS via the one or more hardware processors, the perfusion image to obtain an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using a trained third DNN (210).
2. The processor implemented method of claim 1, wherein the perfusion image is a blood flow image, and the enhanced perfusion image is an enhanced blood flow image.
3. The processor implemented method of claim 1, wherein the step of generating, by the PMGS via the one or more hardware processors, the perfusion image based on the one or more LD-CT images using the iterative framework is preceded by:
performing, by the PMGS via the one or more hardware processors, training of a second DNN to obtain the trained second DNN.
4. The processor implemented method of claim 3, wherein the step of
performing, by the PMGS via the one or more hardware processors, training of the
second DNN to obtain the trained second DNN comprises:
determining, by the PMGS via the one or more hardware processors, an arterial input function (AIF) from training data, the training data comprising one or more sample standard dose computed tomography (SD-CT) images and sample LD-CT images generated corresponding to the SD-CT images;
defining, by the PMGS via the one or more hardware processors, a matrix to represent the AIF;
computing, by the PMGS via the one or more hardware processors, a contrast measurement corresponding to each sample LD-CT image of the one or more sample LD-CT images to obtain a set of contrast measurements based on the AIF;
determining, by the PMGS via the one or more hardware processors, a tissue residue function based, at least in part on, the set of contrast measurements and the matrix using a convolution based forward model;
generating, by the PMGS via the one or more hardware processors, a first blood flow image based, at least in part on, the tissue residue function, the set of contrast measurements and the matrix;
computing, by the PMGS via the one or more hardware processors, a temporal average of the one or more sample LD-CT images; and
using, by the PMGS via the one or more hardware processors, the first blood flow image and the temporal average of the one or more sample LD-CT images as an input to initiate an iterative process to obtain a sample blood flow image, wherein the sample blood flow image is created corresponding to the one or more sample LD-CT images, wherein the iterative process works until a scalar value obtained for the sample blood flow image is less than a pre-defined threshold.
5. The processor implemented method of claim 4, wherein the iterative process
comprises:
iteratively performing:
augmenting, by the PMGS via the one or more hardware processors, the first blood flow image and the temporal average of the one or more sample LD-CT images to obtain an augmented blood flow image;
extracting, by the PMGS via the one or more hardware processors, one or more patches from each of the augmented blood flow image and the computed temporal average of the one or more sample LD-CT images;
providing, by the PMGS via the one or more hardware processors, the one or more patches to the second DNN to obtain an intermediate blood flow image;
computing, by the PMGS via the one or more hardware processors, a difference between the intermediate blood flow image and a sample standard dose blood flow image to obtain the scalar value for the intermediate blood flow image, wherein the sample standard dose blood flow image is generated based on one or more sample standard dose computed tomography (SD-CT) images present corresponding to the one or more sample LD-CT images from which the intermediate blood flow image is obtained; and
updating, by the PMGS via the one or more hardware processors, the first blood flow image as the intermediate blood flow image,
until the scalar value for the intermediate blood flow image is less than the pre-defined threshold; and
identifying, by the PMGS via the one or more hardware processors, the intermediate blood flow image as the sample blood flow image.
6. The processor implemented method of claim 5, wherein the step of
enhancing, by the PMGS via the one or more hardware processors, the perfusion image to obtain the enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using the trained third DNN is preceded by:
performing, by the PMGS via the one or more hardware processors, training of a third DNN to obtain the trained third DNN.
7. The processor implemented method of claim 6, wherein the step of
performing, by the PMGS via the one or more hardware processors, training of the
third DNN to obtain the trained third DNN comprises:
providing, by the PMGS via the one or more hardware processors, the one or more sample LD-CT images to the third DNN to obtain a first output blood flow image; and
iteratively performing:
comparing, by the PMGS via the one or more hardware processors, the first output blood flow image with a second output blood flow image to obtain an error value, wherein the second output blood flow image is obtained by providing the first output blood flow image as an input to the third DNN;
determining, by the PMGS via the one or more hardware processors, whether the error value is less than a pre-defined convergence threshold;
computing, by the PMGS via the one or more hardware processors, a loss function based on the second output blood flow image and the sample blood flow image using a pre-defined loss function calculation formula upon determining that the error value is not less than the pre-defined convergence threshold;
updating, by the PMGS via the one or more hardware processors, one or more weights of the third DNN based on the computed loss function; and
identifying, by the PMGS via the one or more hardware processors, the second output blood flow image as the first output blood flow image.
8. The processor implemented method of claim 7, wherein upon determining
that the error value is less than the pre-defined convergence threshold,
identifying, by the PMGS via the one or more hardware processors, the second output blood flow image as an enhanced sample blood flow image.
9. The method as claimed in claim 4, wherein the step of performing, by the
PMGS via the one or more hardware processors, denoising of LD-CT sinogram data
to obtain the denoised LD-CT sinogram data using the trained first DNN is preceded
by:
performing, by the PMGS via the one or more hardware processors, self-supervised training of the first DNN to perform denoising of received LD-CT sinogram data to obtain denoised LD-CT sinogram data by iteratively performing:
identifying a subset region in the LD-CT sinogram data, wherein the subset region is identified at an inference stage;
calculating a statistical parameter for one or more pixels found in one or more neighborhood regions of the subset region;
applying a mask over the subset region to obtain a masked sinogram data, wherein the mask replaces one or more pixels present in the subset region with the calculated statistical parameter to obtain the masked sinogram data;
passing the masked sinogram data to the first DNN, wherein the first DNN uses the LD-CT sinogram data as reference for training the first DNN based on the masked sinogram data; and
identifying another region as the subset region in the LD-CT sinogram data,
until all subset regions are covered in the LD-CT sinogram data.
10. A perfusion map generation system (PMGS) (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive low-dose computed tomography (LD-CT) sinogram data, the LD-CT sinogram data comprising one or more sinograms;
perform denoising of LD-CT sinogram data to obtain denoised LD-CT sinogram data using a trained first deep neural network (DNN), the denoised LD-CT sinogram data comprising one or more denoised sinograms;
create one or more LD-CT images corresponding to the one or more denoised sinograms by applying a fast analytical algorithm over the one or more denoised sinograms;
generate a perfusion image based on the one or more LD-CT images using an iterative framework, wherein the iterative framework comprises a trained second DNN; and
enhance the perfusion image to obtain an enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using a trained third DNN.
11. The system as claimed in claim 10, wherein the perfusion image is a blood flow image, and the enhanced perfusion image is an enhanced blood flow image.
12. The system as claimed in claim 10, wherein the step of generating, by the PMGS via the one or more hardware processors, the perfusion image based on the one or more LD-CT images using the iterative framework is preceded by:
performing training of a second DNN to obtain the trained second DNN.
13. The system as claimed in claim 12, wherein the step of performing training
of the second DNN to obtain the trained second DNN comprises:
determining an arterial input function (AIF) from training data, the training data comprising one or more sample standard dose computed tomography (SD-CT) images and sample LD-CT images generated corresponding to the SD-CT images;
defining a matrix to represent the AIF;
computing a contrast measurement corresponding to each sample LD-CT image of the one or more sample LD-CT images to obtain a set of contrast measurements based on the AIF;
determining a tissue residue function based, at least in part on, the set of contrast measurements and the matrix using a convolution based forward mode;
generating a first blood flow image based, at least in part on, the tissue residue function, the set of contrast measurements and the matrix;
computing a temporal average of the one or more sample LD-CT images; and
using the first blood flow image and the temporal average of the one or more sample LD-CT images as an input to initiate an iterative process to obtain a sample blood flow image, wherein the sample blood flow image is created corresponding to the one or more sample LD-CT images, wherein the iterative process works until a scalar value obtained for the sample blood flow image is less than a pre-defined threshold.
14. The system as claimed in claim 13, wherein the iterative process comprises:
iteratively performing:
augmenting the first blood flow image and the temporal average of the one or more sample LD-CT images to obtain an augmented blood flow image;
extracting one or more patches from each of the augmented blood flow image and the computed temporal average of the one or more sample LD-CT images;
providing the one or more patches to the second DNN to obtain an intermediate blood flow image;
computing a difference between the intermediate blood flow image and a sample standard dose blood flow image to obtain the scalar value for the intermediate blood flow image, wherein the sample standard dose blood flow image is generated based on one or more sample standard dose computed tomography (SD-CT) images present corresponding to the one or more sample LD-CT images from which the intermediate blood flow image is obtained; and
updating the first blood flow image as the intermediate blood flow image,
until the scalar value for the intermediate blood flow image is less than the pre-defined threshold; and
identifying the intermediate blood flow image as the blood flow image.
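The iterative refinement of claim 14 can be sketched as a fixed-point loop, with the trained second DNN abstracted as a callable and the patch-extraction step elided for brevity. All names below (`refine_blood_flow`, `second_dnn`) are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def refine_blood_flow(bf, ld_temporal_avg, sd_bf, second_dnn,
                      threshold=1e-3, max_iters=100):
    """Iteratively: stack the current blood-flow estimate with the
    temporal-average LD-CT image, pass the result through the second DNN,
    score against the standard-dose reference, and repeat until the scalar
    value falls below the pre-defined threshold."""
    for _ in range(max_iters):
        # Augment blood-flow estimate with the temporal average (2 channels).
        augmented = np.stack([bf, ld_temporal_avg])
        intermediate = second_dnn(augmented)
        # Scalar value: difference from the sample standard-dose image.
        scalar = np.mean((intermediate - sd_bf) ** 2)
        bf = intermediate            # update the first blood flow image
        if scalar < threshold:
            break
    return bf
```

In the claimed system the second DNN would operate on extracted patches; the sketch applies it to whole images purely to keep the control flow visible.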
15. The system as claimed in claim 14, wherein the step of enhancing, by the PMGS via the one or more hardware processors, the perfusion image to obtain the enhanced perfusion image based, at least in part, on the perfusion image and the one or more LD-CT images using the trained third DNN is preceded by:
performing training of a third DNN to obtain the trained third DNN.
16. The system as claimed in claim 15, wherein the step of performing, by the PMGS via the one or more hardware processors, training of the third DNN to obtain the trained third DNN comprises:
providing the one or more sample LD-CT images to the third DNN to obtain a first output blood flow image; and
iteratively performing:
comparing the first output blood flow image with a second output blood flow image to obtain an error value, wherein the second output blood flow image is obtained by providing the first output blood flow image as an input to the second DNN;
determining whether the error value is less than a pre-defined convergence threshold;
computing a loss function based on the second output blood flow image and the sample blood flow image using a pre-defined loss function calculation formula upon determining that the error value is not less than the pre-defined convergence threshold;
updating one or more weights of the third DNN based on the computed loss function; and
identifying the second output blood flow image as the first output blood flow image.
17. The system as claimed in claim 16, wherein upon determining that the error value is less than the pre-defined convergence threshold, the second output blood flow image is identified as an enhanced sample blood flow image.
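The training loop of claims 16 and 17 compares the third DNN's output against the same output re-estimated by the (already trained) second DNN, and stops when the two agree to within the convergence threshold. A minimal sketch of this control flow, with both DNNs and the weight-update routine abstracted as callables (all names are illustrative assumptions):

```python
import numpy as np

def train_third_dnn(ld_images, sample_bf, third_dnn, second_dnn,
                    update_weights, conv_thresh=1e-4, max_iters=200):
    """Fixed-point style training loop: the third DNN's output is
    re-estimated by the frozen second DNN; training stops when the two
    outputs agree to within the pre-defined convergence threshold."""
    first_out = third_dnn(ld_images)
    second_out = first_out
    for _ in range(max_iters):
        second_out = second_dnn(first_out)
        error = np.mean((first_out - second_out) ** 2)
        if error < conv_thresh:
            # Claim 17: on convergence, the second output is the
            # enhanced sample blood flow image.
            return second_out
        # Pre-defined loss against the sample blood flow image.
        loss = np.mean((second_out - sample_bf) ** 2)
        update_weights(loss)          # adjust third-DNN weights
        first_out = second_out        # second output becomes first output
    return second_out
```

In the claimed system `update_weights` would backpropagate the loss through the third DNN; here it is a placeholder so the loop's termination logic stands alone.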
18. The system as claimed in claim 10, wherein the step of performing, by the PMGS via the one or more hardware processors, denoising of LD-CT sinogram data to obtain the denoised LD-CT sinogram data using the trained first DNN is preceded by:
performing self-supervised training of the first DNN to perform denoising of received LD-CT sinogram data to obtain denoised LD-CT sinogram data by iteratively performing:
identifying a subset region in the LD-CT sinogram data, wherein the subset region is identified at an inference stage;
calculating a statistical parameter for one or more pixels found in one or more neighborhood regions of the subset region;
applying a mask over the subset region to obtain a masked sinogram data, wherein the mask replaces one or more pixels present in the subset region with the calculated statistical parameter to obtain the masked sinogram data;
passing the masked sinogram data to the first DNN, wherein the LD-CT sinogram data is used as a reference for training the first DNN based on the masked sinogram data; and
identifying another region as the subset region in the LD-CT sinogram data,
until all subset regions are covered in the LD-CT sinogram data.
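The masking step of claim 18 resembles blind-spot self-supervised denoising: pixels in the identified subset region are replaced by a statistic of their surrounding neighborhood before the sinogram is passed to the first DNN, which is then trained against the original sinogram. A minimal NumPy sketch of the masking operation alone, using the neighborhood median as the statistical parameter (the median, the `patch` radius, and the function name are illustrative choices, not recited in the claim):

```python
import numpy as np

def masked_sinogram(sino, subset_idx, patch=1):
    """Replace each pixel in the subset region with the median of its
    neighborhood (center excluded), leaving all other pixels intact."""
    out = sino.copy()
    H, W = sino.shape
    for (r, c) in subset_idx:
        r0, r1 = max(r - patch, 0), min(r + patch + 1, H)
        c0, c1 = max(c - patch, 0), min(c + patch + 1, W)
        # Statistical parameter over the neighborhood, excluding the
        # masked pixel itself so the DNN cannot see its own target.
        nb = [sino[i, j] for i in range(r0, r1) for j in range(c0, c1)
              if (i, j) != (r, c)]
        out[r, c] = np.median(nb)
    return out
```

Training would then pass the masked sinogram through the first DNN and penalize its prediction at the masked pixels against the original LD-CT sinogram values, iterating over subset regions until the whole sinogram has been covered.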
| # | Name | Date |
|---|---|---|
| 1 | 202121043509-STATEMENT OF UNDERTAKING (FORM 3) [24-09-2021(online)].pdf | 2021-09-24 |
| 2 | 202121043509-PROVISIONAL SPECIFICATION [24-09-2021(online)].pdf | 2021-09-24 |
| 3 | 202121043509-FORM 1 [24-09-2021(online)].pdf | 2021-09-24 |
| 4 | 202121043509-DRAWINGS [24-09-2021(online)].pdf | 2021-09-24 |
| 5 | 202121043509-FORM-26 [21-10-2021(online)].pdf | 2021-10-21 |
| 6 | 202121043509-FORM 18 [10-12-2021(online)].pdf | 2021-12-10 |
| 7 | 202121043509-ENDORSEMENT BY INVENTORS [10-12-2021(online)].pdf | 2021-12-10 |
| 8 | 202121043509-DRAWING [10-12-2021(online)].pdf | 2021-12-10 |
| 9 | 202121043509-CORRESPONDENCE-OTHERS [10-12-2021(online)].pdf | 2021-12-10 |
| 10 | 202121043509-COMPLETE SPECIFICATION [10-12-2021(online)].pdf | 2021-12-10 |
| 11 | Abstract1.jpg | 2021-12-14 |
| 12 | 202121043509-Proof of Right [11-01-2022(online)].pdf | 2022-01-11 |