Abstract: A method for detecting motion is presented. The method includes acquiring at least two image volumes of a region of interest. Further, the method includes selecting one or more image slices corresponding to the at least two image volumes. Moreover, the method determines motion between the at least two image volumes by comparing the one or more image slices corresponding to the at least two image volumes. Furthermore, the method includes quantifying the determined motion. Finally, the method corrects the determined motion based on the quantified motion. Further systems for detecting motion and correcting motion-based artifacts are also presented. Fig. 1
Systems and Methods for Correcting Motion
BACKGROUND
[0001] Embodiments of the present disclosure relate generally to imaging systems, and more particularly, to methods and systems for detection of motion and correction of motion-based artifacts in images.
[0002] Medical imaging devices such as magnetic resonance imaging (MRI) systems, computed tomography (CT) imaging systems, and the like utilize different modalities to scan organs and generate images of the organs for disease diagnosis. For instance, MRI systems utilize the property of nuclear magnetic resonance (NMR) to image nuclei (protons) of atoms inside the body.
[0003] One of the major drawbacks of currently available medical imaging systems is image corruption due to patient motion. For instance, in case of an MRI system, during an examination, patients are often requested to lie in a supine position for extended periods of time. However, during the course of image acquisition, the patient may often fidget, twitch or move. Moreover, organs within the patient may experience involuntary movement due to respiration or other such bodily functions. Such external or involuntary movements of the patient may lead to ghosting, blurring, and other artifacts in the images.
[0004] A wide variety of techniques such as physiological gating, phase-encode reordering, and gradient moment nulling have been employed to minimize the effects of such motion. Moreover, a number of techniques have been developed for correcting motion-based artifacts in images prospectively and/or retrospectively. However, most of these techniques fail to adequately correct the motion-based artifacts in images and therefore have failed to gain widespread acceptance.
[0005] Furthermore, in some applications, such as perfusion imaging, the presently available motion correction techniques are inadequate. Perfusion imaging is a widely used technique for assessing different pathological processes including tumor characterization and progression, and determination of salvageable tissues post-acute ischemic events in the brain. Typically, perfusion imaging relies on utilizing a tracer, such as an exogenous tracer (e.g., contrast agent) or an endogenous tracer (e.g., spin labeling) and tracking the tracer over a region of interest by acquiring four-dimensional (4D) time series images. Generally, in perfusion imaging, in addition to issues related to patient motion, there exists another issue - localized variations in signal intensity due to
wash-in and washout of the tracer. Often, such variation in signal intensity is incorrectly categorized as motion by the currently available motion correction techniques. Moreover, such techniques may proceed to correct the falsely identified motion, thereby causing further image artifacts.
[0006] Furthermore, some of the presently known techniques utilize image registration for motion correction. In image registration, different sets of images are transformed into one coordinate system. Registration is often utilized to compare or integrate the images obtained from different measurements. The presently known techniques register each image in the 4D time series to an image at a specific time or to the mean image of the complete time-series data. While being a good framework to align volumes in MR imaging, registration of all the images may be a time-consuming process.
BRIEF DESCRIPTION OF THE INVENTION
[0007] According to one aspect of the present disclosure, a method for correcting motion is presented. The method includes selecting one or more image slices corresponding to at least two image volumes, where the at least two image volumes correspond to a region of interest. Further, the method includes detecting motion between the at least two image volumes by comparing the selected one or more image slices corresponding to the at least two image volumes. Moreover, the method includes quantifying the detected motion. The method also includes correcting the detected motion based on the quantified motion.
[0008] According to another aspect of the present disclosure, a system for correcting motion is presented. The system includes an imaging system where the imaging system includes an acquisition subsystem configured to acquire at least two image volumes of a region of interest. Moreover, the imaging system includes a motion correction platform configured to correct motion, where the motion correction platform includes a motion detector configured to select one or more image slices corresponding to the at least two image volumes and detect motion between the at least two image volumes by comparing the selected one or more image slices corresponding to the at least two image volumes. Further, the motion correction platform includes a motion-quantifying unit configured to quantify the detected motion. In addition, the motion correction platform includes a motion-correcting unit configured to correct the detected motion based on the quantified motion.
[0009] According to yet another aspect of the present disclosure, a system for correcting motion is presented. The system includes an acquisition subsystem configured to acquire a plurality of image volumes of a region of interest. Further, the system includes a motion correction platform configured to correct motion, where the motion correction platform includes a motion detector configured to select at least two image volumes from the plurality of image volumes and detect motion between the selected at least two image volumes by comparing the selected at least two image volumes. Moreover, the motion correction platform includes a motion-quantifying unit configured to quantify the detected motion. Furthermore, the motion correction platform includes a motion-correcting unit configured to selectively register image volumes with motion or discard the image volumes with motion based on the quantified motion.
DRAWINGS
[0001] These and other features, aspects, and advantages of the present disclosure will be better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0002] FIG. 1 is a diagrammatical illustration of a system for detecting motion and correcting motion-based artifacts in images, in accordance with aspects of the present disclosure;
[0003] FIG. 2 is a diagrammatical illustration of one embodiment of the system of FIG. 1, in accordance with aspects of the present disclosure;
[0004] FIG. 3 is a flow chart depicting an exemplary method for correcting motion-
based artifacts in images, in accordance with aspects of the present disclosure;
[0005] FIG. 4 is a flowchart illustrating an exemplary method for detecting motion in accordance with aspects of the present disclosure;
[0006] FIG. 5 is a flowchart illustrating an exemplary method for quantifying motion in accordance with aspects of the present disclosure;
[0007] FIG. 6 is a diagrammatical representation of field motion maps in accordance with aspects of the present disclosure;
[0008] FIG. 7 is a diagrammatical representation of field motion maps for 24 adjacent image slices, in accordance with embodiments of the present disclosure; and
[0009] FIG. 8 is a diagrammatical illustration of a magnetic resonance imaging system for use in the system of FIG. 1.
DETAILED DESCRIPTION
[0010] The following detailed description of certain embodiments of the present disclosure will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general-purpose signal processor or a block of random access memory, hard disk, or the like). Similarly, the programs may be stand-alone programs, subroutines incorporated in an operating system, functions in an installed software package, or the like. It should be understood that the various embodiments are not limited to the arrangements shown in the drawings.
[0011] As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding a plurality of the elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "including" or "having" an element or a plurality of elements having a particular property may include additional elements not having that property.
[0012] Embodiments of the present disclosure relate to methods and systems for detecting motion and correcting motion-based artifacts in images. More particularly, the systems and methods disclosed herein are capable of differentiating between motion caused by external or involuntary movement and signal variations caused by the wash-in or washout of a tracer. To this end, the methods and systems employ a motion correction platform that detects motion by comparing image slices from different image volumes and quantifies the motion based on a degree of motion between the compared image slices. Alternatively, the motion correction platform may compare two or more image volumes to detect motion. Further, the motion correction platform may perform one or more functions to correct any motion-based artifacts present in the image slices.
[0013] FIG. 1 is a block diagram representation of an exemplary system 100 for use in diagnostic imaging, in accordance with aspects of the present disclosure. The system 100 may be configured to detect and quantify motion, and/or correct any motion-based
artifacts in images of an anatomical region of interest in an object of interest. Here, the object is illustrated as a human patient 102. However, it will be understood that in other applications the object may be an animal or an inanimate object such as a fluid or a pipeline without departing from the scope of the present disclosure.
[0014] The system 100 may be configured to acquire image data from the patient 102. In one embodiment, the system 100 may acquire image data from the patient 102 via an image acquisition device 104. Also, in one embodiment, the image acquisition device 104 may include a probe (not shown), where the probe may include an invasive probe, or a non-invasive or external probe, such as an external ultrasound probe, that is configured to aid in the acquisition of image data. Moreover, in certain other embodiments, image data may be acquired via one or more sensors (not shown) that may be disposed on the patient 102. By way of example, the sensors may include physiological sensors such as electrocardiogram (ECG) sensors and/or positional sensors such as electromagnetic field sensors or inertial sensors. These sensors may be operationally coupled to a data acquisition device, such as an imaging system, via leads (not shown), for example.
[0015] The system 100 may also include a medical imaging system 106 that is in operative association with the image acquisition device 104. The medical imaging system 106 may include any type of imaging system. For example, the medical imaging system 106 may include a positron emission tomography (PET) imaging system, a single photon emission computed tomography (SPECT) imaging system, a computed tomography (CT) imaging system, an ultrasound imaging system, a magnetic resonance imaging (MRI) system, an X-ray system, or any other system capable of generating medical images. It will be appreciated that the medical imaging system 106 may also be a combination of the previously mentioned imaging systems, without departing from the scope of the present disclosure. For example, the imaging system may be a multimodality CT and MRI imaging system. Moreover, while some of the embodiments are described herein with respect to an MRI system, it should be realized that the embodiments described herein might also be used with other types of MRI images, such as magnetic resonance angiography (MRA) scans. In addition, the various embodiments of the present disclosure are not limited to medical imaging systems for imaging human subjects, but may include veterinary or non-medical systems for imaging non-human objects, etc. For example, utilization of the imaging system 106 in applications such as industrial imaging systems and non-destructive evaluation and inspection systems, such as pipeline inspection systems and liquid reactor inspection systems, may also be contemplated.
[0016] As noted hereinabove, in a presently contemplated configuration, the medical imaging system 106 may be an MRI system. Further, the medical imaging system 106 may include an acquisition subsystem 108 and a processing subsystem 110, in one embodiment. The acquisition subsystem 108 may be configured to acquire image data representative of one or more anatomical regions of interest in the patient 102 via the image acquisition device 104. Moreover, the acquisition subsystem 108 may be configured to acquire one or more image data sets corresponding to an anatomical region of interest in the patient 102 prior to or after administering a tracer to the patient 102. Any known modality may be utilized to acquire the images based on the type of medical system utilized. For instance, in case of an MRI system, the nuclear magnetic property of protons within the anatomical region of interest may be utilized to acquire images of the anatomical region of interest.
[0017] In one embodiment, the acquired image data may include four-dimensional (4D) image data. For example, the 4D image data may include a plurality of three-dimensional (3D) image volumes acquired over a determined period of time. Furthermore, each of the 3D image volumes may include a plurality of two-dimensional (2D) image slices forming the 3D image volume. For instance, in case of a brain MRI, the acquisition subsystem 108 may repeatedly scan the whole brain over a determined period of time. Each complete scan of the brain may be considered an image volume. Moreover, each image volume may include a plurality of image slices. These image slices may correspond to a 2D image of one lateral portion of the brain. Additionally, the image data acquired from the patient 102 may subsequently be processed by the processing subsystem 110.
[0018] The image data acquired and/or processed by the medical imaging system 106 may be employed to aid an operator or a computing system in identifying disease states, assessing need for treatment, determining suitable treatment options, tracking the progression of the disease, and/or monitoring the effect of treatment on the disease states. In certain embodiments, the processing subsystem 110 may be further coupled to a storage system, such as the data repository 114, where the data repository 114 may be configured to store the acquired image data. In one embodiment, the data repository 114 may include a frame buffer (not shown) for storing the plurality of image slices. For instance, the data repository 114 may store all the image slices corresponding to an image
volume scanned in a unit of time as one array. The image slices corresponding to the next image volume may be stored as the next array and so on.
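By way of a non-limiting illustration, the storage scheme described hereinabove, in which the image slices of each scanned image volume are stored as one array and the next image volume as the next array, may be sketched as follows. The class name `FrameBuffer` and the array layout are illustrative assumptions and do not form part of the disclosed subject matter:

```python
import numpy as np

class FrameBuffer:
    """Stores each acquired image volume as one array of 2D image slices.

    The volume scanned in one unit of time is appended as a single array;
    the slices of the next image volume are stored as the next array.
    """

    def __init__(self):
        self._volumes = []  # one (n_slices, height, width) array per volume

    def store_volume(self, slices):
        # Stack the 2D slices of one complete scan into a single 3D array.
        self._volumes.append(np.stack(slices))

    def get_volume(self, t):
        # Retrieve the image volume acquired at time index t.
        return self._volumes[t]

# Example: two volumes, each comprising four 8x8-pixel image slices.
buf = FrameBuffer()
for _ in range(2):
    buf.store_volume([np.zeros((8, 8)) for _ in range(4)])
print(buf.get_volume(1).shape)  # (4, 8, 8)
```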
[0019] In FIG. 1, the acquisition subsystem 108, the processing subsystem 110, and the data repository 114 are illustrated as individual components of the medical imaging system 106. However, in other embodiments, these components may be implemented as a single device, which acquires image data, stores the image data, and performs motion correction. Along similar lines, it will be understood that any two of these components may be combined into a single device without departing from the scope of the present disclosure.
[0020] As previously noted, the patient 102 may experience voluntary or involuntary motion that results in motion-based artifacts in the acquired images. It may be noted that the term motion-based artifact may refer to image ghosting, blurring, and other such artifacts that are primarily caused because of any motion in the region of interest. In accordance with aspects of the present disclosure, an exemplary motion correction platform 112 that circumvents the shortcomings of currently available motion correction systems is presented. This motion correction platform 112 may be configured to detect any motion in the image data, quantify the detected motion, segregate the image data with motion from the image data without any motion, and perform one or more corrective measures on the image data with motion. The motion may be caused voluntarily or involuntarily by the patient 102 or by internal motion of organs caused by bodily functions such as respiration.
[0021] In a presently contemplated configuration, the motion correction platform 112 is illustrated as a part of the processing subsystem 110. Alternatively, the motion correction platform 112 may be a standalone module. In such cases, the motion correction platform 112 may operate in conjunction with the acquisition subsystem 108 and the processing subsystem 110.
[0022] To perform motion correction, the motion correction platform 112 may be configured to compare 2D image slices from different image volumes to detect motion between the image volumes. Alternatively, the motion correction platform 112 may compare 3D image volumes to detect motion. Further, the motion correction platform 112 may be configured to perform motion correction retrospectively or prospectively. For instance, in some cases, it may be desirable to perform motion correction in real-time (prospectively) to aid in real-time diagnosis. Alternatively, in other cases, the motion correction may be performed once all the image volumes corresponding to an anatomical
region of interest are acquired and stored (retrospectively). The advantage of performing prospective correction is that if excessive movement is detected, an operator or the system may cancel the current image acquisition mid-process and restart the acquisition process, thereby saving time and resources. The functionality of the motion correction platform 112 will be described in greater detail with reference to FIGs. 2-7.
[0023] Further, as illustrated in FIG. 1, the medical imaging system 106 may include a display 116 and a user interface 118. In certain embodiments, such as in a touch screen, the display 116 and the user interface 118 may overlap. In addition, in some embodiments, the display 116 and the user interface 118 may include a common area. In accordance with aspects of the present disclosure, the display 116 of the medical imaging system 106 may be configured to display one or more images generated by the medical imaging system 106 based on the acquired image data. Additionally, in accordance with further aspects of the present disclosure, the corrected images may also be visualized on the display 116.
[0024] The user interface 118, on the other hand, may allow operators or technicians to communicate with the acquisition subsystem 108 or the processing subsystem 110. For instance, the operators may issue commands to display images, begin image acquisition, stop image acquisition, control the sensors attached to the patient 102, navigate through the images, and so on, through the user interface 118. Additionally, the user interface 118 may also be configured to aid in manipulating and/or organizing the images displayed on the display 116.
[0025] In addition, the user interface 118 of the medical imaging system 106 may include a human interface device (not shown) configured to aid the operator in manipulating image data displayed on the display 116. The human interface device may include a mouse-type device, a trackball, a joystick, a stylus, or a keyboard configured to aid the operator in identifying one or more regions of interest requiring therapy. However, as will be appreciated, other human interface devices, such as, but not limited to, a touch screen, may also be employed.
[0026] It will be understood that in some embodiments, the medical imaging system 106 may be utilized for perfusion imaging. In such cases, the anatomical region of interest may include any organ that can be perfused. For example, the anatomical region of interest may include the brain, the prostate, the uterus, the liver, the kidneys, bones, and the like in the patient 102. Moreover, for perfusion, an exogenous or endogenous tracer may be applied to the patient 102. To administer the exogenous tracer, such as a
contrast agent, the medical imaging system 106 may optionally include a tracer unit 120. The tracer unit 120 may be an injector coupled to the patient 102 and configured to administer the contrast agent into the body of the patient 102 during the acquisition process. Moreover, the tracer unit 120 may be programmed to administer the contrast agent at various determined times during the acquisition process. Alternatively, the operator or the processing subsystem 110 may be configured to activate the tracer unit 120 to administer the contrast agent.
[0027] The tracer unit 120 may broadly encompass an automated medical device that administers the contrast agent into the patient 102 (intravenously or orally) upon receiving or generating a command. Further, the tracer unit 120 may also encompass a medical operator manually administering the contrast agent to the patient 102. It will be understood that in some perfusion imaging applications, exogenous contrast agents may not be utilized. For example, in some cases, an endogenous spin label may be employed as a tracer within the patient 102. In these cases, the tracer unit 120 may not be employed.
[0028] The systems and units described with reference to FIG. 1 may communicate with each other through electrical and/or data connections. Data connections may be direct wired links, fiber optic connections, or wireless communication links without departing from the scope of the present disclosure. Electrical connections may also include wired or wireless connections. In some cases, all the connections may be of the same type. Alternatively, different subsystems may be connected using different communication means. For instance, the acquisition subsystem 108 may be coupled to the acquisition device 104 through electrical connections, while the same acquisition subsystem 108 may be coupled to the data repository 114 through wireless or wired data connections. Moreover, all the elements of the medical imaging system 106 may be coupled to a network (not shown), such as a wired or wireless network where the various subsystems may interact and communicate with each other seamlessly.
[0029] Moreover, the subsystems and units illustrated in FIG. 1 are by no means a complete rendition of the components of the medical image system 106. The medical imaging system 106 may include many more units and subsystems to effectively function. For instance, the system 106 may include a patient positioning subsystem (not shown) to automatically position the patient 102 with respect to the acquisition device 104. Similarly, the medical imaging system 106 may include a host of electronic and electrical devices such as amplifiers, switches, and gradient coils without departing from the scope of the present disclosure.
[0030] Turning now to FIG. 2, a diagrammatical representation 200 of one embodiment of the processing subsystem 110 of FIG. 1 is depicted. Various subsystems and units interact with the processing subsystem 110 as described with respect to FIG. 1. For instance, the acquisition subsystem 108 may be configured to aid in the acquisition of image data 202 from the patient 102 (see FIG. 1) prior to and subsequent to introducing the tracer. Alternatively, the acquisition subsystem 108 may obtain the image data 202 from an archival site, a database, or an optical data storage article. For example, the acquisition subsystem 108 may be configured to acquire images stored in an optical data storage article. It may be noted that the optical data storage article may be an optical storage medium, a holographic storage medium, or another like volumetric optical storage medium, such as, for example, a two-photon or multi-photon absorption storage format.
[0031] As previously noted, the image data 202 may include 4D image data that may include a plurality of 3D image volumes acquired over a determined period of time. Also, these 3D image volumes may include a plurality of 2D image slices acquired over time t. Furthermore, the image data 202 acquired by the acquisition subsystem 108 may be stored in the data repository 114. In certain embodiments, the data repository 114 may include a local database. The processing subsystem 110 may then access these images, such as the image data 202, from the local database 114.
[0032] Further, in a presently contemplated configuration, the motion correction platform 112 may include a motion detector 204, a motion-quantifying unit 206, and a motion-correcting unit 208. The motion detector 204 may be configured to detect patient and/or organ motion. In one example, the motion detector 204 may be configured to detect motion by comparing one or more image slices from one image volume with corresponding image slices from other image volumes. As noted previously, the motion detector 204 may also be configured to compare entire image volumes for motion detection. The decision to utilize a subset of image slices from different image volumes as opposed to utilizing entire image volumes depends on time constraints. By way of example, the time taken to detect motion by comparing the subset of image slices corresponding to the image volumes is relatively small, while the time taken to compare entire image volumes is relatively large. Moreover, the motion detector 204 may be able to distinguish between real motion and false motion. As used herein, the term "real motion" refers to voluntary or involuntary patient motion, while the term "false motion"
refers to any signal intensity variations caused by the wash-in or washout of a tracer in the anatomical regions of interest.
[0033] As acquisition of one image volume typically lasts for a few seconds, embodiments of the present disclosure assume that the probability of a patient moving within a single acquisition cycle is lower than the probability of the patient moving between acquisition cycles. Therefore, the motion detector 204 is configured to select image slices from different image volumes for the comparison. However, it will be understood that the motion detector 204 may also be configured to compare image slices from the same image volume in some embodiments. Moreover, by selecting a subset of the image slices from the image volumes for comparison, the motion detector 204 may detect motion in relatively less time than conventional motion detection systems that compare all or most of the image slices to detect motion. Accordingly, for prospective detection, where time is a constraint, the motion detector 204 may be configured to select fewer image slices. Conversely, for retrospective detection, where quality may be paramount, the motion detector 204 may be configured to select a greater number of image slices for the comparison.
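By way of a non-limiting illustration, the comparison of corresponding image slices drawn from different image volumes may be sketched with a simple intensity-difference metric. The metric shown here is an illustrative stand-in; the specific comparison employed by the motion detector 204 is not limited thereto:

```python
import numpy as np

def slice_difference(slice_a, slice_b):
    """Mean absolute intensity difference between two corresponding
    image slices drawn from different image volumes.

    A large value suggests motion between the two acquisition cycles;
    the threshold separating motion from noise is application-specific.
    """
    return float(np.mean(np.abs(slice_a.astype(float) - slice_b.astype(float))))

# Example: identical slices yield 0; a uniform unit shift yields 1.
a = np.zeros((64, 64))
b = np.ones((64, 64))
print(slice_difference(a, b))  # 1.0
```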
[0034] Furthermore, a central image slice of an image volume typically depicts a majority portion of the subject's organ. Accordingly, the motion detector 204 may be configured to utilize one or more central image slices from adjacent image volumes for the comparison. As used herein, the term central image slice is representative of one or more image slices that are located at approximately the center of the image volume. For instance, if each image volume includes 100 image slices, the central image slice may be the 50th image slice. Similarly, if the image volume includes about 500 image slices, the central image slices may include any image slice from about the 240th image slice to about the 260th image slice. Moreover, the term 'adjacent image volumes' is used to refer to image volumes that are acquired at two adjacent periods of time.
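By way of a non-limiting illustration, the central-slice selection described hereinabove may be sketched as follows. The index arithmetic is an illustrative assumption consistent with the examples given (slice 50 of 100; roughly slices 240 through 260 of 500):

```python
def central_slice_indices(n_slices, half_window=0):
    """Return the index (or a small window of indices) of the central
    image slice(s) of a volume containing n_slices image slices.

    For a 100-slice volume the central slice is the 50th slice; for a
    500-slice volume a half-window of 10 spans slices 240 through 260.
    """
    center = n_slices // 2
    return list(range(center - half_window, center + half_window + 1))

print(central_slice_indices(100))      # [50]
print(central_slice_indices(500, 10))  # [240, 241, ..., 260]
```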
[0035] Alternatively, any other image slice or any other number of image slices from any image volumes may be utilized without departing from the scope of the present disclosure. For instance, central image slices corresponding to three adjacent image volumes may be employed. In another example, any image slice from image volumes that are a determined distance apart may be used to detect motion. If the MRI system acquires an image volume every one second, the motion detector 204 may be configured to select image slices from image volumes that are acquired 5 seconds apart, for example.
Alternatively, if the MRI system acquires an image volume every 10 seconds, image slices from adjacent image volumes may be selected for the comparison.
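By way of a non-limiting illustration, the time-based selection strategy described hereinabove may be expressed as a simple rule. The five-second target spacing is taken from the examples in the text, while the helper name and rounding behavior are illustrative assumptions:

```python
def volume_stride(seconds_per_volume, target_spacing_s=5.0):
    """Return how many image volumes apart the compared slices should be.

    If a volume is acquired every second, compare volumes about 5 s
    apart (stride 5); if each volume takes 10 s to acquire, compare
    adjacent volumes (stride 1).
    """
    return max(1, round(target_spacing_s / seconds_per_volume))

print(volume_stride(1))   # 5
print(volume_stride(10))  # 1
```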
[0036] In one embodiment, the motion detector 204 may include an input unit 210, a slicing unit 212, and a comparator 214. Depending on the source of the images 202, the input unit 210 may be configured to retrieve image slices or image volumes from the local data repository 114, the acquisition subsystem 108, or archival sites. Embodiments of the present disclosure are described with reference to a subset of image slices from two or more image volumes instead of entire image volumes. It will be understood, however, that this description is merely exemplary and the same systems and methods may be utilized to detect motion by comparing entire image volumes without departing from the scope of the present disclosure.
[0037] The input unit 210 may be configured to select one or more image slices from corresponding image volumes. For example, if two image volumes are selected, the input unit may select one or more image slices from the first image volume and the same number of image slices from the second image volume. These selected image slices may be communicated to the slicing unit 212. For instance, the input unit 210 may select one or more image slices from a particular image volume and transmit only these selected image slices to the slicing unit 212. More particularly, in one embodiment, the input unit 210 may select a central image slice from two adjacent image volumes and transmit these central image slices to the slicing unit 212. Furthermore, the operator or the input unit 210 may selectively determine a selection range based on certain parameters. Parameters may include acquisition time, anatomical region of interest being scanned, age of the patient, state of the patient (e.g., sedated or not), and so on. By way of example, young patients may be prone to excessive movement during the acquisition process. In such cases, image slices may be retrieved from adjacent image volumes. Alternatively, in older or sedated patients, random and persistent motion is generally not expected. In these cases, image slices from every 15th image volume may be selected. As will be appreciated, the time taken for motion detection and correction is dependent on the number of motion comparisons conducted by the motion correction platform 112. Therefore, based on the time and resource limitations, the operator or the processing subsystem 110 may be configured to selectively perform an intra-scan motion correction or an inter-scan motion correction.
[0038] Additionally, in accordance with further aspects of the present disclosure, the input unit 210 may also be configured to filter/decompose the retrieved image slices into
transformed sets of image components so that motion-related artifacts may be better accentuated by the motion detector 204. For instance, the input unit 210 may utilize principal component analysis (PCA) or its variants, spectral filters, or independent component analysis (ICA) computational methods to transform the image slices into sets of image components. PCA, for instance, is a mathematical procedure that uses an orthogonal transformation of the image slices to convert a set of possibly correlated variables into a set of linearly uncorrelated variables called principal components. Therefore, an image slice may be decomposed into a set of principal components by using PCA. The first principal component may have the largest possible variance, and each succeeding component in turn may have the highest variance possible under the constraint that it be orthogonal to the preceding components. These decomposed principal components may illustrate motion more vividly than the original image slices. Depending on the amount of motion detected in the principal components, the components that demonstrate the most motion may be transmitted to the slicing unit 212.
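The PCA decomposition described above can be sketched with an SVD over a stack of co-located slices. This is a minimal sketch, not the described input unit 210: the array layout (volumes stacked along the first axis), the function name, and the use of an SVD in place of a dedicated PCA library are all assumptions.

```python
import numpy as np

def pca_components(slice_stack, n_components=3):
    """Decompose a stack of co-located image slices into principal components.

    `slice_stack` has shape (n_volumes, rows, cols): the same slice position
    taken from successive image volumes. Each slice is flattened, the mean
    image is removed, and an SVD yields orthogonal components ordered by
    explained variance -- the first component captures the largest variance,
    and each later one the largest remaining variance subject to
    orthogonality, as the text describes.
    """
    n, r, c = slice_stack.shape
    flat = slice_stack.reshape(n, r * c)
    centered = flat - flat.mean(axis=0)           # mean-centre each pixel
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    comps = vt[:n_components].reshape(-1, r, c)   # component images
    variance = s[:n_components] ** 2              # variance per component
    return comps, variance

rng = np.random.default_rng(0)
stack = rng.standard_normal((5, 16, 16))
comps, var = pca_components(stack, n_components=2)
# var is non-increasing: the first component carries the most variance
```

Components showing the strongest motion-related structure would then be forwarded for block-wise comparison.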
[0039] Once the image slices or the image components are selected, the slicing unit 212 may be configured to segment the image slices or the image components into a plurality of blocks. Such segmentation may be performed to reduce the computation time for detecting motion. Moreover, the computation time may depend on the block size. For a given image resolution, larger block sizes may reduce computation time, while smaller block sizes may increase the computation time. However, larger block sizes may result in poorer quality of motion detection, while smaller block sizes may result in enhanced quality of motion detection. Therefore, a compromise is often sought between computation time and quality. In one example, for an image having a resolution of 128×128, blocks having dimensions of 32×32 may be selected. Alternatively, the image may be compressed to reduce the resolution of the image. In such cases, the block sizes may also be reduced. It will be understood that these resolutions and block sizes are merely illustrative. In actual implementation, the image resolution and block sizes may vary considerably from the values presented.
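The block segmentation of the example above (a 128×128 slice into 32×32 blocks) can be sketched as a reshape. The function name and the assumption that the image dimensions divide evenly by the block size are illustrative choices, not details of the slicing unit 212.

```python
import numpy as np

def segment_into_blocks(image, block=32):
    """Split a 2-D image slice into non-overlapping `block` x `block` tiles.

    For a 128x128 slice with block=32 this yields a 4x4 grid of blocks.
    The image dimensions are assumed to be exact multiples of the block size.
    """
    rows, cols = image.shape
    return (image
            .reshape(rows // block, block, cols // block, block)
            .swapaxes(1, 2))          # shape: (grid_r, grid_c, block, block)

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
blocks = segment_into_blocks(img, block=32)
# a 4x4 grid of 32x32 blocks; blocks[0, 0] is the top-left tile
```

Smaller `block` values give the finer-grained (but slower) motion detection discussed in the paragraph above.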
[0040] Furthermore, the slicing unit 212 may be configured to segment the image slices into fixed-sized blocks or variable-sized blocks. If the block sizes are variable, the slicing unit 212 may be configured to automatically determine the block size based on one or more parameters. These parameters may include image resolution, the anatomical region of interest, the presence or absence of a tracer, retrospective or prospective motion correction, and so on. For example, in case the image resolution is low, the image slice may be divided into blocks of smaller size. It may be noted that if the motion correction is prospective, the slicing unit 212 may be configured to segment the image slice into blocks of larger size to reduce computation time. Alternatively, if the region of interest is the brain and a tracer is utilized, the corresponding image slices may be segmented into blocks of smaller size to enable the motion correction platform 112 to adequately capture even slight movements in the brain. In case of automatic determination of block sizes, the slicing unit 212 may be in communication with the acquisition subsystem 108, the sensors, and/or the local data repository 114, to obtain the parameter values. In other embodiments, the operator may determine the block size and transmit this information to the slicing unit 212 through the user interface 118.
[0041] Moreover, before or after segmentation of the image slices into a plurality of blocks, the slicing unit 212 may be configured to mask certain regions of the image slices. For instance, if the image slice is representative of a portion of the brain, a portion of the image slice depicting an area outside the brain may include excessive motion. This motion may be due to the constant motion of the cranial fluid around the brain. Therefore, to prevent any false motion and/or flow related artifacts such as ghosting or background noise arising due to the motion of the cranial fluid, the slicing unit 212 may be configured to mask this region. The masking may be performed on one or both of the image slices. Moreover, any known masking method may be utilized, such as thresholding, level sets, statistical methods, phase fields, or organ atlases, without departing from the scope of the present disclosure.
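A minimal thresholding mask of the kind mentioned above can be sketched as follows. This is deliberately crude: a mean-intensity threshold stands in for the level-set, statistical, or atlas-based methods the text lists, and both function names are assumptions.

```python
import numpy as np

def threshold_mask(image, level=None):
    """Build a binary mask that keeps the bright anatomy and drops the
    background (e.g., fluid outside the brain that shows constant motion).

    If no level is given, the mean intensity serves as a crude threshold;
    a real system might instead use level sets, statistical methods, or an
    organ atlas, as the text notes.
    """
    if level is None:
        level = image.mean()
    return image > level

def apply_mask(image, mask):
    """Zero out masked-off pixels before the motion comparison."""
    return np.where(mask, image, 0.0)

img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0            # bright "organ" on a dark background
mask = threshold_mask(img)
masked = apply_mask(img, mask)
# only the bright 4x4 region survives; the border is zeroed out
```

Masking before the block-wise comparison prevents fluid motion outside the organ from registering as false motion.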
[0042] Further, the comparator 214 may be configured to compare blocks in one image slice with corresponding blocks in another image slice corresponding to another image volume to detect the presence of motion. By way of example, two image volumes (e.g., first and second image volumes) may be selected and central image slices may be picked from both the image volumes. Subsequently, the two central image slices may be segmented into an equal number of blocks. In this case, the comparator 214 may compare the 1st block of the first central image slice with the 1st block of the second central image slice, the 3rd block of the first central image slice with the 3rd block of the second central image slice, and so on, to detect the presence of motion.
[0043] Moreover, in case an exogenous or an endogenous tracer is introduced during the acquisition process, the signal intensity corresponding to various portions of the anatomical region of interest may vary as the tracer is absorbed by the anatomical region of interest. For example, the signal intensity corresponding to various organs, tissues, arteries, and veins may increase sequentially as the tracer is absorbed by the different cells. It will be appreciated that as the tracer may be absorbed first by the blood, followed by the tissues and the veins, the signal intensity corresponding to the arteries may increase first, followed by the increase in the signal intensities corresponding to the tissues and the veins. Further, it may be noted that multiple image acquisitions of the region of interest may be completed during the time taken by the tracer to be absorbed by the tissue. Therefore, often, in conventional systems, the changes in signal intensity between acquisition cycles may be mistaken for motion, and the conventional image correction systems attempt to correct for this false motion. To compensate for signal intensity fluctuations that may be introduced by the tracer, the comparator 214 may be configured to normalize the mean signal intensity of each block before the comparison. Such normalization may result in substantially similar signal intensities for each image block of corresponding image slices. Consequently, detection of false motion by the motion detector 204 due to any changes in signal intensity levels between two image slices may be circumvented.
[0044] The comparator 214 may utilize any known comparison technique to compare the blocks corresponding to the two or more image slices. For instance, the comparator 214 may utilize an optic flow technique such as phase correlation to compare the blocks corresponding to the two image slices. The output of the phase correlation may be a motion field. The motion field is typically indicative of an extent of motion in the x direction and the y direction. For example, a motion field value of (10, 20) indicates that the image blocks in the two adjacent image slices are offset by 10 pixels in the x direction and 20 pixels in the y direction. Moreover, if the comparison results in non-zero motion field values, the motion field values may be indicative of motion between the pair of image slices being compared. Moreover, it may be assumed that if motion is detected in one image slice, the corresponding image volume may also include similar motion. In this manner, the motion detector 204 may be configured to label certain image volumes as image volumes with motion.
[0045] Once the comparator 214 detects that motion exists between a pair of image slices, the motion-quantifying unit 206 may be utilized to categorize the detected motion. In one embodiment, the motion-quantifying unit 206 may be configured to quantify the detected motion as negligible motion, mild motion, or severe motion. Alternatively, the motion-quantifying unit 206 may be configured to categorize the motion based on the type of motion. For example, the categories may include no motion, rotational motion, translational motion, or rotational and translational motion. In other embodiments, the motion-quantifying unit 206 may utilize any known technique to quantify motion without departing from the scope of the present disclosure. Also, the number of categories may vary without departing from the scope of the present disclosure.
[0046] Moreover, in accordance with aspects of the present disclosure, the categorization of the detected motion may be customizable. In one embodiment, the operator may provide one or more criteria for categorizing the detected motion. For instance, in some acquisitions, the operator may indicate that less than 10% motion between a pair of image slices be categorized as negligible motion, while in other acquisitions, the operator may indicate that any motion between 5% and 15% between a pair of image slices be categorized as mild motion. Accordingly, based on the criteria provided by the operator, motion may be quantified in any category. Alternatively, based on the anatomical regions of interest being scanned, the motion-quantifying unit 206 may automatically vary the categorization criteria. By way of example, for acquisition of brain images, the motion-quantifying unit 206 may categorize any motion below 5% between a pair of image slices as negligible, but in case of acquisition of abdominal images, the motion-quantifying unit 206 may categorize any motion below 10% between a pair of image slices as negligible. Furthermore, instead of quantifying motion into three categories (negligible, mild, or severe), the motion-quantifying unit 206 may categorize the motion in a different set of categories. An exemplary method for quantifying the detected motion will be described with reference to FIG. 5.
[0047] Based on the category of motion, the motion-correcting unit 208 may be configured to perform corrective action on the image slices with motion. As previously noted, if motion is detected in an image slice corresponding to an image volume, it may be assumed that motion is present in the entire image volume. Accordingly, the motion-correcting unit 208 may be configured to perform corrective action on the entire image volume in which motion is detected.
[0048] Various corrective actions may be contemplated within the scope of the present disclosure. Image registration and image rejection may be two such actions. Moreover, within image registration, the motion-correcting unit 208 may perform registration on all the image volumes in the image data 202 or perform selective registration. Exemplary selective image registration corrective actions include registering image volumes (with motion) with image volumes that do not depict any motion. For example, all the image volumes with motion may be registered with a first image volume without motion in the image data 202. Alternatively, image volumes with no motion may be registered with image volumes that exhibit uniform motion. For example, if a patient changes his/her position early on in the acquisition process and subsequently remains in the displaced position, the first few image volumes that were acquired prior to patient movement may be registered with the image volumes corresponding to the displaced position of the patient 102. Consequently, the motion-correcting unit 208 may save considerable time by registering only a few images, in contrast to typical motion correction systems that register all the image volumes with the first image volume, irrespective of motion. It will be understood that various techniques may be employed to selectively register the image volumes without departing from the scope of the present disclosure. For instance, all the image volumes with motion may be registered with respect to a first image volume in the time series. Alternatively, all the image volumes with motion may be registered with the last image volume in the time series. In yet another example, each image volume with motion may be registered with an immediately preceding image volume, with the mean image volume of the image data 202, or with the transformed image components generated by the input unit 210.
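The selective-registration policy above (register only flagged volumes against a motion-free reference) can be sketched as a small selection step. The function name and the boolean-flag representation of the motion detector's labels are assumptions; the registration itself is omitted.

```python
def select_registration_targets(motion_flags):
    """Pick which volumes need registration and the reference to use.

    `motion_flags` is a list of booleans (True = motion detected). The first
    motion-free volume serves as the reference; only flagged volumes are
    registered, avoiding the cost of registering every volume against the
    first one irrespective of motion.

    Assumes at least one volume is motion-free.
    """
    reference = next(i for i, moved in enumerate(motion_flags) if not moved)
    to_register = [i for i, moved in enumerate(motion_flags) if moved]
    return reference, to_register

ref, targets = select_registration_targets([True, False, True, True])
# volume 1 becomes the reference; volumes 0, 2, and 3 get registered
```

Other policies from the text (last volume, preceding volume, mean volume) would change only how `reference` is chosen.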
[0049] In one embodiment, the motion-correcting unit 208 may determine the image registration technique based on one or more parameters. Moreover, these parameters may depend on the category of motion determined by the motion-quantifying unit 206. For instance, image registration parameters may include a number of registration iterations, whether the image slices were transformed, number of images to be registered, and the like. Only image volumes with about 10% rotational motion may be registered in one cycle, for example.
[0050] Exemplary image rejection corrective actions may include discarding the entire image data 202 and restarting the acquisition process if a majority of the image volumes exhibit severe motion. Alternatively, if a patient changes his/her position before application of the tracer and subsequently remains in the new position, the first few image volumes acquired before patient movement or before activation of the tracer may be discarded. Moreover, the motion-correcting unit 208 may discard one or more image volumes with severe motion from the image data 202. For example, if it is determined that image volume numbers 5, 6, 25, and 30 exhibit severe motion, the motion-correcting unit 208 may discard these image volumes. Subsequently, utilizing the adjacent image volumes (image volumes 4 and 7, 24 and 26, and 29 and 31, in this example), the motion-correcting unit 208 may be configured to interpolate the discarded image volumes. It will be understood that these corrective actions aim at reducing the time spent by typical motion correction systems, which register every image volume with the first image volume, irrespective of motion. However, if time is not of the essence, the motion-correcting unit 208 may be configured to register each image volume with the first image volume without departing from the scope of the present disclosure.
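The discard-and-interpolate step above can be sketched as a neighbour average. This is a minimal sketch under stated assumptions: the discarded indices are interior to the series, their immediate neighbours were kept, and simple averaging stands in for whatever interpolation the motion-correcting unit 208 actually applies.

```python
import numpy as np

def interpolate_discarded(volumes, discarded):
    """Replace volumes flagged with severe motion by the mean of their
    immediate neighbours (e.g., volume 5 rebuilt from volumes 4 and 6).
    """
    repaired = list(volumes)
    for i in discarded:
        repaired[i] = (volumes[i - 1] + volumes[i + 1]) / 2.0
    return repaired

# Six toy "volumes" whose voxels equal their index
vols = [np.full((2, 2), float(i)) for i in range(6)]
fixed = interpolate_discarded(vols, discarded=[3])
# volume 3 is rebuilt as the mean of volumes 2 and 4
```

Interpolating a handful of discarded volumes is cheaper than re-registering or reacquiring the whole series, which is the time saving the text describes.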
[0051] FIGs. 3-5 illustrate exemplary methods for detecting motion and correcting motion-based artifacts in the image data 202. More particularly, FIG. 3 is a flowchart illustrating an exemplary method 300 for motion correction in an imaging system, such as the imaging system 106 of FIG. 1, according to aspects of the present disclosure. FIG. 4 is a flowchart illustrating an exemplary method 400 for detecting motion in an object and FIG. 5 is a flowchart illustrating an exemplary method 500 for quantifying the detected motion. These methods 300, 400, and 500 will be described with reference to FIGs. 1 and 2. Moreover, the methods described with reference to FIGs. 3, 4, and 5 detect motion between one or more image slices from different image volumes instead of detecting motion between entire image volumes. It will be understood that this selection is merely exemplary and the methods may be applied, just as easily, to detect motion between two or more image volumes.
[0052] The method 300 depicts one image correction cycle. To detect motion and correct motion-based artifacts in the complete 4D image data 202, the method 300 may be repeated a plurality of times until image slices from all the desired image volumes are selected. Method 300 may begin at step 302, where two or more image volumes of an anatomical region of interest may be acquired. For instance, in case of a whole-body scan, the acquisition subsystem 108 may acquire a plurality of image volumes corresponding to the entire body of the patient 102 at various instants of time. Similarly, in case of a brain scan, the acquired 4D image data 202 may include a plurality of image volumes of the head of the patient 102. It will be noted that, often, the acquisition subsystem 108 may acquire up to a hundred image volumes of the same anatomical region of interest.
[0053] As discussed previously, the acquisition subsystem 108 may use a known technique to acquire the images based on the modality utilized. For instance, in case of an MRI system, a high power magnetic field may be employed. The magnetic spin of the protons in the patient 102 may align with the generated magnetic field. During this period, strong RF signals may be generated. Such signals may flip the spin of the
protons. When the RF signal is discontinued, the protons may return to the previous magnetic spin, and in the process, the protons may generate their own electrical signals. Such signals are collected and converted into digital data. The digital data may be employed to create the image data 202 using known reconstruction techniques, such as Fourier transforms. One such image acquisition and reconstruction cycle may produce one image volume. To obtain multiple image volumes, this process may be repeated as desired. Moreover, in some embodiments, during acquisition, a tracer may be introduced into the patient 102. As described previously, the tracer may be an exogenous contrast agent administered by the tracer unit 120 or an operator, or an endogenous spin label.
[0054] At step 304, one or more image slices corresponding to at least two image volumes may be selected. During one image correction cycle, two or more image volumes may be selected from the entire image data 202. The motion correction method 300 may be repeated multiple times with other sets of image volumes. Accordingly, by the end of an nth cycle of the method 300, the method may have traversed through n sets of image volumes.
[0055] The input unit 210 of the motion correction platform 112 may be employed to select the image slices from two adjacent image volumes. In one embodiment, one or more image slices from two adjacent image volumes may be selected. Alternatively, in other embodiments, one or more image slices corresponding to alternate image volumes or image volumes separated by a determined number of image volumes may be selected. Further, in some embodiments, a central image slice from each of the selected image volumes may be selected.
[0056] In one embodiment, the input unit 210 may also filter/decompose the image slices into transformed image components as described previously. These image components may depict motion more vividly than image slices and therefore, in some embodiments, one or more of the mathematical functions such as PCA or its variants, ICA, or spectral filtering may be employed for motion detection.
[0057] Furthermore, at step 306, the two or more image slices selected at step 304 may be compared to detect the presence of motion between the two or more image slices and their corresponding image volumes. To this end, the motion detector 204 may be configured to compute a motion field between the two or more image slices. In one example, central image slices from two adjacent image volumes may be selected. The two image slices may be represented as I_n and I_n+1. Then, using any known computing method, the motion detector 204 may be configured to determine the motion field (X_n, Y_n) between the two image slices. Some exemplary computing means may include phase correlation, block-based methods, or differential methods. If any motion exists between the two image slices, the motion field will have a non-zero value. However, in the absence of any motion between the image slices, the motion field may be uniformly zero (0, 0). Therefore, based on the motion field, the motion detector 204 may be configured to detect the presence of any motion between a pair of image slices. Based on this determination, the motion correction platform 112 may be configured to identify the image volumes that include motion. As described previously, if motion is detected in one image slice of an image volume, the motion correction platform 112 may assume that the entire corresponding image volume may include motion. Determination of the motion field will be described in detail with respect to FIG. 4.
[0058] Subsequently, at step 308, the motion detected at step 306 may be quantified/categorized. In one embodiment, the motion may be quantified based on the computed motion field value. For example, the motion may be quantified as negligible, mild, or severe. If the motion field value falls within a certain range, the motion may be quantified as mild and if the motion field value falls within a different range, the motion may be quantified as severe. In case the motion field has a near zero value, the motion may be quantified as negligible. As described previously, motion may be quantified into other categories without departing from the scope of the present disclosure. For instance, the motion may be categorized based on the type of motion, the percentage of motion, or other such categorizations. One exemplary method for quantifying motion will be described with reference to FIG. 5.
[0059] At step 310, the motion may be corrected based on the quantified motion. For instance, in case of severe motion in multiple image volumes, the operator or the motion-correcting unit 208 may be configured to automatically discard the image data 202 and restart acquisition. Alternatively, in case severe motion is identified in a few image volumes and mild motion is identified in other image volumes, the motion-correcting unit 208 may be configured to perform selective image registration on the image slices that are identified with mild and severe motion. It will be understood that the motion-correcting unit 208 may be configured to correct motion-based artifacts in the image volumes in any other known fashion without departing from the scope of the present disclosure. For instance, if motion is present in only a few image volumes (for example, 3-4 image volumes) of the image data 202, the motion-correcting unit 208 may be configured to simply discard these few image volumes. Alternatively, the motion-correcting unit 208 may be configured to determine whether the motion-based artifacts are present in the image volumes acquired before the tracer is absorbed. In such a case, the motion-correcting unit 208 may not perform any correction in these image volumes as these image volumes may not be utilized in diagnosis. It will be understood that in some embodiments the operator or the motion correction platform 112 may instruct or transmit commands to the motion-correcting unit 208 to perform one or more of these corrective actions through the user interface 118. Further, it will be appreciated that the motion-correcting unit 208 may not perform any motion correction for image volumes that depict negligible or no motion-based artifacts.
[0060] Turning now to FIG. 4, a flowchart depicting an exemplary method 400 for detecting motion in an object is depicted. More particularly, FIG. 4 illustrates the method step 306 of FIG. 3 in detail. Accordingly, in this method as well, the central slices from two adjacent image volumes are utilized, for illustrative purposes. As described with reference to FIG. 3, one or more image slices corresponding to one or more image volumes may be selected. The method begins at step 402, where the selected image slices may be segmented into a plurality of blocks. It may be noted that different parts of an image slice may undergo different magnitudes of transformations. For instance, there may be no motion in one part of the image slice with respect to a corresponding image slice in another image volume, but there may be excessive motion in another part of the same image slice with respect to the corresponding image slice in the other image volume. To account for such variation in motion, embodiments of the present disclosure employ the slicing unit 212 to segment the image into a plurality of blocks instead of detecting motion across the entire image as a whole. Moreover, by comparing the image slices block-wise, computation time may be reduced. In some embodiments, the slicing unit 212 may also mask some portion of the image slice that may cause image artifacts due to fluid movement, as described previously.
[0061] The image blocks from one image slice may then be compared with corresponding image blocks from the other image slice. Further, in accordance with aspects of the present disclosure, the block size may be configurable. In certain embodiments, the block size may be determined by an operator. However, in certain other embodiments, the block size may be automatically determined by the motion detector 204 based on one or more parameters. Let I_n and I_n+1 denote a central image slice from image volume n and a central image slice from a subsequent image volume n+1, respectively. The blocks of these image slices may be represented as b_n and b_n+1.
[0062] Subsequently, at step 404, the pixel intensity values of each block of the selected image slices may be normalized. It may be noted that this step may be optional. As described previously, blocks may be normalized in the case of perfusion imaging, where a tracer is introduced into the anatomical region of interest. Because the tracer may cause signal intensity variations in adjacent image slices, the signal intensity of the blocks may be normalized. For normalization, the mean pixel value of every block in an image slice may be shifted to zero. More particularly, the blocks may be mean-centered. In one example, the blocks may be mean-centered by subtracting the mean pixel intensity of the block from the intensity of each pixel in that block. These normalized blocks may be represented by equation (1):

P̂_bn = P_bn − P̄_bn (1)

where P̂_bn denotes the normalized pixel value of a pixel in block b_n, P̄_bn denotes the mean pixel value of block b_n, and P_bn denotes the original pixel value in block b_n.
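The mean-centering of equation (1) is a one-line operation; the sketch below illustrates it on a toy block. The function name is an assumption for illustration.

```python
import numpy as np

def mean_center_block(block):
    """Normalize a block by shifting its mean pixel value to zero:
    each pixel has the block's mean intensity subtracted, per equation (1).

    This suppresses tracer-induced intensity differences between
    acquisition cycles so they are not mistaken for motion.
    """
    return block - block.mean()

b = np.array([[10.0, 20.0],
              [30.0, 40.0]])     # block mean = 25
normalized = mean_center_block(b)
# the normalized block has zero mean; values shift to [[-15, -5], [5, 15]]
```

After this step, corresponding blocks from the two slices have comparable intensity levels regardless of tracer uptake.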
[0063] Subsequently, at step 406, a motion field may be determined by performing a phase correlation between corresponding blocks of the image slices from adjacent image volumes to detect any motion between the two image slices. In other words, if two image volumes (e.g., first and second image volumes) are selected, central image slices from both the image volumes are picked, and subsequently, the two central image slices are segmented into an equal number of blocks, the phase correlation may be performed between the 1st block of the first central image slice and the 1st block of the second central image slice, between the 3rd block of the first central image slice and the 3rd block of the second central image slice, and so on.
[0064] Moreover, the motion may be rotational or translational. As used herein, translational motion refers to motion along a straight line, such as an axis. Rotational motion, as used herein, refers to motion around an axis of rotation. More particularly, in rotational motion, some portions of the image in a block may remain in the same position in space but the image block may rotate from an initial position about a pivotal point.
[0065] To detect the translational and/or rotational motion, aspects of the present disclosure utilize an exemplary phase correlation process. It will be understood that other processes or approximations of this process may be used without departing from the scope of the present disclosure. Accordingly, given two input image blocks (b_n and b_n+1) corresponding to two image slices (I_n and I_n+1), a window function such as a Hamming window may be applied to the blocks b_n and b_n+1 to reduce edge effects. Subsequently, a discrete Fourier transform may be computed for both image blocks according to equation (2):
B_n = T{b_n}; B_n+1 = T{b_n+1} (2)

where B_n and B_n+1 are the Fourier transforms of the two image blocks.
[0066] Furthermore, a cross-power spectrum may be computed by determining the complex conjugate of the discrete Fourier transform (DFT) of the second block (b_n+1) and multiplying this conjugate element-wise with the DFT of the first block (b_n). Equation (3) provides the normalized cross-power spectrum as follows:

R = (B_n B*_n+1) / |B_n B*_n+1| (3)

where B*_n+1 denotes the complex conjugate of B_n+1.
[0067] Subsequently, the inverse Fourier transform of the normalized cross-power spectrum may be computed, as illustrated in equation (4):

r = T⁻¹{R} (4)

where T⁻¹ denotes the inverse Fourier transform and r denotes the cross-correlation between the two image blocks.
[0068] From the cross-correlation between the two image blocks corresponding to the two image slices, the degree of translation between the two image blocks may be determined. The process may be repeated for all the image blocks in the image slices to detect a relative motion between the pair of image slices. A rotation between a pair of image slices may be visualized as a translation of pixels from a first position (x, y) in the first image slice to a second position (x−Δx, y−Δy) in the second image slice. Hence, phase correlation may be used to determine the degree of motion.
[0069] Following the computation of the phase correlation between these image blocks, a motion field may be determined for the image slices, as indicated by step 408. The motion field (X, Y) between the image slices I_n and I_n+1 may be obtained based on the locations of the maxima of the normalized cross-power spectrum using equation (5):

(X, Y) = argmax_(x, y){r} (5)
[0070] The location of the maxima of the normalized cross-power spectrum may be representative of the displacement between the two blocks b_n and b_n+1. This displacement may be indicative of the translation of the center pixel of the current block b_n, or the amount by which the block b_n may be translated to appear similar to the block b_n+1. Moreover, the motion field (X, Y) provides the x and y coordinates of the translation. For instance, if the image in block b_n moves in the x direction by 20 pixels and in the y direction by 10 pixels, the motion field may be represented as (20, 10). In case the motion field is (0, 0), it will be understood that the image in block b_n+1 is at the same position as the image in block b_n.
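The steps of paragraphs [0065]-[0070] (windowing, DFTs, normalized cross-power spectrum, inverse transform, peak location) can be sketched end-to-end as follows. This is an illustrative approximation, not the claimed process: the function name, the peak-folding convention, and the small epsilon guarding the division are assumptions.

```python
import numpy as np

def phase_correlation_shift(block_a, block_b, window=True):
    """Estimate the (x, y) translation between two image blocks via phase
    correlation, following equations (2)-(5).

    A Hamming window damps edge effects, the cross-power spectrum is
    normalized to unit magnitude, and the peak of its inverse transform
    gives the shift. Peaks past half the block size are folded back to
    signed shifts (a consequence of the DFT's cyclic nature).
    """
    rows, cols = block_a.shape
    if window:
        w = np.outer(np.hamming(rows), np.hamming(cols))
        block_a, block_b = block_a * w, block_b * w
    A = np.fft.fft2(block_a)                  # equation (2)
    B = np.fft.fft2(block_b)
    cross = A * np.conj(B)                    # equation (3), numerator
    cross /= np.abs(cross) + 1e-12            # equation (3), normalization
    r = np.fft.ifft2(cross).real              # equation (4)
    dy, dx = np.unravel_index(np.argmax(r), r.shape)  # equation (5)
    if dy > rows // 2:                        # fold wrapped peak indices
        dy -= rows
    if dx > cols // 2:
        dx -= cols
    return dx, dy

rng = np.random.default_rng(1)
base = rng.standard_normal((32, 32))
shifted = np.roll(base, shift=(3, 5), axis=(0, 1))   # move 3 rows, 5 cols
# window=False because np.roll is an exact cyclic shift; on real (non-cyclic)
# image data the Hamming window helps suppress edge effects
dx, dy = phase_correlation_shift(shifted, base, window=False)
# recovers the (x, y) offset between the two blocks
```

Repeating this per block over a pair of slices yields the motion field (X, Y) the text describes; an all-zero field indicates no motion.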
[0071] FIG. 5 is a flowchart illustrating an exemplary method 500 to quantify the detected motion in an image slice. More particularly, the exemplary method 500 of FIG. 5 illustrates the quantification of the detected motion of step 308 of FIG. 3. The method begins at step 502, where two image slices, such as image slices I_n and I_n+1, are selected and the motion field (X, Y) between the pair of image slices is obtained from the motion detector 204 or the data repository 114.
[0072] Further, at step 504, a statistical measure, such as a standard deviation, variance, or entropy value corresponding to the motion fields may be determined. Alternatively, clustering methods such as support vector machines or K-dimensional trees may be utilized to classify the motion fields and the phase correlation associated with the image slices. Embodiments of the present method are described with respect to entropy. However, it will be appreciated that motion may be quantified using any statistical measure or clustering method without departing from the scope of the present disclosure. The motion field (X_n, Y_n) describes the orthogonal components of the motion field of two representative image slices (I_n and I_n+1) of the image volumes of the data set that includes N image volumes. In the absence of any motion, the motion field when viewed as an image may be substantially uniformly smooth (all pixel values may be equal). Presence of motion, on the other hand, may introduce a disturbance in the image of the motion field, which may be characterized in terms of the entropy or variance of the motion field.
[0073] As described previously, to ensure that noise related changes outside the anatomical region of interest do not affect the motion field, a mask may be determined for each image slice. In one example, the mask may be determined using a phase-field based segmentation of the image slice. The mask may be generated individually for the image slices In and In+1. Subsequently, the union of the two masks may be generated.
This union may be employed to ensure that the entire span of motion in the two image slices (In and In+1) is captured by the mask and consequently the motion field may fully reflect the extent of motion in those slices. For the image data 202, the total entropy Stotal and the peak entropy Speak may be determined using equations (6) and (7):
Stotal = Σn=1 to N Sn = Σn=1 to N (SXn + SYn) (6)

Speak = max(Sn) (7)

where Sn denotes the net entropy between a pair of image slices, and SXn and SYn denote the entropy of the Xn and Yn components of the corresponding motion field.
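Equations (6) and (7) can be sketched in Python. The text does not fix a particular definition of entropy, so this sketch assumes the Shannon entropy of a histogram of motion-field values; the function names and the choice of 32 histogram bins are illustrative assumptions:

```python
import numpy as np

def field_entropy(field, bins=32):
    """Shannon entropy of the histogram of one motion-field component.
    A perfectly uniform field (no motion) has zero entropy."""
    hist, _ = np.histogram(field, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-np.sum(p * np.log2(p)))

def entropy_summary(motion_fields):
    """motion_fields: list of (Xn, Yn) arrays, one pair per slice pair.
    Returns (Stotal, Speak) in the sense of equations (6) and (7):
        Sn     = SXn + SYn
        Stotal = sum of Sn over n = 1..N
        Speak  = max of Sn over n = 1..N
    """
    s = [field_entropy(X) + field_entropy(Y) for X, Y in motion_fields]
    return sum(s), max(s)

# A motionless pair gives a uniform (all-zero) field; a noisy field
# stands in for a disturbed motion field caused by patient motion.
rng = np.random.default_rng(0)
still = (np.zeros((64, 64)), np.zeros((64, 64)))
disturbed = (rng.normal(0, 5, (64, 64)), rng.normal(0, 5, (64, 64)))
s_total, s_peak = entropy_summary([still, disturbed])
```

As the text notes, a uniform field contributes essentially zero entropy, so Stotal and Speak are driven by the slice pairs that actually contain motion.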
[0074] In accordance with aspects of the present disclosure, the entropy determined at step 504 may be employed to quantify the detected motion. Steps 506-514 describe one example of the categorization of the detected motion, where the detected motion is categorized as negligible, mild, or severe. In one example, for an entropy value that is less than 20% of the peak entropy value, the motion may be categorized as negligible. In a similar fashion, for an entropy value between about 20% and about 50% of the peak entropy value, the motion may be categorized as mild. Also, for an entropy value above 50% of the peak entropy value, the motion may be categorized as severe. It will be appreciated that these ranges and categories are merely exemplary. These ranges and/or categories may be customized either manually by an operator or automatically by the motion-quantifying unit 206, as described previously. Moreover, the motion may be categorized into any number of categories as deemed fit by the operator or the motion-quantifying unit 206.
[0075] At step 506, a check may be carried out to verify whether the entropy is less than 20% of the peak entropy value Speak. If the calculated entropy value Stotal for a pair of image slices is lower than 20% of the peak entropy value Speak, control may be passed on to step 508, where the motion may be categorized as 'negligible motion'. However, at step 506, if it is determined that the entropy value Stotal of the pair of image slices is higher than 20% of the peak entropy value Speak, control may be passed on to step 510. At step 510, another check may be carried out to verify whether the entropy value Stotal of the pair of image slices is between 20% and 50% of the peak entropy value Speak. If the entropy value is within this range, control may be passed on to step 512, where the motion may be categorized as 'mild motion'. However, at step 510, if it is determined that the entropy value Stotal is greater than 50% of the peak entropy value Speak, control may be passed on to step 514, where the motion may be categorized as 'severe motion'. It will be understood that the categorizations are merely exemplary and any other means for categorizing the motion may be utilized without departing from the scope of the present disclosure.
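The decision logic of steps 506-514 can be sketched as a small Python function. The function name and the default threshold tuple are assumptions of this sketch; as the text states, the thresholds may be customized by an operator:

```python
def categorize_motion(s_pair, s_peak, thresholds=(0.20, 0.50)):
    """Map the entropy of one slice pair onto the exemplary
    negligible/mild/severe categories of steps 506-514.
    Thresholds are expressed as fractions of the peak entropy."""
    lo, hi = thresholds
    ratio = s_pair / s_peak
    if ratio < lo:                    # step 506 -> step 508
        return "negligible motion"
    if ratio <= hi:                   # step 510 -> step 512
        return "mild motion"
    return "severe motion"            # step 510 -> step 514
```

Because the categories are driven by a single ratio, adding further categories (for example, the 'rotational motion' label mentioned below for FIG. 6B) only requires extending the threshold list.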
[0076] FIG. 6 illustrates the phase-correlation based motion field for different motion categories according to exemplary embodiments of the present disclosure. More particularly, FIG. 6A is a diagrammatical representation 600 illustrating two brain perfusion MR image slices 602 and 604 of a patient at times t1 and t2, for example, and the corresponding motion field (X and Y) images 606 and 608. FIG. 6B is a diagrammatical representation 610 illustrating brain perfusion MR image slices 612 and 614 of the same patient at times t4 and t5, for example, and the corresponding motion field (X, Y) images 616 and 618. FIG. 6C is a diagrammatical representation 620 illustrating the brain perfusion MR image slices 622 and 624 of the same patient at times t10 and t11, for example, and the corresponding motion field (X, Y) images 626 and 628. As seen in FIG. 6A, despite the changes in signal intensity in the second image slice 604 as compared to the first image slice 602, the motion field images 606 and 608 are almost uniform, indicating that the motion detector 204 is generally immune to the tracer-based signal changes in the region of interest. Moreover, the motion field images 606 and 608 are almost uniform because there is no or negligible motion between the two image slices 602 and 604. Image volumes corresponding to such image slices 602 and 604 may be categorized as 'negligible motion' image volumes.
[0077] In FIG. 6B, there exists a relatively small rotational motion between the two images 612 and 614. This is evident from the orientation of the brain image in the second image slice 614 with respect to the first image slice 612. However, there is no signal intensity change in these images 612 and 614. In this case, both the motion field images 616 and 618 show a few variations at the upper end of the brain, illustrating that there is more motion towards the upper end of the brain and relatively smaller motion towards the bottom end of the brain, therefore indicating rotational motion between the image slices 612 and 614. Moreover, the entropy of these motion field images 616 and 618 may lie in a mid-range of the peak entropy in the image data 202 and therefore, the image volumes corresponding to these image slices 612 and 614 may be categorized as 'mild motion' image volumes, 'rotational motion' image volumes, or any other such motion category.
[0078] Further, in FIG. 6C, the image slices 622, 624 include both rotational and translational motion. This is evident from the orientation and the relative position of the second image slice 624 with respect to the first image slice 622. Moreover, in areas near the ventricles 630, there are signal intensity variations as well. This type of motion in the image slices 622, 624 causes excessive disturbances in the motion field images 626, 628 as illustrated in FIG. 6C. However, despite the changes in signal intensity near the ventricles 630, the motion field images 626, 628 are uniform in this region, suggesting that the tracer-based gradual signal intensity changes do not affect the motion detector 204. Moreover, the entropy of these motion field images 626, 628 may lie in the excessive motion range, and therefore the image volumes corresponding to these image slices 622, 624 may be categorized as 'severe motion' image volumes, 'rotational and translational motion' image volumes, or any other such motion category.
[0079] FIG. 7A is a diagrammatical representation 700 of a plurality of 2D image slices corresponding to different image volumes in the image data 202. Moreover, FIG. 7B depicts a graph 710 that is representative of the entropy values corresponding to pairs of image slices illustrated in FIG. 7A. More particularly, the diagrammatical representation 700 of the 2D image slices depicts 24 MR image slices of the brain of a patient taken over a time period t1 to t25. Moreover, the MR image slices progress left to right and top to bottom. As depicted in image slice 6, it may be noticed that the tracer is absorbed in the brain and from then on until time