Abstract: The present disclosure relates to approaches for removing or reducing the effects of motion in parallel and non-parallel data acquisitions using a nuclear medicine imaging system. In certain embodiments, translation vectors (130) are derived based on a registration (120) performed on transaxial slices (116) generated from the acquired projection data (104). The translation vectors (130) may be employed to update (142) a system matrix (110) such that images generated using the updated system matrix (144) are free of motion artifacts or have reduced motion artifacts.
BACKGROUND OF THE INVENTION
The subject matter disclosed herein relates to nuclear imaging, and more
particularly to correction of motion artifacts in single photon emission computed
tomography (SPECT).
A variety of imaging techniques are known and currently in use, such as for
medical diagnostic applications. Certain such techniques, such as SPECT, rely on the
emission of gamma rays during the radioactive decay of a radioisotope (or
radionuclide), commonly administered in the form of a radiopharmaceutical agent that
can be carried, and in some cases, be accumulated in or bound to particular tissues of
interest. Such nuclear imaging technologies detect the emissions via a suitable
gamma radiation detector. In particular, a suitable gamma radiation detector may
consist of components which, in response to incident radiation, generate image data
related to the quantity of radiation impacting the individual regions of the detector.
The image data generated by the detector components may then be reconstructed to
generate images of internal structures of the subject.
While such systems have proven extremely useful at providing high quality
images with good diagnostic value, further refinement is possible. For example, in
some instances motion artifacts may be introduced due to patient motion within the
field of view and/or due to the motion of components of the imaging system during
the acquisition of image data. In certain gamma ray detection configurations where
non-parallel collimation techniques are employed, such motion may be difficult to
address and may, therefore, lead to visual artifacts in images generated using the
acquired image data.
BRIEF DESCRIPTION OF THE INVENTION
The present disclosure relates to approaches by which motion correction in
SPECT images may be achieved. In one embodiment, the translational displacements
of the acquired object (e.g., patient) may be addressed for both parallel and non-parallel
acquisition systems. In one such embodiment, translation and duration
information may be determined for a set of acquired projections, and an updated
system matrix may be generated based on the translation and duration information.
The updated system matrix may then be used to generate an image in which artifacts
attributable to motion are reduced or eliminated.
In accordance with one aspect of the present disclosure, an image
reconstruction method is provided. In accordance with this method, a set of
projection data is acquired at a plurality of views and time intervals with respect to an
imaging volume. A plurality of slices are reconstructed based on the set of projection
data and a system matrix associated with the acquisition of the set of projection data.
The slices are registered to generate a plurality of transformation vectors describing
the translation in three-dimensional space for each time interval during the acquisition
of the set of projection data. One or more transformation vectors are determined
based on the act of registering the slices. An updated system matrix is generated
based on the one or more transformation vectors and the associated time intervals. A
motion-corrected image is reconstructed using the updated system matrix.
In accordance with another aspect, one or more machine readable media are
provided that encode routines. The routines, when executed by a processor, cause acts
to be performed that include: reconstructing a plurality of slices based on a set of
projection data acquired at a plurality of views and time intervals and a system matrix
associated with the acquisition of the set of projection data; generating a plurality of
transformation vectors describing the translation in three-dimensional space for each
time interval during the acquisition of the set of projection data; determining one or
more translational offsets based on the plurality of transformation vectors; and
generating an updated system matrix that applies the translational offsets to one or
more virtual detectors that correspond to different exposure times during the
acquisition of the set of projection data.
In accordance with a further aspect, an image analysis system is provided.
The image analysis system includes one or more processing components configured
to receive measured projections of an imaging volume acquired at different views and
time intervals with respect to the imaging volume, and to execute one or more
executable routines stored in a memory. The stored routines, when executed,
reconstruct a plurality of slices based on the set of projection data and a system matrix
associated with the acquisition of the set of projection data, register the slices to
generate a plurality of transformation vectors describing the translation in three-dimensional
space for each time interval during the acquisition of the set of projection
data, and generate an updated system matrix based on the plurality of transformation
vectors and corresponding time intervals. The image analysis system also includes
interface circuitry configured to allow user interaction with the image analysis system.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present invention
will become better understood when the following detailed description is read with
reference to the accompanying drawings in which like characters represent like parts
throughout the drawings, wherein:
FIG. 1 is a diagrammatical representation of an embodiment of a SPECT
imaging system suitable for use in accordance with the present disclosure;
FIG. 2 depicts an example of a SPECT image acquisition occurring over a
variety of views using a collimated gamma detector assembly, in accordance with
aspects of the present disclosure;
FIG. 3 depicts an example of a SPECT image acquisition occurring over a
variety of views using pin-hole camera type gamma detectors, in accordance with
aspects of the present disclosure;
FIG. 4 depicts a flow diagram of processor-executable logic for addressing
motion artifacts in SPECT images, in accordance with aspects of the present
disclosure;
FIG. 5 depicts an example of a motion correction operation performed on
the measured data, in accordance with one aspect of the present disclosure; and
FIG. 6 depicts an example of a motion correction operation performed on
the system geometry, e.g., system matrix, in accordance with one aspect of the present
disclosure.
DETAILED DESCRIPTION OF THE INVENTION
As discussed herein, the present disclosure relates to the generation of
nuclear medicine images, such as SPECT reconstructions, in which artifacts
attributable to motion are reduced or removed. For example, in one embodiment, a
system matrix that describes or corresponds to the physical camera geometry with
respect to the imaging volume may be modified to correspond to two or more
positions of the patient and/or camera during image acquisition. The modified system
matrix may then be used in the reconstruction of the acquired image data such that
image data associated with each modeled position or geometry is properly
reconstructed with the differences in geometry being reduced or removed. In this
manner, artifacts attributable to motion may be reduced or eliminated, even in systems
where a non-parallel detector mechanism is employed.
With the foregoing discussion in mind, a diagrammatic representation of
one example of a SPECT imaging system suitable for use with the present approach is
shown in FIG. 1. The system of FIG. 1, designated generally by the reference
numeral 10, is designed to produce useful images of a subject 14 using suitable
detector components (such as pin-hole gamma cameras or collimated scintillating
detectors) as described in detail below. The subject is positioned in a scanner,
designated by reference numeral 16, in which a patient support 18 is positioned. The
support may be movable within the scanner to allow for imaging of different tissues
or anatomies of interest 20 within the subject. Prior to image data collection, a
radioisotope, such as a radiopharmaceutical substance (sometimes referred to as a
radiotracer), is administered to the patient, and may be bound or taken up by
particular tissues or organs 20. Typical radioisotopes include various radioactive
forms of elements that emit gamma radiation during decay. Various additional
substances may be selectively combined with such radioisotopes to target specific
areas or tissues 20 of the body.
Gamma radiation emitted by the radioisotope is detected by a detector
component 22, such as a digital detector or gamma cameras. Although illustrated in
the figure as a planar device positioned above the patient to simplify illustration, in
practice the detector structure(s) 22 may be positioned about the patient, such as in an
arc or ring about the patient, or may be attached to a positioner (e.g., a C-arm, gantry,
or other movable arm) that allows the detector structure(s) 22 to be moved in such an
arc or orbit about the patient during data acquisition. In general, the detector
structure(s) 22 typically include one or more components or elements capable of
sensing gamma radiation or otherwise generating a detectable signal in response to
such radiation. In the illustrated embodiment, the detector structures comprise one or
more collimators and a scintillator, together represented generally as reference
numeral 24. The collimator may be formed from parallel or non-parallel elements
that allow gamma radiation emitted only in certain directions to impact the detecting
components. In detector embodiments employing a scintillator, the scintillator may
be made of a crystalline material, such as sodium iodide (NaI), that converts the
received gamma radiation to lower-energy light (e.g., in an ultraviolet range).
Photomultiplier tubes 26 then receive this light and generate image data
corresponding to photons impacting specific discrete picture element (pixel) regions.
In other embodiments, the detector structure 22 may not be collimated but may
instead use other gamma radiation sensing technologies, such as one or more pin-hole
gamma cameras, as also discussed herein.
In the depicted embodiment, the detector structure(s) 22 is coupled to
system control and processing circuitry 28. This circuitry may include a number of
physical and/or software components that cooperate to allow the collection and
processing of image data to create the desired images. For example, the circuitry may
include raw data processing circuitry 30 that initially receives the data from the
detector structure(s) 22, and that may perform various filtering, value adjustments,
and so forth. Processing circuitry 32 allows for the overall control of the imaging
system, and for manipulation and/or reconstruction of image data. The processing
circuitry 32 may also perform calibration functions, correction functions, and so forth
on the data. The processing circuitry 32 may also perform image reconstruction
functions, such as based on known algorithms (e.g., back projection, iterative
reconstruction, and so forth). Such functions may also be performed in post-processing
on local or remote equipment. As will be appreciated, the various image
reconstruction and artifact correction algorithms discussed herein may be
implemented in part or in their entirety using one or both of the raw data processing
circuitry 30 and/or the processing circuitry 32.
In the depicted embodiment, the processing circuitry 32 interacts with
control circuitry/interface 34 that allows for control of the scanner and its
components, including the patient support, camera, and so forth. Moreover, the
processing circuitry 32 will be supported by various circuits, such as memory
circuitry 36 that may be used to store image data, calibration or correction values,
routines performed by the processing circuitry (such as the motion artifact correction
algorithms disclosed herein), and so forth. In one embodiment, the processing
circuitry executes one or more iterative reconstruction algorithms that may utilize
approaches for reducing or removing motion effects, as discussed herein. Such
iterative reconstruction approaches may generally utilize iterated comparisons
between expected or reference images and observed or measured image data to reduce
artifacts or irregularities attributable to non-physiological factors, such as factors
related to motion and/or imaging system geometry. In such an iterative reconstruction
approach, the convergence process or loop may be repeated or iterated until some
completion criterion is met, such as minimization of a cost function.
Finally, the processing circuitry may interact with interface circuitry 38
designed to support an operator interface 40. The operator interface allows for
imaging sequences to be commanded, scanner and system settings to be viewed and
adjusted, images to be viewed, and so forth. In the illustrated embodiment, the
operator interface includes a monitor 42 on which reconstructed images 12 may be
viewed.
In an institutional setting, the imaging system 10 may be coupled to one or
more networks to allow for the transfer of system data to and from the imaging
system, as well as to permit transmission and storage of image data and processed
images. For example, local area networks, wide area networks, wireless networks,
and so forth may allow for storage of image data on radiology department information
systems and/or on hospital information systems. Such network connections further
allow for transmission of image data to remote post-processing systems, physician
offices, and so forth.
With respect to the gamma ray detection components 22 of the SPECT
imaging system 10, two arrangements are used: parallel and non-parallel. In an
example of a parallel arrangement, a detector may be collimated with an arrangement
of parallel structures such that the resulting acquisition of gamma rays is not
divergent. For example, turning to FIG. 2, a collimated detector assembly 60 or
collimated camera is employed and is depicted at four different radial views (A-D)
with respect to the patient 14. In one such arrangement, image data is sequentially
acquired, with the detector components being rotated to the different radial positions
(A-D) at discrete points in time to acquire image data at the respective radial views.
The collimator in such an assembly 60 acts to limit the angular range of gamma rays
striking the detector panel (i.e., gamma rays striking a detector panel at a given radial
view are substantially parallel to one another), thereby helping to localize the gamma
ray emission. Thus, in such an image acquisition configuration, the collimated
detector assembly 60 has a parallel field-of-view 62 that is limited, non-inverted, and
which does not expand with distance, i.e., does not diverge.
This arrangement is in contrast to detector arrangements where the
employed collimation is non-parallel, such as a pinhole collimator, fan-beam collimator,
or cone-beam collimator. For example, FIG. 3 depicts a pin-hole camera 70 or
multiple pin-hole cameras 70 at different radial views (A-D) with respect to the
patient 14. In one such arrangement, image data is sequentially acquired, by one or
more pin-hole cameras 70 being rotated to the different radial positions (A-D) at
discrete points in time to acquire image data at the respective radial views. In contrast
to the parallel collimated arrangement of FIG. 2, in the depicted pin-hole camera 70
arrangement a pin-hole camera 70 has an associated non-parallel field-of-view 72
from a given view angle, as depicted by respective dashed lines, that diverges with
distance. Thus, as will be appreciated, pin-hole cameras 70, such as those depicted,
and other non-parallel acquisition systems generally acquire conical projections
corresponding to an inverted image of the non-parallel field-of-view 72 associated
with the respective camera 70.
With respect to the use of fan-beam and cone-beam collimators, the non-parallel
field-of-view 72 is actually converging in two dimensions or three dimensions
to a line or a point, respectively. In such instances, the focal line or point
may be within the volume of the patient. Similar situations may also exist in
computed tomography (CT) where a volume is imaged. In some instances, a CT
imager may comprise a slow rotating gantry, such as a C-arm. The focal point of the
field-of-view of the associated two-dimensional detector array is the X-ray source
(e.g., an X-ray tube). While the present discussion focuses primarily on SPECT
systems in order to provide a useful context and easily visualized examples, it should
be understood that other types of imaging modalities that are susceptible to patient
motion, such as CT, positron emission tomography (PET), magnetic resonance
imaging (MRI), and others, may also benefit from the patient motion correction
approach disclosed herein.
As previously noted, over the course of an examination, the patient (or
internal organs of the patient) may move with respect to the acquisition imaging
geometry, regardless of type. Likewise, imaging geometry changes with respect to
the region or organ of interest due to the movement of the detector components about
the patient may result in perceived motion. In conventional systems, correction for
such motion effects may be based on detection of translational displacements of
projections relative to a reference. For example, a forward projection of a
reconstructed image may be compared to a previous projection employed as a
reference, with differences between the images being attributed to a translational
displacement. Once detected, the projection may be translated back to the expected,
i.e., motion free, location. Such corrections may be iteratively performed.
Such conventional motion correction may be sufficient for parallel shift
variant projections, such as where parallel collimation of the detector is employed, as
in FIG. 2. However, non-parallel shift variant projections (e.g., pin-hole camera
and/or detector assemblies employing diverging or converging collimators or FOVs)
may include motion components arising from non-translational transformations of the
projections. Such non-translational transformations cannot be corrected without
knowledge of the three-dimensional distribution of the data. That is, the non-parallel
aspects of the acquired projections result in perceived or observed motion or
differences that are not simply translations of the data in one direction or another, but
are instead other transformations of the data, such as perceived changes in shape or
size. For example, it is evident that a rigid translation motion of an organ away from
a pinhole collimator will cause a general reduction of the size of its image on the
detector, yet different parts of the organ will be differently distorted depending on
their exact spatial location in relation to the pinhole. Similarly, any motion, axial or
lateral, of an imaged object would cause a non-linear distortion of the projected
image.
Turning to FIG. 4, a flowchart is provided depicting control logic for
correcting for motion effects even in a non-parallel data acquisition, such as those
depicted in the system of FIG. 3. While the described approach is suitable for motion
correction in such a non-parallel system, the approach is also suitable for use with
data collected using a parallel acquisition system, as depicted in FIG. 2.
In the depicted flowchart 100, a step of acquiring (block 102) a set of
multiple projections (G) 104 of elements Gi is depicted. Each projection is typically
derived or generated over a non-overlapping time interval (t) 108, such as a time
interval of 20 seconds. In one embodiment, the set of projection data 104 represents
sequential acquisitions of projections over non-overlapping time intervals of length t.
The acquisition step includes both spatial and temporal variables, denoted
by system geometry 106 and time intervals (t) 108, which respectively describe the
spatial and geometric relationship between the detector and the imaged volume during
different aspects of the acquisition process and the time intervals associated with each
geometric configuration of the system. The system geometry 106 and associated time
intervals 108 of the data acquisition may define or be used to generate a system
matrix (A) 110 that describes the relationship between elements of the detector(s) that
generate a signal and voxels within the imaging volume at specified times. In one
embodiment, system matrix 110 may be provided as a table (e.g., a correlation table)
or other data structure that describes the relationship at specified times between what
signal or value would be observed at a detector element based on the activity or
intensity (e.g., emitted radiation) at a given voxel of the imaging volume. That is, the
system matrix describes the relationship between activity within the imaging volume
and expected observations at the detector for a given geometry and at a given time.
In the depicted example, each element (i.e., Gi) of the set of projections 104
is reconstructed (block 114) to generate respective transaxial slices (T) 116. The
reconstruction process, in one embodiment, is based on the system matrix 110 such
that:
(1) A * Ti = Gi
where A is the system matrix, Gi is a projection or element of the set of projections
104, and Ti is a transaxial slice generated based on Gi.
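The per-projection reconstruction described above can be illustrated numerically. The following sketch is purely illustrative (the matrix sizes, values, and the use of a least-squares solve are assumptions, not part of the disclosure): each projection Gi is inverted through the system matrix A to recover a transaxial slice Ti, consistent with the relationship A * T = G.

```python
import numpy as np

# Hypothetical sizes: 16 detector elements observing 16 voxels.
rng = np.random.default_rng(0)
A = rng.random((16, 16)) + np.eye(16)        # system matrix A (detector x voxel)
activity = rng.random(16)                    # true voxel activity in the volume
G = [A @ activity for _ in range(4)]         # set of measured projections Gi

# Reconstruct a transaxial slice Ti for each Gi, per A * Ti = Gi.
T = [np.linalg.lstsq(A, g, rcond=None)[0] for g in G]

print(np.allclose(T[0], activity))           # → True: each Ti recovers the activity
```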
The transaxial slices 116 may be registered (block 120) against a baseline
or reference transaxial slice to generate a plurality of registered transaxial slices 122.
In one embodiment the registration uses a translation or vector transformation with a
metric (e.g., cross-correlation, mutual information, least mean squares, and so forth).
In one embodiment, the elements of transaxial slices Ti where i > 0 may be registered
against the elements of a first or baseline transaxial slice T0. Registration against the
baseline or reference transaxial slice allows determination (block 128) of the
respective transformation vectors TR 130 (trx, try, trz) describing the translation
(lateral movement and/or rotational movement) of elements or structures in three-dimensional
space for each time interval during the acquisition.
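The registration step (block 120) can be sketched with a cross-correlation metric. This is a minimal, assumed implementation (FFT-based circular cross-correlation with whole-voxel shifts; array sizes are illustrative): the peak of the correlation between a slice Ti and the baseline T0 gives the components (trx, try, trz) of the transformation vector.

```python
import numpy as np

def register_translation(ref, moving):
    """Estimate the integer voxel shift that aligns `moving` to `ref` by
    locating the peak of their circular cross-correlation (via FFT).
    Returns one signed offset per axis, i.e. (trx, try, trz)."""
    corr = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular peak positions into a signed range centered on zero.
    return tuple(int(p - n if p > n // 2 else p) for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(1)
t0 = rng.random((8, 8, 8))                          # baseline transaxial slice T0
ti = np.roll(t0, shift=(2, 0, -1), axis=(0, 1, 2))  # slice Ti after a known motion

print(register_translation(t0, ti))                 # → (-2, 0, 1)
```

Applying the returned shift to Ti (e.g., with `np.roll`) realigns it with the baseline T0.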
For elements where there was no translation in space or time during the
acquisition process, a combination may be performed (block 134) to simplify
subsequent computations. However, for those elements where a translation (lateral
movement or rotational) is determined to be present, a new set of projections (H) 140
may be generated (block 138) using a combination of all projections where there is a
given translational offset. In one such embodiment, the number of new projections is
the product of the number of translational offsets and the number of original
projections. That is, projection data may be binned together based on the absence of
movement (i.e., no translation from the reference) and/or where the movement is the
same so that all projections to be corrected based on a given translation may be
binned together.
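The binning of projections that share a translational offset (block 138) can be sketched as follows. The per-interval offsets below are hypothetical values chosen for illustration; each bin corresponds to one new projection set H that can be handled with a single matrix correction.

```python
from collections import defaultdict

# Hypothetical per-interval translation vectors TR_i (in voxels); the
# index of each entry is the index of the projection acquired then.
translations = [(0, 0, 0), (0, 0, 0), (2, 0, -1), (2, 0, -1), (0, 1, 0)]

# Bin projection indices that share the same offset: all projections to
# be corrected by a given translation are grouped into one set H_k.
bins = defaultdict(list)
for idx, tr in enumerate(translations):
    bins[tr].append(idx)

print(dict(bins))
# → {(0, 0, 0): [0, 1], (2, 0, -1): [2, 3], (0, 1, 0): [4]}
```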
Based on the translation and duration of the projections associated with
each set of new projections H 140, and using the system matrix A, which maps T to G
such that A * T = G (see equation 1), an updated system matrix B 144 may be
generated (block 142). For example, in one implementation, the updated system
matrix B 144 may describe a relationship where:
(2) B * T = H
In one implementation, the generation of the updated system matrix B 144 may be
simplified by assuming a translation in steps of whole voxels. Effectively, in this
manner, the updated system matrix is modified or updated to represent the various
geometries or relative camera positions present in the data acquisition step and the
corresponding observed projection data is essentially binned based on time and
camera geometry.
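Under the whole-voxel simplification described above, an updated matrix for a given offset can be formed by permuting the voxel (column) axis of the original matrix, so the motion is folded into the matrix rather than the data. This sketch is an assumed implementation with illustrative sizes (and circular shifts for simplicity):

```python
import numpy as np

def shift_system_matrix(A, offset, vol_shape):
    """Return matrix B whose voxel (column) axis is permuted so that
    B * T models the projection of T translated by whole-voxel `offset`,
    i.e. the motion correction lives in the matrix, not the data."""
    n_det = A.shape[0]
    vox = A.reshape(n_det, *vol_shape)              # expose the 3D voxel grid
    neg = tuple(-o for o in offset)                 # matrix shifts opposite the object
    return np.roll(vox, shift=neg, axis=(1, 2, 3)).reshape(n_det, -1)

rng = np.random.default_rng(2)
vol_shape = (4, 4, 4)
A = rng.random((10, 64))                            # 10 detector elements, 4x4x4 voxels
T = rng.random(vol_shape)

moved = np.roll(T, shift=(1, 0, -1), axis=(0, 1, 2))  # object after motion
B = shift_system_matrix(A, (1, 0, -1), vol_shape)     # matching matrix update

print(np.allclose(A @ moved.ravel(), B @ T.ravel()))  # → True
```

The check confirms that projecting the moved object through A is equivalent to projecting the stationary object through the updated matrix B.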
Because the updated system matrix 144 takes into account the observed
motion and corresponding time data for when motion is observed, the projection data
104 itself may be unmodified as the necessary motion compensation is provided for
by the corresponding elements of the updated system matrix 144. The updated system
matrix 144 may be used to reconstruct the entire projection data set to generate a
motion-free or motion-reduced image using methods that utilize a system matrix in
the reconstruction process, such as maximum likelihood expectation maximization
(MLEM), ordered subsets expectation maximization (OSEM), block sequential
regularized expectation maximization (BSREM), and so forth. In particular, the
updated system matrix 144 may be used to reconstruct a motion-free or motion-reduced
image even in situations where the data acquisition involved the use of a non-
parallel detection scheme (i.e., pin-hole cameras or divergent or convergent
collimation schemes).
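Any of the listed system-matrix-based methods can consume the updated matrix. As one example, a minimal MLEM iteration is sketched below; the sizes and values are toy assumptions, and a real implementation would include normalization, stopping criteria, and noise handling.

```python
import numpy as np

def mlem(A, g, n_iter=500):
    """Minimal MLEM sketch: multiplicatively update the activity estimate
    x so the forward projection A @ x approaches the measured data g."""
    x = np.ones(A.shape[1])                      # uniform initial estimate
    sens = np.clip(A.sum(axis=0), 1e-12, None)   # detector sensitivity per voxel
    for _ in range(n_iter):
        ratio = g / np.clip(A @ x, 1e-12, None)  # measured / expected projections
        x *= (A.T @ ratio) / sens                # multiplicative MLEM update
    return x

rng = np.random.default_rng(3)
A = rng.random((40, 16))          # toy system matrix (an updated matrix B fits here too)
truth = rng.random(16)
g = A @ truth                     # noise-free measured projections

est = mlem(A, g)
rel_err = np.linalg.norm(A @ est - g) / np.linalg.norm(g)
print(rel_err < 0.1)              # forward projection of the estimate matches the data
```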
With the foregoing discussion in mind, a simplified example is provided to
assist in visualizing motion and motion correction as presently contemplated. Turning
to FIG. 5, an incidence of undesired patient motion during a multi-pin-hole camera
image acquisition process is depicted. In this example, the undesired motion occurs at
time t1 such that projection data acquired by a first detector A 150 and a second
detector B 152 during the first time interval between t0 and t1 corresponds to the
patient being at a first position. Similarly, projection data acquired during a second
time interval between t1 and tT by the first detector A 150 and the second detector B
152 corresponds to the patient being at a second position. With the foregoing in
mind, it will be appreciated that four different data sets are derived in this simplified
example: DA1 corresponding to the data acquired by first detector 150 between t0 and
t1; DB1 corresponding to the data acquired by second detector 152 between t0 and t1;
DA2 corresponding to the data acquired by first detector 150 between t1 and tT; and
DB2 corresponding to the data acquired by second detector 152 between t1 and tT. As
will be appreciated, reconstructing the entire data set acquired by the detectors 150,
152 between t0 and tT will yield a blurred image due to the change in patient position
within this time interval.
In accordance with previous approaches, the datasets acquired at the
different time intervals (i.e., datasets DA1 and DB1 corresponding to t0 to t1 and
datasets DA2 and DB2 corresponding to t1 to tT) are separately reconstructed to obtain
two translated images: first image (P1) 160 and second image (P2) 162. In performing
this reconstruction, system matrices MA and MB may be used to represent the first
detector 150 and the second detector 152, respectively, such that:
(3) MA * P1 = DA1 ; MB * P1 = DB1
(4) MA * P2 = DA2 ; MB * P2 = DB2
In accordance with previous approaches, the second image P2 162 is offset with
respect to image P1 160 due to patient or other motion. The second image P2 162 may
be translated by a translation vector "Tr" such that the two images are correctly
registered, i.e., aligned, where
(5) (Tr)(P2) → P2'.
Once registered, P1 and P2' may be summed to get a final image:
(6) P1 + P2' → P'.
As noted previously, this previously known approach may yield sub-optimal results
since each dataset contains fewer detected gamma events, resulting in P1 160 and P2 162
having high levels of statistical noise.
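The previously known translate-and-sum correction of equations (5) and (6) can be sketched directly. For illustration, the second image is simulated as a whole-voxel shifted copy of the first (values and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
p1 = rng.random((8, 8, 8))                        # image P1 from the first interval
tr = (0, 3, -2)                                   # detected offset of P2 relative to P1
p2 = np.roll(p1, shift=tr, axis=(0, 1, 2))        # image P2 from the second interval

# Equation (5): translate P2 back so the two images are registered.
p2_prime = np.roll(p2, shift=tuple(-t for t in tr), axis=(0, 1, 2))
# Equation (6): sum the registered images to get the final image P'.
p_final = p1 + p2_prime

print(np.allclose(p_final, 2 * p1))               # → True: images aligned before summing
```

This also makes the drawback visible: each interval's image is reconstructed from only part of the counts before summing, which is why the text notes the high statistical noise of P1 and P2.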
It should be noted that iterative algorithms are mathematically non-linear in
the sense that adding reconstruction results of several data sub-sets is not the same as
adding the data sub-sets and then reconstructing. In linear reconstruction algorithms
such as Filtered Back Projection (FBP), the order of summing is not important. Image
quality of iterative reconstruction of correctly summed data sub-sets may thus be
superior to the results of summing the results of iterative reconstruction of each sub-set
separately. It is the deformation caused by the non-parallel nature of the camera
that prevents simple translation and summation of the data sub-sets and requires
creating a compensating system matrix to allow iterative reconstruction of the entire
data set.
Turning to FIG. 6, and in accordance with certain present embodiments, the
parameters of the patient translation Tr may be used in an alternative manner for
motion correction. For example, in the depicted implementation the detection system
(i.e., first detector 150 and second detector 152) is virtually translated such that the
patient remains stationary. It should be understood that, as used herein, a translation
may be a rigid translation or lateral move, a rigid rotation, or a combination of lateral
and rotational moves.
In this example, this would result in two sets of detectors (i.e., four virtual
detectors in this example) being obtained, with each virtual detector characterized by
its coordinates and exposure time. In this example, the four virtual detectors may be
characterized as: virtual detectors A1 170 and B1 172 that acquired data during the
first time interval t0 to t1, and virtual detectors A2' 174 and B2' 176 that acquired data
during the second time interval, where A2' 174 and B2' 176 are translated by -(Tr)
(i.e., negatively translated). In view of the use of the virtual detectors
characterized in coordinates and exposure time, the datasets associated with each
virtual detector may be described as: dataset DA1 corresponding to detector A1 170 at
t0 to t1, DB1 corresponding to detector B1 172 at t0 to t1, DA2 corresponding to detector
A2' 174 at t1 to tT, and DB2 corresponding to detector B2' 176 at t1 to tT. As will be
appreciated, the datasets themselves are not shifted; instead, the appropriate virtual
detectors (e.g., A2' and B2') are shifted.
Based on this virtualized detection system, a new composite system matrix
(i.e., updated system matrix 144) may be characterized for the four virtual detectors of
this example. For example, the new composite system matrix may be comprised of
system matrices MA, MA', MB, and MB', which each take into account the appropriate
translated or untranslated system geometry and appropriate acquisition time for each
interval t0 to t1 and t1 to tT. As discussed above, the new composite system matrix may
be used to reconstruct the entire dataset to get an improved final image P' ':
(7) MA * P'' = DA1 ; MB * P'' = DB1 ; MA' * P'' = DA2 ; MB' * P'' = DB2
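The composite reconstruction can be sketched by stacking the per-interval system matrices and datasets and solving for a single image over the entire acquisition. In this illustrative toy (random matrices, a column permutation standing in for the whole-voxel shift, and a least-squares solve in place of iterative reconstruction — all assumptions), the full dataset yields the image directly:

```python
import numpy as np

rng = np.random.default_rng(5)
n_vox = 27
ma, mb = rng.random((12, n_vox)), rng.random((12, n_vox))   # matrices MA, MB

truth = rng.random(n_vox)                                   # stationary patient image P''
cols = rng.permutation(n_vox)                               # stand-in for the voxel shift
ma_p, mb_p = ma[:, cols], mb[:, cols]                       # translated matrices MA', MB'

# Datasets DA1, DB1 (pre-motion) and DA2, DB2 (post-motion), stacked.
d = np.concatenate([ma @ truth, mb @ truth, ma_p @ truth, mb_p @ truth])

# Composite matrix stacks all four virtual detectors; solve for P''.
composite = np.vstack([ma, mb, ma_p, mb_p])
p_final = np.linalg.lstsq(composite, d, rcond=None)[0]

print(np.allclose(p_final, truth))                          # → True
```

Because all four datasets contribute to one solve, the full count statistics are used, in contrast to the translate-and-sum approach.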
It should be understood that, though the preceding example related to only
two time intervals to simplify explanation, the presently disclosed approach may be
applied to more than two time intervals. For example, in one implementation, acts
may be performed to identify the various times associated with patient motions and
to construct the appropriate time intervals based on the observed patient motion. In
one such implementation, the total time t0 to tT may be divided into N intervals (e.g.,
0-dt, dt-2dt, 2dt-3dt, ..., (N-1)dt-T, where dt = T/N). Reconstruction may be performed
for each interval separately, as discussed above, and the reconstructed images may be
analyzed (such as by registration to a reference) to identify instances of motion.
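The interval-splitting logic can be sketched as follows. The per-interval offsets below are hypothetical registration results with a single motion instance; the data is then split into pre-motion and post-motion groups as the text describes.

```python
# Hypothetical offsets found by registering each interval's reconstruction
# to the reference (interval 0); one motion occurs, starting at interval 3.
offsets = [(0, 0, 0), (0, 0, 0), (0, 0, 0),
           (2, 1, 0), (2, 1, 0), (2, 1, 0)]

# First interval whose offset differs from the reference marks the motion;
# the acquisition then splits into pre-motion and post-motion intervals.
motion_at = next(i for i, tr in enumerate(offsets) if tr != (0, 0, 0))
pre = list(range(motion_at))
post = list(range(motion_at, len(offsets)))

print(motion_at, pre, post)
# → 3 [0, 1, 2] [3, 4, 5]
```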
If there was only one instance of patient motion (such as at time t(i) = dt*i), the
image shifts Tr are given by:
(8) Tr1 = Tr2 = ... = Tr(i) = 0;
(9) Tr(i+1) = ... = Tr(N) = Tr(motion);
where Tr(motion) is the motion of the patient at time dt*i. The total acquired data can
now be split between two intervals, 0 to dt*i and dt*i to T, since there was only one
instance of motion (i.e., pre-motion data and post-motion data). Alternatively, in
other implementations an external and/or independent motion detection mechanism
may be employed to detect one or more instances of patient motion. When the time
or times that the motions occurred is known, the process may be carried out as
discussed above
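The interval-splitting step above may be sketched as follows; the function names, the registration tolerance, and the representation of the per-interval shifts as scalars are illustrative assumptions (a real implementation would register 3-D volumes and produce translation vectors):

```python
import numpy as np

def detect_motion_instant(shifts, tol=0.5):
    """Given per-interval shifts Tr(1..N), obtained by registering each
    interval's reconstruction to a reference, return the 0-based index of
    the first interval whose shift exceeds tol, or None if none does."""
    shifts = np.asarray(shifts, dtype=float)
    moved = np.flatnonzero(np.abs(shifts) > tol)
    return int(moved[0]) if moved.size else None

def split_intervals(N, dt, i):
    """Split the total acquisition [0, N*dt] into the pre-motion span
    0 to dt*i and the post-motion span dt*i to T, per equations (8)-(9),
    where i is the index of the first interval affected by the motion."""
    return (0.0, i * dt), (i * dt, N * dt)
```

For example, shifts of [0.0, 0.1, 0.0, 2.0, 2.1] with dt = 10 place the single motion instant at t = 30, splitting the data into pre-motion (0 to 30) and post-motion (30 to 50) sets.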
One example of an implementation in which the present approach may be
applied is in the field of cardiology. For example, after a cardiac stress test (in which
the patient performs strenuous physical exercise to increase the heart rate) the heart
shifts from its rest position and, during recovery, slowly moves back toward its rest
position. In such an instance, Tr is a function of time. For example, Tr at time t =
i*dt is given by Tr(dt*i) = dTr*i or, if a non-linear translation is assumed, Tr(dt*i) =
dTr*i + ddTr*i*i.
In this cardiology example, the times associated with patient motion (here
heart motion) may be used to determine TR1, TR2, ... TRN, and so forth, which describe
the lateral motion and/or rotation of the heart in each time interval that is identified as
including motion. Once the respective translation factors, TR1, TR2, ... TRN, are
known, these translation factors may be fitted to the preceding equation to determine
dTr (and optionally ddTr). A new composite system matrix may then be constructed
with the (now known) translations given by Tr(dt*i) = dTr*i or Tr(dt*i) = dTr*i +
ddTr*i*i. The new composite system matrix may then be used to reconstruct the
entire image dataset to generate an improved, motion-corrected image.
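The fitting step above may be sketched as a least-squares fit of the observed translation factors to the drift model Tr(dt*i) = dTr*i + ddTr*i*i; the function name and the scalar treatment of each translation factor are illustrative assumptions:

```python
import numpy as np

def fit_drift(tr, quadratic=True):
    """Least-squares fit of Tr(dt*i) = dTr*i (+ ddTr*i*i) to the observed
    per-interval translations tr, where tr[k] is the shift for i = k + 1.
    Returns (dTr, ddTr); ddTr is 0 when a linear model is assumed."""
    tr = np.asarray(tr, dtype=float)
    i = np.arange(1, tr.size + 1, dtype=float)
    A = np.column_stack([i, i * i] if quadratic else [i])  # design matrix
    coef, *_ = np.linalg.lstsq(A, tr, rcond=None)
    dTr = coef[0]
    ddTr = coef[1] if quadratic else 0.0
    return dTr, ddTr
```

With the fitted dTr (and optionally ddTr) in hand, the translations Tr(dt*i) for every interval follow directly from the model and can populate the new composite system matrix.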
Technical effects of the invention include generation of a reconstructed
volume in which the effects of motion are reduced or eliminated. Technical effects
may include acquiring projection data using a non-parallel detector architecture and
generating motion-free or motion-reduced images based on the acquired projection
data. Technical effects may also include generation of an updated system matrix
based at least on transformation information obtained from transaxial slices
reconstructed from measured projection data.
This written description uses examples to disclose the invention, including
the best mode, and also to enable any person skilled in the art to practice the
invention, including making and using any devices or systems and performing any
incorporated methods. The patentable scope of the invention is defined by the claims,
and may include other examples that occur to those skilled in the art. Such other
examples are intended to be within the scope of the claims if they have structural
elements that do not differ from the literal language of the claims, or if they include
equivalent structural elements with insubstantial differences from the literal language
of the claims.
We Claim:
1. An image reconstruction method, comprising the acts of:
acquiring (102) a set of projection data (104) at a plurality of views and time
intervals (108) with respect to an imaging volume;
reconstructing (114) a plurality of slices based on the set of projection data
(104) and a system matrix (110) associated with the acquisition of the set of
projection data (104);
registering (120) the slices to generate a plurality of transformation vectors
describing the translation in three-dimensional space for each time interval (108)
during the acquisition (102) of the set of projection data (104);
determining (128) one or more transformation vectors (130) based on the act
of registering (120) the slices;
generating (142) an updated system matrix (144) based on the one or more
transformation vectors (130) and the associated time intervals; and
reconstructing a motion-corrected image using the updated system matrix
(144).
2. The image reconstruction method of claim 1, wherein the set of projection
data (104) comprises a set of single photon emission computed tomography (SPECT)
projection data.
3. The image reconstruction method of claim 1, wherein the system matrix
(110) describes one or both of a system geometry (106) or time intervals (108) at
different imaging positions associated with the acquisition of the set of projection data
(104).
4. The image reconstruction method of claim 1, wherein the slices comprise
transaxial slices (116).
5. The image reconstruction method of claim 1, wherein registering (120) the
slices comprises registering the slices against a baseline or reference slice.
6. The image reconstruction method of claim 1, comprising combining
elements (134) where a translational offset is not determined to be present based on
the act of registering (120) the slices.
7. The image reconstruction method of claim 1, wherein the registration uses
a translation or vector transformation.
8. The image reconstruction method of claim 1, wherein the updated system
matrix (144) corresponds to two or more virtual detectors (170, 172), each
characterized by a position and an exposure time.
9. The image reconstruction method of claim 1, wherein the set of projection
data (104) is acquired (102) using a non-parallel detector geometry.
10. The image reconstruction method of claim 9, wherein the non-parallel
detector geometry is associated with one or more of a pin-hole camera (70), a
convergently collimated detector, or a divergently collimated detector.
11. An image analysis system, comprising:
one or more processing components (32) configured to receive measured
projections (104) of an imaging volume acquired at different views and time intervals
with respect to the imaging volume, and to execute one or more executable routines
stored in a memory (36);
the memory (36) storing the one or more executable routines, wherein the
stored routines, when executed, reconstruct (114) a plurality of slices based on the set
of projection data (104) and a system matrix (110) associated with the acquisition of
the set of projection data (104), register (120) the slices to generate a plurality of
transformation vectors (130) describing the translation in three-dimensional space for
each time interval (108) during the acquisition (102) of the set of projection data
(104), and generate (142) an updated system matrix (144) based on the plurality of
transformation vectors (130) and corresponding time intervals (108); and
interface circuitry (38) configured to allow user interaction with the image
analysis system.
12. The image analysis system of claim 11, comprising:
one or more detector assemblies suitable for detecting radiation emitted from a
patient (14), wherein the one or more detector assemblies detect non-parallel radiation
emissions;
data acquisition circuitry (30) configured to acquire signals from the one or
more detector assemblies, wherein the measured projections (104) are or are derived
from the acquired signals.
13. The image analysis system of claim 12, wherein the one or more
detector assemblies comprise pin-hole gamma cameras (70), convergently collimated
detector assemblies, or divergently collimated detector assemblies.
14. The image analysis system of claim 11, wherein the registration uses a
translation or vector transformation.
| # | Name | Date |
|---|---|---|
| 1 | 1840-DEL-2012-ASSIGNMENT WITH VERIFIED COPY [18-03-2025(online)].pdf | 2025-03-18 |
| 2 | 1840-DEL-2012-IntimationOfGrant21-12-2023.pdf | 2023-12-21 |
| 3 | 1840-del-2012-Other-Documents-(14-06-2012).pdf | 2012-06-14 |
| 4 | 1840-DEL-2012-FORM-16 [18-03-2025(online)].pdf | 2025-03-18 |
| 5 | 1840-del-2012-GPA-(14-06-2012).pdf | 2012-06-14 |
| 6 | 1840-DEL-2012-PatentCertificate21-12-2023.pdf | 2023-12-21 |
| 7 | 1840-DEL-2012-FORM 3 [22-09-2023(online)].pdf | 2023-09-22 |
| 8 | 1840-del-2012-Form-5-(14-06-2012).pdf | 2012-06-14 |
| 9 | 1840-DEL-2012-POWER OF AUTHORITY [18-03-2025(online)].pdf | 2025-03-18 |
| 10 | 1840-DEL-2012-Information under section 8(2) [22-09-2023(online)].pdf | 2023-09-22 |
| 11 | 1840-del-2012-Form-3-(14-06-2012).pdf | 2012-06-14 |
| 12 | 1840-DEL-2012-Written submissions and relevant documents [22-09-2023(online)].pdf | 2023-09-22 |
| 13 | 1840-del-2012-Form-2-(14-06-2012).pdf | 2012-06-14 |
| 14 | 1840-del-2012-Form-1-(14-06-2012).pdf | 2012-06-14 |
| 15 | 1840-DEL-2012-Correspondence to notify the Controller [06-09-2023(online)].pdf | 2023-09-06 |
| 16 | 1840-del-2012-Drawings-(14-06-2012).pdf | 2012-06-14 |
| 17 | 1840-DEL-2012-AMENDED DOCUMENTS [18-08-2023(online)].pdf | 2023-08-18 |
| 18 | 1840-del-2012-Description-(Complete)-(14-06-2012).pdf | 2012-06-14 |
| 19 | 1840-DEL-2012-FORM 13 [18-08-2023(online)].pdf | 2023-08-18 |
| 20 | 1840-del-2012-Correspondence-others-(14-06-2012).pdf | 2012-06-14 |
| 21 | 1840-DEL-2012-FORM-26 [18-08-2023(online)].pdf | 2023-08-18 |
| 22 | 1840-del-2012-Claims-(14-06-2012).pdf | 2012-06-14 |
| 23 | 1840-DEL-2012-POA [18-08-2023(online)].pdf | 2023-08-18 |
| 24 | 1840-del-2012-Assignments-(14-06-2012).pdf | 2012-06-14 |
| 25 | 1840-DEL-2012-US(14)-HearingNotice-(HearingDate-12-09-2023).pdf | 2023-08-14 |
| 26 | 1840-DEl-2012-ABSTRACT [28-04-2020(online)].pdf | 2020-04-28 |
| 27 | 1840-del-2012-Abstract-(14-06-2012).pdf | 2012-06-14 |
| 28 | 1840-del-2012-Correspondence-Others-(26-06-2012).pdf | 2012-06-26 |
| 29 | 1840-DEl-2012-CLAIMS [28-04-2020(online)].pdf | 2020-04-28 |
| 30 | 1840-del-2012-Assignment-(26-06-2012).pdf | 2012-06-26 |
| 31 | 1840-DEl-2012-COMPLETE SPECIFICATION [28-04-2020(online)].pdf | 2020-04-28 |
| 32 | 1840-del-2012-Correspondence Others-(09-07-2012).pdf | 2012-07-09 |
| 33 | 1840-DEl-2012-CORRESPONDENCE [28-04-2020(online)].pdf | 2020-04-28 |
| 34 | 1840-DEl-2012-DRAWING [28-04-2020(online)].pdf | 2020-04-28 |
| 35 | 1840-del-2012-Form-3-(03-12-2012).pdf | 2012-12-03 |
| 36 | 1840-del-2012-Correspondence Others-(03-12-2012).pdf | 2012-12-03 |
| 37 | 1840-DEl-2012-FER_SER_REPLY [28-04-2020(online)].pdf | 2020-04-28 |
| 38 | 1840-DEl-2012-OTHERS [28-04-2020(online)].pdf | 2020-04-28 |
| 39 | GPOA_GEC.pdf | 2015-06-04 |
| 40 | 1840-DEL-2012-FER.pdf | 2019-11-25 |
| 41 | 248350 Form 13.pdf | 2015-06-04 |
| 42 | 1840-DEL-2012-FORM 13 [26-09-2019(online)].pdf | 2019-09-26 |
| 43 | GPOA_GEC.pdf_105.pdf | 2015-06-23 |
| 44 | 248350 Form 13.pdf_104.pdf | 2015-06-23 |
| 45 | 1840-DEL-2012-RELEVANT DOCUMENTS [26-09-2019(online)].pdf | 2019-09-26 |
| 46 | SearchStrategyMatrix_1840_del_2012_20-11-2019.pdf | |