
Methods And Systems For Rigid Alignment In Fusion Biopsy

Abstract: Methods and systems for anatomical aware initial rigid alignment of a 2D US tracking space with a 3D MR image space are disclosed in the embodiments herein. The embodiments herein relate to the field of medical imaging devices and, more particularly, to methods and systems for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space. The system is configured to register an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in the 3D MR image space. The system is further configured to receive a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object in a tracker space, and to refine an established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space. FIG. 2


Patent Information

Application #
Filing Date
06 June 2019
Publication Number
50/2020
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
patent@bananaip.com
Parent Application

Applicants

Samsung Medison
3366, Hanseo-ro, Yangdeokwon-ri, Nam-myeon, Hongcheon-gun, Gangwon-do, Republic of Korea

Inventors

1. Bhavya Ajani
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India
2. Aditya Bharadwaj
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India
3. Soumik Mukhopadhyay
#2870, Phoenix Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560037, Karnataka, India
4. Jun-Sung Park
3366, Hanseo-ro, Yangdeokwon-ri, Nam-myeon, Hongcheon-gun, Gangwon-do, Republic of Korea.
5. Yuri Son
3366, Hanseo-ro, Yangdeokwon-ri, Nam-myeon, Hongcheon-gun, Gangwon-do, Republic of Korea.

Specification

CROSS REFERENCE TO RELATED APPLICATION
This application is based on and derives the benefit of Indian Provisional Application 201941022449 filed on 6th June 2019, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
[001] The present disclosure relates to the field of medical imaging devices and more particularly to methods and systems for performing anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space.
BACKGROUND
[002] Generally, a plurality of medical imaging methods may be utilized to provide images of internal patient structures for diagnostic purposes and for interventional procedures. Accordingly, the ultrasound (US) medical imaging method can be used for guidance during a biopsy. However, it can be difficult to visualize a suspicious cancer (such as prostate cancer (PCa)) using only US images.
[003] Also, MRI/MR (Magnetic Resonance Imaging/Magnetic Resonance) medical imaging can be used for detection of a suspicious cancer. However, it may be difficult to use for real-time biopsy guidance. Furthermore, medical imaging methods such as three-dimensional (3D) imaging methods can also be used for detection and/or treatment of prostate (i.e. organ) cancer.
[004] Additionally, a fusion biopsy or targeted biopsy method can be used, which allows fusion of both the MR images and the US images to improve the biopsy outcome for clinically significant cancer. Accordingly, suspicious areas on the body of a subject (such as a human, an animal, and so on) may first be marked on a pre-operative MR image, and during the live biopsy the MR images may be overlaid on an intra-operative US image to guide clinicians to the biopsy targets. However, a rigid alignment between the intra-operative US images and the pre-operative 3D MR is critical for an accurate biopsy. The rigid alignment may facilitate side-by-side visualization of MR images similar to the live US images for cognitive biopsy. The rigid alignment may also facilitate mapping targets from the MR image onto the US images for fusion guided biopsy.
[005] Conventional 3D ultrasound (US) image acquisition methods for automatic rigid alignment may register previously obtained volume(s) onto an ultrasound (US) volume during the US procedure to produce a multimodal image. In yet another conventional method, the intra-operative ultrasound (US) image may be integrated with a stereotactic system, wherein the stereotactic system may interactively register two-dimensional (2D) US and three-dimensional (3D) magnetic resonance (MR) images.
[006] Further, conventional systems disclose an ultrasound (US)-magnetic resonance (MR) image fusion registration method using a finite element model, by constructing the finite element model from a magnetic resonance (MR) image of the organ (such as the prostate) and the acquired ultrasound elastography. According to the finite element model, the MR image and the 3D transrectal US image data may be registered.
[007] Additionally, conventional methods disclose a method for fusion of ultrasound (US) images with pre-acquired images, comprising acquisition of a first 3D US image in real-time. The 3D ultrasound image acquired in real-time is registered with the pre-acquired 3D image using a reference pattern or object.
[008] However, conventional systems may require volumetric data of both the 3D US volume and the 3D MR volume for rigid alignment. The 3D US volumetric acquisition requires a 3D US probe that may be expensive. Alternatively, the 3D reconstruction of the US volumetric data from a 2D sweep may require an experienced urologist/radiologist. Further, the 3D reconstruction from the 2D images may introduce errors due to the sweep or the reconstruction, further degrading the accuracy of the fusion biopsy. Further, a segmentation of the organ on both the 3D US volume and the 3D MR volume may often be required for surface-to-surface rigid alignment in the 3D image space.
OBJECTS
[009] The principal object of the embodiments herein is to disclose methods and systems for performing anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space.
[0010] Another object of the embodiments herein is to disclose methods and systems for performing coarse 3D US-MR rigid alignment in fusion biopsy by estimating an anatomical relationship between a 2D US image plane, acquired from a pre-determined anatomical plane, and a 3D MR image plane, to obtain a pre-defined structure of the organ segmentation in the 3D MR image plane.
[0011] Another object of the embodiments herein is to disclose methods and systems for determining an optimal 3D MR image plane in the received 3D MR images of the organ segmentation.
[0012] Another object of the embodiments herein is to refine the residue error in the 3D US-MR rigid alignment by constructing a 3D correction from a set of in-plane corrections over a set of 2D US images acquired from a plurality of orientations.
SUMMARY
[0013] Accordingly, the embodiments herein provide a method for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the method comprising registering, by a processor, an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space; establishing, by the processor, a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes; receiving, by the processor, a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object in a tracker space; mapping, by the processor, the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model; identifying, by the processor, a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images; establishing, by the processor, a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; establishing, by the processor, a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; and refining, by the processor, the established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space.
[0014] Accordingly, the embodiments herein provide a system for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the system comprising a 2D US image probe communicatively coupled to an electronic device; a 3D Magnetic Resonance Imaging (MRI) device communicatively coupled to the electronic device; a processor of the electronic device; and a memory unit coupled to the processor, wherein the memory unit comprises a processing module configured to register an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space; establish a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes; receive a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object in a tracker space; map the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model; identify a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images; establish a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; establish a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; and refine the established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space.
[0015] Accordingly, the embodiments herein provide a system for anatomical aware initial rigid alignment of an intra-operative three Dimensional (3D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the system comprising a 2D US image probe communicatively coupled to an electronic device; a 3D Magnetic Resonance Imaging (MRI) device communicatively coupled to the electronic device; a processor of the electronic device; and a memory unit coupled to the processor, wherein the memory unit comprises a processing module configured to acquire a plurality of 2D US images by a tracked image probe with tracking information and a pre-operative 3D organ segmentation on a 3D MR image plane with anatomical co-ordinate information; estimate a coarse rigid alignment by dynamically matching a plane in the 2D US image plane taken from a known anatomical orientation with the corresponding plane in the 3D MR image plane, by utilizing prior anatomical meta-information and an optimal plane search in a constrained 6 Degrees of Freedom (DOF) image space; and refine the estimated coarse rigid alignment, by deriving a 3D correction from a sub-set of in-plane US-MR alignment corrections over a plurality of 2D US images taken from different orientations of the tracked image probe.
[0016] These and other aspects of the example embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating example embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the example embodiments herein without departing from the spirit thereof, and the example embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0017] Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0018] FIG. 1a illustrates a block diagram of a system for performing anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, according to embodiments as disclosed herein;
[0019] FIG. 1b illustrates a detailed view of a processing module as shown in FIG. 1a, comprising various modules, according to embodiments as disclosed herein;
[0020] FIG. 2 illustrates a schematic diagram for rigid alignment in fusion biopsy, according to embodiments as disclosed herein;
[0021] FIG. 3a is a flow diagram depicting a method for estimating a coarse 3D rigid alignment based on matching anatomically similar planes on US image plane and a MR image plane, according to embodiments as disclosed herein;
[0022] FIG. 3b is a flow diagram depicting a method for refining an initial rigid alignment using in-plane alignment errors over a plurality of planes, according to embodiments as disclosed herein;
[0023] FIG. 4 illustrates a block diagram to perform instantaneous motion correction by real-time in-plane rigid correction for dynamic misalignment due to organ motion, according to embodiments as disclosed herein;
[0024] FIG. 5 illustrates a schematic diagram for multi-modality registration using a plurality of 2D US images and a pseudo 3D US image derived from the 3D MR image, according to embodiments as disclosed herein;
[0025] FIG. 6a is a flow diagram for static alignment correction of the US-MR rigid alignment, according to embodiments as disclosed herein;
[0026] FIG. 6b is a flow diagram for dynamic progressive alignment correction of the US-MR rigid alignment, according to embodiments as disclosed herein;
[0027] FIG. 7a is a flow chart depicting a method for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, according to embodiments as disclosed herein;
[0028] FIG. 7b is a flow chart depicting a method for estimating a full 3D correction to be applied over initial rigid alignment, by composing the in-plane residue errors, according to embodiments as disclosed herein; and
[0029] FIG. 7c is a flow chart depicting a method for correcting instantaneous motion, by taking a weighted moving average of past in-plane corrections, according to embodiments as disclosed herein.

DETAILED DESCRIPTION
[0030] The example embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The description herein is intended merely to facilitate an understanding of ways in which the example embodiments herein can be practiced and to further enable those of skill in the art to practice the example embodiments herein. Accordingly, this disclosure should not be construed as limiting the scope of the example embodiments herein.
[0031] The embodiments herein achieve methods and systems for performing anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space (i.e. US-tracking space) with a pre-operative 3D Magnetic Resonance (MR) image space. Referring now to the drawings, and more particularly to FIGs. 1a through 7c, where similar reference characters denote corresponding features consistently throughout the figures, there are shown example embodiments.
[0032] FIG. 1a illustrates a block diagram of a system 100 for performing anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, according to embodiments as disclosed herein.
[0033] The system 100 includes a two Dimensional (2D) Ultrasound (US) image probe 102, a Magnetic Resonance Imaging (MRI) device 104, and an electronic device 106. The 2D US image probe 102 and the MRI device 104 can be connected to the electronic device 106 via a communication network 108. The communication network 108 may be a wired (such as a local area network, Ethernet, and so on) or a wireless communication network (such as Wi-Fi, Bluetooth, Zigbee, cellular networks, and so on).
[0034] The electronic device 106 includes a memory unit 106a, a processing module 106b, a storage unit 106c, a database 106d, a display unit 106e, and a processor 106f. The electronic device 106 can function as a server (not shown), such as a remote server, a standalone server, or a server on a cloud. When machine readable instructions are executed, the processing module 106b causes the electronic device 106 to process the data received from the system 100. Furthermore, the electronic device 106 includes the database 106d, which can be used to store required data. The electronic device 106 may occasionally connect to the server (not shown) via a communication network (not shown). The communication network may be a wired (such as a local area network, Ethernet, and so on) or a wireless communication network (such as Wi-Fi, Bluetooth, and so on). The electronic device 106 may also retrieve data from external databases (not shown) based on the requirement and store the retrieved data in the database 106d associated with the electronic device 106.
[0035] Examples of the electronic device 106 can be at least one of, but not limited to, a mobile phone, a smart phone, a tablet, a handheld device, a phablet, a laptop, a computer, a wearable computing device, a workstation, a scanner, a server, a medical device, an imaging device, and so on. The electronic device 106 may comprise other components such as an input/output interface, a communication interface, and so on.
[0036] The system 100 includes the 2D US image probe 102 operative to obtain at least one 2D US image. The 2D US image probe 102 may be one of a variety of known US devices/probes, including two-dimensional (or three dimensional) US image probes. For example, the 2D US image probe 102 may be a real-time freehand transrectal ultrasound (TRUS) probe and may be used to guide needle positioning for a biopsy or seed placement. The 2D US image probe 102 can also be coupled to a tracking device (not shown). The tracking may be performed during the ultrasound imaging of the patient (for example, trans-rectal prostate imaging (TRUS)). The tracking of the 2D US probe 102 can be performed by integrating tracking sensors into a device that attaches rigidly to the 2D US image probe 102, such as a biopsy guide, or by integrating tracking sensors into the 2D US image probe 102.
[0037] The electronic device 106 acquires real-time images from the 2D US image probe 102. In an embodiment herein, the electronic device 106 can display the acquired images on a display unit 106e of the electronic device 106 and/or on a display connected to the electronic device 106. The electronic device includes modules that allow the identification of one or more points (for example, anatomical landmarks, lesions, and so on) in the ultrasound image from the 2D US image probe 102. The electronic device can convert the coordinates of these identified points from ultrasound image coordinates to coordinates in a common coordinate system using the real-time probe tracking information provided by the probe tracking sensor (not shown). The conversion of the coordinates is performed by an initial anatomical plane matching and by using the probe tracking system. Further, a registration of real-time freehand ultrasound images with acquired (e.g., 3D) images of the same organ in the MR image plane is realized with the system 100.
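By way of a minimal illustration, the coordinate conversion described above can be expressed as a chain of homogeneous transforms. The Python sketch below assumes a 4x4 homogeneous-transform convention; the function and parameter names (pixel_to_world, image_to_probe, probe_to_tracker) are illustrative and not taken from the disclosure.

import numpy as np

def pixel_to_world(pixel_xy, spacing_mm, image_to_probe, probe_to_tracker):
    # Map a 2D US pixel (u, v) to 3D tracker/world coordinates.
    # image_to_probe  : 4x4 probe calibration transform (assumed known)
    # probe_to_tracker: 4x4 pose reported by the electromagnetic tracker
    u, v = pixel_xy
    sx, sy = spacing_mm
    # The 2D image plane is embedded at z = 0 in probe coordinates.
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])
    p_world = probe_to_tracker @ image_to_probe @ p_image
    return p_world[:3]

# Example with identity transforms: the pixel maps straight to millimetres.
print(pixel_to_world((100, 50), (0.2, 0.2), np.eye(4), np.eye(4)))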
[0038] In an embodiment, the system 100 is configured to register an anatomical 3D model, marked with one or more pre-determined anatomical planes, with an organ segmentation in a 3D MR image space. In an embodiment, the system 100 is configured to establish a set of features (such as a center plane, and so on) for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes. The projecting of the organ segmentation is performed by using an anatomical template.
[0039] In an embodiment, the system 100 is configured to receive a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object, in a tracker space. The 2D US images may be received from an ultrasound probe. The tracker space is a 2D or 3D space defined by an electromagnetic field generator, a static device placed near the probe. Further, an electromagnetic sensor may be attached to the probe and measures the value of the electromagnetic field. The electromagnetic generator produces a spatially varying electromagnetic field. Using this information, the tracking system can track (i.e. know the movement of) the sensor in real-time. The tracker space can be formed/used by the electromagnetic tracker probe. The object can be at least one of, but not limited to, a human, an animal, a creature, and so on.
[0040] In an embodiment, the system 100 is configured to map the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model. In an embodiment, the system 100 is configured to identify a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images.
[0041] In an embodiment, the system 100 is configured to establish a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. In an embodiment, the system 100 is configured to establish a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. In an embodiment, the system 100 is configured to refine the established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space.
[0042] In an embodiment, the system 100 is configured to acquire the plurality of 2D US images from a plurality of orientations of the organ, with the respective tracker transform. In an embodiment, the system 100 is configured to extract a plurality of corresponding 2D MR images from the 3D MR images, by using the tracker transform and the initial rigid alignment. In an embodiment, the system 100 is configured to estimate a residue error associated with the initial rigid alignment for each of the acquired orientations of the plurality of 2D US images, by performing a 2D to 2D multi-modal image based registration. In an embodiment, the 2D in-plane error can be represented as a 3D error in the fixed tracker space. In an embodiment, the system 100 is configured to estimate a full 3D correction to be applied over the initial rigid alignment, by composing the in-plane residue errors.
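The disclosure states only that the in-plane residue errors, once represented in the fixed tracker space, are composed into a full 3D correction. One simple composition, valid for small residues, is to average the corrections in an axis-angle (log-map) parameterization; the NumPy sketch below is an illustrative assumption, not the patented scheme.

import numpy as np

def rot_log(R):
    # Axis-angle vector of a rotation matrix (small-angle regime assumed).
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def rot_exp(w):
    # Rotation matrix from an axis-angle vector (Rodrigues formula).
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def compose_full_correction(corrections_3d):
    # Average several 4x4 in-plane residue corrections (already mapped into
    # the fixed tracker space) into one full 3D correction transform.
    w_mean = np.mean([rot_log(C[:3, :3]) for C in corrections_3d], axis=0)
    t_mean = np.mean([C[:3, 3] for C in corrections_3d], axis=0)
    C = np.eye(4)
    C[:3, :3] = rot_exp(w_mean)
    C[:3, 3] = t_mean
    return C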
[0043] In an embodiment, the system 100 is configured to estimate a correction transform to refine an instantaneous error due to motion of at least one of the object and the external tracker. In an embodiment, the system 100 is configured to correct for instantaneous motion, by taking a weighted moving average of previous in-plane corrections.
[0044] In an embodiment, the anatomical aware initial rigid alignment is performed using at least one of an intra-operative imaging modality and a pre-operative imaging modality, with modality pairs such as, but not limited to, Ultrasound-Computed Tomography (US-CT), Ultrasound-Ultrasound (US-US), Ultrasound-Positron Emission Tomography (US-PET), Ultrasound-Magnetic Resonance (US-MR), and so on. In an embodiment, the plurality of 2D US images can be tracked using the external tracker with the tracker transform by mapping the 2D US image space onto the fixed tracker space. In an embodiment, the 3D MR image is replaced with a pseudo 3D US image derived from the 3D MR image using an artificial intelligence based image translation method. The artificial intelligence based image translation method may map the US-MR texture to a common space, using a deep learning method.
[0045] In an embodiment, the system 100 is configured to estimate an orientation of an anatomy alignment of each of the received plurality of 2D US images with the corresponding plane in a received 3D MR image of an organ segmentation of the object, based on pre-determined information corresponding to the anatomical plane in the 2D US image space and pre-determined co-ordinate information corresponding to an anatomy in the 3D MR image space. In an embodiment, the system 100 is configured to determine at least one optimal image plane in the received 3D MR image of organ segmentation that corresponds to the estimated orientation of the anatomy alignment in the received plurality of 2D US images, by performing a plurality of iterations of 2D to 3D image registration in a pre-determined narrowed search space. In an embodiment, the determination of the at least one optimal plane in the received 3D MR image of organ segmentation comprises anatomy constraint plane corrections of at least one of a rotational correction, a translational alignment correction, and an in-plane alignment correction.
[0046] In an embodiment, the system 100 is configured to obtain the initial rigid alignment of the determined optimal image plane in the received 3D MR image of the organ segmentation and the corresponding plurality of the 2D US images, by mapping the estimated orientation of the anatomy alignment of the plurality of 2D US images and the 3D MR image, and mapping a centroid of the organ segmentation in the image plane of the plurality of 2D US images and the 3D MR image. In an embodiment, the system 100 is configured to refine the obtained initial rigid alignment of the image plane of the 3D MR image and the corresponding plurality of the 2D US images, by determining at least one 3D correction from a sub-set of the in-plane alignment corrections associated with the 3D MR image and the plurality of 2D US images, over the plurality of 2D US images obtained from different orientations of image planes.
[0047] In an embodiment, the pre-determined anatomical plane on the object can be at least one of, but not limited to, a mid-axial plane, a mid-transverse plane, a mid-sagittal plane, a base plane, and so on. In an embodiment, the at least one optimal image plane in the received 3D MR image of the organ segmentation can be determined in a constrained 6 Degrees of Freedom (DOF) image space, based on pre-determined information of statistical variations in the anatomy of the organ segmentation. In an embodiment, the determination of the at least one optimal image plane in the received 3D MR image of organ segmentation comprises performing 2D to 3D multi-modality image registration using at least one of a gradient descent method and a line search optimization method. In an embodiment, the multi-modality image registration comprises a deep learning method to translate the 3D MR image onto pseudo 3D US images for organ segmentation on the plurality of 2D US images and the 3D MR image.
[0048] In an embodiment, the in-plane alignment correction is estimated using image based registration utilizing the organ segmentation on the 2D US image plane and the 3D MR image. In an embodiment, the in-plane alignment correction is mapped as a homologous 3D correction in world space using the change of basis rule. In an embodiment, the in-plane alignment correction further comprises utilizing motion information during the acquisition of the plurality of 2D images, to derive a true estimate for the anatomy constraint plane corrections. In an embodiment, the refinement of the initial rigid alignment comprises utilizing the in-plane alignment correction over the plurality of 2D US images acquired during at least one of a rotational sweep of the organ in the anatomical plane and a free hand acquisition on the anatomical plane. In an embodiment, the refinement of the rigid alignment comprises utilizing the in-plane alignment correction over a plurality of the 2D US images acquired at specific anatomical orientations. In an embodiment, the rigid alignment correction comprises estimating an instantaneous in-plane rigid correction to compensate for dynamic motion by taking a weighted moving average of past alignment corrections. Further, the image registration can be a standard image processing method, which can correct for misalignment between two different images. Embodiments herein disclose the image registration with the organ segmentation in both MR and US for estimating translation and rotation errors.
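A minimal sketch of the change of basis rule named above, assuming the in-plane correction is a 3x3 homogeneous 2D rigid transform and the pose of the image plane in world space is a known 4x4 transform (both names are illustrative):

import numpy as np

def inplane_to_world(correction_2d, plane_to_world):
    # Lift the 2D in-plane rigid correction to a homologous 3D correction.
    # correction_2d : 3x3 homogeneous 2D transform estimated on the US plane
    # plane_to_world: 4x4 basis transform placing the image plane in world space
    C_plane = np.eye(4)
    C_plane[:2, :2] = correction_2d[:2, :2]   # in-plane rotation block
    C_plane[:2, 3] = correction_2d[:2, 2]     # in-plane translation
    # Change of basis: the same motion expressed in world coordinates.
    return plane_to_world @ C_plane @ np.linalg.inv(plane_to_world)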
[0049] In an embodiment, the system 100 is configured to estimate localization data of the image plane in the z direction, associated with the 3D MR image of the organ segmentation, by retrieving anatomical position data of a pre-defined structure of the organ segmentation in the 3D MR image plane. In an embodiment, the system 100 is configured to estimate the translation alignment of the image plane in the x-y direction, associated with the 3D MR image of the organ segmentation, by retrieving the centroid of the organ segmentation in the image plane of the plurality of 2D US images and the 3D MR image, to estimate the optimal 3D MR image plane in the received 3D MR images of the organ segmentation. In an embodiment, the pre-defined structure of the organ segmentation in the 3D MR image plane is rigidly registered with the 3D MR image of the organ segmentation in the 3D MR image plane to facilitate mapping of the anatomical plane from the 2D US image plane onto the 3D MR image plane.
[0050] In an embodiment, the system 100 is configured to acquire a plurality of 2D US images in different orientations of the image plane on the pre-determined anatomical plane on the object. In an embodiment, the system 100 is configured to perform an organ segmentation on the 2D US image plane and re-slice the corresponding 3D MR image plane of the received 3D MR image of organ segmentation using the initial rigid alignment. In an embodiment, the system 100 is configured to perform an expedited in-plane registration of the 2D US image plane with the 3D MR image plane using the performed organ segmentation, to estimate image based errors. The image based errors quantify the difference between the pixel values of two images; one way to define such an error is as the sum of squared differences of intensities of corresponding pixels for the image pair. In an embodiment, the system 100 is configured to estimate the anatomy constraint plane corrections for the 2D US image plane based on the in-plane registration, by decreasing the plurality of motion errors during the acquisition of the plurality of 2D US images.
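For instance, the sum-of-squared-differences error mentioned above can be computed as in the following minimal NumPy sketch (cross-modality intensity normalization, which a practical US-MR registration would need, is omitted):

import numpy as np

def ssd_error(us_slice, mr_slice):
    # Sum of squared differences of corresponding pixel intensities;
    # both inputs are 2D arrays of identical shape.
    a = np.asarray(us_slice, dtype=np.float64)
    b = np.asarray(mr_slice, dtype=np.float64)
    return np.sum((a - b) ** 2)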
[0051] In an embodiment, the system 100 is configured to obtain an input corresponding to at least one of a dynamic refinement and a static refinement, to refine the initial rigid alignment of the image plane of the 3D MR image and the corresponding plurality of the 2D US images. In an embodiment, if the obtained input corresponds to the dynamic refinement, the system 100 is configured to apply the alignment correction over an instantaneous rigid alignment correction, which is provided to the subsequent iteration of the rigid alignment correction estimation in a closed loop pattern. In an embodiment, if the obtained input corresponds to the static refinement, the system 100 is configured to utilize the alignment corrections of the plurality of 2D US image planes to estimate the correction over the initial rigid alignment.
[0052] FIG. 1a illustrates functional components of the computer implemented system. In some cases, the component may be a hardware component, a software component, or a combination of hardware and software. Some of the components may be application level software, while other components may be operating system level components. In some cases, the connection of one component to another may be a close connection where two or more components are operating on a single hardware platform. In other cases, the connections may be made over network connections spanning long distances. Each embodiment may use different hardware, software, and interconnection architectures to achieve the functions described.
[0053] FIG. 1b illustrates a detailed view of a processing module as shown in FIG. 1a, comprising various modules, according to embodiments as disclosed herein.
[0054] In an embodiment, the electronic device 106 may comprise the processing module 106b stored in the memory unit 106a (as depicted in FIG. 1b). The processing module 106b may comprise a plurality of sub-modules. The plurality of sub-modules can comprise a registration module 202, a features establishing module 204, a 2D image reception module 206, a mapping module 208, an identification module 210, a translation component establishing module 212, a rotational component establishing module 214, and a refining module 216.
[0055] In an embodiment, the registration module 202 is configured to register an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space. In an embodiment, the features establishing module 204 is configured to establish a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes. In an embodiment, the 2D image reception module 206 is configured to receive a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object (using the ultrasound probe) in a tracker space. In an embodiment, the mapping module 208 is configured to map the received plurality of 2D US images to the respective anatomical planes in the anatomical 3D model. In an embodiment, the identification module 210 is configured to identify a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images. In an embodiment, the translation component establishing module 212 is configured to establish a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. In an embodiment, the rotational component establishing module 214 is configured to establish a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. In an embodiment, the refining module 216 is configured to refine the established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space.
[0056] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0057] FIG. 2 illustrates a schematic diagram for the rigid alignment in the fusion biopsy, according to embodiments as disclosed herein.
[0058] In an embodiment, the method includes estimating a coarse 3D US-MR rigid alignment in fusion biopsy based on establishing the anatomical plane relationship between the plurality of 2D US images, acquired from known anatomical orientations, and the 3D MR image, using the pre-defined 3D structure/model of the organ (such as the prostate). The US-MR 3D rigid alignment is executed, when performing a fusion biopsy of the organ, without using 3D US volumetric data or an organ segmentation on the 3D US imaging plane. The processing module 106b may capture the shape of the pre-defined structure/model, wherein the pre-defined structure/model can be pre-built from a set of 3D MR training data pre-loaded into the system 100. Further, a 3D segmentation of the organ of interest in the 3D MR data is computed using an automatic and/or semi-automatic method such as deep learning, a statistical shape model, and so on. Furthermore, the pre-defined structure/model of the organ is rigidly aligned with the organ segmentation using appropriate methods such as, but not limited to, an iterative closest point method, to estimate the organ model to 3D MR rigid transform.
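A minimal Python sketch of the iterative closest point idea named above, rigidly aligning model surface points to MR segmentation surface points with brute-force nearest neighbours and a Kabsch solve; this is an illustration of the technique, not the disclosure's exact procedure.

import numpy as np

def best_rigid(A, B):
    # Least-squares rigid transform mapping point set A onto B (Kabsch).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # reflection-safe rotation
    t = cb - R @ ca
    return R, t

def icp(model_pts, seg_pts, iters=30):
    # Align anatomical-model surface points (Nx3) to 3D MR organ-segmentation
    # surface points (Mx3); returns a 4x4 model-to-MR rigid transform.
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model_pts @ R.T + t
        # Nearest segmentation point for every model point (O(N*M) memory).
        d = np.linalg.norm(moved[:, None, :] - seg_pts[None, :, :], axis=2)
        matches = seg_pts[np.argmin(d, axis=1)]
        R, t = best_rigid(model_pts, matches)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T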
[0059] In an example, during the real-time biopsy, the user can acquire the plurality of 2D US images from the pre-determined anatomical orientation or plane (such as the mid-axial, the mid-sagittal, and so on). Further, rotational components of the 2D US image plane to organ structure/organ model transform may be estimated by mapping a given anatomical plane of the acquired plurality of 2D US images with one of the previously marked anatomical planes on the 3D organ structure/model using methods such as, but not limited to, a plane lock method. Further, translational components or offset components of the plurality of 2D US images to organ structure/model transform can be estimated by aligning the centroid of the contour of the 3D organ model over the anatomical plane with the center of the field of view on the acquired 2D US images, as sketched below. Accordingly, the coarse 3D US-MR rigid alignment can be established by composing the plurality of 2D US images to organ structure/model transform with the organ structure/model to 3D MR image transform.
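The centroid-to-field-of-view alignment of the translational component can be illustrated as follows; the sketch assumes both inputs are 2D points expressed in the matched anatomical plane, in millimetres.

import numpy as np

def inplane_translation(model_contour_mm, fov_center_mm):
    # Offset that shifts the centroid of the model contour on the matched
    # anatomical plane onto the centre of the 2D US field of view.
    centroid = np.mean(np.asarray(model_contour_mm, dtype=np.float64), axis=0)
    return np.asarray(fov_center_mm, dtype=np.float64) - centroid

# Example: contour centred at (10, 5) mm, FOV centre at (30, 30) mm.
print(inplane_translation([[8, 4], [12, 6]], [30, 30]))  # -> [20. 25.]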
[0060] FIG. 3a is a flow diagram depicting a method for estimating a coarse 3D rigid alignment based on matching anatomically similar planes on US image plane and a MR image plane, according to embodiments as disclosed herein.
[0061] In an embodiment, based on the pre-selected/pre-determined anatomical plane in the 2D US image space and the Digital Imaging and Communications in Medicine (DICOM) anatomy co-ordinate information from the 3D MR image space, an anatomy aligned orientation is estimated. Further, the anatomy aware transform locator may estimate the optimal 3D MR image plane using the anatomical position of the organ structure/model segmentation in the 3D MR image plane as a rough estimate for the 3D MR image plane localization in the z direction. Further, the optimal 3D MR image plane is estimated using the centroid of the organ segmentation on the 2D US image plane and the corresponding 3D MR image plane to estimate the x-y translation alignment. Accordingly, the best MR plane is estimated by running iterations of the 2D-3D registration in the narrow search space identified from the pre-defined anatomy. The final rigid alignment may be obtained by matching the corresponding orientations and organ centroids in the 2D US image plane and the 3D MR image plane.
[0062] In an embodiment, a 2D segmentation of the organ of interest on the 2D US image can be performed using a segmentation approach such as, but not limited to, deep learning and so on. Further, the centroids of the organ segmentations may be mapped between the 2D US image plane and the 2D MR image plane to estimate the translational components or the offset components for the rigid alignment. Further, the anatomy based coarse 3D US-MR rigid alignment can be further refined by optimally matching the 2D US image plane onto the 3D MR image plane using a 2D-3D registration method such as, but not limited to, a gradient descent method, and so on. Also, the optimal MR image plane is determined by running iterations of 2D-3D registration in a narrow search space of parameters; the parameter ranges are defined from pre-determined information and depend on anatomical variations.
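As an illustration of the narrow search, the sketch below re-slices the MR volume with nearest-neighbour sampling and line-searches a single constrained degree of freedom (the offset along the plane normal), scoring candidates by sum of squared differences against the 2D US segmentation. The disclosure's constrained 6-DOF search with gradient descent or line search generalizes this idea; all names and the single-DOF restriction are assumptions.

import numpy as np

def reslice(volume, plane_to_vol, size):
    # Nearest-neighbour re-slice of a (z, y, x) volume along the plane given
    # by a 4x4 plane-to-volume transform (voxel units, unit spacing assumed).
    h, w = size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([u, v, np.zeros_like(u), np.ones_like(u)], axis=-1)
    vox = (pts @ plane_to_vol.T)[..., :3]                 # (x, y, z) coords
    idx = np.clip(np.round(vox).astype(int), 0,
                  np.array(volume.shape)[::-1] - 1)
    return volume[idx[..., 2], idx[..., 1], idx[..., 0]]

def best_plane(volume, us_seg, base_T, z_range=range(-5, 6)):
    # Keep the small normal offset whose re-sliced plane best matches the
    # 2D US segmentation mask in the SSD sense.
    best_T, best_err = base_T, np.inf
    for dz in z_range:
        T = base_T.copy()
        T[:3, 3] += dz * T[:3, 2]          # shift along the plane normal
        err = np.sum((reslice(volume, T, us_seg.shape) - us_seg) ** 2)
        if err < best_err:
            best_T, best_err = T, err
    return best_T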
[0063] FIG. 3b is a flow diagram depicting a method for refining an initial rigid alignment using in-plane alignment errors over plurality of planes, according to embodiments as disclosed herein.
[0064] In an embodiment, the US-MR rigid alignment may be refined to eliminate residue errors in the 3D US-MR rigid alignment by constructing a 3D correction from a set of in-plane corrections over a set of 2D US images acquired from a plurality of orientations. Accordingly, a user may acquire a set of 2D US images of the organ of interest. The acquired images can be a partial or a full sweep of the organ in a rotational and/or translational manner, which can then be processed in parallel. Further, for each of the 2D US images from the set of 2D US images, a corresponding 2D MR image may be derived by re-slicing the 3D MR image using the current 3D US-MR rigid alignment and the tracking transform. Further, a rigid in-plane correction may be computed by registering the 2D US image plane with the 2D MR image plane using at least one registration method such as, but not limited to, the gradient descent method. The in-plane rigid correction can be mapped to a homologous 3D correction in the 3D image space using an appropriate change of basis transform.
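The per-frame wiring just described can be summarized as follows. Here register_2d and reslice_2d are hypothetical callables standing in for a 2D rigid registration routine and an MR re-slicing routine, and the closing lines repeat the change of basis lift described in paragraph [0048]; the transform conventions are assumptions.

import numpy as np

def refine_frame(us_frame, T_track, R_current, register_2d, reslice_2d):
    # One refinement step for a single tracked US frame:
    # T_track  : 4x4 probe tracking transform for this frame
    # R_current: 4x4 current 3D US-MR rigid alignment
    plane_pose = R_current @ T_track          # US plane expressed in MR space
    mr_slice = reslice_2d(plane_pose)         # corresponding 2D MR image
    c2d = register_2d(us_frame, mr_slice)     # 3x3 in-plane rigid correction
    # Change of basis: lift the 2D correction to a homologous 3D correction.
    C = np.eye(4)
    C[:2, :2] = c2d[:2, :2]
    C[:2, 3] = c2d[:2, 2]
    return plane_pose @ C @ np.linalg.inv(plane_pose)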
[0065] The initial rigid alignment may be refined using in-plane alignment errors over the plurality of planes acquired from the 2D US image probe 102. For example, the user may request progressive refinement of the initial rigid alignment and may acquire 2D US images from a plurality of plane orientations on the pre-determined/selected anatomical plane. Accordingly, for each 2D US image plane, the alignment error may be estimated by performing the organ segmentation on the given 2D US image plane and re-slicing the corresponding 3D MR image plane of the 3D MR organ segmentation using the initial rigid alignment. Further, a quick in-plane registration of the 2D US image plane with the 2D MR image plane, using the organ segmentation as a feature, can be used to estimate the image based errors. Further, the alignment error may be estimated for a true alignment correction of the given 2D US image plane by decreasing the motion occurring during the acquisition of the 2D US images from the 2D US image probe 102.
[0066] As depicted in FIG. 3b, 3D MR is the 3D MR image, T_3D^t is the tracking transform of the image plane, M_3D^(t-1) and M_3D^t are the past and current estimates of the tracker correction respectively, R_3D^t and R_3D^(t+1) are the current and updated rigid alignments respectively, and A_3D^t is the in-plane correction/alignment corrected image plane.
[0067] FIG. 4 illustrates a block diagram to perform instantaneous motion correction by real-time in-plane rigid correction for dynamic misalignment due to organ motion, according to embodiments as disclosed herein.
[0068] In an embodiment, the in-plane rigid alignment correction may be performed in real time, for preventing dynamic misalignment, which may occur due to the motion of the organ. Further, the in-plane rigid alignment correction may be performed during the biopsy to correct for static misalignment, which can occur due to patient motion or drift in the tracking transformation. Accordingly, the fixed or final rigid alignment is received for the instantaneous motion correction. Further, a moving average equation (i.e. C_3D^t) may be derived for the motion correction. A composition of past alignments (i.e. A_3D^t) may be considered for the instantaneous motion correction. Further, the final rigid alignment (i.e. R_3D^t) may not be modified and only the alignments (i.e. A_3D^t) may be updated for instantaneous motion correction.
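One possible form of the moving average C_3D^t is an exponentially weighted blend of the incoming in-plane corrections A_3D^t; the linear matrix blend with re-orthonormalization below is a small-motion simplification assumed for illustration, not the disclosure's exact equation.

import numpy as np

def ema_correction(prev_C, new_A, alpha=0.3):
    # Exponentially weighted moving average of 4x4 in-plane corrections,
    # used as the instantaneous motion correction C_3D^t. The fixed rigid
    # alignment R_3D^t is left untouched; only C is updated.
    C = (1.0 - alpha) * prev_C + alpha * new_A
    # Re-orthonormalise the rotation block after the linear blend.
    U, _, Vt = np.linalg.svd(C[:3, :3])
    C[:3, :3] = U @ Vt
    return C

# Example: blend an identity history with a new 1 mm in-plane shift.
A = np.eye(4)
A[0, 3] = 1.0
print(ema_correction(np.eye(4), A))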
[0069] FIG. 5 illustrates a schematic diagram for multi-modality registration using plurality of 2D US images and a pseudo 3D US image derived from the 3D MR image, according to embodiments as disclosed herein.
[0070] In an embodiment, a multi-modality 2D-3D registration may be performed using the 2D US images and a pseudo 3D US image reconstructed from the 3D MR image using the deep learning method. The deep learning method may be used to translate the 3D MR image to the pseudo 3D US image for multi-modality registration or for segmentation of the organ on the 2D US image. The 3D MR image may be utilized for feature based registration.
[0071] FIG. 6a is a flow diagram for static alignment correction of the US-MR rigid alignment, according to embodiments as disclosed herein.
[0072] The rigid alignment may be corrected using the static refinement method. The alignment corrections from a plurality of 2D US image planes may be utilized to estimate the correction over the rigid alignment. Further, the initial rigid alignment (i.e. R_3D^t) may be modified/updated (i.e. R_3D^(t+1)), and a composition of past alignments (i.e. A_3D^t, A_3D^(t-1), ..., A_3D^0) may be considered.
[0073] FIG. 6b is a flow diagram for dynamic progressive alignment correction of the US-MR rigid alignment, according to embodiments as disclosed herein.
[0074] The rigid alignment may be corrected using the dynamic refinement method. The rigid alignment correction may be applied over the instantaneous rigid correction, which is provided to the next iteration of alignment correction estimation in a closed loop pattern. The alignment corrections from only one 2D US image plane at a time may be utilized to estimate the correction over the rigid alignment. Further, the initial rigid alignment (i.e. R_3D^t) may be modified/updated (i.e. R_3D^(t+1)), and a composition of the current alignment (i.e. A_3D^t) may be considered.
[0075] FIG. 7a is a flow chart depicting a method 700a for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, according to embodiments as disclosed herein.
[0076] At step 702, the method 700a includes registering an anatomical 3D model, marked with the plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space. At step 704, the method 700a includes establishing a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes. At step 706, the method 700a includes receiving a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object (using the ultrasound probe) in a tracker space. At step 708, the method 700a includes mapping the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model. At step 710, the method 700a includes identifying a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images. At step 712, the method 700a includes establishing a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. At step 714, the method 700a includes establishing a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model. At step 716, the method 700a includes refining the established initial rigid alignment by using a multi-modal registration between the plurality of features from the 2D US images and the corresponding anatomical planes in the 3D MR image space. The overall flow is summarized in the sketch below.
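The flow of method 700a can be wired together as in the following driver; every entry of the steps dictionary is a hypothetical placeholder for the corresponding operation, since the disclosure does not define these function names.

def initial_rigid_alignment(model, mr_seg, us_frames, steps):
    # Steps 702-716 of method 700a, expressed through placeholder callables.
    model_to_mr = steps["register_model"](model, mr_seg)              # step 702
    plane_feats = steps["project_features"](mr_seg, model)            # step 704
    planes = [steps["map_to_plane"](f, model) for f in us_frames]     # steps 706-708
    us_feats = [steps["identify_features"](f) for f in us_frames]     # step 710
    t = steps["match_translation"](us_feats, plane_feats)             # step 712
    r = steps["match_rotation"](us_feats, plane_feats)                # step 714
    return steps["refine"](model_to_mr, r, t, us_feats, plane_feats, planes)  # step 716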
[0077] The various actions in method 700a may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 7a may be omitted.
[0078] FIG. 7b is a flow chart depicting a method 700b for estimating a full 3D correction to be applied over initial rigid alignment, by composing the in-plane residue errors, according to embodiments as disclosed herein.
[0079] At step 722, the method 700b includes acquiring the plurality of 2D US images from a plurality of orientations of the organ, with the respective tracker transform. At step 724, the method 700b includes extracting a plurality of corresponding 2D MR images from the 3D MR images, by using the tracker transform and the initial rigid alignment. At step 726, the method 700b includes estimating a residue error associated with the initial rigid alignment for each of the acquired orientations of the plurality of 2D US images, by performing a 2D to 2D multi-modal image based registration, wherein the 2D in-plane errors are represented as 3D errors in the fixed tracker space. At step 728, the method 700b includes estimating a full 3D correction to be applied over the initial rigid alignment, by composing the in-plane residue errors.
[0080] The various actions in method 700b may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 7b may be omitted.
[0081] FIG. 7c is a flow chart depicting a method 700c for correcting instantaneous motion, by taking a weighted moving average of past in-plane corrections, according to embodiments as disclosed herein.
[0082] At step 732, the method 700c includes estimating a correction transform to refine an instantaneous error due to motion of at least one of the object and the external tracker. At step 734, the method 700c includes correcting for instantaneous motion, by taking a weighted moving average of past in-plane corrections.
[0083] The various actions in method 700c may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 7c may be omitted.
[0084] Embodiments herein disclose methods and systems for estimating the 3D US-MR rigid alignment through initialization of the rigid alignment using anatomy co-ordinate information. Embodiments herein disclose methods and systems for progressive refinement of the rigid alignment, by composing a set of in-plane US-MR alignment corrections. Embodiments herein may not use the 3D US volumetric data, which reduces the cost, time and complexity of the fusion procedure. Embodiments herein disclose methods and systems with an appropriate tracking mechanism (such as electro-magnetic tracking) to co-align both the 2D US image space and the 3D MR image space into a common world co-ordinate frame. Embodiments disclosed herein may require only a limited set of 2D US images from selected planes. Embodiments disclosed herein may be used at any time during the biopsy to correct the rigid alignment for patient motion and/or tracking error, as a 3D US image is not required. Further, embodiments herein may be extended to perform an in-plane motion correction in real-time.
[0085] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in FIG. 1a, 1b and FIG. 2 can be at least one of a hardware device, or a combination of hardware device and software module.
[0086] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
STATEMENT OF CLAIMS
We claim:
1. A method (700a) for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the method (700a) comprising:
registering, by a processor (106f), an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space;
establishing, by the processor (106f), a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes;
receiving, by the processor (106f), a plurality of 2D US images corresponding to pre-determined anatomical planes of an organ associated with an object in a tracker space;
mapping, by the processor (106f), the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model;
identifying, by the processor (106f), a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images;
establishing, by the processor (106f), a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model;
establishing, by the processor (106f), a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; and
refining, by the processor (106f), the established initial rigid alignment by using a multi-modal registration between the plurality of features from 2D US images and the corresponding anatomical planes in the 3D MR image space.
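A minimal numerical sketch of the translation and rotational components of claim 1, assuming matched 3D feature sets are already available; the Kabsch/SVD solver used here is a standard technique for rigid point-set alignment and is an assumed choice, not necessarily the claimed implementation:

import numpy as np

def initial_rigid_alignment(us_feats, mr_feats):
    # us_feats, mr_feats: (N, 3) arrays of corresponding feature points, e.g.
    # organ-segmentation features lifted into tracker space and the respective
    # features on the marked anatomical planes of the anatomical 3D model.
    c_us, c_mr = us_feats.mean(axis=0), mr_feats.mean(axis=0)
    # Rotational component: SVD of the cross-covariance of the centred points.
    H = (us_feats - c_us).T @ (mr_feats - c_mr)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation component: align the feature centroids under the rotation.
    t = c_mr - R @ c_us
    return R, t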
2. The method (700a) as claimed in claim 1, wherein, for refining the established initial rigid alignment, the method (700b) further comprises:
acquiring, by the processor (106f), the plurality of 2D US images from a plurality of orientations of the organ, with respective tracker transform;
extracting, by the processor (106f), a plurality of corresponding 2D MR images from the 3D MR images, by using the tracker transform and the initial rigid alignment;
estimating, by the processor (106f), a residue error associated with the initial rigid alignment for each of the acquired orientations of the plurality of 2D US images, by performing a 2D to 2D multi-modal image based registration, wherein the 2D in-plane errors are represented as 3D errors in the fixed tracker space; and
estimating, by the processor (106f), a full 3D correction to be applied over the initial rigid alignment, by composing the in-plane residue errors.
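One way to realize the composition in claim 2, sketched under the assumptions that each in-plane residue is a 2D rigid transform (theta, tx, ty) and that the pose T_plane of each acquired US plane in the fixed tracker space is known from the tracker transform:

import numpy as np

def inplane_to_3d(theta, tx, ty, T_plane):
    # Express a 2D in-plane residue (rotation theta about the plane normal,
    # translation (tx, ty) within the plane) as a 4x4 error in tracker space.
    E = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    E[:2, :2] = [[c, -s], [s, c]]
    E[0, 3], E[1, 3] = tx, ty
    return T_plane @ E @ np.linalg.inv(T_plane)   # conjugate into tracker space

def compose_correction(errors):
    # Compose the per-orientation 3D errors into one full 3D correction.
    C = np.eye(4)
    for E in errors:
        C = E @ C
    return C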
3. The method (700a) as claimed in claim 1, wherein, for refining the established initial rigid alignment, the method (700c) further comprises:
estimating, by the processor (106f), a correction transform to refine an instantaneous error due to motion of at least one of the object and the external tracker; and
applying, by the processor (106f), the instantaneous motion correction, by taking a weighted moving average of past in-plane corrections.
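A small sketch of the weighted moving average in claim 3; the exponential weighting alpha and the (theta, tx, ty) parameterization of an in-plane correction are assumptions made for illustration:

import numpy as np

class MotionSmoother:
    # Weighted moving average over past in-plane corrections (theta, tx, ty).
    def __init__(self, alpha=0.3):
        self.alpha = alpha     # assumed weighting of the newest correction
        self.state = None

    def update(self, correction):
        c = np.asarray(correction, dtype=float)
        if self.state is None:
            self.state = c
        else:
            self.state = self.alpha * c + (1.0 - self.alpha) * self.state
        return self.state      # smoothed instantaneous motion correction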
4. The method (700a) as claimed in claim 1, wherein the anatomical aware initial rigid alignment is performed using at least one of an intra-operative imaging modality and a pre-operative imaging modality, by at least one of Ultrasound-Computed Tomography (US-CT), Ultrasound-Ultrasound (US-US), Ultrasound-Positron Emission Tomography (US-PET), and Ultrasound-Magnetic Resonance (US-MR).
5. The method (700a) as claimed in claim 1, wherein the plurality of 2D US images is tracked using the external tracker with the tracker transform, by mapping the 2D US image space onto the fixed tracker space.
6. The method (700a) as claimed in claim 1, wherein the 3D MR image is replaced with a pseudo 3D US image generated from the 3D MR image using an artificial intelligence-based image translation method.
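The image translation of claim 6 could, for instance, be realized with a learned slice-wise translation network; the toy PyTorch architecture below is purely an assumed illustration (a pix2pix-style model is one common choice) and is not the method used in this disclosure:

import torch
import torch.nn as nn

class MRtoPseudoUS(nn.Module):
    # Toy translation network mapping a 2D MR slice (1-channel tensor) to a
    # pseudo-US slice; applied slice-wise to synthesize a pseudo 3D US volume.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mr_slice):
        return self.net(mr_slice)

model = MRtoPseudoUS()
pseudo_us = model(torch.randn(1, 1, 256, 256))  # one translated slice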
7. The method (700a) as claimed in claim 1, wherein the plurality of pre-determined anatomical planes of the organ associated with the object in the tracker space can be at least one of, a mid-axial plane, a mid-transverse plane, a mid-sagittal plane, and a base plane.
8. The method (700a) as claimed in claim 1, wherein the refinement of the initial rigid alignment comprises utilizing an in-plane alignment correction over the plurality of 2D US images acquired during at least one of a rotational sweep of the organ in the anatomical plane and a free hand acquisition on the anatomical plane, and wherein the refinement of the initial rigid alignment comprises utilizing the in-plane alignment correction over the plurality of 2D US images acquired at specific anatomical orientations.
9. The method (700a) as claimed in claim 1, wherein the initial rigid alignment correction comprises estimating an instantaneous in-plane rigid correction to compensate for dynamic motion, by taking a weighted moving average of past alignment corrections.
10. The method (700b) as claimed in claim 2, wherein the composing of the in-plane residue errors is dynamically applied, by estimating the in-plane correction for a current orientation of the 2D US image and applying it over the initial rigid alignment prior to acquisition of a subsequent orientation of the 2D US image.
11. The method (700b) as claimed in claim 2, wherein the composing of the in-plane residue errors is statically applied, by first estimating a plurality of the in-plane corrections from a plurality of the orientations, after which a 3D correction is estimated and applied over the initial rigid alignment.
12. A method (700d) for anatomical aware initial rigid alignment of an intra-operative three Dimensional (3D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the method (700d) comprising:
acquiring, by a processor (106f), a plurality of 2D US images by a tracked image probe with tracking information, and a pre-operative 3D organ segmentation on the 3D MR image with anatomical co-ordinate information;
estimating, by the processor (106f), a coarse rigid alignment by dynamically matching a 2D US image plane taken from a known anatomical orientation with a corresponding plane in the 3D MR image space, by utilizing prior anatomical meta-information and an optimal plane search in a constrained 6 Degree of Freedom (DOF) image space; and
refining, by the processor (106f), the estimated coarse rigid alignment, by deriving a 3D correction from a sub-set of in-plane US-MR alignment corrections over a plurality of 2D US images taken from different orientations of the tracked image probe.
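The constrained optimal plane search of claim 12 can be pictured as scoring candidate MR planes against the 2D US image with a multi-modal similarity such as mutual information. In the sketch below, sample_mr_plane and the offsets grid are hypothetical helpers standing in for the MR plane resampler and the anatomically constrained 6-DOF parameter set:

import numpy as np

def mutual_information(a, b, bins=32):
    # Mutual information between two 2D images, a standard multi-modal score.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

def coarse_plane_search(us_img, sample_mr_plane, offsets):
    # Exhaustively score candidate pose parameters within the constrained
    # search range implied by the prior anatomical meta-information.
    best, best_mi = None, -np.inf
    for params in offsets:
        mr = sample_mr_plane(params)       # hypothetical MR plane resampler
        mi = mutual_information(us_img, mr)
        if mi > best_mi:
            best, best_mi = params, mi
    return best, best_mi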
13. A system (100) for anatomical aware initial rigid alignment of an intra-operative two Dimensional (2D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the system (100) comprising:
a 2D US image probe (102) communicatively coupled to an electronic device (106);
a 3D Magnetic Resonance imaging device (MRI) (104) communicatively coupled to the electronic device (106);
a processor (106f) of the electronic device (106); and
a memory unit (106a) coupled to the processor (106f), wherein the memory unit (106a) comprises a processing module (106b) configured to:
register an anatomical 3D model, marked with a plurality of pre-determined anatomical planes, with an organ segmentation in a 3D MR image space;
establish a set of features for the marked plurality of pre-determined anatomical planes, by projecting the organ segmentation in the 3D MR image space onto the marked plurality of pre-determined anatomical planes;
receive a plurality of 2D US images corresponding to a plurality of pre-determined anatomical planes of an organ associated with an object in a tracker space;
map the received plurality of 2D US images to a respective plurality of anatomical planes in the anatomical 3D model;
identify a plurality of features from at least one of an organ segmentation of the received plurality of 2D US images and the received plurality of 2D US images;
establish a translation component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model;
establish a rotational component of the initial rigid alignment of the intra-operative 3D US-tracker space with the pre-operative 3D MR image space, by matching the identified plurality of features on at least one of the organ segmentation and the plurality of 2D US images, with a respective set of features in the mapped plurality of anatomical planes in the anatomical 3D model; and
refine the established initial rigid alignment by using a multi-modal registration between the plurality of features from 2D US images and the corresponding anatomical planes in the 3D MR image space.
14. The system (100) as claimed in claim 13, wherein, for refining the established initial rigid alignment, the processing module (106b) is further configured to:
acquire the plurality of 2D US images from a plurality of orientations of the organ, with respective tracker transform;
extract a plurality of corresponding 2D MR images from the 3D MR images, by using the tracker transform and the initial rigid alignment;
estimate a residue error associated with the initial rigid alignment for each of the acquired orientations of the plurality of 2D US images, by performing a 2D to 2D multi-modal image based registration, wherein the 2D in-plane errors are represented as 3D errors in the fixed tracker space; and
estimate a full 3D correction to be applied over the initial rigid alignment, by composing the in-plane residue errors.
15. The system (100) as claimed in claim 13, wherein, for refining the established initial rigid alignment, the processing module (106b) is further configured to:
estimate a correction transform to refine an instantaneous error due to motion of at least one of the object and the external tracker; and
apply the instantaneous motion correction, by taking a weighted moving average of past in-plane corrections.
16. The system (100) as claimed in claim 13, wherein the anatomical aware initial rigid alignment is performed using at least one of an intra-operative imaging modality and a pre-operative imaging modality, by at least one of Ultrasound-Computed Tomography (US-CT), Ultrasound-Ultrasound (US-US), Ultrasound-Positron Emission Tomography (US-PET), and Ultrasound-Magnetic Resonance (US-MR).
17. The system (100) as claimed in claim 13, wherein the plurality of 2D US images is tracked using the external tracker with the tracker transform, by mapping the 2D US image space onto the fixed tracker space.
18. The system (100) as claimed in claim 13, wherein the 3D MR image is replaced with a pseudo 3D US image generated from the 3D MR image using an artificial intelligence-based image translation method.
19. The system (100) as claimed in claim 13, wherein the plurality of pre-determined anatomical planes of the organ associated with the object in the tracker space can be at least one of, a mid-axial plane, a mid-transverse plane, a mid-sagittal plane, and a base plane.
20. The system (100) as claimed in claim 13, wherein the refinement of the initial rigid alignment comprises utilizing an in-plane alignment correction over the plurality of 2D US images acquired during at least one of a rotational sweep of the organ in the anatomical plane and a free hand acquisition on the anatomical plane, and wherein the refinement of the initial rigid alignment comprises utilizing the in-plane alignment correction over the plurality of 2D US images acquired at specific anatomical orientations.
21. The system (100) as claimed in claim 13, wherein the initial rigid alignment correction comprises estimating an instantaneous in-plane rigid correction to compensate for dynamic motion, by taking a weighted moving average of past alignment corrections.
22. The system (100) as claimed in claim 14, wherein the composing of the in-plane residue errors is dynamically applied, by estimating the in-plane correction for a current orientation of the 2D US image and applying it over the initial rigid alignment prior to acquisition of a subsequent orientation of the 2D US image.
23. The system (100) as claimed in claim 14, wherein the composing of the in-plane residue errors is statically applied, by first estimating a plurality of the in-plane corrections from a plurality of the orientations, after which a 3D correction is estimated and applied over the initial rigid alignment.
24. A system (100) for anatomical aware initial rigid alignment of an intra-operative three Dimensional (3D) ultrasound (US) tracking space with a pre-operative 3D Magnetic Resonance (MR) image space, the system (100) comprising:
a 2D US image probe (102) communicatively coupled to an electronic device (106);
a 3D Magnetic Resonance imaging device (MRI) (104) communicatively coupled to the electronic device (106);
a processor (106f) of the electronic device (106); and
a memory unit (106a) coupled to the processor (106f), wherein the memory unit (106a) comprises a processing module (106b) configured to:
acquire a plurality of 2D US images by a tracked image probe with tracking information, and a pre-operative 3D organ segmentation on the 3D MR image with anatomical co-ordinate information;
estimate a coarse rigid alignment by dynamically matching a 2D US image plane taken from a known anatomical orientation with a corresponding plane in the 3D MR image space, by utilizing prior anatomical meta-information and an optimal plane search in a constrained 6 Degree of Freedom (DOF) image space; and
refine the estimated coarse rigid alignment, by deriving a 3D correction from a sub-set of in-plane US-MR alignment corrections over a plurality of 2D US images taken from different orientations of the tracked image probe.

Documents

Application Documents

# Name Date
1 201941022449-STATEMENT OF UNDERTAKING (FORM 3) [06-06-2019(online)].pdf 2019-06-06
2 201941022449-PROVISIONAL SPECIFICATION [06-06-2019(online)].pdf 2019-06-06
3 201941022449-POWER OF AUTHORITY [06-06-2019(online)].pdf 2019-06-06
4 201941022449-FORM 1 [06-06-2019(online)].pdf 2019-06-06
5 201941022449-DRAWINGS [06-06-2019(online)].pdf 2019-06-06
6 201941022449-DECLARATION OF INVENTORSHIP (FORM 5) [06-06-2019(online)].pdf 2019-06-06
7 201941022449-Proof of Right (MANDATORY) [08-07-2019(online)].pdf 2019-07-08
8 Correspondence by Agent _Form 1_10-07-2019.pdf 2019-07-10
9 201941022449-FORM 18 [20-01-2020(online)].pdf 2020-01-20
10 201941022449-DRAWING [20-01-2020(online)].pdf 2020-01-20
11 201941022449-CORRESPONDENCE-OTHERS [20-01-2020(online)].pdf 2020-01-20
12 201941022449-COMPLETE SPECIFICATION [20-01-2020(online)].pdf 2020-01-20
13 201941022449-Request Letter-Correspondence [20-05-2020(online)].pdf 2020-05-20
14 201941022449-REQUEST FOR CERTIFIED COPY [20-05-2020(online)].pdf 2020-05-20
15 201941022449-Power of Attorney [20-05-2020(online)].pdf 2020-05-20
16 201941022449-Form 1 (Submitted on date of filing) [20-05-2020(online)].pdf 2020-05-20
17 201941022449-CERTIFIED COPIES TRANSMISSION TO IB [20-05-2020(online)].pdf 2020-05-20
18 201941022449-FER.pdf 2021-10-27
19 201941022449-OTHERS [27-04-2022(online)].pdf 2022-04-27
20 201941022449-FER_SER_REPLY [27-04-2022(online)].pdf 2022-04-27
21 201941022449-CORRESPONDENCE [27-04-2022(online)].pdf 2022-04-27
22 201941022449-CLAIMS [27-04-2022(online)].pdf 2022-04-27
23 201941022449-ABSTRACT [27-04-2022(online)].pdf 2022-04-27
24 201941022449-US(14)-HearingNotice-(HearingDate-26-11-2025).pdf 2025-11-03
25 201941022449-Correspondence to notify the Controller [21-11-2025(online)].pdf 2025-11-21
26 201941022449-Annexure [21-11-2025(online)].pdf 2025-11-21
27 201941022449-FORM-26 [24-11-2025(online)].pdf 2025-11-24

Search Strategy

1 SearchHistoryE_07-09-2021.pdf