
Method And System For Combining A Plurality Of Subimages

Abstract: A method for combining a plurality of sub-images representing different portions of a region of interest is presented. The method includes obtaining labels corresponding to one or more features in the plurality of sub-images. Further, the method includes determining one or more overlap regions between adjacent sub-images in the plurality of sub-images based on the labels of the one or more features. Moreover, the method includes aligning the adjacent sub-images based on the determined one or more overlap regions. The method further includes combining the plurality of sub-images to form a continuous image based on the aligned adjacent sub-images. Fig. 3


Patent Information

Application #
Filing Date
31 December 2012
Publication Number
44/2015
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

GENERAL ELECTRIC COMPANY
1 RIVER ROAD, SCHENECTADY, NEW YORK 12345

Inventors

1. PATEL, HIMA MUKESH
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
2. VAIDYA, VIVEK PRABHAKAR
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
3. SUNDARARAJAN, RAMASUBRAMANIAN GANGAIKONDAN
1509 HUTTON STREET, APT# 1, TROY, NY - 12180
4. ANNAMRAJU, RAVI BHARADWAJ
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
5. VENKATESAN, RAMESH
1509 HUTTON STREET, APT# 1, TROY, NY - 12180
6. PATIL, MERU ADAGOUDA
F3, ANUGRAHA LOTUS, #16/7, 2ND MAIN ROAD, KACHARAKANHALLI, ST. THOMAS TOWN POST, BANGALORE - 560 084
7. SKINNER, JOHN VERNON
3200 N GRANDVIEW BLVD, WTE POLE LL2, WAUKESHA, WI 53188-1678

Specification

METHOD AND SYSTEM FOR COMBINING A PLURALITY OF SUB-IMAGES

BACKGROUND

[0001] Embodiments of the present disclosure are related to image processing, and more particularly to methods and systems for combining a plurality of sub-images to form a single, continuous image.

[0002] In imaging applications, it may be desirable to image an object of interest that is larger than a field of view (FOV) of an imaging device. For instance, in photography, users may wish to capture a 180° panoramic view in a single image. One technique for capturing such a panoramic view includes zooming out to fit the entire scene in the FOV of the imaging device. Unfortunately, such shrinking of the scene may lead to deterioration in quality of the image.

[0003] Also, currently available imaging systems use image detectors that have a FOV of about 40 cm x 40 cm. This limited FOV restricts the size of an object of interest that may be captured by the image detector. By way of example, imaging an object of interest such as a vertebral column or any vasculature that is larger than the FOV of the image detector may be a challenging task. In these cases, shrinking the images to fit the object of interest in the FOV of the image detector may result in loss of finer details in the image, thereby reducing the efficacy of diagnosis.

[0004] Certain currently available techniques capture several sub-images of different regions of the object of interest and combine these sub-images to form a single, continuous image of the object of interest. For example, if a single continuous image of a vertebral column having a length of about 90 cm is desired, a plurality of separate sub-images of the vertebral column, each depicting about 40 cm of the vertebral column, may be captured. Subsequently, these sub-images may be combined to form the single, continuous image of the complete vertebral column. Similarly, to view constrictions or clots in the vasculature, a plurality of sub-images corresponding to different sections of the vasculature may be captured and combined to form the single, continuous image.

[0005] Moreover, numerous techniques are available for combining the sub-images into the single, continuous image. One such technique determines overlap regions between the sub-images and combines the sub-images at these overlap regions to obtain a single, continuous image. However, the actual overlap regions between the sub-images may often differ from the overlap region selected by the combining technique. Accordingly, combining the sub-images using this technique may result in multiple artifacts in the continuous image. In another technique, the overlap region is determined based on geometric or anatomical structures present in the sub-images. In this technique, a correlation between anatomically similar regions in the overlap region is used to combine the sub-images. In yet another technique, intensity variations in the sub-images are used to determine the overlap region. However, in this technique, the overlap region may be very large (almost 25% of a sub-image), and determining the correlation corresponding to this large overlap region may be computationally intensive and time consuming. In addition, the currently available techniques may introduce artifacts in the continuous image, ranging from double representation or deletion of a portion of the overlap region to blurring of the combined image.

BRIEF DESCRIPTION

[0006] In accordance with aspects of the present disclosure, a method for combining a plurality of sub-images is presented. The method includes obtaining labels corresponding to one or more features in a plurality of sub-images. Moreover, the method includes determining one or more overlap regions between adjacent sub-images in the plurality of sub-images based on the labels of the one or more features. In addition, the method includes aligning the adjacent sub-images based on the determined one or more overlap regions. The method also includes combining the plurality of sub-images to form a continuous image based on the aligned adjacent sub-images.

[0007] In accordance with other aspects of the present disclosure, a system for combining a plurality of sub-images is presented. The system includes an input unit configured to retrieve a plurality of sub-images and labels associated with the plurality of sub-images. Furthermore, the system includes a computing unit configured to determine overlap regions between adjacent sub-images of the plurality of sub-images based on the labels associated with the plurality of sub-images and align the adjacent sub-images based on the determined overlap regions of the adjacent sub-images. Additionally, the system includes a combining unit configured to combine the plurality of sub-images based on the aligned sub-images to generate a continuous image.

[0008] In accordance with further aspects of the present disclosure, an imaging system is presented. The system includes an acquisition subsystem configured to acquire a plurality of sub-images. In addition, the system includes a processing subsystem operatively coupled to the acquisition subsystem and including a merging platform configured to combine the plurality of sub-images into a continuous image, where the merging platform includes an input unit configured to retrieve the plurality of sub-images and labels associated with the plurality of sub-images, a computing unit configured to determine overlap regions between adjacent sub-images of the plurality of sub-images based on the labels associated with the plurality of sub-images, align the adjacent sub-images based on the determined overlap regions of the adjacent sub-images, and a combining unit configured to combine the plurality of sub-images based on the aligned sub-images to generate the continuous image.

DRAWINGS

[0009] These and other features, aspects, and advantages of the present disclosure will be better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

[0010] FIG. 1 is a diagrammatical representation of a typical vertebral column;

[0011] FIG. 2 is a diagrammatical representation of sub-images of the vertebral column;

[0012] FIG. 3 is a diagrammatical illustration of an exemplary system for combining sub-images, in accordance with aspects of the present disclosure;

[0013] FIG. 4 is a diagrammatical representation of a merging platform of FIG. 3, in accordance with aspects of the present disclosure;

[0014] FIG. 5 is a diagrammatical illustration of a method of combining sub-images based on a typical overlap region between the sub-images;

[0015] FIG. 6 is a diagrammatical illustration of a method of combining sub-images, in accordance with aspects of the present disclosure; and

[0016] FIG. 7 is a flowchart illustrating an exemplary method for combining sub-images, in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0017] The following detailed description of certain embodiments of the present disclosure will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general-purpose signal processor or a block of random access memory, hard disk, or the like). Similarly, the programs may be stand-alone programs, incorporated as subroutines in operating systems, or functions in installed software packages, and the like. It should be understood that the various embodiments are not limited to the arrangements shown in the drawings.

[0018] As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural of the elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "including" or "having" an element or a plurality of elements having a particular property may include additional elements not having that property.

[0019] Embodiments of the present disclosure are related to methods and systems for combining a plurality of sub-images corresponding to a region/object of interest to form a single continuous image representing the region/object of interest in its entirety. In one embodiment, the methods and systems of the present disclosure determine overlap regions between adjacent sub-images of the plurality of sub-images. Moreover, the methods of the present disclosure may determine the overlap region based on one or more labels of features of interest within the sub-images.

[0020] It will be understood that the embodiments of the present disclosure may find use in numerous applications. For instance, embodiments of the present application may be utilized in digital still photography to obtain a continuous image of a large region or object of interest. Similarly, embodiments of the present application may also be utilized in medical imaging to obtain a continuous image of a region of interest that is larger than the FOV of an image detector. It will be understood that these applications are merely exemplary and the systems and methods of the present disclosure may be utilized in multiple other applications without departing from the scope of the present application.

[0021] Further, it may be noted that in the present disclosure, the systems and methods for combining sub-images are described with reference to medical images. Particularly, embodiments of the present disclosure are described with respect to medical images that correspond to at least a portion of a vertebral column. However, it will be understood that such selection of a medical application and more particularly images of the vertebral column is merely utilized to illustrate the embodiments of the present disclosure and does not limit the scope of the present disclosure to any particular type of medical images in any manner. For instance, the sub-images may correspond to operational medical data.

[0022] FIG. 1 is a diagrammatical representation 100 of a typical vertebral column. The length of the vertebral column 100 in adults may range from about 60 cm to about 90 cm. Moreover, the vertebral column 100 may be divided into three different regions - a cervical region 102, a thoracic region 104, and a lumbar region 106. The cervical region 102 begins at the back of a patient's head and extends up to the upper back. The thoracic region 104 is representative of a region between the upper back and the lower back. In addition, the lumbar region 106 extends from the lower back to the pelvis. Further, each region typically includes a fixed number of vertebrae 108. For instance, the cervical region 102 includes 7 vertebrae, which are typically labelled as C1-C7, while the thoracic region 104 includes 12 vertebrae, which are typically labelled as T1-T12. Moreover, the lumbar region 106 includes 5 vertebrae, which are typically labelled as L1-L5. In some cases, however, patients may have one or more extra vertebrae in any of the regions 102, 104, 106, while other patients may have fewer vertebrae in any of the regions 102, 104, 106. Generally, soft inter-vertebral discs 110 are present between the vertebrae 108 to provide support and shock-absorbing capability to the vertebral column 100.

[0023] Further, as alluded to previously, presently available image detectors have a FOV of about 40 cm x 40 cm. Also, the length of the vertebral column 100 in most adults may vary from 60 cm to 90 cm. Accordingly, typical image detectors may fail to capture the entire length of the vertebral column 100 in a single image. Instead, the entire vertebral column 100 may be captured in a plurality of sub-images that depict different portions of the vertebral column 100. For instance, in case the complete vertebral column 100 is captured in two sub-images, one sub-image may capture the cervical region 102 and an upper half of the thoracic region 104, while the second sub-image may capture a lower half of the thoracic region 104 and the lumbar region 106.

[0024] FIG. 2 is a diagrammatical representation 200 of three sub-images of the vertebral column 100 of FIG. 1. More particularly, in the example illustrated in FIG. 2, the vertebral column 100 is captured in three sub-images 202, 204, 206. In the first sub-image 202, the cervical region 102 and a top portion of the thoracic region 104 are captured. The second sub-image 204 depicts the thoracic region 104 between the top portion and a bottom portion, while the third sub-image 206 captures an area of the vertebral column 100 representative of the bottom portion of the thoracic region 104 and the lumbar region 106. Moreover, it will be noted that during image acquisition, the sub-images 202, 204, 206 are captured such that at least a portion of the vertebral column 100 is common between adjacent sub-images. For instance, in the example illustrated in FIG. 2, the top portion of the thoracic region 104 is common between the first sub-image 202 and the second sub-image 204. Moreover, the bottom portion of the thoracic region 104 is common between the second sub-image 204 and the third sub-image 206. This common portion of the vertebral column 100 captured in adjacent pairs of sub-images is generally referred to as an overlap region 208. Moreover, this overlap region 208 is typically present in both the adjacent sub-images. FIG. 2 depicts an overlap region 208 present between the first and second sub-images 202, 204 and an overlap region 208 present between the second and third sub-images 204, 206.

[0025] In accordance with aspects of the present disclosure, sub-images of the region of interest, such as the sub-images 202, 204, 206, may be combined to form a single, continuous image. Moreover, in accordance with further aspects of the present disclosure, the overlap regions 208 may be determined based on labels corresponding to the vertebrae 108. Accordingly, the vertebrae 108 in the sub-images 202, 204, 206 may be labelled. The labelling may be performed manually or automatically. Alternatively, the sub-images 202, 204, 206 may be pre-labelled and the systems and methods of the present disclosure may retrieve these labels to determine the overlap region 208. Moreover, the overlap region 208 between adjacent pairs of sub-images may be determined based on the labels of the vertebrae 108 that are common across the adjacent sub-images.

[0026] FIG. 3 is a block diagram representation of an exemplary system 300 for use in diagnostic imaging, in accordance with aspects of the present disclosure. The system 300 may be configured to combine a plurality of sub-images that are representative of different regions of an object of interest into a single continuous image of the object of interest. In the present example of FIG. 3, the object of interest is a human patient 302. However, it will be understood that in other applications the object of interest may be any other vertebrate without departing from the scope of the present disclosure.

[0027] The system 300 may be configured to acquire image data from the patient 302. In one embodiment, the system 300 may be configured to acquire image data from the patient 302 via an image acquisition device 304. Moreover, in certain other embodiments, image data may be acquired via one or more sensors (not shown) that may be disposed on the patient 302. By way of example, the sensors may include physiological sensors such as electrocardiogram (ECG) sensors and/or positional sensors such as electromagnetic field sensors or inertial sensors. These sensors may be operationally coupled to a data acquisition device, such as an imaging system, via leads (not shown), for example.

[0028] Furthermore, the system 300 may also include a medical imaging system 306 that is in operative association with the image acquisition device 304. The medical imaging system 306 may include any type of imaging system. For example, the medical imaging system 306 may include a tomography imaging system, a computed tomography (CT) imaging system, an ultrasound imaging system, a magnetic resonance imaging (MRI) system, a positron emission tomography (PET) system, an X-ray system, or any other system capable of generating medical images. It will be appreciated that the medical imaging system 306 may also be a multimodality imaging system, such as an MRI/PET system, or a single modality imaging system without departing from the scope of the present disclosure. Moreover, while some of the embodiments are described herein with respect to an MRI system, it should be realized that the embodiments described herein might also be used with other types of medical imaging systems. In addition, the various embodiments of the present disclosure are not limited to medical imaging systems for imaging human subjects, but may include veterinary systems for imaging non-human objects.

[0029] As noted hereinabove, in a presently contemplated configuration, the medical imaging system 306 may be an MRI system. Further, the medical imaging system 306 may include an acquisition subsystem 308 and a processing subsystem 310, in one embodiment. The acquisition subsystem 308 may be configured to acquire image data representative of one or more anatomical regions of interest in the patient 302 via the image acquisition device 304. Any known modality may be utilized to acquire the images based on the type of medical imaging system utilized. For instance, in case of an MRI system, the nuclear magnetic property of protons within the anatomical region of interest may be utilized to acquire images of the anatomical region of interest. In the present disclosure, the anatomical region of interest is representative of the vertebral column 100 of the patient 302. However, it will be understood that the anatomical region of interest may be any region within the patient's body that is in proximity to the vertebral column 100 (see FIG. 1). For example, the anatomical region of interest may include the brain, the throat, the stomach, the prostate, the liver, the kidneys, and the like, without departing from the scope of the present disclosure.

[0030] Moreover, in one embodiment, the acquired image data may include a plurality of sub-images, such as sub-images 202, 204, 206 (see FIG. 2) of the patient's vertebral column 100. The sub-images 202, 204, 206 may include dorsal, axial, coronal, or sagittal views of different portions of the vertebral column 100. For example, the image data may include a plurality of sagittal sub-images of different regions of the vertebral column 100, such as sub-images 202, 204, 206 of the lumbar region 106, the thoracic region 104, and/or the cervical region 102 (see FIG. 1). It will be understood that in medical applications, the image data may include sub-images of vasculature (such as the renal artery), a combination of vertebral column and vasculature, bones of the legs, and the like. In one embodiment, the image data may include sub-images corresponding to a three-dimensional (3D) volume of the region of interest. In such cases, the image data may include a plurality of sub-image slices corresponding to the imaged volume.

[0031] Furthermore, the image data may be acquired in a single acquisition cycle or in a plurality of acquisition cycles. Moreover, the image data may be acquired using different acquisition techniques or in different sessions. Also, in each acquisition cycle, one or more sub-images corresponding to the anatomical region of interest may be captured. Accordingly, the image data pertaining to one patient 302 acquired in one or more acquisition cycles or sessions may include one or more sub-images of the cervical region 102, one or more sub-images of the thoracic region 104, or one or more sub-images of the lumbar region 106, and so on. Additionally, the acquired image data may be processed by the processing subsystem 310 to label the vertebrae 108 in the plurality of sub-images and/or to combine the plurality of sub-images into the single, continuous image of the vertebral column 100.

[0032] The image data acquired and/or processed by the medical imaging system 306 may be employed to aid an operator or a computing system in identifying disease states, assessing need for treatment, determining suitable treatment options, tracking the progression of the disease, and/or monitoring the effect of treatment on the disease states. In certain embodiments, the processing subsystem 310 may be further coupled to a storage system, such as the data repository 316, where the data repository 316 may be configured to store the acquired and/or processed image data. It may be noted that sub-images corresponding to different patients may be stored in the data repository 316. In one example, metadata may be included in the image data such that image data pertaining to a specific patient 302 may be easily differentiated from image data pertaining to other patients.

[0033] In FIG. 3, the acquisition subsystem 308, the processing subsystem 310, and the data repository 316 are illustrated as individual components of the medical imaging system 306. However, in other embodiments, these components may be implemented as a single device configured to acquire image data, store the image data, and combine the sub-images in the image data to form a single, continuous image. Along similar lines, it will be understood that any two of these components may be combined into a single device without departing from the scope of the present disclosure.

[0034] In some situations, as alluded to previously, the image acquisition subsystem 308 may not be capable of acquiring a single continuous image that is representative of the entire vertebral column 100. Instead, the image acquisition subsystem 308 may acquire multiple sub-images 202, 204, 206 representative of different portions of the vertebral column 100. To combine these sub-images 202, 204, 206 to generate a single, continuous image, the system 300 of the present disclosure may include a merging platform 312. Additionally, in certain embodiments, the system 300 may also include a labelling platform 314.

[0035] The merging platform 312 may be configured to combine a plurality of sub-images corresponding to different regions of the vertebral column 100 of a particular patient to form a single, continuous image of the vertebral column 100. To that end, the merging platform 312 may be configured to determine overlap regions such as the overlap regions 208 (see FIG. 2) between adjacent sub-images. In accordance with aspects of the present disclosure, embodiments of the merging platform 312 may be configured to utilize labels corresponding to the vertebrae in the sub-images to determine the overlap regions. The functionality of the merging platform 312 will be described in greater detail with reference to FIGs. 3-5.

[0036] In one embodiment, the labelling platform 314 may be configured to assign labels to features of interest in the sub-images. These labels may be utilized by the merging platform 312 to determine the overlap region. The labels, as referred to herein, are representative of notations of the vertebrae or inter-vertebral discs, such as C1-C7, T1-T12, and L1-L5. In case the features of interest in the sub-images are not representative of vertebrae, the labels may be representative of the names of such features of interest. It will be understood that the labelling platform 314 may utilize any known technique to determine and/or assign these labels. For instance, the labelling platform 314 may be configured to label the vertebrae 108 based on user input. Alternatively, the labelling platform 314 may be configured to utilize other techniques to automatically determine the labels of the vertebrae 108. In another embodiment, the labelling platform 314 may be configured to utilize a semi-automatic approach where an operator may be requested to input some initial information based on which the labelling platform 314 may be configured to automatically label the vertebrae 108 in the sub-images. Furthermore, the labelling platform 314 may be configured to determine the coordinates of the labels with respect to the position of vertebrae in the sub-images.
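By way of a non-limiting illustration (not part of the claimed subject matter), the output of such a labelling step could be represented as records pairing each vertebra notation with the coordinate of its label in a given sub-image; the field names and values below are purely hypothetical.

```python
# Hypothetical label record: each entry pairs a vertebra notation with the
# (row, column) coordinate of its label within a given sub-image. The field
# names and values are illustrative assumptions, not part of the disclosure.
from dataclasses import dataclass

@dataclass
class VertebraLabel:
    name: str          # notation such as "C7" or "T11"
    sub_image_id: int  # which sub-image the label belongs to
    row: int           # pixel row of the label
    col: int           # pixel column of the label

labels = [
    VertebraLabel("T11", sub_image_id=1, row=780, col=255),
    VertebraLabel("T12", sub_image_id=1, row=835, col=258),
    VertebraLabel("T11", sub_image_id=2, row=70, col=250),
]
```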

[0037] With continuing reference to FIG. 3, the labelling platform 314 may be configured to label the sub-images retrospectively or in real-time. For instance, in some cases, it may be desirable to label the vertebrae 108 in real-time to aid in real-time diagnosis. Alternatively, in other cases, the labelling may be performed once image data is received from one or more patients and stored in a data repository 316. Additionally, the labelling platform 314 may be configured to store the labelled sub-images in the data repository 316. In this embodiment, the merging platform 312 may be configured to retrieve the pre-labelled sub-images from the data repository 316 for combining the sub-images into a single continuous image.

[0038] It may be noted that in some situations the sub-images may include labels. In these situations, the labelling platform 314 may be configured to check whether the sub-images are pre-labelled. If it is determined that the sub-images are pre-labelled, the labelling platform 314 may be configured to retrieve the labels from the sub-images. However, if it is determined that the sub-images do not include labels, the labelling platform 314 may be configured to label the features of interest, such as the vertebrae, in the sub-images.

[0039] Further, the merging platform 312 may be coupled to the labelling platform 314 such that the merging platform 312 may obtain the labels corresponding to the acquired sub-images from the labelling platform 314. In FIG. 3, the labelling platform 314 and the merging platform 312 are illustrated as part of the processing subsystem 310. Alternatively, the labelling platform 314 and/or the merging platform 312 may be standalone modules that may be operatively coupled to each other through a network connection, such as a LAN, the internet, and the like. Moreover, in case the labelling platform 314 and/or the merging platform 312 are standalone modules, these platforms may be configured to assign labels or combine sub-images using cloud computing, for example. In such cases, the labelling platform 314 and/or the merging platform 312 may either operate individually or in conjunction with the acquisition subsystem 308 and the processing subsystem 310. Further, in some embodiments, the labelling platform 314 may be a part of the merging platform 312, without departing from the scope of the present disclosure.

[0040] As illustrated in FIG. 3, the medical imaging system 306 may also include a display 318 and a user interface 320. In certain embodiments, such as in a touch screen, the display 318 and the user interface 320 may overlap. Additionally, the display 318 and the user interface 320 may include a common area, in some embodiments. Moreover, the user interface 320 and/or the display 318 may be part of a handheld device, such as a cell phone, or a mobile device. In accordance with aspects of the present disclosure, the display 318 of the medical imaging system 306 may be configured to display one or more sub-images or the combined continuous images generated by the medical imaging system 306 based on the acquired image data. Additionally, in accordance with further aspects of the present disclosure, the labelled sub-images and/or the combined images may also be visualized on the display 318.

[0041] The user interface 320, on the other hand, may allow operators to communicate with the acquisition subsystem 308, the processing subsystem 310, the labelling platform 314, and/or the merging platform 312. For instance, operators may issue commands to display sub-images or combined images, begin image acquisition, stop image acquisition, control the sensors that are coupled to the patient 302, navigate through the sub-images, label the sub-images, combine the sub-images, and the like, through the user interface 320. Additionally, the user interface 320 may also be configured to aid in manipulating and/or organizing the images displayed on the display 318.

[0042] In addition, the user interface 320 of the medical imaging system 306 may include a human interface device (not shown) configured to aid the operator in manipulating image data displayed on the display 318. The human interface device may include a mouse-type device, a trackball, a joystick, a stylus, or a keyboard configured to aid the operator in identifying one or more regions of interest requiring therapy, and marking the starting and end points. However, as will be appreciated, other human interface devices, such as, but not limited to, a touch screen, may also be employed.

[0043] The systems, platforms, and subsystems described with reference to FIG. 3 may communicate with each other through electrical and/or data connections. Data connections may be direct wired links, fiber optic connections, or wireless communication links without departing from the scope of the present disclosure. In some embodiments, data connections may include compact discs, flash drives, and the like. Electrical connections may also include wired or wireless connections. In some cases, all the connections may be of the same type. Alternatively, different subsystems may be coupled using different communication means. For instance, the acquisition subsystem 308 may be coupled to the image acquisition device 304 through electrical connections, while the acquisition subsystem 308 may be coupled to the data repository 316 through wireless or wired data connections. Moreover, all the elements of the medical imaging system 306 may be coupled to a network (not shown), such as a wired or wireless network where the various subsystems and platforms may interact and communicate with each other seamlessly.

[0044] Moreover, the subsystems and platforms illustrated in FIG. 3 are by no means a complete rendition of the components of the medical imaging system 306. The medical imaging system 306 may include other subsystems to effectively function. For instance, the medical imaging system 306 may include a patient positioning subsystem (not shown) to automatically position the patient 302 with respect to the image acquisition device 304. Similarly, the medical imaging system 306 may include a host of electronic and electrical devices such as amplifiers, switches, and gradient coils, without departing from the scope of the present disclosure.

[0045] FIG. 4 is a diagrammatical representation 400 of an exemplary embodiment of the merging platform 312 of FIG. 3, according to aspects of the present disclosure. Further, in a presently contemplated configuration, the merging platform 312 may include an input unit 404, an image-processing unit 406, a computing unit 408, and a combining unit 410. In some embodiments, the merging platform 312 may optionally include a labelling unit 412.

[0046] The input unit 404 may be configured to retrieve image data 402, user inputs and/or labels associated with image data 402. In one example, the image data 402 may be retrieved directly from the acquisition subsystem 308 or the data repository 316. As previously noted, the image data 402 may include a plurality of sub-images associated with one or more patients and these sub-images may be acquired in different acquisition cycles, using different acquisition techniques, or in different acquisition sessions. Further, the input unit 404 may be configured to retrieve the image data 402 corresponding to a single patient, such as the patient 302. For instance, the input unit 404 may be configured to retrieve two sub-images corresponding to the patient 302, where the first sub-image corresponds to the cervical-thoracic region of the patient 302 and the second sub-image corresponds to the thoracic-lumbar region of the same patient 302. Moreover, the input unit 404 may be configured to display the sub-images on the display 318 and/or issue commands via the display 318 requesting an operator to insert label information.

[0047] Furthermore, the input unit 404 may be configured to retrieve labels associated with the sub-images in the image data 402 from the labelling platform 314 (see FIG. 3). Alternatively, in case the sub-images are pre-labelled and stored in the data repository 316, the input unit 404 may retrieve the labels directly from the data repository 316.

[0048] Moreover, the input unit 404 may be configured to communicate with the user interface 320 to obtain user inputs. User inputs may include commands to combine the sub-images into a continuous image, commands to display the sub-images and/or the continuous image, and commands to edit any of the sub-images. Furthermore, in case the sub-images are labelled manually or semi-automatically, the user inputs may include label information.

[0049] The image-processing unit 406 may be configured to process the sub-images retrieved by the input unit 404. For example, the image-processing unit 406 may be configured to enhance the contrast of the sub-images, accentuate the edges of the vertebrae 108 in the sub-images, reduce any blurring in the sub-images, and the like.

[0050] As previously noted, it may be desirable to obtain labels associated with the sub-images to combine the sub-images to generate a single, continuous image. Also, as described with reference to FIG. 3, the labelling platform 314, which may be part of the processing subsystem 310 or a standalone platform, may assign labels to the sub-images. However, in some cases, the processing subsystem 310 may not include the labelling platform 314 and/or the images may not be pre-labelled. In such cases, the merging platform 312 may include a labelling unit 412. Moreover, the operation of the labelling unit 412 may be substantially similar to the operation of the labelling platform 314. Accordingly, the labelling unit 412 may be configured to label one or more features, such as the vertebrae 108, in the sub-images. However, if the image data 402 is representative of an image of the inter-vertebral discs 110 (see FIG. 2), the labelling unit 412 may be configured to label the inter-vertebral discs 110 based on their location with respect to the vertebral column 100. Further, the labelling unit 412 may utilize any known technique to label the vertebrae 108 in the sub-images. For instance, the labelling unit 412 may request an operator to insert the labels manually. Alternatively, the labelling unit 412 may be configured to automatically determine the labels of the vertebrae 108. In another embodiment, the labelling unit 412 may utilize a semi-automatic approach where an operator may be requested to input some initial information based on which the labelling unit 412 may be configured to automatically label the vertebrae 108 of the sub-images. Also, the labelling unit 412 may be configured to assign labels in real-time or retrospectively without departing from the scope of the present disclosure.

[0051] The computing unit 408 may be configured to receive and process labelled sub-images. Particularly, the computing unit 408 may be configured to determine overlap regions between adjacent sub-images and align the sub-images at the overlap region. To that end, the computing unit 408 may be configured to identify vertebrae that are common between adjacent sub-images based on the labelled sub-images. Accordingly, the computing unit 408 may be configured to retrieve coordinates of the labels in adjacent sub-images and identify the labels that are common in the adjacent sub-images. For instance, if one sub-image includes vertebrae C1 to T6 and another sub-image includes vertebrae T5 to L5, the computing unit 408 may be configured to identify that labels T5 and T6 are common in both the sub-images.
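As an illustrative sketch of this label-matching step (assuming the labels are available as simple ordered lists of notations, which is not mandated by the disclosure), the vertebrae common to two adjacent sub-images could be found as follows; the example lists mirror the C1-T6 / T5-L5 case above.

```python
# Illustrative sketch: find the vertebra labels common to two adjacent
# sub-images. The label lists are hypothetical and follow anatomical order.

def common_vertebrae(labels_upper, labels_lower):
    """Return the labels present in both adjacent sub-images."""
    lower_set = set(labels_lower)
    return [name for name in labels_upper if name in lower_set]

upper = [f"C{i}" for i in range(1, 8)] + [f"T{i}" for i in range(1, 7)]   # C1..C7, T1..T6
lower = [f"T{i}" for i in range(5, 13)] + [f"L{i}" for i in range(1, 6)]  # T5..T12, L1..L5

print(common_vertebrae(upper, lower))  # ['T5', 'T6']
```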

[0052] Additionally, the computing unit 408 may also be configured to determine boundaries of the vertebrae in the sub-images. In one embodiment, the computing unit 408 may be configured to determine the boundaries of the vertebrae based on a vertebrae segmentation technique. In the vertebrae segmentation technique, the edges of the vertebrae 108 may be determined by identifying variations in intensity of pixels. At the edges of the vertebrae, the intensity of pixels may vary. This variation in the intensity of pixels may be indicative of the boundaries of the vertebrae. In one example, the computing unit 408 may be configured to draw horizontal parallel lines from the coordinate of the label in the left and right directions. By detecting the variation in pixel intensity as the horizontal lines move away from the coordinate of the label, the computing unit 408 may be configured to determine the vertical edges of the vertebrae 108. Alternatively, the computing unit 408 may be configured to compare the labelled vertebrae 108 with vertebrae templates. A variance between the labelled vertebrae 108 and the vertebrae templates may be calculated to determine the edges of the vertebrae 108.
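A highly simplified sketch of the line-scanning variant is shown below; it assumes the vertebrae appear brighter than the surrounding background and that a fixed intensity threshold approximates the edge, both of which are illustrative assumptions rather than the disclosed segmentation technique.

```python
import numpy as np

# Simplified boundary search: starting from a labelled coordinate, walk left
# and right along the row until the intensity falls below a threshold, taken
# here as a crude proxy for the vertebral edge.

def vertical_edges(image, row, col, threshold):
    """Return (left_col, right_col) bounding the bright region around (row, col)."""
    left = col
    while left > 0 and image[row, left - 1] >= threshold:
        left -= 1
    right = col
    while right < image.shape[1] - 1 and image[row, right + 1] >= threshold:
        right += 1
    return left, right

# Toy sub-image: a bright "vertebra" on a dark background.
image = np.zeros((20, 20))
image[8:12, 6:14] = 1.0
print(vertical_edges(image, row=10, col=9, threshold=0.5))  # (6, 13)
```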

[0053] Moreover, the computing unit 408 may be configured to determine boundaries of the overlap region between adjacent sub-images based on the identified boundaries of the common vertebrae in adjacent sub-images. For instance, the overlap region may be identified as the region whose boundaries substantially coincide with the boundaries of the common vertebrae. Alternatively, the overlap region may be identified as a region with boundaries extending beyond the boundaries of the commonly present vertebrae or a region having boundaries within the boundaries of the commonly present vertebrae in adjacent sub-images.
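For illustration only, one way to realize this step is to take the bounding box enclosing the boundaries of the common vertebrae, optionally padded by a margin so the region may extend slightly beyond them, as the text allows; the (top, left, bottom, right) box convention and the pixel values below are assumptions.

```python
# Sketch: derive an overlap-region bounding box from the boxes of the
# vertebrae common to adjacent sub-images. Boxes are (top, left, bottom, right)
# in pixel coordinates; the margin lets the region extend beyond the vertebrae.

def overlap_region(vertebra_boxes, margin=0):
    tops, lefts, bottoms, rights = zip(*vertebra_boxes)
    return (min(tops) - margin, min(lefts) - margin,
            max(bottoms) + margin, max(rights) + margin)

t5_box = (810, 200, 860, 300)   # hypothetical boundaries of vertebra T5
t6_box = (862, 198, 914, 302)   # hypothetical boundaries of vertebra T6
print(overlap_region([t5_box, t6_box], margin=5))  # (805, 193, 919, 307)
```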

[0054] As will be appreciated, a patient may experience voluntary or involuntary motion between acquisitions of the sub-images. This movement may cause misalignment between adjacent sub-images. Accordingly, the computing unit 408 may also be configured to align the sub-images. In particular, the computing unit 408 may be configured to align the sub-images by aligning the overlap regions of adjacent sub-images. In one embodiment, the overlap regions may be aligned via use of an image registration process.

[0055] Image registration is the process of transforming different sets of image data into one coordinate system. Moreover, image registration enables comparison or integration of the data obtained from different measurements. In addition, to align the images, one image may be considered as a source image and the other images may be considered as target images. The target images may be translated such that the target images are aligned with the source image. Various transformations may be performed on the target images to align the target images with the source image. For instance, the target images may be rotated or transposed such that the target images are aligned with the source image.

[0056] In accordance with aspects of the present disclosure, one sub-image may be considered as the source image and an adjacent sub-image may be considered as the target image. Particularly, the overlap region of one sub-image may be considered as the source image and the overlap region of the adjacent sub-image may be considered as the target image. Accordingly, the target overlap region may be transposed and/or rotated such that the target overlap region is aligned with the source overlap region.

[0057] Subsequent to each transformation of the target overlap region, the computing unit 408 may be configured to compare the source overlap region with the target overlap region to determine a degree of alignment between the source overlap region and the target overlap region. Moreover, the computing unit 408 may also be configured to generate an alignment score corresponding to the determined degree of alignment. Various comparison techniques may be utilized to generate the alignment score. In one example, a maximum correlation may be calculated between the target overlap region and the source overlap region. Alternatively, the computing unit 408 may be configured to calculate a minimum variance between the target overlap region and the source overlap region. It will be understood that maximum correlation and minimum variance are two examples of comparison techniques that may be employed to determine whether the source and target overlap regions are aligned. Other known image registration techniques may be employed to align the overlap regions corresponding to adjacent sub-images without departing from the scope of the present disclosure. For instance, techniques such as mutual information, normalized mutual information, sum of squared differences, or phase correlation may be utilized to determine the degree of alignment between the source and target images.
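As a sketch of one of the comparison measures mentioned above, the following computes a normalized cross-correlation between equally sized source and target overlap regions; the choice of measure and the random test data are illustrative assumptions.

```python
import numpy as np

# Illustrative alignment score: normalized cross-correlation between the
# source and target overlap regions (given as equally sized arrays). This is
# only one of the comparison measures mentioned in the text.

def alignment_score(source, target):
    s = source - source.mean()
    t = target - target.mean()
    denom = np.sqrt((s * s).sum() * (t * t).sum())
    return float((s * t).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
source = rng.random((64, 64))
print(alignment_score(source, source))              # ~1.0 (perfect alignment)
print(alignment_score(source, np.roll(source, 5)))  # lower score for a shifted copy
```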

[0058] In one embodiment, the computing unit 408 may be configured to continue transforming the target overlap region with respect to the source overlap region until a computed alignment score exceeds a determined threshold score. This threshold score may be stored in the computing unit 408. Moreover, in accordance with aspects of the present disclosure, the target overlap region may be transformed by a determined amount and an alignment score may be computed for each of these transformations until the computed alignment score exceeds the determined threshold score. Once the alignment score exceeds the determined threshold score, the computing unit 408 may be configured to stop transforming the target sub-image and identify the transformation for which the alignment score exceeds the threshold score as a best-aligned fit.

[0059] In accordance with another aspect of the present disclosure, the computing unit 408 may be configured to determine a search set. This search set may include various possible transformations of the target overlap region with respect to the source overlap region. In this case, the alignment scores may be calculated for all the possible transformations in the search set. The transformation with the highest alignment score may be selected to be representative of a best-aligned fit of the overlap regions.
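A minimal sketch of the search-set approach is given below: candidate translations of the target overlap region are enumerated over a small grid and the translation yielding the highest correlation score is kept. The +/-4 pixel search range and the restriction to pure translations are simplifying assumptions for illustration.

```python
import numpy as np

# Sketch of the search-set approach: shift the target overlap region over a
# grid of candidate translations and keep the translation whose correlation
# with the source region is highest.

def score(source, target):
    s, t = source - source.mean(), target - target.mean()
    denom = np.sqrt((s * s).sum() * (t * t).sum())
    return float((s * t).sum() / denom) if denom > 0 else 0.0

def best_translation(source, target, max_shift=4):
    best = (0, 0, -np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            s = score(source, shifted)
            if s > best[2]:
                best = (dy, dx, s)
    return best

rng = np.random.default_rng(1)
source = rng.random((48, 48))
target = np.roll(source, 3, axis=0)       # target is the source shifted down by 3 rows
print(best_translation(source, target))   # (-3, 0, ~1.0)
```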

[0060] The combining unit 410 may be configured to combine the adjacent sub-images based on the determined best-aligned fit to generate a single, continuous image that is representative of the entire object of interest, such as the vertebral column 100. Any known combining techniques may be utilized to combine the adjacent sub-images. For instance, the combining unit 410 may be configured to determine an area of interest, such as a horizontal line, between the adjacent sub-images. Subsequently, the combining unit 410 may select a first set of pixels corresponding to a first region in one sub-image, where the first region encompasses a region above the determined area of interest. Further, a second set of pixels corresponding to a second region in the adjacent sub-image may be selected, where the second region encompasses a region below the determined area of interest. Further, the combining unit 410 may combine the first set of pixels and the second set of pixels about the determined area of interest to generate the combined image.
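The cut-line combination described above might look like the following sketch, which assumes both sub-images have already been aligned into a common frame of equal size; the seam row and intensity values are illustrative.

```python
import numpy as np

# Sketch of the cut-line combination: rows above the chosen seam come from
# the upper sub-image and rows at/below it come from the lower sub-image.
# Both inputs are assumed to already share a common coordinate frame.

def combine_at_row(upper, lower, seam_row):
    combined = np.empty_like(upper)
    combined[:seam_row, :] = upper[:seam_row, :]
    combined[seam_row:, :] = lower[seam_row:, :]
    return combined

upper = np.full((100, 64), 0.8)   # toy upper sub-image
lower = np.full((100, 64), 0.4)   # toy lower sub-image
stitched = combine_at_row(upper, lower, seam_row=60)
print(stitched[59, 0], stitched[60, 0])  # 0.8 0.4 -> a visible seam remains
```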

[0061] Combining the sub-images using this technique may however result in a distinct line that is indicative of a region where the sub-images are combined. Such a distinction appears in the continuous image because the average intensities of the sub-images may vary. In another technique, the combining unit 410 may be configured to combine the sub-images with a smooth transition. To this end, the combining unit 410 may be configured to utilize a weighted average of the intensities of the pixels in the aligned overlap regions. It will be understood that various other techniques known in the art may be employed by the combining unit 410 to combine the sub-images without departing from the scope of the present disclosure.
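A sketch of the smooth-transition alternative is shown below, blending the aligned overlap rows with a weight that ramps linearly from one sub-image to the other; the linear ramp is an illustrative choice rather than the specific weighting contemplated by the disclosure.

```python
import numpy as np

# Sketch of weighted-average blending across the aligned overlap region: the
# weight given to the upper sub-image ramps linearly from 1 to 0 down the
# overlap, avoiding a visible seam.

def blend_overlap(upper_overlap, lower_overlap):
    rows = upper_overlap.shape[0]
    w = np.linspace(1.0, 0.0, rows)[:, None]   # per-row weight for the upper image
    return w * upper_overlap + (1.0 - w) * lower_overlap

upper_overlap = np.full((10, 5), 0.8)
lower_overlap = np.full((10, 5), 0.4)
print(blend_overlap(upper_overlap, lower_overlap)[:, 0])  # smooth 0.8 -> 0.4 transition
```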

[0062] Moreover, the single, continuous image generated by the combining unit 410 may be processed further to improve the quality of the image. In one embodiment, the image-processing unit 406 may be configured to process the combined images. For instance, the image-processing unit 406 may be configured to perform intensity standardization on the combined image so that any intensity variations between the sub-images of the combined image are minimized. Various intensity normalization techniques are available and the image-processing unit 406 may employ any of these techniques without departing from the scope of the present disclosure. In one embodiment, the image-processing unit 406 may be configured to standardize the intensity in the combined image based on the intensity of the vertebrae. Some other examples of post-processing techniques include adjusting the contrast of the combined image and smoothening any transitions between the sub-images of the combined image.
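One simple intensity-standardization sketch, rescaling an image so that a reference structure such as the segmented vertebrae attains a common mean intensity, is given below; the target mean value and the mask are hypothetical.

```python
import numpy as np

# Sketch of intensity standardization: rescale an image so that the mean
# intensity inside a reference mask (e.g. the segmented vertebrae) matches a
# common target value. The target value and mask are assumptions.

def standardize(image, reference_mask, target_mean=0.7):
    current = image[reference_mask].mean()
    return image * (target_mean / current) if current > 0 else image

rng = np.random.default_rng(2)
image = rng.random((32, 32)) * 0.5            # dim sub-image
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True                     # hypothetical vertebra region
standardized = standardize(image, mask)
print(round(standardized[mask].mean(), 3))    # ~0.7
```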

[0063] It will be appreciated that in the presently known techniques, the overlap region between adjacent sub-images includes the entire rectangular region that is common between the adjacent sub-images. This overlap region typically encompasses the vertebral column, neighbouring tissues, organs, bones, and the like. However, in accordance with exemplary aspects of the present disclosure, the overlap region determined by the computing unit 408 is a subset of the rectangular region that is identified by the currently available techniques. Specifically, the computing unit 408 may be configured to identify a region that includes a portion of the vertebral column 100 that is common in the adjacent sub-images. It may be noted that the overlap region determined by the computing unit 408 of the present disclosure may be in the range of about 20% to 70% of the rectangular region identified by the currently known techniques. Such reduction in the overlap region is possible via use of embodiments of the present disclosure as the boundaries of the overlap region 208 are determined based on the boundaries of the labelled vertebrae that are common in the adjacent sub-images. Implementing the system 400 as described hereinabove aids in reducing the overlap regions between the adjacent sub-images. This reduction in the overlap regions in turn facilitates a reduction in computation time.

[0064] FIGs. 5-6 are diagrammatical representations of the overlap regions determined by currently available systems and the exemplary merging platform 312 of the present disclosure, respectively. In particular, FIG. 5 is a diagrammatical representation 500 of adjacent sub-images and the overlap region determined using presently available techniques. In FIG. 5, two sub-images 502, 504 representative of a vertebral column of a patient are depicted. Reference numeral 506 is representative of a single continuous image generated by the combination of the sub-images 502, 504. It will be understood that the single image 506 is combined using presently known techniques. Also, reference numeral 508 is representative of the overlap region as determined by the currently available techniques. It may be noted that the overlap region 508 is representative of a rectangular region that is common across the two sub-images 502, 504.

[0065] Turning now to FIG. 6, a diagrammatical representation 600 of adjacent sub-images and the overlap regions determined using the exemplary computing unit 408 of the merging platform 312 is depicted. Two sub-images 602, 604 representative of different portions of the vertebral column of a patient are depicted in FIG. 6. Moreover, reference numeral 606 is representative of a single continuous image generated by the combination of the sub-images 602, 604. It will be understood that the single, continuous image 606 is combined using the exemplary merging platform 312 of the present disclosure. Also, reference numeral 608 is representative of the overlap region as determined by the exemplary merging platform 312. As previously noted, in accordance with embodiments of the present disclosure, the overlap region 608 is representative of the region that includes one or more vertebrae that are commonly present in adjacent sub-images. It may be noted that the overlap region 608 is substantially smaller than the overlap region 508 determined by currently available techniques. This reduction in the size of the overlap region 608 substantially reduces the time taken to align and/or combine the sub-images as compared to the time taken by presently known techniques for such combination.

[0066] FIG. 7 is a flow chart 700 depicting an exemplary method for combining a plurality of sub-images, in accordance with aspects of the present disclosure. As previously noted, the sub-images are representative of different portions of a region of interest, such as a vertebral column. The method will be described with reference to FIGs. 1-6. The method begins at step 702, where the merging platform 312 may be configured to receive the plurality of sub-images. In one embodiment, these sub-images may correspond to different portions of the vertebral column 100 of a particular patient, such as the patient 302. For instance, a first sub-image, such as the sub-image 202, may be representative of the cervical-thoracic region, a second sub-image, such as the sub-image 204, may be representative of the thoracic region, and a third sub-image, such as the sub-image 206, may be representative of the thoracic-lumbar region. In another example, the vertebral column 100 may be captured in two sub-images, where the first sub-image may be representative of the cervical-thoracic region and the second sub-image may be representative of the thoracic-lumbar region. In yet another example, the sub-images may correspond to portions of a region of interest, such as vasculature or other operational regions of interest without departing from the scope of the present disclosure. As described previously, the sub-images may be acquired using different acquisition techniques or in different sessions. Further, the input unit 404 may be configured to receive or retrieve these sub-images from the acquisition subsystem 308 or the data repository 316. It will be understood that an object/region of interest may be captured in more than three sub-images without departing from the scope of the present disclosure. Moreover, each pair of adjacent sub-images in the plurality of sub-images may include overlap regions.

[0067] Subsequently, at step 704, labels and/or the coordinates of the labels associated with one or more features of interest in the sub-images may be retrieved. In case the sub-images correspond to portions of the vertebral column 100, the features may be vertebrae or inter-vertebral discs. Moreover, in one example, the input unit 404 may be employed to retrieve the labels. For instance, the input unit 404 may retrieve two or more pre-labelled sub-images from the data repository 316. Alternatively, the coordinates of the labels may be retrieved from the labelling platform 314. Also, the sub-images may be labelled manually by an operator or automatically by the labelling platform 314. In another embodiment, the merging platform 312 may include the labelling unit 412 that may be used to label the retrieved sub-images manually, automatically, or semi-automatically.

[0068] Further, overlap regions may be determined between adjacent sub-images based on the retrieved labels, as depicted by step 706. The determination of the overlap region is described with reference to the vertebrae of the vertebral column 100. However, it will be understood that the functions of the computing unit 408 may be altered to determine overlap regions based on any other features in the sub-images, without departing from the scope of the present disclosure.

[0069] The computing unit 408 may be configured to determine the vertebrae that are common in adjacent sub-images based on the retrieved labels. For instance, if one sub-image includes vertebrae C1 to T8 and the adjacent sub-image includes vertebrae T5 to L5, the computing unit 408 may be configured to determine that the common vertebrae between the adjacent sub-images are the vertebrae T5 to T8. Subsequently, the computing unit 408 may be configured to determine boundaries of the vertebrae that are common in the adjacent sub-images. The boundaries of the vertebrae may be computed using vertebrae segmentation or template matching as described with reference to FIG. 4. Based on the identified boundaries, the computing unit 408 may be configured to determine boundaries of the overlap region for each sub-image. Further, the overlap region may be representative of a region whose boundaries coincide with the boundaries of one or more vertebrae that are common between adjacent sub-images. Alternatively, the overlap region may be representative of a region extending beyond the boundaries of the common vertebrae or a region having boundaries within the boundaries of the common vertebrae without departing from the scope of the present disclosure.

[0070] Subsequently, at step 708, the overlap regions corresponding to adjacent sub-images may be aligned. In one embodiment, the computing unit 408 may be used to align the overlap regions. In one example, the computing unit 408 may be configured to employ image registration techniques to align the overlap regions. In image registration, the overlap region of one sub-image may be representative of a source overlap region and the overlap region of the adjacent sub-image may be representative of a target overlap region. The computing unit 408 may be configured to compute an alignment score between the source and target overlap regions by transforming the target overlap region with respect to the source overlap region. The alignment score may be representative of a degree of alignment between the source overlap region and the target overlap region. In one embodiment, the computing unit 408 may be configured to compute alignment scores for each transformation in a determined search set. Alternatively, the alignment scores may be computed for different transformations until a computed alignment score exceeds a determined threshold score. Once the alignment score exceeds the determined threshold score, the computing unit 408 may be configured to stop transforming the target sub-image and identify the transformation for which the alignment score exceeds the threshold score as a best-aligned fit. Moreover, the computing unit 408 may be configured to transform the target sub-image with respect to the source sub-image in a direction along which the alignment scores improve over the previously computed alignment scores.

[0071] Moreover, any known statistical means may be employed to determine the alignment scores. For instance, the computing unit 408 may be configured to compute a maximum correlation or a minimum variance between the overlap regions of the adjacent sub-images. It will be understood that maximum correlation and minimum variance are exemplary techniques to determine the best-aligned fit and any other statistical techniques may be employed to determine the best-aligned fit without departing from the scope of the present disclosure. For instance, techniques such as mutual information, normalized mutual information, sum of squared differences, or phase correlation may be utilized to determine the degree of alignment between the source and target overlap regions.
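Purely for illustration, a few such scoring functions are sketched below in the same form as the correlation score above: sum of squared differences, variance of the difference image, and histogram-based mutual information. These are standard textbook formulations, not statistics mandated by the disclosure, and each is written so that a larger value indicates a better alignment.

```python
import numpy as np


def ssd_score(source, target):
    """Negated sum of squared differences (larger is better)."""
    return -float(((source.astype(float) - target.astype(float)) ** 2).sum())


def difference_variance_score(source, target):
    """Negated variance of the difference image (larger is better)."""
    return -float(np.var(source.astype(float) - target.astype(float)))


def mutual_information_score(source, target, bins=32):
    """Histogram-based mutual information between the two overlap regions."""
    joint, _, _ = np.histogram2d(source.ravel(), target.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nonzero = pxy > 0
    return float((pxy[nonzero] *
                  np.log(pxy[nonzero] / (px[:, None] * py[None, :])[nonzero])).sum())
```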

[0072] The sub-images may be combined to form the single continuous image based on the aligned overlap regions, as depicted in step 710. As described previously, the combining unit 410 may employ any known technique for combining the plurality of sub-images based on the aligned overlap regions. For instance, the sub-images may be combined at a specific area, such as a horizontal line, by using pixels above the horizontal line from one sub-image and pixels below the horizontal line from the adjacent sub-image. Alternatively, the combining unit 410 may be configured to combine the sub-images by blending the aligned overlap regions using any known techniques. Further, any other known combination technique may be employed without departing from the scope of the present disclosure.
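The two combining strategies described above, stitching at a horizontal line and blending the aligned overlap, may be sketched as follows. The helper names, the assumption that the aligned sub-images share the same width, and the linear blending weights are illustrative choices; the combining unit 410 may equally employ any other known technique.

```python
import numpy as np


def stitch_at_line(top, bottom, seam_row_top, seam_row_bottom):
    """Combine two aligned sub-images at a horizontal seam: pixels above the
    seam come from the top sub-image, pixels below it from the bottom one."""
    return np.vstack([top[:seam_row_top, :], bottom[seam_row_bottom:, :]])


def blend_overlap(top, bottom, overlap_rows):
    """Combine two aligned sub-images by linearly blending their shared
    `overlap_rows` rows, ramping from the top to the bottom sub-image."""
    weights = np.linspace(1.0, 0.0, overlap_rows)[:, None]
    blended = (weights * top[-overlap_rows:, :].astype(float)
               + (1.0 - weights) * bottom[:overlap_rows, :].astype(float))
    return np.vstack([top[:-overlap_rows, :], blended, bottom[overlap_rows:, :]])
```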

[0073] Once the sub-images are combined, the image-processing unit 406 may be configured to process the combined image further to enhance the quality of the combined image. For instance, the image-processing unit 406 may be configured to process the combined image to standardize the intensity. One technique for standardizing the intensity may be based on the intensity of the vertebrae. It will be understood that various other techniques to standardize the intensity in the combined image are known and any of these techniques may be employed without departing from the scope of the present disclosure.
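One possible standardization based on the intensity of the vertebrae is sketched below: the combined image is rescaled so that the mean intensity inside a vertebra mask matches a chosen reference value. The mask, the reference value, and the simple multiplicative scaling are illustrative assumptions; any other known standardization technique may be used instead.

```python
import numpy as np


def standardize_intensity(combined_image, vertebra_mask, reference_mean=1000.0):
    """Rescale the combined image so that the mean intensity within the
    (boolean) vertebra mask equals `reference_mean`."""
    current_mean = combined_image[vertebra_mask].mean()
    return combined_image * (reference_mean / (current_mean + 1e-9))
```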

[0074] The foregoing examples, demonstrations, and process steps such as those that may be performed by the system may be implemented by suitable code on a processor-based system, such as a general-purpose or special-purpose computer. It should also be noted that different implementations of the present technique may perform some or all of the steps described herein in different orders or substantially concurrently, that is, in parallel. Furthermore, the functions may be implemented in a variety of programming languages, including but not limited to C++ or Java. Such code may be stored or adapted for storage on one or more tangible, machine-readable media, such as on data repository chips, local or remote hard disks, optical disks (that is, CDs or DVDs), memory, or other media, which may be accessed by a processor-based system to execute the stored code. Note that the tangible media may comprise paper or another suitable medium upon which the instructions are printed. For instance, the instructions may be electronically captured via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a data repository or memory.

[0075] Systems and methods described hereinabove facilitate the automated combining of sub-images representative of different portions of a region of interest to form a single continuous image, thereby enhancing clinical workflow. Moreover, the systems and methods of the present disclosure are configured to determine the overlap region based on labels associated with features in the sub-images. For instance, in case of images of the vertebral column, the overlap regions may be determined based on the labels of vertebrae of the vertebral column. Moreover, as the overlap region is determined based on the labels, the overlap region may be a subset of a complete rectangular area that is common between adjacent sub-images. Specifically, the overlap region may be representative of a bounded area around the overlapping vertebrae. Accordingly, the embodiments of the present disclosure may use the relatively small overlap region to align the sub-images, thereby enhancing the accuracy and reducing the time consumed to combine the sub-images into the combined image.

[0076] A skilled artisan will recognize the interchangeability of various features from different embodiments. Similarly, the various method steps and features described, as well as other known equivalents for each such method and feature, can be mixed and matched by one of ordinary skill in this art to construct additional assemblies and techniques in accordance with the principles of this disclosure.

[0077] While only certain features of the invention have been illustrated and described herein, many modifications and changes may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.

CLAIMS:

1. A method, comprising:

obtaining labels corresponding to one or more features in a plurality of sub-images;

determining one or more overlap regions between adjacent sub-images in the plurality of sub-images based on the labels of the one or more features;

aligning the adjacent sub-images based on the determined one or more overlap regions; and

combining the plurality of sub-images to form a continuous image based on the aligned adjacent sub-images.

2. The method of claim 1, wherein the plurality of sub-images represent different portions of a region of interest comprising a vertebral column, vasculature, or a combination thereof.

3. The method of claim 2, wherein the one or more features in the plurality of sub-images comprise vertebrae or inter-vertebral discs.

4. The method of claim 3, wherein determining the one or more overlap regions between adjacent sub-images comprises:

identifying one or more vertebrae or one or more inter-vertebral discs commonly present in the adjacent sub-images; and

determining boundaries of the one or more vertebrae or the one or more inter-vertebral discs commonly present in the adjacent sub-images.

5. The method of claim 4, wherein determining the overlap regions further comprises determining regions in the adjacent sub-images that substantially coincide with the determined boundaries of the one or more vertebrae or the one or more inter-vertebral discs commonly present in the adjacent sub-images.

6. The method of claim 1, wherein aligning the adjacent sub-images comprises registering the adjacent sub-images based on the determined one or more overlap regions.

7. The method of claim 6, wherein registering the adjacent sub-images comprises determining an alignment score, wherein the alignment score is based on a degree of alignment between the overlap regions of the adjacent sub-images.

8. The method of claim 1, wherein combining the plurality of sub-images comprises:

selecting a first set of pixels corresponding to a first region in one sub-image, wherein the first region encompasses a region above a determined level;

selecting a second set of pixels corresponding to a second region in the adjacent sub-image, wherein the second region encompasses a region below a determined level; and

combining the first set of pixels and the second set of pixels about the determined level to generate the combined image.

9. The method of claim 1, wherein combining the plurality of sub-images comprises using a weighted average of intensity of pixels in the one or more overlap regions to generate the combined image.

10. A system, comprising:

an input unit configured to retrieve a plurality of sub-images and labels associated with the plurality of sub-images;

a computing unit configured to:

determine overlap regions between adjacent sub-images of the plurality of sub-images based on the labels associated with the plurality of sub-images;

align the adjacent sub-images based on the determined overlap regions of the adjacent sub-images; and

a combining unit configured to combine the plurality of sub-images based on the aligned sub-images to generate a continuous image.

11. The system of claim 10, further comprising a labelling unit configured to assign the labels to one or more features in the plurality of sub-images.

12. The system of claim 11, wherein the one or more features in the plurality of sub-images comprise vertebrae or inter-vertebral discs.

13. The system of claim 12, wherein the computing unit is configured to:

identify one or more vertebrae commonly present in the adjacent sub-images; and

determine boundaries of the one or more vertebrae commonly present in the adjacent sub-images.

14. The system of claim 13, wherein the computing unit is further configured to determine the overlap regions in the adjacent sub-images such that boundaries of the overlap region substantially coincide with the determined boundaries of the one or more vertebrae commonly present in the adjacent sub-images.

15. The system of claim 13, wherein the computing unit is configured to align the overlap regions of the adjacent sub-images.

16. The system of claim 15, wherein the computing unit is configured to compute alignment scores based on a degree of alignment between the overlap regions of the adjacent sub-images.

17. The system of claim 13, wherein the combining unit is configured to generate the combined image based on a first set of pixels corresponding to one sub-image and a second set of pixels corresponding to an adjacent sub-image, a weighted average intensity of pixels in the overlap regions of the plurality of sub-images, or a combination thereof.

18. An imaging system, comprising:

an acquisition subsystem configured to acquire a plurality of sub-images;

a processing subsystem operatively coupled to the acquisition subsystem and comprising a merging platform configured to combine the plurality of sub-images into a continuous image, wherein the merging platform comprises:

an input unit configured to retrieve the plurality of sub-images and labels associated with the plurality of sub-images;

a computing unit configured to:

determine overlap regions between adjacent sub-images of the plurality of sub-images based on the labels associated with the plurality of sub-images;

align the adjacent sub-images based on the determined overlap regions of the adjacent sub-images; and

a combining unit configured to combine the plurality of sub-images based on the aligned sub-images to generate the continuous image.

19. The imaging system of claim 18, wherein the processing subsystem comprises a labelling platform configured to assign labels to one or more features in the plurality of sub-images.

20. The imaging system of claim 18, wherein the merging platform further comprises a labelling unit configured to assign labels to one or more features in the plurality of sub-images.

Documents

Application Documents

# Name Date
1 5555-CHE-2012 POWER OF ATTORNEY 31-12-2012.pdf 2012-12-31
2 5555-CHE-2012 FORM-3 31-12-2012.pdf 2012-12-31
3 5555-CHE-2012 FORM-2 31-12-2012.pdf 2012-12-31
4 5555-CHE-2012 FORM-18 31-12-2012.pdf 2012-12-31
5 5555-CHE-2012 FORM-1 31-12-2012.pdf 2012-12-31
6 5555-CHE-2012 DRAWINGS 31-12-2012.pdf 2012-12-31
7 5555-CHE-2012 DESCRIPTION (COMPLETE) 31-12-2012.pdf 2012-12-31
8 5555-CHE-2012 CORRESPONDENCE OTHERS 31-12-2012.pdf 2012-12-31
9 5555-CHE-2012 CLAIMS 31-12-2012.pdf 2012-12-31
10 5555-CHE-2012 ABSTRACT 31-12-2012.pdf 2012-12-31
11 5555-CHE-2012 FORM-1 06-05-2013.pdf 2013-05-06
12 5555-CHE-2012 CORRESPONDENCE OTHERS 06-05-2013.pdf 2013-05-06
13 abstract5555-CHE-2012.jpg 2014-05-13
14 5555-CHE-2012-FER.pdf 2018-10-15
15 5555-CHE-2012-AbandonedLetter.pdf 2019-04-22

Search Strategy

1 2018-10-15-converted_15-10-2018.pdf