
System And Method For Automatic View Identification

Abstract: A method for automated view identification is disclosed. The method includes generating an image of a target region of a subject using a selected view. The method further includes transforming the image into a polar image. Additionally, the method includes computing a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image. Moreover, the method includes constructing one or more feature vectors based on the plurality of histograms. The method also includes determining if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view. FIG. 2


Patent Information

Application Number: 5471/CHE/2012
Filing Date: 27 December 2012
Publication Number: 19/2016
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

GENERAL ELECTRIC COMPANY
1 RIVER ROAD, SCHENECTADY, NEW YORK 12345

Inventors

1. SHRIRAM, KRISHNA SEETHARAM
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
2. SUBRAMANIAN, NAVNEETH
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
3. AGRAWAL, DHRUV
28 AGRASEN NAGAR, UDAIPOLE, UDAIPUR, RAJASTHAN 313 001

Specification

SYSTEM AND METHOD FOR AUTOMATIC VIEW IDENTIFICATION

BACKGROUND

[0001] Embodiments of the present disclosure relate generally to diagnostic imaging, and more particularly to systems and methods for automatic view identification during ultrasound imaging.

[0002] Medical diagnostic ultrasound is an imaging modality that employs ultrasound waves to probe acoustic properties of biological tissues and generate corresponding images. Particularly, diagnostic ultrasound systems are used to visualize muscles, tendons and internal organs such as the heart and liver to assess their size, structure and any pathological lesions using real-time diagnostic images. Further, ultrasound systems also find use in therapeutics, where an ultrasound probe is used to guide interventional procedures such as biopsies.

[0003] Particularly, when imaging a heart using a two-dimensional (2D) ultrasound probe, different locations and/or angular orientation of the probe may be used to image different views of the heart. Diagnostic analysis of cardiac structures, however, entails acquisition of projection data using certain standard views for imaging specific regions of the anatomy. By way of example, a parasternal long axis (PLAX) view may provide better visualization of the heart for assessing left ventricular contractions, pericardial effusion and right ventricular strain. However, for obtaining cross-sectional images of the heart that aid in diagnosing mitral stenosis and congenital heart disease, a parasternal short axis (SAX) view may be preferred.

[0004] Clinical diagnosis and treatment often relies on image-derived parameters of a region of interest (ROI) such as the heart. Reconstruction of the ultrasound images for use in the clinical diagnosis typically entails certain assumptions corresponding to the specific view (for example, a PLAX view and/or a SAX view) used for acquiring the projection data. Accordingly, use of the correct view allows for accurate measurement of projection data for reconstructing high quality images of structures of interest in the ROI. However, identification of the correct view may be a challenge due to a high degree of variability in local texture in the ultrasound images resulting from factors such as speckle noise and acoustic shadow. Particularly, the view identification may be further confounded by use of shape descriptors that derive information from a local neighborhood and have to overcome challenges of poor contrast between target structures and surrounding tissues.

[0005] Although experienced radiologists with extensive training and practice may be capable of acquiring the desired view within a few tens of seconds, view recognition that relies on manual intervention may become a bottleneck for large-scale deployment of image analysis techniques. Additionally, in the absence of relevant experience, view recognition by novice users may often be inconsistent and/or inaccurate, which in turn may lead to incorrect image reconstruction and/or erroneous diagnosis.

[0006] Accordingly, conventional ultrasound systems are known to employ certain automatic view recognition approaches. These approaches may include, for example, use of constellation-of-parts, spatial multi-resolution spline filtering of the training images and/or a hierarchical classification strategy. Another view recognition approach uses intensity histograms for view classification using a multilayer perceptron where a number of hidden layer units are chosen empirically. Certain other view recognition approaches use Haar-like features, motion information, active shape models and/or Scale Invariant Feature Transform (SIFT) for automatic view recognition. Most of these conventional view recognition techniques increase sensitivity to noise and may be unreliable in view of extreme appearance variability in the ultrasound images. Conventional view recognition techniques, thus, provide insufficient indication of the view, especially to novice users.

BRIEF DESCRIPTION

[0007] In accordance with aspects of the present disclosure, a method is disclosed. The method includes generating an image of a target region of a subject using a selected view. The method further includes transforming the image into a polar image. Additionally, the method includes computing a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image. Moreover, the method includes constructing one or more feature vectors based on the plurality of histograms. The method also includes determining if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view. A non-transitory computer readable medium that stores instructions executable by one or more processors to perform a method for automated view identification is also presented.

[0008] In accordance with further aspects of the present disclosure, a system is presented. The system includes an image acquisition device configured to image a target region of a subject. The system also includes a processing unit configured to generate an image of a target region of a subject using a selected view, transform the image into a polar image, compute a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image, construct one or more feature vectors based on the plurality of histograms, and determine if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view. Further, the system may include an input-output device configured to provide feedback for moving the image acquisition device along a desired direction to achieve the desired view based on the comparison.

DRAWINGS

[0009] These and other features, aspects, and advantages of the present technique will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

[0010] FIG. 1 is a schematic representation of an exemplary ultrasound imaging system, in accordance with aspects of the present disclosure;

[0011] FIG. 2 is a flow diagram illustrating an exemplary method for automatic view recognition, in accordance with aspects of the present disclosure;

[0012] FIG. 3 is a schematic representation of exemplary ultrasound images that may be transformed into corresponding polar images, in accordance with aspects of the present disclosure;

[0013] FIG. 4 is an exemplary polar image generated from an ultrasound image acquired using a PLAX view, in accordance with aspects of the present disclosure;

[0014] FIG. 5 is a diagrammatical representation of exemplary histograms of oriented gradients (HOGs) computed for a plurality of blocks in a polar image corresponding to a PLAX view, in accordance with aspects of the present disclosure;

[0015] FIG. 6 is an exemplary polar image generated from an ultrasound image acquired using a SAX view, in accordance with aspects of the present disclosure;

[0016] FIG. 7 is a diagrammatical representation of exemplary HOGs computed for a plurality of blocks in a polar image corresponding to a SAX view, in accordance with aspects of the present disclosure; and

[0017] FIG. 8 is a graphical representation depicting an exemplary distribution of feature vectors that correspond to PLAX and SAX images in a histogram of oriented gradients (HOG) feature space, in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0018] The following description presents systems and methods for automatic view recognition during diagnostic imaging. Particularly, certain embodiments illustrated herein describe the systems and the methods for automatic view recognition in ultrasound imaging systems using a HOG-based feature extractor and a classifier that uses a supervised self-learning approach.

[0019] Although the present disclosure is made with reference to ultrasound imaging, various embodiments described herein may also be implemented in connection with other types of medical imaging systems. By way of example, these systems may include magnetic resonance imaging (MRI) systems, computed tomography (CT) systems, positron emission tomography (PET) systems, single photon emission computed tomography (SPECT) systems and/or optical computed tomography systems. Alternatively, embodiments of the present disclosure may be employed in hybrid systems such as PET-CT systems, MR-PET systems, CT-MR systems and/or systems that include automatic view recognition for monitoring targeted drug and/or gene delivery.

[0020] In certain embodiments, the present systems and methods may also be used for non-medical purposes, such as for nondestructive testing of elastic materials such as plastics and aerospace composites that may be suitable for ultrasound imaging and airport screening. An exemplary environment that is suitable for practicing various implementations of the present system is described in the following sections with reference to FIG. 1.

[0021] FIG. 1 illustrates an ultrasound system 100 for use in diagnostic imaging and/or providing therapy to one or more target locations. The target locations may include, for example, biological tissues such as cardiac tissues, liver tissues, breast tissues, prostate tissues, thyroid tissues, lymph nodes, vascular structures and/or other objects suitable for ultrasound imaging. For discussion purposes, embodiments of the present disclosure may be described with reference to automatic classification and recognition of PLAX and SAX views for use in imaging a cardiac region of a patient. However, the system 100 may be configured to automatically classify and recognize other views or scan-planes such as an apical two-chamber or four-chamber view of the heart, or views of a fetal head, a fetal abdomen or vascular landmarks during imaging.

[0022] Accordingly, in certain embodiments, the system 100 may include transmit circuitry 102 that may be configured to generate a pulsed waveform to drive an array 104 of transducer elements 106, for example, piezoelectric crystals within an image acquisition device such as a transducer probe 108. The transducer probe 108 may be configured to emit ultrasonic pulses into a body or volume of a subject (not shown). Although FIG. 1 illustrates the transducer probe 108 as an external device suitable for positioning over the body of the patient, in certain other embodiments, the image acquisition device may include a minimally-invasive interventional device, for example, an intravascular ultrasound catheter, suitable for use within the body of the patient.

[0023] In one embodiment, at least a portion of the ultrasonic pulses emitted by the transducer probe 108 may backscatter from the target locations, for example, including adipose tissue, muscular tissue, connective tissue, blood cells, veins or objects within the body such as a catheter or needle to produce echoes that return to the transducer array 104. In certain embodiments, the returning echoes are received by a receive circuitry 110 for further processing. To that end, in one embodiment, the receive circuitry 110 may be coupled to a beamformer 112 that may be configured to process the received echoes and output corresponding radio frequency (RF) signals.

[0024] In certain embodiments, the resulting RF signals may be provided to a processing unit 114 that may be configured to process the RF signals according to a plurality of selectable ultrasound modalities in real time and/or offline mode. Accordingly, the processing unit 114 may include devices such as one or more general-purpose or application-specific processors, digital signal processors, microcomputers, microcontrollers, Application Specific Integrated Circuits (ASICs) and/or Field Programmable Gate Arrays (FPGA).

[0025] Furthermore, the processing unit 114 may be configured to provide control and timing signals for controlling a delivery sequence of different pulses for imaging a target region of a biological tissue using a specific view. In certain embodiments, the processing unit 114 may be configured to store the delivery sequence, frequency, time delay, beam intensity and/or other imaging system parameters corresponding to the specific view in a memory device 116 for further processing. To that end, the memory device 116 may include storage devices such as a random access memory, a read only memory, a disc drive, solid-state memory device and/or a flash memory.

[0026] In one embodiment, the processing unit 114 may be configured to control the probe 108 or, more particularly, the transducer elements 106 to acquire a desired view by directing one or more groups of pulse sequences toward the target tissues. Further, the processing unit 114 may be configured to track the displacements in the ROI of the target tissues in response to the transmitted pulses to determine corresponding tissue characteristics. Particularly, the processing unit 114 may be configured to evaluate the acquired ultrasound information and output patient data including diagnostic and therapeutic ultrasound images and/or video frames for review, diagnosis, analysis and/or treatment. Alternatively, the processing unit 114 may be configured to store the images and/or video frames of the ROI for later review and analysis or communicate the images and/or video frames to another location for further review.

[0027] As previously noted, specific views may be employed for optimally imaging a desired ROI in the target tissues. Accordingly, in one embodiment, one or more imaging parameters including the desired view for imaging a specific ROI may be selected based on operator input. To that end, in certain embodiments, the processing unit 114 may be further coupled to one or more user input-output devices 118 such as a display device 120, a keyboard, a touchscreen, a microphone, a mouse, one or more buttons and/or switches for receiving commands and inputs from an operator.

[0028] In one embodiment, the processing unit 114 may be configured to process the RF signal data based on an operator-selected view and generate corresponding images and/or video frames for display on the display device 120. The display device 120 may be local or a remote device communicatively coupled to the processing unit 114. In certain embodiments, the display device 120 may include a graphical user interface (GUI) for providing the operator with configurable options for imaging the target locations. By way of example, the configurable options may include a selectable view, a target ROI, a delay profile, a designated pulse sequence, a desired pulse repetition frequency and/or other suitable system settings.

[0029] Although, in certain embodiments, the operator may input and/or select the desired view for imaging a specific region of the patient, as previously noted, even slight changes in a location and/or angular orientation of the ultrasound probe 108 may alter the view. Additionally, identification of the desired view from a plurality of possible views may be confounded by speckle noise and low contrast. Accordingly, embodiments of the system 100 allow for automatic recognition of the desired view in real-time.

[0030] Particularly, in one embodiment, the processing unit 114 may include an automatic view recognition unit 122 configured to identify the desired view. In certain embodiments, the automatic view recognition unit 122 may be configured to identify the desired view using a supervised learning approach. By way of example, the automatic view recognition unit 122 may be trained to identify the desired view using a plurality of training images acquired using the desired view. As used herein, the term "training images" corresponds to ultrasound images that are acquired using the desired view and are used to train the automatic view recognition unit 122. Particularly, the training images are representative of the desired view and may be used to train the automatic view recognition unit 122 to identify if a view of an ultrasound image acquired in real-time matches the desired view. To that end, in one embodiment, the training images may be labeled as being representative of the desired view that is used to acquire the training images. Accordingly, the training images may include a label that is indicative of the desired view.

[0031] Additionally, in certain embodiments, a plurality of control images may also be supplied to the automatic view recognition unit 122. As used herein, the term "control images" may correspond to ultrasound images acquired using a view that is different from the desired view. Particularly, the control images may be used by the automatic view recognition unit 122 to differentiate between the desired view and the other views that are different from the desired view.

[0032] To that end, in one embodiment, the automatic view recognition unit 122 may further include a feature extractor 124. The feature extractor 124 may be configured to construct local and global feature vectors using a histogram of oriented gradients corresponding to one or more regions in the training images. Additionally, the automatic view recognition unit 122 may include a classifier 126. In one embodiment, the classifier 126 may be based on a support vector machine (SVM) that provides supervised learning models with associated learning algorithms. The learning algorithms analyze the feature vectors corresponding to the training images and recognize patterns for use in classifying the images into different views. The classifier 126 may be configured to correlate the identified patterns with the desired view. An exemplary classification of the PLAX and SAX images by the classifier 126 will be described in detail with reference to FIG. 8. Further, the identified patterns may be stored, for example, in the memory device 116 for comparison with feature vectors derived from selected views that are acquired by a radiologist in real-time. If a selected view does not match a desired view, the system 100 may be configured to provide audio and/or visual instructions to the operator to move the probe 108 in a determined direction to achieve the desired view.

[0033] Use of the automatic view recognition unit 122, thus, may allow radiologists to identify a desired view from a plurality of possible views with greater accuracy and consistency. Automatic and accurate view recognition capability may extend use of the ultrasound imaging system 100 to novice users at point-of-care systems. Additionally, the novice users may also benefit from the supervised learning approach of the classifier 126, where the classifier 126 may be configured to learn to differentiate between new categories of images. Particularly, the classifier 126 may be configured to learn to differentiate between the new categories of images based on labeled images corresponding to the new categories. The functioning of the automatic view recognition unit 122 for simplifying and automating the clinical workflow will be described in greater detail with reference to FIGs. 2-6.

[0034] FIG. 2 illustrates a flowchart 200 depicting an exemplary method for automatic view identification during diagnostic imaging. The exemplary method may be described in a general context of computer executable instructions on a computing system or a processor. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types. The exemplary method may also be practiced in a distributed computing environment where optimization functions are performed by remote processing devices that are linked through a wired and/or wireless communication network. In the distributed computing environment, the computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.

[0035] Further, in FIG. 2, the exemplary method is illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, or combinations thereof. In the context of software, the blocks represent computer instructions that, when executed by one or more processing subsystems, perform the recited operations. The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein. For discussion purposes, the exemplary method will be described in FIG. 2 with reference to the elements of FIG. 1.

[0036] Particularly, FIG. 2 illustrates training and real-time view recognition phases of the exemplary method. In certain embodiments, the training phase may entail training a view recognition system, such as the automatic view recognition unit 122 of FIG. 1 using a plurality of training images. The training images may include labels that correspond to specific views that are used to acquire the training images. By way of example, the training images may include labels such as, but not limited to, a PLAX view and/or a SAX view. The training images, thus, may serve as a training dataset that may be analyzed to generate an inference function. The inference function, in turn, may be employed to evaluate a new image acquired by the radiologist in real-time to identify a view associated with the new image. By way of example, the inference function may be used to determine if the new image was acquired using the PLAX view or the SAX view. To that end, in a presently contemplated embodiment, the automatic view recognition system 122 may include a support vector machine (SVM) classifier. The classifier may be configured to aid the automatic view recognition unit 122 in differentiating between the different imaging views using a supervised and/or a self-learning approach. An example of a supervised learning approach to differentiate between the PLAX and SAX images will be described in detail with reference to FIG. 8.

[0037] The method starts at step 202, where a plurality of training images of the target region acquired using a desired view may be input to the classifier. Particularly, in one embodiment, the training images may be labeled by a radiologist to indicate that the training images were acquired using the desired view. Although the present disclosure is described with reference to automatic view recognition of PLAX and SAX B-mode echocardiograms, embodiments of the present classifier may also be used for classifying a plurality of views corresponding to different imaging modalities.

[0038] Moreover, in one embodiment, a plurality of control images may be supplied to the automatic view recognition unit 122 for training the classifier to differentiate between the training images corresponding to the desired view and other views. By way of example, if the desired view corresponds to a PLAX view, the plurality of control images may include images acquired using the SAX view. Accordingly, the control images may be employed for training the classifier to differentiate between the training images corresponding to the desired PLAX view and other views. The training images and/or the control images may also undergo certain preprocessing steps before being classified, in certain embodiments. By way of example, the training images and/or the control images may be down sampled using a Gaussian low pass filter to reduce a corresponding data rate for efficient computation. The down sampling may also allow the training images and/or the control images to be adapted for a smaller display, for example, if the ultrasound system is implemented in a portable device. Additionally, the training images and/or the control images may be filtered, for example, using a median filter to reduce any salt-pepper noise present in the training images and/or the control images.
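As a rough illustration only, the preprocessing described above may be sketched as follows in Python using NumPy and SciPy; the filter parameters (Gaussian sigma, downsampling factor and median window size) are assumptions, since the specification does not state them:

```python
import numpy as np
from scipy import ndimage

def preprocess(image, sigma=1.0, factor=2, median_size=3):
    """Gaussian low-pass filter, down sample to reduce the data rate,
    then median filter to suppress salt-and-pepper noise."""
    smoothed = ndimage.gaussian_filter(image.astype(np.float64), sigma=sigma)
    downsampled = smoothed[::factor, ::factor]  # reduced data rate
    return ndimage.median_filter(downsampled, size=median_size)
```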

[0039] Further, at step 204, the plurality of training images may be transformed into corresponding polar images. More specifically, the plurality of training images may be transformed into corresponding rectangular beam space equivalents. Typically, the training images correspond to a conical region or a sector of a circle. Accordingly, for transforming a selected training image into a corresponding polar image, each pixel in the selected training image may be indexed by a radius (r) and an angle (θ) about an axis that bisects a sector corresponding to the selected training image. Particularly, each pixel in the selected training image may be indexed by r and θ to generate a rectangular polar image.
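A minimal sketch of this polar transformation, assuming the sector apex sits at the top center of the image and the sector opens symmetrically about the vertical axis; the apex location, half-angle and output grid size are illustrative assumptions rather than values from the specification:

```python
import numpy as np

def to_polar(image, n_r=128, n_theta=64, half_angle=np.deg2rad(45)):
    """Index each pixel by radius r and angle theta about the axis that
    bisects the sector, producing a rectangular (beam-space) polar image."""
    h, w = image.shape
    r = np.linspace(0.0, h - 1.0, n_r)                   # radial samples
    theta = np.linspace(-half_angle, half_angle, n_theta)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    rows = np.clip(np.round(rr * np.cos(tt)).astype(int), 0, h - 1)
    cols = np.clip(np.round(w / 2 + rr * np.sin(tt)).astype(int), 0, w - 1)
    return image[rows, cols]  # n_r x n_theta rectangular polar image
```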

[0040] In one embodiment, the polar images may be divided into one or more cells and/or blocks, where each of the blocks may include a plurality of cells. Generally, the blocks may be a rectangular portion of the polar image of a determined size in pixels. By way of example, in one embodiment, the polar image may be divided into four blocks or quadrants for further processing. Division of the polar image into different blocks or quadrants may localize the information into smaller packets, and thus, may allow for feature extraction in a localized manner.

[0041] Subsequently, at step 206, a plurality of HOGs corresponding to the one or more blocks in the polar image may be computed. Typically, a HOG provides structural information derived from an image. To that end, a set of local histograms corresponding to each polar image may be computed. It may be noted that each of the local histograms corresponds to a local part, for example, a block of the image. In addition, each of these local histograms may be computed based on a count of occurrences of gradient orientations in a corresponding local part of the image. Additionally, a magnitude of the gradients corresponding to the one or more blocks in the polar image may also be determined. In one embodiment, the magnitude and the orientation of the gradients corresponding to the one or more blocks in the polar image may be determined, for example, by filtering the polar image using at least two one-dimensional filters. Thus, a HOG may be built for a particular block in the polar image based on the magnitude and the orientation of the gradients corresponding to that block. In certain embodiments, however, more than one HOG may be computed for each block, where each HOG corresponds to a group of cells constituting a portion of the block.

[0042] In one example, one or more HOGs for each block may be built by accumulating votes into bins for each gradient orientation. In certain embodiments, these votes may be weighted. Particularly the votes may be weighted, for example, by a magnitude of a gradient before being accumulated into the bins. In one example, weighting of the votes may allow for assignment of greater significance to desired structures of interest. By way of example, edges of a target organ that is under evaluation may be assigned a higher weight, whereas other regions in the target organ may be assigned a lower weight. The higher weight assigned to the edges allows for enhanced visualization of the edges. The enhanced visualization of the edges in the polar image may allow for accurate characterization of the structures of interest, thereby enhancing identification of a pathological condition associated with the patient.
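The block division of paragraph [0040] and the magnitude-weighted histogram voting of paragraphs [0041]-[0042] might be sketched as follows; the 1-D derivative filters [-1, 0, 1], the unsigned orientation range and the bin count of eighteen (borrowed from paragraph [0062]) are assumptions made for illustration:

```python
import numpy as np

def quadrant_blocks(polar_image):
    """Divide the polar image into four non-overlapping blocks (quadrants),
    localizing the information as described in paragraph [0040]."""
    h, w = polar_image.shape
    return [polar_image[:h // 2, :w // 2], polar_image[:h // 2, w // 2:],
            polar_image[h // 2:, :w // 2], polar_image[h // 2:, w // 2:]]

def block_hog(block, n_bins=18):
    """HOG for one block: gradients from the 1-D derivative filters
    [-1, 0, 1] along each axis; votes accumulated into orientation bins
    are weighted by gradient magnitude."""
    block = block.astype(np.float64)
    gx = np.zeros_like(block)
    gy = np.zeros_like(block)
    gx[:, 1:-1] = block[:, 2:] - block[:, :-2]   # horizontal gradient
    gy[1:-1, :] = block[2:, :] - block[:-2, :]   # vertical gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % np.pi     # unsigned, in [0, pi)
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0.0, np.pi),
                           weights=magnitude)    # magnitude-weighted votes
    return hist
```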

[0043] However, at times, the polar images may include illumination variations, which may result in artifacts in reconstructed images. In accordance with aspects of the present disclosure, the one or more HOGs corresponding to each block in the polar image may be normalized to minimize occurrence of artifacts in the reconstructed images. Particularly, in one example, the one or more HOGs corresponding to a particular block may be locally normalized based on values of HOGs corresponding to neighboring blocks. To that end, a normalization factor corresponding to a particular block may be computed. Subsequently, the HOGs corresponding to the block may be normalized locally using the computed normalization factor.

[0044] It may be noted that different normalization schemes may be employed for a vector V that includes all histograms of a given block to overcome illumination and/or other biases. Particularly, in one embodiment, a normalization factor (nf) may be obtained for the vector V of the given block, for example, using an L1 normalization, as depicted in equation (1):

nf = V / (||V||_1 + ε)    (1)

where ε corresponds to a regularization constant.

[0045] Alternatively, the normalization factor may be obtained for the vector V, for example, using an L2 normalization, as depicted in equation (2):

nf = V / sqrt(||V||_2^2 + ε^2)    (2)
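Equations (1) and (2) translate directly into code; a minimal sketch, where the value of the regularization constant ε is an assumption:

```python
import numpy as np

def l1_normalize(v, eps=1e-6):
    """Equation (1): nf = V / (||V||_1 + eps)."""
    v = np.asarray(v, dtype=np.float64)
    return v / (np.abs(v).sum() + eps)

def l2_normalize(v, eps=1e-6):
    """Equation (2): nf = V / sqrt(||V||_2^2 + eps^2)."""
    v = np.asarray(v, dtype=np.float64)
    return v / np.sqrt(np.dot(v, v) + eps * eps)
```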

[0046] Subsequent to the normalization, one or more feature vectors corresponding to the plurality of normalized histograms may be constructed, as indicated by step 208. Particularly, the HOG feature vectors corresponding to each block may be built by concatenating normalized HOGs corresponding to determined groups of cells in the block. In one embodiment, the normalized HOGs corresponding to the determined groups of cells may be concatenated to build a local feature vector that characterizes a structure of interest in a localized region. In an alternative embodiment, however, the normalized HOGs corresponding to all the blocks may be concatenated to build a global feature vector for identifying characteristics of an overall view corresponding to the polar image.

[0047] In certain embodiments, the normalized HOGs may be converted into a string of numbers. To that end, in one example, a feature vector corresponding to the plurality of bins that accumulate votes for gradient orientations in a particular block may be constructed. This feature vector may be representative of the string of numbers for the particular block. Similar strings of numbers generated for all blocks may be concatenated to build the global feature vector. The global feature vector, in turn, may be input to the classifier.
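A sketch of this concatenation step, reusing the quadrant_blocks, block_hog and l2_normalize helpers from the earlier sketches; each normalized per-block histogram plays the role of the "string of numbers" described above:

```python
import numpy as np

def global_feature_vector(polar_image, n_bins=18):
    """Concatenate the normalized per-block HOGs into one global
    descriptor characterizing the overall view of the polar image."""
    return np.concatenate([l2_normalize(block_hog(b, n_bins))
                           for b in quadrant_blocks(polar_image)])
```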

[0048] Further, at step 210, one or more patterns in the one or more feature vectors corresponding to the desired view may be identified. By way of example, the patterns may correspond to one or more identifiable characteristics corresponding to the desired view used to acquire an ultrasound image. In one embodiment, the patterns corresponding to the desired view may be identified by the SVM classifier. It may be noted that the SVM classifier has been trained using the plurality of labeled images acquired using the desired view, for example, the PLAX view. Particularly, the SVM classifier may be configured to construct a hyperplane to identify the patterns such that the images acquired using the PLAX view are clearly distinguished from other images, for example acquired using the SAX view. An exemplary construction of the hyperplane for identifying patterns corresponding to the desired view will be discussed in greater detail with reference to FIG. 8. The identified patterns may then be correlated with the desired view.

[0049] Further, at step 212, the feature vectors and/or the identified patterns corresponding to the desired view may be stored in a storage device such as the memory device 116 of FIG. 1. Particularly, the stored information may find use as a reference for automatically comparing feature vectors derived from PLAX and SAX images acquired in real-time with the stored feature vectors corresponding to the desired view.

[0050] Steps 202-212 are generally representative of the training phase of the exemplary method for automated view identification. Consequent to the processing of the plurality of the training images, feature vectors and/or identified patterns corresponding to a plurality of desired views may be generated and stored. In particular, the classifier may be trained to identify the various views used to acquire ultrasound images in real-time.

[0051] Subsequent to the training phase, the method may be used to image a target region of the subject using the desired view. In particular, when imaging the heart of the patient in real-time, the radiologist may generate an ultrasound image of the patient using a selected view as indicated by step 214. It may be noted that the selected view may or may not be representative of the desired view. For correctly identifying the desired view in real-time, the ultrasound image may be processed as discussed with reference to steps 204-208. Particularly, the ultrasound image may be processed to construct a corresponding feature vector that characterizes the ultrasound image, as indicated by step 216. Accordingly, the feature vector corresponds to the selected view.

[0052] Subsequently, at step 218, a check may be carried out to determine if the selected view matches the desired view. In one example, the derived feature vector corresponding to the selected view may be compared with one or more feature vectors and/or stored patterns corresponding to the desired view.

[0053] Furthermore, at step 218, if it is determined that the selected view does not match the desired view, it may be desirable to provide a user with feedback to move the image acquisition device in a desired direction to achieve the desired view, as indicated by step 220. To that end, the automatic view recognition unit may be configured to generate an indicator to the user to move the probe in the desired direction. By way of example, the indicator may be provided through an output device such as the display device 120 or the input-output devices 118 of FIG. 1. Particularly, the indicator may be provided through audio and/or visual signals, such as through audible and/or on-screen instructions to indicate a direction of recommended movement. In one embodiment, the desired direction for achieving the desired view may be determined based on a determined difference between spatial coordinates of the selected view and stored spatial coordinates of the desired view. By way of example, if the selected view corresponds to a SAX view and the desired view corresponds to the PLAX view, the user may be provided specific instructions to rotate the probe to face a shoulder of the patient. However, if the selected view corresponds to the desired view, the user may be notified that the desired view for imaging the target region has been achieved.
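The comparison-and-feedback logic of steps 218-220 might look like the following sketch; the instruction strings are illustrative only, since the specification derives the recommended direction from stored spatial coordinates of the desired view:

```python
def guidance(selected_view, desired_view):
    """Map the view comparison of steps 218-220 to operator feedback,
    e.g., audible and/or on-screen instructions."""
    if selected_view == desired_view:
        return "Desired view achieved."
    if (selected_view, desired_view) == ("SAX", "PLAX"):
        return "Rotate the probe to face the patient's shoulder."
    return ("Adjust the probe: current view is %s, desired view is %s."
            % (selected_view, desired_view))
```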

[0054] Use of automatic view recognition, as described herein, thus, simplifies the clinical workflow and allows novice users to be more productive. Certain examples of polar transformation of ultrasound images, computation of HOGs and identification of patterns in feature extractors for use in automatic view recognition will be described in greater detail with reference to FIGs. 3-8.

[0055] FIG. 3 illustrates a schematic representation 300 of exemplary ultrasound images and their corresponding polar images. Particularly, FIG. 3 depicts ultrasound images 302 and 304 corresponding to the PLAX view and the SAX view, respectively. The ultrasound images 302 and 304 depict acquired ultrasound information represented in a Cartesian coordinate system. Indexing of pixels in the ultrasound images 302 and 304 along r and θ may aid in generating corresponding polar images 306 and 308 corresponding to the PLAX view and the SAX view, respectively. Transforming the ultrasound images 302 and 304 to the corresponding polar images 306 and 308 along r and θ allows representation of the ultrasound information in a format suitable for use in constructing a corresponding feature vector, in accordance with aspects of the present disclosure.

[0056] FIG. 4 illustrates an example of a polar image 400 generated from an ultrasound image acquired using the PLAX view of a cardiac region of a patient. In one example, the polar image 400 may be generated from the ultrasound image, as described with reference to step 204 of FIG. 2 and FIG. 3. As depicted in FIG. 4, the polar image 400 may be divided into a plurality of blocks 402, 404, 406 and 408 for computing local and/or global HOGs indicative of local and/or global structural characteristics of the cardiac region. Although FIG. 4 illustrates four blocks 402, 404, 406 and 408, in certain embodiments, the polar image 400 may be divided into a greater or smaller number of blocks.

[0057] Moreover, FIG. 5 is a diagrammatical representation 500 of exemplary HOGs 502, 504, 506 and 508 computed for the four blocks 402, 404, 406 and 408, respectively, of the polar image 400 of FIG. 4. Particularly, the HOGs 502, 504, 506 and 508 may be computed by plotting gradient orientations in corresponding blocks 402, 404, 406 and 408 along an X-axis. Further, a count of occurrences of the gradient orientations in the corresponding blocks 402, 404, 406 and 408 is plotted along a Y-axis to compute the HOGs 502, 504, 506 and 508. In one example, the HOGs 502, 504, 506 and 508 may be computed, as described with reference to step 206 of FIG. 2.

[0058] Further, FIG. 6 illustrates an example of a polar image 600 generated from an ultrasound image acquired using the SAX view of a cardiac region of the patient. The polar image 600 may be divided into a plurality of blocks 602, 604, 606 and 608.

[0059] Moreover, FIG. 7 is a diagrammatical representation 700 of exemplary HOGs 702, 704, 706 and 708 computed for the four blocks 602, 604, 606 and 608, respectively, of the polar image 600 of FIG. 6. The HOGs 702, 704, 706 and 708 may be computed by plotting gradient orientations in corresponding blocks 602, 604, 606 and 608 along an X-axis. Further, a count of occurrences of the gradient orientations in the corresponding blocks 602, 604, 606 and 608 is plotted along a Y-axis to compute the HOGs 702, 704, 706 and 708. As previously noted, the HOGs 702, 704, 706 and 708 corresponding to the blocks 602, 604, 606 and 608 may be built by accumulating votes into bins for each gradient orientation associated with a corresponding block. It may be noted that the larger the number of bins, the higher the dimension of a corresponding feature vector that may be derived from a corresponding polar image, such as the polar image 600 of FIG. 6.

[0060] The HOGs, such as the HOGs 502, 504, 506 and 508 of FIG. 5 or the HOGs 702, 704, 706 and 708 of FIG. 7, may be used to construct one or more feature vectors corresponding to a desired PLAX view or a desired SAX view, as described with reference to step 208 of FIG. 2. One or more patterns in the feature vectors corresponding to the desired view may be identified. The identified patterns may be representative of the desired view, and thus, may be used to differentiate between the desired view, for example the SAX view, and other views during real-time ultrasound imaging.

[0061] FIG. 8 illustrates a graphical representation 800 depicting an exemplary distribution of feature vectors that correspond to ultrasound images in a HOG feature space for differentiating between PLAX and SAX views. To that end, in one exemplary implementation, a training database including labeled PLAX images and SAX images may be supplied to an automatic view recognition unit, such as the automatic view recognition unit 122 of FIG. 1. The automatic view recognition unit may be configured to transform the labeled PLAX images and the SAX images to corresponding polar images. The polar images may further be resized, for example, to a size of 124 x 64 pixels to optimize computations. Each of the resized images may be divided into four non-overlapping blocks.

[0062] Further, the automatic view recognition unit may be configured to quantize gradient orientations in each of the four blocks, for example, into eighteen directional bins, thereby generating one or more HOGs corresponding to each block. The HOGs for the four blocks may then be normalized using a normalization factor that may be computed for each of the four blocks. Particularly, in one example, the normalization factor may be computed using equation (2). Subsequently, the normalized HOGs may be concatenated to form an n-dimensional feature vector. Such feature vectors 802 and 804 may be constructed for both the PLAX images and the SAX images, respectively.
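Combining the earlier sketches, the feature extraction of paragraphs [0061]-[0062] (resize to 124 x 64, four non-overlapping blocks, eighteen orientation bins per block, L2 normalization per equation (2), concatenation) might be written as follows; the use of scikit-image for resizing is an assumed choice:

```python
import numpy as np
from skimage.transform import resize

def hog_descriptor(polar_image):
    """Resize the polar image to 124 x 64, split it into four
    non-overlapping blocks, quantize gradient orientations into
    eighteen bins per block, L2-normalize each block histogram,
    and concatenate into one n-dimensional feature vector."""
    resized = resize(polar_image, (124, 64), anti_aliasing=True)
    return np.concatenate([l2_normalize(block_hog(b, n_bins=18))
                           for b in quadrant_blocks(resized)])  # 4 x 18 = 72-D
```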

[0063] The feature vectors 802 and 804 corresponding to the PLAX images and the SAX images, respectively, in the training database may then be supplied to a classifier, such as the classifier 126 of FIG. 1, for training the classifier to differentiate between PLAX and SAX images. In a presently contemplated embodiment, the classifier includes an SVM classifier. Accordingly, when trained using a set of training images, each labeled as belonging to one of two categories (for example, a PLAX category and a SAX category), the classifier may be configured to build an SVM model. This SVM model may then be used to categorize previously unseen ultrasound images into the PLAX category or the SAX category.
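A sketch of this training step, assuming scikit-learn's LinearSVC as the SVM implementation (the specification calls only for an SVM classifier); plax_features and sax_features are hypothetical arrays of HOG descriptors from the labeled training database:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_view_classifier(plax_features, sax_features):
    """Build an SVM model from descriptors labeled as belonging to
    the PLAX category (0) or the SAX category (1)."""
    X = np.vstack([plax_features, sax_features])
    y = np.array([0] * len(plax_features) + [1] * len(sax_features))
    return LinearSVC(C=1.0).fit(X, y)  # maximum-margin linear separator

# Categorizing a previously unseen image in real time:
# clf = train_view_classifier(plax_features, sax_features)
# view = "PLAX" if clf.predict(x[None, :])[0] == 0 else "SAX"
```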

[0064] To that end, the SVM model may be a representation of the images as points in space. Particularly, the SVM model may map the points in space such that feature vectors 802 and 804 corresponding to the PLAX images and the SAX images are separated by a clear demarcation, as depicted in the graphical representation 800. In one embodiment, the demarcation between the PLAX and the SAX images in the graphical representation 800 corresponds to a hyperplane 806. Typically, a hyperplane of an n-dimensional space is a flat subset with a dimension (n - 1) that separates the n-dimensional space into two distinct regions. By way of example, in one embodiment, the classifier may assume that the feature vectors 802 and 804 correspond to vectors in a p-dimensional space. The classifier may then separate the p-dimensional vectors 802 and 804 with a (p - 1)-dimensional hyperplane.

[0065] In accordance with exemplary aspects of the present disclosure, a hyperplane 806 that represents the largest separation or margin between the PLAX and SAX categories may be selected. Accordingly, in one embodiment, the classifier may be configured to select the hyperplane 806 such that the distance from the hyperplane 806 to the nearest data point on each side is maximized. Alternatively, the hyperplane 806 that mathematically fits a string of numerals corresponding to the feature vectors corresponding to each of the four blocks may be selected. Particularly, the hyperplane 806 may be selected such that the PLAX and SAX images fall on opposite sides of the hyperplane 806. Once the hyperplane 806 is identified, the real-time identification of the PLAX and SAX views may be reduced to identifying a side of the hyperplane 806 that a feature vector corresponds to. This technique advantageously aids in distinguishing between the PLAX images and the SAX images.
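Once the hyperplane is identified, the side-of-hyperplane test reduces to the sign of w . x + b; a sketch using the coefficients exposed by the classifier trained above, where the mapping of sign to category depends on the label encoding used during training:

```python
import numpy as np

def hyperplane_side(clf, x):
    """Identify the side of the learned hyperplane w . x + b = 0 that a
    feature vector x falls on. With the 0 = PLAX, 1 = SAX encoding used
    above, a negative margin corresponds to the PLAX side."""
    margin = float(np.dot(clf.coef_[0], x) + clf.intercept_[0])
    return "PLAX" if margin < 0 else "SAX"
```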

[0066] Use of the automatic view recognition system, thus, may allow for a simplified imaging workflow that may help even novice radiologists acquire ultrasound images using accurate views. Particularly, use of the automated view recognition workflow obviates a need for supervision and/or correction of the view by an experienced radiologist, thus saving scanning time and effort. The reduction in scanning time, in turn, may allow for extension of the present systems and methods for automatic view recognition to real-time device guidance. Additionally, as the automatic view recognition system employs a self-learning approach, the system may be trained to recognize additional views by providing labeled images and without any additional resources. Moreover, embodiments of the present disclosure may also be retrofitted to existing ultrasound systems for providing enhanced automatic view recognition capability.

[0067] It may be noted that the foregoing examples, demonstrations and process steps that may be performed, for example, by the processing unit 114 and the automatic view recognition system 122 of FIG. 1, may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer. It may also be noted that different implementations of the present disclosure may perform some or all of the steps described herein in different orders or substantially concurrently, that is, in parallel.

[0068] Additionally, the functions may be implemented in a variety of programming languages, including but not limited to Ruby, Hypertext Preprocessor (PHP), Perl, Delphi, Python, C, C++, or Java. Such code may be stored or adapted for storage on one or more tangible, machine-readable media, such as on data repository chips, local or remote hard disks, optical disks (that is, CDs or DVDs), solid-state drives, or other media, which may be accessed by the processor-based system to execute the stored code.

[0069] It may also be noted that although specific features of various embodiments of the present disclosure may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments, for example, to construct additional assemblies and techniques for automatic view recognition in imaging systems.

[0070] While only certain features of the present invention have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

CLAIMS

1. A method, comprising:

generating an image of a target region of a subject using a selected view;

transforming the image into a polar image;

computing a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image;

constructing one or more feature vectors based on the plurality of histograms; and

determining if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view.

2. The method of claim 1, further comprising providing feedback for moving an image acquisition device along a desired direction to achieve the desired view.

3. The method of claim 2, further comprising determining the desired direction to achieve the desired view based on the comparison.

4. The method of claim 3, wherein determining the desired direction comprises determining a difference between spatial coordinates corresponding to the selected view and stored spatial coordinates corresponding to the desired view.

5. The method of claim 1, further comprising training a classifier using a plurality of training images of the target region corresponding to the desired view.

6. The method of claim 5, wherein training the classifier comprises:

supplying the plurality of training images of the target region to the classifier, wherein the plurality of training images is acquired using the desired view;

transforming the plurality of training images into corresponding polar images;

computing a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar images;

constructing one or more feature vectors corresponding to the plurality of histograms;

identifying patterns in the one or more feature vectors corresponding to the desired view; and

storing the one or more feature vectors, the identified patterns, or a combination thereof.

7. The method of claim 5, wherein the classifier comprises a Support Vector Machine classifier.

8. The method of claim 1, wherein computing the plurality of histograms comprises counting occurrences of one or more orientations of gradients corresponding to the one or more blocks in the polar image.

9. The method of claim 8, further comprising weighting the occurrences based on a magnitude of the gradients corresponding to the one or more blocks in the polar image.

10. The method of claim 1, further comprising:

computing a normalization factor for the one or more blocks in the polar image; and

normalizing the plurality of histograms corresponding to the one or more blocks based on the normalization factor to minimize imaging artifacts.

11. The method of claim 10, wherein constructing the one or more feature vectors comprises concatenating the plurality of normalized histograms corresponding to the one or more blocks.

12. The method of claim 10, wherein determining if the selected view matches the desired view comprises:

converting the plurality of normalized histograms into corresponding strings of numerals;

identifying a hyperplane that mathematically fits the strings of numerals; and

determining if the selected view matches the desired view based on the hyperplane.

13. A system, comprising:

an image acquisition device configured to image a target region of a subject;

a processing unit configured to:

generate an image of the target region using a selected view;

transform the image into a polar image;

compute a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image;

construct one or more feature vectors based on the plurality of histograms;

determine if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view; and

an input-output device configured to provide feedback for moving the image acquisition device along a desired direction to achieve the desired view based on the comparison.

14. The system of claim 13, wherein the processing unit comprises a classifier, and wherein the processing unit is configured to train the classifier using a plurality of training images of the target region corresponding to the desired view.

15. The system of claim 13, further comprising a memory device configured to store the plurality of training images, the polar image, the plurality of histograms, the one or more feature vectors, the stored patterns, or combinations thereof.

16. The system of claim 13, wherein the image acquisition device comprises a transducer probe.

17. The system of claim 13, wherein the image acquisition device comprises a minimally invasive interventional device.

18. The system of claim 13, wherein the image acquisition device comprises an intravascular ultrasound catheter.

19. The system of claim 13, wherein the input-output device comprises an interactive user interface.

20. A non-transitory computer readable medium that stores instructions executable by one or more processors to perform a method for automated view identification, comprising:

generating an image of a target region of a subject using a selected view;

transforming the image into a polar image;

computing a plurality of histograms of oriented gradients corresponding to one or more blocks in the polar image;

constructing one or more feature vectors based on the plurality of histograms; and

determining if the selected view matches a desired view by comparing the one or more feature vectors with one or more stored patterns corresponding to the desired view.

Documents

Application Documents

# Name Date
1 5471-CHE-2012 FORM-1 27-12-2012.pdf 2012-12-27
2 5471-CHE-2012 FORM-2 27-12-2012.pdf 2012-12-27
3 5471-CHE-2012 FORM-3 27-12-2012.pdf 2012-12-27
4 5471-CHE-2012 FORM-18 27-12-2012.pdf 2012-12-27
5 5471-CHE-2012 FORM-26 27-12-2012.pdf 2012-12-27
6 5471-CHE-2012 DESCRIPTION (COMPLETE) 27-12-2012.pdf 2012-12-27
7 5471-CHE-2012 CLAIMS 27-12-2012.pdf 2012-12-27
8 5471-CHE-2012 ABSTRACT 27-12-2012.pdf 2012-12-27
9 5471-CHE-2012 DRAWINGS 27-12-2012.pdf 2012-12-27
10 5471-CHE-2012 CORRESPONDENCE OTHERS 27-12-2012.pdf 2012-12-27
11 5471-CHE-2012 FORM-1 07-07-2014.pdf 2014-07-07
12 5471-CHE-2012 CORRESPONDENCE OTHERS 07-07-2014.pdf 2014-07-07
13 abstract5471-CHE-2012.jpg 2014-08-21
14 5471-CHE-2012-FER.pdf 2018-09-13
15 5471-CHE-2012-AbandonedLetter.pdf 2019-03-15

Search Strategy

1 searchstrategy_12-09-2018.pdf