System And Method For Identifying Regions Of Interest Using Ultrasound Images

Abstract: A method includes receiving at least one ultrasound image of an object. Further, the method includes identifying a plurality of anatomical features in the at least one ultrasound image. Also, the method includes localizing at least one anatomical feature corresponding to a first region of interest from the plurality of anatomical features based on a scoring of each of the plurality of anatomical features.


Patent Information

Application #:
Filing Date: 31 October 2012
Publication Number: 18/2014
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

GENERAL ELECTRIC COMPANY
NEW YORK CORPORATION, 1 RIVER ROAD, SCHENECTADY, NEW YORK 12345

Inventors

1. ANNANGI, PAVAN KUMAR VEERABHADRA
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
2. SUBRAMANIAN, NAVNEETH
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
3. GUPTA, MITHUN DAS
A505 MANTRI SAROVAR APTS, SECTOR-1, HSR LAYOUT, BANGALORE 560 102
4. PAVANI, SRI KAUSHIK
4TH FLOOR, EMBASSY STAR, 8 PALACE ROAD, VASANTH NAGAR, BANGALORE 560 052

Specification

SYSTEM AND METHOD FOR IDENTIFYING REGIONS OF INTEREST USING ULTRASOUND IMAGES

BACKGROUND

[0001] Embodiments of the present disclosure relate to imaging, and more particularly to identification of regions of interest using ultrasound images.

[0002] Ultrasound imaging has been employed for a wide variety of applications, such as measuring cardiac hypertrophy, assessing gestational age (GA), and the like. During the process of ultrasound scanning, a clinician attempts to capture a view of a certain anatomy that confirms or negates a particular medical condition. Once the clinician is satisfied with the quality of the view or the scan plane, the image is typically frozen. This image is then used to perform measurements on a region of interest in a patient. For example, ultrasound images are routinely used to monitor the cardiac health of the patient or to assess the GA and/or weight of a fetus. Ultrasound measurements of specific anatomical features such as the head, abdomen, or the femur are determined using two-dimensional (2D) or three-dimensional (3D) image data. These measurements are used in the determination of GA, assessment of growth patterns, and/or identification of anomalies. Similarly, for cardiac applications, the thickness of cardiac walls, such as the posterior wall, is routinely measured by cardiologists to assess cardiac hypertrophy.

[0003] Measuring the thickness of the posterior wall in diastole (PWd) is accepted as one of the cornerstone measurements in echocardiography. As will be appreciated, the thickness of the PWd is considered as one of the key parameters in cardiology for measuring cardiac hypertrophy. Also, the thickness of the PWd, along with the size of the left ventricle is used as one of the main indicators of cardiac hypertrophy. Furthermore, as cardiac hypertrophy is known to potentially lead to other cardiac complications, the measurement of the thickness of the PWd may be used for screening purposes.

[0004] Echocardiograms are used to measure the PWd thickness. However, manual measurement of the PWd thickness using echocardiograms suffers from large inter-observer and/or intra-observer variability based on the experience of the clinician. For example, the PWd thickness measurement varies with different observers. Furthermore, the PWd thickness measurement may also vary for the same observer at different instances. Such variations in the measurement of PWd thickness may in turn cause erroneous diagnoses. For example, erroneous diagnosis may lead to hypertrophic subjects being classified as healthy, and healthy subjects being classified as hypertrophic, both of which have undesirable consequences.

[0005] In addition to the variability of manual measurement of PWd thickness, conventional systems and methods face other challenges, such as inter-frame variations caused by cardiac motion and the appearance of chordae and papillary muscles in varying degrees, which causes large-scale textural variations in the LV cavity region. Further, the measurement of the PWd thickness is complicated by inaccurate image acquisitions, a challenge that is further compounded by misalignment of the available acoustic windows for imaging the patient's heart. These issues can lead to undesirable variations in the diagnosis, resulting in missed detection of patients and/or unnecessary testing of healthy patients.

BRIEF DESCRIPTION

[0006] Briefly in accordance with one aspect of the present disclosure, a method is presented. The method includes receiving at least one ultrasound image of an object. Further, the method includes identifying a plurality of anatomical features in the at least one ultrasound image. Also, the method includes localizing at least one anatomical feature corresponding to a first region of interest from the plurality of anatomical features based on a scoring of each of the plurality of anatomical features.

[0007] In accordance with a further aspect of the present disclosure, a method is presented. The method includes receiving at least one ultrasound image of an object, wherein the at least one ultrasound image includes a plurality of anatomical features. The method further includes localizing a first anatomical feature associated with a first region of interest and a second anatomical feature associated with a second region of interest based on a scoring of the plurality of anatomical features in the at least one ultrasound image. Also, the method includes determining a thickness of a third region of interest based on the localized first anatomical feature and the second anatomical feature.

[0008] In accordance with another aspect of the present disclosure, a system is presented. The system includes an acquisition subsystem configured to acquire a series of ultrasound images of an object. The system further includes a processing subsystem operatively coupled to the acquisition subsystem and configured to localize a first anatomical feature associated with a first region of interest and a second anatomical feature associated with a second region of interest based on a scoring of a plurality of anatomical features in at least one ultrasound image in the series of ultrasound images and determine a thickness of a third region of interest based on the localized first anatomical feature and the second anatomical feature.

DRAWINGS

[0009] These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

[0010] FIG. 1 is a diagrammatical illustration of an exemplary system for automated measurement of posterior wall (PW) thickness, in accordance with aspects of the present disclosure;

[0011] FIG. 2 is a diagrammatical illustration of an image frame corresponding to a patient's heart in the parasternal long axis view;

[0012] FIG. 3 is a diagrammatical illustration of one embodiment of the system of FIG. 1, in accordance with aspects of the present disclosure;

[0013] FIG. 4 is a flow chart depicting an exemplary method for localizing a region of interest in an object, in accordance with aspects of the present disclosure; and

[0014] FIG. 5 is a diagrammatical flow chart depicting an exemplary method for automatically measuring the thickness of a region of interest, in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0015] As will be described in detail hereinafter, various embodiments of exemplary systems and methods for monitoring a medical condition of a patient are presented. Particularly, the exemplary systems and methods may be used for measuring a thickness of posterior wall in diastole (PWd), which in turn may be used to detect cardiac hypertrophy. By employing the methods and the various embodiments of the system described hereinafter, variations in the measurement of the thickness of the posterior wall may be minimized, thereby enhancing the diagnosis of an object/patient.

[0016] Turning now to the drawings, and referring to FIG. 1, a block diagram of an exemplary system 100 for use in diagnostic imaging, in accordance with aspects of the present disclosure, is depicted. The system 100 is configured to aid a clinician such as a radiologist or an ultrasound technician in imaging an object 102. The object 102 may include a heart, a fetus, or a test object. In the example of FIG. 1, the system 100 may be configured to aid in imaging the heart 102 of a patient 103. In another example, the system 100 may be configured to aid in imaging a test object such as, but not limited to, luggage or baggage. However, for ease of understanding, the various systems and methods are described hereinafter with reference to imaging of the heart 102 of the patient 103. It may be noted that the terms "object" and "heart" may be used interchangeably. Also, it may be noted that although the present disclosure is described with reference to imaging the heart 102, imaging of other objects is also envisaged.

[0017] Typically, during a scanning procedure, the patient 103 is positioned on a support platform 105. The system 100 and more particularly an imaging system 106 may be employed to image the heart 102 of the patient 103. In the present example, the imaging system 106 includes an ultrasound imaging system. However, use of other imaging systems is also contemplated.

[0018] The clinician typically positions an image acquisition device 104, such as, but not limited to, an ultrasound probe on or about the heart 102 to be imaged. During the scanning procedure, the clinician acquires a plurality of image frames corresponding to the heart 102. The image frame may include one or more regions of interest corresponding to the heart 102. In one example, the regions of interest in the heart 102 may include a posterior wall (PW), a pericardium, a septum (IVS), a left ventricle (LV) cavity, and a mitral valve (MV). These regions of interest will be explained in greater detail with reference to FIG. 2.

[0019] As illustrated in FIG. 1, the system 100 may be configured to acquire image data corresponding to the heart 102 via the image acquisition device 104. Also, in one embodiment, the image acquisition device 104 may include a probe, where the probe may include an invasive probe, or a non-invasive or external probe, such as an external ultrasound probe, that is configured to aid in the acquisition of image data. Also, in certain other embodiments, the image data may be acquired via one or more sensors (not shown) that may be disposed on the heart 102. By way of example, the sensors may include physiological sensors (not shown) such as electrocardiogram (ECG) sensors and/or positional sensors such as electromagnetic field sensors or inertial sensors. These sensors may be operationally coupled to a data acquisition device, such as an imaging system, via leads (not shown), for example.

[0020] Further, the plurality of acquired image frames may be communicated to the imaging system 106. The imaging system 106 may be configured to process the received image frames to aid in the diagnosis. By way of example, the imaging system 106 may be configured to measure the thickness of the posterior wall of the heart 102. The measured thickness may then be employed to aid in the detection of cardiac hypertrophy and more particularly, LV hypertrophy.

[0021] It should be noted that although the exemplary embodiments illustrated hereinafter are described in the context of a medical imaging system, other imaging systems and applications, such as industrial imaging systems and non-destructive evaluation and inspection systems (for example, pipeline inspection systems and liquid reactor inspection systems), are also contemplated. Additionally, the exemplary embodiments illustrated and described hereinafter may find application in multi-modality imaging systems that employ ultrasound imaging in conjunction with other imaging modalities, position-tracking systems or other sensor systems. For example, the multi-modality imaging system may include a positron emission tomography (PET) imaging system-ultrasound imaging system. Furthermore, it should be noted that although the exemplary embodiments illustrated hereinafter are described in the context of a medical imaging system, such as an ultrasound imaging system, use of other imaging systems, such as, but not limited to, a computed tomography (CT) imaging system, a contrast enhanced ultrasound imaging system, an X-ray imaging system, an optical imaging system, a positron emission tomography (PET) imaging system, a magnetic resonance (MR) imaging system and other imaging systems is also contemplated in accordance with aspects of the present disclosure.

[0022] Furthermore, in one embodiment, the imaging system 106 may include an acquisition subsystem 108 and a processing subsystem 110. The acquisition subsystem 108 may be configured to acquire image data representative of one or more regions of interest in the heart 102 via the image acquisition device 104, in one embodiment. The acquired image data may include a plurality of two-dimensional (2D) image frames or slices, in one example. In certain embodiments, the image frames may include B-mode ultrasound images. Additionally, the 2D image frames may include static 2D image frames or cine loops that include a series of 2D image frames acquired over time. It may be noted that although the present disclosure is described in terms of 2D ultrasound images, use of the present disclosure with three-dimensional (3D) ultrasound images and four-dimensional (4D) ultrasound images is also envisaged.

[0023] In a conventional system, the acquired image data is typically processed manually to determine the medical condition of the patient. More specifically, the acquired image data is processed manually to identify the PWd and measure the thickness of the PWd. However, the manual measurement of the PWd thickness suffers from large inter-clinician and/or intra-clinician variability based on the experience of the clinician. Also, the PWd thickness measurement usually varies for the same clinician at different instances. Such variability in the measurement of the PWd thickness can lead to erroneous diagnoses.

[0024] To circumvent these shortcomings of the currently available techniques, in accordance with the exemplary aspects of the present disclosure, the system 100 may be configured to automatically monitor the health of the patient 103. In one example, the system 100 may be configured to automatically monitor the patient 103 to detect cardiac hypertrophy. To that end, the imaging system 106 includes the processing subsystem 110 that may be configured to aid in the automated measurement of the PWd thickness. The measured thickness may in turn be used for detecting the cardiac hypertrophy.

[0025] In a presently contemplated configuration, the processing subsystem 110 may be configured to identify a plurality of anatomical features in the acquired image frame of the heart 102. The anatomical features may include features in the heart 102 such as, but not limited to, tubular structures in the heart 102. Further, the processing subsystem 110 may be configured to determine a score for each of these identified anatomical features based on one or more parameters corresponding to the anatomical features. In one embodiment, the parameters may include a size of the anatomical features, a distance of the anatomical features from an edge of the image frame, an intensity of the anatomical features, and the like.

[0026] Additionally, the processing subsystem 110 may be configured to localize an anatomical feature associated with a first region of interest from the plurality of anatomical features based on the scoring of each of the anatomical features. In one example, the first region of interest may include the pericardium. Accordingly, the processing subsystem 110 may be configured to localize an anatomical feature corresponding to the pericardium. Moreover, while determining the score corresponding to the anatomical features, an anatomical feature associated with the pericardium may receive a maximum score based on the corresponding parameters. The processing subsystem 110 may be configured to select the anatomical feature that has the maximum score, thereby localizing the pericardium in the heart 102. In addition, the processing subsystem 110 may be configured to use the location of the pericardium to segment other regions of interest, such as the posterior wall since the posterior wall is typically located adjacent one side of the pericardium. Further, the processing subsystem 110 may also be configured to measure the thickness of the segmented posterior wall to aid in any diagnosis of the cardiac region of the patient 103. The working of the processing subsystem 110 will be explained in greater detail with reference to FIGs. 2-6.
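The maximum-score localization described in this paragraph can be sketched as follows; the candidate names and score values are hypothetical placeholders, not output of the scoring described in the disclosure:

```python
# Sketch of selecting the maximum-score anatomical feature (the pericardium
# candidate). Feature names and scores are hypothetical placeholders.
def localize_first_roi(features):
    """Return the feature with the maximum score."""
    return max(features, key=lambda f: f["score"])

candidates = [
    {"name": "mitral valve", "score": 0.41},
    {"name": "septum", "score": 0.67},
    {"name": "pericardium", "score": 0.93},
]
best = localize_first_roi(candidates)
```

Selecting the second-highest score would, analogously, localize the septum candidate, since the pericardium and septum are described as the two most prominent structures.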

[0027] Furthermore, as illustrated in FIG. 1, the medical imaging system 106 may include a data repository 114, a display 116, and a user interface 118. The data repository 114 may include a local database to store the image data acquired by the acquisition subsystem 108. In certain embodiments, such as in a touch screen, the display 116 and the user interface 118 may overlap. Also, in some embodiments, the display 116 and the user interface 118 may include a common area. In accordance with aspects of the present disclosure, the display 116 of the medical imaging system 106 may be configured to display an image generated by the medical imaging system 106 based on the acquired image data. Additionally, in accordance with further aspects of the present disclosure, the localized pericardium and/or the measured thickness of the PWd may also be visualized on the display 116.

[0028] In addition, the user interface 118 of the medical imaging system 106 may include a human interface device (not shown) configured to aid the clinician in manipulating image data displayed on the display 116. The human interface device may include a mouse-type device, a trackball, a joystick, a stylus, or a touch screen configured to facilitate the clinician in identifying one or more areas requiring therapy. However, as will be appreciated, other human interface devices, such as, but not limited to, a touch screen, may also be employed. Furthermore, in accordance with aspects of the present disclosure, the user interface 118 may be configured to aid the clinician in navigating through the images acquired by the medical imaging system 106. Additionally, the user interface 118 may also be configured to aid in manipulating and/or organizing the displayed images and/or generated indicators displayed on the display 116.

[0029] Turning now to FIG. 2, a diagrammatical representation 200 of an ultrasound image frame of the heart is depicted. In particular, the image frame of FIG. 2 is representative of a Parasternal Long Axis (PLAX) view of the heart. It may be noted that in accordance with aspects of the present disclosure, the method for measuring a thickness of the posterior wall using the image frame corresponding to the heart may also find application in the imaging of a fetal heart or the heart of a child.

[0030] Reference numeral 202 is generally representative of a PLAX view corresponding to the heart. It may be noted that at the right scan plane and with optimal ultrasound instrument settings, it may be desirable to identify/verify the presence of one or more regions of interest in the heart. These regions of interest may include the pericardium 204, the posterior wall (PW) 206, the right ventricular outflow tract (RVOT) 208, the septum 210, the aortic valve (AV) 212, the left ventricle (LV) 214, the mitral valve (MV) 216, the left atrium (LA) 218, the descending aorta (DA) 220, and the like. It is to be noted that the terms "first region of interest" and "pericardium" may be used interchangeably. Similarly, the terms "second region of interest" and "septum" may be used interchangeably, and the terms "third region of interest" and "posterior wall" may be used interchangeably. Also, the terms "fourth region of interest" and "left ventricle cavity" may be used interchangeably in the present disclosure. It may also be noted that the terms "region of interest" and "anatomical landmark (AL)" may be used interchangeably. Furthermore, in FIG. 2, reference numeral 222 is representative of a first edge of the image frame 202, and reference numeral 224 is representative of a length of the image frame 202.

[0031] Further, the posterior wall (PW) 206 is a soft muscle tissue that is located between the pericardium 204 and the LV cavity 214. Also, as a blood pool inside the LV cavity 214 provides a weak response to an ultrasound beam from the probe 104 (see FIG. 1) due to anechoic characteristics of the blood pool, the LV cavity 214 is typically manifested as a relatively dark region in the image 202. Moreover, the PW 206 being a soft muscle tissue generates a weak response to the ultrasound beam, which further results in a weak contrast at the LV-PW boundary. Thus, it is challenging to localize and/or measure the thickness of the posterior wall (PW) 206 using the image frame 202.

[0032] As will be appreciated, the pericardium 204 is typically disposed adjacent to a bottom side of the posterior wall 206. Moreover, the pericardium 204 is typically manifested as one of the most prominent and well defined signatures in the PLAX view ultrasound image 202 owing to the hyper-echogenic property of the pericardium 204 to the ultrasound beams. Also, the pericardium 204 appears as a bright and coherent structure in the PLAX view of the ultrasound image 202 owing to its high impedance to ultrasound beams. Additionally, the septum 210 also appears as a bright and coherent structure in the ultrasound image 202.

[0033] In accordance with aspects of the present disclosure, identifying the pericardium region 204 and/or the septum 210 in the ultrasound image 202 aids in localizing the posterior wall 206. Also, in accordance with further aspects of the present disclosure, an inner layer of the posterior wall 206 may be localized by imposing a width constraint of the pericardium 204, the LV cavity 214, and the septum 210 on the ultrasound image 202, as will be explained in greater detail with reference to FIG. 3.

[0034] Referring to FIG. 3, a diagrammatical illustration 300 of one embodiment of the system of FIG. 1, in accordance with aspects of the present disclosure, is depicted. For ease of understanding of the present disclosure, the system 300 is described with reference to the components and features of FIGs. 1 and 2. As previously noted with reference to FIG. 1, the acquisition subsystem 108 is configured to aid in the acquisition of image data from the heart 102 of the patient 103. Accordingly, one or more image data sets representative of the heart 102 may be acquired by the acquisition subsystem 108. In certain embodiments, the one or more image data sets may include ultrasound image data 302. It may be noted that the ultrasound image data 302 may be representative of various regions of interest in the heart 102. As previously noted, the ultrasound image data 302 may include two-dimensional ultrasound image frames, in one example. Also, the ultrasound image data 302 may include cine loops, where the cine loops include 2D image frames acquired over time t.

[0035] Furthermore, the ultrasound image data 302 acquired by the acquisition subsystem 108 may be stored in the data repository 114. In certain embodiments, the data repository 114 may include a local database. The processing subsystem 110 (see FIG. 1) may then access the ultrasound image data 302, from the local database 114. Alternatively, the ultrasound image data 302 may be obtained by the acquisition subsystem 108 from an archival site, a database, or an optical data storage article. For example, the acquisition subsystem 108 may be configured to acquire images stored in the optical data storage article. It may be noted that the optical data storage article may be an optical storage medium, such as a compact disc (CD), a digital versatile disc (DVD), multi-layer structures, such as DVD-5 or DVD-9, multi-sided structures, such as DVD-10 or DVD-18, a high definition digital versatile disc (HD-DVD), a Blu-ray disc, a near field optical storage disc, a holographic storage medium, or another like volumetric optical storage medium, such as, for example, two-photon or multi-photon absorption storage format. Further, the ultrasound image data sets 302 so acquired by the acquisition subsystem 108 may be stored locally on the medical imaging system 106 (see FIG. 1). The ultrasound image data sets 302 may be stored in the local database 114, for example.

[0036] Also, in the embodiment illustrated in FIG. 3, the processing subsystem 110 is shown as including a component extracting unit 304, a localizing unit 306, a segmentation unit 308, and a thickness measurement unit 310. It may be noted that although the configuration of FIG. 3 depicts the processing subsystem 110 as including the component extracting unit 304, the localizing unit 306, the segmentation unit 308, and the thickness measurement unit 310, fewer or more such units may be used.
[0037] In accordance with aspects of the present disclosure, the component extracting unit 304 may be configured to process the acquired image frames 302 to identify or extract one or more anatomical features in the heart 102. For example, as previously noted, the anatomical features may include tubular structures in the acquired image data 302. Also, in one example, the ultrasound image data 302 may include the ultrasound image frame 202. The image frame 202 may be processed to enhance the contrast of the anatomical features. In one embodiment, the image frame 202 may be processed via a Frangi vesselness filter or a Hessian based enhancement filter to enhance the contrast of the anatomical features. Particularly, the image frame 202 may be filtered using the Frangi vesselness filter or the Hessian based enhancement filter to mitigate any intensity inhomogeneity to generate an intermediate image. This intermediate image may generally be referred to as a vesselness image frame. It may be noted that the Frangi vesselness filter or the Hessian based enhancement filter is an enhancement filter that is used to enhance the contrast of tubular structures with respect to the background.
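As a rough illustration of the vesselness-enhancement step, the following is a minimal 2-D Frangi-style filter built from Gaussian-derivative Hessians; the scale `sigma` and the response parameters `beta` and `c` are illustrative choices, not values from the disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_vesselness(image, sigma=2.0, beta=0.5, c=0.08):
    """Minimal 2-D Frangi-style vesselness response for bright tubular
    structures on a dark background. Parameter values are illustrative."""
    # Hessian entries via Gaussian-derivative filters at scale sigma
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel
    half_trace = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = half_trace + root, half_trace - root
    # Order the eigenvalues so that |lam1| <= |lam2|
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / (lam2 + 1e-12)) ** 2            # blobness ratio (squared)
    s2 = lam1 ** 2 + lam2 ** 2                    # second-order structureness
    response = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    response[lam2 > 0] = 0.0                      # bright ridges have lam2 < 0
    return response

# Demo on a synthetic frame containing one bright horizontal band
frame = np.zeros((40, 40))
frame[18:22, :] = 1.0
vesselness = frangi_vesselness(frame)
```

The response is high along the band and near zero in the homogeneous background, which is the behavior the disclosure relies on for highlighting tubular structures such as the pericardium and septum.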

[0038] The component extracting unit 304 may be configured to process the vesselness image frame to generate a binary image. It may be noted that in the vesselness image frame, in addition to the tubular structures of interest, there may exist regions of near-field haze and boundary artifacts. Therefore, it may be desirable to delete any undesirable hazy regions from the vesselness image frame. To that end, the vesselness image frame may be thresholded by the component extracting unit 304 to generate the binary image. In the present example, the binary image may include at least three tubular structures that correspond to at least the pericardium 204, the septum 210, and the mitral valve 216, in addition to the other regions of the heart and imaging artifacts.
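The thresholding and component-extraction step might be sketched as follows; the threshold value and the minimum component size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import label

def extract_tubular_components(vesselness, thresh=0.2, min_pixels=20):
    """Threshold a vesselness frame into a binary image and keep the
    connected components large enough to be candidate tubular structures.
    The threshold and minimum size are illustrative assumptions."""
    binary = vesselness > thresh
    labeled, count = label(binary)
    components = []
    for k in range(1, count + 1):
        mask = labeled == k
        if mask.sum() >= min_pixels:   # drop small haze and boundary artifacts
            components.append(mask)
    return components

# Demo: one large band-like blob survives; a one-pixel speck is discarded
demo = np.zeros((30, 30))
demo[5:15, 5:25] = 1.0
demo[25, 25] = 1.0
kept = extract_tubular_components(demo)
```

Discarding small components is one simple way to remove the near-field haze and boundary artifacts mentioned above before scoring.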

[0039] Once the binary image is generated, the component extracting unit 304 may be configured to compute a score corresponding to the binary image. The binary image includes one or more anatomical features, where each of the anatomical features represents a corresponding tubular structure in the image 202.

[0040] As previously noted, the tubular structures may correspond to one or more of the pericardium 204, the septum 210, and the mitral valve 216. The component extracting unit 304 may be configured to compute a score corresponding to each of these tubular structures based on their size, intensity, and distance from the first edge 222 of the image frame 202. More specifically, the score corresponding to a tubular structure may be computed based on one or more of a size of the tubular structure, a maximum size of a largest tubular structure in the binary image, a mean intensity of pixels corresponding to the tubular structure, a maximum intensity of pixels corresponding to the tubular structure, an orientation of the tubular structure, an angle of a probe that is used to acquire the image frame 202, a distance of the tubular structure from the first edge 222 of the image frame 202, and a maximum distance or length 224 of the image frame 202.

[0041] In one embodiment, for each tubular structure 'i' in the image frame 202, the score may be computed using:

[0042] In equation (1), the "tubular structure size" represents the number of pixels corresponding to each tubular structure, and the "maximum size" represents the size of the largest tubular structure. Similarly, the "mean intensity" represents a mean intensity of all the pixels corresponding to the tubular structure, while the "maximum intensity" represents a maximum intensity of all the pixels in the image frame 202. Moreover, the orientation of the tubular structure (θ) may be computed as the orientation of a major axis of a fitted ellipse. Further, the "probe angle" represents an angle of the probe that is used to acquire the image frame or data. Also, the "distance from the first edge 222 of the image frame 202" is measured as a mean of the Euclidean distances of each pixel corresponding to the tubular structure from the first edge 222 of the image frame 202. In addition, the "maximum distance" represents the length 224 of the image frame 202.
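Since equation (1) itself is not reproduced in this excerpt, the sketch below uses only a plausible additive, equally weighted combination of the terms listed above; `component_score` and its normalizations are assumptions, not the patent's formula:

```python
import numpy as np

def component_score(mask, image, probe_angle, max_size, frame_length):
    """Score one tubular component from the terms described for equation (1).
    The additive, equally weighted combination is an assumption: the excerpt
    names the terms but does not reproduce the equation itself."""
    ys, xs = np.nonzero(mask)
    size_term = mask.sum() / max_size
    intensity_term = image[mask].mean() / image.max()
    # Orientation of the major axis of an ellipse fitted to the pixels
    cov = np.cov(np.vstack([xs - xs.mean(), ys - ys.mean()]))
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]
    orientation = np.arctan2(major[1], major[0])
    orientation_term = abs(np.cos(orientation - probe_angle))
    # Mean distance of the pixels from the first (top) edge of the frame,
    # normalized by the frame length
    distance_term = ys.mean() / frame_length
    return size_term + intensity_term + orientation_term + distance_term
```

Under this combination, a large, bright, well-aligned structure far from the first edge (pericardium-like) outscores a small, dim structure near the edge, consistent with the selection behavior described in the next paragraph.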

[0043] Further, the localizing unit 306 may be configured to select a desired region of interest based on the scores corresponding to each of the tubular structures in the image frame 202. Particularly, while scoring the tubular structures based on the parameters shown in equation (1), the tubular structure that is associated with the pericardium 204 or the first region of interest may receive a maximum score. The localizing unit 306 may be configured to select the tubular structure corresponding to the maximum score, thereby localizing the pericardium 204 in the image frame 202. In addition to localizing the pericardium 204, the localizing unit 306 may also be configured to localize the tubular structure associated with the septum 210 based on its corresponding score. In one example, the score corresponding to the septum 210 may be representative of a second highest score.

[0044] Moreover, the localizing unit 306 may also be configured to process the localized pericardium 204 to identify an outer layer and an inner layer of the pericardium 204. In one embodiment, the localizing unit 306 may select a distal end of the pericardium 204 based on the intensity of pixels at the boundary of the pericardium 204. This distal end of the pericardium 204 may be further used to identify the inner layer and/or the outer layer of the pericardium 204. By way of example, a generic polynomial may be fit to the inner layer of the pericardium 204 to represent the inner layer over a series of image frames 302. In one example, a second degree polynomial of the form y = ax^2 + bx + c may be fit to the inner layer of the pericardium 204 for tracking the inner layer over the series of image frames 302.
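The polynomial tracking step reduces to a standard least-squares fit; the (x, y) samples below are synthetic stand-ins for boundary pixels of the localized inner pericardial layer:

```python
import numpy as np

# Fit the second-degree polynomial y = a*x^2 + b*x + c to samples of the
# inner pericardial layer. The (x, y) pairs are synthetic stand-ins for
# boundary pixels of the localized pericardium.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 0.5 * x**2 - 1.0 * x + 3.0              # samples on a known parabola
a, b, c = np.polyfit(x, y, deg=2)           # recovered coefficients
tracked = np.polyval([a, b, c], x)          # layer depth at each column
```

Refitting the same low-order model on each frame gives a smooth representation of the layer across the cine loop.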

[0045] Once the pericardium 204 is identified, the segmentation unit 308 may be configured to use the localized pericardium 204 to accurately segment the posterior wall 206 in the image frame 202. In one embodiment, the posterior wall 206 may be segmented by imposing a width constraint on the image frame 202. In one example, the width constraint may indicate that the septum wall thickness (IVSd), the LV internal diameter (LVId), and the posterior wall thickness (PWTd) are in a ratio of 1:3:1 for a normal patient.
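The 1:3:1 width constraint amounts to simple proportional arithmetic; the depths below are hypothetical pixel coordinates, and the assumption that the measured span covers exactly IVSd + LVId + PWTd is an illustrative simplification:

```python
def split_by_width_constraint(septum_top, pericardium_inner):
    """Split the depth span between the top of the septum and the inner
    pericardial layer as IVSd : LVId : PWTd = 1 : 3 : 1 (normal patient).
    Assumes, for illustration, that the span covers exactly these three
    regions; depths are hypothetical pixel coordinates."""
    span = pericardium_inner - septum_top
    ivsd = span / 5.0
    lvid = 3.0 * span / 5.0
    pwtd = span / 5.0
    return ivsd, lvid, pwtd
```

For a hypothetical 100-pixel span this yields expected widths of 20, 60, and 20 pixels, which can serve as an initial estimate or a constraint on the segmentation.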

[0046] Further, the thickness measurement unit 310 may be used to measure the thickness of the segmented posterior wall 206. In one example, a PW thickness algorithm may be used to measure the thickness of the posterior wall 206.

[0047] Furthermore, in one embodiment, the segmentation unit 308 and the thickness measurement unit 310 may be combined to form a single unit that is configured to localize/segment the posterior wall 206 and measure the thickness of the posterior wall 206. In this example, the single unit may receive a processed ultrasound image I: Ω → R, Ω = [a, b] × [c, d] from the localizing unit 306. Further, in the received ultrasound image, a smooth one-dimensional (1D) function f: [a, b] → [c, d] may be used to represent the inner posterior wall between two smooth 1D functions h: [a, b] → [c, d] and g: [a, b] → [c, d]. The functions f, g, and h are depicted by dotted lines in FIG. 2. The function h: [a, b] → [c, d] represents an outer layer of the septum, while the function g: [a, b] → [c, d] represents an inner layer of the pericardium. The region between the functions 'h' and 'f' is representative of the LV cavity and the region between the functions 'f' and 'g' is representative of the posterior wall. Also, the image intensities in these regions may be piecewise constant. In one embodiment, an energy function depicted by equation (2) may be employed for measuring the thickness of the posterior wall.

[0048] In equation (2), μPW represents the mean intensity of the PW region between the functions 'f' and 'g', and σPW represents the standard deviation of the intensity values of the PW region between the functions 'f' and 'g.' Similarly, μLV represents the mean intensity of the LV cavity region between the functions 'f' and 'h', and σLV represents the standard deviation of the intensity values of the LV cavity region between the functions 'f' and 'h.' Further, 'w' represents the width of the posterior wall and 'I' represents the ultrasound image. In addition, λs and λw represent the weights of the corresponding terms in the energy function.

[0049] In an exemplary system, the energy function includes three terms: a data term, a smoothness term, and a width term. The first term in equation (2) is representative of the data term. This data term drives the function 'f' such that the intensity distribution on either side of the 'f' curve (see FIG. 2) is homogeneous. The second term in equation (2) is representative of the smoothness term, whose contribution is governed by the weight parameter λs. The last two terms of equation (2) are representative of the width term. These terms control the widths of the left ventricle (LV) and posterior wall (PW) and are weighted by λw. In particular, the width term ensures that the widths of IVSd, LVId, and PWTd are approximately in the ratio of 1:3:1 at the level of the mitral valve.
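Equation (2) itself is not reproduced in this text. Based on the data, smoothness, and width terms described in paragraphs [0047]–[0049], an energy functional of the following general form would exhibit the stated behavior; this is an illustrative sketch, not the patent's exact equation:

```latex
E(f) \;=\;
  \int_{\Omega_{LV}} \frac{\left(I - \mu_{LV}\right)^2}{\sigma_{LV}^2}\, d\Omega
\;+\;
  \int_{\Omega_{PW}} \frac{\left(I - \mu_{PW}\right)^2}{\sigma_{PW}^2}\, d\Omega
\;+\;
  \lambda_s \int_a^b \left|f'(x)\right|^2 dx
\;+\;
  \lambda_w \left(w_{LV} - 3w\right)^2
\;+\;
  \lambda_w \left(w_{PW} - w\right)^2
```

Here ΩLV denotes the region between 'h' and 'f', ΩPW the region between 'f' and 'g', and wLV, wPW the corresponding widths; minimizing E over f simultaneously homogenizes the intensities on either side of 'f' (data term), keeps 'f' smooth, and holds the LV and PW widths near the 3:1 proportion.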

[0050] Thus, by localizing the pericardium region and/or the septum region, the posterior wall may be segmented. In addition, the thickness of the posterior wall may be measured with little or no variability. Also, since the posterior wall thickness is automatically measured, erroneous diagnoses may be substantially reduced.

[0051] Referring to FIG. 4, a flow chart depicting an exemplary method for automatically localizing a region of interest in an object, in accordance with aspects of the present disclosure, is depicted. It may be noted that the method of FIG. 4 is described in terms of the various components of FIGs. 1-3.

[0052] The method 400 may be described in a general context of computer executable instructions. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types. In certain embodiments, the computer executable instructions may be located in computer storage media, such as a memory, local to an imaging system 106 (see FIG. 1) and in operative association with a processing subsystem 110. In certain other embodiments, the computer executable instructions may be located in computer storage media, such as memory storage devices, that are removed from the imaging system. Moreover, the method for localizing the regions of interest in the object 102 may be implemented in hardware, software, or combinations thereof.

[0053] As will be appreciated, during a typical imaging session, the patient 103 is positioned for imaging and the clinician attempts to image desired objects, such as the heart 102 of the patient 103. Accordingly, the method starts at step 402 where at least one ultrasound image of the object 102 is obtained. To that end, the processing subsystem 110 may be configured to receive the ultrasound image 202 of the object 102. As previously noted, the processing subsystem 110 may be configured to receive the ultrasound image from the acquisition subsystem 108, in one example.

[0054] Subsequently, at step 404, one or more anatomical features in the received ultrasound image 202 may be identified. As previously noted, in the ultrasound image 202, the anatomical features may include tubular structures. In particular, the component extracting unit 304 may be used to extract or identify the anatomical features in the ultrasound image 202. Specifically, the component extracting unit 304 may be configured to enhance contrast of the tubular structures in the ultrasound image 202. In one example, the received ultrasound image 202 may be processed with a Frangi vesselness algorithm to enhance the contrast of the tubular structures. Upon enhancing the contrast of the tubular structures, the component extracting unit 304 may be configured to filter the ultrasound image to generate a binary image. The binary image typically includes only the tubular structures in the received ultrasound image.
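The enhance-then-threshold step above can be sketched with scikit-image, which provides a Frangi vesselness filter. This is a minimal illustration, assuming a bright-tube convention (pericardium and septum appear bright in B-mode echo, hence `black_ridges=False`) and an Otsu threshold as one plausible binarization choice; the patent does not mandate either parameter.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu

def extract_tubular_structures(image):
    """Enhance tubular structures with a Frangi vesselness filter,
    then threshold the response to a binary image that retains
    only the tube-like components."""
    vesselness = frangi(image, black_ridges=False)  # Hessian-based tube enhancement
    thresh = threshold_otsu(vesselness)             # one plausible threshold choice
    return vesselness > thresh

# Synthetic frame: a bright horizontal band on a dark background,
# standing in for a tubular structure such as the pericardium.
frame = np.zeros((64, 64))
frame[30:34, 5:60] = 1.0
binary = extract_tubular_structures(frame)
```

The resulting binary mask can then be passed to connected-component labeling so that each tubular structure becomes a scorable candidate.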

[0055] Further, at step 406, at least one anatomical feature from the plurality of anatomical features that is associated with a first region of interest may be localized based on a scoring of each of the plurality of anatomical features. To that end, the localizing unit 306 may be used for localizing the at least one anatomical feature associated with the first region of interest. It may be noted that the terms "first region of interest" and "pericardium" may be used interchangeably. Particularly, the localizing unit 306 may be configured to score each of the anatomical features in the ultrasound image 202 based on at least one of a size, an intensity, and a distance of the corresponding anatomical feature from the first edge 222 of the image frame 202. More specifically, the score corresponding to the anatomical feature may be computed based on one or more parameters of equation (1). Further, since the anatomical feature that corresponds to the first region of interest 204 receives a maximum score as per equation (1), the localizing unit 306 may be configured to identify the anatomical feature having the maximum score to localize the first region of interest. The method of FIG. 4 may be better understood with reference to FIG. 5.
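Equation (1) is not reproduced in this text, but the cues it combines (size, intensity, distance from the first edge) suggest a normalized additive score. The following is a hypothetical sketch of such a scoring and maximum-selection step; the component attributes, weights, and normalizations are illustrative assumptions, not the patent's exact formula.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Component:
    size: float           # pixel count of the connected component
    mean_intensity: float # mean pixel intensity within the component
    edge_distance: float  # distance from the first edge of the frame

def score(c, max_size, max_intensity, max_length):
    """Hypothetical score: larger, brighter components nearer the
    first edge of the frame score higher."""
    return (c.size / max_size
            + c.mean_intensity / max_intensity
            + 1.0 - c.edge_distance / max_length)

def localize(components, max_length):
    """Score every component and return the index of the maximum."""
    max_size = max(c.size for c in components)
    max_int = max(c.mean_intensity for c in components)
    scores = [score(c, max_size, max_int, max_length) for c in components]
    return int(np.argmax(scores)), scores

# Hypothetical components: pericardium-like (large, bright, near edge),
# septum-like, and mitral-valve-like structures.
comps = [Component(900, 220, 10),
         Component(600, 180, 120),
         Component(150, 150, 200)]
best, scores = localize(comps, max_length=256)
```

Under this scoring, the pericardium-like component receives the maximum score and is selected, and the second-highest score would localize the septum, mirroring paragraphs [0043] and [0060].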

[0056] Turning now to FIG. 5, a diagrammatical representation 500 of an exemplary method for measuring the thickness of posterior wall in diastole (PWd) is presented. For ease of understanding, the method is described with reference to the components of FIGs. 1-3. In certain embodiments, the method may be configured to preprocess the plurality of image frames 302 corresponding to the heart 102. A PLAX quality metric (PQM) indicator may be employed to determine a quality of the image frames 302, in one example. Moreover, in one embodiment, the PQM may be used as an indicator to determine if an ultrasound image frame 302 is suitable to be used for the automated localization of FIG. 4. In one example, the plurality of image frames 302 may include Parasternal Long Axis (PLAX) B-mode echocardiograms.

[0057] The method starts at step 502, where an image frame 504 representative of the heart 102 may be received. As noted hereinabove, in one example, the received image frame 504 may be representative of a PLAX view.

According to aspects of the present disclosure, the method for measuring the thickness of the posterior wall in diastole (PWd) entails verifying, in real-time, the presence of one or more regions of interest in the image frame 504. In one example, the regions of interest may include the pericardium 204, the septum 210, and the mitral valve 216. It may be noted that if the three regions of interest, such as the pericardium 204, the septum 210, and the mitral valve 216, are visible in an image frame, that image frame may be assumed to have a desired quality.

[0058] Subsequently, the image frame 504 may be processed to enhance the contrast of the tubular structures, as depicted by step 506. In one embodiment, the image frame 504 may be processed via a Frangi vesselness filter or a Hessian based enhancement filter to enhance the contrast of the tubular structures. Particularly, the image frame 504 may be filtered using the Frangi vesselness filter or the Hessian based enhancement filter to mitigate any intensity inhomogeneity. Subsequent to the processing by the Frangi vesselness filter or the Hessian based enhancement filter, an intermediate image such as a vesselness image frame 508 may be generated. As will be appreciated, the Frangi vesselness filter or the Hessian based enhancement filter is an enhancement filter that is used to enhance the contrast of tubular structures with respect to the background. The tubular structures may correspond to the regions of the interest such as the pericardium 204, the septum 210, and the mitral valve 216.

[0059] Moreover, at step 510, the vesselness image frame 508 may be processed to generate a binary image 512. It may be noted that in the vesselness image frame 508, in addition to the tubular structures, there may exist regions of near-field haze and boundary artifacts. Therefore, it may be desirable to delete any undesirable regions from the vesselness image frame 508. To that end, in one embodiment, the vesselness image frame 508 may be thresholded to generate the binary image 512. This binary image 512 may include the three tubular structures corresponding to the pericardium 204, the septum 210, and the mitral valve 216 in addition to the other regions of the heart and/or imaging artifacts.

[0060] Further, at step 514, the generated binary image 512 may be processed to provide a score to each of the tubular structures in the binary image 512. These tubular structures may correspond to at least the pericardium 204, the septum 210, and the mitral valve 216. In particular, the binary image 512 may be processed to compute a score for each of these tubular structures based on one or more of their size, intensity, and the distance from the first edge 222 of the image frame, in one embodiment. More specifically, the score corresponding to the tubular structure may be computed based on one or more parameters of equation (1). Further, since the pericardium 204 receives a maximum score as per equation (1), the tubular structure having the maximum score may be selected, thereby localizing the pericardium 204, as depicted in the image 516. In FIG. 5, this localized pericardium 204 is represented by a reference numeral 517 in the image 516. Also, other regions of interest, such as the septum 210 and the mitral valve 216 may be localized based on their corresponding scores. In one example, the score corresponding to the septum 210 may be representative of a second highest score.

[0061] Furthermore, at step 518, the image 516 may be processed to identify a first layer 519 and/or a second layer of the pericardium 204, as shown in the image 520. The first layer may correspond to an inner layer of the pericardium 204, while the second layer may correspond to an outer layer of the pericardium 204. To that end, in one example, a distal end of the pericardium 204 may be selected based on the intensity of pixels at the boundary of the pericardium 204. This distal end of the pericardium 204 may be used to identify the first layer 519 of the pericardium 204. It may be noted that the terms "first layer" and "inner layer" may be used interchangeably, and the terms "second layer" and "outer layer" may be used interchangeably.

[0062] In addition, at step 522, a generic polynomial may be fit to the inner layer 519 of the pericardium 204 to represent the inner layer of the pericardium 204 over a series of image frames 302. In one example, a second degree polynomial of the form y = ax² + bx + c may be fit to the inner layer of the pericardium 204 for tracking the inner layer 519 of the pericardium 204 over a series of image frames 302. This polynomial curve 526 is depicted in image 524.

[0063] Upon identifying the pericardium 204 and the septum 210, the segmentation unit 308 may be configured to use the pericardium 204 to accurately segment the posterior wall 206 in the image frame 302. In one example, the posterior wall 206 may be segmented by imposing a width constraint on the image frame. The width constraint indicates that the septum wall thickness (IVsd), LV internal diameter (LVId), and Posterior wall thickness (PWTd) are in the ratio of 1:3:1 for a normal patient, in one example. Further, the thickness measurement unit 310 may be used to measure the thickness of the segmented posterior wall. In one example, a thickness algorithm may be used to measure the thickness of the posterior wall.

[0064] The various embodiments of the systems and methods described hereinabove aid in automatically measuring the thickness of the posterior wall in diastole (PWd). Also, as the image is processed automatically to localize the pericardium and measure the thickness of PWd, the variability in the measurement of PWd thickness is substantially reduced. In addition, since the image is processed automatically using the regions of interest in the image, the cost and time for diagnoses is reduced, thereby enhancing clinical workflow.

[0065] While only certain features of the invention have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

CLAIMS

I/We claim:

1. A method, comprising:

receiving at least one ultrasound image of an object;

identifying a plurality of anatomical features in the at least one ultrasound image; and

localizing at least one anatomical feature corresponding to a first region of interest from the plurality of anatomical features based on a scoring of each of the plurality of anatomical features.

2. The method of claim 1, wherein identifying the plurality of anatomical features in the at least one ultrasound image comprises enhancing a contrast of tubular structures in the received ultrasound image.

3. The method of claim 2, wherein enhancing the contrast of tubular structures in the received ultrasound image comprises processing the received ultrasound image using a Hessian filter.

4. The method of claim 3, further comprising:
generating a binary image comprising the tubular structures by filtering the at least one ultrasound image; and
identifying the plurality of anatomical features using the generated binary image comprising the tubular structures.

5. The method of claim 1, wherein localizing the at least one anatomical feature comprises:

scoring each of the plurality of anatomical features in the at least one ultrasound image based on one or more of a size, an intensity, and a distance of the plurality of anatomical features from a first edge of the at least one ultrasound image; and

identifying an anatomical feature having a first score as the at least one anatomical feature corresponding to the first region of interest from the plurality of anatomical features.

6. The method of claim 5, wherein scoring each of the plurality of anatomical features comprises computing a score for each of the plurality of anatomical features based on at least one of a size of an anatomical feature, a maximum size of a largest anatomical feature, a mean intensity of pixels corresponding to the anatomical feature, a maximum intensity of pixels corresponding to the anatomical feature, orientation of the anatomical feature, probe angle, distance of the anatomical feature from a first edge of the at least one ultrasound image, and a maximum length of the at least one ultrasound image.

7. The method of claim 1, wherein the at least one ultrasound image comprises a Parasternal Long Axis (PLAX) view image.

8. The method of claim 1, further comprising identifying at least one of a first layer and a second layer corresponding to the first region of interest based on the at least one anatomical feature associated with the first region of interest.

9. The method of claim 8, further comprising applying at least one polynomial fit on at least one of the first layer and the second layer of the first region of interest for tracking the first region of interest over a series of ultrasound images, wherein the series of ultrasound images comprises the at least one ultrasound image.

10. The method of claim 8, further comprising:
identifying a second region of interest based on the at least one of the identified first layer and the second layer of the first region of interest; and
determining a thickness of the second region of interest.

11. The method of claim 10, further comprising assessing a condition of the object based on the determined thickness of the second region of interest.

12. A method, comprising:
receiving at least one ultrasound image of an object, wherein the at least one ultrasound image comprises a plurality of anatomical features;
localizing a first anatomical feature associated with a first region of interest and a second anatomical feature associated with a second region of interest based on a scoring of the plurality of anatomical features in the at least one ultrasound image; and
determining a thickness of a third region of interest based on the localized first anatomical feature and the second anatomical feature.

13. The method of claim 12, wherein localizing the first anatomical feature associated with the first region of interest and the second anatomical feature associated with the second region of interest comprises:
enhancing a contrast of tubular structures in the received at least one ultrasound image, wherein the tubular structures correspond to at least the first region of interest and the second region of interest; and
identifying the anatomical features associated with each of the tubular structures in the received ultrasound image.

14. The method of claim 13, wherein enhancing the contrast of the tubular structures comprises generating a binary image comprising the tubular structures by filtering the at least one ultrasound image.

15. The method of claim 14, wherein identifying the anatomical features comprises identifying the anatomical features using the generated binary image comprising the tubular structures.

16. The method of claim 12, wherein localizing the first anatomical feature associated with the first region of interest and the second anatomical feature associated with the second region of interest further comprises:

scoring each of the plurality of anatomical features in the at least one ultrasound image based on one or more of a size, an intensity, and a distance of the plurality of anatomical features from a first edge of the at least one ultrasound image; and
identifying an anatomical feature having a first score as the first anatomical feature and an anatomical feature having a second score as the second anatomical feature from the plurality of anatomical features.

17. The method of claim 12, wherein determining the thickness of the third region of interest comprises:

segmenting the third region of interest based on the localized first anatomical feature and the second anatomical feature; and

determining the thickness of the segmented third region of interest.

18. The method of claim 17, wherein segmenting the third region of interest comprises:
localizing a fourth region of interest based on an intensity of pixels corresponding to a region between the first region of interest and the second region of interest; and
detecting the third region of interest based on one or more of the localized first region of interest, the second region of interest, and the fourth region of interest.

19. A system, comprising:

an acquisition subsystem configured to acquire a series of ultrasound images of an object;
a processing subsystem operatively coupled to the acquisition subsystem and configured to:
localize a first anatomical feature associated with a first region of interest and a second anatomical feature associated with a second region of interest based on a scoring of a plurality of anatomical features in at least one ultrasound image in the series of ultrasound images; and
determine a thickness of a third region of interest based on the localized first anatomical feature and the second anatomical feature.

20. The system of claim 19, wherein the processing subsystem comprises:
a component extracting unit configured to score each of the plurality of anatomical features in the at least one ultrasound image based on at least one of a size, an intensity, and a distance of the plurality of anatomical features from a first edge of the series of ultrasound images; and
a localizing unit configured to select the first anatomical feature having a first score and the second anatomical feature having a second score from the plurality of anatomical features.

21. The system of claim 20, wherein the processing subsystem further comprises:

a segmentation unit configured to segment the third region of interest based on the localized first anatomical feature and the second anatomical feature; and

a measurement unit configured to determine the thickness of the segmented third region of interest.

22. A non-transitory computer readable medium that stores instructions executable by one or more processors to perform a method, comprising:
receiving at least one ultrasound image of an object;
identifying a plurality of anatomical features in the at least one ultrasound image; and
localizing at least one anatomical feature associated with a region of interest from the plurality of anatomical features based on a scoring of each of the plurality of anatomical features.

Documents

Application Documents

# Name Date
1 4534-CHE-2012 POWER OF ATTORNEY 31-10-2012.pdf 2012-10-31
2 4534-CHE-2012 FORM-3 31-10-2012.pdf 2012-10-31
3 4534-CHE-2012 FORM-2 31-10-2012.pdf 2012-10-31
4 4534-CHE-2012 FORM-18 31-10-2012.pdf 2012-10-31
5 4534-CHE-2012 FORM-1 31-10-2012.pdf 2012-10-31
6 4534-CHE-2012 DRAWINGS 31-10-2012.pdf 2012-10-31
7 4534-CHE-2012 DESCRIPTION(COMPLETE) 31-10-2012.pdf 2012-10-31
8 4534-CHE-2012 ABSTRACT 31-10-2012.pdf 2012-10-31
9 4534-CHE-2012 CLAIMS 31-10-2012.pdf 2012-10-31
10 4534-CHE-2012 CORRESPONDENCE OTHERS 31-10-2012.pdf 2012-10-31
11 abstract4534-CHE-2012.jpg 2014-02-26
12 4534-CHE-2012 CORRESPONDENCE OTHERS 07-07-2014.pdf 2014-07-07
13 4534-CHE-2012 FORM-1 07-07-2014.pdf 2014-07-07
14 4534-CHE-2012 POWER OF ATTORNEY 07-07-2014.pdf 2014-07-07
15 4534-CHE-2012-FER.pdf 2018-05-11
16 4534-CHE-2012-AbandonedLetter.pdf 2018-11-30

Search Strategy

1 4534searchh_22-03-2018.pdf