
Methods and Systems for Touch-Based Ultrasound Image Enhancement

Abstract: Methods and systems for ultrasound imaging are presented. A touch-based input corresponding to an ultrasound image frame displayed on a user interface is received. A region of interest corresponding to the image frame is defined based on the touch-based input. Further, at least one gesture-based input on the user interface is received. Moreover, one or more image processing functions are identified based on the gesture-based input. The one or more image processing functions are applied to the region of interest to aid in enhancing at least the region of interest in the image frame. FIG. 2


Patent Information

Application #: 4420/CHE/2013
Filing Date: 30 September 2013
Publication Number: 14/2015
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: GEHC_IN_IP-docketroom@ge.com
Parent Application:

Applicants

GENERAL ELECTRIC COMPANY
1 RIVER ROAD, SCHENECTADY, NEW YORK 12345

Inventors

1. SINGHAL, NITIN
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
2. KOTESWAR, SRINIVAS
122, EPIP PHASE 2, HOODI VILLAGE, WHITEFIELD ROAD, BANGALORE 560 066
3. ARORA, MANISH
03-05, MERAH SAGA, 23, JALAN MERAH SAGA 278 102

Specification

METHODS AND SYSTEMS FOR TOUCH-BASED ULTRASOUND IMAGE ENHANCEMENT

BACKGROUND

[0001] Embodiments of the present disclosure relate generally to diagnostic imaging, and more particularly to systems and methods for touch-based enhancement of ultrasound images.

[0002] Medical diagnostic ultrasound is an imaging modality that employs ultrasound waves to probe the acoustic properties of biological tissues and produces corresponding images. Particularly, diagnostic ultrasound systems are used to provide an accurate visualization of muscles, tendons, and other internal organs to assess their size, structure, and any pathological lesions using near real-time tomographic images. Further, diagnostic ultrasound is used for determining movement, for example, corresponding to blood flow within the body. Additionally, ultrasound systems find use in therapeutics where an ultrasound probe is used to guide interventional procedures such as biopsies or to track an interventional device.

[0003] Furthermore, owing to the ability to image the underlying tissues without the use of ionizing radiation, ultrasound systems are also extensively used in angiography and prenatal scanning. By way of example, ultrasound imaging may be used in the second and third trimesters of pregnancy for measuring the length of the femur of a fetus to determine the gestational age and fetal growth.

[0004] However, imaging the femur is a challenging procedure. Generally, an accurate measurement of the femur length entails measurement of a proximal femur positioned at about 90 degrees to the ultrasound beam in an image frame. Further, clinical guidelines stipulate the inclination of the femur in the image frame to be within 30 degrees to the face of an ultrasound transducer. Typically, a sonographer strives to acquire an image frame that satisfies the clinically stipulated guidelines by repeatedly repositioning a transducer probe and/or reconfiguring one or more imaging controls.

[0005] Conventional ultrasound systems include a control panel with a plurality of imaging controls, for example buttons and switches, for controlling image data acquisition, pre- and post-processing, and display operations. These controls may need to be reconfigured multiple times during imaging to obtain a clinically acceptable image frame for imaging a target anatomy. Repeatedly switching between a display and the control panel, however, may distract the sonographer and cause repetitive stress syndromes, while also prolonging the scanning time. Furthermore, use of an independent control panel in a clinical setting exposes the ultrasound system to fluids and other chemical and biological materials, thus resulting in sterilization challenges.

[0006] In recent times, certain portable ultrasound systems have been implemented on smartphone-based and/or tablet-based platforms to allow movement and use of the ultrasound system in ambulances, hospital emergency rooms, outpatient centers, and remote medical offices. For example, use of a smartphone-based ultrasound system allows quality imaging to be provided in rural areas, where both expensive imaging systems and skilled sonographers are typically in short supply. However, even these portable systems include a plurality of controls, for example, physical buttons or dedicated touch-enabled portions on a display panel for use in reconfiguring imaging controls. These additional buttons may not only prolong imaging time, but also limit the available area on the panel for displaying the image.

[0007] Additionally, conventional implementations of the imaging controls apply corresponding image processing functions on the entire image. Applying the image processing functions such as segmentation, filtering, time gain correction, and/or contrast enhancement on the entire image, however, is computationally intensive. Particularly, implementing such computationally intensive processes in battery-operated portable ultrasound systems is inefficient, and thus, may severely hinder the provision of imaging capabilities in underdeveloped regions with erratic or undependable power supply infrastructure.

BRIEF DESCRIPTION

[0008] Certain aspects of the present disclosure are drawn to exemplary methods and systems for ultrasound imaging. A touch-based input corresponding to an ultrasound image frame displayed on a user interface is received. A region of interest corresponding to the image frame is defined based on the touch-based input. Further, at least one gesture-based input on the user interface is received. Moreover, one or more image processing functions are identified based on the gesture-based input. The one or more image processing functions are applied to the region of interest to aid in enhancing at least the region of interest in the image frame.

DRAWINGS

[0009] These and other features and aspects of embodiments of the present technique will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

[0010] FIG. 1 is a schematic representation of an exemplary ultrasound imaging system, in accordance with aspects of the present disclosure;

[0011] FIG. 2 is a flow diagram illustrating an exemplary method for ultrasound imaging, in accordance with aspects of the present disclosure;

[0012] FIG. 3 is a diagrammatical representation of certain exemplary gestures used in the method described with reference to FIG. 2, in accordance with aspects of the present disclosure;

[0013] FIG. 4 is an exemplary ultrasound image frame on which a touch-based input is received, in accordance with aspects of the present disclosure; and

[0014] FIG. 5 is the ultrasound image frame illustrated in FIG. 4 including an exemplary representation of an enhanced region of interest determined using the touch-based input, in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0015] The following description presents systems and methods for touch-based enhancement of ultrasound images. Particularly, certain embodiments illustrated herein describe exemplary tablet-based systems and methods that allow gesture-based enhancement of a region of interest (ROI) in an ultrasound image. Each of the gesture-based inputs is indicative of one or more corresponding image processing functions, which may be applied to the ROI to generate an enhanced visualization of the ROI in real-time. The enhanced visualization of the ROI, in turn, allows a user such as a sonographer to evaluate structural and/or functional condition of the imaged anatomy with greater accuracy. Moreover, limiting the application of the image processing functions to the ROI reduces the computational effort and power requirement, thus prolonging the operational life of the ultrasound imaging system.

[0016] Although the following description includes embodiments relating to ultrasound imaging, these embodiments may also be implemented in other medical imaging systems for enhancing the resulting diagnostic images in real-time. These systems, for example, may include magnetic resonance imaging (MRI) systems, computed-tomography (CT) systems, and systems that monitor targeted drug and gene delivery. In certain embodiments, the present systems and methods may also be used for image enhancement during non-medical imaging, for example, during nondestructive testing of elastic materials that may be suitable for ultrasound imaging and/or security screening. An exemplary environment that is suitable for practicing various implementations of the present system is described in the following sections with reference to FIG. 1.

[0017] FIG. 1 illustrates an ultrasound system 100 for imaging one or more regions of interest (ROI) in a target object. In one embodiment, the ultrasound system 100 may be configured as a console system or a cart-based system capable of receiving touch-based input. Alternatively, the ultrasound system 100 may be configured, for example, as a personal digital assistant, a pocket personal computer (PC), a notebook computer, a hand-held computing system, a smartphone-based device, and/or a laptop-style system. For discussion purposes, the present embodiment describes the ultrasound system 100 as a tablet-based ultrasound imaging system.

[0018] Further, in the present disclosure, the ultrasound system 100 is discussed with reference to imaging one or more target ROI in biological tissues of interest. Accordingly, in the present embodiment, the target ROI, for example, may include cardiac tissues, liver tissues, breast tissues, prostate tissues, thyroid tissues, lymph nodes, vascular structures, adipose tissue, muscular tissue, and/or blood cells. Alternative embodiments, however, may also employ the system 100 for imaging non-biological materials such as manufactured parts, plastics, aerospace composites, and/or foreign objects within the body such as a catheter or a needle.

[0019] In certain embodiments, the system 100 includes transmit circuitry 102 that generates a pulsed waveform to drive an array 104 of transducer elements 106 housed within a transducer probe 108. Particularly, the pulsed waveform drives the array 104 of transducer elements 106 to emit ultrasonic pulses into a body or volume of interest of a subject (not shown). The transducer elements 106, for example, may include piezoelectric, piezoceramic, capacitive, and/or microfabricated crystals. At least a portion of the ultrasonic pulses generated by the transducer elements 106 back-scatter from the target ROI to produce echoes that return to the transducer array 104 and are received by a receive circuitry 110 for further processing.

[0020] In the embodiment illustrated in FIG. 1, the receive circuitry 110 is coupled to a beamformer 112 that processes the received echoes and outputs corresponding radio frequency (RF) signals. Although FIG. 1 illustrates the transducer array 104, the transmit circuitry 102, the receive circuitry 110, and the beamformer 112 as distinct elements, in certain embodiments, one or more of these elements may be implemented together as an independent acquisition subsystem in the system 100. The acquisition subsystem may be configured to acquire image data corresponding to the patient for further processing.
[0021] Subsequently, a processing unit 114 receives and processes the RF signals according to a plurality of selectable ultrasound modalities in near real-time and/or offline mode. The processing unit 114 includes devices such as one or more general-purpose or application-specific processors, digital signal processors, microcomputers, microcontrollers, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGA), or other suitable devices in communication with other components of the system 100.
[0022] In certain embodiments, the processing unit 114 also provides control and timing signals for configuring one or more imaging parameters for imaging the target ROI. By way of example, the imaging parameters may include a sequence of delivery of different pulses, frequency of the pulses, a time delay between two different pulses, intensity of the pulses, mode of ultrasound imaging, and/or other such imaging parameters. Particularly, in one embodiment, the processing unit 114 is operatively coupled to a power source 116, for example through a communications link 117, to drive one or more components of the system 100. Accordingly, the power source 116 may include, for example, a fixed current outlet and/or a battery to provide drive voltage to the transducer probe 108, the transmit circuitry 102 and/or the receive circuitry 110, for imaging the target ROI.
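By way of illustration only, the imaging parameters enumerated above might be grouped into a single configuration object, as in the following Python sketch; all field names and default values are assumptions rather than anything prescribed by the present disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImagingParameters:
    """Illustrative grouping of the imaging parameters named in [0022].

    Field names and defaults are hypothetical; the disclosure does not
    prescribe a particular data structure.
    """
    pulse_sequence: Tuple[str, ...] = ("pulse_a", "pulse_b")  # delivery order of different pulses
    pulse_frequency_mhz: float = 3.5       # frequency of the pulses
    inter_pulse_delay_us: float = 100.0    # time delay between two different pulses
    pulse_intensity: float = 0.8           # normalized intensity of the pulses
    imaging_mode: str = "B-mode"           # mode of ultrasound imaging
```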

[0023] In certain embodiments, the processing unit 114 stores the delivery sequence, frequency, time delay, and/or beam intensity, for example, in a memory device 118 for use in imaging the target ROI. The memory device 118 includes storage devices such as a random access memory, a read only memory, a disc drive, solid-state memory device, and/or a flash memory. In one embodiment, the processing unit 114 uses the stored information for configuring the transducer elements 106 to direct one or more groups of pulse sequences toward the target ROI. Subsequently, the processing unit 114 tracks the displacements in the target ROI caused in response to the incident pulses to determine corresponding tissue characteristics. The displacements and tissue characteristics, thus determined, are stored in the memory device 118. The displacements and tissue characteristics may also be communicated to a medical practitioner, such as a sonographer, for further diagnosis.
[0024] In certain embodiments, the processing unit 114 is further coupled to a user interface 120 for receiving commands and inputs from a user such as the medical practitioner, activating application functionality, and displaying resulting audio, video, and/or graphical output. As illustrated in FIG. 1, the user interface 120 may be a tablet-based interface with a touchscreen display 122 capable of accepting, for example, stylus, pen, keyboard, and/or human touch input.
[0025] In accordance with aspects of the present disclosure, an initial touch by the user may be used to define a ROI for further processing. For example, while performing an ultrasound scan, a user may want to study a particular region or feature visualized in an image frame in greater detail. The user may touch this region to select the region for enhancement, for example, via scaling, contrast enhancement, speckle reduction imaging (SRI) filtering, and/or segmentation.
[0026] In one embodiment, the processing unit 114 may be configured to determine location coordinates corresponding to the touch-based input on the user interface 120 and define a ROI based on the determined location coordinates. In certain embodiments, the defined ROI may be of a predetermined shape. For example, the ROI may be a circular region, a rectangular region, a triangular region, a square region, an oval region, or a free form region. Alternatively, the ROI may correspond to the shape delineated by the user on the user interface 120.
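As one minimal sketch of this step, the touch coordinates might be mapped to a square ROI of predetermined size clipped to the frame bounds; the function name, the 64-pixel half-size, and the pixel-coordinate convention are illustrative assumptions, not part of the disclosure.

```python
def define_roi(touch_xy, frame_shape, half_size=64):
    """Map the location coordinates of a touch-based input to a square ROI
    of a predetermined size, clipped to the bounds of the image frame.

    touch_xy:    (x, y) coordinates of the touch-based input.
    frame_shape: (height, width) of the displayed image frame in pixels.
    half_size:   half the side length of the square ROI, in pixels.
    Returns slice bounds (y0, y1, x0, x1) into the image array.
    """
    x, y = touch_xy
    height, width = frame_shape
    x0, x1 = max(0, x - half_size), min(width, x + half_size)
    y0, y1 = max(0, y - half_size), min(height, y + half_size)
    return y0, y1, x0, x1

# Example: a touch at pixel (402, 310) on a 600x800 frame.
roi_bounds = define_roi((402, 310), (600, 800))  # -> (246, 374, 338, 466)
```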
[0027] The processing unit 114 may be configured to identify further touch-based inputs as one or more determined gestures that may be used to control image processing functionality at ROI level. Specifically, a gesture-based input corresponding to a predetermined movement or pattern received at the user interface 120 triggers one or more commands, command sequences, workflow, and/or other image processing functionality.
[0028] In one embodiment, these gesture-based inputs, for example, may include a double tap, a single tap, a pinch, a zoom, a swipe, a scroll, a long press, a two-finger tap, a two-finger scroll, a pinch-open, a pinch-close, and/or a rotate gesture. In accordance with exemplary aspects of the present disclosure, these gestures may be correlated with one or more image processing functions. Particularly, in one embodiment, the system 100 employs a default correlation of gestures with one or more image processing functions. For example, a pinch-open gesture may be associated with obtaining an 8X scaling of the ROI. Alternatively, the ROI may be scaled to a determined scaling factor based on a time and/or length of the pinch-open gesture. In another embodiment, the swipe gesture may allow for contrast enhancement at the ROI level. Additionally, a long press may implement time gain enhancement.
[0029] Further, in certain further embodiments, a single tap may be used to invoke SRI filtering or a segmentation algorithm, whereas a double tap may allow selection of acquisition parameters such as depth of focus and pulse sequence characteristics for a subsequent ultrasound scan. Moreover, a two-finger tap may allow switching between different modes of ultrasound imaging. For example, the two-finger tap may allow switching between greyscale imaging, color Doppler imaging, and/or Doppler gate imaging.
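A default correlation such as the one described in the preceding two paragraphs might be represented as a simple lookup table. The following Python sketch uses placeholder routines, since the disclosure does not specify the underlying implementations of the named functions.

```python
import numpy as np

def scale(roi, factor=8):
    """Nearest-neighbour upscaling, standing in for 8X scaling of the ROI."""
    return np.repeat(np.repeat(roi, factor, axis=0), factor, axis=1)

def enhance_contrast(roi):
    """Min-max contrast stretch, standing in for contrast enhancement."""
    lo, hi = float(roi.min()), float(roi.max())
    return (roi - lo) / max(hi - lo, 1e-6)

def time_gain_enhance(roi):
    """Stand-in for time gain enhancement."""
    return roi

def sri_filter(roi):
    """Stand-in for speckle reduction imaging (SRI) filtering."""
    return roi

# Default correlation of gestures with ROI-level processing functions.
DEFAULT_GESTURE_MAP = {
    "pinch_open": scale,              # 8X scaling of the ROI
    "swipe": enhance_contrast,        # contrast enhancement at the ROI level
    "long_press": time_gain_enhance,  # time gain enhancement
    "single_tap": sri_filter,         # SRI filtering
}
```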

[0030] Similarly, other gestures such as clockwise and anticlockwise rotation may be used to navigate through clinical applications such as a picture archiving and communication system (PACS), a radiology information system (RIS), a hospital information system (HIS), and an electronic medical record (EMR). Furthermore, certain gestures may be used to transmit, retrieve, and/or store data, for example, to a local memory device, a remotely connected storage device, and/or a cloud-based storage system.
[0031] In certain embodiments, a single gesture-based input may be associated with two or more image processing functions. For example, the processing unit 114 may interpret an "X" shaped gesture made on or over the user interface 120 as a command to perform SRI filtering followed by segmentation on the ROI. Similarly, upon receiving a two-finger scroll input, the processing unit 114 may automatically advance through images in a cine loop, find an optimal image frame, and determine measurements of the imaged anatomy.
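One way to express such one-to-many correlations is to map each gesture to an ordered list of functions, as in the sketch below; the stand-in routines and names are illustrative assumptions.

```python
def sri_filter(roi):
    return roi  # stand-in for speckle reduction imaging filtering

def segment(roi):
    return roi  # stand-in for a segmentation algorithm

# The "X"-shaped gesture above maps to SRI filtering followed by segmentation.
GESTURE_PIPELINES = {
    "x_shape": [sri_filter, segment],
}

def apply_gesture(gesture, roi):
    """Apply each function correlated with the gesture to the ROI, in order."""
    for fn in GESTURE_PIPELINES[gesture]:
        roi = fn(roi)
    return roi
```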
[0032] Accordingly, in one embodiment, the system 100 includes a gesture recognition unit 121 configured to identify the gesture-based input. When using a capacitive touchscreen as the user interface 120, the gesture recognition unit 121, for example, may be configured to identify the gesture based on a change in capacitance between the regions on the user interface 120 that received the touch-based input and those that did not. Alternatively, when using a resistive touchscreen as the user interface 120, the gesture recognition unit 121 may be configured to identify the gesture based on a change in resistance between the different layers of the resistive touchscreen coated with a resistive material and separated by invisible separator dots. In one embodiment, the gesture recognition unit 121 may be configured to determine the change in resistance by measuring the current or voltage between the different layers caused by the touch-based input on the user interface 120. Additionally, the gesture recognition unit 121 may determine a pattern in the touch-based input and match the identified pattern to stored values to identify the corresponding gesture.
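The disclosure leaves the pattern-matching step open. One minimal way to match a touch pattern against stored values is nearest-template matching over normalized trajectories, sketched below; this is an illustrative technique under stated assumptions, not the recognizer claimed here.

```python
import numpy as np

def _normalize(stroke):
    """Translate a trajectory to its centroid and scale it to unit extent."""
    pts = np.asarray(stroke, dtype=float)
    pts = pts - pts.mean(axis=0)
    return pts / max(float(np.abs(pts).max()), 1e-6)

def match_gesture(stroke, templates, n=32):
    """Return the name of the stored template closest to the input pattern.

    stroke:    sequence of (x, y) samples of the touch-based input.
    templates: dict mapping gesture names to recorded (x, y) trajectories.
    """
    pts = _normalize(stroke)
    best_name, best_dist = None, float("inf")
    for name, tpl in templates.items():
        tpl = _normalize(tpl)
        # Compare at n corresponding samples after index-based resampling.
        ip = np.linspace(0, len(pts) - 1, n).round().astype(int)
        it = np.linspace(0, len(tpl) - 1, n).round().astype(int)
        dist = float(np.linalg.norm(pts[ip] - tpl[it], axis=1).mean())
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```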

[0033] In certain embodiments, the gesture recognition unit 121 may be configured to monitor for gesture-based input only during a "gesture recognition mode" corresponding to a determined period of time, for example, 2 seconds. Specifically, the gesture recognition unit 121 may be configured to detect the gestures only during the determined period of time following an initial touch received on the user interface 120. In one embodiment, the gesture recognition unit 121 may pass control back to the processing unit 114 if no gestures are detected during the determined period of time. However, if a gesture is received within the determined period of time, the gesture is identified and correlated to corresponding image processing functions.
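A sketch of such a gesture recognition mode, using the 2-second example window from the text; the class and method names are assumptions.

```python
import time

class GestureRecognitionMode:
    """Accept gesture-based input only within a fixed window after the
    initial ROI-defining touch. The 2-second default is the example value
    given in the text."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self._opened_at = None

    def open(self):
        """Enter gesture recognition mode on the initial touch."""
        self._opened_at = time.monotonic()

    def accepts_gesture(self):
        """True while a gesture may still be accepted; once this expires,
        control would pass back to the processing unit."""
        return (self._opened_at is not None
                and time.monotonic() - self._opened_at <= self.window_s)
```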
[0034] It may be noted that the correlations between the gestures and the one or more image processing functions, such as described herein, are only exemplary. The system 100 may employ several alternative and/or additional correlations to allow for gesture-based enhancement of ROI in an image frame. Additionally, in certain embodiments, the gesture recognition unit 121 may allow customization of the default and/or existing correlations.
[0035] To that end, in one embodiment, the gesture recognition unit 121 may be configured to enter into a "gesture customization mode." The gesture recognition unit 121 may enter the gesture customization mode, for example, using a predetermined gesture, a button press, and/or a voice command. In certain embodiments, the gesture customization mode may be activated only for a particular group of users via use of a password, a pin, biometrics, and/or other authentication. Once in the gesture customization mode, the gesture recognition unit 121 may allow for reconfiguring, deleting, and/or adding correlations between one or more gestures and corresponding image processing functions. The gesture recognition unit 121 may allow for the reconfiguration, deletion, and/or addition of the gesture-based functionality by detecting the gesture-based input and a selection and/or input of a corresponding functionality received via the user interface 120.

[0036] In one embodiment, upon receiving the gesture-based input, the processing unit 114 processes the RF signal data corresponding to the ROI based on an identification of corresponding image processing functions. Particularly, the processing unit 114 processes the RF signal data to generate two-dimensional (2D) and/or three-dimensional (3D) datasets corresponding to contrast enhanced, time gain corrected, and/or segmented ROI for use in reconstructing and overlaying the enhanced ROI over the original image frame. Similarly, in other embodiments, the processing unit 114 may be configured to generate and overlay, for example, B-mode, color Doppler, power Doppler, M-mode, anatomical M-mode, strain, strain rate, and/or spectral Doppler data corresponding to only the ROI over the original image frame.
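The computational point of this paragraph and of paragraph [0007], that processing is limited to the ROI rather than the entire image, can be sketched as operating on an array slice only. The function below is illustrative and reuses the bounds convention of the define_roi sketch above.

```python
def enhance_roi(frame, roi_bounds, processing_fn):
    """Apply a processing function to the ROI only and overlay the result
    on the original image frame. Restricting the computation to the ROI
    slice is what keeps the per-frame cost and power draw low.

    frame:         2D image array (a single ultrasound image frame).
    roi_bounds:    (y0, y1, x0, x1) slice bounds, e.g. from define_roi.
    processing_fn: any same-size ROI-level function, e.g. contrast stretch.
    """
    y0, y1, x0, x1 = roi_bounds
    enhanced = frame.copy()
    enhanced[y0:y1, x0:x1] = processing_fn(frame[y0:y1, x0:x1])
    return enhanced

# Example (using the earlier sketches):
# enhanced = enhance_roi(frame, define_roi((402, 310), frame.shape[:2]),
#                        enhance_contrast)
```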
[0037] In certain embodiments, the processing unit 114 applies the gesture-based processing to the ROI to generate the enhanced visualization in real-time. As used herein, the term "real-time" may be used to refer to an imaging rate of about 10 to about 30 image frames per second (fps) with a delay of less than 1 second. In one embodiment, the processing unit 114 customizes this delay in reconstructing and rendering the image frames based on specific system and/or imaging requirements. Further, the processing unit 114 processes the RF signal data such that a resulting ROI is rendered at the rate of 10 fps on the associated display 122 communicatively coupled to the processing unit 114.
[0038] Additionally, in one embodiment, the system 100 includes a dedicated video processor 124 that applies the gesture-based processing to a plurality of image frames in a digital video stream. For example, on receiving a particular gesture-based input, the video processor 124 may apply contrast enhancement to defined ROI in all or a determined sequence of image frames. Alternatively, the video processor 124 communicates the frames to a remote location for allowing a remotely located medical practitioner to provide gesture-based input for enhancing a desired ROI.

[0039] The real-time ROI enhancement allows the medical practitioner to determine characteristics of the imaged anatomy more accurately. The determined characteristics may be used to diagnose a patient condition and/or to prescribe treatment. By way of example, the medical practitioner may evaluate the enhanced ROI for detecting a pathological condition such as presence of plaque or other blockages in a blood vessel of the patient. Additionally, real-time availability of diagnostic and/or therapeutic information also allows the medical practitioner to evaluate an efficacy of the interventional procedure in real-time, and further to determine whether to stop or continue the procedure based on the evaluated effect.
[0040] Embodiments of the present system 100, thus, allow for gesture-based interaction with a stylus- and/or touch-based interface to allow ROI enhancement in real-time. Particularly, use of the gesture-based input allows for integration of image display, review, post-processing, and diagnosis in the same user interface, thereby reducing clutter in a clinical setting. Further, provision of customizable gestures provides more ergonomic and intuitive imaging controls, while also limiting repetitive stress injuries. An exemplary method for gesture-based enhancement of an ROI in real-time will be described in greater detail with reference to FIG. 2.
[0041] FIG. 2 illustrates a flow chart 200 depicting an exemplary method for ultrasound imaging. In the present disclosure, embodiments of the exemplary method may be described in a general context of computer executable instructions on a computing system or a processor. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types.
[0042] Additionally, embodiments of the exemplary method may also be practiced in a distributed computing environment where optimization functions are performed by remote processing devices that are linked through a wired and/or wireless communication network. In the distributed computing environment, the computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0043] Further, in FIG. 2, the exemplary method is illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, or combinations thereof. The various operations are depicted in the blocks to illustrate the functions that are performed, for example, during defining a correlation, receiving gesture-based input, and/or image processing phases of the exemplary method. In the context of software, the blocks represent computer instructions that, when executed by one or more processing subsystems, perform the recited operations.
[0044] The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein. For discussion purposes, the exemplary method will be described with reference to the elements of FIG. 1.
[0045] Use of conventional ultrasound systems including a console-based system control for imaging upper and lower extremities, compression, carotid, and/or neo-natal head regions is a complex procedure. In such scenarios, an operator may not be able to physically reach both the console and a target location to be scanned at the same time. Additionally, the operator may also not be able to adjust a patient being scanned and operate the console simultaneously.
[0046] Accordingly, embodiments of the present disclosure present gesture-based enhancement of ultrasound images. Particularly, embodiments described herein allow for efficient ultrasound imaging, for example, using a tablet-based system that integrates a touch-based user interface, display, processing, and system control into a single device. Although the present embodiment describes use of a single tablet-based system, in certain other embodiments, the ultrasound system may include independent processing, control, and display components, where the display component incorporates a touch-based user interface. An embodiment of the present method to allow for touch-based ROI enhancement is described in the following sections in greater detail.
[0047] At step 202, a correlation between a gesture and the one or more image processing functions may be defined. Particularly, in one embodiment, the system 100 employs a default correlation of gestures with one or more image processing functions. For example, a pinch-open gesture may be associated with obtaining an 8X scaling of the ROI. Alternatively, the ROI may be scaled to a determined scaling factor based on a time and/or length of the pinch-open gesture. In another embodiment, the swipe gesture may allow for contrast enhancement at the ROI level. Additionally, a long press may implement time gain enhancement.
[0048] Further, in certain further embodiments, a single tap may be used to invoke SRI filtering or a segmentation algorithm, whereas a double tap may allow selection of acquisition parameters such as depth of focus and pulse sequence characteristics for a subsequent ultrasound scan. Moreover, a two-finger tap may allow switching between different modes of ultrasound imaging. For example, the two-finger tap may allow switching between greyscale imaging, color Doppler imaging, and/or Doppler gate imaging.
[0049] Similarly, other gestures such as clockwise and anticlockwise rotation may be used to navigate through clinical applications such as a picture archiving and communication system (PACS), a radiology information system (RIS), a hospital information system (HIS), and an electronic medical record (EMR). Furthermore, certain gestures may be used to transmit, retrieve, and/or store data, for example, to a local memory device, a remotely connected storage device, and/or a cloud-based storage system.
[0050] In certain embodiments, a gesture recognition unit such as the gesture recognition unit 121 of FIG. 1 may allow customization of the default and/or existing correlations. To that end, in one embodiment, the gesture recognition unit 121 may be configured to enter into a "gesture customization mode," for example, using a predetermined gesture, a button press, and/or a voice command. In certain embodiments, the gesture customization mode may be activated only for a particular group of users via use of a password, a pin, biometrics, or other authentication. Once in the gesture customization mode, the gesture recognition unit 121 may allow for reconfiguring, deleting, and/or adding correlations between one or more gestures and corresponding image processing functions. The gesture recognition unit 121 may allow for the reconfiguration, deletion, and/or addition of the gesture-based functionality by detecting the gesture-based input and a selection and/or input of a corresponding functionality received via the user interface 120.
[0051] Further, at step 204, the correlation determined at step 202 is stored in a local, distributed, and/or cloud-based storage device. It may be noted that steps 202 and 204 may be performed during system setup and/or reconfiguration of gesture-based functionality, whereas steps 208-222 may be performed during ROI enhancement of each selected image frame.

[0052] Accordingly, at step 206, at least one image frame is selected from a plurality of image frames, such as from a cine loop corresponding to a target anatomy. The selection of the image frame may be received from the user, or may be made by the system using an automated optimal image frame selection function. Further, at step 208, a touch-based input corresponding to an ultrasound image frame displayed on a user interface such as the user interface 120 of FIG. 1 is received. The user interface 120, for example, may be a tablet-based interface with a touchscreen display 122 (see FIG. 1) capable of accepting, for example, stylus, pen, keyboard, and/or human touch input.

[0053] In one embodiment, a processing unit such as the processing unit 114 of FIG. 1 may be configured to determine location coordinates corresponding to the touch-based input on a user interface such as the user interface 120 of FIG. 1, as depicted by step 210. Further, at step 212, an ROI corresponding to the image frame may be defined based on the location coordinates. In certain embodiments, the defined ROI may be of a predetermined shape. For example, the ROI may be a circular region, a rectangular region, a triangular region, a square region, an oval region, or a free form region. Alternatively, the ROI may correspond to the shape delineated by the user on the user interface 120.

[0054] At step 214, at least one gesture-based input may be received on the user interface 120. For example, a sonographer may request scaling of an ROI corresponding to a femur during prenatal imaging for determining corresponding length and endpoints more accurately. In one embodiment, the sonographer may input a pinch-open gesture over the ROI to obtain desired scaling of the ROI. However, to avoid processing accidental touch-based inputs, in certain embodiments, the gesture recognition unit 121 may be configured to monitor for gesture-based input only during a "gesture recognition mode" corresponding to a determined period of time. Specifically, the gesture recognition unit 121 may be configured to detect the gestures only during the determined period of time following an initial touch received on the user interface 120.

[0055] Accordingly, at step 216, it may be determined whether a gesture is received within the determined period of time. The determined period of time, for example, may correspond to about 3-5 seconds. Upon determining at step 216 that no gesture is received within the determined period of time, control may pass to step 206 and another image frame may be selected for processing. However, upon determining at step 216 that a gesture is received within the determined period of time, control may pass to step 218.

[0056] At step 218, one or more image processing functions may be identified based on the gesture-based input when a gesture is received within the determined period of time. In an embodiment employing, for example, a capacitive touchscreen, the gesture recognition unit 121 may be configured to identify the gesture based on a change in capacitance between the regions on the user interface 120 that received the touch-based input and those that did not. Alternatively, when using a resistive touchscreen, the gesture recognition unit 121 may be configured to identify the gesture based on a change in current indicative of a corresponding change in resistance between different layers of the resistive touchscreen caused by the touch-based input. Particularly, the gesture recognition unit 121 may be configured to determine a pattern in the touch-based input and match the identified pattern to stored values to identify the corresponding gesture. Further, the identified gesture is correlated to corresponding image processing functions. Specifically, the one or more image processing functions may be identified based on a previously stored correlation between the gesture and the one or more image processing functions.

[0057] At step 220, the one or more image processing functions may be applied to the ROI to aid in enhancing at least the ROI in the image frame. For example, SRI filtering and/or segmentation functions may be applied to the ROI upon receiving a single-tap input from the user. In certain embodiments, the processing unit 114 applies the gesture-based processing specifically to the ROI to generate the enhanced visualization in real-time, as depicted by step 222.
[0058] FIG. 3 is a diagrammatical representation 300 of certain exemplary gestures used in the method described with reference to FIG. 2. Further, FIG. 4 illustrates an exemplary ultrasound image frame 400 on which a touch-based input is received at a particular point 402. FIG. 5 illustrates a diagrammatical representation 500 of the image frame 400 of FIG. 4 overlaid with an enhanced ROI 502 obtained using the method described with reference to FIG. 2. Specifically, in FIG. 5, the overlaid ROI region 502 is representative of a 10X scaling of the ROI around the point 402 defined by the user through a touch-based input.

[0059] Embodiments of the present systems and methods, thus, provide an improved and simplified workflow for a clinical environment by consolidating image display, processing, and enhancement workflows into a single touch-based interface. Use of gesture-based image enhancement on a single touch-based interface aids in reducing clutter in the clinical setting, while providing simplified interactions with medical applications and data. Particularly, use of the gesture-based inputs in lieu of repeated control adjustments on a console substantially reduces repetitive motion injuries such as carpal tunnel syndrome.

[0060] Additionally, certain embodiments allow for user-based customization of gestures for one or more image processing functions to be applied to the ROI, thus providing for a more intuitive implementation. Furthermore, applying the gesture-based image processing functions to only the ROI allows for faster and computationally efficient post-processing of images. Such gesture-based enhancement at the ROI level provides a greater level of detail of desired characteristics of the ROI in real-time for use in providing accurate diagnosis and treatment.

[0061] It may be noted that the foregoing examples, demonstrations, and process steps that may be performed by certain components of the present systems, for example, by the processing unit 114, the gesture recognition unit 121, and/or the video processor 124 of FIG. 1, may be implemented by suitable code on a processor-based system. To that end, the processor-based system, for example, may include a general-purpose or a special-purpose computer. It may also be noted that different implementations of the present disclosure may perform some or all of the steps described herein in different orders or substantially concurrently.

[0062] Additionally, the functions may be implemented in a variety of programming languages, including
but not limited to Ruby, Hypertext Preprocessor (PHP), Perl, Delphi, Python, C, C++, or Java. Such code may be stored or adapted for storage on one or more tangible, machine-readable media, such as on data repository chips, local or remote hard disks, optical disks (that is, CDs or DVDs), solid-state drives, or other media, which may be accessed by the processor-based system to execute the stored code.

[0063] Although specific features of various embodiments of the present disclosure may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments, for example, to construct additional assemblies and methods for use in diagnostic imaging.

[0064] While only certain features of the present disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

We Claim:

1. A method for ultrasound imaging, comprising:

receiving touch-based input corresponding to at least one ultrasound image frame displayed on a user interface;

defining a region of interest corresponding to the image frame based on the touch-based input;
receiving at least one gesture-based input on the user interface;

identifying one or more image processing functions based on the gesture-based input; and

applying the one or more image processing functions to the region of interest to aid in enhancing at least the region of interest in the image frame.

2. The method of claim 1, further comprising selecting the image frame from a plurality of image frames.

3. The method of claim 1, wherein defining the region of interest comprises:

determining location coordinates corresponding to the touch-based input on the user interface; and

defining the region of interest corresponding to the image frame based on the location coordinates.

4. The method of claim 1, wherein defining the region of interest comprises defining a region of interest having a predetermined shape.

5. The method of claim 1, wherein defining the region of interest comprises defining the region of interest in a shape delineated by a user on the user interface.

6. The method of claim 1, wherein the region of interest is a circular region, a rectangular region, a triangular region, a square region, an oval region, a free form region, or combinations thereof.

7. The method of claim 1, wherein receiving the gesture-based input comprises:

entering a gesture recognition mode; and

determining if the gesture based input is received within a determined period of time.

8. The method of claim 7, further comprising terminating the gesture recognition mode if a gesture is not received within the determined period of time.

9. The method of claim 1, further comprising:

defining a correlation between a gesture and the one or more image processing functions; and

storing the correlation.

10. The method of claim 9, wherein the one or more image processing functions to be applied to the region of interest are identified based on the stored correlation.

11. The method of claim 9, further comprising reconfiguring the correlation between the gesture and one or more image processing functions.

12. The method of claim 1, further comprising:

generating an enhanced representation of the region of interest; and

overlaying the enhanced representation of the region of interest over the image frame.

13. The method of claim 1, wherein the one or more image processing functions comprise contrast enhancement, speckle reduction imaging filtering, segmentation, time gain enhancement, image scaling, or combinations thereof.

14. The method of claim 1, wherein the one or more image processing functions comprise storing the image frame in a storage repository; wherein the storage repository comprises a local memory device, a remotely connected storage device, a cloud-based storage device, or combinations thereof.

15. The method of claim 1, wherein the one or more image processing functions comprise color Doppler imaging, Doppler gate imaging, or a combination thereof.

16. The method of claim 1, wherein the one or more image processing functions comprise configuring one or more acquisition parameters for a subsequent ultrasound scan.

17. The method of claim 16, wherein the acquisition parameters comprise depth of focus, pulse sequence characteristics, one or more imaging parameters, or combinations thereof.

18. The method of claim 1, wherein the touch-based input is received from a remotely located user interface.

19. The method of claim 1, wherein the gesture-based input comprises a double tap, a single tap, a pinch, a zoom, a swipe, a scroll, a long press, a two-finger tap, a two-finger scroll, a pinch-open, a pinch-close, a rotation, or combinations thereof.

20. An imaging system, comprising:

an acquisition subsystem configured to acquire data for reconstructing at least one image frame corresponding to a subject;

a user interface configured to display the image frame and receive a touch-based input corresponding to the image frame;

a processing unit in operative association with at least the user interface, wherein the processing unit is configured to:

determine location coordinates corresponding to the touch-based input on the user interface;

define a region of interest corresponding to the image frame based on the location coordinates;

receive at least one gesture-based input on the user interface;

identify one or more image processing functions based on the gesture-based input; and

apply the one or more image processing functions to the region of interest to aid in enhancing the image frame on the user interface.

Documents

Application Documents

# Name Date
1 4420-CHE-2013 FORM-3 30-09-2013.pdf 2013-09-30
2 4420-CHE-2013 FORM-18 30-09-2013.pdf 2013-09-30
3 4420-CHE-2013 FORM-1 30-09-2013.pdf 2013-09-30
4 4420-CHE-2013 DRAWINGS 30-09-2013.pdf 2013-09-30
5 4420-CHE-2013 DESCRIPTION (COMPLETE) 30-09-2013.pdf 2013-09-30
6 4420-CHE-2013 CORRESPONDENCE OTHERS 30-09-2013.pdf 2013-09-30
7 4420-CHE-2013 CLAIMS 30-09-2013.pdf 2013-09-30
8 4420-CHE-2013 ABSTRACT 30-09-2013.pdf 2013-09-30
9 4420-CHE-2013 POWER OF ATTORNEY 30-09-2013.pdf 2013-09-30
10 4420-CHE-2013 FORM-2 30-09-2013.pdf 2013-09-30
11 4420-CHE-2013 POWER OF ATTORNEY 28-02-2014.pdf 2014-02-28
12 4420-CHE-2013 FORM-1 28-02-2014.pdf 2014-02-28
13 4420-CHE-2013 CORRESPONDENE OTHERS 28-02-2014.pdf 2014-02-28
14 abstract4420-CHE-2013.jpg 2014-07-10
15 4420-CHE-2013-FER.pdf 2019-09-18
16 4420-CHE-2013-RELEVANT DOCUMENTS [10-10-2019(online)].pdf 2019-10-10
17 4420-CHE-2013-FORM 13 [10-10-2019(online)].pdf 2019-10-10
18 4420-CHE-2013-AbandonedLetter.pdf 2020-03-20

Search Strategy

1 4420SEARCH_06-09-2019.pdf