
System And Method For Computing Hounsfield Unit Average From Computed Tomography Volume To Determine Sarcopenia

Abstract: A system (100) for computing Hounsfield unit average from computed tomography volume to determine sarcopenia is provided. The system includes input data source 102, Hounsfield unit (HU) computation system 104, network 106, and user device 108. The system 104 (i) receives input CT volume from an input data source 102; (ii) determines whether the input CT volume contains abdominal region by detecting abdominal region in input CT volume using first deep learning model; (iii) locates L3 vertebrae within the input CT volume using second deep learning model; (iv) determines axial slices corresponding to the L3 vertebrae; (v) segments PSOAS muscles from the axial slices using third deep learning model to obtain segmented CT image; (vi) calculates the Hounsfield unit average (HUAC) for the PSOAS muscles using the segmented CT image; and (vii) generates report on the computed HUAC values and displays to user through user device 108. FIG. 1


Patent Information

Application #
Filing Date
24 April 2024
Publication Number
44/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Parent Application

Applicants

CURIUM LIFE TECH PVT LTD
54, 3RD CROSS, CR LAYOUT, JP NAGAR PHASE1, BANGALORE, KARNATAKA 560078, INDIA

Inventors

1. Rohit Kalla
79, Ramji, Rawat Ram Nagar, Behind Dhanwantri Hospital, Pal Road, Jodhpur, Rajasthan, India 342008
2. Vinayak Sitaraman Rengan
23/1, Thiruvengadam Street, West Mambalam, Chennai, Tamilnadu, India 600033
3. Mani R
005 Grihalakshmi Apartments, 90, SouthEnd road, Basavanagudi, Bangalore, Karnataka, India 560004
4. Eham Arora
Plot no. 86, Sukh Sadan, Flat no. 19, Behind old SIES college, Sion West, Mumbai Maharashtra, India 400022
5. Pravin Meenashi Sundaram
1/SA. Jains Cedar Crest Dharga Road, Pallavaram, Chennai, Tamilnadu, India 600043

Specification

BACKGROUND
Technical Field
[0001] The present invention generally relates to computed tomography (CT) image processing and, more particularly, to a system and method for computing Hounsfield unit average (HUAC) from an input CT volume to determine muscle density and, in turn, sarcopenia using deep learning models.
Description of the Related Art
[0002] In modern medicine, the significance of sarcopenia, characterized by the loss of muscle mass, has been increasingly recognized, especially in predicting postoperative complications in patients undergoing major surgeries. Sarcopenia assessment plays a crucial role in preoperative risk stratification and optimizing patient outcomes. However, detecting sarcopenia traditionally involves laborious and potentially error-prone manual methods.
[0003] One of the practical approaches to detect sarcopenia is through the Hounsfield Unit Average Calculation (HUAC). This measure assesses the size and density of specific muscles, such as the psoas muscles, which are indicative of muscle mass. The HUAC computation involves examining the axial slice at the lumbar vertebrae, typically L3/L4, within computed tomography (CT) volumes of patients. Traditionally, this examination has relied on manual interpretation by radiologists, which can be time-consuming, prone to variability, and subject to human error. Radiologists need to meticulously search through the entire CT volume to locate the axial slice necessary for HUAC computation, adding to the complexity and workload of preoperative evaluations.
[0004] In recent years, there has been a notable shift in medical practices towards leveraging advanced technologies such as deep learning techniques or models (DLM) and computer vision (CV) techniques for critical medical computations essential for surgeons. While deep learning and computer vision technologies have advanced medical imaging analysis, concerns persist regarding their performance in certain tasks. Specifically, the precision and reliability of these techniques in computing essential medical parameters have not met desired standards. In the context of surgical procedures, accurate computation of medical parameters is crucial for informed decision-making and optimal patient care. However, the current state of deep learning and computer vision applications lacks the required accuracy and consistency for critical medical computations, such as HUAC. Moreover, in the existing approaches, training deep learning models for image segmentation necessitates laborious human intervention to manually label slices at specific anatomical levels, such as the L3/L4 vertebra, to facilitate the segmentation of multiple anatomical structures. Hence, these approaches are time-consuming and prone to inconsistencies. Similarly, the existing approaches involve segmenting multiple anatomies in computing sarcopenia scores, leading to complexity and computational inefficiency.
[0005] Therefore, there is a need to address the aforementioned technical drawbacks of existing technologies by computing the Hounsfield unit average (HUAC) accurately from CT images to determine sarcopenia with less manual effort and improved computational efficiency.

SUMMARY
[0001] In view of the foregoing, an embodiment herein provides a system for determining muscle density of a subject by computing Hounsfield unit average (HUAC) from a computed tomography (CT) image. The system includes a muscle metric determining server that includes a memory, and a processor in communication with the memory. The processor is configured to: (i) receive the CT image associated with the subject from a CT scanner, where the CT image includes one or more anatomical structures including organs, tissues, abdominal regions, vertebrae or bones of the subject; (ii) detect, using a first deep learning model, an abdominal region in the CT image, where the abdominal region is detected by extracting one or more sagittal slices from the CT image, and providing the one or more sagittal slices into the first deep learning model, where the first deep learning model (i) extracts one or more features indicative of the abdominal region within the one or more sagittal slices, and (ii) generates a segmentation mask associated with the abdominal region based on the one or more features, where the one or more sagittal slices are one or more slices taken perpendicular to a coronal plane and parallel to a sagittal plane of the subject; (iii) locate, using a second deep learning model, a third lumbar (L3) vertebrae within the CT image, if the abdominal region is detected, where the L3 vertebrae is located by extracting one or more coronal slices from the CT image, and providing the one or more coronal slices into the second deep learning model, where the second deep learning model (i) extracts one or more features associated with L3 vertebrae within the one or more coronal slices, and (ii) generates a segmentation mask associated with the L3 vertebrae, wherein the one or more coronal slices are one or more vertical cross-sections along a frontal plane of the subject; (iv) determine axial slices corresponding to the L3 vertebrae by extracting spatial coordinates of the L3 vertebra and
selecting one of the axial slices at L3 spatial location; (v) generate a segmented CT image by segmenting a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model, where the right PSOAS muscle and a left PSOAS muscle are segmented by (i) providing the axial slices into the third deep learning model, (ii) extracting one or more features associated with the PSOAS muscles within the axial slices using the third deep learning model, (iii) generating a segmentation mask associated with the PSOAS muscles for each axial slice, and (iv) selecting one of the axial slices comprising the segmented left and right PSOAS masks; and (vi) enable a user to determine the muscle density of the subject by computing a Hounsfield unit average (HUAC) value for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on mean Hounsfield unit (HU) of the left and right PSOAS muscles, and providing the HUAC value as output.
[0002] In some embodiments, the first deep learning model is trained with one or more labelled sagittal slices to recognize the features indicative of the abdominal region within the one or more sagittal slices, where the features include Xiphoid process and Pubic Symphysis.
[0003] In some embodiments, the second deep learning model is trained with one or more labelled coronal slices to recognize the features indicative of the L3 vertebrae within the one or more coronal slices, where the features include vertebral body shape and size, intervertebral disc position, pedicle structure, spinous process alignment, transverse process orientation and cortical bone density.
[0004] In some embodiments, the spatial location of the L3 vertebra is extracted by identifying the L3 vertebra in coronal planes, and selecting the corresponding axial slices at the L3 vertebra. The processor is configured to fine-tune the position of the axial slices by applying interpolation or refinement techniques.
[0005] In some embodiments, the third deep learning model is trained with one or more labelled axial slices to recognize the features indicative of the regions corresponding to the PSOAS muscles within the one or more axial slices. The features include muscle boundary contours and muscle shape characteristics.
[0006] In some embodiments, the processor is configured to compute the mean HU value of the left and right PSOAS muscles by (i) retrieving HU value of each pixel within the segmented CT image for the left and right PSOAS muscles, where the HU value is retrieved by extracting intensity values from the segmented CT image for each pixel; and (ii) calculating the average HU by summing the HU values of all the pixels and dividing by the total number of pixels.
[0007] In some embodiments, the processor is configured to compute individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel), where the HUAC score is calculated using the computed CSA. The HUAC score is the average of HU values across the entire CSA of the left and right psoas muscles.
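The mean-HU and CSA computations described in these two embodiments can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: the function names, the toy 4×4 slice, and the assumption that pixel spacing is supplied in millimetres (as in a DICOM header) are ours.

```python
import numpy as np

def mean_hu(ct_slice, mask):
    """Mean Hounsfield unit over the pixels inside the binary muscle mask."""
    vals = ct_slice[mask > 0]              # HU value of each masked pixel
    return float(vals.sum() / vals.size)   # sum of HU values / pixel count

def cross_sectional_area_cm2(mask, pixel_spacing_mm):
    """Convert the mask's pixel count to cm^2 using the CT pixel spacing."""
    row_mm, col_mm = pixel_spacing_mm            # in-plane spacing (mm)
    cm2_per_pixel = (row_mm * col_mm) / 100.0    # mm^2 per pixel -> cm^2 per pixel
    return int((mask > 0).sum()) * cm2_per_pixel

# Toy 4x4 axial slice with four masked pixels of HU 40, 50, 60, 50
ct = np.zeros((4, 4))
ct[1:3, 1:3] = [[40.0, 50.0], [60.0, 50.0]]
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1
print(mean_hu(ct, mask))                           # -> 50.0
print(cross_sectional_area_cm2(mask, (1.0, 1.0)))  # 4 px at 1x1 mm -> 0.04 cm^2
```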
[0008] In some embodiments, the processor is configured to flag the CT image as not containing the abdominal region and skip additional processing, if the abdominal region is not detected by the first deep learning model.
[0009] In one aspect, a method for determining muscle density of a subject by computing Hounsfield unit average (HUAC) from a computed tomography (CT) image is provided. The method includes (a) receiving, by a processor of a muscle metric determining server (104), the CT image associated with the subject from a CT scanner, where the CT image includes one or more anatomical structures including organs, tissues, abdominal regions, vertebrae or bones of the subject; (b) detecting, by the processor, an abdominal region in the CT image using a first deep learning model by extracting one or more sagittal slices from the CT image, and providing the one or more sagittal slices into the first deep learning model, where the first deep learning model (i) extracts one or more features indicative of the abdominal region within the one or more sagittal slices, and (ii) generates a segmentation mask associated with the abdominal region based on the one or more features, wherein the one or more sagittal slices are one or more slices taken perpendicular to a coronal plane and parallel to a sagittal plane of the subject; (c) locating, by the processor, a third lumbar (L3) vertebrae within the CT image using a second deep learning model, if the abdominal region is detected, where the L3 vertebrae is located by extracting one or more coronal slices from the CT image, and providing the one or more coronal slices into the second deep learning model, where the second deep learning model (i) extracts one or more features associated with L3 vertebrae within the one or more coronal slices, and (ii) generates a segmentation mask associated with the L3 vertebrae, where the one or more coronal slices are one or more vertical cross-sections along a frontal plane of the subject; (d) determining, by the processor, axial slices corresponding to the L3 vertebrae by extracting spatial coordinates of the L3 vertebra and selecting one of the axial slices at L3 spatial location; (e) generating, by the
processor, a segmented CT image by segmenting a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model, wherein the right PSOAS muscle and a left PSOAS muscle are segmented by (i) providing the axial slice into the third deep learning model, (ii) extracting one or more features associated with the PSOAS muscles within the axial slices using the third deep learning model, (iii) generating a segmentation mask associated with the PSOAS muscles for each axial slice, and (iv) selecting one of the axial slices comprising the segmented left and right PSOAS masks; (f) computing, by the processor, a Hounsfield unit average (HUAC) value for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on mean Hounsfield unit (HU) of the left and right PSOAS muscles; and (g) enabling a user to determine the muscle density of the subject by providing the HUAC value as output.
[0010] In some embodiments, the method includes computing, by the processor, individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles, where the CSA is computed by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel).
[0011] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0007] FIG. 1 illustrates a system for computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density of a subject according to some embodiments herein;
[0008] FIG. 2 illustrates a block diagram of a muscle metric determining server of FIG. 1 according to some embodiments herein;
[0009] FIG. 3 is a block diagram that illustrates an exemplary process of computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density using a muscle metric determining server of FIG. 1 according to some embodiments herein;
[0010] FIG. 4 is a flow diagram that illustrates a method for computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density of a subject according to some embodiments herein;
[0011] FIG. 5 is an exemplary diagram that illustrates a performance evaluation of deep learning models that are used by a muscle metric determining server of FIG. 1 by comparing an output with the ground truth according to some embodiments herein; and
[0012] FIG. 6 is a schematic diagram of a computer architecture in accordance with the embodiments herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0013] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0014] As mentioned, there remains a need for accurately computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine sarcopenia with less manual effort and improved computational efficiency. Embodiments herein achieve this by proposing a system and method for computing Hounsfield unit average (HUAC) from an input CT volume using deep learning models. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0015] FIG. 1 illustrates a system 100 for computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density of a subject according to some embodiments herein. The system 100 includes an input data source 102, a muscle metric determining server 104, a network 106, and a user device 108. The muscle metric determining server 104 includes a processor and a non-transitory computer-readable storage medium (or memory) storing a database and one or more sequences of instructions, which, when executed by the processor, cause the accurate computation of HUAC from the CT image and determination of the muscle density of the subject. The muscle metric determining server 104 may be a cloud server, a handheld device, a mobile phone, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, a laptop, an electronic notebook, or a Smartphone.
[0016] The muscle metric determining server 104 is communicatively connected with the input data source 102 through the network 106. The network 106 may be one or more of a wired network, a wireless network based on at least one of a 2G protocol, a 3G protocol, a 4G protocol, or a 5G protocol, Bluetooth Low Energy (BLE), Near Field Communication (NFC), Bluetooth, WiFi, and a narrowband internet of things protocol (NBIoT), a combination of the wired network and the wireless network or the Internet.
[0017] The muscle metric determining server 104 is configured to receive the CT image associated with the subject from the input data source 102. The input data source 102 may be an imaging modality, an image capturing device, any personal device, or a digital source that provides the CT image associated with the subject. In some embodiments, the input data source 102 is a CT scanner. The subject may be a human body or body parts of a human. The CT image includes one or more objects. The one or more objects may include one or more anatomical structures of the subject. The one or more anatomical structures may be organs, tissues, abdominal regions, vertebrae or bones, or other relevant features of the subject.
[0018] The muscle metric determining server 104 is configured to determine the presence of an abdominal region in the CT image by detecting the abdominal region upon receiving the CT image from the input data source 102. The muscle metric determining server 104 may use a first deep learning model to detect the abdominal region. In some embodiments, the muscle metric determining server 104 detects the abdominal region by extracting sagittal slices from the CT image and inputting the sagittal slices into the first deep learning model. The sagittal slices are slices taken perpendicular to the coronal plane and parallel to the sagittal plane of the subject. The sagittal slices may provide a view from the side, allowing for analysis of structures along the anterior-posterior axis. The first deep learning model may be a modified TransUNet model. The modified TransUNet model may be trained with labelled sagittal slices to recognize patterns and features indicative of the abdominal region within the input sagittal slices. This may include identifying features or anatomical landmarks, such as the Xiphoid process and Pubic Symphysis.
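The sagittal-slice extraction step can be illustrated with a minimal NumPy sketch. The (axial, coronal, sagittal) axis order of the volume and the helper name are assumptions for illustration; a real pipeline would take the orientation from the scan metadata.

```python
import numpy as np

def extract_sagittal_slices(volume, step=1):
    """Sagittal slices from a CT volume stored as a (z, y, x) array, i.e.
    (axial, coronal, sagittal): fix the x index, keep the (z, y) plane."""
    return [volume[:, :, x] for x in range(0, volume.shape[2], step)]

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # tiny stand-in for a CT volume
slices = extract_sagittal_slices(vol)
print(len(slices), slices[0].shape)  # 4 slices, each of shape (2, 3)
```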
[0019] Based on the analysis performed by the first deep learning model, the muscle metric determining server 104 determines whether the CT image contains the abdominal region. If the presence of the abdominal region is detected with sufficient confidence, the muscle metric determining server 104 initiates further processing. Otherwise, the CT image may be flagged as not containing the abdominal region, and additional processing may be skipped.
[0020] The muscle metric determining server 104 is further configured to locate L3 vertebrae within the CT image if the abdominal region is detected. The L3 vertebra, also known as the third lumbar vertebra, is one of the individual bones that make up the spine of the subject. The muscle metric determining server 104 may use a second deep learning model to locate the L3 vertebrae. In some embodiments, the muscle metric determining server 104 locates the L3 vertebrae by extracting coronal slices from the CT image and inputting the coronal slices into the second deep learning model. The second deep learning model may also be a modified TransUNet model. The second deep learning model may be trained with labelled coronal slices to identify patterns, features, or characteristics indicative of the L3 vertebrae within the coronal slices. The features include vertebral body shape and size, intervertebral disc position, pedicle structure, spinous process alignment, transverse process orientation and cortical bone density.
[0021] The muscle metric determining server 104 is configured to determine axial slices corresponding to the L3 vertebrae by extracting spatial location or coordinates of the L3 vertebra and selecting one of the axial slices at L3 spatial location. The spatial location of the L3 vertebra is extracted by identifying L3 vertebra in coronal planes, and selecting the corresponding axial slices at the L3 vertebra.
[0022] The muscle metric determining server 104 is further configured to segment PSOAS muscles such as a right PSOAS muscle and a left PSOAS muscle from the axial slices to obtain/generate a segmented CT image. The muscle metric determining server 104 may use a third deep learning model to segment PSOAS muscles. In some embodiments, the muscle metric determining server 104 segments PSOAS muscles by analyzing the image features and patterns within the axial slices to identify regions corresponding to the PSOAS muscles using the third deep learning model. The third deep learning model may be a transformer-based deep learning model. The third deep learning model may be trained with labelled axial slices to identify patterns, features, or characteristics indicative of the regions corresponding to the PSOAS muscles. The features may include muscle boundary contours and muscle shape characteristics.
[0023] The muscle metric determining server 104 is further configured to calculate the Hounsfield unit average (HUAC) for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on mean Hounsfield unit (HU) and areas of the left and right PSOAS muscles. The muscle metric determining server 104 is further configured to generate a report on the computed HUAC values and other relevant information and communicate the report to the user device 108 associated with a user (for example, health professionals) through network 106. The report may be displayed to the user through the user device 108. The muscle metric determining server 104 enables the user to determine the muscle density of the subject using the Hounsfield unit average (HUAC) value for the right PSOAS muscle and the left PSOAS. The user device 108 may be a handheld device, a mobile phone, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, a laptop, an electronic notebook or a Smartphone. In some embodiments, the report may be integrated into the medical analysis workflow. In some exemplary embodiments, the report can be seamlessly integrated into the Picture Archiving and Communication System (PACS) utilized by numerous hospitals or healthcare centers.
[0024] In some embodiments, the computed HU values are utilized to determine a sarcopenia score. This enables users (for example, medical professionals) to determine one or more medical actions (e.g. surgery) based on the computed sarcopenia score.
[0025] FIG. 2 illustrates a block diagram of a muscle metric determining server 104 of FIG. 1 according to some embodiments herein. The muscle metric determining server 104 includes a database 200, a processor 202, a receiving module 204, an abdominal region detection module 206 that includes a first deep learning model 206A, a vertebrae localizing module 208 that includes a second deep learning model 208A, an axial slice determining module 210, a segmentation module 212 that includes a third deep learning model 212A, a Hounsfield unit average calculation module 214, and a report generating module 216. The database 200 stores a set of modules of the muscle metric determining server 104. The processor 202 executes the set of modules in the database 200 for computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density and, in turn, sarcopenia.
[0026] The receiving module 204 receives the CT image of a subject from an input data source 102 (for example a CT scanner) through network 106 and stores it in the database 200. The abdominal region detection module 206 is configured to determine whether the CT image contains an abdominal region by detecting the abdominal region in the CT image using a first deep learning model 206A. The first deep learning model may be a modified TransUNet model. In some embodiments, the abdominal region detection module 206 detects the abdominal region by (i) extracting sagittal slices from the CT image; (ii) inputting the sagittal slices into the first deep learning model 206A; (iii) extracting one or more features associated with the abdominal region (that is, Xiphoid process and Pubic Symphysis) within the sagittal slices using one or more layers of the first deep learning model 206A; and (iv) generating a segmentation mask associated with the abdominal region such as Xiphoid process and Pubic Symphysis. The first deep learning model 206A may be trained to segment out the Xiphoid and Pubic Symphysis from the sagittal slices. This can be achieved by utilizing labelled sagittal slices as training data. By training on a dataset with labelled examples, the first deep learning model 206A learns to recognize patterns and features indicative of the abdominal region within the input sagittal slices. This may include identifying anatomical landmarks, such as the Xiphoid process and Pubic Symphysis. Following the identification of both the Xiphoid process and Pubic Symphysis by the first deep learning model 206A, the abdominal region detection module 206 further confirms the presence of the abdominal region within the CT image. If at least one of the Xiphoid process or Pubic Symphysis is not detected, the confidence of the first deep learning model 206A in confirming the presence of the abdominal region decreases, thereby impacting subsequent analysis.
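The presence check described above — the abdominal region is confirmed only when both the Xiphoid process and the Pubic Symphysis are found — can be sketched as follows. The numeric label values for the two landmarks in the segmentation mask are assumptions for illustration.

```python
import numpy as np

# Label values for the two landmarks are assumptions for illustration.
XIPHOID, PUBIC_SYMPHYSIS = 1, 2

def abdomen_present(seg_mask):
    """Confirm the abdominal region only when BOTH landmarks appear in the
    segmentation mask; if either is missing, the volume is flagged and
    further processing is skipped."""
    labels = set(np.unique(seg_mask).tolist())
    return XIPHOID in labels and PUBIC_SYMPHYSIS in labels

m = np.zeros((8, 8), dtype=int)
m[1, 1] = XIPHOID
print(abdomen_present(m))   # False: Pubic Symphysis not detected
m[6, 6] = PUBIC_SYMPHYSIS
print(abdomen_present(m))   # True: both landmarks found
```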
[0027] The vertebrae localizing module 208 locates L3 vertebrae within the CT image using a second deep learning model 208A, if the abdominal region is detected within the CT image. The second deep learning model 208A may be a modified TransUNet model. In some embodiments, the vertebrae localizing module 208 locates the L3 vertebrae by (i) extracting coronal slices from the CT image; (ii) inputting the coronal slices into the second deep learning model 208A; (iii) extracting one or more features associated with the L3 vertebrae within the coronal slices using one or more layers of the second deep learning model 208A; and (iv) generating a segmentation mask associated with the L3 vertebrae. The features may include vertebral body shape and size, intervertebral disc position, pedicle structure, spinous process alignment, transverse process orientation and cortical bone density. By training on a dataset with labelled examples, the second deep learning model 208A learns to identify patterns, features, or characteristics indicative of the L3 vertebrae within the coronal slices. This may involve recognizing anatomical landmarks, vertebral morphology, or other relevant features associated with the L3 vertebrae.
[0028] The axial slice determining module 210 determines axial slices corresponding to the L3 vertebrae by extracting spatial coordinates of the L3 vertebra and selecting one of the axial slices at L3 spatial location. The axial slice determining module 210 may apply interpolation or refinement techniques to fine-tune the position of the axial slices.
[0029] The segmentation module 212 segments PSOAS muscles such as a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model 212A to obtain/generate a segmented CT image (i.e., a binary mask image). In some embodiments, the segmentation module 212 segments the PSOAS muscles by (i) providing the axial slices into the third deep learning model 212A; (ii) extracting one or more features associated with the PSOAS muscles within the axial slices using one or more layers of the third deep learning model 212A; (iii) generating a segmentation mask associated with the PSOAS muscles for each axial slice; and (iv) selecting one of the axial slices containing the segmented left and right PSOAS masks. The features may include muscle boundary contours and muscle shape characteristics. The third deep learning model 212A may be a transformer-based deep learning model. By training on a dataset with labelled examples, the third deep learning model 212A learns to identify patterns, features, or characteristics indicative of the regions corresponding to the right and left PSOAS muscles.
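The final sub-step, selecting one of the axial slices containing both segmented PSOAS masks, can be sketched as below. The selection criterion (largest combined mask area) and the label convention (1 = left, 2 = right) are assumptions; the text only requires choosing a slice in which both masks are present.

```python
import numpy as np

def select_psoas_slice(masks):
    """From per-slice PSOAS masks (0 = background, 1 = left, 2 = right),
    keep slices where both muscles were segmented and pick the one with
    the largest combined mask area (assumed criterion)."""
    best_idx, best_area = None, -1
    for i, m in enumerate(masks):
        if (m == 1).any() and (m == 2).any():   # both muscles present
            area = int((m > 0).sum())
            if area > best_area:
                best_idx, best_area = i, area
    return best_idx

a = np.zeros((4, 4), dtype=int); a[0, 0] = 1              # left only -> skipped
b = np.zeros((4, 4), dtype=int); b[0, 0], b[1, 1] = 1, 2  # both, area 2
c = np.zeros((4, 4), dtype=int); c[:2, :2], c[2:, 2:] = 1, 2  # both, area 8
print(select_psoas_slice([a, b, c]))  # -> 2
```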
[0030] The Hounsfield unit average calculation module 214 calculates the Hounsfield unit average (HUAC) for the right PSOAS muscle and the left PSOAS muscle that are segmented using the third deep learning model 212A at the segmentation module 212. In some embodiments, the Hounsfield unit average calculation module 214 calculates the HUAC based on mean Hounsfield unit (HU) and the areas of the left and right PSOAS muscles. To calculate HUAC, the Hounsfield unit average calculation module 214 computes the individual areas (in cm²) of the left and right PSOAS muscles, and also extracts the areas of the left and right PSOAS muscles from the input CT volume using the segmented CT image (i.e., the binary mask image). The Hounsfield unit average calculation module 214 further calculates mean Hounsfield unit (HU) values of the left and right PSOAS muscles. The final value is calculated by averaging both the values of the left and right muscles. In some embodiments, the Hounsfield unit average calculation module 214 retrieves the HU value of each pixel within the segmented CT image for the left and right PSOAS muscles, and calculates the average HU by summing the HU values of all the pixels and dividing by the total number of pixels. The HU value may be retrieved by extracting intensity values from the segmented CT image for each pixel. In some embodiments, the Hounsfield unit average calculation module 214 computes individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel). The HUAC score may be calculated using the computed CSA. The HUAC score is the average of HU values across the entire CSA of the left and right psoas muscles.
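The left/right averaging reading of this paragraph (mean HU of each psoas muscle, then the average of the two means) can be sketched in a few lines; the function name and toy slice are illustrative, and the masks are assumed binary and non-overlapping.

```python
import numpy as np

def huac(ct_slice, left_mask, right_mask):
    """HUAC as the average of the left and right psoas mean HU values."""
    left_mean = float(ct_slice[left_mask > 0].mean())
    right_mean = float(ct_slice[right_mask > 0].mean())
    return (left_mean + right_mean) / 2.0

ct = np.zeros((4, 4))
left = np.zeros((4, 4), dtype=int)
right = np.zeros((4, 4), dtype=int)
ct[0, 0:2] = [40.0, 60.0]; left[0, 0:2] = 1    # left psoas: mean HU 50
ct[3, 0:2] = [30.0, 50.0]; right[3, 0:2] = 1   # right psoas: mean HU 40
print(huac(ct, left, right))  # -> 45.0
```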
[0031] The report generating module 216 is further configured to provide the HUAC value as output (or report) and communicate the output or the report to a user through a user device 108 associated with the user via the network 106. The report may be displayed to the user through the user device 108. The HUAC value may enable the user to determine the muscle density of the subject.
[0032] FIG. 3 is a block diagram that illustrates an exemplary process of computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density using a muscle metric determining server 104 of FIG. 1 according to some embodiments herein. The automated process for HUAC computation consists of three stages: a first stage 302, a second stage 304, and a third stage 306. Upon receiving the CT image as input, the first stage 302 involves determining whether the volume contains the abdominal region using an abdominal detection model (i.e., a first deep learning model). Once the presence of the abdominal region is confirmed, the second stage 304 locates the L3 vertebrae within the CT volume using an L3 detection model (i.e., a second deep learning model) and pinpoints axial slices corresponding to the L3 vertebrae. In the third stage 306, the process involves segmenting the left and right anatomical PSOAS structures from the axial slices using a PSOAS segmentation model (i.e., a third deep learning model). The segmented output at the third stage 306 is then utilized for HUAC computation to determine the HUAC score (or sarcopenia score) for muscle quality determination. All three stages 302-306 are automated using a transformer-based deep learning model.
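The three-stage flow of FIG. 3 can be sketched as a single driver function. The three model objects below are hypothetical callables standing in for the trained networks (the disclosure does not specify their interfaces), and the HU averaging at the end condenses the HUAC computation described elsewhere:

```python
import numpy as np

def compute_huac_pipeline(ct_volume, abdomen_model, l3_model, psoas_model):
    """Illustrative three-stage HUAC pipeline over a (z, y, x) CT volume.

    abdomen_model(volume) -> bool      : stage 1, abdominal region check
    l3_model(volume)      -> int       : stage 2, axial index at L3
    psoas_model(slice)    -> (L, R)    : stage 3, boolean psoas masks
    """
    # Stage 1: confirm the abdominal region is present; otherwise the
    # volume is flagged and no further processing is performed.
    if not abdomen_model(ct_volume):
        return None

    # Stage 2: locate L3 and select the corresponding axial slice.
    l3_index = l3_model(ct_volume)
    axial_slice = ct_volume[l3_index]

    # Stage 3: segment the left and right psoas muscles, then average
    # the per-muscle mean HU values to obtain the HUAC score.
    left_mask, right_mask = psoas_model(axial_slice)
    mean_left = axial_slice[left_mask].mean()
    mean_right = axial_slice[right_mask].mean()
    return (mean_left + mean_right) / 2.0
```

The early return in stage 1 corresponds to the flag-and-skip behaviour recited in claim 8 when no abdominal region is detected.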
[0033] FIG. 4 is a flow diagram that illustrates a method for computing Hounsfield unit average (HUAC) from a computed tomography (CT) image to determine muscle density of a subject according to some embodiments herein. At step 402, the CT image is received by a muscle metric determining server 104 from an input data source 102 through a network 106. At step 404, the presence of an abdominal region in the CT image is determined by the muscle metric determining server 104 by detecting the abdominal region within the CT image using a first deep learning model. In some embodiments, the abdominal region is detected by extracting sagittal slices from the CT image and providing the sagittal slices into the first deep learning model.
[0034] At step 406, a third lumbar (L3) vertebrae is located within the CT image by the muscle metric determining server 104 using a second deep learning model, if the abdominal region is detected within the CT image. In some embodiments, the L3 vertebrae is located by extracting coronal slices from the CT image and providing the coronal slices into the second deep learning model. At step 408, the axial slices corresponding to the L3 vertebrae are determined by extracting spatial coordinates of the L3 vertebra and selecting one of the axial slices at the L3 spatial location by the muscle metric determining server 104.
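Step 408, selecting the axial slice at the L3 spatial location, can be sketched as below. The assumption here (mid-level of the vertebra's z-extent, derived from a 3-D binary L3 mask in (z, y, x) ordering) is one plausible reading of "extracting spatial coordinates of the L3 vertebra", not the disclosed method itself:

```python
import numpy as np

def axial_index_at_l3(l3_mask_3d):
    """Return the axial (z) index at the mid-level of a detected L3
    vertebra, given a hypothetical 3-D binary mask of the vertebra
    in (z, y, x) ordering."""
    # z-coordinates of all voxels labelled as L3
    z_coords = np.nonzero(l3_mask_3d)[0]
    # Midpoint of the vertebra's z-extent selects the axial slice
    return int((z_coords.min() + z_coords.max()) // 2)
```

Interpolation or refinement, as recited in claim 4, could then fine-tune this index; that step is omitted here.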
[0035] At step 410, a segmented CT image is obtained/generated by the muscle metric determining server 104 by segmenting PSOAS muscles such as a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model. At step 412, the Hounsfield unit average (HUAC) is calculated for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on the mean Hounsfield unit (HU) and areas of the left and right PSOAS muscles. At step 414, a report is generated on the computed HUAC values and other relevant information and communicated to a user through a user device 108 via the network 106. This enables the user to determine the muscle density of the subject based on the HUAC value.
[0036] In some embodiments, the method includes computing, by the muscle metric determining server 104, individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles. The CSA may be computed by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel).
[0037] FIG. 5 is an exemplary diagram that illustrates a performance evaluation of deep learning models that are used by a muscle metric determining server 104 of FIG. 1 by comparing an output with the ground truth according to some embodiments herein. FIG. 5 shows unknown or test computed tomography (CT) volume serving as input to an abdominal detection model (or a first deep learning model), L3 detection model (or a second deep learning model), and PSOAS segmentation model (or a third deep learning model), prediction images of the deep learning models and the corresponding ground truth for each deep learning model. FIG. 5 provides a comprehensive visualization of how the deep learning model processes the unknown or test CT volume through each step of FIG. 4. The ground truth, representing the true values or correct annotations, serves as a reference for evaluating the performance of the deep learning models. By comparing the model's predictions with the ground truth, the reliability and precision of the automated muscle metric determining server 104 of the present disclosure and sarcopenia detection process can be assessed. The performance evaluation of each deep learning model used in the muscle metric determining server 104 can be assessed by examining the prediction images of the deep learning models and the corresponding Intersection over Union (IoU) values as shown in FIG. 5. The evaluation results for each deep learning model used by the muscle metric determining server 104 indicate the following:
[0038] Abdominal Region Detection Model or the first deep learning model: The IoU value obtained for this model is approximately 75%, which demonstrates its effectiveness in determining the presence of the abdominal region in the input CT volume. Given the satisfactory IoU value, further efforts to improve accuracy may not be necessary for this specific model, as it successfully serves its purpose.
[0039] L3 Vertebrae Localization Model or the second deep learning model: The IoU value achieved for this model is 86%, indicating that it performs well in accurately locating the L3 vertebrae and determining the mid-level location. With an IoU value of 86%, the model's performance is considered sufficient, and additional enhancements in accuracy or IoU may not be required.
[0040] PSOAS Muscle Segmentation Model or the third deep learning model: The PSOAS muscle segmentation model achieves an impressive IoU value of 95% when evaluated on test data. This high IoU value is crucial for the HUAC computation process, as it ensures accurate segmentation of the left and right PSOAS muscles. The model's exceptional performance in this step significantly contributes to the reliable computation of HUAC.
[0041] In conclusion, the deep learning models of the muscle metric determining server 104 demonstrate promising results in different stages of the automated process. While the accuracy of IoU for the abdominal region detection and the L3 vertebrae localization models are satisfactory, the PSOAS muscle segmentation model's high IoU value of 95% is crucial for precise HUAC computation. The successful outcome from the PSOAS segmentation model lays the foundation for accurate HUAC calculations based on the formulations presented in the description of FIG. 2. Overall, these results highlight the efficiency and reliability of the system in automating HUAC computation and its potential significance in medical analysis for detecting muscle quality, in turn, sarcopenia.
[0042] The system 100 identifies the precise location of the axial slices at the L3/L4 vertebrae and segments both left and right PSOAS muscles, facilitating the HUAC computation process. By automating the computation of HUAC with the sequential deep learning models, the system 100 reduces the need for manual intervention and minimizes the potential for human error, resulting in more precise and reliable measurements. Moreover, this automated approach of the system 100 eliminates the need for manual determination of the axial slice and the subsequent computation of PSOAS muscle size and density. This enhanced accuracy ensures more accurate diagnosis and monitoring of sarcopenia, leading to improved patient care and treatment outcomes. As HUAC provides a direct measure of the density and composition of abdominal muscle tissue, focusing on HUAC computation as a key metric for assessing sarcopenia offers advantages over traditional approaches that rely on segmenting skeletal muscles or muscle mass.
[0043] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 6, with reference to FIGS. 1 through 5. This schematic drawing illustrates a hardware configuration of a Hounsfield unit (HU) computation system 104/computer system/ computing device in accordance with the embodiments herein. The system 104 includes at least one processing device, CPU 10, that may be interconnected via system bus 14 to various devices such as a random-access memory (RAM) 12, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 38 and program storage devices 40, which are readable by the system. The system 104 can read the inventive instructions on the program storage devices 40 and follow these instructions to execute the methodology of the embodiments herein. The system 104 further includes a user interface adapter 22 that connects a keyboard 28, mouse 30, speaker 32, microphone 34, and/or other user interface devices such as a touch screen device (not shown) to the bus 14 to gather user input. Additionally, a communication adapter 20 connects the bus 14 to a network 42, and a display adapter 24 connects the bus 14 to a display device 26, which provides a graphical user interface (GUI) 36 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0044] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
CLAIMS:
I/We Claim:
1. A system (100) for determining muscle density of a subject by computing Hounsfield unit average (HUAC) from a computed tomography (CT) image, wherein the system (100) comprises
a muscle metric determining server (104) that comprises
a memory (200), and
a processor (202) in communication with the memory (200), wherein the processor (202) is configured to:
receive the CT image associated with the subject from a CT scanner, wherein the CT image comprises a plurality of anatomical structures comprising organs, tissues, abdominal regions, vertebrae or bones of the subject;
characterized in that,
detect, using a first deep learning model (206A), an abdominal region in the CT image, wherein the abdominal region is detected by extracting a plurality of sagittal slices from the CT image, and providing the plurality of sagittal slices into the first deep learning model (206A), wherein the first deep learning model (206A) (i) extracts one or more features indicative of the abdominal region within the plurality of sagittal slices, and (ii) generates a segmentation mask associated with the abdominal region based on the one or more features, wherein the plurality of sagittal slices are a plurality of slices taken perpendicular to a coronal plane and parallel to a sagittal plane of the subject;
locate, using a second deep learning model (208A), a third lumbar (L3) vertebrae within the CT image, if the abdominal region is detected, wherein the L3 vertebrae is located by extracting a plurality of coronal slices from the CT image, and providing the plurality of coronal slices into the second deep learning model (208A), wherein the second deep learning model (208A) (i) extracts one or more features associated with L3 vertebrae within the plurality of coronal slices, and (ii) generates a segmentation mask associated with the L3 vertebrae, wherein the plurality of coronal slices are a plurality of vertical cross-sections along a frontal plane of the subject;
determine axial slices corresponding to the L3 vertebrae by extracting spatial coordinates of the L3 vertebra and selecting one of the axial slices at L3 spatial location;
generate a segmented CT image by segmenting a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model (212A), wherein the right PSOAS muscle and the left PSOAS muscle are segmented by (i) providing the axial slices into the third deep learning model (212A), (ii) extracting one or more features associated with the PSOAS muscles within the axial slices using the third deep learning model (212A), (iii) generating a segmentation mask associated with the PSOAS muscles for each axial slice, and (iv) selecting one of the axial slices comprising the segmented left and right PSOAS masks; and
enable a user to determine the muscle density of the subject by computing a Hounsfield unit average (HUAC) value for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on mean Hounsfield unit (HU) of the left and right PSOAS muscles, and providing the HUAC value as output.
2. The system (100) as claimed in claim 1, wherein the first deep learning model (206A) is trained with a plurality of labelled sagittal slices to recognize the features indicative of the abdominal region within the plurality of sagittal slices, wherein the features comprise Xiphoid process and Pubic Symphysis.

3. The system (100) as claimed in claim 1, wherein the second deep learning model (208A) is trained with a plurality of labelled coronal slices to recognize the features indicative of the L3 vertebrae within the plurality of coronal slices, wherein the features comprise vertebral body shape and size, intervertebral disc position, pedicle structure, spinous process alignment, transverse process orientation and cortical bone density.

4. The system (100) as claimed in claim 1, wherein the spatial location of the L3 vertebra is extracted by identifying L3 vertebra in coronal planes, and selecting the corresponding axial slices at the L3 vertebra, wherein the processor (202) is configured to fine-tune the position of the axial slices by applying interpolation or refinement techniques.

5. The system (100) as claimed in claim 1, wherein the third deep learning model (212A) is trained with a plurality of labelled axial slices to recognize the features indicative of the regions corresponding to the PSOAS muscles within the plurality of axial slices, wherein the features comprise muscle boundary contours and muscle shape characteristics.

6. The system (100) as claimed in claim 1, wherein the processor (202) is configured to compute the mean HU value of the left and right PSOAS muscles by
retrieving HU value of each pixel within the segmented CT image for the left and right PSOAS muscles, wherein the HU value is retrieved by extracting intensity values from the segmented CT image for each pixel; and
calculating the average HU by summing the HU values of all the pixels and dividing by the total number of pixels.

7. The system (100) as claimed in claim 1, wherein the processor (202) is configured to compute individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel), wherein the HUAC score is calculated using the computed CSA, wherein the HUAC score is the average of HU values across the entire CSA of the left and right psoas muscles.

8. The system (100) as claimed in claim 1, wherein the processor (202) is configured to flag the CT image as not containing the abdominal region and skip additional processing, if the abdominal region is not detected by the first deep learning model (206A).

9. A method for determining muscle density of a subject by computing Hounsfield unit average (HUAC) from a computed tomography (CT) image, the method comprising:
receiving, by a processor (202) of a muscle metric determining server (104), the CT image associated with the subject from a CT scanner, wherein the CT image comprises a plurality of anatomical structures comprising organs, tissues, abdominal regions, vertebrae or bones of the subject;
characterized in that,
detecting, by the processor (202), an abdominal region in the CT image using a first deep learning model (206A) by extracting a plurality of sagittal slices from the CT image, and providing the plurality of sagittal slices into the first deep learning model (206A), wherein the first deep learning model (206A) (i) extracts one or more features indicative of the abdominal region within the plurality of sagittal slices, and (ii) generates a segmentation mask associated with the abdominal region based on the one or more features, wherein the plurality of sagittal slices are a plurality of slices taken perpendicular to a coronal plane and parallel to a sagittal plane of the subject;
locating, by the processor (202), a third lumbar (L3) vertebrae within the CT image using a second deep learning model (208A), if the abdominal region is detected, wherein the L3 vertebrae is located by extracting a plurality of coronal slices from the CT image, and providing the plurality of coronal slices into the second deep learning model (208A), wherein the second deep learning model (208A) (i) extracts one or more features associated with L3 vertebrae within the plurality of coronal slices, and (ii) generates a segmentation mask associated with the L3 vertebrae, wherein the plurality of coronal slices are a plurality of vertical cross-sections along a frontal plane of the subject;
determining, by the processor (202), axial slices corresponding to the L3 vertebrae by extracting spatial coordinates of the L3 vertebra and selecting one of the axial slices at L3 spatial location;
generating, by the processor (202), a segmented CT image by segmenting a right PSOAS muscle and a left PSOAS muscle from the axial slices using a third deep learning model (212A), wherein the right PSOAS muscle and the left PSOAS muscle are segmented by (i) providing the axial slices into the third deep learning model (212A), (ii) extracting one or more features associated with the PSOAS muscles within the axial slices using the third deep learning model (212A), (iii) generating a segmentation mask associated with the PSOAS muscles for each axial slice, and (iv) selecting one of the axial slices comprising the segmented left and right PSOAS masks;
computing, by the processor (202), a Hounsfield unit average (HUAC) value for the right PSOAS muscle and the left PSOAS muscle using the segmented CT image based on mean Hounsfield unit (HU) of the left and right PSOAS muscles; and
enabling a user to determine the muscle density of the subject by providing the HUAC value as output.

10. The method as claimed in claim 9, wherein the method comprises computing, by the processor (202), individual cross-sectional areas (CSA) of the left and right PSOAS muscles to determine the size of the PSOAS muscles, wherein the CSA is computed by counting the number of pixels within the segmented CT image for each muscle and converting the pixel count into the actual cross-sectional area in square centimeters (cm²) using the CT pixel spacing (cm² per pixel).

Documents

Application Documents

# Name Date
1 202441032484-STATEMENT OF UNDERTAKING (FORM 3) [24-04-2024(online)].pdf 2024-04-24
2 202441032484-PROVISIONAL SPECIFICATION [24-04-2024(online)].pdf 2024-04-24
3 202441032484-PROOF OF RIGHT [24-04-2024(online)].pdf 2024-04-24
4 202441032484-FORM FOR STARTUP [24-04-2024(online)].pdf 2024-04-24
5 202441032484-FORM FOR SMALL ENTITY(FORM-28) [24-04-2024(online)].pdf 2024-04-24
6 202441032484-FORM 1 [24-04-2024(online)].pdf 2024-04-24
7 202441032484-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-04-2024(online)].pdf 2024-04-24
8 202441032484-EVIDENCE FOR REGISTRATION UNDER SSI [24-04-2024(online)].pdf 2024-04-24
9 202441032484-DRAWINGS [24-04-2024(online)].pdf 2024-04-24
10 202441032484-Request Letter-Correspondence [26-04-2024(online)].pdf 2024-04-26
11 202441032484-Power of Attorney [26-04-2024(online)].pdf 2024-04-26
12 202441032484-FORM28 [26-04-2024(online)].pdf 2024-04-26
13 202441032484-FORM-26 [26-04-2024(online)].pdf 2024-04-26
14 202441032484-Form 1 (Submitted on date of filing) [26-04-2024(online)].pdf 2024-04-26
15 202441032484-Covering Letter [26-04-2024(online)].pdf 2024-04-26
16 202441032484-DRAWING [23-04-2025(online)].pdf 2025-04-23
17 202441032484-CORRESPONDENCE-OTHERS [23-04-2025(online)].pdf 2025-04-23
18 202441032484-COMPLETE SPECIFICATION [23-04-2025(online)].pdf 2025-04-23