Abstract: Provided is a deep learning-based hernia image assist system 100 designed to aid medical professionals in hernia repair procedures. The system employs a combination of deep learning models, memory, and processing capabilities to provide comprehensive insights and generate informative reports. Key functionalities include the identification of the abdominal region 306 in computed tomography (CT) images 102, localization of the right and left psoas muscles 506, calculation of sarcopenia scores, determination of hernia width and classification based on European Hernia Society (EHS) standards, computation of the Tanaka Index and Sabbagh ratio, and the identification of mesh presence and location within CT scans. Additionally, the system segments the left and right rectus muscles 1006 to assess diastasis recti indicators, diastasis width, and rectus abdominus width. The generated reports, informed by these parameters, facilitate informed decision-making for healthcare providers during hernia repair procedures. This advanced technology promises to enhance surgical precision and user care in the field of hernia surgery, offering a valuable tool for medical professionals. FIG. 2
DESC:BACKGROUND
Technical Field
[0001] The embodiments herein generally relate to surgical assistance for hernia repair, more particularly to an image assist system for assisting in abdominal wall hernia repairs using deep learning techniques.
Description of the Related Art
[0002] In conventional treatment methods, hernias were repaired using surgical techniques without the use of imaging techniques like CT (Computed Tomography). Even when a prosthesis such as a mesh is used, about 30% of such repairs recur. Recurrence often results from an inappropriate surgical technique, which causes the sutures holding the abdominal muscles together to become less effective over time, or from a surgical wound that has not healed properly. There are also multiple other reasons, besides surgical error, why an abdominal wall hernia recurs; however, nearly 50% of recurrences are due to incorrect surgical technique. Thus, a high level of expertise is required for surgical decision-making in the case of hernias, especially for large abdominal wall defects/hernias where the content of the hernial sac exceeds 20-25% of the abdominal volume. In such hernia defects, a high frequency of recurrence is observed. Thus, to avoid such recurrences, the closure of such massive defects requires special surgical techniques. If the closure is not done properly at the initial stage, it leads to frequent re-treatments (recurrences), which may increase surgical complexity every time a patient is operated on. This also incurs a significant financial burden on the patient's family and on the health care system.
[0003] In order to solve the above-mentioned problems, radiological expertise is required to provide images/reports related to the hernia prior to surgery. Surgeons or radiological experts may perform calculations on the images and/or reports for decision-making activities during pre-operative planning, e.g. to estimate post-operative risks and other complexities related to hernia repair. However, this calculation/analysis is performed manually and is thus a tedious and time-consuming task, and important parameters may be missed during the analysis because no set of protocols is defined for the calculations.
[0004] Hence, there is a need for a technology that overcomes the aforementioned drawbacks to provide meaningful information prior to hernia repair surgery for assisting surgeons in the process of making the right surgery-related decisions relating to hernia defects.
SUMMARY
[0005] In view of the foregoing, an embodiment herein provides a hernia image assist system that generates a report for assisting in hernia repairs. The hernia image assist system includes a memory and a processor. The processor detects, using a deep learning model, an abdominal region from a computed tomography (CT) image by identifying Xiphoid and Pubic symphysis in sagittal slices of the CT image. The processor determines one or more edges of a right psoas muscle and a left psoas muscle that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle. The processor determines, using the deep learning model, a sarcopenia score using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle that is segmented. The processor determines a width of a hernia region on the CT image and identifies, using a European Hernia Society (EHS) classification, a location of the hernia region on the CT image. The processor computes, using the deep learning model, a Tanaka index and a Sabbagh ratio using a hernia sac volume and an abdominal volume. The processor determines, using the deep learning model, a mesh indicator by (a) identifying a mesh in the CT image by analysing the axial slices of the CT image and (b) segmenting the mesh by determining a location comprising a slice where it is inferred. The processor segments, using the deep learning model, a left rectus muscle and a right rectus muscle from the CT image to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width. 
The processor generates a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user for appropriate decision making during hernia repair.
[0006] In some embodiments, the processor segments the right psoas muscle and the left psoas muscle by (i) detecting and segmenting, using the deep learning model, an L3 vertebra from the abdominal region that is detected by analysing the sagittal slices and coronal slices of the CT image, (ii) extracting all axial slices relevant to the L3 vertebra that is segmented, and (iii) accurately segmenting, using the deep learning model, the right psoas muscle and the left psoas muscle by analyzing the axial slices relevant to the L3 vertebra.
[0007] In some embodiments, the processor identifies and segments, using the deep learning model, the hernia region by analysing each axial slice of the abdominal region on the CT image at X-Y axis and across other axial slices at a Z-axis. In some embodiments, the processor determines the hernia sac volume by (i) segmenting, using the deep learning model, a hernia sac from the hernia region on each axial slice of the CT image, and (ii) computing a volume of the hernia sac by subsequently stacking the axial slices together.
[0008] In some embodiments, the processor determines the abdominal volume by (i) segmenting, using the deep learning model, the abdominal region from the CT image by analysing the sagittal slices of the CT image that are identified, and (ii) computing the abdominal volume of the abdominal region by subsequently stacking the axial slices together. In some embodiments, the computed tomography (CT) image is obtained directly from a picture archiving and communication system (PACS) system or a storage system. In some embodiments, the processor determines a content of the hernia sac using at least one of a deep learning model or one or more computer vision (CV) techniques.
[0009] In some embodiments, the processor determines the width of the hernia region, and the location of the hernia region using one or more computer vision (CV) techniques by identifying a segment of the abdominal region and specifying the location on the basis of the EHS classification.
[0010] The hernia image assist system assists surgeons with meaningful information, i.e., in the form of metrics/parameters prior to hernia repair surgery, and assists surgeons across the globe in the process of making the right surgery-related decisions relating to hernia defects. The information includes characterization of the defect, sarcopenia, visceral fat score, volumetric analysis and the amount of loss of domain in percentage. The hernia image assist system also determines the presence of mesh and adhesions. The hernia image assist system may estimate characteristics of the abdominal wall musculature. In an exemplary embodiment, the hernia image assist system provides assistance to the surgeons based on the analysis performed on the abdominal CT images and other additional information. It applies deep learning and computer vision techniques to derive the above-mentioned parameters/metrics and accordingly provides assistance to the surgeons. The hernia image assist system determines whether a user is actually suffering from a hernia defect or not. The hernia image assist system performs a volumetric analysis of the hernia and computes the amount of loss of domain in percentage. The hernia image assist system assists surgeons in the selection of the mesh (type, size) required to perform surgery for better user outcomes and to avoid recurrences of the hernia. The selection of the mesh is determined by analyzing various parameters extracted from the abdominal CT images. The hernia image assist system estimates the postoperative risks and other complications prior to the operation/surgery and provides recommendations to reduce the risk.
[0011] In one aspect, a method of generating a report for assisting a user in hernia repairs is provided. The method includes (i) detecting, using a deep learning model, an abdominal region from a computed tomography (CT) image by identifying Xiphoid and Pubic symphysis in sagittal slices of the CT image, (ii) determining one or more edges of a right psoas muscle and a left psoas muscle that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle, (iii) determining, using the deep learning model, a sarcopenia score using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle that is segmented, (iv) determining a width of a hernia region on the CT image and identifying, using a European Hernia Society (EHS) classification, a location of the hernia region on the CT image, (v) computing, using the deep learning model, a Tanaka index and a Sabbagh ratio using a hernia sac volume and an abdominal volume, (vi) determining, using the deep learning model, a mesh indicator by (a) identifying a mesh in the CT image by analysing the axial slices of the CT image and (b) segmenting the mesh by determining a location comprising a slice where it is inferred, (vii) segmenting, using the deep learning model, a left rectus muscle and a right rectus muscle from the CT image to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width, and (viii) generating a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user for appropriate decision making during hernia repair.
[0012] In some embodiments, the right psoas muscle and the left psoas muscle are segmented by (i) detecting and segmenting, using the deep learning model, an L3 vertebra from the abdominal region that is detected by analysing the sagittal slices and coronal slices of the CT image, (ii) extracting all axial slices relevant to the L3 vertebra that is segmented, and (iii) accurately segmenting, using the deep learning model, the right psoas muscle and the left psoas muscle by analyzing the axial slices relevant to the L3 vertebra.
[0013] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0015] FIG. 1 illustrates a block diagram of a hernia image assist system that generates a report for assisting in hernia repairs according to an embodiment herein;
[0016] FIG. 2 illustrates a workflow of the hernia image assist system that generates a report for assisting in hernia repairs according to an embodiment herein;
[0017] FIG. 3 illustrates an exemplary view of a process of detecting an abdominal region from a computed tomography (CT) image using the hernia image assist system of FIG. 1 according to an embodiment herein;
[0018] FIG. 4 illustrates an exemplary view of a process of detecting and segmenting an L3 vertebra from the abdominal region that is detected using the hernia image assist system of FIG. 1 according to an embodiment;
[0019] FIG. 5 illustrates an exemplary view of a process of segmenting a right psoas muscle and a left psoas muscle from the segmented L3 vertebrae using the hernia image assist system of FIG. 1 according to an embodiment;
[0020] FIG. 6 illustrates an exemplary view of a process of identifying and segmenting a hernia region from an axial slice using the hernia image assist system of FIG. 1 according to an embodiment;
[0021] FIG. 7 illustrates an exemplary view of a process of segmenting a hernia sac from the hernia region on each axial slice of the CT image using the hernia image assist system of FIG. 1 according to an embodiment;
[0022] FIG. 8 illustrates an exemplary view of a process of determining an abdominal volume from the axial slices of the CT image using the hernia image assist system of FIG. 1 according to an embodiment;
[0023] FIG. 9 illustrates an exemplary view of a process of identifying and segmenting a mesh from an axial slice of the CT image using the hernia image assist system of FIG. 1 according to an embodiment;
[0024] FIG. 10 illustrates an exemplary view of a process of segmenting a left rectus muscle and a right rectus muscle from an axial slice of the CT image using the hernia image assist system of FIG. 1 according to an embodiment; and
[0025] FIGS. 11A-11B are flow diagrams that illustrate a method of generating a report for assisting a user in hernia repairs according to an embodiment herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0026] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0027] As mentioned, there remains a need for a hernia image assist system that generates a report for assisting in hernia repairs. Various embodiments disclosed herein provide a hernia image assist system that generates a report for assisting in hernia repairs and a method of generating a report for assisting a user in hernia repairs. Referring now to the drawings, and more particularly to FIGS. 1 through 11B, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
[0028] FIG. 1 illustrates a block diagram of a hernia image assist system 100 that generates a report for assisting in hernia repairs according to an embodiment herein. The hernia image assist system 100 includes a memory 104 and a processor 106. The processor 106 detects, using a deep learning model 108, an abdominal region from a computed tomography (CT) image 102 by identifying Xiphoid and Pubic symphysis in sagittal slices of the CT image 102. The processor 106 determines one or more edges of a right psoas muscle and a left psoas muscle that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle. The processor 106 determines, using the deep learning model 108, a sarcopenia score using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle that is segmented. The processor 106 determines a width of a hernia region on the CT image 102 and identifies, using a European Hernia Society (EHS) classification, a location of the hernia region on the CT image 102. The processor 106 computes, using the deep learning model 108, a Tanaka index and a Sabbagh ratio using a hernia sac volume and an abdominal volume. The processor 106 determines, using the deep learning model 108, a mesh indicator by (a) identifying a mesh in the CT image 102 by analysing the axial slices of the CT image 102 and (b) segmenting the mesh by determining a location comprising a slice where it is inferred. The processor 106 segments, using the deep learning model 108, a left rectus muscle and a right rectus muscle from the CT image 102 to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width. 
The processor 106 generates a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user for appropriate decision making during hernia repair. A Python script may be used to generate a PDF-based report.
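The report-generation step described above can be sketched as a small Python routine. The parameter names and labels below are illustrative assumptions, not the system's actual field names; a library such as reportlab could render the same content to PDF, while this sketch emits plain text.

```python
def generate_report(params: dict) -> str:
    """Assemble a plain-text hernia assessment report from computed metrics.

    The keys in `params` are illustrative; the actual system may use
    different names. Metrics absent from `params` are reported as
    'not computed' rather than omitted.
    """
    fields = [
        ("Sarcopenia score (HUAC)", "sarcopenia_score"),
        ("Hernia width (cm)", "hernia_width_cm"),
        ("EHS location", "ehs_location"),
        ("Tanaka index", "tanaka_index"),
        ("Sabbagh ratio", "sabbagh_ratio"),
        ("Mesh present", "mesh_indicator"),
        ("Diastasis recti", "diastasis_recti_indicator"),
        ("Diastasis width (cm)", "diastasis_width_cm"),
        ("Rectus abdominus width (cm)", "rectus_width_cm"),
    ]
    lines = ["HERNIA IMAGE ASSIST REPORT", "-" * 26]
    for label, key in fields:
        lines.append(f"{label}: {params.get(key, 'not computed')}")
    return "\n".join(lines)
```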
[0029] In some embodiments, the processor 106 segments the right psoas muscle and the left psoas muscle by (i) detecting and segmenting, using the deep learning model 108, an L3 vertebra from the abdominal region that is detected by analysing the sagittal slices and coronal slices of the CT image 102, (ii) extracting all axial slices relevant to the L3 vertebra that is segmented, and (iii) accurately segmenting, using the deep learning model 108, the right psoas muscle and the left psoas muscle by analyzing the axial slices relevant to the L3 vertebra. In some embodiments, the processor 106 identifies and segments, using the deep learning model 108, the hernia region by analysing each axial slice of the abdominal region on the CT image 102 at X-Y axis and across other axial slices at a Z-axis. In some embodiments, the processor 106 determines the hernia sac volume by (i) segmenting, using the deep learning model 108, a hernia sac from the hernia region on each axial slice of the CT image 102, and (ii) computing a volume of the hernia sac by subsequently stacking the axial slices together.
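The sarcopenia (HUAC) score computed from the segmented psoas muscles can be sketched as follows; this is a minimal illustration assuming the score is the average Hounsfield unit value over the pixels inside both psoas segmentations, with array inputs as stand-ins for the model's mask outputs.

```python
import numpy as np

def huac_score(hu_slice, right_mask, left_mask):
    """Hounsfield unit average calculation (HUAC) over the psoas muscles.

    hu_slice: 2-D array of HU values for an axial slice at the L3 level.
    right_mask / left_mask: boolean arrays of the same shape marking the
    segmented right and left psoas muscles.
    """
    combined = right_mask | left_mask
    if not combined.any():
        raise ValueError("empty psoas segmentation")
    # Average HU over all pixels bound within either psoas muscle.
    return float(hu_slice[combined].mean())
```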
[0029] In some embodiments, the processor 106 determines the abdominal volume by (i) segmenting, using the deep learning model 108, the abdominal region from the CT image 102 by analysing the sagittal slices of the CT image 102 that are identified, and (ii) computing the abdominal volume of the abdominal region by subsequently stacking the axial slices together. In some embodiments, the computed tomography (CT) image 102 is obtained directly from a picture archiving and communication system (PACS) or a storage system. In some embodiments, the processor 106 determines a content of the hernia sac using at least one of the deep learning model 108 or one or more computer vision (CV) techniques. In some embodiments, the processor 106 determines the width of the hernia region, and the location of the hernia region using one or more computer vision (CV) techniques by identifying a segment of the abdominal region and specifying the location on the basis of the EHS classification.
[0030] The hernia image assist system 100 assists surgeons with meaningful information, i.e., in the form of metrics/parameters prior to hernia repair surgery, and assists surgeons across the globe in the process of making the right surgery-related decisions relating to hernia defects. The information includes characterization of the defect, sarcopenia and visceral fat scores, volumetric analysis and the amount of loss of domain in percentage. The hernia image assist system 100 also determines the presence of mesh and adhesions. The hernia image assist system 100 may also estimate characteristics of the abdominal wall musculature. In an exemplary embodiment, the hernia image assist system 100 provides assistance to the surgeons based on the analysis performed on the abdominal CT images and other additional information. The hernia image assist system 100 applies deep learning and computer vision techniques to derive the above-mentioned parameters/metrics and accordingly provides assistance to the surgeons. The hernia image assist system 100 determines whether a user is actually suffering from a hernia defect or not. The hernia image assist system 100 performs a volumetric analysis of the hernia and computes the amount of loss of domain in percentage. The hernia image assist system 100 assists surgeons in the selection of the mesh (type, size) required to perform surgery for better user outcomes and to avoid recurrences of the hernia. The selection of the mesh is determined by analyzing various parameters extracted from the abdominal CT images. The hernia image assist system 100 estimates the postoperative risks and other complications prior to the operation/surgery and provides recommendations to reduce the risk.
[0031] The hernia image assist system 100 may use abdominal CT images, associated information, and deep learning models to characterize several metrics/parameters that may provide assistance to a surgeon before planning an abdominal wall hernia surgery. Deep learning models implemented on CT images 102, such as those of the hernia image assist system 100, have proven useful in medical applications. The hernia image assist system 100 utilizes this deep learning technology to predict surgical complexity or outcomes from preoperative imaging.
[0032] In an exemplary embodiment, the hernia image assist system 100 comprises a CT aggregation and processing unit that handles a series of CT images 102 around the abdominal area/region of the user suffering from an abdominal wall hernia. Multiple cross-sectional CT images 102 are processed and aggregated using the aggregation and processing unit. The CT images 102 from the CT aggregation and processing unit are annotated with multiple labels of interest that help in the classification of data. Then, the deep learning model 108 is applied and weights are generated for inference. These weights are updated continuously to achieve an optimum error value in the generated output. The deep learning model 108 is trained for several iterations to get optimal results. The CT image 102 that needs to be analyzed is processed and the processed CT image is fed to the model inference engine. Computer vision techniques and a computational module are applied on the output received from the model inference engine to generate or extract the output parameters. The hernia image assist system 100 generates a report for decision making by the surgeons towards the hernia repair. Particularly, the report contains the following minimal output parameters: 1. Size of the hernia region/defect, 2. Abdominal muscle measurements, 3. EHS classification of the hernia region/defect, 4. Contents of the hernial sac, 5. Tanaka index and other volumetric ratios, 6. Presence and site of adhesions, 7. Safe spots for abdominal entry for surgery, 8. Sarcopenia and visceral fat scores.
[0033] In some embodiments, the deep learning model 108 is incorporated in the hernia image assist system 100. The deep learning model 108 includes convolutional neural networks and a vision transformer for processing images. Convolutional neural networks comprise multiple node layers. In other words, the deep learning model 108 uses the CT images 102 as input to furnish said metrics/parameters that may be used by surgeons in decision making prior to surgery. The deep learning model 108 includes a data preprocessing step to train and test the deep learning model 108. Multiple CT images 102 are assembled and annotated for various labels including the hernia defect, the hernia sac, etc. In one exemplary embodiment, CT images of size 512 x 512 are used. The CT images 102 are normalized, and the corresponding output labels are encoded with the annotated masks, which are further fed to training and validation models. In each training iteration, the CT image 102 is passed as input to the deep learning model 108, the subsequent convolution layers extract high-dimensional features from the image, and feature maps are generated. The feature maps are provided to the transformer to get encoded patches. The deep learning model 108 further uses a loss function as an optimization objective based on which the model weights are updated. The deep learning model 108 computes the loss function and backpropagates to update the weights such that the loss function value reduces. The deep learning model 108 is trained for several iterations, repeating this process until the loss is minimal and the deep learning model 108 is able to identify whether a hernia is present or not. Two loss functions are computed by the deep learning model 108. In one exemplary embodiment, the first loss function (L1) is for the classification and identification of the hernia defect.
In another exemplary embodiment, the second loss function (L2) is for segmentation. The two major functions of the deep learning model 108 are transformation/classification and segmentation. The transformer block of the deep learning model 108 helps in the classification of the image to identify and find the defect in the abdomen. It also helps in locating the accurate position of the defect and other prostheses. Further, image segmentation is performed to segment the hernia sac and other abdominal anatomies and anomalies. Once the deep learning model 108 is sufficiently trained, the CT image 102 that needs to be studied using the system is processed and fed to the inference engine, which gives a predicted output to the computer vision and computational module to calculate the various outputs. Reports may be generated using the output for the surgeons to plan surgery and understand post-surgical complications.
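The two-loss training scheme described above can be sketched numerically. The specification does not name the concrete loss formulas, so the sketch assumes a common pairing: binary cross-entropy for the classification loss (L1) and a soft Dice loss for the segmentation loss (L2), combined by weighted sum.

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy: classification loss (L1) for hernia presence."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def dice_loss(pred_mask, true_mask, eps=1e-7):
    """Soft Dice loss: segmentation loss (L2) for the hernia-sac mask."""
    inter = (pred_mask * true_mask).sum()
    return float(1 - (2 * inter + eps) /
                 (pred_mask.sum() + true_mask.sum() + eps))

def total_loss(p, y, pred_mask, true_mask, w_cls=1.0, w_seg=1.0):
    """Weighted sum of the two losses backpropagated during training."""
    return w_cls * bce_loss(p, y) + w_seg * dice_loss(pred_mask, true_mask)
```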
[0034] FIG. 2 illustrates a workflow of the hernia image assist system 100 that generates a report for assisting in hernia repairs according to an embodiment herein. The hernia image assist system 100 includes a memory 104 and a processor 106. The hernia image assist system 100 processes a computed tomography (CT) image 102 associated with a user by setting a Hounsfield Unit (HU) window to a level of 0 and a width of 400 to ensure optimal visual clarity and accurate image representation. The hernia image assist system 100 further performs checks to verify the number of slices and pixel details of the CT image. The hernia image assist system 100 accepts the abdominal CT image 102 after initial verification to ensure the presence of the necessary slices with acceptable quality for applying a deep learning model 108 and/or one or more computer vision (CV) techniques.
[0035] At a step 202, the processor 106 detects, using the deep learning model 108, an abdominal region from the computed tomography (CT) image 102 by identifying the Xiphoid and Pubic symphysis in sagittal slices of the CT image 102. If the abdominal region is not detected, an error is raised; otherwise the process proceeds to a step 204. At the step 204, the processor 106 determines one or more edges/boundaries of a right psoas muscle and a left psoas muscle that is segmented using an edge detector, and an area of each side of the right psoas muscle and the left psoas muscle. At a step 206, the processor 106 segments the right psoas muscle and the left psoas muscle by (i) detecting and segmenting, using the deep learning model 108, an L3 vertebra from the abdominal region that is detected by analysing the sagittal slices and coronal slices of the CT image 102, (ii) extracting all axial slices relevant to the L3 vertebra that is segmented, and (iii) accurately segmenting, using the deep learning model 108, the right psoas muscle and the left psoas muscle by analyzing the axial slices relevant to the L3 vertebra. At a step 208, the processor 106 determines, using the deep learning model 108, a sarcopenia score (a Hounsfield unit average calculation, HUAC, score) using a combination of the Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle that is segmented.
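The edge and area computations of step 204 can be sketched without committing to a particular edge detector (the specification does not name one): here the edge is taken as the boundary pixels of the binary segmentation, and the area as the pixel count scaled by the pixel spacing. Both the function names and the 4-neighbour boundary definition are illustrative assumptions.

```python
import numpy as np

def mask_edges(mask):
    """Boundary pixels of a binary segmentation: pixels inside the mask
    with at least one 4-neighbour outside it."""
    padded = np.pad(mask, 1)  # pad with False so borders count as outside
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def mask_area_cm2(mask, pixel_spacing_mm):
    """Cross-sectional area of a psoas segmentation in cm^2."""
    return (np.count_nonzero(mask)
            * pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0)
```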
[0036] At a step 210, the processor 106 identifies and segments, using the deep learning model 108, the hernia region/defect by analysing each axial slice of the abdominal region on the CT image 102 at X-Y axis and across other axial slices at a Z-axis. At a step 212, the processor 106 determines a width of a hernia region on the CT image 102 and identifies, using a European Hernia Society (EHS) classification, a location of the hernia region on the CT image 102. At a step 214, the processor 106 determines the width of the hernia region, and the location of the hernia region using one or more computer vision (CV) techniques by identifying a segment of the abdominal region and specifying the location on the basis of the EHS classification.
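The EHS-based classification of steps 212-214 can be sketched for the width component. The width cut-offs below follow the published EHS incisional-hernia classes (W1 below 4 cm, W2 from 4 to under 10 cm, W3 of 10 cm and above); the zone dictionary is included only to illustrate the midline location labels, and the exact location logic of the system is not specified here.

```python
def ehs_width_class(width_cm):
    """EHS incisional-hernia width classes: W1 < 4 cm, W2 4-10 cm, W3 >= 10 cm."""
    if width_cm < 4:
        return "W1"
    if width_cm < 10:
        return "W2"
    return "W3"

# EHS midline (M) zones, listed cranial to caudal, for location reporting.
MIDLINE_ZONES = {
    "M1": "subxiphoidal",
    "M2": "epigastric",
    "M3": "umbilical",
    "M4": "infraumbilical",
    "M5": "suprapubic",
}
```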
[0037] At a step 216, the processor 106 segments, using the deep learning model 108, a hernia sac from the hernia region on each axial slice of the CT image 102. At a step 218, the processor 106 determines the hernia sac volume by (i) segmenting, using the deep learning model 108, the hernia sac from the hernia region on each axial slice of the CT image 102, and (ii) computing a volume of the hernia sac by subsequently stacking the axial slices together. In some embodiments, the processor 106 determines a content of the hernia sac using at least one of the deep learning model 108 or one or more computer vision (CV) techniques. At a step 220, the processor 106 determines the abdominal volume by (i) segmenting, using the deep learning model 108, the abdominal region from the CT image 102 by analysing the sagittal slices of the CT image 102 that are identified, and (ii) computing the abdominal volume of the abdominal region by subsequently stacking the axial slices together. The processor 106 computes, using the deep learning model 108, a Tanaka index at a step 222 and a Sabbagh ratio at a step 224 using the hernia sac volume and the abdominal volume.
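The slice-stacking volume computation of steps 218-220 and the ratios of steps 222-224 can be sketched as below. The Tanaka index and Sabbagh ratio definitions used here (sac volume over abdominal cavity volume, and sac volume over total volume, respectively) follow their common usage in the loss-of-domain literature and are assumptions, since the specification does not spell out the formulas.

```python
import numpy as np

def volume_ml(masks, pixel_spacing_mm, slice_thickness_mm):
    """Volume of a stacked segmentation in millilitres.

    masks: iterable of 2-D boolean arrays, one per axial slice.
    pixel_spacing_mm: (row, col) spacing, e.g. from the DICOM header.
    slice_thickness_mm: spacing between consecutive axial slices.
    """
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    n_voxels = sum(int(np.count_nonzero(m)) for m in masks)
    return n_voxels * voxel_mm3 / 1000.0  # mm^3 -> mL

def tanaka_index(sac_vol_ml, abdominal_vol_ml):
    # Assumed definition: hernia sac volume / abdominal cavity volume.
    return sac_vol_ml / abdominal_vol_ml

def sabbagh_ratio(sac_vol_ml, abdominal_vol_ml):
    # Assumed definition: sac volume / total volume (sac + cavity).
    return sac_vol_ml / (sac_vol_ml + abdominal_vol_ml)
```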
[0038] At a step 226, the processor 106 identifies a mesh in the CT image 102 by analysing the axial slices of the CT image 102 and segments the mesh by determining a location comprising a slice where it is inferred. At a step 228, the processor 106 determines, using the deep learning model 108, a mesh indicator by (a) identifying the mesh in the CT image 102 by analysing the axial slices of the CT image 102 and (b) segmenting the mesh by determining a location comprising the slice where it is inferred. At a step 230, the processor 106 segments, using the deep learning model 108, a left rectus muscle and a right rectus muscle from the CT image 102. The processor 106 determines at least one of a diastasis recti indicator at a step 232, a diastasis width at a step 234 or a rectus abdominus width at a step 236. At a step 238, the processor 106 generates, optionally using the deep learning model 108, a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user for appropriate decision making during hernia repair. The reports may be in text and/or PDF formats. A Python script may be used to generate a PDF-based report.
[0039] In some embodiments, the computed tomography (CT) image 102 is obtained directly from a picture archiving and communication system (PACS) system or a storage system. In some embodiments, the processor 106 processes the CT image automatically once the CT image is stored in the storage system.
[0040] FIG. 3 illustrates an exemplary view of a process of detecting an abdominal region 306 from a computed tomography (CT) image 102 using the hernia image assist system 100 of FIG. 1 according to an embodiment herein. The hernia image assist system 100 comprises a processor 106 that detects, using the deep learning model 108, an abdominal region 306 from the computed tomography (CT) image 102 by identifying the Xiphoid and Pubic symphysis in sagittal slices 302 of the CT image 102. The hernia image assist system 100 receives the sagittal slices 302 of the CT image 102, along with a ground truth 304 of the sagittal slice 302, for detecting an abdominal region 306 from the computed tomography (CT) image 102. In some embodiments, the deep learning model 108 is trained on sagittal slices 302 and segments the Xiphoid and Pubic symphysis. The hernia image assist system 100 employs the identified abdominal region 306 to verify the presence of the abdominal region in the CT image 102. If the abdominal region 306 is not detected, the hernia image assist system 100 raises an error and further inference is terminated.
[0041] FIG. 4 illustrates an exemplary view of a process of detecting and segmenting an L3 vertebrae 406 from the abdominal region 306 that is detected using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives a coronal slice 402 of the CT image 102, along with a ground truth 404 of the coronal slice 402, for detecting an L3 vertebrae 406 from the coronal slice 402 of the CT image 102. The hernia image assist system 100 detects and segments, using the deep learning model 108, the L3 vertebrae 406 from the abdominal region 306 that is detected by analysing the sagittal slices and coronal slices 402 of the CT image 102. The hernia image assist system 100 extracts all axial slices relevant to the L3 vertebrae 406 that is segmented. In some embodiments, the hernia image assist system 100 utilizes a computational technique that incorporates additional inferences at the axial level, starting at the mid-level of the L3 vertebrae 406 that is identified. If the psoas muscles are not distinctly visible, the deep learning model 108 proceeds to the next axial slices until an appropriate slice containing the psoas muscles is selected for further analysis. This traversal may lead to slices at an L4 level where the psoas muscles are clearly visible, which, while suboptimal, remains a valid approach.
[0042] FIG. 5 illustrates an exemplary view of a process of segmenting a right psoas muscle and a left psoas muscle 506 from the segmented L3 vertebrae 406 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives an axial slice 502 of the CT image 102, along with a ground truth 504 of the axial slice 502, for segmenting the right psoas muscle and the left psoas muscle from the segmented L3 vertebrae 406. In some embodiments, the segmented L3 vertebrae 406 includes the axial slices 502. The hernia image assist system 100 segments the right psoas muscle and the left psoas muscle 506 by (i) detecting and segmenting, using the deep learning model 108, the L3 vertebrae 406 from the abdominal region 306 that is detected by analysing the sagittal slices and coronal slices of the CT image 102, (ii) extracting all axial slices 502 relevant to the L3 vertebrae 406 that is segmented, and (iii) accurately segmenting, using the deep learning model 108, the right psoas muscle and the left psoas muscle 506 by analyzing the axial slices 502 relevant to the L3 vertebrae.
[0043] The hernia image assist system 100 determines one or more edges of the right psoas muscle and the left psoas muscle 506 that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle. The hernia image assist system 100 determines, using the deep learning model 108, a sarcopenia score (Hounsfield unit average calculation, HUAC, score) using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle 506 that is segmented.
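The HUAC computation described above can be sketched as the mean Hounsfield unit value over the pixels bound within both psoas masks. This is an illustrative sketch under the assumption that the score is a plain mean of the masked HU values; the function name and toy inputs are not from the disclosure.

```python
import numpy as np

def huac_score(hu_slice, left_psoas_mask, right_psoas_mask):
    """Hounsfield Unit Average Calculation (HUAC): mean HU of all pixels
    bound within the segmented left and right psoas muscles.

    hu_slice: 2-D array of Hounsfield units for the chosen axial slice.
    left_psoas_mask / right_psoas_mask: binary segmentation masks.
    """
    combined = (left_psoas_mask > 0) | (right_psoas_mask > 0)
    if not combined.any():
        raise ValueError("empty psoas segmentation")
    return float(hu_slice[combined].mean())

# Toy check: uniform 40 HU tissue, one masked pixel per muscle
hu = np.full((4, 4), 40.0)
left = np.zeros((4, 4)); left[1, 0] = 1
right = np.zeros((4, 4)); right[1, 3] = 1
score = huac_score(hu, left, right)
```

Lower mean HU within the psoas muscles is a recognised radiological indicator of sarcopenia, which is why the masked average serves as the score.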
[0044] FIG. 6 illustrates an exemplary view of a process of identifying and segmenting a hernia region 606 from an axial slice 602 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives the axial slice 602 of the CT image 102, along with a ground truth 604 of the axial slice 602, for determining a hernia region/defect 606. The hernia image assist system 100 identifies and segments, using the deep learning model 108, the hernia region/defect 606 by analysing each axial slice 602 of the abdominal region on the CT image 102 in the X-Y plane and across other axial slices along the Z-axis. In some embodiments, the hernia image assist system 100 determines a width of the hernia region 606 on the CT image 102 and identifies, using a European Hernia Society (EHS) classification, a location of the hernia region 606 on the CT image 102. In some embodiments, the hernia image assist system 100 determines the width of the hernia region 606 and the location of the hernia region 606 using one or more computer vision (CV) techniques by identifying a segment of the abdominal region and specifying the location on the basis of the EHS classification.
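One simple CV technique for the width measurement is to take the widest left-right extent of the defect mask across the axial slices and convert it to millimetres using the pixel spacing. This is an illustrative sketch only; the disclosure does not specify the measurement convention, and the function name and spacing values here are assumptions.

```python
import numpy as np

def hernia_width_mm(defect_masks, pixel_spacing_mm):
    """Largest left-right extent of the segmented hernia defect across
    axial slices, converted to millimetres via the column pixel spacing.

    defect_masks: list of 2-D binary defect masks, one per axial slice.
    pixel_spacing_mm: (row, column) in-plane spacing in millimetres.
    """
    widest_px = 0
    for mask in defect_masks:
        cols = np.flatnonzero(mask.any(axis=0))  # columns containing defect
        if cols.size:
            widest_px = max(widest_px, int(cols[-1] - cols[0] + 1))
    return widest_px * pixel_spacing_mm[1]

# Toy check: defect spanning columns 2..6 (5 px) at 0.8 mm/px
m = np.zeros((10, 10)); m[:, 2:7] = 1
width = hernia_width_mm([m], (0.8, 0.8))
```

The measured width then feeds the EHS classification, which grades ventral hernias by width as well as by midline/lateral location.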
[0045] FIG. 7 illustrates an exemplary view of a process of segmenting a hernia sac from the hernia region 606 on each axial slice 702 of the CT image 102 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives the hernia region 606 on an axial slice 702 of the CT image 102, along with a ground truth 704 of the axial slice 702 for segmenting a hernia sac 706. The hernia image assist system 100 segments, using the deep learning model 108, the hernia sac 706 from the hernia region 606 on each axial slice 702 of the CT image 102. In some embodiments, the hernia image assist system 100 determines a volume of the hernia sac 706 by (i) segmenting, using the deep learning model 108, the hernia sac 706 from the hernia region 606 on each axial slice 702 of the CT image 102, and (ii) computing a volume of the hernia sac 706 by subsequently stacking the axial slices 702 together. In some embodiments, the hernia image assist system 100 determines a content of the hernia sac 706 using at least one of a deep learning model 108 or one or more computer vision (CV) techniques.
[0046] FIG. 8 illustrates an exemplary view of a process of determining an abdominal volume 806 from axial slices 802 of the CT image 102 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives an axial slice 802 of the CT image 102, along with a ground truth 804 of the axial slices 802, for determining an abdominal volume 806. The hernia image assist system 100 determines the abdominal volume 806 by (i) segmenting, using the deep learning model 108, an abdominal region from the CT image 102 by analysing the axial slices 802 of the CT image 102 that are identified, and (ii) computing the abdominal volume 806 of the abdominal region by subsequently stacking the axial slices 802 together. In some embodiments, the hernia image assist system 100 computes, using the deep learning model 108, a Tanaka index and a Sabbagh ratio using the hernia sac volume and the abdominal volume 806.
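Once both volumes are available, the two loss-of-domain metrics reduce to simple ratios. The formulas below follow their common definitions in the loss-of-domain literature (Tanaka: sac volume over abdominal cavity volume; Sabbagh: sac volume over total volume); the disclosure itself does not spell them out, so treat this as an assumed sketch.

```python
def loss_of_domain(hernia_sac_vol_ml, abdominal_vol_ml):
    """Loss-of-domain metrics from the two segmented volumes, using the
    definitions common in the literature (assumed here):
      Tanaka index  = sac volume / abdominal cavity volume
      Sabbagh ratio = sac volume / (sac volume + abdominal cavity volume)
    Both are returned as percentages.
    """
    tanaka = 100.0 * hernia_sac_vol_ml / abdominal_vol_ml
    sabbagh = 100.0 * hernia_sac_vol_ml / (hernia_sac_vol_ml + abdominal_vol_ml)
    return tanaka, sabbagh

# Toy check: a 2 L hernia sac against an 8 L abdominal cavity
tanaka, sabbagh = loss_of_domain(2000.0, 8000.0)
```

Note that the Sabbagh ratio is bounded by 100% by construction, whereas the Tanaka index can exceed 100% for extreme loss of domain; this is one reason both metrics are reported.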
[0047] FIG. 9 illustrates an exemplary view of a process of identifying and segmenting a mesh 906 from an axial slice 902 of the CT image 102 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives the axial slice 902 of the CT image 102, along with a ground truth 904 of the axial slice 902 for segmenting the mesh 906. The hernia image assist system 100 identifies the mesh 906 in the CT image 102 by analysing the axial slices 902 of the CT image 102 and segmenting the mesh 906 by determining a location/slice numbers comprising a slice where it is inferred. In some embodiments, the hernia image assist system 100 determines, using the deep learning model 108, a mesh indicator by (a) identifying the mesh 906 in the CT image 102 by analysing the axial slices 902 of the CT image 102 and (b) segmenting the mesh 906 by determining a location comprising the slice where it is inferred.
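The mesh indicator and its location (the slice numbers where mesh is inferred) can be sketched as follows. This is illustrative only; the area threshold used to suppress spurious single-pixel detections is an assumption, not part of the disclosure.

```python
import numpy as np

def mesh_indicator(mesh_masks, min_area_px=10):
    """Return (found, slice_numbers), where slice_numbers lists the axial
    slices on which a mesh was inferred. A small minimum mask area filters
    out spurious isolated pixels.

    mesh_masks: list of 2-D binary mesh masks, one per axial slice.
    """
    slice_numbers = [i for i, m in enumerate(mesh_masks)
                     if np.count_nonzero(m) >= min_area_px]
    return bool(slice_numbers), slice_numbers

# Toy check: mesh visible only on the middle of three slices
mesh_slice = np.zeros((5, 5)); mesh_slice[1:5, :] = 1  # 20 mesh pixels
found, where = mesh_indicator([np.zeros((5, 5)), mesh_slice, np.zeros((5, 5))])
```

The boolean becomes the report's mesh indicator, and the slice numbers give the mesh location within the scan.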
[0048] FIG. 10 illustrates an exemplary view of a process of segmenting a left rectus muscle and a right rectus muscle from an axial slice 1002 of the CT image 102 using the hernia image assist system 100 of FIG. 1 according to an embodiment. The hernia image assist system 100 receives the axial slice 1002 of the CT image 102, along with a ground truth 1004 of the axial slice 1002, for segmenting a left rectus muscle and a right rectus muscle 1006. The hernia image assist system 100 segments, using the deep learning model 108, the left rectus muscle and the right rectus muscle 1006 from the CT image 102. In some embodiments, the hernia image assist system 100 determines at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width.
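From the two rectus masks, the diastasis width can be measured as the inter-rectus gap on an axial slice. This sketch assumes the standard radiological convention that the patient's right side appears on the image's left; the function name and toy geometry are assumptions, not from the disclosure.

```python
import numpy as np

def diastasis_width_mm(left_mask, right_mask, pixel_spacing_mm):
    """Inter-rectus distance on one axial slice: the gap between the
    medial edge of the right rectus and the medial edge of the left
    rectus, converted to millimetres via the column pixel spacing.

    Assumes patient-right appears on image-left, so the right muscle's
    medial edge is its largest column and the left muscle's is its
    smallest column. Returns None if either mask is empty.
    """
    right_cols = np.flatnonzero(right_mask.any(axis=0))
    left_cols = np.flatnonzero(left_mask.any(axis=0))
    if right_cols.size == 0 or left_cols.size == 0:
        return None
    gap_px = max(0, int(left_cols[0] - right_cols[-1] - 1))
    return gap_px * pixel_spacing_mm[1]

# Toy check: right rectus in columns 0..3, left rectus in columns 8..11
right = np.zeros((6, 12)); right[:, 0:4] = 1
left = np.zeros((6, 12)); left[:, 8:12] = 1
gap = diastasis_width_mm(left, right, (1.0, 1.0))
```

A gap exceeding a clinical threshold would set the diastasis recti indicator, and the width of each mask itself gives the rectus abdominus width.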
[0049] The hernia image assist system 100 generates a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user in appropriate decision making during hernia repair. The reports may be in both text and PDF formats.
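The text form of the report can be sketched with the standard library alone; the parameter keys and labels below are assumptions for illustration, and the PDF variant (which the disclosure says may be produced by a Python script) is omitted here.

```python
def generate_report(params):
    """Assemble a plain-text report from the extracted parameters.
    Parameters absent from the input dictionary are reported as 'n/a'.
    """
    fields = [
        ("Sarcopenia score (HUAC)", "sarcopenia_score"),
        ("Hernia width (mm)", "hernia_width_mm"),
        ("EHS location", "ehs_location"),
        ("Tanaka index (%)", "tanaka_index"),
        ("Sabbagh ratio (%)", "sabbagh_ratio"),
        ("Mesh present", "mesh_indicator"),
        ("Diastasis recti", "diastasis_recti"),
        ("Diastasis width (mm)", "diastasis_width_mm"),
        ("Rectus abdominus width (mm)", "rectus_width_mm"),
    ]
    lines = ["HERNIA IMAGE ASSIST REPORT", "-" * 26]
    for label, key in fields:
        lines.append(f"{label}: {params.get(key, 'n/a')}")
    return "\n".join(lines)

# Toy check with a partial parameter set
report = generate_report({"ehs_location": "M3", "tanaka_index": 25.0})
```

Keeping the report as a dictionary-to-text mapping means any subset of the computed parameters can be reported, matching the "one or more parameters" language above.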
[0050] FIGS. 11A-B are flow diagrams that illustrate a method of generating a report for assisting a user in hernia repairs according to an embodiment herein. At a step 1102, an abdominal region is detected, using a deep learning model 108, from a computed tomography (CT) image 102 by identifying the Xiphoid and Pubic symphysis in sagittal slices of the CT image 102. At a step 1104, one or more edges of a right psoas muscle and a left psoas muscle that is segmented are determined using an edge detector, and an area of each side of the right psoas muscle and the left psoas muscle is also determined. At a step 1106, a sarcopenia score is determined, using the deep learning model 108, using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle that is segmented. At a step 1108, a width of a hernia region on the CT image 102 is determined and a location of the hernia region on the CT image 102 is identified, using a European Hernia Society (EHS) classification. At a step 1110, a Tanaka index and a Sabbagh ratio are computed, using the deep learning model 108, using a hernia sac volume and an abdominal volume. At a step 1112, a mesh indicator is determined, using the deep learning model 108, by (a) identifying a mesh in the CT image 102 by analysing the axial slices of the CT image 102 and (b) segmenting the mesh by determining a location comprising a slice where it is inferred. At a step 1114, a left rectus muscle and a right rectus muscle are segmented, using the deep learning model 108, from the CT image 102 to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width.
At a step 1116, a report is generated based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user in appropriate decision making during hernia repair.
[0051] In some embodiments, the right psoas muscle and the left psoas muscle are segmented by (i) detecting and segmenting, using the deep learning model 108, an L3 vertebrae from the abdominal region that is detected by analysing the sagittal slices and coronal slices of the CT image 102, (ii) extracting all axial slices relevant to the L3 vertebrae that is segmented, and (iii) accurately segmenting, using the deep learning model 108, the right psoas muscle and the left psoas muscle by analyzing the axial slices relevant to the L3 vertebrae. In an exemplary embodiment, the hernia image assist system 100 may include user information comprising age, sex, race, body mass index, food and medicine allergy information, medical history, comorbidity-related information, other imaging information, etc. The system 100 may also include parameters extracted from CT images 102 such as the size and location of the hernia, abdominal muscle measurements, the type of hernia, contents of the hernial sac, the Tanaka index and other volumetric ratios, the presence and site of adhesions, safe spots for abdominal entry for surgery, and Sarcopenia (loss of muscle) and visceral fat scores. In an exemplary embodiment, the parameters extracted from the CT image 102 are provided to the support system via the hernia image assist system 100. In another embodiment, the hernia image assist system 100 uses any other means for extracting the parameters from CT images 102 and utilizes the same for analysis purposes.
[0052] User information and the report of various parameters/metrics extracted from CT images 102 are fed as an input to a Hernia decision support system (HDSS) to generate surgical recommendations, predict postoperative complications and recommend methods to reduce the risk. The HDSS helps in the decision-making process in complex surgical scenarios, allowing for improved pre-operative planning, informed consent, and better risk assessment. The HDSS also helps to estimate the postoperative risks and, based on these metrics/parameters, may also suggest the type and size of the mesh for hernia repair. It further helps to identify the right surgical technique from the report of various parameters/metrics of the CT image 102. Surgeons would also like to know the relationship between the hernia sac volume and the residual abdominopelvic cavity volume, a metric termed 'loss of domain.' This loss of domain is calculated as per the Tanaka Index. Thus, a report generated based on these parameters and the user-related information provides surgeons with meaningful analysis in mesh selection for hernia repair and also provides insights into post-surgery complications.
[0053] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of appended claims.
CLAIMS:
I/We claim:
1. A hernia image assist system (100) that generates a report for assisting in hernia repairs, comprising:
a memory (104);
a processor (106) that
detects, using a deep learning model (108), an abdominal region (306) from a computed tomography (CT) image (102) by identifying Xiphoid and Pubic symphysis in sagittal slices of the CT image (102);
determines one or more edges of a right psoas muscle and a left psoas muscle (506) that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle (506);
characterized in that, determines, using the deep learning model (108), a sarcopenia score using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle (506) that is segmented;
determines a width of a hernia region (606) on the CT image (102) and identifies, using a European Hernia Society (EHS) classification, a location of the hernia region (606) on the CT image (102);
computes, using the deep learning model (108), a Tanaka index and a Sabbagh ratio using a hernia sac volume and an abdominal volume (806);
determines, using the deep learning model (108), a mesh indicator by (i) identifying a mesh (906) in the CT image (102) by analysing the axial slices of the CT image (102) and (ii) segmenting the mesh (906) by determining a location comprising a slice where it is inferred;
segments, using the deep learning model (108), a left rectus muscle and a right rectus muscle (1006) from the CT image (102) to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width;
generates a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user in appropriate decision making during hernia repair.
2. The hernia image assist system (100) as claimed in claim 1, wherein the processor (106) segments the right psoas muscle and the left psoas muscle (506) by
(i) detecting and segmenting, using the deep learning model (108), an L3 vertebrae (406) from the abdominal region (306) that is detected by analysing the sagittal slices and coronal slices of the CT image (102);
(ii) extracting all axial slices relevant to the L3 vertebrae (406) that is segmented; and
(iii) accurately segmenting, using the deep learning model (108), the right psoas muscle and the left psoas muscle (506) by analyzing the axial slices relevant to the L3 vertebrae (406).
3. The hernia image assist system (100) as claimed in claim 1, wherein the processor (106) identifies and segments, using the deep learning model (108), the hernia region (606) by analysing each axial slice of the abdominal region (306) on the CT image (102) at X-Y axis and across other axial slices at a Z-axis.
4. The hernia image assist system (100) as claimed in claim 3, wherein the processor (106) determines the hernia sac volume by
(i) segmenting, using the deep learning model (108), a hernia sac (706) from the hernia region on each axial slice of the CT image (102); and
(ii) computing a volume of the hernia sac by subsequently stacking the axial slices together.
5. The hernia image assist system (100) as claimed in claim 1, wherein the processor (106) determines the abdominal volume (806) by
(i) segmenting, using the deep learning model (108), the abdominal region (306) from the CT image (102) by analysing the sagittal slices of the CT image that are identified; and
(ii) computing the abdominal volume (806) of the abdominal region (306) by subsequently stacking the axial slices together.
6. The hernia image assist system (100) as claimed in claim 1, wherein the computed tomography (CT) image (102) is obtained directly from a picture archiving and communication system (PACS) system or a storage system.
7. The hernia image assist system (100) as claimed in claim 1, wherein the processor (106) determines a content of the hernia sac (706) using at least one of a deep learning model or one or more computer vision (CV) techniques.
8. The hernia image assist system (100) as claimed in claim 1, wherein the processor (106) determines the width of the hernia region (606), and the location of the hernia region (606) using one or more computer vision (CV) techniques by identifying a segment of the abdominal region (306) and specifying the location on the basis of the EHS classification.
9. A method of generating a report for assisting a user in hernia repairs, comprising:
detecting, using a deep learning model (108), an abdominal region (306) from a computed tomography (CT) image (102) by identifying Xiphoid and Pubic symphysis in sagittal slices of the CT image (102);
determining one or more edges of a right psoas muscle and a left psoas muscle (506) that is segmented using an edge detector and an area of each side of the right psoas muscle and the left psoas muscle (506);
characterized in that, determining, using the deep learning model (108), a sarcopenia score using a combination of Hounsfield unit values of the pixels bound within the right psoas muscle and the left psoas muscle (506) that is segmented;
determining a width of a hernia region (606) on the CT image (102) and identifying, using a European Hernia Society (EHS) classification, a location of the hernia region (606) on the CT image (102);
computing, using the deep learning model (108), a Tanaka index and a Sabbagh ratio using a hernia sac volume and an abdominal volume (806);
determining, using the deep learning model (108), a mesh indicator by (i) identifying a mesh (906) in the CT image (102) by analysing the axial slices of the CT image (102) and (ii) segmenting the mesh (906) by determining a location comprising a slice where it is inferred;
segmenting, using the deep learning model (108), a left rectus muscle and a right rectus muscle (1006) from the CT image (102) to determine at least one of a diastasis recti indicator, a diastasis width or a rectus abdominus width;
generating a report based on one or more parameters such as the sarcopenia score, the width of the hernia region, the location of the hernia region identified using the European Hernia Society (EHS) classification, the Tanaka Index, the Sabbagh ratio, the mesh indicator, the diastasis recti indicator, the diastasis width or the rectus abdominus width to assist a user in appropriate decision making during hernia repair.
10. The method as claimed in claim 9, wherein the right psoas muscle and the left psoas muscle (506) are segmented by (i) detecting and segmenting, using the deep learning model (108), an L3 vertebrae (406) from the abdominal region (306) that is detected by analysing the sagittal slices and coronal slices of the CT image (102); (ii) extracting all axial slices relevant to the L3 vertebrae (406) that is segmented; and (iii) accurately segmenting, using the deep learning model (108), the right psoas muscle and the left psoas muscle (506) by analyzing the axial slices relevant to the L3 vertebrae (406).
Dated this September 12th, 2023
Arjun Karthik Bala
(IN/PA 1021)
Agent for Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202241052643-STATEMENT OF UNDERTAKING (FORM 3) [15-09-2022(online)].pdf | 2022-09-15 |
| 2 | 202241052643-PROVISIONAL SPECIFICATION [15-09-2022(online)].pdf | 2022-09-15 |
| 3 | 202241052643-FORM FOR SMALL ENTITY(FORM-28) [15-09-2022(online)].pdf | 2022-09-15 |
| 4 | 202241052643-FORM FOR SMALL ENTITY [15-09-2022(online)].pdf | 2022-09-15 |
| 5 | 202241052643-FORM 1 [15-09-2022(online)].pdf | 2022-09-15 |
| 6 | 202241052643-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-09-2022(online)].pdf | 2022-09-15 |
| 7 | 202241052643-EVIDENCE FOR REGISTRATION UNDER SSI [15-09-2022(online)].pdf | 2022-09-15 |
| 8 | 202241052643-DRAWINGS [15-09-2022(online)].pdf | 2022-09-15 |
| 9 | 202241052643-DECLARATION OF INVENTORSHIP (FORM 5) [15-09-2022(online)].pdf | 2022-09-15 |
| 10 | 202241052643-DRAWING [15-09-2023(online)].pdf | 2023-09-15 |
| 11 | 202241052643-CORRESPONDENCE-OTHERS [15-09-2023(online)].pdf | 2023-09-15 |
| 12 | 202241052643-COMPLETE SPECIFICATION [15-09-2023(online)].pdf | 2023-09-15 |
| 13 | 202241052643-FORM-9 [01-03-2024(online)].pdf | 2024-03-01 |
| 14 | 202241052643-STARTUP [17-03-2024(online)].pdf | 2024-03-17 |
| 15 | 202241052643-FORM28 [17-03-2024(online)].pdf | 2024-03-17 |
| 16 | 202241052643-FORM 18A [17-03-2024(online)].pdf | 2024-03-17 |
| 17 | 202241052643-PA [25-03-2024(online)].pdf | 2024-03-25 |
| 18 | 202241052643-FORM28 [25-03-2024(online)].pdf | 2024-03-25 |
| 19 | 202241052643-ASSIGNMENT DOCUMENTS [25-03-2024(online)].pdf | 2024-03-25 |
| 20 | 202241052643-8(i)-Substitution-Change Of Applicant - Form 6 [25-03-2024(online)].pdf | 2024-03-25 |
| 21 | 202241052643-FER.pdf | 2024-09-25 |
| 22 | 202241052643-POA [12-02-2025(online)].pdf | 2025-02-12 |
| 23 | 202241052643-FORM 13 [12-02-2025(online)].pdf | 2025-02-12 |
| 24 | 202241052643-Proof of Right [14-02-2025(online)].pdf | 2025-02-14 |
| 25 | 202241052643-OTHERS [14-02-2025(online)].pdf | 2025-02-14 |
| 26 | 202241052643-FER_SER_REPLY [14-02-2025(online)].pdf | 2025-02-14 |
| 27 | 202241052643-CORRESPONDENCE [14-02-2025(online)].pdf | 2025-02-14 |
| 28 | 202241052643-COMPLETE SPECIFICATION [14-02-2025(online)].pdf | 2025-02-14 |
| 29 | 202241052643-CLAIMS [14-02-2025(online)].pdf | 2025-02-14 |
| 30 | 202241052643-US(14)-HearingNotice-(HearingDate-14-07-2025).pdf | 2025-06-27 |
| 31 | 202241052643-Correspondence to notify the Controller [04-07-2025(online)].pdf | 2025-07-04 |
| 32 | 202241052643-Correspondence to notify the Controller [11-07-2025(online)].pdf | 2025-07-11 |
| 33 | 202241052643-Annexure [11-07-2025(online)].pdf | 2025-07-11 |
| 34 | 202241052643-RELEVANT DOCUMENTS [19-07-2025(online)].pdf | 2025-07-19 |
| 35 | 202241052643-PETITION UNDER RULE 137 [19-07-2025(online)].pdf | 2025-07-19 |
| 36 | 202241052643-Written submissions and relevant documents [21-07-2025(online)].pdf | 2025-07-21 |
| 37 | 202241052643-PatentCertificate28-07-2025.pdf | 2025-07-28 |
| 38 | 202241052643-IntimationOfGrant28-07-2025.pdf | 2025-07-28 |
| 1 | SearchStrategyE_13-09-2024.pdf | |