Abstract: METHOD AND SYSTEM FOR WOUND MANAGEMENT. This disclosure relates generally to wound management and, more particularly, to wound management by estimating a wound risk score. Effective wound care is essential to prevent further health complications and promote healing. Wound care is usually managed in hospitals, where wound measurement is one of the important components for diagnosis and treatment of the wound. The existing state-of-the-art traditional visual inspection technique is subjective and error prone, while several techniques enabled by digitization using various deep-learning models rely heavily on the image quality, the dataset size available to learn the features, and experts' annotation. The disclosed method and system for wound management use a generative artificial intelligence (AI) segmentation technique-based segmentation model. The segmentation model is configured to detect the peripheral wound boundary of the wounds, followed by extracting a set of morphological features to finally estimate a wound risk score for wound management. [To be published with FIG.2]
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR WOUND MANAGEMENT
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to wound management and, more particularly, to a method and system for wound management by estimating a wound risk score.
BACKGROUND
Effective wound care is essential to prevent further complications, promote healing, and reduce the risk of infection and other health issues. Chronic wounds, particularly in older adults, patients with disabilities and diabetes, and those with pressure, venous, or diabetic foot ulcers, cause significant morbidity and mortality. Monitoring the progression of the wound is critical for the recovery of the wounded patient, and such monitoring involves repeated clinical visits and lab tests over days. Wound care is usually managed in hospitals and community care, where wound measurement is one of the important components and quantitative metrics like the wound boundary and morphological features are measured. The accuracy of wound measurement influences the diagnosis and treatment by healthcare professionals, as it is critical for doctors to determine the future treatment for their patients.
Currently, most healthcare professionals depend only on the traditional visual inspection technique, an imprecise manual-optical assessment of wounds, which is time-consuming and often inaccurate, thus causing negative impacts on patients such as infection risks, inaccurate measurements, and discomfort.
The traditional visual inspection technique is purely subjective and error prone, while several techniques enabled by digitization provide an appealing alternative. The digitization-based techniques involve various deep-learning models that have earned confidence; however, their accuracy primarily relies on the image quality, the dataset size available to learn the features, and experts' annotation.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for a wound management system is provided.
The system includes a memory storing instructions, one or more communication interfaces, and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to receive a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients, via the one or more hardware processors. The system is further configured to detect a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images, via the one or more hardware processors, to obtain a set of segmented masks, wherein the generative artificial intelligence (AI) segmentation comprises a segmentation model. The system is further configured to extract the noise free segmented masks from the set of segmented masks, via the one or more hardware processors, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques. The system is further configured to extract a set of morphological features from the set of noise free segmented masks, via the one or more hardware processors, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques, wherein the set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse. The system is further configured to estimate a wound risk score, via the one or more hardware processors, using the set of morphological features and the set of metadata of the plurality of wounded patients.
In another aspect, a method for a wound management system is provided. The method includes receiving a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients, via one or more hardware processors. The method further includes detecting a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images, via the one or more hardware processors, to obtain a set of segmented masks, wherein the generative artificial intelligence (AI) segmentation comprises a segmentation model. The method further includes extracting the noise free segmented masks from the set of segmented masks, via the one or more hardware processors, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques. The method further includes extracting a set of morphological features from the set of noise free segmented masks, via the one or more hardware processors, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques, wherein the set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse. The method further includes estimating a wound risk score, via the one or more hardware processors, using the set of morphological features and the set of metadata of the plurality of wounded patients.
In yet another aspect, a non-transitory computer readable medium for a wound management system is provided. The method includes receiving a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients, via one or more hardware processors. The method further includes detecting a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images, via the one or more hardware processors, to obtain a set of segmented masks, wherein the generative artificial intelligence (AI) segmentation comprises a segmentation model. The method further includes extracting the noise free segmented masks from the set of segmented masks, via the one or more hardware processors, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques. The method further includes extracting a set of morphological features from the set of noise free segmented masks, via the one or more hardware processors, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques, wherein the set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse. The method further includes estimating a wound risk score, via the one or more hardware processors, using the set of morphological features and the set of metadata of the plurality of wounded patients.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG.1 illustrates an exemplary block diagram of a system for wound management according to some embodiments of the present disclosure.
FIG.2 is a functional block diagram for the wound management system, according to some embodiments of the present disclosure.
FIG.3A and FIG.3B are a flow diagram illustrating a method for wound management, in accordance with some embodiments of the present disclosure.
FIG.4 illustrates a segmentation model for generative artificial intelligence (AI) segmentation, according to some embodiments of the present disclosure.
FIG.5 is a flow diagram illustrating a method for a generative artificial intelligence (AI) segmentation using a segmentation model in accordance with some embodiments of the present disclosure.
FIG.6 is a flow diagram illustrating a set of erosion-dilation techniques in accordance with some embodiments of the present disclosure.
FIG.7 is a flow diagram illustrating a set of morphological feature estimation techniques in accordance with some embodiments of the present disclosure.
FIG.8A through FIG.8I (collectively referred to as FIG. 8) illustrate a set of morphological features extracted in accordance with some embodiments of the present disclosure.
FIG.9A through FIG.9H (collectively referred to as FIG. 9) illustrate the peripheral wound boundary detected in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Referring now to the drawings, and more particularly to FIG. 1 through FIG.9, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG.1 is an exemplary block diagram of a system 100 for wound management in accordance with some embodiments of the present disclosure.
In an embodiment, the system 100 includes a processor(s) 104, communication interface device(s), alternatively referred to as input/output (I/O) interface(s) 106, and one or more data storage devices or a memory 102 operatively coupled to the processor(s) 104. The system 100 with one or more hardware processors is configured to execute functions of one or more functional blocks of the system 100.
Referring to the components of the system 100, in an embodiment, the processor(s) 104 can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, graphics processing units and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In an embodiment, the system 100 can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, workstations, mainframe computers, servers, a network cloud, and the like.
The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, a touch user interface (TUI) and the like, and can facilitate multiple communications within a wide variety of network (N/W) and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) 106 can include one or more ports for connecting a number of devices (nodes) of the system 100 to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
Further, the memory 102 may include a database 108 configured to include information regarding wound management. The memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. In an embodiment, the database 108 may be external (not shown) to the system 100 and coupled to the system via the I/O interface 106.
Functions of the components of system 100 are explained in conjunction with functional overview of the system 100 in FIG.2 and flow diagram of FIGS.3A to FIG.3B for wound management.
The system 100 supports various connectivity options such as BLUETOOTH®, USB, ZigBee, cellular services, and the like. The network environment enables connection of various components of the system 100 using any communication link including Internet, WAN, MAN, and so on. In an exemplary embodiment, the system 100 is implemented to operate as a stand-alone device. In another embodiment, the system 100 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 100 are described further in detail.
FIG.2 is a functional block diagram of the various modules of the system of FIG.1, in accordance with some embodiments of the present disclosure. As depicted in the architecture, FIG.2 illustrates the functions of the modules of the system 100 that include wound management.
As depicted in FIG.2, the functional system 200 of system 100 is configured for wound management. The system 200 is configured to receive a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients. The system 200 further comprises a peripheral wound boundary detector 202 configured for detecting a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images to obtain a set of segmented masks. The system 200 further comprises a noise free segmented masks extractor 204 configured for extracting the noise free segmented masks from the set of segmented masks, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques. The system 200 further comprises a set of morphological features extractor 206 configured for extracting a set of morphological features from the set of noise free segmented masks, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques. The system 200 further comprises a wound risk score extractor 208 configured for estimating a wound risk score using the set of morphological features and the set of metadata of the plurality of wounded patients. The system 200 further comprises a wound management module 210 configured for performing wound management for a wound of a wounded patient by (a) detecting the peripheral wound boundary, (b) extracting the set of morphological features and (c) estimating the wound risk score.
The various modules of the system 100 and the functional blocks in FIG.2 configured for wound management are implemented as at least one of a logically self-contained part of a software program, a self-contained hardware component, and/or a self-contained hardware component with a logically self-contained part of a software program embedded into each of the hardware components, that when executed perform the method described herein.
Functions of the components of the system 200 are explained in conjunction with the functional modules of the system 100 stored in the memory 102 and further explained in conjunction with the flow diagram of FIGS.3A-3B. FIGS.3A-3B, with reference to FIG.1, illustrate an exemplary flow diagram of a method 300 for wound management using the system 100 of FIG.1, according to an embodiment of the present disclosure.
The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 of FIG.1 for wound management, the modules 202-210 as depicted in FIG.2, and the flow diagrams as depicted in FIGS.3A-3B. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
At step 302 of the method 300, a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients is received.
In an embodiment, the plurality of wound images comprises (a) a plurality of images of at least one wound or (b) a plurality of images of multiple wounds.
In another embodiment, the set of metadata comprises a plurality of biological conditions, an age, and a plurality of activity information of each of the wounded patients from the plurality of wounded patients.
In yet another embodiment, the wound is captured as an image using a mobile camera. The captured wound images are uploaded to the cloud along with meta information such as patient name, patient id, wound location, time, and date stamp information.
In yet another embodiment, the camera device could be an iPad, a tablet, a webcam, or any other camera modality. The captured wound images, along with meta information such as patient name, patient id, wound location, time, and date stamp information, are uploaded to the cloud.
In another embodiment, the wound is captured as a video using a mobile camera, an iPad, or a tablet, such that the frame of the video having the best image quality in terms of sharpness, brightness, and contrast is extracted and uploaded to the cloud along with meta information such as patient name, patient id, wound location, time, and date stamp information.
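Frame selection from such a video can be automated. The following is a minimal Python sketch, assuming OpenCV is available; the concrete scoring of sharpness (variance of the Laplacian), brightness, and contrast is an illustrative assumption, as the disclosure does not fix specific quality metrics or thresholds.

```python
import cv2

def best_frame(video_path: str):
    """Pick the frame with the best sharpness/brightness/contrast score.

    Illustrative scoring only: sharpness is approximated by the variance
    of the Laplacian, contrast by the grayscale standard deviation, and
    exposure by the distance of the mean intensity from mid-gray.
    """
    cap = cv2.VideoCapture(video_path)
    best, best_score = None, float("-inf")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        contrast = float(gray.std())
        exposure = 1.0 - abs(float(gray.mean()) - 128.0) / 128.0
        score = sharpness * exposure + contrast
        if score > best_score:
            best, best_score = frame.copy(), score
    cap.release()
    return best  # the extracted frame to be uploaded to the cloud
```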
At step 304 of the method 300, a peripheral wound boundary is detected for the plurality of wound images in the peripheral wound boundary detector 202. The peripheral wound boundary is detected to obtain a set of segmented masks.
The peripheral wound boundary is detected based on a generative artificial intelligence (AI) segmentation of the plurality of wound images. The generative artificial intelligence (AI) segmentation comprises a segmentation model.
In an embodiment, the segmentation model is based on an unsupervised machine learning task that involves discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate output from new examples that plausibly could have been drawn from the original dataset. As illustrated in FIG.4, the segmentation model comprises a generator 402 and a discriminator 404.
The generator of the segmentation model is symmetric and consists of a U-Net architecture as shown in FIG.4. The generator of the segmentation model contains two major parts:
a) An encoder 406 that follows a general convolutional process and collects the context information at a bottleneck 408, and
b) A decoder 410 that consists of transposed 2D convolution layers which generate the mask output from the context stored in the bottleneck 408.
In an embodiment and an example scenario, the encoder 406 comprises Batch Normalization layers and Leaky ReLU activation layers after each 2D convolution, except at the bottleneck 408, where only a ReLU activation layer is used after the 2D convolution. In the decoder 410, Batch Normalization layers and ReLU activation layers are used after each transposed 2D convolution, and in the last stage, a Tanh activation layer is used after the transposed 2D convolution without Batch Normalization. Skip connections linking the corresponding encoder and decoder layers are utilized to improve the quality of the generated image.
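A minimal PyTorch sketch of such a symmetric U-Net generator is given below. The number of encoder/decoder stages and the channel widths are illustrative assumptions not fixed by the disclosure; the activation pattern (Leaky ReLU in the encoder, only ReLU at the bottleneck, ReLU in the decoder, and Tanh at the output) follows the description above.

```python
import torch
import torch.nn as nn

def down(c_in, c_out):
    # Encoder stage: 2D convolution -> Batch Normalization -> Leaky ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True))

def up(c_in, c_out):
    # Decoder stage: transposed 2D convolution -> Batch Normalization -> ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True))

class UNetGenerator(nn.Module):
    """Symmetric U-Net generator producing a single-channel wound mask."""

    def __init__(self):
        super().__init__()
        self.e1, self.e2, self.e3 = down(3, 64), down(64, 128), down(128, 256)
        # Bottleneck: only a ReLU activation after the 2D convolution.
        self.bottleneck = nn.Sequential(
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.d1, self.d2, self.d3 = up(512, 256), up(512, 128), up(256, 64)
        # Last stage: transposed convolution + Tanh, no Batch Normalization.
        self.out = nn.Sequential(
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        s1 = self.e1(x)          # H/2
        s2 = self.e2(s1)         # H/4
        s3 = self.e3(s2)         # H/8
        b = self.bottleneck(s3)  # H/16, context information
        d = self.d1(b)
        d = self.d2(torch.cat([d, s3], dim=1))      # skip connection
        d = self.d3(torch.cat([d, s2], dim=1))      # skip connection
        return self.out(torch.cat([d, s1], dim=1))  # mask in [-1, 1]
```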
The discriminator 404 of the segmentation model belongs to a generative adversarial network (GAN); however, it differs from a regular GAN discriminator in that it classifies each patch of the input image separately rather than the entire image. In an example scenario, six stacks of convolutional layers are used in the discriminator 404, wherein each stack includes Batch Normalization layers and Leaky ReLU activation layers, except the last stack, where a Sigmoid activation layer is used. The discriminator 404 takes an input from the source domain (original image (x)) and an image from the target domain (target mask (y) or generated mask G(x)) and predicts the likelihood of whether the image from the target domain is real or fake (a generated version of the original image). The discriminator 404 produces a matrix of values where each element corresponds to the respective patch of the input image. The patch output is a single scalar value computed as the matrix's average. The segmentation model produces high-quality images due to this fine-grained feedback from the discriminator.
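A corresponding patch-level discriminator can be sketched as follows; the channel widths are illustrative assumptions, while the six convolutional stacks, the Batch Normalization/Leaky ReLU pattern, the final Sigmoid, and the averaged patch output follow the description above.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Scores each patch of an (image, mask) pair and averages the
    resulting matrix of patch scores into a single scalar output."""

    def __init__(self, in_channels: int = 4):  # RGB image (3) + mask (1)
        super().__init__()
        layers, c = [], in_channels
        for c_out in (64, 128, 256, 512, 512):  # five convolutional stacks
            layers += [nn.Conv2d(c, c_out, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.LeakyReLU(0.2, inplace=True)]
            c = c_out
        # Sixth (last) stack: Sigmoid activation, no Batch Normalization.
        layers += [nn.Conv2d(c, 1, 4, stride=1, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, image, mask):
        # Each element of patch_map corresponds to one patch of the input.
        patch_map = self.net(torch.cat([image, mask], dim=1))
        return patch_map.mean(dim=(1, 2, 3))  # one scalar per sample
```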
In an embodiment, the detection of the peripheral wound boundary based on the generative artificial intelligence (AI) segmentation using the segmentation model is explained using method 500 of FIG.5, as described below:
At step 502, a set of fake masks is generated using the plurality of wound images.
In an embodiment, the set of fake masks is predicted using the generator based on the current generator weights and the plurality of wound images. Initially, the generator and the discriminator are initialized with a plurality of random weights.
At step 504, a plurality of discriminator weights is iteratively estimated based on a discriminator loss, which is a scaled version of the adversarial loss L_adversarial(G, D), represented as:
L_discriminator = α · L_adversarial(G, D), α = 0.5 (1)
where the adversarial loss L_adversarial(G, D) is given by:
L_adversarial(G, D) = E[log D(x, y)] + E[log(1 - D(x, G(x)))] (2)
wherein,
G – generator,
D – discriminator,
x – input image,
y – target mask,
G(x) – the fake mask generated by the generator G for the given input wound image x,
D(x, y) – the discriminator's output for the given input pair (x, y),
D(x, G(x)) – the discriminator's output given a fake mask G(x) generated by the generator and the input wound image x,
E – the expected value of the output of the discriminator D,
E[log D(x, y)] – real loss, and
E[log(1 - D(x, G(x)))] – fake loss.
The real loss is estimated based on the plurality of wound images and the set of target masks. The fake loss is estimated based on the plurality of wound images and the set of fake masks.
In an embodiment, the real loss is estimated based on the plurality of wound images and the set of target masks using the discriminator with the current discriminator weights. The real loss is the expected value of the discriminator output for the plurality of wound images and the set of target masks, and is represented by the first term of equation (2). In an embodiment, the fake loss is estimated based on the plurality of wound images and the set of fake masks using the discriminator with the current discriminator weights. The fake loss is the expected value of the discriminator output for the plurality of wound images and the set of fake masks, and is represented by the second term of equation (2).
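Under these definitions, the discriminator update of equations (1) and (2) can be sketched as below; binary cross-entropy against all-ones and all-zeros targets yields the negated real-loss and fake-loss expectations, which is the form minimized in practice.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake, alpha: float = 0.5):
    """Scaled adversarial loss of equations (1)-(2).

    d_real = D(x, y) and d_fake = D(x, G(x)) are Sigmoid outputs in (0, 1).
    real_loss corresponds to -E[log D(x, y)] and fake_loss to
    -E[log(1 - D(x, G(x)))], so minimizing their sum maximizes equation (2).
    """
    real_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake_loss = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return alpha * (real_loss + fake_loss)
```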
At step 506, a plurality of generator weights is iteratively estimated based on a generator loss. The generator loss is estimated based on the plurality of wound images, the set of target masks, the set of fake masks, and the plurality of discriminator weights.
In an embodiment, the generator loss is estimated based on the plurality of wound images, the set of target masks, the set of fake masks, and the plurality of discriminator weights as:
L_generator = λ_1 · L_adversarial(G, D) + λ_2 · L_reconstruction (3)
wherein,
L_adversarial(G, D) is computed as given in equation (2), and
L_reconstruction = Σ_(i=1)^n |y_i - G(x)_i| (4)
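A sketch of the generator objective follows; the weights λ_1 and λ_2 are illustrative assumptions (the disclosure does not fix their values), and the L1 term is computed as a mean, a scaled form of equation (4).

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake, fake_mask, target_mask,
                   lam1: float = 1.0, lam2: float = 100.0):
    """Equation (3): weighted adversarial + L1 reconstruction loss.

    The generator is rewarded when the discriminator scores its fake mask
    G(x) as real; the reconstruction term keeps G(x) close to the target
    mask y, pixel by pixel.
    """
    adversarial = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    reconstruction = torch.mean(torch.abs(target_mask - fake_mask))
    return lam1 * adversarial + lam2 * reconstruction
```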
At step 508, the peripheral wound boundary is detected by segmenting the plurality of wound images to obtain the set of segmented masks. The peripheral wound boundary is detected based on the plurality of generator weights using a generative adversarial network (GAN) technique.
In an embodiment, the set of segmented masks are predicted by the generator based on the plurality of the generator weights for the plurality of wound images.
Referring to FIGS.3A and 3B, at step 306 of the method 300, the set of noise free segmented masks is extracted from the set of segmented masks in the noise free segmented masks extractor 204. The extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques.
In an embodiment, the set of erosion-dilation techniques is explained using method 600 of FIG.6, as described below:
At step 602, a plurality of spurious noise is removed from outside the set of segmented masks based on an erosion technique and a dilation technique.
In an embodiment, the plurality of spurious noise outside the set of segmented masks is removed by applying the erosion followed by dilation operators on the set of segmented masks.
At step 604, a plurality of spurious noise is removed from within the set of segmented masks based on the dilation technique and the erosion technique.
In an embodiment, the plurality of spurious noise within the set of segmented masks is removed by applying the dilation followed by erosion operators on the set of segmented masks.
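In morphological-image-processing terms, these two steps are opening (erosion then dilation) and closing (dilation then erosion); a minimal sketch assuming OpenCV is given below, with the kernel size as an illustrative assumption.

```python
import cv2
import numpy as np

def denoise_mask(mask: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Post-process a binary segmented mask into a noise free mask.

    Opening removes spurious noise outside the wound region; closing
    removes spurious noise (holes) within it.
    """
    kernel = np.ones((ksize, ksize), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion, dilation
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # dilation, erosion
```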
Referring to FIG.3A and 3B, at step 308 of the method 300, a set of morphological features is extracted from the set of noise free segmented masks in the set of morphological features extractor 206. The extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques.
The set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse.
A set of shape parameters, namely an eccentricity e, a compactness Ω_c, and a rectangularity ψ_R, is computed for a wound region as:
e = (distance between foci)/(length of major axis), 0 ≤ e ≤ 1 (5)
Ω_c = 4πA/r^2 (6)
where A = area and r = perimeter,
ψ_R = (number of pixels in ROI)/(number of pixels in bounding box) (7)
In an embodiment, the set of morphological feature estimation techniques is explained using method 700 of FIG.7, as described below:
At step 702, the multiple wounds of the set of noise free segmented masks are labelled to obtain a set of labelled images. The multiple wounds of the set of noise free segmented masks are labelled based on a set of connected component analysis techniques.
In an embodiment, the set of connected component analysis techniques comprises assigning a label value to a region of the set of noise free segmented masks such that the pixels in the region have similar values and are spatially connected to the neighboring pixels. The process is repeated, with a different label value, for all the regions that represent different wounds in the set of noise free segmented masks.
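A minimal sketch of such labelling, assuming OpenCV's connected component analysis, is given below.

```python
import cv2
import numpy as np

def label_wounds(noise_free_mask: np.ndarray):
    """Assign a distinct label value to each spatially connected region.

    Expects a binary uint8 mask; returns the number of wounds and a label
    image in which 0 is background and 1..N index individual wounds.
    """
    num_labels, labels = cv2.connectedComponents(noise_free_mask.astype(np.uint8))
    return num_labels - 1, labels  # exclude the background label
```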
At step 704, a set of shape parameters is identified for the set of labelled images based on a set of shape identification techniques.
In an embodiment, in the set of shape identification techniques, the approximate shape of the wound is estimated from the set of noise free segmented masks using equations (5), (6), and (7). Ideally, for a circle, e = 0 and Ω_c = 1; for a rectangle, ψ_R = 1; and for an ellipse, e < 1. In practice, the ideal values may not be attained, so the shape of the wound is approximated using the above parameters as follows: whenever e ≈ 0 and Ω_c ≈ 1, the shape of the region is referred to as a circle; whenever e ≤ 0.8 and 0.75 < Ω_c < 0.9, the shape of the region is referred to as an ellipse; otherwise, or when ψ_R ≈ 1, the shape of the region is referred to as a rectangle, as expressed using equations (5), (6), and (7).
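These rules can be sketched as below, assuming scikit-image region properties; the tolerances chosen for the "approximately equal" tests, and the 4π normalization of the compactness, are assumptions made here for illustration.

```python
import numpy as np
from skimage.measure import label, regionprops

def classify_shape(region) -> str:
    """Approximate a labelled wound region's shape via equations (5)-(7)."""
    e = region.eccentricity                                        # equation (5)
    compactness = 4 * np.pi * region.area / region.perimeter ** 2  # equation (6)
    psi_r = region.extent  # equation (7): ROI pixels / bounding box pixels
    if e < 0.1 and compactness > 0.9:
        return "circle"
    if e <= 0.8 and 0.75 < compactness < 0.9:
        return "ellipse"
    return "rectangle"  # fallback, including the psi_r ~ 1 case

# Usage: one approximate shape per labelled wound region.
# shapes = [classify_shape(r) for r in regionprops(label(noise_free_mask))]
```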
At step 706, the set of shape parameters of the set of labelled images is measured based on a measuring technique.
In an embodiment, the measuring technique estimates the set of shape parameters for the identified shape, which is one of a rectangle, a circle, and an ellipse. The set of shape parameters is given below against the respective shapes of the set of labelled images, with a measurement sketch following the list:
Rectangle: the area, the perimeter, length, and breadth,
Circle: the area, the perimeter and diameter, and
Ellipse: the area, the perimeter, a major axis and a minor axis length.
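A sketch of this measurement step, again assuming scikit-image region properties, could read as follows; all values are in pixel units unless the image is calibrated.

```python
def measure_region(region, shape: str) -> dict:
    """Report the shape parameters listed above for one labelled region."""
    params = {"area": float(region.area), "perimeter": float(region.perimeter)}
    if shape == "rectangle":
        min_row, min_col, max_row, max_col = region.bbox
        params.update(length=max_row - min_row, breadth=max_col - min_col)
    elif shape == "circle":
        params["diameter"] = float(region.equivalent_diameter)
    else:  # ellipse
        params.update(major_axis=float(region.major_axis_length),
                      minor_axis=float(region.minor_axis_length))
    return params
```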
Referring to FIGS.3A and 3B, at step 310 of the method 300, a wound risk score is estimated in the wound risk score extractor 208. The wound risk score is estimated using the set of morphological features and the set of metadata of the plurality of wounded patients.
In an embodiment, the estimation of the wound risk score is expressed as shown below, with a code sketch following the definitions:
Risk Score = (A_d · (1/γ) · 1.01^β · Σ_(i=1)^3 α_i) / 8.1 (8)
wherein,
A_d is the set of morphological features;
α_i is the plurality of biological conditions of the wounded patient comprising a diabetes condition, a smoking condition, and a blood pressure condition;
β is the age of the wounded patient; and
γ is activity per day.
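A direct transcription of equation (8) is sketched below; how the biological conditions α_i are encoded numerically (for example, as 0/1 indicators) is an assumption, as the disclosure does not specify their encoding.

```python
def wound_risk_score(a_d: float, conditions, age: float, activity: float) -> float:
    """Equation (8): a_d is the morphological feature value A_d, conditions
    holds the three indicators alpha_i (diabetes, smoking, blood pressure),
    age is beta, and activity is gamma (activity per day)."""
    return a_d * (1.0 / activity) * (1.01 ** age) * sum(conditions) / 8.1
```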
The wound management is performed for a wound of a wounded patient by:
(a) detecting the peripheral wound boundary,
(b) extracting the set of morphological features and
(c) estimating the wound risk score.
In an embodiment, an image of the wound of a wounded patient can be captured and uploaded using a mobile camera. The wound is processed by detecting the peripheral wound boundary, extracting the set of morphological features, and estimating the wound risk score.
EXPERIMENTS:
Experiments have been conducted using the disclosed techniques for wound management on the MICCAI 2021 foot ulcer segmentation challenge dataset, which is an extended version of the chronic wound dataset.
The state-of-the-art techniques used for the performance comparison are given below with the deep learning architecture used in the respective techniques.
1. A technique based on the LinkNet architecture with an EfficientNetB1 (EffB1) backbone is used for the automatic foot ulcer segmentation and is referred to as LinkNet-EffB1 in Table 1;
2. A technique based on the U-Net architecture with an EfficientNetB2 (EffB2) backbone is used for the automatic foot ulcer segmentation and is referred to as U-Net-EffB2 in Table 1;
3. A technique based on the ensemble of the above two techniques, LinkNet-EffB1 and U-Net-EffB2, is referred to as Ensemble of EfficientNets in Table 1;
4. A technique based on the conditional GAN (cGAN) architecture is used for simultaneous wound border segmentation and tissue classification (BSTC) and is referred to as BSTC-cGAN in Table 1;
5. A technique based on the DeepLabV3+ architecture is used for wound segmentation and is referred to as DeepLabV3+ in Table 1;
6. A technique based on the DeepLabV3+ architecture along with a squeeze-and-excite network is used for wound segmentation and is referred to as DeepLabV3+SE in Table 1.
Technique                        Precision [%]   Recall [%]   Dice [%]   IoU [%]
AFSegGAN (disclosed technique)   94.04           94.55        93.11      99.07
LinkNet-EffB1                    92.88           91.33        92.09      85.35
U-Net-EffB2                      92.23           91.57        91.90      85.01
Ensemble of EfficientNets        91.55           86.22        88.80      -
BSTC-cGAN                        -               -            90         -
DeepLabV3+                       96.4            87.6         91.9       92.4
DeepLabV3+SE                     96              88.3         92.3       92.4
Table 1. Comparison of the proposed model with other state-of-the-art models on the MICCAI 2021 foot ulcer segmentation challenge dataset.
The border segmentation and tissue classification using GAN (BSTC-cGAN) utilises both the original wound image and the latent space as the input to the generator, unlike the disclosed technique, which uses only the original wound image as input to the generator and introduces new loss functions for both the generator and the discriminator. Hence, the disclosed technique has improved the dice score of BSTC-cGAN from 90% to 93.11%, an increase of 3.45%.
The comparison between the disclosed technique (AFSegGAN) and the other state-of-the-art methods, LinkNet-EffB1, U-Net-EffB2, DeepLabV3+, and DeepLabV3+SE, indicates that the disclosed technique performs significantly better in terms of dice score by 1.1%, 1.3%, 1.3%, and 0.9%, respectively, and in terms of IoU score by 16%, 16.5%, 7.6%, and 7.2%, respectively. Similarly, it can be inferred that AFSegGAN (the disclosed technique) outperforms the other state-of-the-art methods, LinkNet-EffB1, U-Net-EffB2, DeepLabV3+, and DeepLabV3+SE, by 3.5%, 3.2%, 7.9%, and 7.1%, respectively, in terms of Recall. Also, AFSegGAN (the disclosed technique) performs better than the LinkNet-EffB1 and U-Net-EffB2 methods by 1.2% and 1.9%, respectively, and poorer than DeepLabV3+ and DeepLabV3+SE by 2.4% and 2.0%, respectively, in terms of Precision. As the other considered state-of-the-art models do not use the GAN technique, their performance is inferior to that of the disclosed technique in terms of most of the metrics.
A few examples from the set of segmented masks are used to obtain the noise free segmented masks. Further, the noise free segmented masks are used to estimate the morphological features, and the identified wound boundaries are laid over the original images. The resultant output shows the wound images in FIG. 8A, 8D, and 8G; the segmented masks obtained from AFSegGAN (the disclosed technique) in FIG. 8B, 8E, and 8H; and the wound boundaries and morphological features laid over the original wound images in FIG. 8C, 8F, and 8I. FIG. 8C depicts the wound shape as an ellipse, which can be correlated to the optimal shape of the bandage used to cover this wound. In addition, the wound's area, perimeter, and ellipse parameters, such as the major-axis and minor-axis lengths, are also estimated during the post-processing and are overlaid on the wound image (FIG. 8C). Further, as shown in FIG. 8F, the wound shape is estimated as a rectangle; even though the wound ROI shape is not precisely a rectangle, the physician would recommend using a rectangular bandage or wound dressing. For a rectangular shape, the width and height are estimated and overlaid on the wound image, along with the area and perimeter. Similarly, in FIG. 8I, as the wound shape is circular, the diameter, area, and perimeter are estimated and overlaid on the wound image.
The AFSegGAN (disclosed technique) performs well even for images with low contrast, low resolution, and very small wounds, where the manual annotation by the expert was complex. In these scenarios, as the ground truth labels are not annotated, the dice score for those images will be near zero, falsely indicating that the model under-performs. Two such exceptional cases where the ground truth of the wound is not annotated, due to i) poor image quality (low contrast) and ii) multiple wounds of slim sizes, are depicted in FIG. 9. These worst-case scenarios may affect the model performance, as various factors affect the image-capturing process in clinical laboratories [13]. However, the disclosed model segmented those wound regions as shown in FIG. 9C and 9G, and finally estimated the morphological features as shown in FIG. 9D and FIG. 9H. Here, FIG. 9A and 9E represent the input images, FIG. 9B and 9F indicate the ground truth labels with missing annotations, the output of the AFSegGAN (disclosed technique) is shown in FIG. 9C and 9G, and the final output after morphological feature extraction is shown in FIG. 9D and 9H. In particular, FIG. 9D and 9H, where the multiple wound regions are marked with different shades, are clear evidence that the model surpasses the other techniques.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC, GPU and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
Claims:
We Claim:
1. A processor implemented method (300), comprising:
receiving (302) a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients, via one or more hardware processors, wherein:
the plurality of wound images comprises (a) a plurality of images of at least one wound or (b) a plurality of images of multiple wounds; and
the set of metadata comprises a plurality of biological conditions, an age, and a plurality of activity information of each of the wounded patients from the plurality of wounded patients;
detecting a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images, via the one or more hardware processors, to obtain a set of segmented masks, wherein the generative artificial intelligence (AI) segmentation comprises a segmentation model (304);
extracting the noise free segmented masks from the set of segmented masks, via the one or more hardware processors, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques (306);
extracting a set of morphological features from the set of noise free segmented masks, via the one or more hardware processors, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques, wherein the set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse (308); and
estimating a wound risk score, via the one or more hardware processors, using the set of morphological features and the set of metadata of the plurality of wounded patients (310).
2. The processor implemented method (300) as claimed in claim 1, wherein a wound management is performed for a wound of a wounded patient by (a) detecting the peripheral wound boundary, (b) extracting the set of morphological features and (c) estimating the wound risk score.
3. The processor implemented method (300) as claimed in claim 1, wherein the generative artificial intelligence (AI) segmentation using the segmentation model (500) comprises:
generating a set of fake masks using the plurality of wound images (502);
iteratively estimating a plurality of discriminator weights based on a real loss and a fake loss, wherein the real loss is estimated based on the plurality of wound images and the set of target masks, and fake loss is estimated based on the plurality of wound images and the set of fake masks (504);
iteratively estimating a plurality of generator weights based on a generator loss, wherein the generator loss is estimated based on the plurality of wound images, the set of target masks, the set of fake masks, and the plurality of discriminator weights (506); and
detecting the peripheral wound boundary by segmenting the plurality of wound images to obtain the set of segmented masks based on the plurality of discriminator weights and the plurality of generator weights using a generative adversarial network (GAN) technique (508).
4. The processor implemented method (300) as claimed in claim 1, wherein the set of erosion-dilation techniques (600) comprises:
removing a plurality of spurious noise from outside the set of segmented masks based on an erosion technique and a dilation technique (602); and
removing a plurality of spurious noise from within the set of segmented masks based on the dilation technique and the erosion technique (604).
5. The processor implemented method (300) as claimed in claim 1, wherein the set of morphological feature estimation techniques (700) comprises:
labelling the multiple wounds of the set of noise free segmented masks to obtain a set of labelled images based on a set of connected component analysis techniques (702);
identifying a set of shape parameters for the set of labelled images based on a set of shape identification technique (704); and
measuring of the set of shape parameters of the set of labelled images based on a measuring technique (706).
6. The processor implemented method (300) as claimed in claim 1, wherein the estimation of the wound risk score is expressed as shown below:
Risk Score = (A_d · (1/γ) · 1.01^β · Σ_(i=1)^3 α_i) / 8.1
wherein,
A_d is the set of morphological features;
α_i is the plurality of biological conditions of the wounded patient comprising a diabetes condition, a smoking condition, and a blood pressure condition;
β is the age of the wounded patient; and
γ is activity per day.
7. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive a plurality of wound images, a set of target masks and a set of metadata associated with a plurality of wounded patients, via one or more hardware processors, wherein:
the plurality of wound images comprises (a) a plurality of images of at least one wound or (b) a plurality of images of multiple wounds; and
the set of metadata comprises a plurality of biological conditions, an age, and a plurality of activity information of each of the wounded patients from the plurality of wounded patients;
detect a peripheral wound boundary for the plurality of wound images based on a generative artificial intelligence (AI) segmentation of the plurality of wound images, via the one or more hardware processors, to obtain a set of segmented masks, wherein the generative artificial intelligence (AI) segmentation comprises a segmentation model;
extract the noise free segmented masks from the set of segmented masks, via the one or more hardware processors, wherein the extraction comprises post-processing of the set of segmented masks to obtain a set of noise free segmented masks using a set of erosion-dilation techniques;
extract a set of morphological features from the set of noise free segmented masks, via the one or more hardware processors, wherein the extraction comprises estimating a set of morphological features for the set of noise-free segmented masks using a set of morphological feature estimation techniques, wherein the set of morphological features comprises one of an area, a perimeter, a circle diameter, a length and a breadth of a rectangular area, a major axis and a minor axis length of an ellipse; and
estimate a wound risk score, via the one or more hardware processors, using the set of morphological features and the set of metadata of the plurality of wounded patients.
8. The system (100) as claimed in claim 7, wherein a wound management is performed for a wound of a wounded patient by (a) detecting the peripheral wound boundary, (b) extracting the set of morphological features and (c) estimating the wound risk score.
9. The system (100) as claimed in claim 7, wherein the generative artificial intelligence (AI) segmentation using the segmentation model comprises:
generating a set of fake masks using the plurality of wound images;
iteratively estimating a plurality of discriminator weights based on a real loss and a fake loss, wherein the real loss is estimated based on the plurality of wound images and the set of target masks, and fake loss is estimated based on the plurality of wound images and the set of fake masks;
iteratively estimating a plurality of generator weights based on a generator loss, wherein the generator loss is estimated based on the plurality of wound images, the set of target masks, the set of fake masks, and the plurality of discriminator weights; and
detecting the peripheral wound boundary by segmenting the plurality of wound images to obtain the set of segmented masks based on the plurality of discriminator weights and the plurality of generator weights using a generative adversarial network (GAN) technique.
10. The system (100) as claimed in claim 7, wherein the set of erosion-dilation techniques comprises:
removing a plurality of spurious noise from outside the set of segmented masks based on an erosion technique and a dilation technique; and
removing a plurality of spurious noise from within the set of segmented masks based on the dilation technique and the erosion technique.
11. The system (100) as claimed in claim 7, wherein the set of morphological feature estimation techniques comprises:
labelling the multiple wounds of the set of noise free segmented masks to obtain a set of labelled images based on a set of connected component analysis techniques;
identifying a set of shape parameters for the set of labelled images based on a set of shape identification technique; and
measuring of the set of shape parameters of the set of labelled images based on a measuring technique.
12. The system (100) as claimed in claim 7, wherein the estimation of the wound risk score is expressed as shown below:
Risk Score = (A_d · (1/γ) · 1.01^β · Σ_(i=1)^3 α_i) / 8.1
wherein,
A_d is the set of morphological features;
α_i is the plurality of biological conditions of the wounded patient comprising a diabetes condition, a smoking condition, and a blood pressure condition;
β is the age of the wounded patient; and
γ is activity per day.
Dated this 17th Day of October 2023
Tata Consultancy Services Limited
By their Agent & Attorney
(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086