Abstract: Disclosed herein is a user-friendly and low-cost system for detecting emphysema using an advanced neural network architecture-based chest X-ray image analysis technique. The system comprises a user interface (100), and a server (200) communicatively linked with the user interface (100) via a wireless network (300). The user interface (100) has an image uploading option (102) to upload chest X-ray images, and a result display option (104) to display whether the image has any indication of emphysema. The server (200) has embedded therein an overlap region detection module (202) configured to: segment X-ray image pixels into different anatomical classes as representatives of diaphragm and eight (1st -8th) pairs of anterior ribs counted from top to bottom, and check positioning of the diaphragm class with respect to the 6th pair rib class; a blunt region detection module (204) configured to: locate three key points at lower corner edges of lungs in the X-ray image, and compute costophrenic angles based on the key points; a decision module (206) configured to: determine the emphysema condition by validating the positioning of the diaphragm class with respect to the 6th pair rib class, and the blunting of costophrenic angles as per a set of predefined parameters. Fig. 1
Description:
FIELD OF THE INVENTION
The present invention broadly relates to a health diagnostic system. Particularly, the present invention relates to a user-friendly and cost-effective system for determining emphysema condition in Chronic Obstructive Pulmonary Disease (COPD) patients by analysing chest X-ray images through novel software codes and advanced deep learning techniques.
BACKGROUND OF THE INVENTION
Emphysema is a type of chronic obstructive pulmonary disease (COPD) characterized by the destruction of the air sacs in the lungs, leading to difficulty breathing. It is primarily caused by long-term exposure to irritants such as cigarette smoke, air pollution, or workplace chemicals. The prevalence of emphysema varies depending on factors such as geographic location, smoking rates, and environmental factors. In the United States, it is estimated that around 3.5 million people have been diagnosed with emphysema. However, many more individuals may have undiagnosed or mild forms of the condition.
According to the World Health Organization (WHO), COPD is the third leading cause of death worldwide, with around 3 million deaths attributed to it annually. Emphysema contributes significantly to this burden. Detecting emphysema early is crucial as it allows for timely intervention, which can slow disease progression and improve prognosis. Early diagnosis enables the implementation of management strategies to prevent complications such as pneumothorax, respiratory failure, and heart problems, thus enhancing overall quality of life.
Doctors generally advise COPD patients to undergo a wide range of diagnostic tests such as chest X-ray, CT scan, magnetic resonance imaging (MRI), positron emission tomography (PET), pulmonary function testing, arterial blood gas (ABG), electrocardiogram (EKG), blood tests, genetic tests etc., which help them understand emphysema status and associated chest/lung abnormalities. Among all these tests, the X-ray appears to be affordable, easily accessible, and involves lesser radiation exposure than other imaging methods. With the advancement of computer science and artificial intelligence, many researchers have explored X-ray image analysis techniques in the field of COPD/lung diagnosis. However, all the existing X-ray image analysis techniques have several limitations in terms of real-time result delivery, implementation on low-end computing devices, computing speed, computing resource utilization, type of disease detection, diagnosis accuracy etc. Therefore, there is felt a need for developing a cost-effective, user-friendly, reliable and advanced diagnostic approach to diagnose COPD patients, particularly to check whether emphysema is present or absent in the patient's lungs.
One reference may be made to CA3140122A1 that discloses a system for identifying wide range of anomalies in chest X-ray image in posteroanterior orientation based on neural networks, wherein the system uses heat map technique and combination of CNN and FCNN to detect specific graphic patterns associated with different pathologies, such as atelectasis, cardiomegaly, pleural effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and diaphragmatic hernia.
Another reference may be made to Indian patent application number 202223019813 that discloses a deep learning method to diagnose severity level of seventeen lung diseases using either X-ray image or CT scan images, wherein the method deploys combinative architecture of XChes13Net2.0 and YOLOV5, and uses RGB coloured heatmap and bounding box techniques.
One more reference may be made to US10691980B1 that discloses a system and method for multi-abnormality classification based on chest X-ray images, wherein CNN and DBN models are deployed to predict abnormality classification scores for a wide range of abnormalities such as granuloma, infiltrate, nodule, scarring, effusion, atelectasis, bone/soft tissue lesion, fibrosis, cardiac abnormality, mass, pneumothorax, COPD, consolidation, pleural thickening, cardiomegaly, emphysema, edema, pneumonia, hilar abnormality, or hernias.
All the existing deep learning models as employed in detection of lung diseases are primarily focused on heatmap techniques to find an overly broad range of abnormality patterns; therefore, a further precise analysis is required on chest X-ray images to diagnose a specific complex disease, particularly emphysema condition. Moreover, there are some critical anatomical characteristics such as positioning of specific ribs and diaphragm, and blunting of costophrenic angles, which need to be examined very cautiously and meticulously to show results of emphysema status on a real-time computing platform. Therefore, it is required to devise a user-friendly and cost-effective system for determining emphysema condition of patients by analysing chest X-ray images through novel software codes and advanced deep learning techniques, which includes all the advantages of the conventional/existing techniques/methodologies and overcomes the deficiencies of such techniques/methodologies.
OBJECT OF THE INVENTION
It is an object of the present invention to develop an artificial intelligence based online platform for qualitative and quantitative analysis of frontal (posteroanterior) chest X-ray images.
It is another object of the present invention to examine positioning of specific ribs and diaphragm, and costophrenic angle blunting in the chest X-ray images to detect presence/absence of emphysema in patient lungs.
It is one more object of the present invention to develop novel software codes and advanced deep learning models for precise diagnosis of emphysema condition.
It is a further object of the present invention to devise a user-friendly and cost-effective computing system for determining emphysema condition of patients by analysing chest X-ray images.
SUMMARY OF THE INVENTION
In one aspect, the present invention provides a user-friendly and low-cost system for determining emphysema condition using advanced neural network architecture-based chest X-ray image analysis technique. The system comprises a user interface, and a server communicatively linked with the user interface via a wireless network. The user interface has an image uploading option to upload chest X-ray images, and a result display option to display in real-time whether the image has any indication of emphysema. The server has embedded therein an overlap region detection module, a blunt region detection module, and a decision module. The overlap region detection module is configured to: segment X-ray image pixels into different anatomical classes as representatives of diaphragm and eight (1st -8th) pairs of anterior ribs counted from top to bottom, and check positioning of the diaphragm class with respect to the 6th pair rib class. The blunt region detection module is configured to: locate three key points at lower corner edges of lungs in the X-ray image, and compute costophrenic angles based on the key points. The decision module is configured: to determine the emphysema condition by validating the positioning of the diaphragm class with respect to the 6th pair rib class, and the blunting of costophrenic angles as per a set of predefined parameters.
Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description, which delineates the present invention in different embodiments.
BRIEF DESCRIPTION OF DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying figures.
Fig. 1 is a schematic diagram illustrating hardware components of the system for determining emphysema condition, in accordance with an embodiment of the present invention.
Fig. 2 illustrates segmentation of anterior ribs and diaphragm for checking overlap pixel coordinates in frontal (posteroanterior) chest X-ray images, in accordance with an embodiment of the present invention.
Fig. 3 illustrates generation of key points for costophrenic angle measurement (ϴ) in frontal (posteroanterior) chest X-ray images, in accordance with an embodiment of the present invention.
Fig. 4 illustrates X-ray image analysis operational steps for determining emphysema condition, in accordance with an embodiment of the present invention.
Fig. 5 illustrates Yolo-pose base model architecture, in accordance with an exemplary embodiment of the present invention.
Fig. 6 illustrates U-net neural network architecture, in accordance with an exemplary embodiment of the present invention.
List of reference numerals
100 user interface
102 image upload option
104 result display option
200 server
202 overlap region detection module
204 blunt region detection module
206 decision module
300 wireless network
C1 first operational (software) code
C2 second operational (software) code
DETAILED DESCRIPTION OF THE INVENTION
Various embodiments described herein are intended only for illustrative purposes and subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
The use of terms “comprises/comprising”, ‘includes/including’ or “having/have/has” and variations thereof herein are meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the terms, “an” and “a” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
According to an embodiment of the present invention, as shown in Fig. 1, the system for determining emphysema condition/status is depicted. The system comprises a user interface (100), and a server (200) communicatively linked with the user interface (100) via a wireless network (300). The user interface (100) has an image uploading option (102) to upload chest X-ray images, and a result display option (104) to display whether the image has any indication of emphysema or not. The user interface (100) is a web-based or smartphone-implementable application interface. The server (200) includes a memory and an X-ray image processing unit, where the memory stores a set of processor executable codes/modules (software/algorithm) to carry out the emphysema diagnosis operation.
According to an embodiment of the present invention, the server (200) has embedded therein an overlap region detection module (202), a blunt region detection module (204), and a decision module (206). The uploaded chest X-ray images are fed into the modules of the server for analysis. Before initiating the image analysis, the uploaded chest X-ray images undergo a pre-processing operation, where the images are resized to defined dimensions using computer vision interpolation techniques for uniformity. The overlap region detection module (202) detects whether there is any image pixel overlapping between ribs and diaphragm (as shown in Fig. 2). The blunt region detection module (204) measures the blunting level of costophrenic angles (i.e., regions where the diaphragm meets the ribs) (as shown in Fig. 3). The decision module (206) compares/validates outputs of the overlap region detection module (202) and the blunt region detection module (204) with a set of predefined parameters to derive the emphysema result.
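The pre-processing resize step described above can be sketched with the Python PIL (Pillow) library, which the description names for this purpose. This is a minimal illustration; the function name, the grayscale conversion, and the flexible input type are assumptions, not the actual module code.

```python
from PIL import Image

def preprocess_xray(img, size=(512, 512)):
    """Resize an uploaded chest X-ray to uniform dimensions using
    Lanczos resampling to maintain image quality.

    Accepts a file path or a PIL Image; the grayscale conversion
    is an illustrative assumption for single-channel X-ray input.
    """
    if isinstance(img, str):
        img = Image.open(img)
    return img.convert("L").resize(size, Image.LANCZOS)
```

The resized image is then passed to the overlap region detection module (202) and the blunt region detection module (204) for analysis.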
According to an embodiment of the present invention, the overlap region detection module (202) is configured to: segment X-ray image pixels into different anatomical classes as representatives of diaphragm and eight (1st -8th) pairs of anterior ribs counted from top to bottom, and check positioning of the diaphragm class with respect to the 6th pair rib class (i.e., if any of the image pixels fall under both the diaphragm class and the 6th pair rib class). The overlap region detection module (202) deploys a U-net neural network model for segmenting the images, and executes a first function (software) code (C1) to check if there is any intersection between the rib pixels and the diaphragm pixels that is indicative of the overlap region.
Referring to Fig. 2, eight pairs of anterior ribs (out of twelve pairs of ribs) and the diaphragm are labelled from top to bottom, which are taken into consideration for emphysema diagnosis. First, each uploaded chest X-ray image is resized to dimensions of 512 x 512 using the Python PIL library, employing the Lanczos resampling technique to maintain image quality. Multi-class segmentation is performed by the U-net model on the resized image to segment all the pixels under eight pairs of rib (1st, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th) classes and the diaphragm class. Further, all classes are assigned different colours, allowing for identification of pixel coordinates for each class. For detecting emphysema condition, the focus is given to the classes corresponding to the 6th pair of ribs and the diaphragm. The image pixels in each column of every row are checked in a top-to-bottom approach to find if the 6th anterior rib pixel coordinates intersect with the diaphragm pixel coordinates, which is indicative of an overlap region (in other words, the diaphragm touches the 6th anterior rib). If no pixel intersection is observed, it indicates absence of overlapping between the diaphragm and the 6th anterior rib (in other words, the diaphragm lies below the 6th anterior rib, i.e., hyperinflation of lungs).
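The row-wise intersection check described above can be sketched as follows. This is a minimal illustration assuming per-class boolean masks produced by the multi-label segmentation; the function and variable names are hypothetical and do not reproduce the first function code (C1) itself.

```python
import numpy as np

def detect_overlap(rib6_mask: np.ndarray, diaphragm_mask: np.ndarray) -> bool:
    """Check whether any pixel falls under both the 6th pair rib class
    and the diaphragm class (i.e., an overlap region exists).

    Both inputs are boolean arrays of identical shape (H, W). Rows are
    scanned top to bottom, mirroring the described approach; the scan
    stops at the first row containing an intersecting pixel.
    """
    for row in range(rib6_mask.shape[0]):              # top-to-bottom scan
        if np.any(rib6_mask[row] & diaphragm_mask[row]):
            return True                                # diaphragm touches 6th rib
    return False                                       # diaphragm lies below 6th rib
```

A return value of False (no pixel intersection) corresponds to the hyperinflation indicator, where the diaphragm lies below the 6th anterior rib.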
According to an embodiment of the present invention, the blunt region detection module (204) is configured to: locate three key points at lower corner edges of lungs in the X-ray image, and assess blunting of costophrenic angles based on the key points. The blunt region detection module (204) deploys a YOLOv8-pose model for locating the key points, and executes a second function (software) code (C2) to compute the costophrenic angles using coordinates of the key points. The costophrenic angle refers to the angle formed by the lung and the diaphragm. The costophrenic angle is normally acute and sharp, with a value ranging from 30 to 45 degrees. Values greater than 45 degrees indicate blunting of the costophrenic angle, which may be due to a number of underlying causes such as pleural effusion, diaphragm flattening etc., while values less than 30 degrees indicate even more sharpening of the angle, or the deep sulcus sign, which is mostly seen in pneumothorax or lung collapse cases.
Referring to Fig. 3, the lower corner edges of both lung regions appearing in the X-ray images are taken into consideration for analysis. The pose model generates three key points (1, 2, 3) on the lower corner edges of the lungs (as shown in Fig. 3a and 3b). The coordinates of these key points, A(x1, y1), B(x2, y2), and C(x3, y3), are derived from the image pixels (as shown in Fig. 3c). Then, the costophrenic angles are computed using equation 1.
cos ϴ = (BA · BC) / (|BA| |BC|)
BA = A − B = (x1 − x2, y1 − y2)
BC = C − B = (x3 − x2, y3 − y2)
|BA| = √((x1 − x2)² + (y1 − y2)²)
|BC| = √((x3 − x2)² + (y3 − y2)²)
ϴ = cos⁻¹((BA · BC) / (|BA| |BC|))          (equation 1)
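Equation 1 can be sketched in Python as follows. This is an illustrative implementation of the vector-angle formula, not the actual second function code (C2); the function name and the clamping step are assumptions.

```python
import math

def costophrenic_angle(A, B, C):
    """Compute the angle at vertex B (in degrees) from three key points
    A(x1, y1), B(x2, y2), C(x3, y3), per equation 1:
    theta = arccos((BA . BC) / (|BA| |BC|)).
    """
    bax, bay = A[0] - B[0], A[1] - B[1]       # vector BA = A - B
    bcx, bcy = C[0] - B[0], C[1] - B[1]       # vector BC = C - B
    dot = bax * bcx + bay * bcy               # BA . BC
    mag = math.hypot(bax, bay) * math.hypot(bcx, bcy)  # |BA| |BC|
    cos_t = max(-1.0, min(1.0, dot / mag))    # clamp for numerical safety
    return math.degrees(math.acos(cos_t))
```

For example, key points forming a right angle at B yield 90 degrees, while an angle above 45 degrees would be flagged as blunting.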
According to an embodiment of the present invention, the emphysema condition is characterized by hyperinflation of lungs (where the diaphragm lies below the 6th anterior rib, i.e., no pixel intersection/overlapping is observed) with or without diaphragm flattening (where the costophrenic angle > 45 degrees, i.e., blunting of the costophrenic angle). The presence or absence of the emphysema condition in the uploaded chest X-ray images is finally determined by the decision module (206) based on the following predefined conditions/parameters as shown in Table 1.
Table 1
| Outputs of overlap / blunt region detection modules | Chest anatomy | Emphysema present | Emphysema absent |
|---|---|---|---|
| No pixel intersection/overlapping between 6th anterior rib and diaphragm is observed; costophrenic angle is less than or equal to 45 degrees | Diaphragm lies below 6th anterior rib without costophrenic angle blunting | ✔ | |
| No pixel intersection/overlapping between 6th anterior rib and diaphragm is observed; costophrenic angle is greater than 45 degrees | Diaphragm lies below 6th anterior rib with costophrenic angle blunting | ✔ | |
| Pixel intersection/overlapping between 6th anterior rib and diaphragm is observed; costophrenic angle is less than or equal to 45 degrees | Diaphragm touches 6th anterior rib without costophrenic angle blunting | | ✔ |
| Pixel intersection/overlapping between 6th anterior rib and diaphragm is observed; costophrenic angle is greater than 45 degrees | Diaphragm touches 6th anterior rib with costophrenic angle blunting | | ✔ |
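The rules of Table 1 can be sketched as a simple decision function. This is an illustrative reading of the table, not the actual decision module code; the function name is hypothetical and the 45-degree blunting threshold is taken from the description.

```python
def emphysema_decision(overlap: bool, angle_deg: float):
    """Apply the Table 1 rules: emphysema is present when the diaphragm
    lies below the 6th anterior rib (no pixel overlap), with or without
    costophrenic angle blunting (angle > 45 degrees).
    """
    blunting = angle_deg > 45.0
    anatomy = ("Diaphragm touches 6th anterior rib" if overlap
               else "Diaphragm lies below 6th anterior rib")
    anatomy += (" with costophrenic angle blunting" if blunting
                else " without costophrenic angle blunting")
    status = "Emphysema Absent" if overlap else "Emphysema Present"
    return status, anatomy
```

Note that, per Table 1, the presence/absence outcome turns on the overlap result alone; the blunting finding qualifies the anatomical description.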
According to an embodiment of the present invention, as shown in Fig. 4, the chest X-ray image analysis operation for determining emphysema condition is depicted. The image analysis operation includes steps of: uploading (S1) frontal (posteroanterior) chest X-ray images into a server through a user interface communicatively linked thereto; resizing (S2) the images into defined dimensions; segmenting (S3) image pixels into different anatomical classes as representatives of diaphragm and eight (1st -8th) pairs of anterior ribs counted from top to bottom; checking (S4) if any of the image pixels fall under both the diaphragm class and the 6th pair rib class; locating/generating (S5) three key points at lower corner edges of lungs in the image; computing (S6) costophrenic angles using coordinates of the key points; and displaying (S7) in the user interface presence/absence of emphysema based on overlapping of 6th rib pixel coordinates with diaphragm pixel coordinates, and costophrenic angle blunting.
According to an embodiment of the present invention, as shown in Fig. 5, the YOLO-pose base model architecture is depicted. Pose estimation is a task that involves identifying the location of specific points in an image. For training purposes, the key points are marked in the target lower corner edge regions of lungs in the X-ray images of the dataset. The YOLOv8-pose model is trained on such a dataset to generate/locate/annotate three key points and find the corresponding coordinates to be used for the costophrenic angle computation using equation 1. It is observed that the model detects the desired key points very accurately, which helps in achieving reliable diagnosis results.
According to an embodiment of the present invention, as shown in Fig. 6, the U-Net architecture comprises two major paths: the contractive path on the left and the expansive path on the right. The contractive path, featuring encoders, captures contextual information through convolution layers, while the expansive path, with decoders, upsamples the feature maps to preserve spatial resolution. Skip connections from the contractive path to the expansive path ensure the preservation of spatial information, enabling precise feature detection. The model is built in Python 3.10.9 using the PyTorch version 2.0.1 framework, utilizing one A100 GPU for efficient processing. The model is trained on the NIH chest X-ray dataset along with datasets from hospitals and scanning centres, performing multi-label semantic segmentation to segment ribs and diaphragm in chest X-ray images. The model is trained for 100 epochs using the binary cross-entropy loss function and the Adam optimizer with a learning rate of 0.001, ensuring optimal parameter adjustment. To enhance feature extraction, pretrained backbone networks such as VGG (Visual Geometry Group) and ResNet (Residual Neural Network) are utilized. VGG uses its first 30 convolution layers for extracting features, whereas ResNet50 uses all of its layers. ResNet performs better than VGG in extracting the features. Mean Intersection over Union (MIoU) is employed for evaluating segmentation results, achieving high MIoU scores of around 90 for all classes of ribs and diaphragm.
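The described training configuration (Adam optimizer with learning rate 0.001, binary cross-entropy loss over the per-class masks for the eight rib pairs plus the diaphragm) can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions: `make_training_step` is a hypothetical helper, and the actual U-Net with VGG/ResNet backbone is not reproduced here.

```python
import torch
import torch.nn as nn

def make_training_step(model: nn.Module):
    """Build a single training-step closure using the described
    configuration: BCE loss (with logits, for numerical stability)
    and Adam with learning rate 0.001."""
    criterion = nn.BCEWithLogitsLoss()   # binary cross-entropy per class
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    def step(images, masks):
        # images: (N, 1, H, W); masks: (N, 9, H, W) for 8 rib pairs + diaphragm
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
        return loss.item()

    return step
```

In practice this step would be repeated over the dataset for the stated 100 epochs, with MIoU computed on held-out images to evaluate segmentation quality.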
Binary cross-entropy loss function = −(y × log(p) + (1 − y) × log(1 − p))
where:
y = true label (0 or 1)
p = predicted probability (0 to 1)
log = natural logarithm (base e)
The negative (−) sign ensures the loss is always positive.
y × log(p): this term represents the loss for the positive class (y = 1); it calculates the negative log of the predicted probability p when the actual label is positive (y = 1).
(1 − y) × log(1 − p): this term represents the loss for the negative class (y = 0); it calculates the negative log of (1 − p) when the actual label is negative (y = 0).
Dice loss = 1 − Dice coefficient
Dice coefficient = (2 × TP) / ((2 × TP) + FN + FP) = 0.962
Dice loss = 0.038
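The two loss formulas above can be sketched directly in Python. This is an illustrative implementation of the stated definitions; the function names and the epsilon clipping (to avoid log(0)) are assumptions.

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-7):
    """BCE = -(y*log(p) + (1-y)*log(1-p)), averaged over all pixels.
    y: true labels (0 or 1); p: predicted probabilities in (0, 1)."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def dice_loss(pred, target):
    """Dice loss = 1 - (2*TP) / ((2*TP) + FN + FP) for boolean masks."""
    tp = np.sum(pred & target)    # true positives
    fn = np.sum(~pred & target)   # false negatives
    fp = np.sum(pred & ~target)   # false positives
    return 1.0 - (2 * tp) / ((2 * tp) + fn + fp)
```

A perfectly matching prediction gives a Dice coefficient of 1 and a Dice loss of 0; the document reports a Dice coefficient of 0.962, i.e., a Dice loss of 0.038.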
The proposed U-Net model demonstrates robust performance in identifying emphysema, achieving an impressive accuracy of 96.18%. With a precision of 0.9559, the said model exhibits a high degree of correctness in positively identifying emphysema cases. Furthermore, the model displays a sensitivity of 97.01%, effectively capturing a significant portion of true positive cases, while maintaining a specificity of 95.31%, ensuring a low rate of false positives.
The foregoing descriptions of exemplary embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable the persons skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the claims of the present invention.
Claims:
We Claim:
1. A system for determining emphysema condition, the system comprises:
a user interface (100) having an image uploading option (102) to upload chest X-ray images, and a result display option (104) to display whether the image has any indication of emphysema; and
a server (200) communicatively linked with the user interface (100) via a wireless network (300), wherein the server (200) has embedded therein an overlap region detection module (202), a blunt region detection module (204), and a decision module (206);
wherein the overlap region detection module (202) is configured to: segment X-ray image pixels into different anatomical classes as representatives of diaphragm and eight (1st -8th) pairs of anterior ribs counted from top to bottom, and check positioning of the diaphragm class with respect to the 6th pair rib class;
wherein the blunt region detection module (204) is configured to: locate three key points at lower corner edges of lungs in the X-ray image, and assess blunting of costophrenic angles based on the key points;
wherein the decision module (206) is configured to determine the emphysema condition by validating the positioning of the diaphragm class with respect to the 6th pair rib class, and the blunting of costophrenic angles as per a set of predefined parameters.
2. The system as claimed in claim 1, wherein the X-ray images are resized into defined dimensions before being inputted into the overlap region detection module (202) and the blunt region detection module (204).
3. The system as claimed in claim 1, wherein the overlap region detection module (202) deploys U-net neural network model for segmenting the images.
4. The system as claimed in claim 1, wherein the overlap region detection module (202) executes a function code to scan the image pixels in each column of every row in a top-to-bottom approach to check if the 6th rib pixel coordinates intersect with the diaphragm pixel coordinates, which is indicative of an overlap region.
5. The system as claimed in claim 1, wherein the blunt region detection module (204) deploys a YOLOv8-pose neural network model for locating the key points.
6. The system as claimed in claim 1, wherein the blunt region detection module (204) executes a function code to compute the costophrenic angle using coordinates of the key points.
7. The system as claimed in claim 1, wherein the decision module (206) decides presence of emphysema if no intersection/overlapping between the 6th anterior rib class and the diaphragm class is observed with or without the blunting of costophrenic angles.
8. The system as claimed in claim 1, wherein the decision module (206) decides absence of emphysema if any intersection/overlapping between the 6th anterior rib class and the diaphragm class is observed with or without the blunting of costophrenic angles.
9. The system as claimed in claim 1, wherein the user interface (100) is web-based or smartphone implementable application interface.