
System For Detecting Pleural Thickening Through Deep Learning Neural Networks

Abstract: Disclosed herein is a user-friendly and cost-effective system for detecting pleural thickening by analysing chest X-ray images through deep learning models such as U-net and YoloV8. The system comprises a user interface (100) having an input field (102) to input raw chest X-ray images, and an output field (104) to output an indication of the pleural thickening; and a server (200) in communication with the user interface (100) via a wireless network (300). The server (200) is embedded with an image segmentation module (204) and a pixel annotation module (206). The image segmentation module (204) is configured to: classify X-ray image pixels into different anatomical classes representing the left lung, the right lung, and eleven (R1-R11) pairs of posterior ribs counted from top to bottom; and develop a pixel colour difference among the classes. The pixel annotation module (206) is configured to: identify the gap region between the lungs and the posterior ribs based on the pixel colour difference, and generate bounding boxes in the images using pixel coordinates of the identified gap region. Fig. 3


Patent Information

Filing Date: 29 March 2024
Publication Number: 24/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

LARKAI HEALTHCARE PRIVATE LIMITED
C/o-DR. PRIYATOSH DHALLA, VILL. BELIATORE, (COLLAGE PARA), BANKURA, WEST BENGAL - 722203, INDIA

Inventors

1. PRITAM DHALLA
C/o-DR. PRIYATOSH DHALLA, VILL. BELIATORE, (COLLAGE PARA), BANKURA, WEST BENGAL - 722203, INDIA

Specification

Description:

FIELD OF THE INVENTION
The present invention broadly relates to human pulmonary disease diagnosis. More particularly, the present invention relates to a low-cost reliable system for detecting pleural thickening through deep learning application on chest X-ray images.

BACKGROUND OF THE INVENTION
The pleura is a thin layer of tissue that covers the lungs and lines the interior wall of the chest cavity, thereby protecting and cushioning the lungs. Sometimes, for various reasons, this pleura thickens, causing a medical condition known as pleural thickening. The major factors responsible for triggering pleural thickening in patients are 1) prolonged exposure to asbestos fibres, leading to a condition known as asbestosis; 2) past lung infections such as tuberculosis; 3) chest trauma; 4) radiation therapy for lung cancer or other thoracic malignancies; and 5) certain medications or treatments such as methotrexate. The symptoms of pleural thickening may not be immediately apparent and can develop gradually over time. The common symptoms include 1) chest pain or discomfort, which may worsen during deep breathing or coughing; 2) shortness of breath, especially during physical activity; 3) persistent dry cough; 4) reduced lung function leading to decreased exercise tolerance; and 5) pleural effusion (accumulation of fluid between the layers of the pleura).

Detection of pleural thickening holds paramount importance across the medical spectrum. Primarily, it serves as a vital indicator for uncovering underlying conditions, ranging from asbestos exposure to infections or malignancies. This proactive approach not only enhances patient outcomes but also underscores the significance of preventive measures and long-term monitoring. Additionally, the severity and progression of pleural thickening directly impact respiratory function. However, patients having a mild form of the condition, or those at an early stage, very often go undiagnosed or unnoticed by doctors.

Although various pulmonary diagnosis approaches are currently available, the X-ray check-up appears to be affordable, easily accessible, and to involve less radiation exposure than other imaging methods. With the recent development of artificial intelligence and machine learning technology, object detection algorithms based on convolutional neural networks are being used in various fields, including health diagnosis. However, all the existing machine learning-based diagnostic techniques have several limitations in terms of real-time result delivery, implementation on low-end computing devices, computing speed, computing resource utilisation, type of disease detected, diagnosis accuracy, etc. Therefore, there is a felt need for a low-cost, simplified, reliable diagnostic technique to examine pulmonary diseases, especially to detect pleural thickening and its exact locations on the lungs.

A reference may be made to KR20230040484A that discloses a Faster R-CNN algorithm-based object detection model trained for detecting various chest abnormality patterns using matching features between X-ray images and CT images.

Another reference may be made to US11436725B2 that discloses a self-supervised chest X-ray image analysis machine-learning model that utilizes transferable visual words (TransVW) to reduce annotation effort across different pathologies.

One more reference may be made to Indian patent application number 202223019813 that discloses a deep learning method to diagnose the severity level of seventeen lung diseases using either X-ray images or CT scan images, wherein the method deploys a combinative architecture of XChes13Net2.0 and YOLOV5, and uses RGB-coloured heatmap and bounding box techniques.

Further reference may be made to US10691980B1 that discloses a system and method for multi-abnormality classification based on chest X-ray images, wherein CNN and DBN models are deployed to predict abnormality classification scores for a wide range of abnormalities such as granuloma, infiltrate, nodule, scarring, effusion, atelectasis, bone/soft tissue lesion, fibrosis, cardiac abnormality, mass, pneumothorax, COPD, consolidation, pleural thickening, cardiomegaly, emphysema, edema, pneumonia, hilar abnormality, or hernias.

All the existing deep learning models (convolutional neural networks) employed in the detection of pulmonary diseases are primarily focused on heatmap techniques to detect a broad range of abnormality patterns and/or on direct marking on the images without specific segmentation techniques; therefore, a further precise analysis is required on chest X-ray images to diagnose a specific life-threatening disease, particularly pleural thickening, even at a very early stage that is not visually noticeable with the naked eye in X-ray images. Moreover, there are some critical anatomical characteristics, such as the gap between the ribs and the lungs, and marks/patches on the lungs, which need to be examined very cautiously and meticulously to show the pleural thickening location instantly in an online user interface. Therefore, it is required to devise a user-friendly, low-cost, and reliable system for detecting pleural thickening through advanced deep learning-based image segmentation and annotation applied to chest X-ray images, which includes all the advantages of the conventional/existing techniques/methodologies and overcomes their deficiencies.

OBJECT OF THE INVENTION
It is an object of the present invention to develop a deep learning-based online platform for qualitative and quantitative analysis of frontal (posteroanterior) chest X-ray images.

It is another object of the present invention to examine the visually unnoticeable gap present between the ribs and the lungs, and marks/patches present in the lungs, which could be indications of pleural thickening at a very early stage.

It is one more object of the present invention to develop novel advanced deep learning models for precise diagnosis of pleural thickening and its exact locations on the lungs.

It is a further object of the present invention to devise a user-friendly, low-cost, and reliable computing (online) system for detecting pleural thickening through advanced deep learning-based image segmentation and annotation applied to chest X-ray images.

SUMMARY OF THE INVENTION
In one aspect, the present invention provides a user-friendly and cost-effective system for detecting pleural thickening by analysing chest X-ray images through deep learning models such as U-net and YoloV8. The system comprises a user interface having an input field to input raw chest X-ray images, and an output field to output an indication of the pleural thickening; and a server in communication with the user interface via a wireless network. The server is embedded with an image segmentation module and a pixel annotation module. The image segmentation module is configured to: classify X-ray image pixels into different anatomical classes representing the left lung, the right lung, and eleven pairs of posterior ribs counted from top to bottom; and develop a pixel colour difference among the classes. The pixel annotation module is configured to: identify the gap region between the lungs and the posterior ribs based on the pixel colour difference, and generate bounding boxes in the images using pixel coordinates of the identified gap region.

Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description, which delineates the present invention in different embodiments.

BRIEF DESCRIPTION OF DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying figures.

Fig. 1 is input (Fig. 1a) and output (Fig. 1b) of a frontal (posteroanterior) chest X-ray image illustrating the left and right lungs, and eleven pairs of posterior ribs of a healthy person (normal lung condition).

Fig. 2 is input (Fig. 2a) and output (Fig. 2b) of a frontal (posteroanterior) chest X-ray image illustrating the left and right lungs, and eleven pairs of posterior ribs of an unhealthy (pleural thickening) patient.

Fig. 3 is a schematic diagram illustrating hardware components of the system for detecting pleural thickening, in accordance with an embodiment of the present invention.

Fig. 4 illustrates X-ray image analysis operational steps for detecting pleural thickening, in accordance with an embodiment of the present invention.

Fig. 5 illustrates U-net neural network architecture, in accordance with an exemplary embodiment of the present invention.

Fig. 6 illustrates Yolo-V8 model architecture, in accordance with an exemplary embodiment of the present invention.

List of reference numerals
100 user interface
102 input field
104 output field
200 server (processing unit)
202 image pre-processing module
204 image segmentation module
206 pixel annotation module
300 wireless network
R1-R11 posterior ribs
BB bounding boxes (marking on segmented images)

DETAILED DESCRIPTION OF THE INVENTION
Various embodiments described herein are intended only for illustrative purposes and subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.

The use of the terms "comprises/comprising", "includes/including" or "having/have/has" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the terms "an" and "a" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.

The frontal (posteroanterior) chest X-ray image (as shown in Fig. 1a) of a healthy person does not show any gap between the eleven pairs of posterior ribs (R1-R11) and the lungs (usually, a specific part of the pleural outer surface remains in contact with specific ribs in normal condition). The frontal (posteroanterior) chest X-ray image (as shown in Fig. 2a) of an unhealthy person (i.e., a pleural thickening patient) shows some gap region (appearing as white patches) between the eleven pairs of posterior ribs (R1-R11) and the lungs. However, at a very early stage, these gap regions cannot be noticed with the naked eye. Further, frontline medical staff, including nurses and inexperienced doctors, find it difficult to distinguish the pleural thickening condition from other lung diseases. Therefore, the present invention makes such pleural thickening detection tasks fully automated and simplified. Moreover, the present invention provides an online (website) or mobile app-based system through which the users need only upload the raw frontal (posteroanterior) chest X-ray images, and accurate results are displayed in real time. The results include whether the patient has any indication of pleural thickening or not. Further, the location of pleural thickening is annotated/marked in the form of bounding boxes (BB) (as shown in Fig. 2b) in the diagnosed X-ray images as displayed in the system.

According to an embodiment of the present invention, as shown in Fig. 3, the system for detecting pleural thickening is depicted. The system comprises a user interface (100), and a server (processing unit) (200) in communication with the user interface (100) via a wireless network (300). The user interface (100) has an input field (102) to input/upload raw chest X-ray images, and an output field (104) to output/display an indication of the pleural thickening with location annotation. The user interface (100) is a web-based or mobile app interface. The server (processing unit) (200) comprises a memory and a processor, where the memory stores a set of processor-executable codes/modules (software/algorithms) to carry out the pleural thickening diagnosis operation.

According to an embodiment of the present invention, the server (200) has embedded therein an image pre-processing module (202); an image segmentation module (204); and a pixel annotation module (206).

According to an embodiment of the present invention, the image pre-processing module (202) is configured to convert the inputted raw X-ray images into defined dimensions before segmentation. Initially, every input image is resized to 512 x 512 dimension using computer vision interpolation techniques. For example, the image resizing is done using the Python PIL library, employing the Lanczos resampling technique to maintain image quality.
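For illustration only, the resizing step described above may be sketched with the Python PIL (Pillow) library mentioned in this embodiment; the function name and the greyscale conversion are assumptions of the sketch, not features of the claimed system.

```python
from PIL import Image

def resize_xray(img: Image.Image, size: int = 512) -> Image.Image:
    """Resize a raw chest X-ray to the fixed 512 x 512 input dimension.

    Lanczos resampling is employed, as described above, to maintain
    image quality; the image is first converted to greyscale ("L").
    """
    return img.convert("L").resize((size, size), Image.LANCZOS)
```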

According to an embodiment of the present invention, the image segmentation module (204) is configured to: classify X-ray image pixels into different anatomical classes representing the left lung, the right lung, and eleven (R1-R11) pairs of posterior ribs counted from top to bottom; and develop a pixel colour difference among the classes. All the classes are assigned different colours, allowing for the identification of pixel coordinates for each class. For example, as shown in Figs. 1 and 2, the pixel colours in the images before and after segmentation are shown in Table 1.

Table 1

Stage               | Rib colour | Lung colour | Pleural thickening colour
Before segmentation | White      | Black       | White
After segmentation  | White      | White       | Black
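For illustration only, using the colour assignments of Table 1, candidate gap pixels can be located by scanning the segmented image for the remaining black pixels; the following pure-Python sketch (the function name and the list-of-lists image representation are assumptions) conveys the idea.

```python
def find_gap_pixels(segmented, black=0):
    """Collect (x, y) coordinates of gap-region pixels in a segmented image.

    Per Table 1, ribs and lungs are both rendered white after
    segmentation, so any remaining black pixel indicates a gap
    (candidate pleural thickening). `segmented` is a 2-D list of
    grey values; this is an illustrative sketch, not the trained model.
    """
    return [(x, y)
            for y, row in enumerate(segmented)
            for x, v in enumerate(row)
            if v == black]
```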

According to an embodiment of the present invention, the pixel annotation module (206) is configured to: identify the gap region between the lungs and the posterior ribs based on the pixel colour difference, and generate bounding boxes (BB) in the images using pixel coordinates of the identified gap region. The bounding boxes (BB) are defined by two points, for example the top-left and bottom-right corners of the box, providing a compact way to describe the position and size of the pleural thickening region pixels in the X-ray images.
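A minimal sketch of deriving the two corner points from the identified gap-pixel coordinates (the function name is illustrative):

```python
def bounding_box(pixels):
    """Return (top-left, bottom-right) corners enclosing the gap pixels.

    `pixels` is a non-empty list of (x, y) coordinates of the identified
    gap region; the two corners fully describe the position and size of
    the bounding box drawn on the X-ray image.
    """
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return (min(xs), min(ys)), (max(xs), max(ys))
```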

According to an embodiment of the present invention, the output field (104) outputs the identified gap region between the lungs and the posterior ribs as presence of the pleural thickening, with the generated bounding box annotations in the images (as shown in Fig. 2b). The output field (104) outputs absence of the pleural thickening in the absence of any gap region between the lungs and the posterior ribs (as shown in Fig. 1b). For example, the output field (104) displays a text message indicating presence or absence of the pleural thickening condition.

According to an embodiment of the present invention, as shown in Fig. 4, the frontal (posteroanterior) chest X-ray image analysis operation for detecting pleural thickening is depicted. The image analysis operation includes the steps of: inputting (S1) raw chest X-ray images; resizing (S2) the images into defined dimensions; classifying/segmenting (S3) image pixels into different anatomical classes/segments representing the left lung, the right lung, and eleven (R1-R11) pairs of posterior ribs counted from top to bottom; developing (S4) a pixel colour difference among the classes; identifying (S5) the gap region between the lungs and the posterior ribs based on the pixel colour difference; generating (S6) bounding boxes in the images using pixel coordinates of the identified gap region; and outputting (S7) presence or absence of pleural thickening with annotation in an output field of the user interface.

Referring to Fig. 5, the image segmentation module (204) deploys a U-net neural network model for image segmentation. The U-net architecture is characterised by two main paths: an expansive path on the right and a contractive path on the left. Convolution layers are used as encoders in the contractive path to capture contextual information by reducing the spatial dimensions of the image, while transposed convolution layers are used as decoders to upsample the image back to its original size. The contracting path is responsible for finding the relevant features in the input image: by reducing the spatial resolution and increasing the depth of the feature maps through convolutional processes, the encoder layers capture progressively more abstract representations of the input. This contracting pattern resembles the feedforward layers of other convolutional neural networks. The expansive path, in turn, decodes the encoded data in order to localise the features and restore the input's spatial resolution. In addition to convolutional processes, the decoder layers in the expansive path upsample the feature maps. Skip connections from the contracting path help preserve the spatial information otherwise lost during contraction, allowing the decoder layers to localise the features more precisely. The model is built in Python 3.10.9 using the PyTorch 2.0.1 framework, utilising one A100 GPU for efficient processing. The model is trained and validated using the NIH chest X-ray dataset along with datasets from hospitals and scanning centres. The raw images are resized to 512 x 512 dimension, and the posterior rib and lung portions in the resized images are annotated for training. In total, 24 classes are segmented from the images: the eleven pairs of posterior ribs and the two lungs from the chest X-ray.
In the deep learning neural network architecture, the convolution block consists of a convolution layer and max pooling with a window size of 2, followed by a dropout layer with a dropout rate of 0.25 to prevent overfitting. To adjust the parameters of the model, a binary cross-entropy loss function is used in combination with a binary dice loss, optimised by the Adam optimizer with a learning rate of 0.001. The model is trained for 100 epochs. Once the spatial dimension of the image is reduced by the encoder layers, pretrained backbone networks such as VGG (Visual Geometry Group) and ResNet 50 (Residual Neural Network) are used to extract features from the image. The backbone network extracts features from the images at different depths: the VGG variant uses its first 30 convolution layers to extract features, whereas ResNet 50 uses all of its layers. Over several trials, ResNet 50 gives the best segmentation outputs. The decoder path upsamples the feature maps and combines them with the corresponding encoder features. For evaluating the segmentation results, the Mean Intersection over Union (MIoU) is employed, achieving high MIoU scores of around 92% for all classes of ribs and lungs.
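The binary dice loss mentioned above can be illustrated in pure Python (the actual training computes it on PyTorch tensors; the smoothing constant `eps`, added to avoid division by zero, is an assumption of this sketch):

```python
def dice_loss(pred, target, eps=1e-6):
    """Binary dice loss over flattened masks (pure-Python sketch).

    `pred` holds predicted probabilities in [0, 1] and `target` holds
    0/1 ground-truth labels. A perfect overlap yields a loss near 0;
    disjoint masks yield a loss near 1. The training described above
    combines this with a binary cross-entropy term.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```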

Further, it is observed that the U-net model demonstrates strong performance in detecting pleural thickening, achieving an accuracy of 95.77%. With a precision of 0.9459, it showcases high correctness in identifying pleural thickening cases accurately. Additionally, the model exhibits a sensitivity of 96.21%, effectively capturing a significant portion of true positive cases, while maintaining a specificity of 94.67%, thereby minimizing the occurrence of false positives and false negatives in the diagnosis of pleural thickening.
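The figures reported above correspond to the standard confusion-matrix definitions of these quantities, which can be sketched as follows (function name illustrative):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, and specificity from the
    true/false positive and negative counts of a binary diagnosis."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
    }
```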

Referring to Fig. 6, the pixel annotation module (206) deploys a YoloV8 neural network model for pleural thickening marking/labelling. The YoloV8 model is trained to generate bounding boxes (BB) at the identified gap regions and to give the coordinates of the identified gaps in the image. The model is capable of locating and identifying even the smallest gaps indicating thickening of the pleural membrane, and identifies both diffuse and localised thickening, thereby reducing missed diagnoses. The model exhibits a mean average precision (mAP50) of 0.73. Using these bounding boxes, it becomes possible to precisely detect the locations of the pleural thickening.
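For illustration only, YoloV8 training labels conventionally store each box as normalised centre coordinates rather than corner points; a sketch of converting the corner-point boxes described above into that format (the function name is an assumption):

```python
def corners_to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert a (top-left, bottom-right) pixel box to the normalised
    (cx, cy, w, h) format conventionally used for YOLO training labels,
    with every value in [0, 1] relative to the image dimensions."""
    w, h = x2 - x1, y2 - y1
    return ((x1 + w / 2) / img_w, (y1 + h / 2) / img_h,
            w / img_w, h / img_h)
```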

The foregoing descriptions of exemplary embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable persons skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the claims of the present invention.

Claims:

We Claim:

1. A system for detecting pleural thickening, the system comprises:
a user interface (100) having an input field (102) to input raw chest X-ray images, and an output field (104) to output indication of the pleural thickening; and
a server (200) being in communication with the user interface (100) via a wireless network (300), wherein the server (200) is embedded with:
an image segmentation module (204) configured to: classify X-ray image pixels into different anatomical classes representing left lung, right lung, and eleven (R1-R11) pairs of posterior ribs counted from top to bottom; and develop pixel colour difference among the classes; and
a pixel annotation module (206) configured to: identify gap region between the lungs and the posterior ribs based on the pixel colour difference, and generate bounding boxes in the images using pixel coordinates of the identified gap region.

2. The system as claimed in claim 1, wherein the server (200) is embedded with an image pre-processing module (202) configured to convert the raw X-ray images into defined dimensions before segmentation.

3. The system as claimed in claim 1, wherein the image segmentation module (204) deploys a U-net neural network model.

4. The system as claimed in claim 1, wherein the pixel annotation module (206) deploys a YoloV8 neural network model.

5. The system as claimed in claim 1, wherein the output field (104) outputs the identified gap region between the lungs and the posterior ribs as presence of the pleural thickening with the generated bounding box annotations in the images.

6. The system as claimed in claim 1, wherein the output field (104) outputs absence of the pleural thickening in absence of the gap region between the lungs and the posterior ribs.

7. The system as claimed in claim 1, wherein the user interface (100) is web-based or smartphone installable application interface.

Documents

Application Documents

# Name Date
1 202431025846-FORM FOR STARTUP [29-03-2024(online)].pdf 2024-03-29
2 202431025846-FORM FOR SMALL ENTITY(FORM-28) [29-03-2024(online)].pdf 2024-03-29
3 202431025846-FORM 1 [29-03-2024(online)].pdf 2024-03-29
4 202431025846-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-03-2024(online)].pdf 2024-03-29
5 202431025846-EVIDENCE FOR REGISTRATION UNDER SSI [29-03-2024(online)].pdf 2024-03-29
6 202431025846-DRAWINGS [29-03-2024(online)].pdf 2024-03-29
7 202431025846-COMPLETE SPECIFICATION [29-03-2024(online)].pdf 2024-03-29
8 202431025846-Proof of Right [06-05-2024(online)].pdf 2024-05-06
9 202431025846-FORM-9 [06-05-2024(online)].pdf 2024-05-06
10 202431025846-FORM-26 [06-05-2024(online)].pdf 2024-05-06
11 202431025846-FORM 3 [06-05-2024(online)].pdf 2024-05-06
12 202431025846-STARTUP [14-06-2024(online)].pdf 2024-06-14
13 202431025846-FORM28 [14-06-2024(online)].pdf 2024-06-14
14 202431025846-FORM 18A [14-06-2024(online)].pdf 2024-06-14
15 202431025846-FER.pdf 2024-09-02
16 202431025846-OTHERS [01-03-2025(online)].pdf 2025-03-01
17 202431025846-FER_SER_REPLY [01-03-2025(online)].pdf 2025-03-01
18 202431025846-CLAIMS [01-03-2025(online)].pdf 2025-03-01
19 202431025846-US(14)-HearingNotice-(HearingDate-08-05-2025).pdf 2025-03-20
20 202431025846-Correspondence to notify the Controller [05-05-2025(online)].pdf 2025-05-05
21 202431025846-Written submissions and relevant documents [13-05-2025(online)].pdf 2025-05-13
22 202431025846-Annexure [13-05-2025(online)].pdf 2025-05-13

Search Strategy

1 searchstrategy(2)E_26-08-2024.pdf
2 202431025846_SearchStrategyAmended_E_SearchHistory(75)AE_19-03-2025.pdf