
System And Method For Detecting Patient Rotation From Chest X Ray Images Using Deep Learning

Abstract: Disclosed herein is a system and method for detecting patient rotation/alignment in chest X-ray, which helps determine whether the patient is properly aligned during the X-ray procedure, so that erroneous diagnosis/medication can be prevented from the beginning. The system comprises a user interface (100) wirelessly linked with a server (200). The user interface (100) has an input section (102) for receiving a chest X-ray image, and an output section (104) for delivering a detection result in real-time. The server (200) is configured to: segment the image in two stages with respect to the left and right clavicles (RC, LC) and five thoracic vertebrae (T1-T5), respectively, using U-Net models; superimpose the segmented clavicle image with the segmented thoracic vertebra image, forming an overlapping region (OL) between the clavicles (RC, LC) and the thoracic vertebrae (T1-T5); locate two innermost pixel values of the clavicles (RC, LC) and two outermost pixel values of the third thoracic vertebra (T3) present in the overlapping region (OL); calculate an average pixel value of the identified two outermost pixel values of the third thoracic vertebra (T3); subtract the identified two innermost pixel values of the clavicles (RC, LC) from the average pixel value separately to get two corresponding absolute values; derive the rotation index through division of the two absolute values; and compare the rotation index against a set of threshold values, and determine whether the subject is rotated towards the right or left, or correctly aligned, based on the comparison status. Fig. 1


Patent Information

Application #
Filing Date
05 April 2024
Publication Number
41/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
Parent Application

Applicants

LARKAI HEALTHCARE PRIVATE LIMITED
C/o-DR. PRIYATOSH DHALLA, VILL. BELIATORE, (COLLAGE PARA), BANKURA, WEST BENGAL - 722203, INDIA

Inventors

1. PRITAM DHALLA
C/o-DR. PRIYATOSH DHALLA, VILL. BELIATORE, (COLLAGE PARA), BANKURA, WEST BENGAL - 722203, INDIA

Specification

DESC:FIELD OF THE INVENTION
The present invention generally relates to medical image processing techniques. More specifically, the present invention relates to a system and method for detecting patient rotation dependent anatomical attributes (such as tracheal deviation/shift) from chest X-ray images. By employing advanced image processing techniques and machine learning algorithms, the present invention enhances diagnostic accuracy and efficiency in pulmonary and thoracic healthcare.

BACKGROUND OF THE INVENTION
During a chest X-ray procedure, even a very small amount of patient rotation may lead to misinterpretation and hence erroneous diagnosis. For instance, if the patient is rotated to his/her left, the heart may appear enlarged; if the patient is rotated to his/her right, heart size may be underestimated. Therefore, it is necessary to measure patient rotation before reaching any conclusion. Rotation on a Posterior-Anterior (PA) chest radiograph can be determined by examining both sternal ends of the clavicles for a symmetric appearance in relation to the spine. The existing technique for checking patient rotation measures the distance of each clavicle head from the spinous processes seen in the trachea on a chest X-ray: if the distances are equal, there is no rotation; if the distance between the right clavicle and the spinous processes is smaller, the patient is rotated towards the left; and if the distance between the left clavicle and the spinous processes is smaller, the patient is rotated towards the right.

Due to increased pressure within the chest cavity, the trachea may shift to one side, typically the side where the pressure is lower or there is less lung volume; such an abnormal position of the trachea is known as tracheal deviation. Tracheal deviation is typically seen anteriorly and to the right (up to 90°). Normal deviation to the left is observed only when the aortic arch is located to the right of the trachea. Any other configuration (i.e., to the left or posteriorly) may raise the possibility of underlying pathology, which results in symptoms such as coughing, difficulty breathing, wheezing, and chest pain. Further, the conventional method of measuring tracheal deviation involves checking the position of the trachea with respect to the body midline, which is typically taken as the spinous processes for easy reference.

Tracheal deviation and patient rotation are critical factors that significantly influence the accurate interpretation of chest X-ray images, particularly in diagnosing pulmonary and thoracic conditions. Tracheal deviation can be indicative of various underlying pathologies such as mediastinal masses or pneumothorax, while patient rotation can obscure critical anatomical structures, leading to misinterpretation and delayed treatment. Current diagnostic methodologies often rely on subjective visual assessment, resulting in variability and potential diagnostic errors. With the advancement of computer science and artificial intelligence, a few researchers have proposed alternative approaches for analysing/correcting patient rotation factors in X-ray images.

A reference may be made to EP4184429A1 that discloses a method for determining rotation of a patient's chest in a medical image, in which scapular spatial data is determined and the rotation of patient's chest is found with respect to at least one reference axis using the scapular spatial data.

Another reference may be made to US20230363727A1 that discloses an optical arrangement for an X-ray system for determining a patient position/rotation, in which laser source, detector, and analyser are arranged in the X-ray system.

All the existing patient rotation assessment techniques have several limitations in terms of measuring the degree/severity of patient rotation, deployment on low-end hardware, computing speed/efficiency/accuracy, and resource utilization. Therefore, a more precise investigation of chest X-ray images is required to measure the degree/severity of shifting/rotation of the patient's chest (target) region that may otherwise result in misleading or erroneous diagnostic interpretation. Moreover, some critical anatomical characteristics, such as the alignment/positioning of specific vertebrae, the clavicles, and the trachea bifurcation point, need to be examined cautiously and meticulously, instantly and in an online user interface, to assess various pathological conditions contingent on patient rotation. Therefore, it is required to devise a user-friendly, low-cost, and reliable system for investigation/assessment of patient-rotation-dependent anatomical attributes (such as tracheal deviation/shift) from chest X-ray images through advanced deep-learning-based image segmentation and annotation applied to chest X-ray images, which includes all the advantages of the conventional/existing techniques/methodologies and overcomes their deficiencies.

OBJECT OF THE INVENTION
It is an object of the present invention to develop an advanced automated detection system meticulously engineered to identify and precisely quantify tracheal deviation and patient rotation from chest X-ray images using cutting-edge image processing algorithms and machine learning techniques.

It is another object of the present invention to provide clinicians with accurate, objective, and actionable assessments, thereby enhancing diagnostic precision and efficiency in pulmonary and thoracic healthcare.

It is one more object of the present invention to overcome limitations of manual interpretation methods, offering objective and timely detection of subtle anatomical variations, thereby improving patient outcomes and streamlining diagnostic workflows in clinical settings.

It is a further object of the present invention to devise a system assessing patient rotation dependent pulmonary and thoracic attributes to prevent misleading/erroneous interpretation.

SUMMARY OF THE INVENTION
In one aspect, the present invention provides a system for detecting patient rotation/alignment in chest X-ray, which helps determine whether the patient is properly aligned during the X-ray procedure, so that erroneous diagnosis/medication can be prevented from the beginning. The system comprises a user interface wirelessly linked with a server. The user interface has an input section for receiving a chest X-ray image, and an output section for delivering a detection result in real-time. The server is configured to: segment the image in two stages with respect to the left and right clavicles and five thoracic vertebrae, respectively, using U-Net models; superimpose the segmented clavicle image with the segmented thoracic vertebra image, forming an overlapping region between the clavicles and the thoracic vertebrae; locate two innermost pixel values of the clavicles and two outermost pixel values of the third thoracic vertebra present in the overlapping region; calculate an average pixel value of the identified two outermost pixel values of the third thoracic vertebra; subtract the identified two innermost pixel values of the clavicles from the average pixel value separately to get two corresponding absolute values; derive the rotation index through division of the two absolute values; and compare the rotation index against a set of threshold values, and determine whether the subject is rotated towards the right or left, or correctly aligned, based on the comparison status.

Other aspects, advantages, and salient features of the present invention will become apparent to those skilled in the art from the following detailed description, which delineates the present invention in different embodiments.

BRIEF DESCRIPTION OF DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying figures.

Fig. 1 is a schematic diagram illustrating hardware components of a system for detecting subject rotation in chest X-ray, in accordance with an embodiment of the present invention.

Fig. 2 shows image segmentation, superimposition, and target pixel identification technique applied on the chest X-ray image, in accordance with an embodiment of the present invention.

Fig. 3 shows method steps for detecting subject rotation in chest X-ray, in accordance with an embodiment of the present invention.

List of reference numerals
100 user interface
102 image input section/field
104 detection result output section/field
200 server
202 deep neural network models
204 image superimposition module
206 image pixel tracking module
208 rotation score/index computing module
210 validation module
300 wireless communication network
RC right clavicle
LC left clavicle
T1-T5 five thoracic vertebrae
I input chest X-ray image
IP1 first innermost pixel value of RC
IP2 second innermost pixel value of LC
OP1 first outermost pixel value of T3
OP2 second outermost pixel value of T3
CP central pixel value of T3

DETAILED DESCRIPTION OF THE INVENTION
Various embodiments described herein are intended only for illustrative purposes and subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.

The use of the terms “comprises/comprising”, “includes/including”, or “having/have/has” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the terms “an” and “a” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term ‘subject’ used herein refers to a patient’s internal body part, such as the heart, trachea, lungs, ribs etc., as it appears in chest X-ray images.

Patient rotation on a chest X-ray, where the patient is not aligned properly, can lead to misinterpretation of heart size, lung appearance, and other structures, potentially masking or mimicking disease. Further, frontline medical staff, including nurses and inexperienced doctors, may not easily recognize erroneous diagnoses caused by such patient rotation issues. Therefore, the present invention aims at developing an online (website) or mobile app platform through which any healthcare professional simply uploads the raw chest X-ray images, and the platform accurately confirms in real-time whether the subject (any of the chest region internal organs) is rotated towards the right or left (such as tracheal deviation/shift), or is correctly aligned/positioned. The present invention selects two major anatomical (morphological) factors, the left and right clavicles (collar bones) and the thoracic vertebrae, for precise analysis in X-ray images.

According to an embodiment of the present invention, as shown in Fig. 1, the system for detecting subject rotation in chest X-ray is depicted. The system comprises a user interface (100) and a server (processing unit) (200) linked with the user interface (100) via a wireless communication network (300). The user interface (100) is a web-based or mobile app interface, which may run on any computing device including a computer, smartphone, PDA, tablet, laptop etc. The user interface (100) comprises an input section (102) for receiving an (uploaded) chest X-ray image, and an output section (104) for delivering/displaying a detection result in real-time. The server (200) comprises a memory and a processor, where the memory stores a set of processor-executable codes/modules (software/algorithms) to carry out image processing/analysis operations.

According to an embodiment of the present invention, the server (200) has embedded therein a deep neural network (202), an image superimposition module (204), an image pixel tracking module (206), a rotation index computing module (208), and a validation module (210).

According to an embodiment of the present invention, as shown in Fig. 2, the deep neural network (202) is configured to segment the image in two stages with respect to the left and right clavicles (RC, LC) and five thoracic vertebrae (T1-T5), respectively. The input chest X-ray (I) may undergo preprocessing in which the image is resized to a defined dimension and noise may be removed using a filter. The image segmentation is performed using a deep learning tool such as U-Net neural network models. One model is trained to segment both clavicles (RC, LC) from the input chest X-ray (I). Another model is trained to segment the thoracic vertebrae (T1-T5) from the chest X-ray image (I), where the third thoracic vertebra (T3) is considered the centre of the segmented thoracic vertebrae. The segmented images are in binary, i.e., black and white (0 or 1) format. In other words, the clavicle (RC, LC) regions and the thoracic vertebrae (T1-T5) regions (white-coloured marking) are separated from the background (black-coloured marking) as shown in Fig. 2.

U-Net is chosen because it consists of two major paths, i.e., a contractive path on the left and an expansive path on the right. The contractive path contains encoders, which consist of convolution layers that capture contextual information by decreasing the spatial dimensions of the image, while the decoders consist of transposed convolution layers for up-sampling. U-Net's contracting path is in charge of locating the pertinent features in the input image. By reducing the spatial resolution and increasing the depth of the feature maps through convolutional operations, the encoder layers capture progressively more abstract representations of the input. This contracting pattern resembles the feedforward layers of other convolutional neural networks. The expansive path then decodes the encoded data in order to localize the features and restore the input's spatial resolution: its decoder layers up-sample the feature maps, and the skip connections from the contracting path help to preserve the spatial information lost during contraction, allowing the decoder layers to detect the features more precisely.

For model building, Python 3.10.9 (PyTorch version 2.0.1) is used as the framework, and one A100 GPU is used. The NIH chest X-ray dataset is used for training and validation. A multi-label semantic segmentation technique is performed using U-Net on the chest X-ray image data, where every pixel is assigned to the particular class it belongs to. The first five thoracic vertebrae and both (left and right) clavicles are segmented from the chest X-ray. Initially, every input image is resized to 512 × 512 using computer vision interpolation techniques. In the U-Net architecture, the convolution block consists of a convolution layer, max pooling with a window size of 2, and a dropout layer with a dropout rate of 0.25. To adjust the parameters of the model, a binary cross-entropy loss function is used along with a binary dice loss. The optimizer used is Adam with a learning rate of 0.001. The model is trained for 100 epochs. Once the spatial dimension of the image is reduced after the encoder layers, pretrained backbone networks such as VGG and ResNet-50 are used to extract features from the image. The backbone network extracts features from the image at different depths; here, VGG uses its first 30 convolution layers to extract features, whereas ResNet uses all of its layers. Upon several trials, the best segmentation outputs are obtained from ResNet-50. The decoder path up-samples feature maps and combines them with the corresponding encoder features.
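To illustrate the binary dice loss term mentioned above, the following is a minimal pure-Python sketch (in actual training this would operate on PyTorch tensors of soft predictions; the function name and list representation are illustrative only):

```python
def dice_loss(pred, target, eps=1e-6):
    """Binary dice loss over two flat lists of 0/1 pixel labels.

    eps avoids division by zero when both masks are empty.
    Identical masks give a loss near 0; disjoint masks give a loss near 1.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    dice = (2 * intersection + eps) / (sum(pred) + sum(target) + eps)
    return 1 - dice
```

In practice this term would be summed with binary cross-entropy, as the description above indicates.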

According to an embodiment of the present invention, as shown in Fig. 2, the image superimposition module (204) is configured to superimpose the segmented clavicle image with the segmented thoracic vertebra image, forming an overlapping region (OL) between the clavicles (RC, LC) and the thoracic vertebrae (T1-T5). In other words, both segmented images are overlapped one on another using an image superimposition technique to find the desired/target pixel points in the region where the inner-side clavicle regions and the third thoracic vertebra (T3) overlap.
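A minimal sketch of this superimposition step, assuming the segmented outputs are equally sized binary masks (represented here as lists of rows); the pixel-wise union displays both structures in one image, while the pixel-wise intersection is one plausible way to derive the overlapping region (OL). Function names are illustrative, not the actual implementation:

```python
def superimpose(mask_a, mask_b):
    """Pixel-wise union of two equally sized binary masks."""
    return [[a | b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

def overlap_region(mask_a, mask_b):
    """Pixel-wise intersection: pixels present in both masks."""
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]
```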

According to an embodiment of the present invention, as shown in Fig. 2, the image pixel tracking module (206) is configured to locate two innermost pixel values of the clavicles (RC, LC), and two outermost pixel values of the third thoracic vertebra (T3), present in the overlapping region (OL). The first innermost pixel value (IP1) of the right clavicle (RC) is marked in red. The second innermost pixel value (IP2) of the left clavicle (LC) is marked in green. The first outermost pixel value (OP1) and the second outermost pixel value (OP2) of the third thoracic vertebra (T3) are marked in yellow. The central pixel value (CP) of the third thoracic vertebra (T3) is marked in blue. These pixel coordinates are detected through Python loops. The x-axis value of each pixel coordinate is taken into consideration.
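The pixel-location step can be sketched as follows, assuming per-structure binary masks and the PA-view convention that the patient's right clavicle appears on the image left (so its innermost, medial end has the largest x-coordinate, and conversely for the left clavicle). The function names and mask representation are illustrative assumptions, not the actual code:

```python
def white_xs(mask):
    """x-coordinates of all foreground (value 1) pixels in a binary mask."""
    return [x for row in mask for x, v in enumerate(row) if v == 1]

def clavicle_inner_points(rc_mask, lc_mask):
    # IP1: innermost (largest-x) pixel of the right clavicle;
    # IP2: innermost (smallest-x) pixel of the left clavicle.
    return max(white_xs(rc_mask)), min(white_xs(lc_mask))

def t3_outer_points(t3_mask):
    # OP1, OP2: outermost (smallest-x and largest-x) pixels of T3.
    xs = white_xs(t3_mask)
    return min(xs), max(xs)
```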

According to an embodiment of the present invention, the rotation index computing module (208) is configured to: calculate an average pixel value of the identified two outermost pixel values of the third thoracic vertebra (T3); subtract the identified two innermost pixel values of the clavicles (RC, LC) from the average pixel value separately to get two corresponding absolute values; and derive the rotation index through division of the two absolute values.

The average pixel value is considered as the central pixel value (CP) of the third thoracic vertebra (T3) that is computed using equation 1.
CP = (OP1 + OP2) / 2     (equation 1)

Three imaginary parallel straight lines (the RC line, LC line, and T3 line) are drawn through the two innermost pixel (IP1, IP2) points of both clavicles (RC, LC) and the central pixel (CP) point of the third thoracic vertebra (T3). The absolute (i.e., positive) values are taken as a first distance (a1) between the first innermost pixel (IP1) point and the central pixel (CP) point, and a second distance (a2) between the second innermost pixel (IP2) point and the central pixel (CP) point. These distance values are computed using equations 2 and 3.
a1 = |IP1 value − CP value|     (equation 2)
a2 = |IP2 value − CP value|     (equation 3)

The rotation score is taken as the patient rotation index value (RI), which indicates the possibility and severity of misalignment of the patient's body parts, especially the size and positioning of the lungs, heart etc., usually caused by tracheal/clavicle shifting during the chest X-ray procedure. The rotation index value (RI) is computed using equation 4.
RI = a1 / a2     (equation 4)
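Equations 1 to 4 can be combined into a short sketch operating on the four x-coordinates described above (the function name is illustrative):

```python
def rotation_index(ip1, ip2, op1, op2):
    """Rotation index RI from the x-coordinates IP1, IP2, OP1, OP2."""
    cp = (op1 + op2) / 2      # equation 1: central pixel of T3
    a1 = abs(ip1 - cp)        # equation 2: RC inner point to CP distance
    a2 = abs(ip2 - cp)        # equation 3: LC inner point to CP distance
    return a1 / a2            # equation 4
```

For example, symmetric clavicle points about the T3 centre yield RI = 1.0, i.e., no rotation.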

According to an embodiment of the present invention, the validation module (210) is configured to compare the rotation index (RI) against a set of threshold/defined values, and determine whether the subject (patient) is rotated towards the right or left, or is correctly aligned, based on the comparison status. The validation parameters are shown in Table 1.
Table 1
Rotation Index (RI) Threshold Value | Indication
RI > 1.1 | patient is rotated towards right (abnormal)
RI < 0.9 | patient is rotated towards left (abnormal)
RI between 0.9 and 1.1 | patient is correctly aligned (normal)

If the rotation index is greater than 1.1, the system displays an indication that the subject is rotated towards the right (i.e., a misaligned subject) in the chest X-ray. If the rotation index is less than 0.9, the system displays an indication that the subject is rotated towards the left (i.e., a misaligned subject) in the chest X-ray. If the rotation index is between 0.9 and 1.1, the system displays an indication that the subject is correctly aligned in the chest X-ray.

According to an embodiment of the present invention, as shown in Fig. 3, the method for detecting subject rotation in chest X-ray is depicted. The method comprises the steps of: configuring/implementing (S1) an input section for receiving a chest X-ray image, and an output section for delivering a detection result, respectively, in a user interface; transmitting (S2) the received chest X-ray image from the user interface to a server through a wireless communication network; segmenting (S3) the image in two stages with respect to the left and right clavicles (RC, LC) and five thoracic vertebrae (T1-T5), respectively, through a deep neural network hosted in the server; superimposing (S4) the segmented clavicle image with the segmented thoracic vertebra image, forming an overlapping region (OL) between the clavicles (RC, LC) and the thoracic vertebrae (T1-T5); locating (S5) two innermost pixel values of the clavicles (RC, LC), and two outermost pixel values of the third thoracic vertebra (T3), present in the overlapping region (OL); obtaining (S6) an average pixel value of the identified two outermost pixel values of the third thoracic vertebra (T3); subtracting (S7) the identified two innermost pixel values of the clavicles (RC, LC) from the average pixel value separately to get two corresponding absolute values; deriving (S8) a rotation index through division of the two absolute values; and validating (S9) the rotation index against a set of threshold values to determine whether the subject is rotated towards the right or left, or is correctly aligned.

The conventional approach for determining patient rotation involves manual interpretation by expert radiologists (experienced and knowledgeable doctors), or CNN models fed with the X-ray images along with their text reports or interpretation results to cross-check diagnostic accuracy. In contrast, the proposed approach for determining patient rotation, or proper alignment of the patient under the X-ray setup, outperforms such conventional approaches, thus providing first-hand reliable information before visiting medical experts and initiating the next course of medical procedures. Further, it is observed from a pilot study run on anonymized chest X-ray images that the proposed computational models/modules demonstrate robust performance in detecting patient rotation, achieving an accuracy of 97%. Additionally, the model exhibits a sensitivity of 98.2%, effectively capturing a significant portion of true positive cases, while maintaining a specificity of 97.9%, thereby minimizing the occurrence of false positives and false negatives in the detection of patient rotation.

The foregoing descriptions of exemplary embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable persons skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the claims of the present invention.

CLAIMS
We Claim:

1. A method for detecting subject rotation in chest X-ray, the method comprises steps of:
configuring (S1) an input section (102) for receiving a chest X-ray image, and an output section (104) for delivering a detection result, respectively, in a user interface (100);
transmitting (S2) the received chest X-ray image from the user interface (100) to a server (200) through a wireless communication network (300);
segmenting (S3) the image in two stages with respect to left and right clavicles (RC, LC), and five thoracic vertebrae (T1-T5), respectively, through a deep neural network (202) hosted in the server (200);
superimposing (S4) the segmented clavicle image with the segmented thoracic vertebra image forming an overlapping region (OL) between the clavicles (RC, LC) and the thoracic vertebrae (T1-T5);
locating (S5) two innermost pixel values of the clavicles (RC, LC), and two outermost pixel values of the third thoracic vertebra (T3) present in the overlapping region (OL);
obtaining (S6) an average pixel value of the identified two outermost pixel values of the third thoracic vertebra (T3);
subtracting (S7) the identified two innermost pixel values of the clavicles (RC, LC) from the average pixel value separately to get corresponding two absolute values;
deriving (S8) a rotation index through division of the two absolute values; and
validating (S9) the rotation index against a set of threshold values to determine if the subject is rotated towards right or left, or the subject is correctly aligned.

2. The method as claimed in claim 1, wherein the validating step (S9) includes checking if the rotation index is greater than 1.1 as an indicative of the subject being rotated towards right in the chest X-ray.

3. The method as claimed in claim 1, wherein the validating step (S9) includes checking if the rotation index is lesser than 0.9 as an indicative of the subject being rotated towards left in the chest X-ray.

4. The method as claimed in claim 1, wherein the validating step (S9) includes checking if the rotation index is between 0.9 and 1.1 as an indicative of the subject being correctly aligned in the chest X-ray.
5. A system for detecting subject rotation in chest X-ray, the system comprises:
a user interface (100) having an input section (102) for receiving a chest X-ray image, and an output section (104) for delivering a detection result in real-time; and
a server (200) linked with the user interface (100) via a wireless communication network (300), wherein the server (200) has embedded therein:
a deep neural network (202) configured to segment the image in two stages with respect to left and right clavicles (RC, LC), and five thoracic vertebrae (T1-T5), respectively;
an image superimposition module (204) configured to superimpose the segmented clavicle image with the segmented thoracic vertebra image forming an overlapping region (OL) between the clavicles (RC, LC) and the thoracic vertebrae (T1-T5);
an image pixel tracking module (206) configured to locate two innermost pixel values of the clavicles (RC, LC), and two outermost pixel values of the third thoracic vertebra (T3) present in the overlapping region (OL);
a rotation index computing module (208) configured to: calculate an average pixel value of the identified two outermost pixel values of the third thoracic vertebra (T3), subtract the identified two innermost pixel values of the clavicles (RC, LC) from the average pixel value separately to get corresponding two absolute values; and derive the rotation index through division of the two absolute values; and
a validation module (210) configured to compare the rotation index against a set of threshold values, and determine if the subject is rotated towards right or left, or the subject is correctly aligned based on the comparison status.

6. The system as claimed in claim 5, wherein the user interface (100) is web-based or smartphone installable application interface.

7. The system as claimed in claim 5, wherein the deep neural network (202) involves image segmentation models trained in a U-net architecture comprising convolution layer, max pooling with a window size of 2 and dropout layer with a dropout rate of 0.25, and Adam optimizer with a learning rate of 0.001.

8. The system as claimed in claim 5, wherein the validation module (210) defines values of the rotation index greater than 1.1 as an indicative of the subject being rotated towards right, lesser than 0.9 as an indicative of the subject being rotated towards left, and between 0.9 and 1.1 as an indicative of the subject being correctly aligned, respectively, in the chest X-ray.

Documents

Application Documents

# Name Date
1 202431028210-PROVISIONAL SPECIFICATION [05-04-2024(online)].pdf 2024-04-05
2 202431028210-FORM FOR STARTUP [05-04-2024(online)].pdf 2024-04-05
3 202431028210-FORM FOR SMALL ENTITY(FORM-28) [05-04-2024(online)].pdf 2024-04-05
4 202431028210-FORM 1 [05-04-2024(online)].pdf 2024-04-05
5 202431028210-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-04-2024(online)].pdf 2024-04-05
6 202431028210-EVIDENCE FOR REGISTRATION UNDER SSI [05-04-2024(online)].pdf 2024-04-05
7 202431028210-DRAWINGS [05-04-2024(online)].pdf 2024-04-05
8 202431028210-Proof of Right [06-05-2024(online)].pdf 2024-05-06
9 202431028210-FORM-26 [06-05-2024(online)].pdf 2024-05-06
10 202431028210-FORM 3 [06-05-2024(online)].pdf 2024-05-06
11 202431028210-FORM-5 [05-04-2025(online)].pdf 2025-04-05
12 202431028210-DRAWING [05-04-2025(online)].pdf 2025-04-05
13 202431028210-CORRESPONDENCE-OTHERS [05-04-2025(online)].pdf 2025-04-05
14 202431028210-COMPLETE SPECIFICATION [05-04-2025(online)].pdf 2025-04-05
15 202431028210-FORM 18 [27-04-2025(online)].pdf 2025-04-27
16 202431028210-FORM 18 [11-08-2025(online)].pdf 2025-08-11