
AI Enabled Image Processing System And Method For Determining Osteoarthritis

Abstract: The present disclosure relates to an image acquisition unit 102 that acquires an image from a number of input devices, an image classifying unit 106 coupled to the image acquisition unit 102, a comparison unit 108 coupled to the image classifying unit 106, a storage unit 112 coupled to the comparison unit 108, a machine learning unit 110 coupled with the comparison unit 108, a KL scale analyzing unit 114 coupled with the machine learning unit 110, a severity determination unit 116 coupled with the KL scale analyzing unit 114, an AI unit 120 coupled with the severity determination unit 116, and a recommendation unit 122 coupled with the AI unit 120. The present disclosure also relates to a method of determining osteoarthritis. Figure 1


Patent Information

Application #
Filing Date
18 December 2020
Publication Number
25/2022
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
patent@adastraip.com
Parent Application

Applicants

Vinod Jain
575, MG Road, Indore, Madhya Pradesh - 452001, India

Inventors

1. Vinod Jain
575, MG Road, Indore, Madhya Pradesh - 452001, India

Specification

TECHNICAL FIELD
The present disclosure relates to an image processing system. More particularly, the present disclosure relates to an AI enabled image processing system and method for determining osteoarthritis.
BACKGROUND
Osteoarthritis is a joint disease resulting from degradation or breakdown of joint cartilage and the underlying bone. Osteoarthritis (OA) is a highly prevalent chronic health condition that causes substantial disability in late life. Cartilage is the protective connective tissue that covers the ends of bones in a joint. Healthy cartilage enables easy movement of the bones in the joint and prevents them from rubbing against each other. In a subject suffering from osteoarthritis, the top layer of cartilage breaks down and wears away gradually, leading to the bones rubbing against each other and causing pain.
Medical imaging is the process of obtaining visual representations of the internal structures of a subject, such as bones, helping in clinical diagnosis and medical intervention. It is a part of biological imaging and incorporates radiology, which uses the imaging technologies of X-ray, Magnetic Resonance Imaging (MRI), Ultrasound, and Computed Tomography (CT) scan.
Presently, evaluation of OA is based on clinical examination, symptoms and simple radiographic assessment techniques such as X-ray, MRI and CT, performed under the supervision of a trained doctor or physician. The images of the joints so obtained are classified according to the Kellgren-Lawrence (KL) system, which classifies individual joints into five grades. Grade 0 is normal joint space. Grade 1 is doubtful: normal joint space, no osteophyte formation. Grade 2 is possible osteophytes, possible narrowing of joint space. Grade 3 is definite osteophyte formation, definite narrowing of joint space, some sclerosis and possible joint malalignment. Grade 4 is multiple osteophyte formation, joint space collapse with bone contacting bone, marked sclerosis and definite joint malalignment. An osteophyte is a bone spur, that is, a small piece of bone that projects from the normal bone around joints.
However, the above classification of the severity of OA in a patient is done manually, and gradual OA may require continued visits to a trained orthopedic specialist to determine the severity of OA. This may lead to misdiagnosis or wrong prognosis, especially when surgery may be required. The present methods are manual with substantial human intervention and therefore carry scope for human error. The present methods used for clinical diagnosis of OA are not accurate enough to efficiently measure the quality and evolution of osteoarthritis as per the KL system. Thus, more significant, multi-factorial methods and algorithms are required to assess the parameters and progression of osteoarthritis.

SUMMARY
In view of the foregoing, an osteoarthritis determination system is provided. The system includes an image acquisition unit that acquires an image from one or more input devices; an image classifying unit that receives the acquired image from the image acquisition unit, classifies the acquired image by segmenting it into a number of parts in accordance with the topography of a joint, and extracts a feature of interest from the segmented image; a comparison unit that is communicatively coupled with the image classifying unit, receives the classified image from the image classifying unit, and compares the topography of the joint and the feature of interest in the classified image with a prestored image received from a storage unit; a machine learning unit that is communicatively coupled with the comparison unit, receives the compared image from the comparison unit, and maps the feature of interest in the compared image; a KL scale analyzing unit that is communicatively coupled with the machine learning unit, receives the mapped image from the machine learning unit, and assigns a KL scale value to the mapped image; a severity determination unit that is communicatively coupled with the KL scale analyzing unit, receives the KL scale value from the KL scale analyzing unit, and determines the severity of osteoarthritis in the subject; an AI unit that is communicatively coupled with the severity determination unit, compares severity data received from the severity determination unit with prestored severity data from the storage unit, and predicts an outcome based on the compared AI data; and a recommendation unit that is communicatively coupled with the AI unit and makes a recommendation to a user based on the predicted data received from the AI unit. The one or more input devices are selected from a group comprising an X-ray device, MRI images, CT scan images or other medical images obtained by radiography. The image acquired by the image acquisition unit is an anteroposterior view of an X-ray image of a knee joint. The KL scale analyzing unit analyzes one or more stages, and the stages are selected from a group comprising an early stage, a late stage, or stages based on the conventional KL scale of osteoarthritis. The recommendation unit recommends surgery, medication or exercise to the user based on a distance between two bones of a joint. The acquired image is enhanced with respect to one or more image parameters by an image enhancement unit. The one or more image parameters are selected from a group comprising image brightness, image temperature, image contrast, image focal length, image aperture, image levels, image vibrance, image hue, image saturation, image invert, image posterize, image colour balance, image channel mixture, image colour lookup, image exposure, image threshold, time, white balance, image cropping or environmental parameters.
In one aspect of the disclosure, a method for determining osteoarthritis is provided. The method includes acquiring an image from a number of input devices by an image acquisition unit and transmitting the acquired image to an image enhancement unit; enhancing and cropping the acquired image by the image enhancement unit; classifying the enhanced and cropped image by an image classifying unit, which further communicates with a comparison unit; receiving the classified image from the image classifying unit by the comparison unit, which compares the classified image with a prestored image received from a storage unit; receiving the compared image from the comparison unit by a machine learning unit, which points out the locations of bones and joints in the compared image and further communicates with a KL scale analyzing unit; receiving the mapped image from the machine learning unit by the KL scale analyzing unit, which measures the distance between the two joints, assigns a KL scale value and further communicates with a severity determination unit to determine the severity based on the KL scale value measured by the KL scale analyzing unit; receiving severity data from the severity determination unit by an AI unit, which predicts possible outcomes based on the severity data received from the severity determination unit and prestored severity data stored in the storage unit; and receiving predicted data from the AI unit by a recommendation unit, which recommends a remedial measure to a subject based on the severity data received from the severity determination unit and the predicted data received from the AI unit.

BRIEF DESCRIPTION OF DRAWINGS
The drawing/s mentioned herein disclose exemplary embodiments of the claimed invention. Other objects, features, and advantages of the present invention will be apparent from the following description when read with reference to the accompanying drawing:
FIG. 1 illustrates an osteoarthritis determination system for determining osteoarthritis (OA) in a subject (patient or user), according to an embodiment herein;
FIG. 2 illustrates a flowchart that depicts working of the osteoarthritis determination system, according to another embodiment herein; and
FIG. 3 illustrates a flowchart that depicts the working of a recommendation unit, according to another embodiment herein.
To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
This section is intended to provide explanation and description of various possible embodiments of the present invention. The embodiments used herein, and the various features and advantageous details thereof are explained more fully with reference to non-limiting embodiments illustrated in the accompanying drawing/s and detailed in the following description. The examples used herein are intended only to facilitate understanding of ways in which the embodiments may be practiced and to enable the person skilled in the art to practice the embodiments used herein. Also, the examples/embodiments described herein should not be construed as limiting the scope of the embodiments herein.
The term “feature of interest” refers to a portion of an image showing the bones, joints, ligaments, thighbone, shinbone and kneecaps.
The term “KL scale” refers to the Kellgren and Lawrence system used to classify the severity of osteoarthritis (OA) into five grades: grade 0 - none, grade 1 - doubtful, grade 2 - minimal, grade 3 - moderate and grade 4 - severe.
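For illustration, the conventional KL grades can be represented as a simple lookup. The following Python sketch is illustrative only; the names used are hypothetical and not part of the claimed system.

```python
# Illustrative lookup of the conventional KL grades referred to throughout this disclosure.
KL_GRADES = {
    0: "none - normal joint space",
    1: "doubtful - normal joint space, no osteophyte formation",
    2: "minimal - possible osteophytes, possible narrowing of joint space",
    3: "moderate - definite osteophytes, definite narrowing of joint space, some sclerosis",
    4: "severe - multiple osteophytes, joint space collapse, marked sclerosis",
}

def describe_kl_grade(grade: int) -> str:
    """Return a human-readable description for a KL grade (0-4)."""
    return KL_GRADES.get(grade, "unknown grade")
```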
As mentioned, there is a need for the development of a system and method for determining the severity of osteoarthritis. The embodiments herein overcome the limitations of the prior art by providing an AI enabled Kellgren and Lawrence (KL) system that predicts the severity of osteoarthritis and recommends a remedial measure to the user.
FIG. 1 illustrates an osteoarthritis determination system 100 for determining osteoarthritis (OA) in a subject (patient or user). The OA determination system 100 includes an image acquisition unit 102, an image enhancement unit 104, an image classifying unit 106, an image comparison unit 108, a storage unit 112, a machine learning unit 110, a KL scale analyzing unit 114, a severity determination unit 116, an AI unit 120 and a recommendation unit 122.
The image acquisition unit 102 is communicatively coupled with the image enhancement unit 104. The image enhancement unit 104 is communicatively coupled to the image classifying unit 106 and the storage unit 112. The image classifying unit 106 is communicatively coupled to the image comparison unit 108. The comparison unit 108 is communicatively coupled with the machine learning unit 110. The machine learning unit 110 is further communicatively coupled with the KL scale analyzing unit 114. The KL scale analyzing unit 114 is further communicatively coupled with the severity determination unit 116. The severity determination unit 116 is further communicatively coupled with the AI unit 120. The AI unit 120 is further communicatively coupled with the recommendation unit 122.
In operation, the image acquisition unit 102 acquires an image from a number of input devices and transmits the acquired image to the image enhancement unit 104. The image enhancement unit 104 is configured to enhance a number of parameters associated with the image received by the image enhancement unit 104. The image enhanced by the image enhancement unit 104 is sent to the image classifying unit 106. The image classifying unit 106 is configured to classify the enhanced image and segment the enhanced image into a number of parts in accordance with the topography of a joint of the subject. The image classifying unit 106 thereby extracts a feature of interest from the segmented image. The comparison unit 108 receives the classified image from the image classifying unit 106 and compares the classified image with a prestored image dataset that is stored in the storage unit 112. The machine learning unit 110 is adapted to map the feature of interest in the compared image received from the comparison unit 108 with respect to the prestored image dataset that is stored in the storage unit 112. The mapped image is sent to the KL scale analyzing unit 114 in order to identify the severity of OA in the subject. The KL scale analyzing unit 114 is adapted to assign a KL scale value to the mapped image with respect to the distance between two bones of the subject. The severity determination unit 116 determines the severity of OA based on the KL scale value received from the KL scale analyzing unit 114. The AI unit 120 receives severity data from the severity determination unit 116 and prestored severity data from the storage unit 112. The AI unit 120 predicts a possible outcome based on the severity data and the prestored severity data. The recommendation unit 122 receives the predicted data from the AI unit 120 and recommends a remedial approach to the subject to overcome the osteoarthritis.
The AI unit 120 uses the Inception v3 convolutional neural network for assisting in image analysis and object detection. The convolutional neural network architecture of the AI unit 120 is from the Inception family and makes several improvements, including using label smoothing, factorized 7 x 7 convolutions, and the use of an auxiliary classifier to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). In an Inception v3 model, several techniques for optimizing the network have been suggested to loosen the constraints for easier model adaptation. The techniques include factorized convolutions, regularization, dimension reduction, and parallelized computations.
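A minimal Keras sketch, assuming TensorFlow/Keras and pretrained ImageNet weights, of two of the Inception v3 related pieces named above: the pretrained backbone and a cross-entropy loss with label smoothing. This is illustrative only and is not the claimed implementation of the AI unit 120.

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

# Pretrained Inception v3 backbone (ImageNet weights), used here as a feature extractor.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(299, 299, 3))

# Categorical cross-entropy with label smoothing, one of the techniques mentioned above.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
```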
In an embodiment, the input device includes an X-ray device, a magnetic resonance imaging (MRI) unit, a computed tomography (CT) scan imaging unit or another medical imaging unit.
In another embodiment, convolutional neural network technique is used for assisting in image analysis and object detection.
In another embodiment, the image acquired by the image acquisition unit 102 is an anteroposterior view of the X-ray image of a knee joint of the subject.
In another embodiment, the number of image parameters includes image brightness, image temperature, image contrast, image focal length, image aperture, image levels, image vibrance, image hue, image saturation, image invert, image posterize, image color balance, image channel mixture, image color lookup, image exposure, image threshold, time, white balance or environmental parameters.
In another embodiment, the image enhancement unit 104 may crop the image.
In another embodiment, the image enhancement unit 104 splits the image into two images.
In another embodiment, the machine learning unit 110 maps a location of bones in the compared image received from the comparison unit 108.
In another embodiment, the feature of interest is a distance between bone joints, a segment pertaining to the bone joints, or degradation of cartilage at the bone joints.
In another embodiment, the prestored image dataset is a number of reference images of a healthy user.
In another embodiment, the prestored feature data is a reference feature data of the healthy user.
In another embodiment, the KL scale analyzing unit 114 identifies the classified image at a number of levels (hereinafter referred to as a level for single component).
In another embodiment, the number of levels includes an early stage, a late stage, or levels based on the conventional KL scale of OA.
In another embodiment, the recommendation unit 122 recommends surgery to the user when the distance between two bones is less than a predetermined distance.
In another embodiment, the recommendation unit 122 recommends regenerative medicine or physical exercises to the user when the distance between the two bones is equal to or more than the predetermined distance.
FIG. 2 illustrates a flowchart that depicts the working 200 of the osteoarthritis determination system 100. The working 200 of the osteoarthritis determination system 100 involves the following steps for determining OA in the subject.
At step 202, the image is acquired from the number of input devices by the image acquisition unit 102 and the acquired image is transmitted to the image enhancement unit 104.
At step 204, the acquired image is enhanced and cropped by the image enhancement unit 104.
At step 206, the enhanced and cropped image is classified by the image classifying unit 106, which further communicates the classified image to the comparison unit 108.
At step 208, the classified image is received from the image classifying unit 106 by the comparison unit 108, which compares the classified image with the prestored image received from the storage unit 112.
At step 210, the compared image is received from the comparison unit 108 by the machine learning unit 110, which points out the locations of bones and joints in the compared image and further communicates with the KL scale analyzing unit 114.
At step 212, the mapped image is received from the machine learning unit 110 by the KL scale analyzing unit 114, which measures the distance between the two joints, assigns a KL scale value and further communicates with the severity determination unit 116 to determine the severity based on the KL scale value assigned by the KL scale analyzing unit 114.
At step 214, the severity data is received from the severity determination unit 116 by the AI unit 120, which predicts possible outcomes based on the severity data received from the severity determination unit 116 and the prestored severity data stored in the storage unit 112.
At step 216, the predicted data is received from the AI unit 120 by the recommendation unit 122, which recommends a remedial measure to the subject based on the severity data received from the severity determination unit 116, as shown in FIG. 3, and the predicted data received from the AI unit 120.
FIG. 3 illustrates a flowchart that depicts the working of the recommendation unit 122. The recommendation unit 122 receives the predicted data from the AI unit 120 and recommends surgery to the subject when the KL value is low or the distance between two bones in the acquired image is less than a predetermined distance. The recommendation unit 122 recommends regenerative medicine or exercise to the user when the KL value is higher or the distance between two bones in the acquired image is equal to or more than the predetermined distance, as shown in FIG. 3.
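For illustration only, a minimal sketch of the distance-based branch of the recommendation unit 122 shown in FIG. 3; the threshold value and the function name are hypothetical, and the KL-value branch is omitted.

```python
PREDETERMINED_DISTANCE_MM = 3.0  # hypothetical joint-space threshold, for illustration only

def recommend(joint_space_distance_mm: float) -> str:
    """Recommend a remedial measure from the measured bone-to-bone distance."""
    if joint_space_distance_mm < PREDETERMINED_DISTANCE_MM:
        return "surgery"
    return "regenerative medicine or physical exercise"
```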

Example 1: -
Step 1: - Two datasets were created: one containing knee X-ray images and another containing random images, which also included brain MRI and hand X-ray images.
Step 2: - The input image was resized to (224,224) pixels.
Step 3: - An AI model using the InceptionV3 architecture and transfer learning was created. InceptionV3 is trained to classify between 1000 categories of images. We used the weights and biases from ImageNet for transfer learning and froze all layers of InceptionV3 to use the ImageNet weights and biases. As we are doing binary classification, we removed the last layer from the Inception model, added a Flatten layer and a Dense output layer with a 'softmax' activation function, and then trained the model for binary classification.
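A minimal sketch of Step 3, assuming TensorFlow/Keras: an InceptionV3 base with ImageNet weights, frozen layers, and a new Flatten and Dense softmax head for the two classes (knee X-ray vs. random image). The input size follows Step 2.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze all pretrained InceptionV3 layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),  # knee X-ray class vs. random image class
])
```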

Step 4: - The AI model was trained using an image data generator to feed the images to the neural network, and various transformations such as rescaling, horizontal flipping and zooming were also used to increase the accuracy of the model.
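A sketch of Step 4 under the same assumptions; the directory layout ('data/train' and 'data/val', each with one subfolder per class) is hypothetical.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # rescaling
    horizontal_flip=True,  # horizontal flip
    zoom_range=0.2,        # zooming
)
train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/val", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
```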

Step 5: - For compiling the model, we used 'categorical_crossentropy' as the loss function, the 'Adam' optimizer, and 'accuracy' as the metric.

Step 6: - Training data and validation data were fitted to train the model.

Step 7: - A function was generated for predicting whether an image belongs to the knee X-ray image class or the random image class; it returns true if the image is from the knee X-ray class, in which case the image is further passed to the primary AI model to predict the grade of arthritis.
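Continuing the sketches above, Steps 5 to 7 might look as follows; the epoch count, the class index assignment and the grading_model (the primary AI model) are assumptions.

```python
import numpy as np

# Step 5: compile with categorical cross-entropy, Adam and accuracy.
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# Step 6: fit the training and validation data (the epoch count is a hypothetical choice).
model.fit(train_gen, validation_data=val_gen, epochs=10)

# Step 7: gatekeeper function routing knee X-rays to the primary grading model.
def is_knee_xray(image: np.ndarray) -> bool:
    """Return True when the binary classifier predicts the knee X-ray class (assumed index 0)."""
    probs = model.predict(image[np.newaxis, ...])
    return int(np.argmax(probs, axis=1)[0]) == 0

def predict_arthritis_grade(image: np.ndarray, grading_model):
    """Pass confirmed knee X-rays to the primary AI model to predict the KL grade."""
    if is_knee_xray(image):
        return int(np.argmax(grading_model.predict(image[np.newaxis, ...]), axis=1)[0])
    return None
```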

Example 2: -
Step 1: - Two datasets were created, one for single knee images and one for double knee images, i.e., a double knee X-ray image class and a single knee X-ray image class.

Step 2: - A pathlib object to load the images was created.
Step 3: - The data was labelled by creating the Class_dir and Class_labels, where single knee images were labelled as '0' and double knee images were labelled as '1'.

Step 4: - The images were preprocessed by converting them into grayscale images and binarizing them using Otsu's thresholding technique. The images were resized to 224 x 224 pixels. These images were then stored in X (independent) and y (dependent) variables.
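A sketch of Step 4, assuming OpenCV for the grayscale conversion, Otsu binarization and resizing; the function and variable names are hypothetical.

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Grayscale -> Otsu binarization -> resize to 224 x 224."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.resize(binary, (224, 224))
```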

Step 5: - The images were converted into NumPy arrays and then split into training and testing datasets using sklearn's train_test_split.

Step 6: - The grayscale images were converted into three-channel (RGB) images by repeating the single channel three times, because the Inception architecture accepts input images as 3-channel (RGB) images.

Step 7: - The images for training and testing were scaled by dividing them by 255.
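Steps 5 to 7 might be sketched as follows, assuming the preprocessed images and labels from the earlier steps are held in Python lists; the split ratio and random seed are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.array(images, dtype=np.float32)  # list of preprocessed 224 x 224 arrays
y = np.array(labels)                    # 0 = single knee, 1 = double knee

# Step 5: split into training and testing datasets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 6: repeat the single grayscale channel three times to obtain 3-channel (RGB) input.
X_train = np.repeat(X_train[..., np.newaxis], 3, axis=-1)
X_test = np.repeat(X_test[..., np.newaxis], 3, axis=-1)

# Step 7: scale the pixel values by dividing by 255.
X_train, X_test = X_train / 255.0, X_test / 255.0
```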

Step 8: - The AI model was created with the help of the Keras functional API, with the InceptionV3 architecture as the base model.

Step 9: - Five hidden dense layers of 1000, 500, 258, and 128 neurons respectively were added, and one neuron in the last output layer gave the probability of a single or double knee image.

Step 10: - The model was compiled using 'BinaryCrossentropy' as the loss function, the 'Adam' optimizer, and 'accuracy' as the metric.
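A sketch of Steps 8 to 10, assuming the Keras functional API with an InceptionV3 base; the hidden layer widths follow the text as listed, and the activation functions are assumptions.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, Model

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

x = layers.Flatten()(base.output)
for units in (1000, 500, 258, 128):                 # hidden dense layers as listed in Step 9
    x = layers.Dense(units, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid")(x)   # probability of single vs. double knee

model = Model(inputs=base.input, outputs=output)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```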

Step 11: - The training data and validation data were fitted to the model, and the AI model was trained for 5 epochs.

Step 12: - Evaluation of the model on the test data was done, and it gave 98% accuracy and an F1-score of 98%.

Step 13: - Classification of the images into single and double knee images was done. If the image is a single knee image, it is passed directly to the AI model, which predicts the grade of arthritis. If the image is a double knee image, it is vertically cropped into two halves and the cropped images are passed to the AI model, which predicts the grade of arthritis.
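A sketch of Step 13 under the same assumptions: single knee images go straight to the grading model, while double knee images are split vertically and each half is graded. Resizing of the cropped halves back to the model's input size is omitted for brevity, and the model names are hypothetical.

```python
import numpy as np

def grade_image(image: np.ndarray, knee_count_model, grading_model) -> list:
    """Return the predicted arthritis grade(s) for a single- or double-knee image."""
    is_double = knee_count_model.predict(image[np.newaxis, ...])[0, 0] > 0.5
    if not is_double:
        return [int(np.argmax(grading_model.predict(image[np.newaxis, ...])))]
    mid = image.shape[1] // 2                  # vertical crop into two halves
    halves = [image[:, :mid], image[:, mid:]]
    return [int(np.argmax(grading_model.predict(h[np.newaxis, ...]))) for h in halves]
```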

While the disclosure has been presented with respect to certain specific embodiments, it will be appreciated that many modifications and changes may be made by those skilled in the art without departing from the spirit and scope of the disclosure. It is intended, therefore, by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the disclosure.

CLAIMS:
1. An osteoarthritis determination system (100) for determining osteoarthritis in a subject, the osteoarthritis determination system (100) comprising:
an image acquisition unit (102) configured to acquire an image from one or more input devices;
an image classifying unit (106) configured to receive the acquired image from the image acquisition unit (102), classify the acquired image by segmenting the acquired image into a number of parts in accordance with a topography of a joint, and extract a feature of interest from the segmented image;
a comparison unit (108) communicatively coupled with the image classifying unit (106) and is configured to receive a classified image from the image classifying unit (106) for comparing the topography of a joint and the feature of interest associated with the classified image with a prestored image dataset received from a storage unit (112);
a machine learning unit (110) communicatively coupled with the comparison unit (108) and is configured to receive a compared image from the comparison unit (108) for mapping the feature of interest in the compared image;
a KL scale analyzing unit (114) communicatively coupled with the machine learning unit (110) and is configured to receive a mapped image from the machine learning unit (110) for assigning a KL scale value to the mapped image;
a severity determination unit (116) communicatively coupled with the KL scale analyzing unit (114) and is configured to receive an assigned KL scale value from the KL scale analyzing unit (114) for determining severity of osteoarthritis in the subject;
an AI unit (120) with inception v3 convolutional neural network architecture is communicatively coupled with the severity determination unit (116) and is configured to compare a severity data received from the severity determination unit (116) with a prestored severity data stored in the storage unit (112) for predicting an outcome based on compared AI data; and
a recommendation unit (122) communicatively coupled with the AI unit (120) and is configured to recommend one or more remedial approaches to the subject based on a predicted data received from the AI unit (120).

2. The system (100) as claimed in claim 1, wherein the one or more input devices are selected from a group comprising an X-ray device, MRI images, CT scan images or other medical images obtained by radiography.

3. The system (100) as claimed in claim 1, wherein the image acquired by the image acquisition unit (102) is an anteroposterior view of the X-ray image of a knee joint.

4. The system (100) as claimed in claim 1, wherein the KL scale analyzing unit (114) analyzes one or more stages, and the stages are selected from a group comprising an early stage, a late stage, or stages based on the conventional KL scale of osteoarthritis.

5. The system (100) as claimed in claim 1, wherein the recommendation unit (122) recommends surgery, medication or exercise to the user based on a distance between two bones of a joint.

6. The system (100) as claimed in claim 1, wherein the acquired image is enhanced by one or more image parameters by an image enhancement unit (104).

7. The system (100) as claimed in claim 6, wherein the one or more image parameters are selected from a group comprising image brightness, image temperature, image contrast, image focal length, image aperture, image levels, image vibrance, image hue, image saturation, image invert, image posterize, image colour balance, image channel mixture, image colour lookup, image exposure, image threshold, time, white balance, image cropping or environmental parameters.

8. A method (200) for determining Osteoarthritis, the method comprising:
a. acquiring (202) an image from a number of input devices by an image acquisition unit (102) and transmitting the acquired image to an image enhancement unit (104);
b. enhancing and cropping (204) an acquired image by the image enhancement unit (104);
c. classifying (206) the enhanced and cropped image by an image classifying unit (106) and further sending the enhanced image to a comparison unit (108);
d. receiving (208) a classified image from the image classifying unit (106) by the comparison unit (108) for comparing the classified image with a prestored image dataset that is received from a storage unit (112);
e. receiving (210) the compared image from the comparison unit (108) by a machine learning unit (110) for mapping location of bones and joints in the compared image and further sending the mapped image to a KL scale analyzing unit (114);
f. receiving (212) the mapped image from the machine learning unit (110) by the KL scale analyzing unit (114) for measuring distance between the two joints of the subject and assigning a KL scale value based on the measured distance between the two joints;
g. determining severity of the osteoarthritis based on the KL scale values measured by the KL scale analyzing unit (114);
h. receiving (214) severity data from the severity determination unit (116) by an AI unit (120) with Inception v3 convolutional neural network architecture, wherein the AI unit (120) predicts a possible outcome based on the severity data received from the severity determination unit (116) and a prestored severity data stored in the storage unit (112); and
i. receiving (216) predicted data from the AI unit (120) by a recommendation unit (122), wherein the recommendation unit (122) recommends a remedial measure to the subject based on the severity data received from the severity determination unit (116) and the predicted data received from the AI unit (120).

Documents

Application Documents

# Name Date
1 202021025737-STATEMENT OF UNDERTAKING (FORM 3) [18-06-2020(online)].pdf 2020-06-18
2 202021025737-PROVISIONAL SPECIFICATION [18-06-2020(online)].pdf 2020-06-18
3 202021025737-FORM 1 [18-06-2020(online)].pdf 2020-06-18
4 202021025737-DRAWINGS [18-06-2020(online)].pdf 2020-06-18
5 202021025737-FORM-26 [18-09-2020(online)].pdf 2020-09-18
6 202021025737-PostDating-(18-06-2021)-(E-6-142-2021-MUM).pdf 2021-06-18
7 202021025737-APPLICATIONFORPOSTDATING [18-06-2021(online)].pdf 2021-06-18
8 202021025737-DRAWING [18-12-2021(online)].pdf 2021-12-18
9 202021025737-COMPLETE SPECIFICATION [18-12-2021(online)].pdf 2021-12-18
10 Abstract1.jpg 2022-03-30
11 202021025737-FORM 18 [18-12-2024(online)].pdf 2024-12-18