Abstract: AUTOMATIC ASSESSMENT OF DAMAGE TO THRUST BEARINGS AND SYSTEM THEREFOR
ABSTRACT
The disclosure provides a method 100 for automatic damage detection and quantification in thrust bearings. The disclosure further includes an automatic assessment system 200 for damage detection and quantification in thrust bearings. The system includes an input unit 202, an image classification module 204, an image visualization module 218, a processing unit 220 and a display unit 222. The image classification module includes a CNN model configured to retain spatial information indicating damage in the thrust bearing and to determine at least one feature map indicating damage in the thrust bearings corresponding to the input data set received from the input unit. The system generates a feature location map by multiplying the feature maps by corresponding weights, and finally generates heat maps indicative of damage assessment in the unknown sample. The system and method of the present disclosure provide a training accuracy of at least 99.25% and a testing accuracy of at least 93%.
FIG. 1
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
COMPLETE SPECIFICATION
(See section 10 and rule 13)
TITLE: AUTOMATIC ASSESSMENT OF DAMAGE TO THRUST BEARINGS AND SYSTEM THEREFOR
INVENTORS
BHAUMIK, Shubrajit, Indian Citizen
Ramnagar 7, Agartala, Tripura- 799001
RANGARAJAN, Prasanna Kumar, Indian Citizen
No. 27/9, A block, Orchard Bhavan, VR Nagar, Korattur
Chennai– 600080, Tamilnadu
MUTHUKRISHNAN, Shree Prasad, Indian Citizen
Plot No 2/ 3/ 4, S1, B Block, Salim Luminous Square
Dhanalakshmi Nagar Phase 2, Iyyapanthangal, Chennai – 600056, Tamilnadu
GIRI, Jeevan Sendur, Indian Citizen
No1, Sivasakthi Nagar,
L.Kallipatti, Gobichettipalayam, Erode dist– 638452, Tamilnadu
GANAPATHIBHOTLA VENKATA, Krishna Kumar, Indian Citizen
Flat T - 7, Guru Kailash Apartment,
Pamban Swamigal Salai, Chitlapakkam, Chennai – 600064, Tamilnadu
BYREDDY LAKSHMI, Manohar Reddy, Indian Citizen
1/5, Nehru Nagar, Gospadu(M),
Nandyal, Nandyal Dist– 518593, Andhra Pradesh
APPLICANTS
Amrita Vishwa Vidyapeetham
Chennai Campus
337/1A, Vengal Village,
Thiruvallur Taluk & District – 601 103, Tamil Nadu, India.
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED
AUTOMATIC ASSESSMENT OF DAMAGE TO THRUST BEARINGS AND SYSTEM THEREFOR
CROSS-REFERENCES TO RELATED APPLICATIONS
None.
FIELD OF THE INVENTION
The present invention generally relates to visual inspection, and more particularly to a system and method for automated visual inspection to detect defects in thrust bearings.
BACKGROUND OF THE RELATED ART
Thrust ball bearings are indispensable elements in the design of various rotating machines, such as hydro-generators, aircraft, machine tools, automobiles and home appliances. Bearings incorporated in actual machines can be damaged by fatigue accumulation, inadequate lubrication or intrusion of foreign matter during operation, causing minute internal cracks to form or flaking of the bearing surface. This further causes peeling and deformation of the bearings, rendering them unusable and requiring replacement. Therefore, if normal operation of a machine is to be maintained, there is a strong need to detect damage as early as possible. Conventionally, vibration-based detection has been used to diagnose damage to thrust bearings. However, as evaluation of vibration can only detect problems at a terminal stage of operation, such as when flaking or peel expansion occurs, it does not provide sufficient information to permit damage prediction. In several studies, acoustic emission (AE) has been considered very effective in diagnosing the lifetimes of bearings by directly evaluating friction and wear.
In the framework of predictive maintenance, several studies have focused on 2D or 3D modeling of bearing fatigue. CN116610993 discloses a bearing fault diagnosis method and system wherein fault diagnosis of the bearing is realized through a CNN and the attribution of the fault diagnosis result is made known through visualization. WO2021225876A1 discloses a method for reducing usage of processing resources when training a plurality of neural networks to perform automated visual inspection for a plurality of respective defect categories. A physics-informed feature weighting method for bearing fault diagnostics (DOI: 10.1016/j.ymssp.2023.110171) discloses bearing fault diagnostics under different operating conditions and also details the deployment of a physics-informed convolutional neural network model on an Industrial Internet of Things (IIoT) device, where edge computing gives users a real-time evaluation of bearing health. However, the above-mentioned art uses complex data sets and analysis models, and requires specialized skilled labor for bearing fault diagnosis.
Hence, there has been a need in the art for a method and system that particularly tackles issues such as complex data sets and the requirement of skilled labor. In this regard, the method and system for automatic damage detection and quantification in thrust bearings according to the present invention substantially departs from the conventional concepts and designs of the prior art.
The invention proposes to mitigate some of the problems discussed above, as illustrated further with reference to the description and drawings.
SUMMARY OF THE INVENTION
According to one embodiment of the present subject matter, a method for automatic damage detection and quantification in thrust bearings is disclosed. The method includes the steps of providing an input data set in an input unit, wherein the input data set comprises a plurality of damage detection images classified as defective or non-defective areas of thrust bearings. In various embodiments, the method includes the step of constructing a convolutional neural network (CNN) model in an image classification module to determine at least one feature map indicating damage in the thrust bearings corresponding to the input data set received from the input unit. This is followed by initiating prediction for an unknown input sample using the CNN model to detect at least one feature map indicating damage in a thrust bearing based on the image input data set and generating a model output. The next step includes applying a Gradient-weighted Class Activation Mapping (Grad-CAM) on the model output received from the classification module in an image visualization module, based on gradient information obtained from each feature map (FM1, FM2, …, FM64). The next step of the method includes generating a feature location map by multiplying the feature maps by corresponding weights (W1, W2, …, W64). In various embodiments, the at least one feature map indicates pits, corrosion or fatigue damage in thrust bearings induced by friction or wear. The method further includes estimating the damage in the thrust bearing after computing the feature location map by a processing unit and generating heat maps (HM1, HM2, …, HM64) corresponding to the weights (W1, W2, …, W64). This step is followed by displaying the heat maps received from the image visualization module in a display unit, indicative of damage assessment in the unknown sample.
In various embodiments, the CNN model uses the ReLU activation function to provide non-linearity to the model. In various embodiments, an Adam optimizer with default parameters is used as the optimization algorithm for the CNN. In various embodiments, the method provides a training accuracy of at least 99.25% and a testing accuracy of at least 93%.
According to another embodiment of the present subject matter, an automatic assessment system for damage detection and quantification in thrust bearings is disclosed. In various embodiments, the system includes an input unit configured to store and provide a plurality of damage detection images classified as defective or non-defective areas of thrust bearings. In various embodiments, the system further includes an image classification module having a CNN model configured to retain spatial information indicating damage in the thrust bearing. In various embodiments, the system further includes an image visualization module configured to apply a Gradient-weighted Class Activation Mapping (Grad-CAM) and superimpose Grad-CAM on the model output to generate a feature location map. The system further includes a processing unit configured to generate heat maps based on the feature location maps corresponding to the weights. The system also includes a display unit adapted to display the heat maps received from the processing unit.
In various embodiments, the CNN model includes an input layer having a shape of 180×180×3, configured to receive input images of size 180×180 pixels with 3 colour channels. The CNN further includes a series of convolutional layers with intermediate max pooling layers configured to provide at least one feature map and to highlight informative regions of the feature maps, reducing dimensionality to generate a 3D model output. The CNN further includes a dropout layer configured to prevent overfitting of the 3D model output. The CNN also includes a flattening layer configured to flatten the 3D model output into a one-dimensional array.
In various embodiments, the plurality of damage detection images comprise a width of at least 960 px or a height of at least 1080 px or a color depth of at least 8-bit or a resolution of at least 96 dpi, both vertically and horizontally. In various embodiments, the dropout layer has a dropout rate of 0.5.
In various embodiments, the series of convolutional layers with intermediate max pooling layers includes a first convolutional layer having 16 filters with an output size of 180×180 pixels, followed by a first max pooling layer with a pool size of (2, 2) adapted to provide an output size of 90×90 pixels. The series further includes a second convolutional layer having 32 filters with an output size of 90×90 pixels, followed by a second max pooling layer adapted to provide an output size of 45×45 pixels. The series includes a third convolutional layer having 64 filters with an output size of 45×45×64, followed by a third max pooling layer configured to provide a final output of 23×23 pixels.
BRIEF DESCRIPTION OF DRAWINGS
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
FIG. 1: represents a method for automatic assessment of damage in thrust bearings.
FIG. 2: represents a block diagram of an automatic assessment system for analyzing and determining the damage in thrust bearings.
FIG. 3: shows a detailed network architecture of the CNN, according to an embodiment of the present subject matter.
FIG. 4: illustrates a process flow of the method of the invention with the automatic assessment system.
FIG. 5: shows the custom dataset of thrust bearing pictographs generated by inducing fretting failure as per the ASTM D 4170 standard.
FIG. 6A-6D: show visual feature mapping in the context of percentage of damage to the thrust bearing using Grad-CAM.
FIG. 7: shows a graphical representation of the training accuracy in each epoch during training.
Referring to the figures, like numbers indicate like parts throughout the various views.
DETAILED DESCRIPTION OF THE EMBODIMENTS
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt to a particular situation or material to the teachings of the invention without departing from its scope.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of "a", "an", and "the" include plural references. The meaning of "in" includes "in" and "on." Referring to the drawings, like numbers indicate like parts throughout the views. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
The present subject matter discloses a method for automatic damage detection and quantification in thrust bearings and system thereof, as further disclosed with reference to the drawings.
The method 100 for automatic damage detection and quantification in thrust bearings is illustrated in FIG. 1, according to embodiments of the subject matter. The method 100 includes the step 102 of providing an input data set in an input unit. The input data set comprises a plurality of damage detection images. In various embodiments, the plurality of images include images classified as defective or non-defective thrust bearing surfaces. Step 102 is followed by constructing a convolutional neural network (CNN) model in an image classification module. This step is to determine at least one feature map indicating damage in the thrust bearings corresponding to the input data set received from the input unit, in step 104.
The next step 106 includes initiating prediction for an unknown input sample using the CNN model to detect at least one feature map indicating damage in a thrust bearing. The prediction is based on the image input data set and generates a model output. This is followed by step 108 of applying a Gradient-weighted Class Activation Mapping (Grad-CAM) on the model output received from the classification module in an image visualization module. The Grad-CAM is applied based on gradient information obtained from each feature map FM1, FM2, …, FM64. After that, a feature location map is generated by multiplying the feature maps by corresponding weights W1, W2, …, W64, in step 110. This is followed by step 112 of estimating the damage in the thrust bearing after computing the feature location map by a processing unit and generating heat maps HM1, HM2, …, HM64 corresponding to the weights W1, W2, …, W64. The final step 114 includes displaying the heat maps received from the image visualization module in a display unit, indicative of damage assessment in the unknown sample.
In various embodiments, the dataset used in step 102 contains a plurality of 2D pictographs with defective and non-defective class labels. In one embodiment, the plurality of damage detection images include real-time 2-dimensional pictographs of bearings. In various embodiments, the training dataset in step 102 is configured to enable generating feature maps in step 104 corresponding to a variety of damage cases such as pits, corrosion or fatigue damage in thrust bearings induced by friction or wear.
In various embodiments, the Convolutional Neural Network (CNN) model in step 106 may be used to detect the nature of damage in the bearing. In various embodiments, the CNN model uses the ReLU activation function to provide non-linearity to the model. In various embodiments, an Adam optimizer with default parameters is used as the optimization algorithm for the CNN. In various embodiments, Grad-CAM in step 108 may help to visualize the areas where the damage is present.
An automatic assessment system 200 for damage detection and quantification in thrust bearings is illustrated in FIG. 2, according to one embodiment of the present subject matter. In various embodiments, the system includes an input unit 202 configured to store and provide a plurality of damage detection images classified as defective or non-defective areas of thrust bearings. The system includes an image classification module 204 having a CNN model configured to retain spatial information indicating damage in the thrust bearing. The system includes an image visualization module 218 configured to apply a Gradient-weighted Class Activation Mapping (Grad-CAM). The image visualization module 218 is adapted to superimpose the Grad-CAM on the model output to generate a feature location map. The system further includes fully connected layers 217 and backpropagation 219 to generate corresponding weights based on at least one feature map. In various embodiments, the system includes a processing unit 220 configured to generate heat maps based on the feature location maps corresponding to the weights. In various embodiments, the system further includes a display unit 222 adapted to display the heat maps received from the processing unit.
With reference to FIG. 3, a detailed network architecture of the CNN is illustrated. In various embodiments, the CNN model comprises an input layer 206 having a shape of 180×180×3, configured to receive input images of size 180×180 pixels with 3 colour channels. The CNN model further includes a series of convolutional layers (208-1, 208-2, 208-3) with intermediate max pooling layers (210-1, 210-2, 210-3) configured to provide at least one feature map FM-1, FM-2, …, FM-64 and to highlight informative regions of the feature maps, reducing dimensionality to generate a 3D model output. The CNN model includes a dropout layer 212 configured to prevent overfitting of the 3D model output. The CNN model includes a flattening layer 214 configured to flatten the 3D model output into a one-dimensional array.
In various embodiments, the CNN architecture is a sequential model with an input layer of shape (180, 180, 3), indicating input images of size 180×180 pixels with 3 colour channels. The first convolutional layer 208-1 includes 16 filters with an output shape of 180×180 pixels. The first convolutional layer 208-1 is followed by a max-pooling layer 210-1 with a pool size of (2, 2), which results in an output size of 90×90 pixels. In various embodiments, the pooling operation is effective in highlighting informative regions while reducing dimensionality. In various embodiments, the next convolutional layer 208-2 comes with 32 filters, followed by the max-pooling layer 210-2, again giving an output shape of 45×45 pixels. In various embodiments, the network deepens with 64 filters in the next convolutional layer 208-3, with an output shape of (45, 45, 64). In various embodiments, a max-pooling layer 210-3 of the same pool size is applied, resulting in a final output size of 23×23 pixels. In various embodiments, a dropout layer 212 with a dropout rate of 0.5 follows, to prevent overfitting. In various embodiments, using a flattening layer 214, the 3D output is flattened into a 1D array. In various embodiments, the 1D array is passed to a first dense layer 216-1 followed by a second dense layer 216-2. In various embodiments, the first and second dense layers may be used to classify the image based on the output from the convolutional layers.
In various embodiments, in the convolutional layers (208-1, 208-2, 208-3), the activation function used is ReLU, which introduces non-linearity to the model. In various embodiments, the Adam optimizer with default parameters is used as the optimization algorithm for the CNN. In various embodiments, the spatial information is retained by the convolutional layers. In various embodiments, the dropout layer has a dropout rate of 0.5.
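By way of a non-limiting illustration only, the architecture described above may be sketched in Keras (TensorFlow) as follows. The framework choice, the "same" padding on the final pooling layer (which yields the stated 23×23 output from a 45×45 input), and the single-unit sigmoid output head with binary cross-entropy loss are assumptions, as the specification does not state them.

```python
# A minimal sketch of the described CNN, assuming Keras/TensorFlow.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(180, 180, 3)),                              # 180x180 RGB input
    layers.Conv2D(16, (3, 3), padding="same", activation="relu"),   # -> 180x180x16
    layers.MaxPooling2D((2, 2)),                                    # -> 90x90x16
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),   # -> 90x90x32
    layers.MaxPooling2D((2, 2)),                                    # -> 45x45x32
    layers.Conv2D(64, (3, 3), padding="same", activation="relu",
                  name="last_conv"),                                # -> 45x45x64
    layers.MaxPooling2D((2, 2), padding="same"),                    # -> 23x23x64 (ceil(45/2) = 23)
    layers.Dropout(0.5),                                            # dropout rate 0.5 against overfitting
    layers.Flatten(),                                               # 3D output -> 1D array
    layers.Dense(128, activation="relu"),                           # first dense layer (216-1)
    layers.Dense(64, activation="relu"),                            # second dense layer (216-2)
    layers.Dense(1, activation="sigmoid"),                          # assumed binary head: defective / non-defective
])

# Adam with default parameters, as stated; binary cross-entropy is an assumption.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

The layer name "last_conv" is introduced here only so that the Grad-CAM sketch below can refer to the final convolutional layer.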
A process flow of the method of the invention with the automatic assessment system is illustrated in FIG. 4. In various embodiments, Grad-CAM uses the gradient information of the last convolutional layer to understand the importance of each feature map the model has used to make a classification decision. In various embodiments, the feature location map is obtained by multiplying the feature maps with their corresponding weights. In various embodiments, Grad-CAM visually helps to determine the important regions for classification. In one embodiment, the region for classification is the pit in the bearings.
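By way of a non-limiting illustration, the Grad-CAM step described above may be sketched as follows, reusing the illustrative model above. Global-average pooling of the gradients to obtain one weight per feature map, followed by a ReLU on the weighted sum, is the standard Grad-CAM formulation and is assumed here; the layer name "last_conv" and the single-output head are assumptions carried over from the model sketch.

```python
# A sketch of Grad-CAM: gradients of the score w.r.t. the last conv layer
# weight the 64 feature maps (W1..W64), which are combined into a coarse
# feature location map.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name="last_conv"):
    """Return an HxW feature location map in [0, 1] for one 180x180x3 image."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_name).output, model.output],
    )
    x = tf.cast(image[np.newaxis, ...], tf.float32)      # add batch dimension
    with tf.GradientTape() as tape:
        feature_maps, preds = grad_model(x)              # FM1..FM64 and model output
        score = preds[:, 0]                              # "defective" score (assumed head)
    grads = tape.gradient(score, feature_maps)           # gradient info per feature map
    weights = tf.reduce_mean(grads, axis=(1, 2))         # W1..W64: one weight per map
    cam = tf.nn.relu(tf.einsum("bijc,bc->bij", feature_maps, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalised to [0, 1]
```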
The present disclosure uses computer vision to accurately detect and quantify the extent of damage in thrust bearings, specifically focusing on identifying and analysing damage on bearing raceways. The method of the present invention makes use of 2D pictograph processing techniques, which is more convenient and does not require skilled labour during operation. The method of the present invention makes use of deep learning algorithms as a reliable and automated means of assessing the percentage of damage present on bearing raceways; in particular, the pitting and the corrosion due to rust, which occur during operation as a result of friction and wear, are computed. The present invention is significant for industries reliant on machinery and rotating equipment, where the condition of bearings directly impacts operational efficiency, reliability, and maintenance costs. The method of the present invention may help manufacturers and maintenance personnel to swiftly identify deteriorating thrust bearings, assess the severity of damage, and make decisions regarding maintenance schedules, component replacements, and operational adjustments. The implementation of the system of the present invention enhances overall equipment effectiveness, reduces the risk of unexpected failures, minimizes downtime, and optimizes maintenance strategies, thereby contributing to improved productivity, safety, and cost-effectiveness across various industrial sectors.
EXAMPLES:
Example-1: Loading Of Data Set:
Real-time two-dimensional pictographs of damaged and non-damaged thrust bearings were used to predict the percentage of defect. The images depicted the fretting and wear induced by friction during tests of various lubricants. A custom dataset generated by inducing fretting failure as per the ASTM D 4170 standard, as shown in FIG. 5, was used for the model. The test continued for 22 hrs at ambient temperature. Different types of greases and pastes were used, which resulted in different types of failures in the thrust bearings. All experimental conditions were strictly followed as per the ASTM D 4170 standard. A total of 257 pictographs were considered for this work, of which 170 pictographs were of defective bearings and 87 were of non-defective bearings. 80% of the pictographs were used for training. Hence, 206 images were used for training (Table-1) and 51 images were used for testing (Table-2). Each of the images had a width of 960 px and a height of 1080 px, a color depth of 8-bit (256 colors per channel), and a resolution of 96 dpi, both vertically and horizontally. The images are in JPG format and were resized to 180×180 before being used for training.
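By way of a non-limiting illustration, a dataset of this kind may be loaded and split as sketched below; the directory layout (one sub-folder per class), folder names, seed and batch size are illustrative assumptions, while the 180×180 resize and the 80/20 split are from the example.

```python
# A sketch of loading the defective / non-defective pictographs, assuming
# a hypothetical "bearing_images/" root with one sub-folder per class.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "bearing_images/",          # hypothetical root: defective/ and non_defective/
    validation_split=0.2,       # 80% training / 20% testing, as in the example
    subset="training",
    seed=42,                    # illustrative seed for a reproducible split
    image_size=(180, 180),      # JPGs resized from 960x1080 to 180x180
    batch_size=32,              # illustrative batch size
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "bearing_images/",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(180, 180),
    batch_size=32,
)
```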
Table-1: Data of 206 Images (sample number) with % Defect Used for Training
Sample Number, Percentage Defect (Defective) | Sample Number, Percentage Defect (Defective) | Sample Number, Percentage Defect (Non-Defective)
Image 1 62.83 Image 70 60.64 Image 168 0
Image 2 27.94 Image 71 87.65 Image 169 0
Image 3 49.43 Image 72 99.91 Image 170 0
Image 4 10.54 Image 73 74.66 Image 171 0
Image 5 84.43 Image 74 74.66 Image 172 0
Image 6 21.88 Image 75 99.91 Image 173 0
Image 7 25.66 Image 76 18.38 Image 174 0
Image 8 27.88 Image 77 18.52 Image 175 0
Image 9 35.67 Image 78 46.78 Image 176 0
Image 10 7.47 Image 79 77.52 Image 177 0
Image 11 18.06 Image 80 71.52 Image 178 0
Image 12 2.94 Image 81 47.6 Image 179 0
Image 13 4.43 Image 82 99.91 Image 180 0
Image 14 21.09 Image 83 99.91 Image 181 0
Image 15 15.7 Image 84 99.5 Image 182 0
Image 16 80.91 Image 85 54.39 Image 183 0
Image 17 23.97 Image 86 18.42 Image 184 0
Image 18 27.78 Image 87 89.76 Image 185 0
Image 19 62.49 Image 88 99.91 Image 186 0
Image 20 64.24 Image 89 83.67 Image 187 0
Image 21 72.28 Image 90 99.91 Image 188 0
Image 22 99.91 Image 91 99.91 Image 189 0
Image 23 88.29 Image 92 38.42 Image 190 0
Image 24 32.51 Image 93 6.47 Image 191 0
Image 25 51.97 Image 94 99.91 Image 192 0
Image 26 52.63 Image 95 99.91 Image 193 0
Image 27 56.62 Image 96 96.13 Image 194 0
Image 28 62.1 Image 97 99.91 Image 195 0
Image 29 74.39 Image 98 99.91 Image 196 0
Image 30 62.1 Image 99 99.91 Image 197 0
Image 31 61.72 Image 100 99.91 Image 198 0
Image 32 28.49 Image 101 99.91 Image 199 0
Image 33 31.21 Image 102 75.75 Image 200 0
Image 34 64.93 Image 103 99.91 Image 201 0
Image 35 13.93 Image 104 49.85 Image 202 0
Image 36 13.86 Image 105 99.91 Image 203 0
Image 37 87.65 Image 106 99.91 Image 204 0
Image 38 69.91 Image 107 49.85 Image 205 0
Image 39 45.56 Image 108 16.77 Image 206 0
Image 40 96.24 Image 109 99.91 Image 207 0
Image 41 99.91 Image 110 47.43 Image 208 0
Image 42 99.91 Image 111 99.91 Image 209 0
Image 43 44.74 Image 112 41.74 Image 210 0
Image 44 93.14 Image 113 19.79 Image 211 0
Image 45 57.49 Image 114 41.74 Image 212 0
Image 46 68.09 Image 115 9.92 Image 213 0
Image 47 43.85 Image 116 99.91 Image 214 0
Image 48 62.17 Image 117 92.05 Image 215 0
Image 49 34.47 Image 118 99.91 Image 216 0
Image 50 69.89 Image 119 25.35 Image 217 0
Image 51 42.58 Image 120 52.61 Image 218 0
Image 52 20.6 Image 121 63.29 Image 219 0
Image 53 49.69 Image 122 71.52 Image 220 0
Image 54 25.72 Image 123 51.43 Image 221 0
Image 55 9.22 Image 124 25.29 Image 222 0
Image 56 29.53 Image 125 67.84 Image 223 0
Image 57 29.72 Image 126 77.89 Image 224 0
Image 58 28.66 Image 127 99.91 Image 225 0
Image 59 36.65 Image 128 80.65 Image 226 0
Image 60 47.14 Image 129 5.96 Image 227 0
Image 61 6.47 Image 130 21.96 Image 228 0
Image 62 36.5 Image 131 42.11 Image 229 0
Image 63 48.52 Image 132 17.16 Image 230 0
Image 64 99.91 Image 133 47.61 Image 231 0
Image 65 99.91 Image 134 79.07 Image 232 0
Image 66 68.75 Image 135 85.13 Image 233 0
Image 67 85.85 Image 136 58.63 Image 234 0
Image 68 88 Image 137 60.71 Image 235 0
Image 69 99.91 Image 138 99.91
Table-2: Images Used for Testing, with % Defect Actual vs. Predicted
Sample Number, Actual Defect (%), Predicted Defect (%) (Defective) | Sample Number, Actual Defect (%), Predicted Defect (%) (Non-Defective)
Image 139 67.68 64.54 Image 236 0 0
Image 140 68.83 61.59 Image 237 0 0
Image 141 38.75 45.56 Image 238 0 0
Image 142 51.2 45.92 Image 239 0 0
Image 143 50.72 42.28 Image 240 0 0
Image 144 41.22 36.65 Image 241 0 0
Image 145 24.53 27.75 Image 242 0 0
Image 146 99.91 99.91 Image 243 0 0
Image 147 99.91 99.91 Image 244 0 0
Image 148 92.22 92 Image 245 0 0
Image 149 63.93 64.2 Image 246 0 0
Image 150 99.91 99.91 Image 247 0 0
Image 151 99.91 99.91 Image 248 0 0
Image 152 99.91 99.91 Image 249 0 0
Image 153 99.91 99.91 Image 250 0 0
Image 154 99.91 99.91 Image 251 0 0
Image 155 99.91 99.91 Image 252 0 0
Image 156 99.91 99.91 Image 253 0 0
Image 157 64.25 65.68 Image 254 0 0
Image 158 51.49 49.21
Image 159 29.71 28.05
Image 160 38.62 40.87
Image 161 33.25 33.57
Image 162 33.25 33.24
Image 163 95.61 95.61
Image 164 50.58 54.44
Image 165 72.39 72.4
Image 166 80.14 72.4
Image 167 19.96 13.35
Example-2: Development Of Convolutional Neural Network (CNN) Model For Image Processing:
To detect the damage in the thrust bearing, a Convolutional Neural Network (CNN) model was used. Further, Grad-CAM helped to visualize the areas where the damage was present. The CNN classified the images, and Grad-CAM was applied on the last convolution layer. This analysed the activations of the layer and highlighted the locations where a non-defective and a defective bearing differed, which were the defects, or pits due to fretting.
To determine the percentage defect, all pixels in the colour range red to yellow, which highlight the defects in a Grad-CAM, were considered. Approximately 19.4% of the total image covered the raceway, which was taken as the standard threshold. When the colours on the raceway were fully red to yellow, the bearing was determined to be "fully defective". For values lower than the threshold value, the formula below was used to determine the percentage of defect with respect to the threshold:
Defect % = (Attained %) / (Threshold %) × 100
wherein, when the red-to-yellow pixels cover 12.2% of the whole Grad-CAM image, the calculation of defects proceeds as:
Defect % = 12.2 / 19.4 × 100 = 62.886%
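A non-limiting sketch of this defect-percentage rule is given below; the specific pixel-value bounds used to pick out red-to-yellow pixels are illustrative assumptions, as the specification does not state them.

```python
# A sketch of the defect-percentage computation: the fraction of red-to-yellow
# Grad-CAM pixels is compared against the 19.4% raceway threshold.
import numpy as np

RACEWAY_THRESHOLD = 19.4  # % of the total image covered by the raceway

def defect_percentage(heatmap_rgb: np.ndarray) -> float:
    """heatmap_rgb: HxWx3 uint8 rendering of the Grad-CAM heat map."""
    r = heatmap_rgb[..., 0].astype(float)
    b = heatmap_rgb[..., 2].astype(float)
    # Red-to-yellow pixels: strong red channel, suppressed blue channel
    # (assumed colour rule for a blue-to-red heat map rendering).
    hot = (r > 150) & (b < 100)
    attained = 100.0 * hot.sum() / hot.size           # attained % of whole image
    return min(100.0, attained / RACEWAY_THRESHOLD * 100.0)

# e.g. attained = 12.2% -> 12.2 / 19.4 * 100 ≈ 62.886% defect, as in the example
```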
The Convolutional Neural Network (CNN) model used for image processing included the following layers:
Layers Parameter Output Size
Convolution Layer 1 Kernel: 3x3 (180,180)
MaxPooling Layer 1 Kernel: 2x2 (90,90)
Convolution Layer 2 Kernel: 3x3 (90,90)
MaxPooling Layer 2 Kernel: 2x2 (45,45)
Convolution Layer 3 Kernel: 3x3 (45,45)
MaxPooling Layer 3 Kernel: 2x2 (23,23)
Fully Connected Layer 1 128 128
Fully Connected Layer 2 64 64
The output included four images corresponding to an input image (original resolution), the model input (lower resolution), a Gradient-weighted Class Activation Map (Grad-CAM) and a superimposed image of the Grad-CAM on the model input image, and a web application was constructed.
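By way of a non-limiting illustration, the superimposed image may be produced as sketched below, reusing the map returned by the grad_cam() sketch above; the "jet" colormap and the blending weight are illustrative assumptions.

```python
# A sketch of superimposing the Grad-CAM map on the model input image.
import numpy as np
import matplotlib.cm as cm
from PIL import Image

def superimpose(input_image: np.ndarray, cam: np.ndarray, alpha: float = 0.4) -> Image.Image:
    """input_image: 180x180x3 uint8; cam: HxW map in [0, 1] from grad_cam()."""
    cam_img = Image.fromarray(np.uint8(255 * cam)).resize(input_image.shape[1::-1])
    heat = cm.jet(np.asarray(cam_img) / 255.0)[..., :3]       # jet colormap: blue -> red
    blended = (1 - alpha) * input_image + alpha * 255 * heat  # weighted overlay
    return Image.fromarray(np.uint8(blended))
```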
Example-3: Salient Feature Mapping Of 'Defective' And 'Non-Defective' Thrust Bearing Using Grad-CAM Thermal Images:
Gradient-weighted Class Activation Mapping (Grad-CAM) was used as a class-discriminative localization technique that generates visual explanations for CNN-based networks without requiring architectural changes or re-training. Grad-CAM used the gradients of the 'defective' class in the classification network, flowing into the final convolutional layer, to produce a coarse localization map highlighting the important regions in the image for predicting the concept.
A web application was developed that included a welcome page and options to select and submit an image. A textual output of whether the uploaded sample is "Defective" or "Non-Defective" was obtained along with the Grad-CAM. FIG. 6 provides visual feature mapping in the context of percentage of damage to the thrust bearing using Grad-CAM.
Scenario 1: FIG. 6A represents a sample environment, namely scenario-1, with the bearing condition predicted as 99.91% defective.
Scenario 2: FIG. 6B represents a sample environment, namely scenario-2, with the bearing condition predicted as 24.76% defective.
Scenario 3: FIG. 6C represents a sample environment, namely scenario-3, with the bearing condition predicted as 63.90% defective.
Scenario 4: FIG. 6D represents a sample environment, namely scenario-4, with a prediction of a "non-defective" condition of the bearing.
The model of the present invention achieved a training accuracy of 99.25%. FIG. 7 provides a graphical representation of the training accuracy in each epoch during training; an accuracy of 93% was obtained after 30 epochs. The method and system of the present invention achieved a testing accuracy of 93%. This outcome reflects the robustness and effectiveness of the model in capturing intricate patterns and features within the dataset. The result implies that the training data fits the model well and that the model can make accurate predictions on unseen data too. The result suggests that the model is reliable and fit for real-world applications.
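By way of a non-limiting illustration, training and evaluation consistent with the description may be sketched as follows, reusing the illustrative model and datasets above; the 30-epoch count is from the description, while all other settings are carried-over assumptions.

```python
# A sketch of training for 30 epochs and reporting training/testing accuracy,
# reusing the illustrative model, train_ds and test_ds sketched earlier.
history = model.fit(train_ds, validation_data=test_ds, epochs=30)
print(f"final training accuracy: {history.history['accuracy'][-1]:.4f}")

loss, acc = model.evaluate(test_ds)
print(f"testing accuracy: {acc:.4f}")
```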
Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed herein. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the system and method of the present invention disclosed herein without departing from the spirit and scope of the invention as described here and as set forth in the claims attached herewith.
Claims:
WE CLAIM:
1. A method (100) for automatic damage detection and quantification in thrust bearings, the method comprising the steps of:
providing an input data set (102) in an input unit, wherein the input data set comprises a plurality of damage detection images classified as defective or non-defective areas of thrust bearings;
constructing a convolutional neural network (CNN) model (104) in an image classification module to determine at least one feature map indicating damage in the thrust bearings corresponding to the input data set received from the input unit;
initiating prediction for an unknown input sample (106) using the CNN model to detect at least one feature map indicating damage in a thrust bearing based on the image input data set and generating a model output;
applying a Gradient-weighted Class Activation Mapping (Grad-CAM) (108) on the model output received from the classification module in an image visualization module based on gradient information obtained from each feature map (FM1, FM2, …, FM64);
generating a feature location map (110) by multiplying the feature maps using corresponding weights (W1, W2, …, W64);
estimating the damage (112) in the thrust bearing after computing the feature location map by a processing unit and generating heat maps (HM1, HM2, …, HM64) corresponding to the weights (W1, W2, …, W64); and
displaying the heat maps (114) received from the image visualization module in a display unit indicative of damage assessment in the unknown sample.
2. The method (100) as claimed in claim 1, wherein the at least one feature map in step (104) comprises pits, corrosion or fatigue damage in thrust bearings induced by friction or wear.
3. The method (100) as claimed in claim 1, wherein the CNN model uses an activation function ReLU to provide non-linearity to the model.
4. The method (100) as claimed in claim 1, wherein an Adam optimizer with default parameters is used as the optimization algorithm for the CNN.
5. The method (100) as claimed in claim 1, wherein the method provides a training accuracy of at least 99.25% and a testing accuracy of at least 93%.
6. An automatic assessment system (200) for damage detection and quantification in thrust bearings, the system comprising:
an input unit (202) configured to store and provide a plurality of damage detection images classified as defective or non-defective areas of thrust bearings;
an image classification module (204) having a CNN model configured to retain spatial information indicating damage in the thrust bearing, wherein the CNN model comprises:
an input layer (206) having a shape of 180×180×3 configured to receive input images of size 180×180 pixels with 3 colour channels;
a series of convolutional layers (208) with intermediate max pooling layers (210) configured to provide at least one feature map and to highlight informative regions of the feature maps to reduce dimensionality to generate a 3D model output;
a dropout layer (212) configured to prevent overfitting of the 3D model output; and
a flattening layer (214) configured to flatten the 3D model output into a one-dimensional array;
an image visualization module (218) configured to apply a Gradient-weighted Class Activation Mapping (Grad-CAM) and superimpose the Grad-CAM on the model output to generate a feature location map;
a processing unit (220) configured to generate heat maps based on the feature location maps corresponding to the weights; and
a display unit (222) adapted to display the heat maps received from the processing unit.
7. The system as claimed in claim 6, wherein the plurality of damage detection images comprise a width of at least 960 px or a height of at least 1080 px or a color depth of at least 8-bit or a resolution of at least 96 dpi, both vertically and horizontally.
8. The system as claimed in claim 6, wherein the series of convolutional layers (208) with intermediate max pooling layers (210) comprises:
a first convolutional layer (208-1) having 16 filters with an output size of 180×180 pixels;
a first max pooling layer (210-1) with a pool size of (2, 2) adapted to provide an output size of 90×90 pixels;
a second convolutional layer (208-2) having 32 filters with an output size of 90×90 pixels;
a second max pooling layer (210-2) adapted to provide an output size of 45×45 pixels;
a third convolutional layer (208-3) having 64 filters with an output size of 45×45×64; and
a third max pooling layer (210-3) configured to provide a final output of 23×23 pixels.
9. The system as claimed in claim 6, wherein the dropout layer has a dropout rate of 0.5.
Sd.- Dr V. SHANKAR IN/PA-1733
For and on behalf of the Applicants