Abstract: A SYSTEM FOR DEMODULATING FRINGE ORDER FROM ISOCHROMATIC IMAGES ABSTRACT A system for demodulating fringe order in isochromatic images is disclosed. The system 100 includes a data acquisition module 102 for obtaining photo-elastic images with their corresponding fringe patterns, a processing module 104 for predicting a fringe order pattern of the received images, and a loss function module 106 for maintaining continuity of the fringe order generated from the cyclic U-Net model. The received photo-elastic images are provided to the processing module 104, which includes a cyclic U-Net model 108 for predicting the fringe order pattern and the input that has generated a corresponding fringe order pattern. The cyclic U-Net model 108 includes two U-Net components that create a feedback loop. The system can predict fringe order patterns and validate the accuracy of the predictions. The system optimizes both predictive accuracy and fringe pattern coherence, leading to superior image quality and fidelity. FIG. 1
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
COMPLETE SPECIFICATION
(See section 10 and rule 13)
TITLE
A SYSTEM FOR DEMODULATING FRINGE ORDER FROM ISOCHROMATIC IMAGES
INVENTORS:
MOHAN, Vishnu M.S. – Indian Citizen
Pulinthitta Chandravilasom, Poruvazhy, Chathakulam PO
Kollam, Kerala 690520, India
MOHAN PRASANNA, Hariprasad – Indian Citizen
Prasadam, Panangad PO, Balussery, Nirmallore
Kozhikode Kerala 673612, India
MENON, Vivek – Indian Citizen
10-B, Link Heights, Panampilly Nagar
Kochi 682036, India
APPLICANT
AMRITA VISHWA VIDYAPEETHAM
Amritapuri Campus
Amritapuri, Clappana PO
Kollam 690 525
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED:
A SYSTEM FOR DEMODULATING FRINGE ORDER FROM ISOCHROMATIC IMAGES
CROSS-REFERENCES TO RELATED APPLICATION
[0001] None.
FIELD OF INVENTION
[0002] The present disclosure relates to digital photo-elasticity, more particularly, it relates to fringe pattern demodulation for isochromatic images.
DESCRIPTION OF THE RELATED ART
[0003] Photoelasticity is a whole-field optical technique based on the principle of stress/strain-induced birefringence, which gives the principal stress difference (isochromatics) and the orientation of principal stresses (isoclinics) in the form of fringe contours. With newer technologies and innovations in image processing hardware and computing facilities, digital photoelastic techniques have revolutionized the experimental stress analysis field in the broad areas of engineering and biomedical applications. Several algorithms have been reported in the field of digital photoelasticity for the evaluation of isochromatic and isoclinic parameters. If the focus is to find the isoclinic and isochromatic parameters together with high accuracy, one has to employ a phase-shifting technique. However, the need for multiple image acquisitions is a constraint, and dynamic variations in the isochromatic fringe data would be difficult to analyze using conventional image acquisition techniques. A single-acquisition-based technique like Twelve Fringe Photoelasticity (TFP) is an established method for determining total fringe order using the color domain information. This method is based on the comparison of the Red (R), Green (G), and Blue (B) intensities of the application image of the model with those of a calibration table generated from a specimen having a known fringe order variation. The fringe order obtained using these methods in the first stage is prone to errors due to the repetition of colors.
[0004] The color matching algorithm is one of the most widely used algorithms for estimating the fringe order value for a particular pixel. An error table is generated using least squares for each row in the calibration table. For a particular pixel in the model domain, the fringe order corresponding to the minimum error in the calibration table is assigned. However, in this method, false estimations of fringe orders could arise at some locations due to the repetition of colors. This is solved by refining the fringe order further by imposing fringe order continuity. This method uses pixel information along with neighboring fringe orders in the model domain. Fringe order refinement is carried out by the modified error calculation. Most fringe order analyses using a single isochromatic image rely on color information, necessitating calibration of the color information with known fringe order variations. Since a universal calibration dataset may not be available for digital photoelasticity (DP) analysis, calibration tables must be generated based on the specific material used for the problem at hand. It is also to be noted that conventional approaches often require multiple stages of fringe order refinement due to the repetitive nature of color information in isochromatics. This essentially demands a significant amount of time and expertise from the end users. An approach with machine learning concepts would be ideal to relieve such requirements. Machine learning approaches have shown tremendous potential in the field of image processing. However, these techniques have not been applied to their full potential for problems pertaining to digital photoelastic analysis for evaluating fringe order data.
[0005] Various publications have tried to address the problems encountered when maintaining continuity of fringe order. Chinese publication 116817785A discloses a high-reflectivity surface dynamic three-dimensional measurement method. Chinese publication 116772743A discloses a single-frame dual-wavelength interference phase demodulation method. Chinese publication 116757974A discloses a method for separating aliasing fringes of a transparent optical element. Bo Tao et al. in "Photoelastic Stress Field Recovery Using Deep Convolutional Neural Network" discuss a deep convolutional neural network based on the encoder-decoder structure. In "The Accuracy improvement of demodulating the stress field with StressUnet in photoelasticity", Zhao et al. mention a framework to enhance prediction accuracy of images. Sun et al. discuss a novel Circle-U-Net architecture in "Circle-U-Net: An Efficient Architecture for Semantic Segmentation".
[0006] Presently, there is a requirement for enhancing the fringe order demodulation from isochromatic images.
SUMMARY OF THE INVENTION
[0007] The present subject matter relates to demodulating fringe order from isochromatic images.
[0008] In one embodiment of the present subject matter, a system for demodulating fringe order from isochromatic images is disclosed. The system includes a data acquisition module for obtaining photo-elastic images with their corresponding fringe patterns and a processing module for predicting a fringe order pattern. The processing module includes a cyclic U-Net model for predicting the fringe order pattern and the input that has generated a corresponding fringe order pattern. The cyclic U-Net model is configured to predict the fringe order pattern of the received photo-elastic images at a first U-Net component and predict an isochromatic pattern that produced the received fringe order image, based on the predicted fringe order pattern, at a second U-Net component. The system also includes a loss function module for maintaining continuity of the fringe order generated from the cyclic U-Net model.
[0009] In various embodiments, the two U-Net components of the cyclic U-Net model create a feedback loop.
[0010] In various embodiments, each of the two U-Net components comprises an encoder and a decoder.
[0011] In various embodiments, the encoder captures context and extracts high-level features and the decoder up-samples and concatenates features for precise localization and information preservation.
[0012] In various embodiments, the loss function module (106) includes a Continuity-Imposed Hybrid Cyclic Loss (CHCL) function.
[0013] In various embodiments, the Continuity-Imposed Hybrid Cyclic Loss (CHCL) function comprises a combination of Structural Similarity Index (SSIM), Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and a continuity loss.
[0014] In various embodiments, the Mean Squared Error (MSE) is utilized between the received isochromatic image and the predicted isochromatic image from the second U-Net component.
[0015] In various embodiments, the continuity loss ensures consistency of the fringe order throughout the predicted image.
[0016] In various embodiments, Structural Similarity Index (SSIM) augments the cyclic U-Net model in maintaining structural similarity of the fringe order prediction and the target ground truth fringe order pattern.
[0017] In various embodiments, Peak Signal-to-Noise Ratio (PSNR) is used for minimizing the difference between the predicted fringe order pattern and the target fringe order pattern.
[0018] This and other aspects are described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The invention has other advantages and features, which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
[0020] FIG. 1 illustrates the system for demodulating fringe order in isochromatic images, according to an embodiment of the present subject matter.
[0021] FIGS. 2A and 2B illustrate the cyclic U-Net model and the cyclic model with isochromatic and fringe order prediction branches respectively, according to an embodiment of the present subject matter.
[0022] FIG. 3 illustrates the loss function module with CHCL, according to an embodiment of the present subject matter.
[0023] FIGS. 4A and 4B illustrate the fringe order values for a specific isochromatic image, according to an embodiment of the present subject matter.
[0024] FIGS. 5A and 5B illustrate the continuity plot of the image, according to an embodiment of the present subject matter.
[0025] FIGS. 6A, 6B and 6C illustrate the isochromatics phasemap for the conventional method, FringeNet, and the error map, respectively, according to an embodiment of the present subject matter.
[0026] FIG. 7 illustrates the comparison of the predicted fringe order with the conventional method, according to an embodiment of the present subject matter.
[0027] FIGS. 8A, 8B and 8C illustrate the fringe order prediction without cyclic model, according to an embodiment of the present subject matter.
[0028] FIGS. 9A, 9B and 9C illustrate the validation metrics comparison of different models and loss functions, according to an embodiment of the present subject matter.
[0029] Referring to the figures, like numbers indicate like parts throughout the various views.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0030] While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt to a particular situation or material to the teachings of the invention without departing from its scope.
[0031] Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. The meaning of “in” includes “in” and “on.” Referring to the drawings, like numbers indicate like parts throughout the views. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
[0032] The present subject matter describes a system for demodulating fringe order in isochromatic images. A deep learning-based cyclic model is used to enhance the fringe order demodulation from single isochromatic images, together with a custom loss function to enhance the overall capability of the model.
[0033] A system 100 for demodulating fringe order in isochromatic images is illustrated in FIG. 1. The system 100 includes a data acquisition module 102 for obtaining photo-elastic images with their corresponding fringe patterns, a processing module 104 for predicting a fringe order pattern of the received images, and a loss function module 106 for maintaining continuity of the fringe order generated from the cyclic U-Net model. The photo-elastic images received by the data acquisition module 102 encompass diverse variations in factors such as external force ranges and types of camera sensors employed. The received photo-elastic images are provided to the processing module 104, which includes a cyclic U-Net model 108 for predicting the fringe order pattern and the input that has generated a corresponding fringe order pattern.
[0034] In various embodiments, the cyclic U-Net model 108 includes two U-Net components that create a feedback loop, and each U-Net component includes an encoder 202 and a decoder 204 as illustrated in FIG. 2A. The encoder 202 captures context and extracts high-level features, and the decoder 204 up-samples and concatenates features for precise localization and information preservation. The cyclic U-Net model 108 predicts the fringe order pattern of the received photo-elastic images at a first U-Net component and predicts an isochromatic pattern that produced the received fringe order image, based on the predicted fringe order pattern, at a second U-Net component as illustrated in FIG. 2B. The feedback loop allows the two branches to mutually reinforce each other through the cyclic training procedure, integrating both into the model. The received isochromatic images are fed to the first U-Net component of the cyclic U-Net model 108 to obtain the normalized fringe order pattern. This predicted pattern then serves as input for the second U-Net component of the cyclic model, which predicts the isochromatic pattern that produced the given fringe order image. This predictive step facilitates improving the model's 108 learning capabilities and capturing the underlying characteristics of fringe order data. The cyclic nature of the model ensures the interconnection of the first and second U-Net components, which iteratively reinforce each other. As the cyclic U-Net model refines its ability to predict fringe order patterns, it concurrently enhances its comprehension of the intricate relationships between isochromatic images and fringe order images.
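The two-component cyclic arrangement described above can be sketched as follows in PyTorch, the framework used in the examples below. The U-Net internals are abstracted behind placeholder modules, and the class and variable names are illustrative assumptions rather than the actual implementation:

```python
import torch
import torch.nn as nn

class CyclicUNet(nn.Module):
    """Sketch of the cyclic arrangement: the first component maps an
    isochromatic image to a fringe order pattern, and the second maps
    that prediction back to an isochromatic image."""

    def __init__(self, fringe_net: nn.Module, iso_net: nn.Module):
        super().__init__()
        self.fringe_net = fringe_net  # first U-Net component
        self.iso_net = iso_net        # second U-Net component

    def forward(self, isochromatic: torch.Tensor):
        fringe_pred = self.fringe_net(isochromatic)  # predicted fringe order pattern
        iso_recon = self.iso_net(fringe_pred)        # reconstructed isochromatic image
        return fringe_pred, iso_recon

# Placeholder identity networks stand in for the actual U-Nets here.
model = CyclicUNet(nn.Identity(), nn.Identity())
x = torch.rand(1, 3, 64, 64)
fringe, recon = model(x)
```

During training, a loss on both outputs couples the two branches, which is what creates the feedback loop between them.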
[0035] In various embodiments, a loss function module 106 is connected to the cyclic U-Net model for maintaining continuity of the fringe order generated from the cyclic U-Net model. The loss function module 106 includes a Continuity-Imposed Hybrid Cyclic Loss (CHCL) function. The Continuity-Imposed Hybrid Cyclic Loss (CHCL) function further includes a combination of Structural Similarity Index (SSIM), Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and a continuity loss, as illustrated in FIG. 3. The CHCL function is represented by the following equation.
[0036] In some embodiments, the CHCL function uses a continuity loss term to ensure the consistency of the predicted fringe order throughout the image. This allows the model to predict fringe orders that exhibit continuity rather than values with high variations. The MSE, PSNR, SSIM, and continuity terms are parameterized using α, β, γ, and δ respectively, and by tuning these hyperparameters an optimal result for the cyclic model may be achieved. The tuned values are α = 0.5, β = 0.5, γ = 0.2, and δ = 1. For a non-cyclic model, the hybrid loss (HL) is defined by the following equation, where α = 0.5, β = 0.1, and γ = 1. These values are tuned through experimentation to ensure a balanced contribution from all error functions, considering factors such as model complexity, dataset characteristics, range of loss functions, and desired performance metrics.
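Since the equation itself is not reproduced in this text, the following is only a hedged sketch of how the four weighted terms might be combined, using the stated values α = 0.5, β = 0.5, γ = 0.2, and δ = 1; the additive form and the sign conventions are assumptions:

```python
def chcl(mse_term: float, ssim_term: float, psnr_term: float,
         continuity_term: float,
         alpha: float = 0.5, beta: float = 0.5,
         gamma: float = 0.2, delta: float = 1.0) -> float:
    """Assumed combination of the four CHCL components.

    SSIM is a similarity (higher is better), so (1 - SSIM) is minimized,
    and PSNR enters with a negative sign so that minimizing the loss
    maximizes PSNR, as described in the surrounding text."""
    return (alpha * mse_term
            + beta * (1.0 - ssim_term)
            - gamma * psnr_term
            + delta * continuity_term)
```

With this form, a perfect prediction (zero MSE, SSIM of 1, zero continuity error) contributes nothing to the loss except the rewarded PSNR term.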
[0037] In some embodiments, the Structural Similarity Index (SSIM) metric present in the CHCL function is used to measure the structural similarity between two images. It compares the images in a global context rather than through pixel-wise contributions. The SSIM index combines the contrast, luminance, and structure of the input and target images. SSIM is used as a loss function in image processing tasks to minimize the difference between the reference and generated images based on their structural content. This facilitates creating models that produce visually similar images, considering not only pixel-wise differences but also the perceived quality by a human observer. The addition of SSIM to the CHCL function augments the cyclic U-Net model 108 in maintaining structural similarity between the fringe order prediction and the target ground truth fringe order pattern. The following equation represents the SSIM.
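For reference, a single-window (global) SSIM can be computed from the means, variances, and covariance of the two images. The constants below assume pixel values scaled to [0, 1] and use the commonly cited defaults C1 = (0.01)^2 and C2 = (0.03)^2; a windowed SSIM as used in practice averages this quantity over local patches:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> float:
    """Global SSIM over the whole image (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # covariance of the two images
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images yield an SSIM of exactly 1, and the value decreases as luminance, contrast, or structure diverge.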
[0038] Two variables, C1 and C2, are present in the SSIM to stabilize the division when working with a weak denominator.
[0039] In some embodiments, PSNR is a metric used in the CHCL function for assessing the quality of reconstructed images compared to their originals. It measures the similarity between the reconstructed and source images by evaluating the ratio of the maximum achievable signal power to the power of noise. PSNR values, expressed in decibels (dB), reflect the level of likeness between the original and reconstructed images. The negative of PSNR is used as part of the CHCL loss. Using the negative of PSNR as an objective function facilitates maximizing PSNR, thereby minimizing the difference between the original and reconstructed images. PSNR is represented by the following equation.
[0040] In various embodiments, the Mean Squared Error (MSE) used in the CHCL function is applied between the input image and the predicted image from the second U-Net component using the fringe order prediction, to facilitate good performance of the model 108 in fewer iterations. MSE measures the average of the squared differences between corresponding pixel values. A lower MSE value indicates that the two images are closer in terms of pixel intensity, suggesting better similarity. MSE is computed by the following equation.
[0041] The following equation uses the MSE to assess dissimilarity between two sets of images based on their average spatial gradients. Spatial gradients capture how pixel intensities change across an image, providing valuable information about its structural characteristics. By calculating the average gradient for each set and comparing them through MSE, the equation quantifies the extent of divergence in terms of these spatial features. An advantage of MSE is its sensitivity to the overall spatial structure, making it well-suited for tasks such as image quality assessment or image registration, where alterations in pixel intensity patterns are crucial indicators. The averaging process over the entire set of images ensures a global perspective, which can be used in scenarios where a holistic assessment is desired, mitigating the impact of local variations and noise.
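A hedged sketch of such a gradient-based continuity term, comparing the spatial gradients of the predicted and target fringe order maps through MSE (the use of `np.gradient` and this particular averaging scheme are illustrative assumptions):

```python
import numpy as np

def continuity_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """MSE between the spatial gradients of two fringe order maps."""
    gy_p, gx_p = np.gradient(pred)    # row-wise and column-wise gradients
    gy_t, gx_t = np.gradient(target)
    return float(np.mean((gx_p - gx_t) ** 2) + np.mean((gy_p - gy_t) ** 2))
```

Note that a constant offset between two maps leaves this loss at zero, since only changes in pixel intensity, not absolute values, are penalized; this is what makes the term sensitive to fringe continuity rather than raw intensity.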
[0042] In some embodiments, Mean Squared Error (MSE) has been used in the second U-Net component, while a hybrid loss function has been used in the first U-Net component. The first U-Net component is utilized to accurately demodulate fringe order from input fringe pattern images, for which a hybrid loss function comprising multiple error functions is employed to attain a balanced and effective approach to error minimization and demodulation optimization. Conversely, the second U-Net component plays a complementary role in the cyclic model, focusing on enhancing understanding and learning of isochromatic information from the fringe order images' perspective. It emphasizes not precisely predicting the isochromatic image, but capturing and learning additional features associated with isochromatic information. To maintain simplicity in the learning process of the second U-Net component and avoid unnecessary complexity, MSE was selected as the loss function.
[0043] A system for demodulating fringe order in isochromatic images has several advantages over the prior art, particularly the reduction of computational power and cost. The cyclic U-Net model ensures the interconnection of two U-Net components, iteratively reinforcing each other. As the model refines its ability to predict fringe order patterns, it concurrently enhances its comprehension of the intricate relationships between isochromatic images and fringe order images. Further, the iterative approach of the cyclic U-Net model enhances the accuracy and predictive capabilities of the model. Because the system utilizes only the first U-Net component to predict the fringe order pattern during inference, it maintains the same computational cost as a non-cyclic U-Net model while achieving superior performance. Additionally, the second U-Net component of the cyclic model may be used for analyzing results during the inference stage. This becomes especially beneficial when the objective is to determine the type of input isochromatic image that the model expects for the predicted fringe order pattern. This analysis facilitates identifying the disparities between the actual input isochromatic image and the predicted one. Consequently, a user can quantitatively assess the model's performance, even in the absence of the target fringe order pattern. Through this process, the user may identify potential sources of noise that could have contributed to prediction errors, if they exist. This process enhances the performance and understanding of the model's predictions and the intrinsic data patterns, thus allowing a more thorough comprehension of the underlying processes and structures of fringe order patterns.
[0044] Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed herein. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the system and method of the present invention disclosed herein without departing from the spirit and scope of the invention as described here, and as delineated in the claims appended hereto.
EXAMPLES
[0045] EXAMPLE 1: Implementation of CHCL and cyclic model for fringe order demodulation
[0046] Several encoder architectures are tested as the base block for the U-Net to understand how model performance changes with variations in encoder size, type, and the number of layers in the encoder architecture. EfficientNet-b1, EfficientNet-b7 were used as encoder blocks to enable the assessment of different feature extraction capabilities on fringe order prediction.
[0047] High-quality 5MP CMOS sensors were used for image acquisition, maintaining excellent fringe resolution even after downscaling for inference with the cyclic model. All the model evaluations were carried out on an NVIDIA GeForce RTX 4070 GPU with 8GB of memory. The optimizer used was Adam with step learning rate decay from 1e-3 to 1e-5, and a weight decay of 1e-5 was applied to all models. The models were trained for 30 epochs to ensure a fair comparison of their performance on a similar scale. The same optimizer, learning rate decay, and weight decay were used for all models. The PyTorch framework was used to implement all the experiments and evaluate the performance of the proposed cyclic model.
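The stated optimization setup could be reproduced roughly as below. The `StepLR` step size and decay factor are assumptions chosen so that the learning rate falls from 1e-3 to 1e-5 over the 30 epochs, since only the endpoints are given in the text; a simple convolution stands in for the cyclic U-Net:

```python
import torch

# Placeholder model standing in for the cyclic U-Net.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# Decay by 10x every 15 epochs: 1e-3 -> 1e-4 -> 1e-5 over 30 epochs (assumed schedule).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)

for epoch in range(30):
    # ... forward pass, CHCL loss, and backward pass would go here ...
    optimizer.step()
    scheduler.step()
```

After the loop, the learning rate in `optimizer.param_groups[0]["lr"]` has reached the stated final value of 1e-5.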
[0048] Model performance evaluation was conducted by comparing SSIM, PSNR, and MSE metrics. These metrics provide a quantitative assessment of the model's ability to accurately capture and reproduce fringe order patterns.
[0049] EXAMPLE 2: Evaluation of Test Data Set
[0050] A publicly available dataset comprising 1,01,430 samples of photo-elastic images alongside their corresponding fringe patterns was used for both training and validation of the model. To evaluate the generalizability and effectiveness of the model, it was tested using 3 isochromatic experimental images, which included realistic experimental noises, enabling a comprehensive analysis of its performance.
[0051] The comparison results for all evaluated models using the test set from the original dataset distribution are presented in TABLE 1. The model employing the CHCL loss surpasses all other models in every evaluation criterion tested. Furthermore, the standard deviation of the FringeNet model suggests superior generalization compared to the other models, indicating the potential for its utilization without the need for fine-tuning, when applied to images differing from the original dataset distribution.
TABLE 1: Comparison of the cyclic model with previous models
[0052] The fringe order values for a particular isochromatic image from the test set are depicted in FIG. 4A and 4B. The target and predicted profile lines closely follow each other with a MAE (mean absolute error) of 0.00109, signifying a high level of accuracy in the prediction process. This close alignment indicates a strong correlation between the predicted and actual profiles, underscoring the reliability of the predictive model.
[0053] A comparison between the gradients of the predicted and target fringe order for the case is presented in FIG. 5A and 5B. From the continuity plot, it is evident that the model closely approximated the actual gradient value, thus affirming the effectiveness of the CHCL loss function.
[0054] To provide a global overview of the model's performance, a scatter plot has been generated using the flattened versions of both the target and predicted images. Additionally, analyzing the residual values reveals that they are centered around zero and devoid of any discernible patterns that might have been learned during the training phase. This indicates that the model has accurately learned to replicate the underlying patterns without introducing any significant biases.
[0055] EXAMPLE 3: Evaluation of Experimental Data
[0056] To assess the performance of the cyclic model with realistic isochromatic experimental images, three distinct experimental tests were considered: angle bracket, three-point bending, and ring configurations. The target images were acquired from the DigiTFP software. The maximum and minimum values required for renormalizing the fringe order data can be obtained using either conventional fringe ordering or compensation techniques such as the Babinet-Soleil compensator. These steps can be conducted during the experimental acquisition phase itself. The fringe order map generated by the model was compared with those obtained using the conventional DP algorithm, and the error map was computed accordingly. The model's effectiveness is showcased by its accurate prediction, closely resembling the target fringe order images. Without any fine-tuning, the cyclic model performs well for images from an unseen distribution, suggesting that the model has captured the intricate nature of the principle. Additionally, it is able to ignore small noises in the images and predict the fringe order pattern effectively.
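The renormalization step mentioned above can be sketched as follows, assuming the model output lies in [0, 1] and that `n_min` and `n_max` are the fringe order bounds obtained from conventional ordering or compensation (the function and parameter names are illustrative):

```python
import numpy as np

def renormalize(norm_fringe: np.ndarray, n_min: float, n_max: float) -> np.ndarray:
    """Map a normalized [0, 1] fringe order map back to [n_min, n_max]."""
    return norm_fringe * (n_max - n_min) + n_min
```

For instance, a normalized value of 0.5 with bounds 0 and 4 maps back to a fringe order of 2.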
[0057] The analysis of a specific experimental image is presented in FIG. 6A-6C. Error map for the same test image is also presented. It can be inferred from FIG. 7 that though the model's performance may not match that of a test image from the actual dataset, it still demonstrates comparable capabilities, indicating its generalization capacity. Despite not achieving the same level of accuracy as with the original dataset, the model exhibits promising adaptability across different scenarios.
[0058] The alignment between the target and predicted profile lines is close, with a MAE (mean absolute error) as low as 0.02871, indicating a high degree of accuracy in the prediction process. This close correspondence highlights a strong correlation between the predicted and actual profiles, emphasizing the reliability of the predictive model. This observation underscores the model's ability to extend its learned patterns beyond the training data, thus enhancing its utility across diverse contexts and experimental setup conditions in photo-elasticity. The model adeptly captures the fringe order continuity with a high degree of accuracy. It preserves a comparable fringe order gradient to that of the original image, highlighting the model's capacity to discern subtle nuances and complexities of photo-elasticity.
[0059] EXAMPLE 4: Comparison of Models
[0060] The system for fringe order demodulation showcased a 92.4% improvement in MSE, signifying a substantial improvement in predictive accuracy and model robustness compared to previous models. This significant reduction in MSE underscores the effectiveness of the system in minimizing prediction errors and optimizing overall performance. Furthermore, the system exhibited a 16.19% improvement in PSNR, indicative of enhanced image quality and fidelity in comparison to previous models. The perceptual quality of images generated by the system emphasizes its effectiveness in capturing the underlying photo-elastic principle and predicting the fringe order pattern.
[0061] Moreover, the Structural Similarity Index (SSIM) showed a 3.8% improvement in the system, highlighting its ability to maintain structural similarities and perceptual integrity across images. Also, the lower standard deviation observed in the cyclic model shows its consistency and reliability in generalization, further validating its robustness across real-world experimental isochromatic images. A high standard deviation value would instead indicate that the model performs well in some cases and poorly in others.
[0062] The cyclic model's performance in terms of inference speed is presented in TABLE 2. As only the first U-Net component is utilized at inference, the performance is similar to that of a U-Net with an EfficientNet-b1 encoder. In the implementation with an NVIDIA GeForce RTX 4070 GPU, FringeNet achieved a frame rate of 74.82 fps, which corresponds to 13.36 ms per inference.
TABLE 2: Model performance parameters
[0063] EXAMPLE 5: Ablation Study
[0064] An ablation study examines how a model's performance is affected when certain parts are removed or altered. The impact of various loss functions and of the absence of the cyclic nature in the system was systematically investigated. Experiments were also conducted utilizing various loss functions, including the hybrid loss, CHCL, and MSE loss, to evaluate the efficacy of the proposed loss function.
[0065] To determine the influence of the cyclic model in the system, a variant of the model without the cyclic feedback loop was examined. This involved training the U-Net solely with the MSE loss, omitting the cyclic connection between predicting fringe order patterns, and reconstructing isochromatic images. This approach facilitated the assessment of the individual contribution of the cyclic nature in enhancing the model's predictive capabilities.
[0066] Furthermore, an investigation was conducted to understand how incorporating or excluding specific loss functions, such as the hybrid loss or CHCL, impacts the overall performance of the model. This comprehensive analysis aimed to unravel the specific contributions of each loss component and the cyclic nature, providing valuable insights into the effectiveness of these elements in improving the model's accuracy in predicting fringe order patterns.
[0067] TABLE 3 shows that the system achieved the best result when using both cyclic and CHCL loss functions, and the prediction deteriorated when removing the cyclic component.
TABLE 3: Results of ablation study
[0068] FIG. 8A-8C illustrate the results for fringe order prediction using a non-cyclic model. It is evident from the comparison that the cyclic model performed notably better in this context. The comparison of metrics for validation data across training epochs is illustrated in FIG. 9A-9C. This indicates that the cyclic model utilizing the CHCL loss function converged much faster compared to other models and loss functions.
CLAIMS
WE CLAIM:
1. A system (100) for demodulating fringe order from isochromatic images, the system (100) comprising:
a data acquisition module (102) for obtaining photo-elastic images with their corresponding fringe patterns;
a processing module (104) for predicting a fringe order pattern, wherein the processing module comprises:
a cyclic U-Net model (108) for predicting the fringe order pattern and the input that generated a corresponding fringe order pattern, wherein the cyclic U-Net model is configured to:
predict the fringe order pattern of the received photo-elastic images at a first U-Net component; and
predict an isochromatic pattern that produced the received fringe order image based on the predicted pattern of the fringe order at a second U-Net component; and
a loss function module (106) for maintaining continuity of the fringe order generated from the cyclic U-Net model.
2. The system (100) as claimed in claim 1, wherein the two U-Net components of the cyclic U-Net model create a feedback loop.
3. The system (100) as claimed in claim 2, wherein each of the two U-Net components comprises an encoder (202) and a decoder (204).
4. The system (100) as claimed in claim 3, wherein the encoder (202) captures context and extracts high-level features and the decoder (204) up-samples and concatenates features for precise localization and information preservation.
5. The system (100) as claimed in claim 1, wherein the loss function module (106) includes a Continuity-Imposed Hybrid Cyclic Loss (CHCL) function.
6. The system (100) as claimed in claim 5, wherein the Continuity-Imposed Hybrid Cyclic Loss (CHCL) function comprises a combination of Structural Similarity Index (SSIM), Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and a continuity loss.
7. The system (100) as claimed in claim 6, wherein the Mean Squared Error (MSE) is utilized between the received isochromatic image and the predicted isochromatic image from the second U-Net component.
8. The system (100) as claimed in claim 6, wherein the continuity loss ensures consistency of the fringe order throughout the predicted image.
9. The system (100) as claimed in claim 6, wherein the Structural Similarity Index (SSIM) augments the cyclic U-Net model in maintaining structural similarity of the fringe order prediction by considering luminance, contrast, and structure, producing a score that reflects how similar the predicted fringe orders are to the actual ones and thereby enhancing the model's ability to preserve important structural details in its predictions.
10. The system (100) as claimed in claim 6, wherein the Peak Signal-to-Noise Ratio (PSNR) is used for minimizing the difference between the predicted fringe order pattern and the target fringe order pattern.
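The feedback loop recited in claims 1 and 2 can be sketched as a forward pass through two chained networks: the first maps the isochromatic image to a fringe order map, and the second maps that prediction back to an isochromatic image, closing the cycle. In the sketch below the two U-Nets are replaced by trivial stand-in functions (an assumption purely to show the data flow); a real implementation would use learned encoder-decoder networks.

```python
import numpy as np

def unet1(iso):
    # Stand-in for the first U-Net component: isochromatic image ->
    # fringe order map. A real model is a learned encoder-decoder;
    # this placeholder merely rescales intensities to show the flow.
    return iso * 2.0

def unet2(order):
    # Stand-in for the second U-Net component: fringe order map ->
    # reconstructed isochromatic image.
    return order / 2.0

def cyclic_step(iso, true_order):
    # One forward pass through the feedback loop:
    # the first component predicts the fringe order, the second
    # reconstructs the isochromatic input from that prediction.
    pred_order = unet1(iso)
    recon_iso = unet2(pred_order)  # closes the cycle
    # Cyclic MSE between received and reconstructed isochromatic images
    cycle_mse = float(np.mean((recon_iso - iso) ** 2))
    # Supervised MSE on the fringe order prediction itself
    order_mse = float(np.mean((pred_order - true_order) ** 2))
    return pred_order, recon_iso, cycle_mse, order_mse
```

With consistent stand-ins, a perfectly self-inverse pair of networks drives the cyclic reconstruction error to zero, which is the property the feedback loop is designed to enforce.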
Dr V. SHANKAR IN/PA-1733
For and on behalf of the Applicants