
Methods And Systems For Graph Assisted Unsupervised Domain Adaptation For Machine Fault Diagnosis

Abstract: The disclosure generally relates to methods and systems for graph assisted unsupervised domain adaptation for machine fault diagnosis. The present disclosure solves the technical problems in the art using a Graph Assisted Unsupervised Domain Adaptation (GA-UDA) technique for the machine fault diagnosis. The GA-UDA technique carries out the domain adaptation in two stages. In the first stage, a Class-wise maximum mean discrepancy (CMMD) loss is minimized to transform the data from both source and target domains to a shared feature space. In the second stage, the augmented transformed (projected) data from both the source and the target domains are utilized to construct a joint graph. Subsequently, the labels of target domain data are estimated through label propagation over the joint graph. The GA-UDA technique of the present disclosure helps in addressing significant distribution shift between the two domains.


Patent Information

Application #
Filing Date
04 July 2023
Publication Number
2/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. PATTNAIK, Naibedya
Tata Consultancy Services Limited, Gopalan Global Axis, H-Block, EPIP Industrial Area, Whitefield, Bangalore - 560066, Karnataka, India
2. KUMAR, Kriti
Tata Consultancy Services Limited, Gopalan Global Axis, H-Block, EPIP Industrial Area, Whitefield, Bangalore - 560066, Karnataka, India
3. CHANDRA, Mariswamy Girish
Tata Consultancy Services Limited, Gopalan Global Axis, H-Block, EPIP Industrial Area, Whitefield, Bangalore - 560066, Karnataka, India
4. KUMAR, Achanna Anil
Tata Consultancy Services Limited, Gopalan Global Axis, H-Block, EPIP Industrial Area, Whitefield, Bangalore - 560066, Karnataka, India

Specification

Description: FORM 2 THE PATENTS ACT, 1970 (39 of 1970) & THE PATENT RULES, 2003 COMPLETE SPECIFICATION (See Section 10 and Rule 13) Title of invention: METHODS AND SYSTEMS FOR GRAPH ASSISTED UNSUPERVISED DOMAIN ADAPTATION FOR MACHINE FAULT DIAGNOSIS Applicant: Tata Consultancy Services Limited, a company incorporated in India under the Companies Act, 1956, having address: Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India. The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD

The disclosure herein generally relates to unsupervised domain adaptation, and, more particularly, to methods and systems for graph assisted unsupervised domain adaptation for machine fault diagnosis.

BACKGROUND

Unsupervised Domain Adaptation (UDA) has become an emerging technology for many useful applications such as machine fault diagnosis. UDA leverages knowledge learned from labeled data in a source domain to build an effective classifier for unlabeled data in a target domain, given that the source and target data have different underlying distributions. Classical data-driven machine learning algorithms for machine diagnosis assume that the training (source) and test (target) data follow the same data distribution. However, in practical industrial scenarios, such an assumption does not always hold, as machine data from the two domains can differ significantly due to different working conditions, sampling frequencies, locations of sensor placement, and so on. Additionally, for machine fault diagnosis, access to labeled data of every machine is not always available, as manual labelling is time consuming and inducing faults in machines is not economically viable. Moreover, limited data is available for training. So, knowledge transfer between different but related machines can be beneficial.
Most of the existing techniques aim to address the marginal distribution discrepancy alone, ignoring the conditional distribution discrepancy that may exist between the two domains. In order to achieve good adaptation performance, both the marginal and conditional distributions of the source and target data need to be aligned. The problem becomes challenging when the data is limited and no labels are available for the target domain data. Further, existing graph-based domain adaptation work focuses on jointly optimizing domain invariant feature learning by a divergence loss and a label propagation loss over a fixed graph, which is obtained by augmenting source and target domain data, to learn the labels of the target domain data. The labels are considered as graph signals which are projected onto the graph. Using the known source labels, the target domain labels are predicted by label propagation over this fixed graph. When the domain discrepancy is small, the fixed graph has edge connectivity between source and target nodes, which eventually helps in label propagation. However, when the domain discrepancy between the source and target domains is large, the fixed graph results in two disjoint sub-graphs for the source and target data respectively, with no edge connectivity between the source and target nodes. Hence, the label propagation will not be able to estimate the labels of the target domain data.

SUMMARY

Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. In an aspect, a processor-implemented method for graph assisted unsupervised domain adaptation for machine fault diagnosis is provided.
The method includes the steps of: receiving a labeled source domain S data {X_s, Y_s} and an unlabeled target domain T data {X_t}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples {(x_(s_1), y_(s_1)), …, (x_(s_(n_s)), y_(s_(n_s)))} and each labeled source domain sample comprises a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples {x_(t_1), …, x_(t_(n_t))} and each unlabeled target domain sample comprises a target domain feature; performing an optimization of a set of parameters including (i) a source projection matrix P_s, (ii) a target projection matrix P_t, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix P_s and the target projection matrix P_t, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data X_sp and a target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; (c) augmenting the source projected data X_sp and the target projected data X_tp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning using the initialized parameters at step (a) and the set of parameters from step (b) through step (d) in a first iteration, and learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix P_s and the target projection matrix P_t, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data X_sp and the target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; augmenting the source projected data X_sp and the target projected data X_tp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss; wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix P_s, (ii) the learnt target projection matrix P_t, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample; and determining a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.
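The staged optimization summarized above can be sketched in NumPy. This is a minimal illustrative sketch, not the claimed implementation: all function names are assumptions, the label propagation uses a standard closed-form graph-smoothness minimizer as a stand-in for the GTV minimization, and the CMMD-driven re-learning of P_s and P_t in step (e) is abstracted away (the CMMD loss itself is shown as a standalone function).

```python
import numpy as np

def pca_init(X, d):
    """Step (a): top-d principal directions of X (features m x samples n)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :d]                              # m x d projection matrix

def cmmd_loss(Xsp, ys, Xtp, qt):
    """Weighted class-wise MMD: summed distance between per-class means of
    the projected source data (hard labels ys) and the projected target
    data (soft labels qt, one row per target sample)."""
    loss = 0.0
    for c in range(qt.shape[1]):
        ms = Xsp[:, ys == c].mean(axis=1)        # source class mean
        w = qt[:, c] / (qt[:, c].sum() + 1e-12)
        mt = Xtp @ w                             # soft-weighted target mean
        loss += np.sum((ms - mt) ** 2)
    return loss

def joint_graph(Z, sigma):
    """Step (c): Gaussian-kernel adjacency over augmented projected
    samples (columns of Z)."""
    d2 = ((Z[:, :, None] - Z[:, None, :]) ** 2).sum(axis=0)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def propagate(W, Y0, alpha=0.9):
    """Step (d): closed-form label propagation over the joint graph
    (a stand-in for minimizing the GTV loss)."""
    dinv = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * dinv[:, None] * dinv[None, :]        # normalized adjacency
    return np.linalg.solve(np.eye(len(W)) - alpha * S, (1 - alpha) * Y0)

def ga_uda(Xs, ys, Xt, C, d, sigma=1.0, iters=3):
    """Simplified GA-UDA loop; the CMMD-based update of Ps/Pt is omitted,
    so the projections stay at their PCA initialization."""
    Ps, Pt = pca_init(Xs, d), pca_init(Xt, d)    # step (a)
    Ys = np.eye(C)[np.asarray(ys)]               # one-hot source labels
    n_t = Xt.shape[1]
    for _ in range(iters):
        Xsp, Xtp = Ps.T @ Xs, Pt.T @ Xt          # step (b): project
        W = joint_graph(np.hstack([Xsp, Xtp]), sigma)   # step (c)
        Y0 = np.vstack([Ys, np.zeros((n_t, C))])
        F = propagate(W, Y0)                     # step (d)
        qt = F[len(ys):]
        qt = qt / (qt.sum(axis=1, keepdims=True) + 1e-12)
        # Step (e) would re-learn Ps, Pt here by minimizing
        # cmmd_loss(Xsp, np.asarray(ys), Xtp, qt), iterating until the
        # GTV loss falls below an empirical threshold.
    return qt.argmax(axis=1)                     # hard target labels
```

On toy data with two well-separated fault classes and a small uniform domain shift, the propagated labels recover the target classes even without the CMMD refinement; the refinement matters as the shift grows.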
In another aspect, a system for graph assisted unsupervised domain adaptation for machine fault diagnosis is provided. The system includes: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a labeled source domain S data {X_s, Y_s} and an unlabeled target domain T data {X_t}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples {(x_(s_1), y_(s_1)), …, (x_(s_(n_s)), y_(s_(n_s)))} and each labeled source domain sample comprises a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples {x_(t_1), …, x_(t_(n_t))} and each unlabeled target domain sample comprises a target domain feature; perform an optimization of a set of parameters including (i) a source projection matrix P_s, (ii) a target projection matrix P_t, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix P_s and the target projection matrix P_t, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data X_sp and a target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; (c) augmenting the source projected data X_sp and the target projected data X_tp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning using the initialized parameters at step (a) and the set of parameters from step (b) through step (d) in a first iteration, and learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix P_s and the target projection matrix P_t, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data X_sp and the target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; augmenting the source projected data X_sp and the target projected data X_tp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss; wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix P_s, (ii) the learnt target projection matrix P_t, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each
unlabeled target domain sample; and determine a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.

In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a labeled source domain S data {X_s, Y_s} and an unlabeled target domain T data {X_t}, wherein the labeled source domain S data comprises a plurality of labeled source domain samples {(x_(s_1), y_(s_1)), …, (x_(s_(n_s)), y_(s_(n_s)))} and each labeled source domain sample comprises a source domain feature and a source domain label, and the unlabeled target domain T data comprises one or more unlabeled target domain samples {x_(t_1), …, x_(t_(n_t))} and each unlabeled target domain sample comprises a target domain feature; perform an optimization of a set of parameters including (i) a source projection matrix P_s, (ii) a target projection matrix P_t, and (iii) a probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, wherein the optimization comprises: (a) initializing the source projection matrix P_s and the target projection matrix P_t, from the labeled source domain S data and the unlabeled target domain T data respectively, using a principal component analysis (PCA) technique; (b) determining a source projected data X_sp and a target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; (c) augmenting the source projected data X_sp and the target projected data X_tp, to construct a joint graph G, using a Gaussian kernel; (d) estimating the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing a graph total variation (GTV) loss; and (e) iteratively performing a joint learning using the initialized parameters at step (a) and the set of parameters from step (b) through step (d) in a first iteration, and learnt parameters thereafter, until a convergence criterion is met, wherein the joint learning comprises: learning each of the source projection matrix P_s and the target projection matrix P_t, using (i) the labeled source domain S data and the unlabeled target domain T data respectively, and (ii) the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, by minimizing a weighted class-wise maximum mean discrepancy (CMMD) loss; determining the source projected data X_sp and the target projected data X_tp, from (i) the labeled source domain S data and the source projection matrix P_s, and (ii) the unlabeled target domain T data and the target projection matrix P_t, respectively; augmenting the source projected data X_sp and the target projected data X_tp, to construct the joint graph G, using the Gaussian kernel; and learning the probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample, through label propagation over the joint graph G, using the source domain label associated with the source domain feature present in each labeled source domain sample, by minimizing the graph total variation (GTV) loss; wherein the convergence criterion is met when the GTV loss is less than an empirically determined threshold value, to obtain (i) the learnt source projection matrix P_s, (ii) the learnt target projection matrix P_t, and (iii) the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample; and determine a target domain label associated with each target domain feature present in each unlabeled target domain sample, from the learnt probabilistic target domain label associated with each target domain feature present in each unlabeled target domain sample.

In an embodiment, the source domain feature present in each labeled source domain sample and the target domain feature present in each unlabeled target domain sample are obtained from one or more sensors present in the machine whose faults are to be diagnosed. In an embodiment, (i) the source domain label associated with each source domain feature, and (ii) the target domain label associated with each target domain feature, are part of a plurality of predefined labels. In an embodiment, minimizing the graph total variation (GTV) loss propagates the source domain labels over the joint graph G to estimate the probabilistic target domain labels associated with the target domain T data. In an embodiment, the weighted class-wise maximum mean discrepancy (CMMD) loss is defined as a sum of the class-wise distances between the mean of the projected source domain data X_sp and the mean of the projected target domain data X_tp associated with similar labels among the plurality of predefined labels.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles: FIG.
1 is an exemplary block diagram of a system for graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure. FIG. 2 is an exemplary block diagram illustrating a graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure. FIGS. 3A and 3B illustrate exemplary flow diagrams of a processor-implemented method for graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.

Classical machine learning algorithms assume that the training and test data follow the same data distribution. However, in practice, this assumption does not always hold, which leads to deterioration in their performance. Domain Adaptation (DA) has emerged as a promising technique to tackle this issue, where the training (source) and test (target) data can come from different distributions. DA relies on leveraging the information learned from a well-studied source domain to improve the classification performance on the target domain. Depending on the availability of label information in the target domain, DA can be categorized as Unsupervised DA (UDA), where the target domain is completely unlabeled, and Semi-supervised DA (SDA), where the target domain has limited labels.
Among the existing DA techniques, divergence-based and adversarial learning-based techniques have been successfully applied in different applications. Divergence-based DA techniques map instances from both source and target domains to a common feature space to learn domain invariant features. However, they fail to perform when a large distribution discrepancy exists between the two domains. Adversarial learning-based DA methods are able to handle such a scenario, as they learn data translation between the source and target domains by training a generator and a discriminator network. However, these techniques do not guarantee that class discriminability is preserved during the data translation. Also, they require large amounts of data for training, which may not always be available in many practical application scenarios. Apart from the techniques mentioned above, graph-based techniques have recently been used for DA, as graphs can capture the actual data manifolds effectively. The existing techniques are based on Graph Convolutional Networks (GCN), Graph Signal Processing (GSP), and hybrid techniques that combine a divergence method with a graph to learn domain invariant features. An unsupervised Domain Adaptive Network Embedding (DANE) framework has been proposed using a GCN and an adversarial network that learns transferable embeddings between the source and target domains. Another UDA technique utilized a dual GCN for local and global consistency in feature aggregation. Although popular, these techniques ignore the properties of graph-structured data while carrying out classification. To effectively exploit the underlying structure of the data, the concepts of GSP have been utilized for SDA. The technique is based on aligning the Fourier bases of the graphs constructed using the source and target domain data. The spectrum of the labels learned from the source graph is transferred to the target graph for the DA.
This work was extended by incorporating graph learning into the optimization formulation that aligns the spectra of the graphs associated with the source and target data, which resulted in improved performance. Further, a Graph Adaptive Knowledge Transfer (GAKT) technique has been proposed that jointly optimizes domain invariant feature learning by a weighted class-wise adaptation loss and label propagation over the graph. A joint graph is employed by augmenting source and target domain data to propagate the labels from the known source data to the unknown target data. When the domain discrepancy is small, the joint (fixed) graph has edge connectivity between source and target nodes, which eventually helps in label propagation. However, when the domain discrepancy between the source and target domains is large, the joint (fixed) graph results in two disjoint sub-graphs for the source and target data respectively, with no edge connectivity between the source and target nodes. Hence, the label propagation will not be able to estimate the labels of the target domain data. Further, all the aforementioned techniques mainly focus on computer vision related DA applications, but not on time series data for the challenging adaptation scenario of machine fault diagnosis or machine inspection. In most practical applications of machine inspection, access to labeled data is difficult, as manual labeling is time consuming and inducing faults in machines is not economically viable. Moreover, labeled data of every machine is not available. Thus, transferring the knowledge learned from the labeled data of one machine (source) to a different but related machine (target) is important and required in practice. This is a challenging adaptation scenario since the data distributions of the two domains are significantly different due to different working conditions, sampling frequencies, locations of sensor placement, and so on.
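The failure mode described above, where a fixed joint graph splits into disjoint source and target sub-graphs, can be illustrated numerically. The snippet below is only an illustration with made-up coordinates: it shows that Gaussian-kernel edge weights between source and target nodes effectively vanish once the domain shift is large relative to the kernel bandwidth.

```python
import numpy as np

def gaussian_adjacency(Z, sigma=1.0):
    # Gaussian-kernel weights between all pairs of columns of Z
    d2 = ((Z[:, :, None] - Z[:, None, :]) ** 2).sum(axis=0)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

# Three 2-D source samples near the origin (illustrative values)
src = np.array([[0.0, 0.2, 0.4],
                [0.0, 0.1, 0.3]])
tgt_near = src + 0.5    # small domain shift
tgt_far = src + 10.0    # large domain shift

W_near = gaussian_adjacency(np.hstack([src, tgt_near]))
W_far = gaussian_adjacency(np.hstack([src, tgt_far]))

# Strongest source-to-target edge weight in each case
strongest_near = W_near[:3, 3:].max()   # close to 1: graph is connected
strongest_far = W_far[:3, 3:].max()     # ~0: two disjoint sub-graphs
```

With no usable source-target edges in the large-shift case, label propagation over the fixed graph cannot reach the target nodes, which is precisely why the present disclosure re-projects the data and rebuilds the joint graph at every iteration.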
The present disclosure solves the technical problems in the art using a Graph Assisted Unsupervised Domain Adaptation (GA-UDA) technique for machine fault diagnosis. The GA-UDA technique of the present disclosure carries out the domain adaptation in two stages. In the first stage, a class-wise maximum mean discrepancy (CMMD) loss is minimized to transform the data from both the source and target domains to a shared feature space. In the second stage, the augmented transformed (projected) data from both the source and target domains are utilized to construct a joint graph. Subsequently, the labels of the target domain data are estimated through label propagation over the joint graph. The GA-UDA technique of the present disclosure is similar in nature to the conventional GAKT technique. However, unlike the fixed joint graph considered in the GAKT technique, the present disclosure iteratively updates the joint graph using the transformed features of both the source and target domains obtained through the optimization formulation. The GA-UDA technique of the present disclosure thereby helps in addressing a significant distribution shift between the two domains.

Referring now to the drawings, and more particularly to FIG. 1 through FIG. 3B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary systems and/or methods. FIG. 1 is an exemplary block diagram of a system 100 for graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure.
In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism. The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and interfaces for peripheral device(s) such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer, and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases. The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems or devices with one another or to another server. The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions 'processors' and 'hardware processors' may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like. The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types. The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be implemented in hardware, as computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG. 1).
Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, data may be added into the external database, and/or existing data may be modified, and/or non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory or a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database. Referring to FIG. 2, components and functionalities of the system 100 are described in accordance with an example embodiment of the present disclosure. For example, FIG. 2 is an exemplary block diagram illustrating a graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure. Functions of the components of the system 100 as depicted in FIG. 2 are explained with reference to the description of FIGS.
3A and 3B, which illustrate exemplary flow diagrams of a processor-implemented method 300 for graph assisted unsupervised domain adaptation for machine fault diagnosis, in accordance with some embodiments of the present disclosure. Although steps of the method 300 including process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently. At step 302 of the method 300, the one or more hardware processors 104 of the system 100 are configured to receive labeled source domain data and unlabeled target domain data. The labeled source domain data includes a plurality of labeled source domain samples. Each labeled source domain sample includes a source domain feature and a source domain label. In other words, each labeled source domain sample is a labeled or annotated sample. The unlabeled target domain data includes one or more unlabeled target domain samples. Each unlabeled target domain sample comprises a target domain feature. In other words, each unlabeled target domain sample includes only the features for which the labels or classes (fault labels or fault classes) are to be predicted. Hence, the unlabeled target domain data is associated with a machine whose faults are to be diagnosed, and the labeled source domain data is associated with a similar machine. 
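As a toy illustration of the inputs received at step 302, the labeled source domain data (features plus annotated fault labels) and the unlabeled target domain data (features only) can be laid out as arrays. All names, sizes, and values below are assumptions for illustration only, not from the disclosure:

```python
import numpy as np

# Toy illustration: sizes and random values are assumptions, not from the disclosure.
rng = np.random.default_rng(0)
m, n_s, n_t, C = 5, 8, 6, 3   # feature dim, #source samples, #target samples, #classes

X_s = rng.normal(size=(m, n_s))     # source domain features, one sample per column
y_s = rng.integers(0, C, size=n_s)  # annotated source fault labels
Y_s = np.eye(C)[y_s]                # one-hot encoded labels, shape (n_s, C)
X_t = rng.normal(size=(m, n_t))     # target domain features; labels are to be predicted
```

The target domain carries no label array: predicting those labels is the task of the method 300.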
Specifically, in an embodiment, the labeled source domain data and the unlabeled target domain data are from different but similar or related machines, such as machines with different working conditions, different sampling frequencies, or different sensor placements, with the same or similar fault types, sensors, and so on. In an embodiment, the source domain feature present in each labeled source domain sample and the target domain feature present in each unlabeled target domain sample are obtained from raw sample data collected from one or more sensors present in the machine whose faults are to be diagnosed. For example, the source domain features and the target domain features for the machine include a root mean square (RMS) value, a variance, a data peak value, a kurtosis value, a peak-to-peak time-domain value, and so on. In an embodiment, each source domain label associated with each source domain feature, and each target domain label to be predicted for each target domain feature, are part of a plurality of predefined labels. Here, the plurality of predefined labels means the annotated labels or classes through which the machine fault types are defined. Thus, the labeled source domain data and the unlabeled target domain data are associated with a same feature space (the source domain feature and the target domain feature) and a same label space or class space (the plurality of predefined labels). Let the labeled source domain S data be expressed as {X_s, Y_s} = {(x_{s_1}, y_{s_1}), …, (x_{s_{n_s}}, y_{s_{n_s}})}, where n_s denotes the number of the plurality of labeled source domain samples, X_s ∈ ℝ^{m×n_s} denotes the list of source domain features {x_{s_1}, …, x_{s_{n_s}}}, and each source domain feature is of m dimensions. Y_s ∈ ℝ^{n_s×C} is the corresponding matrix of one-hot encoded labels {y_{s_1}, …, y_{s_{n_s}}} (the source domain labels) with C number of classes (the plurality of predefined labels). 
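The time-domain features named above can be computed from a raw sensor window with a few lines of NumPy. This is a minimal sketch, not the disclosure's implementation; the function name is illustrative, and kurtosis is taken here as the standardized fourth moment (the Pearson definition), which the disclosure does not specify:

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> np.ndarray:
    """Sketch of the time-domain features named in the disclosure
    (RMS, variance, peak, kurtosis, peak-to-peak) for one raw sensor window x."""
    rms = np.sqrt(np.mean(x ** 2))          # root mean square value
    var = np.var(x)                         # variance
    peak = np.max(np.abs(x))                # data peak value
    # Kurtosis as the standardized fourth central moment (assumed definition).
    kurt = np.mean((x - np.mean(x)) ** 4) / (np.std(x) ** 4)
    p2p = np.max(x) - np.min(x)             # peak-to-peak value
    return np.array([rms, var, peak, kurt, p2p])
```

Stacking one such feature vector per window, column-wise, yields the m-dimensional features x_{s_i} and x_{t_j} used in the formulation below.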
Similarly, the unlabeled target domain T data is expressed as {X_t} = {x_{t_1}, …, x_{t_{n_t}}}, where n_t denotes the number of the one or more unlabeled target domain samples, X_t ∈ ℝ^{m×n_t} denotes the list of target domain features, and each target domain feature is of m dimensions. Given that a distribution discrepancy exists between S and T, the task is to predict the labels {Y_t} = {y_{t_1}, …, y_{t_{n_t}}} of the target domain data X_t, assuming the feature and label spaces to be the same across both the source and target domains. Class-wise Maximum Mean Discrepancy: The Maximum Mean Discrepancy (MMD) is one of the popular techniques used to address the domain discrepancy between the source S and target T domains. The MMD computes the deviation of the sample means of the two domains in the projected space. More formally, the MMD loss C_1 is mathematically expressed as in equation (1): C_1(P_s, P_t) = ‖ (1/n_s) Σ_{i=1}^{n_s} P_s^T x_{s_i} − (1/n_t) Σ_{j=1}^{n_t} P_t^T x_{t_j} ‖_2^2 = ‖ (P_s^T X_s 1_{n_s})/n_s − (P_t^T X_t 1_{n_t})/n_t ‖_2^2 ---------------- (1) where P_s ∈ ℝ^{m×k} and P_t ∈ ℝ^{m×k} are two projection matrices with k
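The MMD loss of equation (1) — the plain, class-agnostic form, before the class-wise extension — can be sketched directly in NumPy. The function name is illustrative, not from the disclosure; it exploits that P^T X 1_n / n is just the projection of the column-wise sample mean:

```python
import numpy as np

def mmd_loss(Ps: np.ndarray, Pt: np.ndarray,
             Xs: np.ndarray, Xt: np.ndarray) -> float:
    """Equation (1): squared l2 deviation of the projected sample means,
    || Ps^T Xs 1_{ns}/ns - Pt^T Xt 1_{nt}/nt ||_2^2.
    Ps, Pt are (m, k) projection matrices; Xs, Xt hold one sample per column."""
    mu_s = Ps.T @ Xs.mean(axis=1)   # projected source mean, shape (k,)
    mu_t = Pt.T @ Xt.mean(axis=1)   # projected target mean, shape (k,)
    return float(np.sum((mu_s - mu_t) ** 2))
```

With identical domains the loss is zero regardless of the projections; a mean shift between the domains makes it positive, which is what minimizing C_1 over P_s and P_t counteracts.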
