Specification
Claims:
We Claim:
1. A processor implemented method for generating mixed variable type multivariate temporal synthetic data, the method comprising:
providing, via one or more hardware processors, mixed variable type multivariate temporal real time data as an input data, wherein the mixed variable type comprises continuous variables and discrete variables (202);
pre-processing, via the one or more hardware processors, the input data by scaling to a fixed range for both the continuous variables and the discrete variables (204);
splitting, via the one or more hardware processors, the pre-processed data into a training dataset, a validation dataset and a test dataset (206);
training, via the one or more hardware processors, a joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network utilizing the training dataset, wherein the autoencoding-decoding component comprises an embedding neural network and a recovery neural network (208), the training comprises:
providing the training dataset as an input to the embedding neural network to generate high dimensional real latent temporal embeddings (208a),
providing the high dimensional real latent temporal embeddings as an input to the recovery neural network to get a reconstructed input training dataset, wherein the embedding and the recovery neural networks are jointly trained using a supervised learning approach for reconstructing the training dataset (208b),
providing the high dimensional real latent temporal embeddings as an input to the supervisor neural network to generate single-step-ahead high dimensional real latent temporal embeddings, wherein the supervisor neural network is trained using the supervised learning approach (208c), and
providing the high dimensional real latent temporal embeddings as an input to the critic neural network to predict a target variable, wherein the critic neural network is trained using the supervised learning approach (208d);
determining, via the one or more hardware processors, a cluster label dependent random noise by transforming a Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data (210);
computing, via the one or more hardware processors, a conditioned knowledge vector corresponding to a pre-determined label value for each discrete variable (212);
concatenating, via the one or more hardware processors, the cluster label dependent random noise with the conditioned knowledge vector to generate a condition aware synthetic noise (214);
jointly training, via the one or more hardware processors, adversarial neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network utilizing the condition aware synthetic noise (216), wherein the training comprises:
providing the condition aware synthetic noise as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings (216a),
providing the high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings (216b),
providing the high dimensional synthetic latent temporal embeddings to the trained critic neural network to predict the synthetic target variable (216c), and
providing the predicted single-step ahead synthetic temporal latent embeddings as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data (216d);
providing, via the one or more hardware processors, the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings as an input to the sequence discriminator neural network to classify them as one of a real or a fake, and predict cluster labels for synthetic data (218);
providing, via the one or more hardware processors, a real world condition aware synthetic noise as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings (220);
providing, via the one or more hardware processors, the real world high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings (222); and
providing, via the one or more hardware processors, the real world predicted single-step ahead synthetic temporal latent embeddings as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data (224).
2. The processor implemented method of claim 1 further comprising minimizing the discrepancy between the real input temporal data and the mixed variable type multivariate temporal synthetic data using the embedding neural network and the recovery neural network modules.
3. The processor implemented method of claim 1, wherein a conditioned knowledge vector is configured to incorporate the condition into the Constraint-Conditional Generative Adversarial Network (ccGAN) framework.
4. The processor implemented method of claim 1 further comprising providing the validation dataset as an input to the trained ccGAN to tune a set of hyperparameters.
5. A system (100) for generating mixed variable type multivariate temporal synthetic data, the system comprises:
an input/output interface (104);
one or more hardware processors (108); and
a memory (110) in communication with the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the memory, to:
provide mixed variable type multivariate temporal real time data as an input data, wherein the mixed variable type comprises continuous variables and discrete variables;
pre-process the input data by scaling to a fixed range for both the continuous variables and the discrete variables;
split the pre-processed data into a training dataset, a validation dataset and a test dataset;
train a joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network utilizing the training dataset, wherein the autoencoding-decoding component comprises an embedding neural network and a recovery neural network, the training comprises:
providing the training dataset as an input to the embedding neural network to generate high dimensional real latent temporal embeddings,
providing the high dimensional real latent temporal embeddings as an input to the recovery neural network to get a reconstructed input training dataset, wherein the embedding and the recovery neural networks are jointly trained using a supervised learning approach for reconstructing the training dataset,
providing the high dimensional real latent temporal embeddings as an input to the supervisor neural network to generate single-step-ahead high dimensional real latent temporal embeddings, wherein the supervisor neural network is trained using the supervised learning approach, and
providing the high dimensional real latent temporal embeddings as an input to the critic neural network to predict a target variable, wherein the critic neural network is trained using the supervised learning approach;
determine a cluster label dependent random noise by transforming Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data;
compute a conditioned knowledge vector corresponding to a pre-determined label value for each discrete variable;
concatenate the cluster label dependent random noise with the conditioned knowledge vector to generate a condition aware synthetic noise;
jointly train adversarial neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network utilizing the condition aware synthetic noise, wherein the training comprises:
providing the condition aware synthetic noise as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings,
providing the high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings,
providing the high dimensional synthetic latent temporal embeddings to the trained critic neural network to predict the synthetic target variable, and
providing the predicted single-step ahead synthetic temporal latent embeddings as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data;
provide the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings as input to the sequence discriminator neural network to classify them as one of a real or a fake, and predict the cluster labels for synthetic data;
provide a real world condition aware synthetic noise as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings;
provide the real world high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings; and
provide the real world predicted single-step ahead synthetic temporal latent embeddings as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
6. The system of claim 5 further configured to minimize the discrepancy between the real input temporal data and the mixed variable type multivariate temporal synthetic data using the embedding neural network and the recovery neural network modules.
7. The system of claim 5, wherein a conditioned knowledge vector is configured to incorporate the condition into the Constraint-Conditional Generative Adversarial Network (ccGAN) framework.
8. The system of claim 5, wherein the one or more hardware processors are further configured to provide the validation dataset as an input to the trained ccGAN to tune a set of hyperparameters.
Dated this 5th day of January 2022
Tata Consultancy Services Limited
By their Agent & Attorney
(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEM AND METHOD FOR GENERATING MIXED VARIABLE TYPE MULTIVARIATE TEMPORAL SYNTHETIC DATA
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to the field of synthetic data generation, and, more particularly, to a method and system for generating mixed variable type multivariate temporal synthetic data.
BACKGROUND
Health monitoring of complex industrial assets remains the most critical task for avoiding downtimes, improving system reliability and safety, and maximizing utilization. Industrial assets rely on large amounts of data for their functioning and operation. There is a rising emphasis in the industry on leveraging the artificial intelligence (AI) driven technology landscape for various activities. One such activity is designing and operating the process twins of various industrial assets. In recent times, deep learning algorithms have been extensively leveraged for modeling complex phenomena in diverse time dependent data fields, including but not limited to financial, medical, weather and process-plant data, for tasks such as classification and anomaly detection. The abundance and quality of data substantially impact the performance of deep learning models.
Deep learning-driven generative models encapsulate, through adversarial training with adversarial losses, the operational behavior of complex large-scale industrial-plant or asset multivariate time series data. The generated information helps to study industrial plant performance and the life-cycle operating conditions of industrial assets, aiding prognostics, optimization, and predictive maintenance.
Recent works in time-series synthetic data generation include methods which are able to generate time series sequences but have several inherent limitations for realistic applications. The existing tools for multivariate data synthesis do not utilize a unified approach.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for generating mixed variable type multivariate temporal synthetic data is provided. The system comprises an input/output interface, one or more hardware processors and a memory. The memory is in communication with the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the memory, to: provide mixed variable type multivariate temporal real time data as an input data, wherein the mixed variable type comprises continuous variables and discrete variables; pre-process the input data by scaling to a fixed range for both the continuous variables and the discrete variables; split the pre-processed data into a training dataset, a validation dataset and a test dataset; train a joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network utilizing the training dataset, wherein the autoencoding-decoding component comprises an embedding neural network and a recovery neural network, the training comprises: providing the training dataset as an input to the embedding neural network to generate high dimensional real latent temporal embeddings, providing the high dimensional real latent temporal embeddings as an input to the recovery neural network to get a reconstructed input training dataset, wherein the embedding and the recovery neural networks are jointly trained using a supervised learning approach for reconstructing the training dataset, providing the high dimensional real latent temporal embeddings as an input to the supervisor neural network to generate single-step-ahead high dimensional real latent temporal 
embeddings, wherein the supervisor neural network is trained using the supervised learning approach, and providing the high dimensional real latent temporal embeddings as an input to the critic neural network to predict a target variable, wherein the critic neural network is trained using the supervised learning approach; determine a cluster label dependent random noise by transforming Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data; compute a conditioned knowledge vector corresponding to a pre-determined label value for each discrete variable; concatenate the cluster label dependent random noise with the conditioned knowledge vector to generate a condition aware synthetic noise; jointly train adversarial neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network utilizing the condition aware synthetic noise, wherein the training comprises: providing the condition aware synthetic noise as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained critic neural network to predict the synthetic target variable, and providing the predicted single-step ahead synthetic temporal latent embeddings as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data; provide the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings as an input to the sequence discriminator neural network to classify them as one of a real or 
a fake, and predict the cluster labels for synthetic data; provide a real world condition aware synthetic noise as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings; provide the real world high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings; and provide the real world predicted single-step ahead synthetic temporal latent embeddings as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
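The phase-one supervised training flow summarized above can be illustrated with a minimal, hypothetical sketch. The four components (embedding, recovery, supervisor and critic networks) come from the disclosure; representing each as a single random linear map, the chosen dimensions, and the explicit loss computations are simplifying assumptions made purely for illustration (an actual implementation would use recurrent networks trained by gradient descent):

```python
import random

random.seed(0)

# Hypothetical stand-ins for the four phase-one components. Each is a
# plain linear map here; the disclosure's networks are recurrent.
def make_linear(n_in, n_out):
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def apply(weights, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

FEATURES, LATENT = 4, 8
embedder   = make_linear(FEATURES, LATENT)  # data -> real latent temporal embeddings
recovery   = make_linear(LATENT, FEATURES)  # latent embeddings -> reconstructed data
supervisor = make_linear(LATENT, LATENT)    # latent at t -> predicted latent at t+1
critic     = make_linear(LATENT, 1)         # latent -> predicted target variable

# One toy multivariate sequence (a list of timesteps).
sequence = [[random.random() for _ in range(FEATURES)] for _ in range(5)]

latents = [apply(embedder, x) for x in sequence]               # embedding step
reconstructed = [apply(recovery, h) for h in latents]          # recovery step
one_step_ahead = [apply(supervisor, h) for h in latents[:-1]]  # supervisor step
targets = [apply(critic, h)[0] for h in latents]               # critic step

# Supervised losses that the joint training would minimize.
recon_loss = sum((a - b) ** 2 for x, r in zip(sequence, reconstructed)
                 for a, b in zip(x, r)) / (len(sequence) * FEATURES)
superv_loss = sum((a - b) ** 2 for h, p in zip(latents[1:], one_step_ahead)
                  for a, b in zip(h, p)) / (len(one_step_ahead) * LATENT)
```

The two losses correspond to the reconstruction objective of the embedding-recovery pair and the one-step-ahead objective of the supervisor.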
In another aspect, a method for generating mixed variable type multivariate temporal synthetic data is provided. Initially, mixed variable type multivariate temporal real time data is provided as an input data, wherein the mixed variable type comprises continuous variables and discrete variables. Further, the input data is pre-processed by scaling to a fixed range for both the continuous variables and the discrete variables. In the next step, the pre-processed data is split into a training dataset, a validation dataset and a test dataset. A joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network is then trained utilizing the training dataset, wherein the autoencoding-decoding component comprises an embedding neural network and a recovery neural network. The training comprises: providing the training dataset as an input to the embedding neural network to generate high dimensional real latent temporal embeddings, providing the high dimensional real latent temporal embeddings as an input to the recovery neural network to get a reconstructed input training dataset, wherein the embedding and the recovery neural networks are jointly trained using a supervised learning approach for reconstructing the training dataset, providing the high dimensional real latent temporal embeddings as an input to the supervisor neural network to generate single-step-ahead high dimensional real latent temporal embeddings, wherein the supervisor neural network is trained using the supervised learning approach, and providing the high dimensional real latent temporal embeddings as an input to the critic neural network to predict a target variable, wherein the critic neural network is trained using the supervised learning approach. 
In the next step, a cluster label dependent random noise is determined by transforming Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data. Further, a conditioned knowledge vector is computed corresponding to a pre-determined label value for each discrete variable. In the next step, the cluster label dependent random noise is concatenated with the conditioned knowledge vector to generate a condition aware synthetic noise. Neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network are then jointly trained utilizing the condition aware synthetic noise. The training comprises: providing the condition aware synthetic noise as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained critic neural network to predict the synthetic target variable, and providing the predicted single-step ahead synthetic temporal latent embeddings as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data. Further, the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings are provided as an input to the sequence discriminator neural network to classify them as one of a real or a fake, and predict the cluster labels for synthetic data. 
In the next step, a real world condition aware synthetic noise is provided as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings. Further, the real world high dimensional synthetic latent temporal embeddings are provided to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings. Finally, the real world predicted single-step ahead synthetic temporal latent embeddings are provided as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
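The construction of the condition aware synthetic noise described above can be sketched as follows. The per-label mean shift used to make the Gaussian noise cluster-label dependent, and the one-hot encoding of the conditioned knowledge vector, are illustrative assumptions only; the disclosure does not fix these particular transformations:

```python
import random

random.seed(1)
NOISE_DIM = 6        # dimensionality of the Gaussian noise (assumption)
DISCRETE_LEVELS = 4  # label values of one discrete variable (assumption)

def cluster_dependent_noise(label):
    # Transform Gaussian random noise with a fixed predetermined cluster
    # label; a per-label mean shift is an assumed, illustrative transform.
    return [random.gauss(0.0, 1.0) + float(label) for _ in range(NOISE_DIM)]

def conditioned_knowledge_vector(label_value):
    # One-hot encoding of the pre-determined label value (assumption).
    return [1.0 if i == label_value else 0.0 for i in range(DISCRETE_LEVELS)]

def condition_aware_synthetic_noise(cluster_label, discrete_label):
    z = cluster_dependent_noise(cluster_label)        # label-dependent noise
    c = conditioned_knowledge_vector(discrete_label)  # conditioned knowledge
    return z + c                                      # concatenation

noise = condition_aware_synthetic_noise(cluster_label=2, discrete_label=1)
```

The resulting vector has length `NOISE_DIM + DISCRETE_LEVELS` and is what the sequence generator would consume as input.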
In yet another aspect, one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause generating mixed variable type multivariate temporal synthetic data, are provided. Initially, mixed variable type multivariate temporal real time data is provided as an input data, wherein the mixed variable type comprises continuous variables and discrete variables. Further, the input data is pre-processed by scaling to a fixed range for both the continuous variables and the discrete variables. In the next step, the pre-processed data is split into a training dataset, a validation dataset and a test dataset. A joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network is then trained utilizing the training dataset, wherein the autoencoding-decoding component comprises an embedding neural network and a recovery neural network. 
The training comprises: providing the training dataset as an input to the embedding neural network to generate high dimensional real latent temporal embeddings, providing the high dimensional real latent temporal embeddings as an input to the recovery neural network to get a reconstructed input training dataset, wherein the embedding and the recovery neural networks are jointly trained using a supervised learning approach for reconstructing the training dataset, providing the high dimensional real latent temporal embeddings as an input to the supervisor neural network to generate single-step-ahead high dimensional real latent temporal embeddings, wherein the supervisor neural network is trained using the supervised learning approach, and providing the high dimensional real latent temporal embeddings as an input to the critic neural network to predict a target variable, wherein the critic neural network is trained using the supervised learning approach. In the next step, a cluster label dependent random noise is determined by transforming Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data. Further, a conditioned knowledge vector is computed corresponding to a pre-determined label value for each discrete variable. In the next step, the cluster label dependent random noise is concatenated with the conditioned knowledge vector to generate a condition aware synthetic noise. Neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network are then jointly trained utilizing the condition aware synthetic noise. 
The training comprises: providing the condition aware synthetic noise as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings, providing the high dimensional synthetic latent temporal embeddings to the trained critic neural network to predict the synthetic target variable, and providing the predicted single-step ahead synthetic temporal latent embeddings as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data. Further, the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings are provided as an input to the sequence discriminator neural network to classify them as one of a real or a fake, and predict the cluster labels for synthetic data. In the next step, a real world condition aware synthetic noise is provided as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings. Further, the real world high dimensional synthetic latent temporal embeddings are provided to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings. Finally, the real world predicted single-step ahead synthetic temporal latent embeddings are provided as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
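The pre-processing step referred to in each aspect, scaling both continuous and discrete variables to a fixed range, can be sketched with a simple min-max transform. The choice of min-max scaling to [0, 1] is an assumed concrete instance; the disclosure only requires scaling to a fixed range:

```python
def scale_to_fixed_range(column, lo=0.0, hi=1.0):
    # Min-max scale one variable (continuous or discrete) to [lo, hi].
    cmin, cmax = min(column), max(column)
    span = (cmax - cmin) or 1.0  # guard against constant columns
    return [lo + (hi - lo) * (v - cmin) / span for v in column]

continuous = [12.5, 80.0, 45.1, 99.9]  # e.g. a sensor reading (assumed values)
discrete = [0, 2, 1, 2]                # e.g. an operating-mode label

scaled_continuous = scale_to_fixed_range(continuous)
scaled_discrete = scale_to_fixed_range(discrete)
```

Applying the same transform to both variable types keeps every input column on a common scale before it reaches the embedding network.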
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates a block diagram of a system for generating mixed variable type multivariate temporal synthetic data according to some embodiments of the present disclosure.
FIG. 2A through FIG. 2C illustrate a flowchart of steps involved in generating mixed variable type multivariate temporal synthetic data according to some embodiments of the present disclosure.
FIG. 3 is a block diagram of an embedding and recovery module according to some embodiments of the present disclosure.
FIG. 4 is a block diagram showing unsupervised learning of the generator neural network, the supervisor neural network and the recovery neural network according to some embodiments of the present disclosure.
FIG. 5 is a block diagram of a generator module according to some embodiments of the present disclosure.
FIG. 6 is a block diagram of a discriminator module according to some embodiments of the present disclosure.
FIG. 7 is a block diagram of a critic module according to some embodiments of the present disclosure.
FIG. 8 is a block diagram of a supervisor module according to some embodiments of the present disclosure.
FIG. 9 is a block diagram showing training of the joint network in supervised-learning approach according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Health monitoring of complex industrial assets remains the most critical task for avoiding downtimes, improving system reliability and safety, and maximizing utilization. Deep learning-driven generative models encapsulate, through adversarial training with adversarial losses, the operational behavior of the complex large-scale industrial-plant or asset multivariate time series data. Recent advances in time-series synthetic data generation have several inherent limitations for realistic applications. The existing solutions do not provide a unified approach and do not generate realistic data which can be used in industrial processes. Further, the existing solutions are not able to incorporate condition and constraint prior knowledge while sampling the synthetic data.
The present disclosure provides a method and system for generating mixed variable type multivariate temporal synthetic data. The system provides a framework for condition and constraint knowledge-driven synthetic data generation of real-world industrial mixed-data type multivariate time-series data. The framework consists of a generative time-series model, which is trained adversarially (the generator network player is continuously trained to generate samples that have a low probability of being unrealistic, in contrast to the discriminator player, which is trained to classify both the true data and the synthetic data from the generator player) and jointly through a learned latent embedding space with both supervised and unsupervised losses. The key challenges are encapsulating the distributions of mixed-data type variables and the correlations within each timestamp, as well as the temporal dependencies of those variables across time frames.
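The adversarial interplay described in parentheses above can be sketched with the standard binary cross-entropy losses of the two players. The specific loss form and the example discriminator outputs below are illustrative assumptions, not the exact objectives of the disclosed ccGAN:

```python
import math

def bce(p, y):
    # Binary cross-entropy for one prediction p in (0, 1) and label y.
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# Assumed discriminator outputs: probability that a sample is real.
d_real = [0.9, 0.8, 0.95]  # on true data (should be near 1)
d_fake = [0.2, 0.1, 0.3]   # on generator samples (should be near 0)

# Discriminator player: classify true data as real and synthetic as fake.
d_loss = (sum(bce(p, 1.0) for p in d_real) +
          sum(bce(p, 0.0) for p in d_fake)) / (len(d_real) + len(d_fake))

# Generator player: drive its samples toward a low probability of being
# classified unrealistic, i.e. push the discriminator output toward 'real'.
g_loss = sum(bce(p, 1.0) for p in d_fake) / len(d_fake)
```

With these example outputs the generator loss exceeds the discriminator loss, reflecting a discriminator that currently distinguishes the two data sources well.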
The present disclosure addresses the key desideratum in diverse time dependent data fields where data availability, data accuracy, precision, timeliness, and completeness are of prior importance in improving the performance of the deep learning models.
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 9, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
According to an embodiment of the disclosure, FIG. 1 illustrates a block diagram of a system 100 for generating mixed variable type multivariate temporal synthetic data. The system 100 comprises a generative model, built through an adversarial training process between generator and discriminator network players, that provides a unified algorithmic approach drawing from autoregressive models for sequence prediction, Generative Adversarial Network (GAN) based methods for sequence generation, and time-series representation learning.
It may be understood that the system 100 comprises one or more computing devices 102, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. It will be understood that the system 100 may be accessed through one or more input/output interfaces 104, collectively referred to as I/O interface 104 or user interface 104. Examples of the I/O interface 104 may include, but are not limited to, a user interface, a portable computer, a personal digital assistant, a handheld device, a smartphone, a tablet computer, a workstation and the like. The I/O interface 104 is communicatively coupled to the system 100 through a network 106.
In an embodiment, the network 106 may be a wireless or a wired network, or a combination thereof. In an example, the network 106 can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 106 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, and storage devices. The network devices within the network 106 may interact with the system 100 through communication links.
The system 100 may be implemented in a workstation, a mainframe computer, a server, and a network server. In an embodiment, the computing device 102 further comprises one or more hardware processors 108, one or more memory 110, hereinafter referred as a memory 110 and a data repository 112, for example, a repository 112. The memory 110 is in communication with the one or more hardware processors 108, wherein the one or more hardware processors 108 are configured to execute programmed instructions stored in the memory 110, to perform various functions as explained in the later part of the disclosure. The repository 112 may store data processed, received, and generated by the system 100. The memory 110 further comprises a plurality of modules for performing various functions. The plurality of modules comprises an embedding and recovery module 114, a generator module 116, a discriminator module 118, a critic module 120, and a supervisor module 122.
The system 100 supports various connectivity options such as BLUETOOTH®, USB, ZigBee and other cellular services. The network environment enables connection of various components of the system 100 using any communication link including Internet, WAN, MAN, and so on. In an exemplary embodiment, the system 100 is implemented to operate as a stand-alone device. In another embodiment, the system 100 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 100 are described further in detail.
FIG. 2A through FIG. 2C illustrate an example flow chart of a method 200 for generating mixed variable type multivariate temporal synthetic data, in accordance with an example embodiment of the present disclosure. The method 200 depicted in the flow chart may be executed by a system, for example, the system 100 of FIG. 1. In an example embodiment, the system 100 may be embodied in the computing device.
Operations of the flowchart, and combinations of operations in the flowchart, may be implemented by various means, such as hardware, firmware, processor, circuitry and/or other device associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described in various embodiments may be embodied by computer program instructions. In an example embodiment, the computer program instructions, which embody the procedures, described in various embodiments may be stored by at least one memory device of a system and executed by at least one processor in the system. Any such computer program instructions may be loaded onto a computer or other programmable system (for example, hardware) to produce a machine, such that the resulting computer or other programmable system embody means for implementing the operations specified in the flowchart. It will be noted herein that the operations of the method 200 are described with help of system 100. However, the operations of the method 200 can be described and/or practiced by using any other system.
Initially at step 202 of the method 200, mixed variable type multivariate temporal real time data is provided as an input data. The mixed variable type comprises continuous variables and discrete variables.
Further at step 204, the input data is pre-processed by scaling to a fixed range for both the continuous variables and the discrete variables. The data pre-processing involves encoding the continuous independent and dependent feature variables by scaling to the fixed range [0, 1] using the min-max scaling technique. The discrete categorical feature attributes are represented as binary vectors through the one-hot encoding technique. At step 206, the pre-processed data is split into a training dataset, a validation dataset and a test dataset. The training dataset is used to train multiple neural networks. The validation dataset is used to tune a set of hyperparameters.
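As an illustration, the pre-processing of step 204 can be sketched as follows. This is a minimal sketch only; the helper names `min_max_scale` and `one_hot` and the toy data are illustrative, not part of the disclosed system.

```python
import numpy as np

def min_max_scale(x):
    """Scale a continuous column to the fixed range [0, 1] (step 204)."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

def one_hot(labels, categories):
    """Represent a discrete categorical column as binary one-hot vectors."""
    cat_index = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(labels), len(categories)))
    for row, lab in enumerate(labels):
        out[row, cat_index[lab]] = 1.0
    return out

# toy mixed-type data: one continuous and one discrete variable
cont = np.array([10.0, 20.0, 30.0])
disc = ["on", "off", "on"]
scaled = min_max_scale(cont)            # continuous values mapped into [0, 1]
encoded = one_hot(disc, ["on", "off"])  # each row is a binary one-hot vector
```

Min-max scaling keeps both variable types in a common numeric range, which eases joint training of the networks that follow.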
Further at step 208 of the method 200, a joint neural network of an autoencoding-decoding component of a Constraint-Condition-Generative Adversarial Network (ccGAN), a supervisor neural network and a critic neural network is trained utilizing the training dataset. The autoencoding-decoding component comprises an embedding neural network and a recovery neural network. The training comprises the learning of optimum learnable parameters of the embedding neural network, the recovery neural network, the supervisor neural network and the critic neural network.
At step 208a, the training dataset is provided as an input to the embedding neural network to generate high dimensional real latent temporal embeddings. At step 208b, the high dimensional real latent temporal embeddings are provided as an input to the recovery neural network to get a reconstructed input training dataset. The embedding and the recovery neural network are jointly trained using a supervised learning approach for reconstructing the training dataset. At step 208c, the high dimensional real latent temporal embeddings are provided as an input to the supervisor neural network to generate a single-step-ahead high dimensional real latent temporal embeddings. The supervisor neural network is trained using the supervised learning approach. And at step 208d, the high dimensional real latent temporal embeddings are provided as an input to the critic neural network to predict a target variable. The critic neural network is trained using the supervised learning approach.
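The data flow of steps 208a through 208d can be sketched with simple stand-in networks. Here random linear maps replace the trained recurrent networks, and all weight names and shapes are illustrative assumptions, not the disclosed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, f, h = 8, 3, 5          # sequence length, feature count, latent width

# illustrative stand-ins for the learnable networks (weights would be trained)
W_emb = rng.normal(size=(f, h))   # embedding neural network
W_rec = rng.normal(size=(h, f))   # recovery neural network
W_sup = rng.normal(size=(h, h))   # supervisor neural network
w_cri = rng.normal(size=h)        # critic neural network (scalar target)

x = rng.random((T, f))            # one pre-processed training sequence

H = np.tanh(x @ W_emb)                        # step 208a: real latent embeddings
x_rec = 1 / (1 + np.exp(-(H @ W_rec)))        # step 208b: reconstructed input
H_next = np.tanh(H[:-1] @ W_sup)              # step 208c: one-step-ahead embeddings
y_hat = H @ w_cri                             # step 208d: critic target prediction

recon_loss = np.mean((x - x_rec) ** 2)        # drives embedding + recovery jointly
sup_loss = np.mean((H[1:] - H_next) ** 2)     # supervised loss for the supervisor
```

Minimizing `recon_loss` and `sup_loss` corresponds to the supervised learning approach described for steps 208b and 208c.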
Further at step 210 of the method 200, a cluster label dependent random noise is determined by transforming Gaussian random noise with fixed predetermined cluster labels, wherein the Gaussian random noise is part of the input data. At step 212, a conditioned knowledge vector is computed corresponding to a pre-determined label value for each discrete variable. At step 214 the cluster label dependent random noise is concatenated with the conditioned knowledge vector to generate a condition aware synthetic noise.
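Steps 210 through 214 can be sketched as below. The label-dependent shift used to transform the Gaussian noise is a hypothetical stand-in for the framework's actual transform, and the label set is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, f = 6, 3                        # timesteps and noise dimensions

# step 210: transform Gaussian random noise with fixed predetermined cluster
# labels (a simple label-dependent shift stands in for the learned transform)
z = rng.standard_normal((T, f))
labels = np.array([0, 0, 1, 1, 0, 1])
z_cluster = z + labels[:, None]    # cluster-label dependent random noise

# step 212: conditioned knowledge vector for one discrete variable with the
# (assumed) label set l_j = {"low", "high"}, conditioning on the value "high"
l_j = ["low", "high"]
c_v = np.zeros(len(l_j))
c_v[l_j.index("high")] = 1.0       # one-hot condition vector

# step 214: concatenate the noise with the condition vector per timestep
cond_noise = np.hstack([z_cluster, np.tile(c_v, (T, 1))])
```

The resulting `cond_noise` plays the role of the condition aware synthetic noise fed to the sequence generator in step 216.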
Further at step 216 of the method 200, following neural networks of the Constraint-Condition aware Generative Adversarial Network (ccGAN), a sequence generator neural network, a sequence discriminator neural network, the supervisor neural network and the critic neural network are jointly trained utilizing the condition aware synthetic noise. The training comprises: initially at step 216a, the condition aware synthetic noise is provided as an input to the sequence generator neural network to get high dimensional synthetic latent temporal embeddings. At step 216b, the high dimensional synthetic latent temporal embeddings are provided to the trained supervisor neural network to predict single-step ahead synthetic temporal latent embeddings. At step 216c, the high dimensional synthetic latent temporal embeddings are provided to the trained critic neural network to predict the synthetic target variable. And at step 216d, the predicted single-step ahead synthetic temporal latent embeddings are provided as an input to the recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
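The generation path of steps 216a through 216d can be sketched with the same kind of stand-in networks; the weights here are random placeholders for the trained generator, supervisor, critic and recovery networks.

```python
import numpy as np

rng = np.random.default_rng(2)
T, z_dim, h, f = 6, 5, 4, 3        # illustrative dimensions

W_gen = rng.normal(size=(z_dim, h))   # sequence generator (stand-in weights)
W_sup = rng.normal(size=(h, h))       # trained supervisor (stand-in)
w_cri = rng.normal(size=h)            # trained critic (stand-in)
W_rec = rng.normal(size=(h, f))       # trained recovery (stand-in)

noise = rng.random((T, z_dim))        # condition aware synthetic noise
H_syn = np.tanh(noise @ W_gen)        # 216a: synthetic latent embeddings
H_step = np.tanh(H_syn @ W_sup)       # 216b: one-step-ahead synthetic embeddings
y_syn = H_syn @ w_cri                 # 216c: synthetic target variable
x_syn = 1 / (1 + np.exp(-(H_step @ W_rec)))  # 216d: synthetic mixed-type data
```

The sigmoid in the final step keeps the synthetic output in the same [0, 1] range as the pre-processed real data.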
Further, at step 218 of the method 200, the high dimensional real latent temporal embeddings and the high dimensional synthetic latent temporal embeddings are provided as an input to the sequence discriminator neural network, which classifies them as either real or fake and predicts the cluster labels for the synthetic data. It should be appreciated that the validation dataset is utilized as an input to the trained ccGAN to tune a set of hyperparameters.
Further at step 220 of the method 200, real world condition aware synthetic noise is provided as an input to the trained sequence generator neural network to get real world high dimensional synthetic latent temporal embeddings. At step 222, real world high dimensional synthetic latent temporal embeddings are provided to the trained supervisor neural network to predict real world single-step ahead synthetic temporal latent embeddings. And finally, at step 224, the real world predicted single-step ahead synthetic temporal latent embeddings are provided as an input to the trained recovery neural network to generate the mixed variable type multivariate temporal synthetic data.
According to an embodiment of the disclosure, the system 100 can be explained with the help of a problem-solution approach. To formulate the problem, consider a mixed-feature f-dimensional time series dataset D observed over T_n×2N timepoints. D is described by (D_(1), D_(2), …, D_(T_n×2N)) and observed at timepoints t ∈ {1, 2, …, T_n×2N}. The observations of the f stochastic variables at the t-th timepoint are given by D_t = (D_t^((1)), …, D_t^((f))) ∈ 𝔻^((f)). D_t^((j)), ∀ j ∈ {1, 2, …, f}, denotes the observed value of the j-th feature variable of D_t. The observed dataset D comprises c continuous random variables, {1, …, c} ⊂ f, and d categorical stochastic variables, {(c+1), …, d} ⊂ f. f (= c + d) denotes the total number of feature variables in the real dataset D. The mixed-feature f-dimensional space is denoted by 𝔻 and is given by 𝔻 = ∏_(j=1)^f 𝔻_((j)). In general, the synthetic data generative neural network function G_n is learned by modeling the joint distribution P(D^((1:c,c+1:d))) of the mixed-feature multivariate time series data D to generate synthetic data D̃, which is modeled by P̂(D̃^((1:c,c+1:d))). D̃ is determined by solving a two-player mini-max optimization schema through adversarial training in a competitive game setting. It is expressed as below,
min_(G_n) max_(D_n) [E_(D∼P)[log D_n(D)] + E_(D̃∼P̂)[log(1 − D_n(D̃))]] ……….. [1]
D_n denotes the sequence discriminator neural network. After training G_n on D, D̃ is determined independently by sampling sequences using G_n. Traditional synthetic data generative algorithms solved through equation [1] have several drawbacks: they fail to retain the joint distributions of mixed-feature type real data and the temporal dynamics of the real data, to preserve the relationship between the independent feature variables and the target variable in the real data, and to incorporate condition and constraint prior knowledge for sampling synthetic data. The sampled D̃ therefore suffers from lackluster utility in downstream tasks. In the present disclosure, the ccGAN algorithmic architecture, operating on the rearranged multivariate mixed-feature dataset D_(n,1:T_n), addresses these shortcomings by incorporating condition and constraint knowledge into the generative algorithm and preserving the characteristics of the observed mixed-feature data in the synthetic data. The observed dataset D is rearranged as,
D_(n,1:T_n), ∀ n ∈ {1, …, 2N} …………. [2]
The cardinality of the real dataset D_(n,1:T_n) ∈ ℝ^(T_n×f), ∀ n ∈ {1, …, 2N}, is 2N. Consider n = 1: D_(1,1:T_1) ∈ 𝔻^(T_1×f), where T_1 denotes the finite length of the sequence n = 1. D_(1,1:T_1) consists of observations of D_t at timepoints t ∈ {1, 2, …, T_1}. In the same way, for the sequence n = 2, D_(2,1:T_2) ∈ 𝔻^(T_2×f) is an array of arrays which consists of observations of D_t at timepoints t ∈ {(T_1+1), …, (T_1+T_2)}. The finite length T_n of each sequence, ∀ n ∈ {1, …, 2N}, is a stochastic variable; here T_n is held constant across the sequences, ∀ n ∈ {1, …, 2N}, and is a hyperparameter of the hybrid learning algorithm. Let D_(n,1:T_n) be the continuous observed multivariate time series of length T_n of the mixed-feature variables f for a given sequence n, with D_(n,t) = {D_(n,t)^((1)), D_(n,t)^((2)), …, D_(n,t)^((f))}. D_(n,t) ∈ 𝔻^((f)) holds the n-th data sequence observation values of the feature variables f at the t-th timepoint, t ∈ 1:T_n. D_(n,t)^((j)) ∈ 𝔻_((j)), t ∈ 1:T_n, denotes the observed value of the j-th feature variable in D_(n,t) ∈ 𝔻^((f)). The real dataset D_(n,1:T_n) of cardinality 2N is split into a training dataset D_train_(n,1:T_n) and a test dataset D_test_(n,1:T_n), each of cardinality N, without random shuffling of D_(n,1:T_n). D_train_(n,1:T_n) is modeled with an unknown joint distribution P(D_train_(n,1:T_n)^((1:c,c+1:d))). The synthetic dataset generated by the ccGAN neural network architecture is denoted by D̃_(n,1:T_n), ∀ n ∈ {1, …, N}; the size of D̃_(n,1:T_n) is N. Say D̃_(n,1:T_n) is modeled by the distribution P̂(D̃_(n,1:T_n)^((1:c,c+1:d))). In the present disclosure, the mixed-feature training data D_train_(n,1:T_n) is used to learn a density P̂(D̃_(n,1:T_n)^((1:c,c+1:d))) that best approximates P(D_train_(n,1:T_n)^((1:c,c+1:d))).
It is mathematically described by minimizing the weighted sum of the Kullback-Leibler (KL) divergence and the Wasserstein distance function (W) of order 1 defined between the continuous probability distributions of the original data observations D_train_(n,1:T_n) and the synthetic data D̃_(n,1:T_n). The mathematical description is as follows,
min_P̂ KL(P(D_train_(n,t)^((1:c,c+1:d))) ‖ P̂(D̃_(n,t)^((1:c,c+1:d)))) + λ W(P(D_train_(n,t)^((1:c,c+1:d))) ‖ P̂(D̃_(n,t)^((1:c,c+1:d)))) ……… [3]
In modeling mixed-data type multivariate temporal data, take n = 1 for convenience: D_(1,1:T_1) = (D_1, …, D_(T_1)) ∈ 𝔻^(T_1×f). The intent is to precisely represent the conditional distribution P(D_(1,t) | D_(1,1:t−1)), t ∈ 1:T_1, in the generated synthetic data. A desideratum of the disclosed framework is also to preserve the temporal dynamics of the real data. This is obtained by matching the conditionals, and the mathematical description is as follows,
min_P̂ KL(P(D_train_(n,t)^((1:c,c+1:d)) | D_train_(n,1:t−1)^((1:c,c+1:d))) ‖ P̂(D̃_(n,t)^((1:c,c+1:d)) | D̃_(n,1:t−1)^((1:c,c+1:d)))) + λ W(P(D_train_(n,t)^((1:c,c+1:d)) | D_train_(n,1:t−1)^((1:c,c+1:d))) ‖ P̂(D̃_(n,t)^((1:c,c+1:d)) | D̃_(n,1:t−1)^((1:c,c+1:d)))), t ∈ 1:T_n …[4]
The synthetic data generative neural network architecture should also preserve the relationship between the independent feature variables f_c ⊂ f and the target variable f_T ∈ f of the temporal real data, and it is described by,
min_P̂ KL(P(D_train_(n,t)^(f_T) | D_train_(n,t)^(f_c)) ‖ P̂(D̃_(n,t)^(f_T) | D̃_(n,t)^(f_c))) + λ W(P(D_train_(n,t)^(f_T) | D_train_(n,t)^(f_c)) ‖ P̂(D̃_(n,t)^(f_T) | D̃_(n,t)^(f_c))), t ∈ 1:T_n ……… [5]
According to an embodiment of the disclosure, the Constraint-Conditional Generative Adversarial Network (ccGAN) comprises the following neural network modules or neural networks: an embedding neural network, a recovery neural network, a sequence generator neural network, a supervisor neural network, a critic neural network, and a sequence discriminator neural network, as mentioned above.
According to an embodiment of the disclosure, an embedding and recovery module is configured to train the embedding neural network and the recovery neural network. The embedding module performs feature embedding by mapping the low dimensional temporal sequences to their corresponding high dimensional latent variables, E_ccGAN: D_train_(n,1:T_n) ∈ ∏_t ∏_(j=1)^f 𝔻_((j)) → H_train_(n,1:T_n) ∈ ∏_t ∏_(j=1)^f ℍ_((j)), ∀ n ∈ {1, …, N}. 𝔻_((j)) and ℍ_((j)) denote the real j-th variable feature space and the real j-th variable latent embedding space respectively. Refer to algorithm [1] for the computation of the high dimensional latent variables from the low dimensional feature representations. S and S_m denote the sigmoid and softmax activation functions respectively. e_rnn is an autoregressive neural-net model, realized with a unidirectional recurrent neural network with extended memory. e_f is parameterized by a feed-forward neural network. The recovery function transforms the high dimensional temporal latent variables to their corresponding low-level feature representations, R_ccGAN: H_(n,1:T_n)^* → D_(n,1:T_n)^*, M_(n,1:T_n)^*. M_(n,1:T_n)^* is the binary-valued sparse dataset. The superscript * stands for the real variables, D_train_(n,1:T_n), H_train_(n,1:T_n), M_train_(n,1:T_n), or for the synthetic variables, D̃_(n,1:T_n), Ĥ_(n,1:T_n), M̃_(n,1:T_n), respectively. H_(n,1:T_n)^* ∈ ∏_t ∏_(j=1)^f ℍ_((j)), D_(n,1:T_n)^* ∈ ∏_t ∏_(j=1)^f 𝔻_((j)), and M_(n,1:T_n)^* ∈ {0,1}^([T_n, f_M]), where f_M = ∑_(j=1)^f |l_j|. Refer to algorithm [2] for the computation of the low dimensional feature representations from the high dimensional latent variables. r_rnn is an autoregressive, causal-ordering driven neural-net model, realized with a unidirectional recurrent neural network with extended memory. r_f^c and r_f^d are implemented by feed-forward neural networks. ⊕ denotes the concatenation operator.
The intermediate layers of the recovery module apply a sigmoid function S and a softmax function S_m to output values for the continuous feature variables and for the discrete feature variables respectively (refer to steps 4-5 of algorithm [2]). The d categorical feature variables, {(c+1), …, d} ⊂ f, are transformed to a set of one-hot numeric arrays (refer to step 7 of algorithm [2]).
{ṽ^((1)), …, ṽ^((d))} ……….. [6]
Assume l_j represents the set of discrete labels associated with the j-th categorical feature variable, j ∈ {(c+1), …, d} ⊂ f. |l_j| denotes the size of the set l_j. ṽ^((j)) is described by:
{ṽ^((j)) ∈ {0,1}^(|l_j|) : ∑_(i=1)^(|l_j|) ṽ_i^((j)) = 1}, ∀ j ∈ f ….. [7]
ṽ^((j)) denotes the one-hot vector corresponding to the j-th categorical feature variable, and ṽ_i^((j)) is the i-th scalar value of ṽ^((j)). ṽ_i^((j)) takes the value 1 when the condition i = argmax_i [l_j^((i)) = k], k ∈ l_j, is satisfied, and the rest of the entries are filled with zeros. l_j^((i)) denotes the i-th element of the set l_j. The one-hot vectors of the discrete feature variables, j ∈ {(c+1), …, d}, at each timepoint t are concatenated to obtain the sparse vector M_(n,t) for a data sequence n, ∀ n ∈ {1, …, N} (refer to step 8 of algorithm [2]). The objective of the embedding and recovery modules is to minimize the discrepancy between the input mixed-feature real data D_train_(n,1:T_n) and the data D̃_train_(n,1:T_n) reconstructed from its corresponding high dimensional latent representations H_train_(n,1:T_n), as shown in FIG. 3. This is realized by joint training of the embedding and recovery modules by minimizing the supervised loss described below,
L_R = ∑_(n=1)^N ‖D_train_(n,1:T_n) − D̃_train_(n,1:T_n)‖_2 … [8]
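A numeric sketch of the supervised loss of equation [8] is shown below; the function name and toy array shapes are illustrative assumptions.

```python
import numpy as np

def reconstruction_loss(real, reconstructed):
    """Supervised loss of eq. [8]: sum over the N sequences of the L2
    (Frobenius) norm of the difference between real and reconstructed data."""
    # real, reconstructed: arrays of shape (N, T, f)
    diffs = real - reconstructed
    return sum(np.linalg.norm(d) for d in diffs)

real = np.ones((2, 4, 3))                 # N=2 sequences, T=4 steps, f=3 features
loss_zero = reconstruction_loss(real, real)  # identical data gives zero loss
```

Minimizing this quantity drives the joint training of the embedding and recovery modules.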
The cross-entropy loss in binary classification for predicting the input sparse one-hot encoded matrix is described below,
L_M = −(1/N) ∑_(n=1)^N (M_(n,1:T_n)^* log M̃_(n,1:T_n)^* + (1 − M_(n,1:T_n)^*) log(1 − M̃_(n,1:T_n)^*)) |_((M_(n,t)^((k)))^* = 1) … [9]
The loss L_M is evaluated at the sparse matrix entries (M_(n,t)^((k)))^* = 1, k ∈ 1, …, f_M, where M^* or M̃^* ∈ {0,1}^([T_n, f_M]), f_M = ∑_(j=1)^f |l_j|, ∀ n ∈ {1, …, N}. The superscript * denotes the real matrices, M_train_(n,1:T_n) and the sparse conditional vector c_v, or the synthetic matrices, M̃_train_(n,1:T_n) and M̃_(n,1:T_n). M_train_(n,1:T_n) is the ground-truth one-hot encoded sparse matrix determined for the discrete feature variables D_train_(n,1:T_n)^((c+1:d)) by applying the one-hot encoding technique. M̃_train_(n,1:T_n) denotes the reconstructed binary sparse matrix. β is a hyperparameter.
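Since the loss of equation [9] is evaluated only at entries where the ground-truth sparse matrix equals 1, the (1 − M^*) term vanishes there, and the computation reduces to the sketch below (function name and toy matrices are illustrative).

```python
import numpy as np

def one_hot_bce(m_real, m_syn, eps=1e-7):
    """Cross-entropy of eq. [9], evaluated at the positions where the
    ground-truth one-hot matrix equals 1 (the sparse entries)."""
    mask = m_real == 1
    p = np.clip(m_syn[mask], eps, 1 - eps)   # predicted probabilities there
    return -np.mean(np.log(p))               # (1 - M*) term vanishes at M* = 1

m_real = np.array([[1.0, 0.0], [0.0, 1.0]])  # ground-truth one-hot rows
perfect = one_hot_bce(m_real, m_real)        # perfect reconstruction, near zero
```

A worse prediction at the one-hot positions yields a strictly larger loss value.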
The unsupervised loss L_US is minimized through joint adversarial training of the generator, supervisor and recovery modules in the unsupervised learning approach, as shown in FIG. 4. In contrast to supervised learning, no pre-assigned (ground-truth) labels are available. The unsupervised learning approach extracts the relationships in the real data by matching the first and second order moments of the real data D_train_(n,1:T_n) and the synthetic data D̃_(n,1:T_n). Let D̄_1 = (1/N) ∑_(j=1)^f ∑_(n=1)^N D_train_(n,1:T_n)^((j)) and D̄_2 = (1/N) ∑_(j=1)^f ∑_(n=1)^N D̃_(n,1:T_n)^((j)) denote the sample means of the real and the synthetic data respectively. Let ŝ_1² = (1/N) ∑_(j=1)^f ∑_(n=1)^N (D_train_(n,1:T_n)^((j)) − D̄_1)² and ŝ_2² = (1/N) ∑_(j=1)^f ∑_(n=1)^N (D̃_(n,1:T_n)^((j)) − D̄_2)² denote the sample variances of the real and the synthetic data respectively. The joint adversarial generative moment-matching network comprising the generator, supervisor and recovery modules aids unsupervised inference by enforcing the similarity of the two distributions, P(D_train_(n,1:T_n)^((1:c,c+1:d))) and P̂(D̃_(n,1:T_n)^((1:c,c+1:d))), by minimizing the differences in the first order moments, |D̄_1 − D̄_2|, and the second order moments, |√(ŝ_1²) − √(ŝ_2²)|, between the real and the synthetic data as described below,
L_US = |D̄_1 − D̄_2| + |√(ŝ_1²) − √(ŝ_2²)| …… [10]
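The moment-matching loss of equation [10] can be sketched as below. For brevity this sketch pools all entries into scalar mean and standard deviation statistics, a simplification of the per-feature sums in the definitions above.

```python
import numpy as np

def moment_matching_loss(real, synthetic):
    """Unsupervised loss of eq. [10]: absolute differences of the sample
    means and sample standard deviations of real vs. synthetic data."""
    return (abs(real.mean() - synthetic.mean())
            + abs(real.std() - synthetic.std()))

rng = np.random.default_rng(3)
a = rng.random(1000)
identical = moment_matching_loss(a, a)   # matching data gives zero loss
shifted = moment_matching_loss(a, a + 1.0)  # mean shift of 1 dominates the loss
```

Minimizing this quantity pushes the first and second order moments of the synthetic data toward those of the real data.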
According to an embodiment of the present disclosure, a constraint and condition-aware generator module is configured to incorporate the condition and constraint sampling mechanism into the synthetic data generative neural net. For the finite set of categorical feature variables, {(c+1), …, d} ⊂ f, let k be the categorical label value of the j-th discrete feature variable in the training dataset, D_train_(n,t)^((j)), at the t-th timepoint of the n-th data sequence. The condition-conscious generator neural net G_ccGAN is presented as a sampler of mixed-feature synthetic data D̃_(n,t)^((j)) with prior knowledge of the given k-label value for the j-th discrete feature attribute at the t-th timepoint of the data sequence n. The condition-aware generated samples of G_ccGAN satisfy the conditional distribution criterion, P̂(D̃_(n,t)^((1:c,c+1:d)) | D̃_(n,t)^((j)) = k), j ∈ {(c+1), …, d}, t ∈ 1:T_n and ∀ n ∈ {1, …, N}. G_ccGAN learns the joint conditional probability distribution of the real mixed-feature dataset as expressed below,
P̂(D̃_(n,t)^((1:c,c+1:d)) | D̃_(n,t)^((j)) = k) = P(D_train_(n,t)^((1:c,c+1:d)) | D_train_(n,t)^((j)) = k) …. [11]
The real temporal data distribution can be described as:
P(D_train_(n,t)^((1:c,c+1:d))) = ∑_(k∈l_j) P̂(D̃_(n,t)^((1:c,c+1:d)) | D̃_(n,t)^((j)) = k) P(D_train_(n,t)^((j)) = k) … [12]
The context-free condition embedded vector c_v is presented as a mathematical method for incorporating the condition prior knowledge into the Constraint-Conditional Generative Adversarial Network (ccGAN) framework.
Let m^((j)) be the mask vector corresponding to the j-th categorical feature variable. Note: |l_j| is the cardinality of the set l_j of possible categorical label values for the j-th discrete feature variable.
{m^((j)) ∈ {0,1}^(|l_j|) : ∑_(i=1)^(|l_j|) m_i^((j)) = 1} ……. [13]
m_i^((j)) takes the scalar value 1 in the matching scenario i = argmax_i [l_j^((i)) = k], k ∈ l_j, and the rest of the entries are filled with zeros. Note: l_j^((i)) denotes the i-th element of the set l_j. The conditional vector c_v is determined by,
c_v = {m^((1)) ⊕ … ⊕ m^((d))} …… [14]
c_v is derived to operate only on the discrete feature variables for condition-aware synthetic data generation. During adversarial training, the sparse conditional vector c_v penalizes the generator so that it outputs appropriate synthetic latent embeddings. The supervisor neural net operates on the condition-embedded synthetic latent embeddings and predicts one-step-ahead synthetic temporal latent embeddings. These high dimensional representations are utilized by the recovery function to output the synthetic data D̃_(n,1:T_n)^* and the sparse dataset M̃_(n,1:T_n)^* (refer to steps 6, 8 and 11 of algorithm [2]).
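The construction of the masks of equation [13] and the conditional vector of equation [14] can be sketched as follows; the helper names and toy label sets are illustrative.

```python
import numpy as np

def mask_vector(l_j, k):
    """One-hot mask m^(j) of eq. [13]: a single 1 at the position of
    label k within the label set l_j, zeros elsewhere."""
    m = np.zeros(len(l_j))
    m[l_j.index(k)] = 1.0
    return m

def conditional_vector(label_sets, chosen):
    """Conditional vector c_v of eq. [14]: concatenation of the per-variable
    masks for the chosen label values."""
    return np.concatenate([mask_vector(l, k) for l, k in zip(label_sets, chosen)])

# two discrete variables with label sets {a, b} and {x, y, z},
# conditioning on the values "b" and "x"
label_sets = [["a", "b"], ["x", "y", "z"]]
c_v = conditional_vector(label_sets, ["b", "x"])
# c_v -> [0., 1., 1., 0., 0.]
```

The length of c_v equals the sum of the label-set cardinalities, matching f_M = ∑_j |l_j| for the conditioned variables.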
Let Z_(n,1:T_n) ∈ ℝ^(T_n×f) be an f-dimensional uniformly distributed random variable of length T_n for a sequence n, with values in the half-open interval [0, 1), sampled from a uniform noise Z. The synthetic noise Z_(n,1:T_n), ∀ n ∈ {1, …, N}, is refined based on the cluster labels C_train_(n,1:T_n) ∈ ℝ^(T_n), ∀ n ∈ {1, 2, …, N}. The labels are determined by an iterative centroid-based clustering algorithm that assigns a cluster membership to each observation in the unlabeled dataset, D_train_(n,t), ∀ n ∈ {1, …, N}, t ∈ 1:T_n. The labels are computed by partitioning the observations into one of K fixed, a priori non-overlapping clusters. The adversarial ground-truth labels C_train_(n,1:T_n) of the mixed-feature dataset D_train_(n,1:T_n) are obtained through this unsupervised learning technique.
It is determined by the K-means clustering algorithm as follows:
Initialize the cluster centroids randomly, μ_1, μ_2, …, μ_K ∈ ℝ^f.
Repeat until convergence, so as to minimize the within-cluster sum of squared deviations:
(a) Assignment step: for every observation n, assign the cluster label c^((n)) := argmin_k ‖D_n − μ_k‖²;
(b) Update step: for every cluster k, recompute the centroid μ_k as the mean of the observations currently assigned to cluster k.
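The K-means clustering step used to obtain the cluster labels can be sketched as below. For determinism this sketch uses a simple evenly spaced initialization rather than the random initialization described above; the function name and toy data are illustrative.

```python
import numpy as np

def kmeans_labels(data, k, iters=20):
    """Minimal K-means sketch: assigns each observation one of K cluster
    labels by iterating the assignment and centroid-update steps."""
    # deterministic init for the sketch: evenly spaced data points as centroids
    idx = np.linspace(0, len(data) - 1, k).astype(int)
    centroids = data[idx].astype(float).copy()
    for _ in range(iters):
        # assignment step: label of the nearest centroid for each observation
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return labels

# two well-separated toy blobs should receive two distinct labels
pts = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))])
labels = kmeans_labels(pts, 2)
```

These per-observation labels play the role of the adversarial ground-truth cluster labels C_train_(n,1:T_n) used to refine the synthetic noise.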