Specification
DESC:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEM AND METHOD FOR CLASSIFICATION OF SENSITIVE DATA USING FEDERATED SEMI-SUPERVISED LEARNING
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional patent application no. 202221050218, filed on September 02, 2022. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
The disclosure herein generally relates to classification of sensitive data, and, more particularly, to a system and method for classification of sensitive data using federated semi-supervised learning.
BACKGROUND
Security classification of data is a complex task and depends on the context under which the data is shared, used, and processed. Such classification is usually derived from the data and is not intuitive to the user working on that data, thus dictating the requirement for an automated security classification tool. For example, two documents may have the same architecture and technology stack, yet from a security classification perspective these are entirely different, as one of them is customer-confidential and the other is a public document. For an enterprise, it is important to safeguard the customer-confidential data, as any breach may lead to a violation of the non-disclosure agreement, thus resulting in monetary and reputation loss.
Deep learning has widespread applications in various fields, such as entertainment, visual recognition, language understanding, autonomous vehicles, and healthcare. Human level performance in such applications is due to the availability of a large amount of data. However, getting a large amount of data could be difficult and may not always be possible, primarily due to end-user data privacy concerns and geography-based data protection regulations that impose strict rules on how data is stored, shared, and used. Privacy concerns lead to the creation of data silos at end-user devices. Such an accumulation of data is not conducive to conventional deep learning techniques that require training data at a central location and with full access. However, keeping data in a central place has the inherent risk of the data being compromised and misused.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for classification of sensitive data using federated semi-supervised learning is provided. The system includes extracting a training dataset from one or more data sources, wherein the training dataset is pre-processed into a machine readable form based on an associated data type. The training dataset comprises a labeled dataset and an unlabeled dataset. Further, a federated semi-supervised learning model is iteratively trained based on model contrastive and distillation learning to classify sensitive data from the unlabeled dataset. The federated semi-supervised learning model comprises a server and a set of participating clients. The federated semi-supervised learning model is trained by fetching a federated learning plan comprising a first set of distinctive attributes corresponding to a set of local models and a second set of distinctive attributes corresponding to a global model. Each local model and the global model includes at least one of a projection layer, a classification layer, and a base encoder. Further, the set of local models are trained at the set of participating clients with the respective unlabeled dataset by using the first set of distinctive attributes associated with the federated learning plan, and the trained local models are communicated to the server. Then, the global model is trained on the server with the set of local models of each participating client and the respective labeled dataset by using the second set of distinctive attributes associated with the federated learning plan, and the trained global model is communicated to each participating client. Further, the system classifies sensitive data from a user query received as input using the federated semi-supervised learning model and reclassifies the sensitive data from the user query based on feedback provided by the user if the data classification is erroneous.
In another aspect, a method for classification of sensitive data using federated semi-supervised learning is provided. The method includes extracting a training dataset from one or more data sources, wherein the training dataset is pre-processed into a machine readable form based on an associated data type. The training dataset comprises a labeled dataset and an unlabeled dataset. Further, a federated semi-supervised learning model is iteratively trained based on model contrastive and distillation learning to classify sensitive data from the unlabeled dataset. The federated semi-supervised learning model comprises a server and a set of participating clients. The federated semi-supervised learning model is trained by fetching a federated learning plan comprising a first set of distinctive attributes corresponding to a set of local models and a second set of distinctive attributes corresponding to a global model. Each local model and the global model includes at least one of a projection layer, a classification layer, and a base encoder. Further, the set of local models are trained at the set of participating clients with the respective unlabeled dataset by using the first set of distinctive attributes associated with the federated learning plan, and the trained local models are communicated to the server. Then, the global model is trained on the server with the set of local models of each participating client and the respective labeled dataset by using the second set of distinctive attributes associated with the federated learning plan, and the trained global model is communicated to each participating client. Further, the method classifies sensitive data from a user query received as input using the federated semi-supervised learning model and reclassifies the sensitive data from the user query based on feedback provided by the user if the data classification is erroneous.
In yet another aspect, a non-transitory computer readable medium is provided. The non-transitory computer readable medium includes instructions for extracting a training dataset from one or more data sources, wherein the training dataset is pre-processed into a machine readable form based on an associated data type. The training dataset comprises a labeled dataset and an unlabeled dataset. Further, a federated semi-supervised learning model is iteratively trained based on model contrastive and distillation learning to classify sensitive data from the unlabeled dataset. The federated semi-supervised learning model comprises a server and a set of participating clients. The federated semi-supervised learning model is trained by fetching a federated learning plan comprising a first set of distinctive attributes corresponding to a set of local models and a second set of distinctive attributes corresponding to a global model. Each local model and the global model includes at least one of a projection layer, a classification layer, and a base encoder. Further, the set of local models are trained at the set of participating clients with the respective unlabeled dataset by using the first set of distinctive attributes associated with the federated learning plan, and the trained local models are communicated to the server. Then, the global model is trained on the server with the set of local models of each participating client and the respective labeled dataset by using the second set of distinctive attributes associated with the federated learning plan, and the trained global model is communicated to each participating client. Further, the instructions cause classifying sensitive data from a user query received as input using the federated semi-supervised learning model and reclassifying the sensitive data from the user query based on feedback provided by the user if the data classification is erroneous.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary block diagram of a system (may be alternatively referred to as a federated semi-supervised learning based sensitive data classification system) to classify sensitive data, in accordance with some embodiments of the present disclosure.
FIG. 2A and FIG.2B illustrate an exemplary client-server architecture to classify sensitive data from the user query with a feedback mechanism using the federated learning based sensitive data classification framework, in accordance with some embodiments of the present disclosure.
FIG.3 illustrates an exemplary flow diagram of a method to classify sensitive data from the user query, in accordance with some embodiments of the present disclosure.
FIG.4 illustrates an exemplary enterprise scenario having labels at server using the system of FIG.1, according to some embodiments of the present disclosure.
FIG.5A through FIG.5C illustrate an independent and identically distributed (IID) and a non-independent and identically distributed (non-IID) data distribution from the Fashion-MNIST (Modified National Institute of Standards and Technology) dataset across ten clients with different values of alpha using the system of FIG.1, according to some embodiments of the present disclosure.
FIG.6A through FIG.6O illustrate graphical representations of accuracy between the sensitive data classification framework and conventional approaches on benchmark datasets using the system of FIG.1, according to some embodiments of the present disclosure.
FIG.7A through FIG.7D illustrate a global model representation for local model training performed with different loss functions using the system of FIG.1, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Embodiments herein provide a method and system for classification of sensitive data using federated semi-supervised learning. The system may be alternatively referred to as a sensitive data classification system 100. Federated learning has emerged as a privacy-preserving technique to learn one or more machine learning (ML) models without requiring users to share their data. Existing techniques in federated semi-supervised learning (FSSL) require data augmentation to train one or more machine learning models. However, data augmentation is not well defined for prevalent domains like text and graphs. Moreover, non-independent and identically distributed (non-IID) data across users is a significant challenge in federated learning.
The method of the present disclosure provides a technical solution for the setting where users do not have the domain expertise or incentives to label data on their devices, and where the server has access to a limited amount of labeled data annotated by experts, using a federated semi-supervised learning (FSSL) based sensitive data classification system. Although consistency regularization shows good performance in FSSL for the vision domain, it requires data augmentation to be well defined. However, in the text domain, data augmentation is not so straightforward, as changing a few words can impact the meaning of a sentence. The method implemented by the present disclosure addresses the problem of data augmentation in FSSL with a data augmentation-free semi-supervised federated learning approach. The method employs a model contrastive loss and a distillation loss on the unlabeled dataset to learn generalized representations, and a supervised cross-entropy loss on the server side for supervised learning. The system 100 is a data augmentation-free framework for federated semi-supervised learning that learns data representations based on computing a model contrastive loss and a distillation loss while training a set of local models. The method implemented by the present disclosure and the systems described herein are based on model contrastive and distillation learning, which does not require data augmentation, thus making it easy to adapt to different domains. The method is further evaluated on image and text datasets to show its robustness towards non-IID data. The results have been validated by varying the data imbalance across users and the number of labeled instances on the server. The disclosed system is further explained with the method as described in conjunction with FIG.1 to FIG.7D below.
Referring now to the drawings, and more particularly to FIG. 1 through FIG.7D, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary block diagram of a system (may be alternatively referred to as a federated semi-supervised learning based sensitive data classification system) to classify sensitive data, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes processor(s) 104, communication interface(s), alternatively referred to as input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the processor(s) 104. The system 100, with the processor(s), is configured to execute functions of one or more functional blocks of the system 100.
Referring to the components of the system 100, in an embodiment, the processor (s) 104 can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 104 is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.
The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface (s) 106 can include one or more ports for connecting a number of devices (nodes) of the system 100 to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. Functions of the components of the system 100, for classification of sensitive data, are explained in conjunction with FIG.2A, FIG.2B and FIG.3 providing a flow diagram, architectural overviews, and performance analysis of the system 100.
FIG. 2A and FIG.2B illustrate an exemplary client-server architecture to classify sensitive data from the user query with a feedback mechanism using the federated learning based sensitive data classification framework, in accordance with some embodiments of the present disclosure. The system 100 comprises a set of participating clients 200 and a server 240. Data communication between the server 240 and each participating client 200 is facilitated via one or more data links connecting the components to each other.
The client manager 216 of FIG.2A includes at least one participating client 200. Here, each participating client 200 comprises a data store 202, a data manager 204, a federated learning plan 206, a local model 208, a global model 210, a resource monitor 212, relevant data 214, and a prevention and reporting engine 234.
The local model 208 includes a classification layer, a base encoder, and a projection head. The global model 210 includes a classification layer, a base encoder, and a projection head. The local model 208 and the global model 210 interact with the server 240 to process queries reported by each client 200.
The server 240 of FIG.2B comprises a model aggregator 218, a global model 210, a fine tuner 222, a labeled data engine 220, a test data engine 230, the federated learning (FL) plan 206, a resource monitor 212, a risk report generator 224 which generates one or more risk reports, a global model performance analyzer 226, a client manager 216, and a human expert. The global model 210 of the server 240 includes a classification layer, a base encoder, and a projection head.
The main task of the communication is to transfer information between each participating client 200 and the server 240. The information may include model weights or gradients of weights, a local training procedure, a data filtering mechanism, the client's system performance statistics, and the like, which is further stored in the federated learning plan 206. The server 240 generally transfers the global model weights, the local model training procedures, and the data filtering rules, whereas each client 200 generally transfers the local model weights, the local model performance, and the system performance statistics, along with any other important information. For ease of building the system 100, the method handles the underlying network communication, such as socket programming, serialization, port establishment, and serialized data transmission.
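As one illustration of such a communication layer, the following is a minimal Python sketch, assuming only the standard socket, pickle, and struct modules, of how a serialized payload (for example, local model weights together with performance statistics) might be framed and exchanged; the function names and the length-prefix framing are assumptions for illustration and are not prescribed by the disclosure.

import pickle
import socket
import struct

def send_payload(sock: socket.socket, payload: dict) -> None:
    """Serialize a payload (e.g., model weights and statistics) and send it
    with a 4-byte length prefix so the receiver knows how much to read."""
    data = pickle.dumps(payload)
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_payload(sock: socket.socket) -> dict:
    """Read the 4-byte length prefix, then the serialized payload, and deserialize it."""
    header = sock.recv(4)
    (length,) = struct.unpack("!I", header)
    buffer = b""
    while len(buffer) < length:
        buffer += sock.recv(length - len(buffer))
    return pickle.loads(buffer)

# Illustrative usage: a client sends its local model weights and statistics.
# payload = {"weights": local_state_dict, "stats": {"loss": 0.42, "num_examples": 1200}}
# send_payload(client_socket, payload)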
FIG.3 illustrates an exemplary flow diagram of a method to classify sensitive data from the user query, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 300 by the processor(s) or one or more hardware processors 104. The steps of the method 300 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG.1, FIG.2A and FIG.2B, and the steps of the flow diagram as depicted in FIG.3. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
Referring now to the steps of the method 300, at step 302, the one or more hardware processors 104 extract a training dataset from one or more data sources and pre-process the training dataset into a machine readable form based on the associated data type, wherein the training dataset comprises a labeled dataset and an unlabeled dataset. The data store 202 receives the training dataset as input and feeds it into the data manager 204 to train a federated semi-supervised learning (FSSL) model to classify sensitive data from the unlabeled dataset. The data manager 204 of the system 100 performs data extraction, which removes data that is unrelated to the training. The main task of the data extraction module is to gather data from multiple sources that are useful for training. The collected data is then preprocessed in the data manager 204. For example, some employees within the organization have documents related to trade secrets, quarterly results, and the like, while others may have documents of design choices, customer lists, and the like. So, depending on the work one is doing, the kind of document classes they have access to differs, which leads to an uneven data distribution among employees.
The data manager 204 performs data extraction with a filtering mechanism to decide which type of data needs to be extracted. The instruction for filtering is provided by the administrator of the project. These filtering instructions are sent to all participants of the federated learning. For example, simple filtering could be selecting data with the ".pdf" or ".txt" extensions, or selecting data with the ".png", ".jpg", and other extensions for a training procedure. More complex instructions include giving a few templates and selecting data that lies within pre-defined or dynamically calculated boundaries in a data representation space.
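A minimal Python sketch of the simple extension-based filtering described above; the function name, the directory path, and the particular set of allowed extensions are illustrative assumptions, with the actual allowed extensions expected to arrive via the FL plan.

from pathlib import Path

def filter_by_extension(root: str, allowed_extensions: set) -> list:
    """Return only the files whose extension is listed in the filtering instructions,
    e.g. {".pdf", ".txt"} for text training or {".png", ".jpg"} for image training."""
    return [p for p in Path(root).rglob("*") if p.suffix.lower() in allowed_extensions]

# Illustrative usage with a hypothetical client data directory:
# candidates = filter_by_extension("/data/client_k", {".pdf", ".txt"})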
The data manager 204 also performs data type conversion, as the machine learning model takes numerical inputs. Data in the image domain is generally stored in a numerical format, whereas text, graphs, and other kinds of data need to be converted into a numeric data type. Depending on the type of data, the working of the pre-processor differs. Generally, for the image domain, pre-processing consists of data normalization, rotation, flip, shear, crop, augmentation, color and contrast transformations, and the like. Performing these transformations on image data helps to increase the size of the dataset. These transformations are feasible in the image domain because they do not change the overall semantics of the images. The system negates the need for data augmentation and applies data normalization in the pre-processing step. For the text domain, pre-processing includes 1) data cleaning steps like stop word removal, lemmatization, and stemming, 2) tokenization, and 3) vectorization.
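As one illustration of the text-domain branch, the following is a minimal Python sketch assuming scikit-learn's TfidfVectorizer as one possible vectorizer; the cleaning function, the example documents, and the choice of TF-IDF features are assumptions for illustration rather than the specific pre-processor of the disclosed system.

import re
from sklearn.feature_extraction.text import TfidfVectorizer

def clean_text(document: str) -> str:
    """Basic cleaning: lowercase and strip non-alphanumeric characters."""
    document = document.lower()
    return re.sub(r"[^a-z0-9\s]", " ", document)

# Tokenization and vectorization: TF-IDF is only one possible choice here;
# stop-word removal is delegated to the vectorizer in this sketch.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)

documents = ["Quarterly results draft - confidential", "Public design overview"]
features = vectorizer.fit_transform(clean_text(d) for d in documents)
# `features` is the numeric representation that would be fed to a local model.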
Further, the local model 208 among the set of local models of the participating client 200 processes only unlabeled data because of the lack of domain expertise or lack of incentives. This makes it challenging to train the local model in the absence of any supervision. The system 100 utilizes the global model 210 to guide the training in two ways. One is to learn an intermediate representation of data, and the other is to learn the output representation on clients. The local model 208 is trained on each client which processes the data into batches.
Referring now to the steps of the method 300, at step 304, the one or more hardware processors 104 train a federated semi-supervised learning (FSSL) model iteratively based on model contrastive learning to classify sensitive data from the unlabeled dataset, wherein the federated semi-supervised learning model comprises a server and a set of participating clients. The client manager 216 on the server 240 selects a subset of the total clients and shares the global model and the federated learning plan with them. The federated learning (FL) plan 206 contains local model training instructions such as one or more epochs, a batch size, and other hyperparameters, one or more data filtering instructions, and the like. The data manager 204 on each participating client 200, having the relevant data engine 214, selects relevant data for training. Data representation is learned on the unlabeled data selected by the data manager. The federated learning plan 206 is utilized to perform communication between the global model 210 of the server 240 and the global model 210 of at least one participating client 200, and communication between each local model 208 among the set of local models within each participating client 200.
Federated learning collaboratively learns a global prediction model without exchanging end-user data. Let $K$ be the number of clients 200 collaborating in the learning process and $R$ be the number of training rounds. In each round, the server first randomly selects $m$ clients ($m \leq K$) and sends a global model $w_g$ to them. Each participating client then trains a local model $w_k$ on its local dataset $D_k = \{x_1, \ldots, x_{N_k}\}$, where $N_k = |D_k|$ is the total number of examples for the $k$-th client. The server 240 then aggregates all the local models 208 from the selected $m$ clients to obtain a global model $w_g = \frac{1}{N} \sum_k N_k \, w_k$, where $N = \sum_k N_k$. This procedure is repeated for $R$ rounds or until convergence. In federated semi-supervised learning, the clients 200 have unlabeled data on their devices, whereas the server has some labeled data curated by experts.
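As a concrete illustration of the aggregation step above, the following is a minimal Python sketch in which NumPy arrays stand in for model weight tensors; the function name and the dictionary-of-arrays representation of a model are assumptions made purely for illustration.

import numpy as np

def federated_average(local_weights: list, num_examples: list) -> dict:
    """Aggregate local models into a global model, weighting each client k
    by its number of training examples N_k (w_g = (1/N) * sum_k N_k * w_k)."""
    total = float(sum(num_examples))
    global_weights = {}
    for name in local_weights[0]:
        global_weights[name] = sum(
            (n_k / total) * w_k[name] for w_k, n_k in zip(local_weights, num_examples)
        )
    return global_weights

# Illustrative usage with two clients and a single-layer model:
# w = federated_average(
#     [{"layer.weight": np.ones((2, 2))}, {"layer.weight": np.zeros((2, 2))}],
#     [300, 100],
# )  # -> 0.75 * ones, since the first client holds 75% of the examples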
The federated learning (FL) plan is fetched from the data manager 204 to perform model training. The FL plan 206 comprises a first set of distinctive attributes corresponding to the set of local models and a second set of distinctive attributes corresponding to the global model, wherein each local model 208 and the global model 210 includes at least one of a projection layer, a classification layer, and a base encoder. The set of local models at the set of participating clients are trained with the respective unlabeled dataset based on the first set of distinctive attributes by obtaining the first set of distinctive attributes and initializing the set of local models with one or more weights. Losses incurred while training the set of local models are minimized by computing a cumulative loss based on determining a model contrastive loss and a distillation loss. The model contrastive loss (refer to equation 1 and equation 2) is computed at each participating client when trained with the unlabeled dataset by considering the outputs of the projection layer at the current step and the previous step. The distillation loss is computed by considering the outputs of the classification layer of at least one of the local model and the global model. Further, each local model 208 is updated with the cumulative loss function when at least one of the global model constraints is not updated, and the one or more weights of the set of local models are updated.
During federated learning, relevant information communication occurs between each client 200 and the server 240. Generally, the communicated information contains model weights, labels, logits, and the like. In the method of the present disclosure, all the necessary information is bundled within the FL plan 206. A human expert on the server 240 designs a revised federated learning plan (FL plan). The revised FL plan 206 contains instructions related to the global model 210 training on the server 240 and the local model 208 training on the client 200. For the global model 210 training, the FL plan 206 contains specific values for the number of training epochs, the batch size, and other hyperparameters. In addition to training instructions such as epochs, batch size, and other hyperparameters, the FL plan 206 for the client 200 training also contains one or more data filtering instructions. These data filtering instructions are then passed to the data manager 204 for selecting the relevant data. The complexity of the data filtering instructions depends on the task at hand. Apart from the training instructions, the FL plan 206 contains further information that is needed for system improvement. For example, the client-server architecture can be tailored according to the client's resource specification during the first instance of participation in the training and sent back to the client during the next round of training with the help of the FL plan 206. The model architecture-related information consists of the type of architecture to be used, for example a convolutional neural network, a recurrent neural network, or a transformer, the number of hidden layers and neurons, and the like.
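As a purely illustrative sketch, the FL plan contents enumerated above could be bundled in Python as a plain dictionary before serialization; every key and value below is a hypothetical example rather than a field prescribed by the disclosure.

# Hypothetical FL plan bundle exchanged between the server and the clients.
fl_plan = {
    "global_training": {"epochs": 5, "batch_size": 64, "learning_rate": 1e-3},
    "local_training": {"epochs": 2, "batch_size": 32, "learning_rate": 1e-3,
                       "temperature": 0.5},
    "data_filtering": {"allowed_extensions": [".pdf", ".txt"]},
    "model_architecture": {"type": "transformer", "hidden_layers": 4,
                           "hidden_units": 256},
}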
In one embodiment, self-supervised learning methods, such as the simple framework for contrastive learning of visual representations (SimCLR) and bootstrap your own latent (BYOL), have shown good results in learning generalized data representations from unlabeled data in the vision domain. These techniques are based on contrastive learning, which is based on the idea that representations of different augmented views of the same image should be close to each other, whereas the representations of augmented views of different images should be far apart. Let $\{\tilde{x}_h\}$ be a set containing a positive pair of examples $\tilde{x}_i$ and $\tilde{x}_j$, wherein the contrastive prediction task is to identify $\tilde{x}_j$ in $\{\tilde{x}_h\}_{h \neq i}$. Furthermore, if pairs of augmented examples are derived from a randomly sampled set of $H$ samples, then this results in $2H$ data points for the contrastive prediction task. The contrastive loss for a given pair of augmented examples $(x_i, x_j)$ among the $2H$ data points is represented in Equation 1,

$$L_{con}(x_i, x_j) = -\log \frac{\exp\!\big(\mathrm{sim}(z_i, z_j)/t\big)}{\sum_{h=1}^{2H} \mathbb{1}_{[h \neq i]} \exp\!\big(\mathrm{sim}(z_i, z_h)/t\big)} \quad \text{(Equation 1)}$$

where $z$ denotes the representation of $x$, $t$ is a temperature parameter, $\mathrm{sim}(\cdot,\cdot)$ is a cosine similarity function, and $\mathbb{1}_{[h \neq i]} \in \{0, 1\}$ is an indicator function evaluating to 1 if $h \neq i$.
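The following is a minimal sketch of Equation 1 in Python, assuming PyTorch; the tensor layout (a matrix z holding the 2H projected representations) and the function name are assumptions for illustration only.

import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, i: int, j: int, t: float = 0.5) -> torch.Tensor:
    """Equation 1: contrastive loss for the positive pair (i, j).
    z has shape (2H, d) and holds the projected representations of all 2H points."""
    z = F.normalize(z, dim=1)            # so that dot products equal cosine similarities
    sim = z @ z.T / t                    # pairwise similarities scaled by the temperature t
    numerator = torch.exp(sim[i, j])
    mask = torch.ones(z.size(0), dtype=torch.bool)
    mask[i] = False                      # the indicator 1_[h != i] excludes the anchor itself
    denominator = torch.exp(sim[i][mask]).sum()
    return -torch.log(numerator / denominator)

# Illustrative usage: 2H = 8 representations of dimension 16 with positive pair (0, 4).
# loss = contrastive_loss(torch.randn(8, 16), i=0, j=4)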
The client 200 architecture of the system 100 is based on self-supervised contrastive learning, wherein a projection head is added on top of a base encoder to compare the representations of two images in the projection space. The local model 208 and the global model 210 in the method of the present disclosure consist of three components: a base encoder, a projection head, and a classification layer. Let $p_{\phi}(\cdot)$ be the output of the projection head, and $f_{\phi}(\cdot)$ be the output of the classification layer.
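A minimal sketch of such a three-component model in Python, assuming PyTorch; the layer types and dimensions are illustrative assumptions rather than the specific encoder disclosed herein, and the forward pass is written to return both the projection-head output p_phi(x) and the classification-layer output f_phi(x).

import torch.nn as nn

class ClientServerModel(nn.Module):
    """Sketch of the three-component model: a base encoder, a projection head
    p_phi(.), and a classification layer f_phi(.); dimensions are illustrative."""
    def __init__(self, input_dim=784, hidden_dim=256, proj_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.projection_head = nn.Linear(hidden_dim, proj_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.projection_head(h), self.classifier(h)   # (p_phi(x), f_phi(x))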
Each local model 208 training (refer to Table 1) learns a high-dimensional representation of the local data, as the client has only unlabeled data. In the absence of labeled data, there is no way to guide the local training toward a good data representation. The model contrastive loss is used to guide the local model 208 training and to learn a generalized data representation. Given client $k$ and an instance $x$, let $q^r = p_{\phi_k^r}(x)$ and $q^{r-1} = p_{\phi_k^{r-1}}(x)$ represent the output of the projection head of the local model at rounds $r$ and $r-1$ respectively. Let $q_g^r$ represent the output of the projection head of the global model at round $r$. Given this, the model contrastive loss is represented in Equation 2,

$$L_c = -\log \frac{\exp\!\big(\mathrm{sim}(q^r, q_g^r)/t\big)}{\exp\!\big(\mathrm{sim}(q^r, q_g^r)/t\big) + \exp\!\big(\mathrm{sim}(q^r, q^{r-1})/t\big)} \quad \text{(Equation 2)}$$
where $t$ denotes a temperature parameter, which regulates the amount of information in a distribution. With only the model contrastive loss $L_c$ for local model training and no other supervision information, the classification layer weights do not get updated. This is because the model contrastive loss $L_c$ is computed by considering the outputs of the projection layer only, which is followed by a classification layer for the class discrimination task. The global model's knowledge of the classification layer is therefore utilized, because the global model weights get updated on the labeled dataset $D_s$. The global model 210 knowledge is distilled into the local model with the distillation loss defined in Equation 3,

$$L_d = CE\big(f_{\phi_g}(x),\, f_{\phi_k}(x)\big) \quad \text{(Equation 3)}$$
where $f(\cdot)$ is the output of the classification layer and $CE$ is the cross-entropy loss. In round $r$, for the $k$-th client, the objective is to minimize the cumulative loss represented in Equation 4,

$$L_k = \min_{\phi_k^r} \; \mathbb{E}_{x \sim D_k}\Big[L_c\big(\phi_k^r; \phi_k^{r-1}; \phi_g^r; x\big) + L_d\big(\phi_k^r; \phi_g^r; x\big)\Big] \quad \text{(Equation 4)}$$

Equation 4 represents the loss function, which is minimized with respect to the local model parameters only.
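The following is a minimal Python sketch, assuming PyTorch, of how the per-batch losses of Equations 2 through 4 might be computed; treating the softmax of the global model's logits as soft targets for the cross-entropy of Equation 3 is an implementation assumption, as are the function names (probability targets for F.cross_entropy require PyTorch 1.10 or later).

import torch
import torch.nn.functional as F

def model_contrastive_loss(q_r, q_prev, q_global, t: float = 0.5):
    """Equation 2: pull the local projection q^r towards the global projection
    q_g^r and push it away from the previous-round projection q^{r-1}."""
    pos = torch.exp(F.cosine_similarity(q_r, q_global, dim=1) / t)
    neg = torch.exp(F.cosine_similarity(q_r, q_prev, dim=1) / t)
    return -torch.log(pos / (pos + neg)).mean()

def distillation_loss(logits_global, logits_local):
    """Equation 3: cross-entropy between the global model's output distribution
    (used here as soft targets) and the local model's predictions."""
    targets = F.softmax(logits_global, dim=1)
    return F.cross_entropy(logits_local, targets)

def cumulative_loss(q_r, q_prev, q_global, logits_global, logits_local, t=0.5):
    """Equation 4: L_k = L_c + L_d, minimized w.r.t. the local parameters only."""
    return (model_contrastive_loss(q_r, q_prev, q_global, t)
            + distillation_loss(logits_global, logits_local))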
Table 1 – Local Model Training
Algorithm 1: Local Model Training ($w^r$)
Require: Unlabeled dataset $D_k$ for client $k$, local model learning rate $\eta_u$, epochs for local model training $E_u$
Ensure: Local model $\phi_k^r$
1: Initialize $\phi_k^r \leftarrow w^r$ // initialize a local model using the global model
2: itr $\leftarrow$ 0 // initialize the iteration counter for training epochs, itr = 0
3: While itr < $E_u$
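Since the listing above is truncated, the following is a hedged Python sketch, assuming PyTorch, of one plausible completion of the local training loop built only from the steps already described: initialize the local model from the global weights, iterate over the unlabeled batches for E_u epochs, and minimize the cumulative loss of Equation 4. The cumulative_loss helper refers to the sketch given after Equation 4, the assumption that the model returns (projection, logits) follows the architecture sketch above, and the data loader and SGD optimizer are illustrative choices rather than the disclosed procedure.

import copy
import torch

def train_local_model(global_model, prev_local_model, unlabeled_loader,
                      epochs_Eu: int, lr_eta_u: float, temperature: float = 0.5):
    """Sketch of Algorithm 1: local model training on client k with only
    unlabeled data, guided by the global model and the previous-round model."""
    local_model = copy.deepcopy(global_model)          # step 1: phi_k^r <- w^r
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr_eta_u)
    global_model.eval()
    prev_local_model.eval()
    for _ in range(epochs_Eu):                         # step 3: while itr < E_u
        for x in unlabeled_loader:
            q_r, logits_local = local_model(x)         # projection head + classifier outputs
            with torch.no_grad():
                q_global, logits_global = global_model(x)
                q_prev, _ = prev_local_model(x)
            loss = cumulative_loss(q_r, q_prev, q_global,
                                   logits_global, logits_local, temperature)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return local_model.state_dict()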