Specification
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR TRANSFORM LEARNING BASED FUNCTION APPROXIMATION FOR REGRESSION AND FORECASTING
Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional application no. 201921035106, filed on August 30, 2019. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
The disclosure herein generally relates to the field of data analysis and processing, and, more particularly, to a method and system for transform learning based function approximation for regression and forecasting.
BACKGROUND
A major challenge faced by any Internet of Things (IoT) application is to clean, process and make requisite inferences from the vast amount of data acquired by the sensors. Regression is an important facet of inference and is viewed as a function approximation problem with appropriate input and output variables. In regression, the output variable is continuous in nature and the function captures the relationship of the output with the input variables. With the abundance of data, the said function approximation needs to be derived from the data.
Regression plays an important role in many applications, such as finding causal relationships between variables in biological systems, weather data analysis, market research studies, customer survey results, and fine-tuning of manufacturing and delivery processes. The function approximation interpretation also enables time series forecasting to be viewed as a regression problem. For all these applications, obtaining an accurate model which can represent the data well is important. Recently, traditional and kernelized dictionary versions have been used for learning data representations to facilitate regression problems. However, the dictionary learning problem is highly non-convex, and there is a high chance of the algorithms getting stuck in bad local minima.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, there is provided a processor implemented method for transform learning based function approximation for regression and forecasting. The method comprises: receiving, via one or more hardware processors, a training data (X) from a monitored system with a training output (y), wherein the training data is a data matrix represented using Transform Learning involving a joint optimization of a set of parameters including (i) a transform (T), (ii) a coefficient (Z) and (iii) a weight matrix (w) corresponding to the training data; and performing, via the one or more hardware processors, the joint optimization of the transform (T), the coefficient (Z) and the weight matrix (w) to jointly obtain (i) the learnt transform (T) and (ii) the learnt weight matrix (w) for the monitored system by: initializing (i) the coefficient with a random matrix comprising real numbers between 0 and 1, (ii) the transform with a matrix comprising zeros and (iii) the weight matrix as a product of the training output and an inverse of the initialized coefficient; and performing joint learning iteratively using the initialized parameters and learnt parameters thereafter until a termination criterion is met, wherein the joint learning comprises: learning the transform based on the coefficient and the training data; learning the coefficient based on the transform, the training data, the weight matrix and the training output; and learning the weight matrix based on the coefficient and the training output; wherein the termination criterion is any one of (i) completion of a predefined number of iterations (Maxiter) and (ii) a difference between the transform of a current iteration and the transform of a previous iteration being less than an empirically determined threshold value (Tol).
In another aspect, there is provided a system for transform learning based function approximation for regression and forecasting. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a training data (X) from a monitored system with a training output (y), wherein the training data is a data matrix represented using Transform Learning involving a joint optimization of a set of parameters including (i) a transform (T), (ii) a coefficient (Z) and (iii) a weight matrix (w) corresponding to the training data; and perform the joint optimization of the transform (T), the coefficient (Z) and the weight matrix (w) to jointly obtain (i) the learnt transform (T) and (ii) the learnt weight matrix (w) for the monitored system by: initializing (i) the coefficient with a random matrix comprising real numbers between 0 and 1, (ii) the transform with a matrix comprising zeros and (iii) the weight matrix as a product of the training output and an inverse of the initialized coefficient; and performing joint learning iteratively using the initialized parameters and learnt parameters thereafter until a termination criterion is met, wherein the joint learning comprises: learning the transform based on the coefficient and the training data; learning the coefficient based on the transform, the training data, the weight matrix and the training output; and learning the weight matrix based on the coefficient and the training output; wherein the termination criterion is any one of (i) completion of a predefined number of iterations (Maxiter) and (ii) a difference between the transform of a current iteration and the transform of a previous iteration being less than an empirically determined threshold value (Tol).
In an embodiment, the data matrix is one of (i) a time series data from one or more sensors connected to the monitored system, (ii) a set of features extracted from parameters sensed by the one or more sensors connected to the monitored system, or (iii) a combination of both (i) and (ii).
In an embodiment, the joint optimization is represented as
$\min_{T,Z,w} \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T) + \mu\|y - wZ\|_2^2$, and wherein $X \in \mathbb{R}^{L \times N}$, L being the number of features, N being the number of samples in the training data, $T \in \mathbb{R}^{K \times L}$, K being the size of the transform, $Z \in \mathbb{R}^{K \times N}$, $w \in \mathbb{R}^{1 \times K}$ and $y \in \mathbb{R}^{1 \times N}$.
In an embodiment, the learning of transform is represented as
$T \leftarrow \min_T \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T)$.
In an embodiment, the learning of coefficient is represented as
$Z = (I + \mu w^T w)^{-1}(TX + \mu w^T y)$
In an embodiment, the learning of the weight matrix is represented as $w \leftarrow \min_w \|y - wZ\|_2^2$.
In an embodiment, estimating an output (y_new) of the monitored system for a new data x_new comprises: receiving, via the one or more hardware processors, the new data x_new of the monitored system; estimating a new coefficient using the new data x_new and the learnt transform T; and estimating the output y_new for the monitored system based on the learnt weight matrix and the estimated new coefficient.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to perform transform learning based function approximation for regression and forecasting by: receiving, via one or more hardware processors, a training data (X) from a monitored system with a training output (y), wherein the training data is a data matrix represented using Transform Learning involving a joint optimization of a set of parameters including (i) a transform (T), (ii) a coefficient (Z) and (iii) a weight matrix (w) corresponding to the training data; and performing, via the one or more hardware processors, the joint optimization of the transform (T), the coefficient (Z) and the weight matrix (w) to jointly obtain (i) the learnt transform (T) and (ii) the learnt weight matrix (w) for the monitored system by: initializing (i) the coefficient with a random matrix comprising real numbers between 0 and 1, (ii) the transform with a matrix comprising zeros and (iii) the weight matrix as a product of the training output and an inverse of the initialized coefficient; and performing joint learning iteratively using the initialized parameters and learnt parameters thereafter until a termination criterion is met, wherein the joint learning comprises: learning the transform based on the coefficient and the training data; learning the coefficient based on the transform, the training data, the weight matrix and the training output; and learning the weight matrix based on the coefficient and the training output; wherein the termination criterion is any one of (i) completion of a predefined number of iterations (Maxiter) and (ii) a difference between the transform of a current iteration and the transform of a previous iteration being less than an empirically determined threshold value (Tol).
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary block diagram of a system for transform learning based function approximation for regression and forecasting, according to some embodiments of the present disclosure.
FIG. 2A through FIG. 2D illustrate an exemplary flow diagram of a method for transform learning based function approximation for regression and forecasting, according to some embodiments of the present disclosure.
FIG. 3 illustrates a graphical representation of experimental results of heating load estimation for buildings, according to embodiments of the present disclosure.
FIG. 4 illustrates a graphical representation of experimental results of day-ahead load forecasting for a building, according to embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
Regression models are functions depicting regression, and they can range from simple to complex models. Regression plays an important role in many applications, such as finding causal relationships between variables in biological systems, weather data analysis, market research studies, customer survey results, and fine-tuning of manufacturing and delivery processes. The regression model may be interpreted as a function approximation problem. Any time series forecasting may also be viewed as a regression problem. Various signal processing based data representation models using dictionaries and transforms have been explored in the literature for obtaining an accurate model which can represent the data well. For regression problems, traditional and kernelized dictionary versions have been used for data representation. The traditional and kernelized dictionary versions are utilized for machine learning tasks in two different ways: (i) as a two-stage approach, wherein the dictionary coefficients are learnt in the first stage and then fed as features to machine learning based classifiers/regressors in the second stage, and (ii) as a single-stage approach, wherein the features and the classification/regression weights are learnt together in a joint optimization framework. In the single-stage approach, the output label/variable is utilized effectively while learning the dictionary and the associated coefficients. Hence, the single-stage approach has better performance compared to the two-stage approach. However, the sparse coding solved repeatedly during dictionary learning is NP-hard, and the approximate synthesis sparse coding algorithms can be computationally expensive. Moreover, the dictionary learning problem is highly non-convex and the algorithms have a high chance of getting stuck in local minima.
Transform learning techniques address the problems related to dictionary-based learning techniques. Transform learning is an analysis approach, wherein data is analyzed by learning a transform to produce associated coefficients. Unlike dictionary learning, which is an inverse problem, transform learning is a forward problem. In the signal processing literature, it is well known that transform learning has advantages over dictionaries in terms of application scenarios, accuracy and complexity. The present disclosure considers the transform learning approach for function approximation and explains a technique that formulates the regression problem and time series forecasting using the transform learning framework. The present disclosure of transform learning also adopts the single-stage joint optimization approach. However, the formulations for dictionary learning and transform learning are different, leading to different expressions and constraints being applied to the joint optimization formulation and, consequently, to different solutions. In the present disclosure, a joint optimization is carried out to learn a transform, the associated coefficients and a weight matrix together. Formulations for both basic transform learning and kernelized transform learning are disclosed. The present disclosure provides experimental results with different datasets for regression and time series forecasting, showing comparison with both (basic and kernel) dictionary versions (Dictionary Learning for Regression (DLR) and Kernel Dictionary Learning for Regression (KDLR)) as well as Kernel Regression (KR) and Linear Regression (LR).
A brief explanation of the basic and kernel transform learning (KTL) frameworks, which are known in the art, is provided below:
Transform learning (TL) is an analysis approach for data representation which is formulated as:
$TX = Z$ (1)
where $X \in \mathbb{R}^{L \times N}$ is the data matrix, L is the number of features, N is the number of data samples, $T \in \mathbb{R}^{K \times L}$ is the transform, K is the size of the transform and $Z \in \mathbb{R}^{K \times N}$ are the coefficients. The transform learning is formulated as a joint optimization problem which is given as,
$\min_{T,Z} \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T) + \mu\|Z\|_0$ (2)
The closed form updates for the transform (T) and the coefficients (Z) are obtained by solving the joint optimization problem given in Equation (2) using alternating minimization. Z is solved using the equations given below,
$Z \leftarrow \min_Z \|TX - Z\|_F^2 + \mu\|Z\|_0$ (3)
$Z = (\mathrm{abs}(TX) \geq \mu) \cdot TX$ (4)
wherein ‘$\cdot$’ denotes the element-wise product and $\mu$ is an empirically computed value to introduce sparsity. So Z will be non-zero only where $\mathrm{abs}(TX) \geq \mu$.
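Since the $\ell_0$ penalty admits this closed form, the sparse coding step reduces to a single masked matrix product. A minimal NumPy sketch of Equation (4) follows; the function name is illustrative, and the "at least $\mu$" reading of the thresholding condition follows standard hard-thresholding practice:

```python
import numpy as np

def sparse_code(T, X, mu):
    """Hard-thresholding update of Equation (4): retain entries of TX
    whose magnitude is at least mu; zero out the rest."""
    TX = T @ X                      # analysis coefficients, K x N
    return (np.abs(TX) >= mu) * TX  # boolean mask acts as element-wise product
```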
T is solved using the equation below,
$T \leftarrow \min_T \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T)$ (5)
Using Cholesky decomposition followed by singular value decomposition, the closed form update for T is obtained and is given by,
$XX^T + \lambda I = LL^T$ (6)
wherein L is a lower triangular matrix and L^T denotes the conjugate transpose of L. Applying singular value decomposition results in,
$L^{-1}XZ^T = USV^T$ (7)
wherein the diagonal entries of S are the singular values and U and V are the left and right singular vectors of $L^{-1}XZ^T$ respectively. The final update on the transform T is given by,
$T = 0.5\,V\left(S + (S^2 + 2\lambda I)^{1/2}\right)U^T L^{-1}$ (8)
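A hedged NumPy sketch of the closed form update of Equations (6) to (8) is given below; it assumes a square transform (K = L) so that the singular value matrix S is square, and all names are illustrative:

```python
import numpy as np

def update_transform(X, Z, lam):
    """Closed-form transform update of Equations (6)-(8),
    assuming a square transform (K == L)."""
    num_feat = X.shape[0]
    # Equation (6): Cholesky factorization X X^T + lambda I = L L^T
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(num_feat))
    L_inv = np.linalg.inv(L)
    # Equation (7): SVD of L^{-1} X Z^T
    U, s, Vt = np.linalg.svd(L_inv @ X @ Z.T)
    # Equation (8): T = 0.5 V (S + (S^2 + 2*lambda*I)^(1/2)) U^T L^{-1}
    S_new = np.diag(s + np.sqrt(s ** 2 + 2.0 * lam))
    return 0.5 * Vt.T @ S_new @ U.T @ L_inv
```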
Kernel Transform Learning (KTL) framework:
The KTL framework is used for capturing the non-linearities in the data. The formulation of the KTL problem is provided below:
$BK(X,X) = Z$ (9)
where B is the transform and K(X,X) is the kernel matrix expressed as
$K(X,X) = f(X)^T f(X)$ (10)
By applying Equation (10), Equation (9) becomes,
$Bf(X)^T f(X) = Z$ (11)
The formulation of KTL is provided as,
$\min_{B,Z} \|BK(X,X) - Z\|_F^2 + \lambda(\|B\|_F^2 - \log\det B) + \mu\|Z\|_0$ (12)
The closed form solutions for the transform and the coefficients in KTL remain the same as in the case of basic TL, with the difference being that, in the former case, the kernelized version of the input data is utilized instead of the raw input data.
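The disclosure does not commit to a specific kernel; for concreteness, the sketch below computes a Gaussian (RBF) kernel matrix, a common choice, with samples stored column-wise as in the rest of this disclosure (the function name and the gamma parameter are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian kernel matrix: entry (i, j) is exp(-gamma * ||x_i - x_j||^2),
    where x_i is the i-th column of X1 (L x N1) and x_j the j-th column of X2."""
    sq_dist = (np.sum(X1 ** 2, axis=0)[:, None]
               + np.sum(X2 ** 2, axis=0)[None, :]
               - 2.0 * X1.T @ X2)
    return np.exp(-gamma * np.maximum(sq_dist, 0.0))  # clip tiny negative distances
```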
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG.1 illustrates an exemplary block diagram of a system 100 for transform learning based function approximation for regression and forecasting, according to some embodiments of the present disclosure. In an embodiment, the system 100 includes one or more processors 102, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 104 operatively coupled to the one or more processors 102. The one or more processors 102 that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, graphics controllers, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The I/O interface (s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system for performing function approximation for regression and forecasting may be stored in the memory 104.
FIG. 2A through FIG. 2D illustrate an exemplary flow diagram of a method 200 for transform learning based function approximation for regression and forecasting, according to some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 104 operatively coupled to the one or more hardware processors 102 and is configured to store instructions for execution of steps of the method by the one or more processors 102. The steps of the method 200 of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1.
In an embodiment of the present disclosure, the basic transform learning framework for regression (TLR) and the kernel transform learning framework for regression (KTLR) are explained with reference to the steps of the method 200 of FIG. 2A through FIG. 2D.
Basic Transform Learning framework for regression (TLR): Let N be the number of samples of training data $X \in \mathbb{R}^{L \times N}$ received from one or more sensors connected to a monitored system, where L is the number of features of the one or more sensors. The output in terms of a regressor is given by $y \in \mathbb{R}^{1 \times N}$. The TLR is carried out by jointly learning the transform and the weight matrix for the monitored system.
In an embodiment of the present disclosure, the one or more processors 102 are configured to receive, at step 202, a training data X from a monitored system having a training output y. In an embodiment, the training data considered is a data matrix represented by using transform learning involving joint optimization of a set of parameters. The data matrix is one of (i) a time series data from one or more sensors connected to the monitored system, (ii) a set of features extracted from parameters sensed by the one or more sensors connected to the monitored system, or (iii) a combination of both (i) and (ii). The set of parameters includes (i) a transform (T), (ii) a coefficient (Z) and (iii) a weight matrix (w) corresponding to the training data. For an exemplary monitored system that estimates heating load for a building, the training data considered may include, say, eight building parameters such as relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area and glazing area distribution. In another exemplary monitored system for load forecasting, the training data may include temperature, previous day power consumption, previous week same day power consumption data and contextual information such as weekdays and weekends. The basic transform learning (TL) framework is utilized for regression tasks by adding a ridge regression penalty term.
In an embodiment, at step 204, the one or more processors 102 are configured to perform the joint optimization of the transform (T), the coefficient (Z) and the weight matrix (w) to jointly obtain (i) the learnt transform (T) and (ii) the learnt weight matrix (w) for the monitored system. The joint optimization is expressed as,
$\min_{T,Z,w} \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T) + \mu\|y - wZ\|_2^2$ (13)
wherein $X \in \mathbb{R}^{L \times N}$, L being the number of features, N being the number of samples in the training data, $T \in \mathbb{R}^{K \times L}$, K being the size of the transform, $Z \in \mathbb{R}^{K \times N}$, $w \in \mathbb{R}^{1 \times K}$ and $y \in \mathbb{R}^{1 \times N}$.
In an embodiment of the present disclosure, the joint optimization comprises initialization of the set of parameters and iteratively performing joint learning using the initialized parameters and learnt parameters. In an embodiment, the coefficient, the transform and the weight matrix are initialized at step 204a. The coefficient is initialized with a random matrix comprising real numbers between 0 and 1. The real numbers may be drawn from a uniform distribution. The transform is initialized with a matrix comprising zeros and the weight matrix is initialized as a product of the training output and inverse of the initialized coefficient.
In an embodiment, the joint learning is performed iteratively at step 204b, using the initialized set of parameters of step 204a and the learnt parameters thereafter, until a termination criterion is met. The joint learning comprises learning the transform at step 204b-1 based on the coefficient and the training data. The learning of the transform is represented using Equation (5), with the closed form update given in Equation (8).
Further, the joint learning comprises learning the coefficient at step 204b-2 based on the transform, the training data, the weight matrix and the training output. The learning of the coefficient is represented using the equation below,
$Z \leftarrow \min_Z \|TX - Z\|_F^2 + \mu\|y - wZ\|_2^2$ (14)
The closed form update of Equation (14) may be given as,
$Z = (I + \mu w^T w)^{-1}(TX + \mu w^T y)$ (15)
The joint learning further comprises learning of the weight matrix at step 204b-3 based on the coefficient and the training output. The learning of the weight matrix may be represented as,
$w \leftarrow \min_w \|y - wZ\|_2^2$ (16)
In accordance with an embodiment of the present disclosure, the termination criterion for the iterative learning is any one of (i) completion of a predefined number of iterations (Maxiter) and (ii) difference between the transform of a current iteration and the transform of a previous iteration being less than an empirically determined threshold value (Tol). The empirically determined threshold value is 0.001.
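Putting steps 204a and 204b together, a minimal training-loop sketch is shown below. It reuses the hypothetical update_transform helper sketched earlier (hence the square-transform assumption K = L), and the default values of lam, mu and max_iter are illustrative assumptions rather than values prescribed by the disclosure (only Tol = 0.001 is given above):

```python
import numpy as np

def tlr_fit(X, y, K, lam=0.1, mu=0.1, max_iter=100, tol=1e-3):
    """Joint learning of the transform T and weight matrix w (steps 204a-204b).
    X: L x N training data, y: 1 x N training output, K: size of the transform."""
    rng = np.random.default_rng(0)
    Z = rng.uniform(0.0, 1.0, size=(K, X.shape[1]))  # step 204a: random coefficients in (0, 1)
    T = np.zeros((K, X.shape[0]))                    # step 204a: zero transform
    w = y @ np.linalg.pinv(Z)                        # step 204a: w = y Z^dagger
    for _ in range(max_iter):                        # Maxiter termination criterion
        T_prev = T
        T = update_transform(X, Z, lam)              # step 204b-1: Equations (5) and (8)
        # step 204b-2, Equation (15): Z = (I + mu w^T w)^{-1} (TX + mu w^T y)
        Z = np.linalg.solve(np.eye(K) + mu * (w.T @ w), T @ X + mu * w.T @ y)
        w = y @ np.linalg.pinv(Z)                    # step 204b-3: Equation (16)
        if np.linalg.norm(T - T_prev, 'fro') < tol:  # Tol termination criterion
            break
    return T, w
```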
After obtaining, using the joint learning, the learnt parameters, i.e., (i) the learnt transform and (ii) the learnt weight matrix for the monitored system, the one or more processors 102 are configured to estimate, at step 206, an output (y_new) of the monitored system for a new data x_new, in an embodiment of the present disclosure. The new data is a data matrix which is one of (i) a time series data from the one or more sensors connected to the monitored system, (ii) a set of features extracted from parameters sensed by the one or more sensors connected to the monitored system, or (iii) a combination of both (i) and (ii).
In accordance with an embodiment of the present disclosure, the one or more processors 102 are configured to receive, at step 206a, the new data x_new of the monitored system. At step 206b, a new coefficient is estimated using the new data and the learnt transform T of Equation (5). The new coefficient is estimated for the corresponding new data and may be represented as,
$z_{new} = Tx_{new}$ (17)
Further, at step 206c, the output y_new for the monitored system is estimated based on the learnt weight matrix of Equation (16) and the estimated new coefficient z_new. The estimation of the output from the learnt weight matrix and the estimated new coefficient may be represented as,
$y_{new} = wz_{new}$ (18)
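At inference time, the two estimation steps of Equations (17) and (18) are just matrix products; a short sketch continuing the naming of the training sketch above:

```python
def tlr_predict(T, w, x_new):
    """Estimate the output for new data (steps 206a-206c)."""
    z_new = T @ x_new  # Equation (17): new coefficient from the learnt transform
    return w @ z_new   # Equation (18): output from the learnt weight matrix
```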
Kernel Transform Learning for regression (KTLR): Kernel transform learning is formulated to capture the non-linearities in the data. The Kernel transform learning may be formulated by using a kernelized version of the input data K(X,X). For KTLR, the joint optimization problem in Equation 13 may be represented as,
$\min_{B,Z,w} \|BK(X,X) - Z\|_F^2 + \lambda(\|B\|_F^2 - \log\det B) + \mu\|y - wZ\|_2^2$ (19)
where B is the transform. The closed form expression for learning the coefficient in the kernelized version may be represented as,
$Z = (I + \mu w^T w)^{-1}(BK(X,X) + \mu w^T y)$ (20)
The update expression for w remains the same for the kernelized version. In the kernelized version, Equation (17) for computing a new coefficient from a new data may be represented as,
$z_{new} = Bf(x_{new})^T f(X)$ (21)
where B is the learnt transform. The estimation of the output of the monitored system remains the same as in Equation (18).
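In code, KTLR therefore differs from the basic TLR sketch only in its inputs: the kernel matrix K(X,X) replaces the raw data during training, and the kernel column of Equation (21) replaces Tx_new at prediction. A hedged sketch reusing the hypothetical rbf_kernel and tlr_fit helpers (under the earlier square-transform assumption, the number of atoms then equals N):

```python
import numpy as np

def ktlr_fit(X, y, n_atoms, lam=0.1, mu=0.1, max_iter=100, tol=1e-3):
    """KTLR training (Equations (19)-(20)): run the same alternating updates
    with the N x N kernel matrix standing in for the raw data."""
    return tlr_fit(rbf_kernel(X, X), y, n_atoms, lam, mu, max_iter, tol)

def ktlr_predict(B, w, X_train, x_new):
    """Equation (21) followed by Equation (18)."""
    k_new = rbf_kernel(X_train, x_new.reshape(-1, 1))  # f(X)^T f(x_new), N x 1
    return w @ (B @ k_new)
```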
The pseudocode for the TLR and KTLR algorithm is provided below:
Algorithm 1: Transform and Kernel Transform Learning for Regression (TLR or KTLR)
Input: set of training data X, training output y, size of transform (atoms) K, parameters $(\lambda, \mu)$, kernel function used to compute the kernel matrices K(X,X) and K(x_new,X), and test data x_new
Output: learnt transform T or B, weight vector w, estimated output y_new
Initialization: set the coefficient $Z_0$ to a random matrix with real numbers between 0 and 1 drawn from a uniform distribution, the transform $T_0$ to a matrix of zeros, the weight matrix $w_0 = yZ_0^\dagger$, where $\dagger$ denotes pseudo-inverse, and the iteration index i = 1
1: procedure
2: loop: repeat until convergence (or a fixed number of iterations Maxiter)
3: $T_i$ or $B_i \leftarrow 0.5\,V_i(S_i + (S_i^2 + 2\lambda I)^{1/2})U_i^T L_i^{-1}$
4: $Z_i \leftarrow$ update using $T_i$ or $B_i$ and $w_{i-1}$, using Equation (15) or (20)
5: $w_i \leftarrow yZ_i^\dagger$
6: $i \leftarrow i + 1$
7: if $\|T_i\,(\text{or } B_i) - T_{i-1}\,(\text{or } B_{i-1})\|_F <$ Tol, exit the loop
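For completeness, a hypothetical end-to-end usage of the sketches above on synthetic data (all sizes and values are illustrative, not from the disclosure):

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.standard_normal((8, 200))   # L = 8 features, N = 200 samples
y_train = rng.standard_normal((1, 200))   # 1 x N training output
T, w = tlr_fit(X_train, y_train, K=8)     # square transform: K = L = 8
x_new = rng.standard_normal((8, 1))
y_new = tlr_predict(T, w, x_new)          # 1 x 1 estimate for the new sample
```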
Documents
Application Documents
| # | Name | Date |
| --- | --- | --- |
| 1 | 201921035106-FORM 1 [30-08-2019(online)].pdf | 2019-08-30 |
| 2 | 201921035106-PROVISIONAL SPECIFICATION [30-08-2019(online)].pdf | 2019-08-30 |
| 3 | 201921035106-STATEMENT OF UNDERTAKING (FORM 3) [30-08-2019(online)].pdf | 2019-08-30 |
| 4 | 201921035106-DRAWINGS [30-08-2019(online)].pdf | 2019-08-30 |
| 5 | 201921035106-Proof of Right (MANDATORY) [18-09-2019(online)].pdf | 2019-09-18 |
| 6 | 201921035106-ORIGINAL UR 6(1A) FORM 1-250919.pdf | 2019-09-28 |
| 7 | 201921035106-FORM-26 [11-10-2019(online)].pdf | 2019-10-11 |
| 8 | 201921035106-COMPLETE SPECIFICATION [24-08-2020(online)].pdf | 2020-08-24 |
| 9 | 201921035106-DRAWING [24-08-2020(online)].pdf | 2020-08-24 |
| 10 | 201921035106-FORM 3 [24-08-2020(online)].pdf | 2020-08-24 |
| 11 | 201921035106-FORM 18 [24-08-2020(online)].pdf | 2020-08-24 |
| 12 | 201921035106-ENDORSEMENT BY INVENTORS [24-08-2020(online)].pdf | 2020-08-24 |
| 13 | Abstract1.jpg | 2021-10-19 |
| 14 | 201921035106-FER.pdf | 2021-11-18 |
| 15 | 201921035106-CLAIMS [25-03-2022(online)].pdf | 2022-03-25 |
| 16 | 201921035106-COMPLETE SPECIFICATION [25-03-2022(online)].pdf | 2022-03-25 |
| 17 | 201921035106-FER_SER_REPLY [25-03-2022(online)].pdf | 2022-03-25 |
| 18 | 201921035106-OTHERS [25-03-2022(online)].pdf | 2022-03-25 |
| 19 | 201921035106-US(14)-HearingNotice-(HearingDate-12-11-2025).pdf | 2025-10-16 |
| 20 | 201921035106-Correspondence to notify the Controller [06-11-2025(online)].pdf | 2025-11-06 |
| 21 | 201921035106-FORM-26 [10-11-2025(online)].pdf | 2025-11-10 |
| 22 | 201921035106-FORM-26 [10-11-2025(online)]-1.pdf | 2025-11-10 |
Search Strategy
| 1 | Search_Strategy_201921035106E_15-11-2021.pdf |