Abstract: A system and method for building a classifier on sparsely annotated data is provided. The method includes steps for training a model P_θ^t(y^t | x^t) on target data D^t = {(x^t_1, y^t_1), (x^t_2, y^t_2), …, (x^t_m, y^t_m)} with a plurality of classes. Further, a posterior likelihood is computed for each class of source data D^s = {(x^s_1, y^s_1), (x^s_2, y^s_2), …, (x^s_n, y^s_n)} using the trained model, for each of the plurality of classes of the target data. Furthermore, the source class that maximizes the posterior likelihood is matched to each of the plurality of classes of the target data. Further, a classifier is trained on the source data using the matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}. Finally, a classifier is built by retraining the model P_θ^t(y^t | x^t) using the matched source classes y^s_* and a loss term.
The subject matter relates to the field of image processing and automation, and more particularly, but not exclusively, to building a classifier on sparsely annotated data for improving the generalization ability of classifiers built for specific tasks.
[0002] Human action recognition from videos refers to the identification and classification of different action types in a video as and when they occur. Accurate representation and classification of human actions is a challenging area of research in computer vision, which includes human detection, human pose estimation, and tracking of human behavior. Human action recognition is typically solved in a supervised machine learning setting. Most of the successful models employ convolutional neural networks (CNNs) as their backbone and can be broadly categorized into three types: two-stream networks, 3-dimensional (3D) convolutional neural networks, and convolutional long short-term memory (LSTM) networks.
[0003] The two-stream architecture has two CNNs, each separately trained on image (RGB) sequences and optical flow sequences. The model averages the predictions from a single RGB frame and a stack of multiple optical flow frames after passing them through the two CNNs, which are pre-trained on large-scale static image datasets. An extension of the two-stream model fuses the two streams after the last CNN layer, showing improved performance. The bag-of-words modeling approach ignores the temporal structure, as it pools predictions of features extracted from frames across the video. A recurrent layer such as an LSTM can be added to such a model to capture long-range dependencies and temporal ordering. 3D CNNs are convolutional networks with spatio-temporal filters that create hierarchical representations of spatio-temporal data. They have many more parameters than 2D CNNs and are therefore harder to train. As these models could not use static image datasets for pre-training, shallow custom architectures are defined and trained from scratch.
[0004] The two-stream Inflated 3D CNN, or I3D, has a two-stream architecture with each stream trained on RGB and optical flow sequences separately. The I3D model can be seen as an inflated version of a 2D architecture, where the 2D filters and pooling kernels gain an additional temporal dimension. The 3D model can be pre-trained on static image datasets by repeating the weights of the 2D filters along the temporal dimension and then rescaling. I3D is one of the standard benchmarks on the UCF101 and HMDB51 datasets. The Temporal Segment Network, or TSN, is another example of a two-stream network that better models long-range temporal dependencies. Rather than working on individual frames or stacks of contiguous frames, TSN works on short snippets sparsely sampled from the entire video. A class is predicted for each of these sparse snippets, and a final consensus is taken to predict the class of the video.
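The TSN-style sparse sampling and consensus described above can be sketched as follows. This is an illustration of the prior-art scheme, not part of the disclosed invention; the `classify_snippet` callable, segment count, and snippet length are hypothetical placeholders:

```python
import random

def tsn_consensus(video_frames, classify_snippet, num_segments=3, snippet_len=5):
    """Split a video into equal segments, sample one short snippet per
    segment, classify each snippet, and average the per-class scores
    (the 'consensus') to predict the video-level class."""
    seg_len = len(video_frames) // num_segments
    scores = None
    for s in range(num_segments):
        start_lo = s * seg_len
        start_hi = max(start_lo, (s + 1) * seg_len - snippet_len)
        start = random.randint(start_lo, start_hi)  # sparse, random snippet
        snippet_scores = classify_snippet(video_frames[start:start + snippet_len])
        if scores is None:
            scores = list(snippet_scores)
        else:
            scores = [a + b for a, b in zip(scores, snippet_scores)]
    scores = [s / num_segments for s in scores]  # consensus by averaging
    return max(range(len(scores)), key=lambda c: scores[c])
```

In practice the snippet classifier would be a two-stream CNN; here any callable returning per-class scores suffices.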
[0005] The above techniques may be effective for use cases where the datasets are huge. It may not be economically feasible to gather and annotate human action data for every minor and particular use case: such data is not only time-consuming to collect, but models trained on it often end up over-fit and do not perform well on unseen data. At the same time, there is abundant availability of large-scale public datasets that contain thousands of annotated video clips corresponding to hundreds of action classes.
[0006] In view of the foregoing, there is a need for an improved technique that leverages large-scale annotated datasets to improve the generalization abilities of classifiers built for specific tasks.
SUMMARY
[0007] Accordingly, an improved technique for building a classifier on sparsely annotated data, which leverages available large-scale annotated datasets, is provided. In an embodiment, the method for building a classifier on sparsely annotated data includes steps for training a model P_θ^t(y^t | x^t) on target data D^t = {(x^t_1, y^t_1), (x^t_2, y^t_2), …, (x^t_m, y^t_m)} with a plurality of classes. Further, a posterior likelihood is computed for each class of source data D^s = {(x^s_1, y^s_1), (x^s_2, y^s_2), …, (x^s_n, y^s_n)} using the trained model, for each of the plurality of classes of the target data. Furthermore, the source class that maximizes the posterior likelihood is matched to each of the plurality of classes of the target data. Further, a classifier is trained on the source data using the matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}. Finally, a classifier is built by retraining the model P_θ^t(y^t | x^t) using the matched source classes y^s_* and a loss term.
[0008] In an embodiment, the method for building a classifier includes a step of computing a loss term using a Frobenius norm, L_DR = ||Ê_θ^T Ê_φ − I_k||_F, based on the parameters of the models P_θ^t(y^t | x^t) and P_φ^s(y^s | x^s).
[0009] In an embodiment, the source data is an annotated large public dataset.
[0010] In an embodiment, the target data is a dataset built for a specific use case.
[0011] In an embodiment, video-to-video adaptation is performed using the classifier built on sparsely annotated data for recognizing human actions.
[0012] In another embodiment, a system for building a classifier on sparsely annotated data is provided. The system includes at least one processor and a memory coupled to the at least one processor. The memory includes an application program configured to perform operations for building a classifier on sparsely annotated data, the operations including training a model P_θ^t(y^t | x^t) on target data D^t = {(x^t_1, y^t_1), (x^t_2, y^t_2), …, (x^t_m, y^t_m)} with a plurality of classes. Further, a posterior likelihood is computed for each class of source data D^s = {(x^s_1, y^s_1), (x^s_2, y^s_2), …, (x^s_n, y^s_n)} using the trained model, for each of the plurality of classes of the target data. Furthermore, the source class that maximizes the posterior likelihood is matched to each of the plurality of classes of the target data. Further, a classifier is trained on the source data using the matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}. Finally, a classifier on sparsely annotated data is built by retraining the model P_θ^t(y^t | x^t) using the matched source classes y^s_* and a loss term.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Embodiments are illustrated by way of example in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
[0014] FIG. 1 illustrates action classes in two different datasets in the optical flow domain;
[0015] FIG. 2 illustrates an exemplary process flow diagram 200 for building a classifier in accordance with an embodiment of the invention;
[0016] FIG. 3 illustrates an exemplary method 300 for building a classifier in accordance with an embodiment; and
[0017] FIG. 4 illustrates a block diagram of a system 400 in which an embodiment for building a classifier on sparsely annotated data may be implemented.
DETAILED DESCRIPTION
[0018] The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments are described in enough detail to enable those skilled in the art to practice the present subject matter. However, it will be apparent to one with ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Within the scope of the detailed description and the teachings provided herein, additional embodiments, applications, features, and modifications will certainly be recognized by a person skilled in the art. Therefore, the following detailed description is not to be taken in a limiting sense.
[0019] In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
[0020] Generalization across unseen data is one of the most important problems in learning theory. Specifically, very high capacity models such as deep neural networks tend to easily over-fit the training data, leading to poor performance on test data. Several regularization techniques are routinely incorporated to address the problem of overfitting. Most of them attempt to reduce the generalization error by trading increased bias for reduced variance. Some of the popular approaches include imposing a norm penalty on the network parameters, stochastic pruning of parameters, normalization of activations, and data augmentation.
[0021] FIG. 1 illustrates action classes in two different datasets in the optical flow domain. One representative RGB frame and one flow frame are depicted in each case. The directional closeness of the optical flow frames can be observed despite the classes being completely unrelated: the (a)-(b) arms-up action in a dataset is close to the (c)-(d) pull-ups action in a large annotated dataset called the ‘Kinetics dataset’. Further, the (e)-(f) rolly-polly action in the dataset is close to the (g)-(h) flic-flac action in another annotated dataset called the ‘HMDB51 dataset’.
[0022] In any given video, there exist transformations such as optical flow, which are non-unique mappings of the video space. This suggests that, given multiple disjoint sets of action classes, there may be spaces (such as flow) where a given pair of action classes may lie ‘close’, albeit they represent different semantics in the RGB space. For example, the optical flow characteristics of a ‘baseball-strike’ class and a ‘cricket-slog’ class could be imagined to be close. Further, there exist large-scale, open datasets (e.g., Kinetics) that encompass a large number of annotated videos for several action classes. Thus, if one can find the classes in the open datasets that are ‘close’ to a given class in the data of interest, then the videos from the open dataset can potentially be used for augmentation, resulting in regularization. The following disclosure describes the implementation of this invention in a detailed manner.
NOTATIONS
[0023] Let X denote the sample space encompassing the elements of transformed videos (e.g., optical flow). Let P^s(x^s) and P^t(x^t) be two distributions on X, respectively called the source and target distributions. Suppose a semantic labeling scheme is defined both on P^s(x^s) and P^t(x^t). That is, let Y^s = {y^s_1, y^s_2, …, y^s_N} and Y^t = {y^t_1, y^t_2, …, y^t_M} be the source and target class labels assigned to the samples of P^s(x^s) and P^t(x^t) respectively, which in turn define the joint distributions P^s(x^s, y^s) and P^t(x^t, y^t). ‘N’ and ‘M’ are the respective numbers of source and target classes.
[0024] Let D^s = {(x^s, y^s)} and D^t = {(x^t, y^t)} denote the tuples of samples drawn from the two joint distributions P^s and P^t, respectively. Suppose a parametric discriminative classifier (a deep neural network) is learned using D^t to obtain an estimate of the conditional distribution P_θ^t(y^t | x^t), where ‘θ’ represents the parameters of the neural network.
[0025] With these notations, we consider the case where the cardinality of D^t is much less than that of D^s, implying that the amount of supervised data for the target distribution is much less than that for the source distribution. In such a case, P_θ^t(y^t | x^t) trained on D^t is deemed to overfit and hence does not generalize well. As discussed, if there exists a y^s_p ∈ Y^s that is ‘close’ to y^t_q ∈ Y^t, then samples drawn from P^s(x^s | y^s = y^s_p) can be used to augment the class y^t_q for re-training the model P_θ^t(y^t | x^t). In the subsequent section, we describe a procedure to find the ‘closest’ y^s_p ∈ Y^s, given y^t_q ∈ Y^t and a model P_θ^t(y^t | x^t) trained on D^t.
[0026] FIG. 2 illustrates an exemplary process flow diagram 200 for building a classifier on sparsely annotated data in accordance with an embodiment of the invention. Briefly, as illustrated, a classifier is first trained on the target data and then used to match the modes (classes) of the source data. After mode matching, the classifier is trained again with the classes from the matched source data, along with a directional regularization loss.
DISTRIBUTIONAL MODE MATCHING
[0027] Videos lie in a very high dimensional space and are, in general, of variable length. Thus, standard vector distance metrics are not feasible for measuring the closeness of two video objects.
[0028] Further, the objective here is to quantify the distance between the classes as perceived by the discriminative model (classifier) P_θ^t(y^t | x^t), so that data augmentation is sensible. Thus, we propose to use the maximum posterior likelihood principle to define the closeness between two classes. Let X(y^s = y^s_p) = {x^s_1, x^s_2, …, x^s_l} denote the samples drawn from P^s(x^s | y^s = y^s_p).
[0029] Now, P_θ^t(y^t | x^s_j) denotes the posterior distribution of the target classes Y^t given the jth feature vector from the source class y^s_p. With this, a joint posterior likelihood L_{y^t|x^s} of a class y^s_p can be defined as that of observing the target classes given a set of features drawn from that particular source class y^s_p.
[0030] Mathematically,

L_{y^t|x^s} = P_θ^t(y^t_1, y^t_2, …, y^t_l | x^s_1, x^s_2, …, x^s_l)    (1)

[0031] where x^s_j, j ∈ {1, 2, …, l}, are from the class y^s_p. If it is assumed that the x^s_j are drawn IID, one can express Eq. (1) as,

L_{y^t|x^s} = ∏_{j=1}^{l} P_θ^t(y^t_j | x^s_j)    (2)

[0032] This is because the parameters ‘θ’ of the discriminator model created using D^t are independent of X(y^s = y^s_p) and are fixed during the evaluation of L_{y^t|x^s}, which implies that y^t_i | x^s_i is independent of x^s_j for all i ≠ j, thus leading to Eq. (2).
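The IID factorization of Eq. (2) is, in practice, best evaluated in log space so that the product of many small posteriors does not underflow. A minimal sketch, where the function name and input format are illustrative rather than from the disclosure:

```python
import math

def joint_posterior_log_likelihood(posteriors):
    """Log of the joint posterior likelihood of Eq. (2): under the IID
    assumption the product of per-sample posteriors becomes a sum of
    their logs.  `posteriors[j]` stands for P_theta_t(y_t | x_s_j)
    evaluated at the target class of interest."""
    return sum(math.log(p) for p in posteriors)
```

For two samples with posteriors 0.5 each, the joint likelihood is 0.25 and its log is log(0.25), matching the product form of Eq. (2).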
[0033] The posterior likelihood in Eq. (2) can be evaluated for every target class y^t = y^t_q, q ∈ {1, 2, …, M}, denoted by L_{y^t_q|x^s} and called the target-class posterior likelihood corresponding to the features from the source class y^s_p under the learned classifier P_θ^t(y^t | x^t). Mathematically,

L_{y^t_q|x^s} = ∏_{j=1}^{l} P_θ^t(y^t = y^t_q | x^s_j)    (3)

[0034] With this definition of the target-class posterior likelihood, we define the matched source class y^s_* to a given target class y^t_q as follows:

y^s_*(y^t_q) = argmax_{y^s_p ∈ Y^s} L_{y^t_q|x^s}, with x^s_j drawn from X(y^s = y^s_p)    (4)

[0035] Note that the definition of L_{y^t_q|x^s} is specific to a source-target class pair, and therefore all x^s_j in the objective function of the optimization problem in Eq. (4) come from a particular source class. Thus, one can employ the discriminative classifier trained on the target data to find the ‘closest’ matching source class as the one that maximizes the posterior likelihood of observing that class as the given target class under the classifier. Since every class in the joint distribution can be viewed as a ‘mode’, and the goal here is to match the classes (‘modes’) in the joint distributions of the source and target data, the procedure is called distributional mode matching.
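The mode-matching rule of Eqs. (3)-(4) can be sketched as follows, with a generic `posterior(x, y_t)` callable standing in for the trained classifier P_θ^t(y^t | x^t); all names here are illustrative assumptions, and the product of Eq. (3) is taken in log space:

```python
import math

def match_source_class(source_samples_by_class, posterior, target_class):
    """Distributional mode matching: for each source class, sum the log
    posterior of the given target class over that class's samples
    (log of Eq. (3)), and return the source class maximizing it (Eq. (4))."""
    best_class, best_ll = None, float("-inf")
    for ys, samples in source_samples_by_class.items():
        ll = sum(math.log(posterior(x, target_class)) for x in samples)
        if ll > best_ll:
            best_class, best_ll = ys, ll
    return best_class
```

The same loop, run once per target class, yields the full matched set y^s_*.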
[0036] Referring to FIG. 1, which demonstrates the idea of mode matching through examples: representative frames of the RGB and optical flow spaces from two target classes (arms-up and rolly-polly) are shown with the corresponding classes of two source datasets, matched using the aforementioned procedure. It is observed that the optical flow frames of the target and source classes have similar visual properties, indicating their closeness.
[0037] Once the matched source class is determined for every given target class, the set of matched source classes is defined as y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}. Now, the discriminative classifier P_θ^t can be re-trained on the samples from the source dataset corresponding to y^s_* in a supervised way, with the class label being the corresponding y^t_q for every y^s_*. This procedure thus increases the quantity and variety of the training data for P_θ^t.
DIRECTIONAL REGULARIZATION
[0038] The procedure of mode matching described in the previous section effectively changes the semantic meaning of the matched source classes to that of the target classes. Thus, it is possible to train a classifier on the source data to discriminate between the matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}. Suppose such a classifier is denoted by P_φ^s(y^s | x^s), where φ are the model parameters. It is assumed that P_φ^s(y^s | x^s) and P_θ^t(y^t | x^t) have the same architectural properties. Further, it is also assumed that the source dataset is larger and more diverse than the target dataset. This implies that P_φ^s(y^s | x^s) has better generalization abilities than P_θ^t(y^t | x^t); this fact is leveraged to improve the generalization capabilities of P_θ^t(y^t | x^t) using P_φ^s(y^s | x^s).
[0039] Further, during the training of P_φ^s(y^s_* | x^s) with samples from y^s_*, it is desirable that the separation achieved between the classes in y^s_* under the classifier P_φ^s(y^s_* | x^s) is ‘preserved’ during the training of P_θ^t(y^t | x^t) with samples from y^s_*. This may be accomplished by imposing a regularization term during the training of P_θ^t(y^t | x^t). Specifically, we propose to push the Eigen directions of the parameter matrix θ towards those of the parameter matrix φ. Note that φ is fixed during the training of P_θ^t(y^t | x^t). Intuitively, this implies that the significant directions of the target parameters should follow those of the source parameters.
[0040] Mathematically, let M_θ and M_φ be two square matrices formed by reshaping (without any preference for particular dimensions) the parameters θ and φ, respectively. Suppose we perform an Eigen-value decomposition on M_θ and M_φ to obtain the Eigen vector matrices E_θ and E_φ, respectively. Let Ê_θ and Ê_φ denote the truncated versions of E_θ and E_φ with the first k significant Eigen vectors, k being a model hyper-parameter. Under this setting, we desire the Eigen directions Ê_θ and Ê_φ to be aligned. Mathematically, if they are perfectly aligned, then

Ê_θ^T Ê_φ = I_k    (5)

[0041] where I_k is the k-dimensional identity matrix and T denotes the transpose operation. Thus, any deviation from the condition laid out in Eq. (5) is penalized by minimizing the Frobenius norm of the deviation. This is referred to as the directional regularization, denoted ‘L_DR’, which is given by the following equation:

L_DR = ||Ê_θ^T Ê_φ − I_k||_F    (6)

where ||·||_F denotes the Frobenius norm of a matrix.
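A sketch of computing L_DR per Eqs. (5)-(6), assuming the flattened parameter vectors reshape into square matrices; the symmetrization step is an added assumption of this sketch (not stated in the disclosure) so that the eigendecomposition stays real-valued:

```python
import numpy as np

def directional_regularization(theta, phi, k):
    """L_DR of Eq. (6): reshape the parameter vectors into square
    matrices, take the k most significant eigenvectors of each, and
    penalize the Frobenius norm of (E_theta^T E_phi - I_k)."""
    d = int(round(np.sqrt(theta.size)))  # assumes a perfect-square size
    M_theta = (theta.reshape(d, d) + theta.reshape(d, d).T) / 2  # symmetrize
    M_phi = (phi.reshape(d, d) + phi.reshape(d, d).T) / 2
    wt, Et = np.linalg.eigh(M_theta)
    wp, Ep = np.linalg.eigh(M_phi)
    # Keep the k eigenvectors with largest-magnitude eigenvalues.
    Et_k = Et[:, np.argsort(-np.abs(wt))[:k]]
    Ep_k = Ep[:, np.argsort(-np.abs(wp))[:k]]
    return float(np.linalg.norm(Et_k.T @ Ep_k - np.eye(k), ord="fro"))
```

When θ = φ, the truncated eigenvector matrices coincide, Ê_θ^T Ê_φ reduces to I_k, and L_DR is zero, consistent with Eq. (5).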
[0042] It shall be noted that this regularizer on θ, imposed during the training of P_θ^t(y^t | x^t), ensures that the directions of the separating hyperplanes of the classifier are encouraged to follow those of the source classifier trained with the matched classes. Finally, the objective function during the re-training of the classifier is as follows:

L = L_cls(y^t, ŷ^t) + L_DR    (7)

where L_cls is the supervised classification loss and ŷ^t is the predicted target class.
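Assuming the classification term is a standard cross-entropy (the disclosure specifies only that a loss term is added alongside the classification objective), the re-training objective could be evaluated as follows, with the truncated eigenvector matrices precomputed; all names are illustrative:

```python
import numpy as np

def retraining_objective(probs, labels, Et_k, Ep_k):
    """Total re-training loss: mean cross-entropy of the predicted class
    probabilities over the target labels, plus the directional
    regularizer ||Et_k^T Ep_k - I_k||_F on the truncated eigenvector
    matrices of the two parameter matrices."""
    k = Et_k.shape[1]
    ce = -float(np.mean(np.log(probs[np.arange(len(labels)), labels])))
    l_dr = float(np.linalg.norm(Et_k.T @ Ep_k - np.eye(k), ord="fro"))
    return ce + l_dr
```

With perfectly aligned eigen directions the regularizer vanishes and the objective reduces to the plain cross-entropy.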
[0043] FIG. 3 illustrates an exemplary method 300 for building a classifier in accordance with an embodiment of the invention. Assuming only a small amount of data from the target distribution, in the proposed method 300, at step 302, the one or more processors of the system 400 train a classifier on the target data with a plurality of classes. Further, at steps 304 and 306, the closest class from the source distribution to each of the target classes is estimated using the classifier. This is realized by computing the posterior likelihood for each of the classes of the source data using the trained model for each of the plurality of classes of the target data at step 304, and then matching the source class that maximizes the posterior likelihood for each of the plurality of classes of the target data at step 306.
[0044] Further, at step 308, the one or more processors of the system 400 train a new (relatively robust) classifier on the samples from the source distribution with re-labeled source classes (matched with the target classes). At step 310, a loss term using a Frobenius norm is computed based on the parameters of the models P_θ^t(y^t | x^t) and P_φ^s(y^s | x^s). Finally, the model is retrained to build a classifier on sparsely annotated data using samples of the matched source classes with the loss term (directional regularization). This classifier is the final model for the target data.
[0045] FIG. 4 illustrates a system 400 in which aspects of the invention may be implemented. As shown, the system 400 includes, without limitation, a central processing unit (CPU) 410, a network interface 430, a bus 440, a memory 460, and storage 450. The system 400 may also include an I/O device interface 420 connecting I/O devices 470 (e.g., keyboard, display, and mouse devices) to the system 400.
[0046] The CPU 410 retrieves and executes programming instructions stored in the memory 460. Similarly, the CPU 410 stores and retrieves application data residing in the memory 460. The bus 440 facilitates transmission, such as of programming instructions and application data, between the CPU 410, I/O device interface 420, storage 450, network interface 430, and memory 460. CPU 410 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like. Further, the memory 460 is generally included to be representative of a random-access memory. The storage 450 may be a disk drive storage device. Although shown as a single unit, the storage 450 may be a combination of fixed and/or removable storage devices, such as tape drives, removable memory cards or optical storage, network attached storage (NAS), or a storage area network (SAN). Further, the system 400 is included to be representative of a physical computing system as well as of virtual machine instances hosted on a set of underlying physical computing systems. Further still, although shown as a single computing system, one of ordinary skill in the art will recognize that the components of the system 400 shown in FIG. 4 may be distributed across multiple computing systems connected by a data communications network.
[0047] As shown, the memory 460 includes an operating system 462, an application interface 464, and one or more applications. The one or more applications may include an application program configured to perform operations for building a classifier. The operations performed by the CPU while executing instructions of the application program comprise: training a classifier on the target samples; estimating the closest class from the source distribution to each of the target classes using the classifier; training a new (relatively robust) classifier on the samples from the source distribution with re-labeled source classes (matched with the target classes); and forming the final model/classifier by re-training the classifier using the samples of the matched source classes along with directional regularization.
[0048] Although the processes described above are presented as a sequence of steps, this is solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, or some steps may be performed simultaneously. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware. Some of the embodiments may also be directed to a sequence of instructions stored in a computer-readable medium, such that the sequence of instructions, when executed by one or more processing devices, allows the processing devices to operate as described herein. The computer-readable medium can include any primary or secondary storage devices.
[0049] Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the system and method described herein. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
[0050] Many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description. It is to be understood that the phraseology and terminology employed herein are for the purpose of description and not of limitation. While the description above contains many specifics, these should not be construed as limiting the scope of the invention, but merely as providing illustrations of some of the presently preferred embodiments of this invention. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents rather than by the examples given herein.
I/WE CLAIM:
1. A method for building a classifier on sparsely annotated data comprising:
   with at least one processor (410) of one or more computing devices (400):
   training a model P_θ^t(y^t | x^t) on a target data D^t = {(x^t_1, y^t_1), (x^t_2, y^t_2), …, (x^t_m, y^t_m)} with a plurality of classes, where ‘θ’ represents parameters of a neural network;
   computing a posterior likelihood for each of the classes of a source data D^s = {(x^s_1, y^s_1), (x^s_2, y^s_2), …, (x^s_n, y^s_n)} using the trained model for each of the plurality of classes of the target data;
   matching the source class that maximizes the posterior likelihood for each of the plurality of classes of the target data;
   training a classifier on source data using matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}; and
   building a classifier by retraining the model P_θ^t(y^t | x^t) using matched source classes y^s_* and a loss term.
2. The method as claimed in claim 1, further comprising: computing a loss term using a Frobenius norm L_DR = ||Ê_θ^T Ê_φ − I_k||_F based on the parameters of the models P_θ^t(y^t | x^t) and P_φ^s(y^s | x^s).
3. The method as claimed in claim 1, wherein the source data is an annotated public dataset.
4. The method as claimed in claim 1, wherein the target data is a dataset built for a specific use case.
5. The method as claimed in claim 1, wherein video-to-video adaptation is performed using the classifier for recognizing human actions.
6. A system comprising:
   at least one processor (410); and
   a memory (460), wherein the memory includes an application program configured to perform operations for building a classifier, the operations comprising:
   training a model P_θ^t(y^t | x^t) on a target data D^t = {(x^t_1, y^t_1), (x^t_2, y^t_2), …, (x^t_m, y^t_m)} with a plurality of classes, where ‘θ’ represents parameters of a neural network;
   computing a posterior likelihood for each of the classes of a source data D^s = {(x^s_1, y^s_1), (x^s_2, y^s_2), …, (x^s_n, y^s_n)} using the trained model for each of the plurality of classes of the target data;
   matching the source class that maximizes the posterior likelihood for each of the plurality of classes of the target data;
   training a classifier on source data using matched source classes y^s_* = {y^s_*1, y^s_*2, …, y^s_*m}; and
   building a classifier by retraining the model P_θ^t(y^t | x^t) using matched source classes y^s_* and a loss term.
7. The system as claimed in claim 6, further comprising: computing a loss term using a Frobenius norm L_DR = ||Ê_θ^T Ê_φ − I_k||_F based on the parameters of the models P_θ^t(y^t | x^t) and P_φ^s(y^s | x^s).
8. The system as claimed in claim 6, wherein the source data is an annotated public dataset.
9. The system as claimed in claim 6, wherein the target data is a dataset built for a specific use case.
10. The system as claimed in claim 6, wherein the system performs video-to-video adaptation using the classifier for recognizing human actions.