
Methods and Systems for Building a Semi-Supervised Few-Shot Model

Abstract: The disclosure herein relates to methods and systems for building a few-shot model with a semi-supervised learning approach. For extreme cases like one-shot learning, a conventional few-shot model may assume each individual sample as a separate cluster, with itself being the prototype. Hence, the performance of conventional few-shot models trained on multiple shots may degrade drastically in a one-shot inference setup. In the present disclosure, an episodic triplet mining concept is introduced, which mines the triplets for each episode, and the model is trained for a number of episodes. The triplets are mined such that each triplet includes semi-hard positive samples and semi-hard negative samples, rather than all the available samples, to avoid the over-fitting that arises from the usual all-possible-triplet mining strategy. Also, an episodic triplet loss function is introduced in place of the conventional prototypical loss function to solve the convergence problem. To be published with FIG. 3


Patent Information

Filing Date: 12 August 2020
Publication Number: 23/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Grant Date: 30 October 2024

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai Maharashtra India 400021

Inventors

1. BHOSALE, Swapnil
Tata Consultancy Services Limited Yantra Park, Opp Voltas HRD Training Center, Subhash Nagar, Pokhran Road No. 2, Thane Maharashtra India 400601
2. CHAKRABORTY, Rupayan
Tata Consultancy Services Limited Yantra Park, Opp Voltas HRD Training Center, Subhash Nagar, Pokhran Road No. 2, Thane Maharashtra India 400601
3. KOPPARAPU, Sunil Kumar
Tata Consultancy Services Limited Yantra Park, Opp Voltas HRD Training Center, Subhash Nagar, Pokhran Road No. 2, Thane Maharashtra India 400601

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHODS AND SYSTEMS FOR BUILDING A SEMI-SUPERVISED
FEW-SHOT MODEL
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD [001] The disclosure herein generally relates to the field of few-shot learning, and, more particularly, to methods and systems for building a few-shot model with semi-supervised learning approach.
BACKGROUND
[002] Few-shot learning is becoming an important field in machine learning, especially for solving classification tasks where obtaining a significant amount of samples is quite challenging and expensive. Some of the classification tasks where obtaining the significant amount of samples is challenging and expensive include identification of rare objects such as unique species of birds, the speaker recognition task, the audio event classification task, identification of rare events such as uncommon diseases, speech biometry for authenticating a new employee in a large enterprise, and so on. Few-shot learning makes use of fewer samples for each class while training and generalizes to unseen classes which appear during testing but are unavailable during the training.
[003] A prototypical network is a popular framework for implementing few-shot learning through a metric learning concept. In few-shot metric learning, the prototypical network constructs class prototypes in an embedding space, where the samples within the same class form clusters around a single prototypical point. In conventional prototypical networks, a variance in training data may easily affect relative locations of prototypes since a few-shot model obtained from the few-shot learning relies on an unweighted average of the samples. Also, for extreme cases like one-shot learning where only one sample is present in each class, the conventional few-shot model may assume each individual sample as a separate cluster, with itself being the prototype. As a result, the performance of few-shot models trained on multiple shots may degrade drastically in a one-shot inference setup.
SUMMARY [004] Embodiments of the present disclosure present technological

improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[005] In an aspect, there is provided a processor-implemented method for building a semi-supervised few-shot model, the method comprising the steps of: receiving a labeled dataset and an unlabeled dataset, wherein the labeled dataset comprises a predefined number of labeled samples associated with each unique class of a predefined number of unique classes, and the unlabeled dataset comprises a plurality of unlabeled samples; training a machine learning model with a plurality of triplets for each episode of a predefined number of episodes, using a triplet loss for the corresponding episode, to obtain a pre-trained few-shot model, wherein the plurality of triplets for each episode are obtained by: randomly selecting a first set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a support set and a query set for the corresponding episode, wherein the support set comprises a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes, and the query set comprises a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes; and forming a triplet for each query sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode, using the set of support samples present in the support set, wherein the triplet for each query sample comprises the associated query sample, one or more positive support samples and one or more negative support samples; and re-training the pre-trained few-shot model with a plurality of training triplets for each training episode of a predefined number of training episodes, using a training triplet loss for the corresponding training episode, to build the semi-supervised few-shot model, wherein the plurality of training triplets for each training episode are obtained by: randomly selecting a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode, wherein the training support set comprises a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class

of the second set of unique classes, and the training query set comprises a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes; randomly selecting a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled dataset; assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, to obtain corresponding labeled samples for the first set of unlabeled samples; adding the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples; and forming a training triplet for each training query sample of the set of training query samples present in the training query set, to obtain the plurality of training triplets for the corresponding training episode, using the second set of training support samples present in the revised training support set, wherein the training triplet for each training query sample comprises the associated training query sample, one or more positive training support samples and one or more negative training support samples.
[006] In another aspect, there is provided a system for building a semi-supervised few-shot model, the system comprising: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a labeled dataset and an unlabeled dataset, wherein the labeled dataset comprises a predefined number of labeled samples associated with each unique class of a predefined number of unique classes, and the unlabeled dataset comprises a plurality of unlabeled samples; train a machine learning model with a plurality of triplets for each episode of a predefined number of episodes, using a triplet loss for the corresponding episode, to obtain a pre-trained few-shot model, wherein the plurality of triplets for each episode are obtained by: randomly selecting a first set of unique classes out of the predefined number of unique classes from the labeled

dataset, to form a support set and a query set for the corresponding episode, wherein the support set comprises a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes, and the query set comprises a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes; and forming a triplet for each query sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode, using the set of support samples present in the support set, wherein the triplet for each query sample comprises the associated query sample, one or more positive support samples and one or more negative support samples; and re-train the pre-trained few-shot model with a plurality of training triplets for each training episode of a predefined number of training episodes, using a training triplet loss for the corresponding training episode, to build the semi-supervised few-shot model, wherein the plurality of training triplets for each training episode are obtained by: randomly selecting a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode, wherein the training support set comprises a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class of the second set of unique classes, and the training query set comprises a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes; randomly selecting a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled dataset; assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, to obtain corresponding labeled samples for the first set of unlabeled samples; adding the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples; and forming a training triplet for each training query sample of the set of training query samples present in the training query set, to

obtain the plurality of training triplets for the corresponding training episode, using the second set of training support samples present in the revised training support set, wherein the training triplet for each training query sample comprises the associated training query sample, one or more positive training support samples and one or more negative training support samples.
[007] In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a labeled dataset and an unlabeled dataset, wherein the labeled dataset comprises a predefined number of labeled samples associated with each unique class of a predefined number of unique classes, and the unlabeled dataset comprises a plurality of unlabeled samples; train a machine learning model with a plurality of triplets for each episode of a predefined number of episodes, using a triplet loss for the corresponding episode, to obtain a pre-trained few-shot model, wherein the plurality of triplets for each episode are obtained by: randomly selecting a first set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a support set and a query set for the corresponding episode, wherein the support set comprises a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes, and the query set comprises a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes; and forming a triplet for each query sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode, using the set of support samples present in the support set, wherein the triplet for each query sample comprises the associated query sample, one or more positive support samples and one or more negative support samples; and re-train the pre-trained few-shot model with a plurality of training triplets for each training episode of a predefined number of training episodes, using a training triplet loss for the corresponding training episode, to build the semi-supervised few-shot model, wherein the plurality of training triplets for each training episode are

obtained by: randomly selecting a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode, wherein the training support set comprises a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class of the second set of unique classes, and the training query set comprises a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes; randomly selecting a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled dataset; assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, to obtain corresponding labeled samples for the first set of unlabeled samples; adding the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples; and forming a training triplet for each training query sample of the set of training query samples present in the training query set, to obtain the plurality of training triplets for the corresponding training episode, using the second set of training support samples present in the revised training support set, wherein the training triplet for each training query sample comprises the associated training query sample, one or more positive training support samples and one or more negative training support samples.
[008] In an embodiment, the labeled samples present in the support set and the query set are distinct, and the labeled samples present in the training support set and the training query set are distinct.
[009] In an embodiment, the one or more positive support samples for each triplet are obtained by identifying one or more support samples out of the set of support samples present in the support set, where each support sample of the one or more support samples has the unique class same as that of the associated query sample present in the triplet, and the one or more negative support samples for each triplet are obtained by identifying one or more support samples out of the set of

support samples present in the support set, where each support sample of the one or more support samples has the unique class different from that of the associated query sample present in the triplet.
[010] In an embodiment, the one or more positive training support samples for each training triplet are obtained by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples has the unique class same as that of the associated training query sample present in the training triplet, and the one or more negative training support samples for each training triplet are obtained by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples has the unique class different from that of the associated training query sample present in the training triplet.
[011] In an embodiment, training the machine learning model with the plurality of triplets for each episode, comprises: computing an embedding for each query sample, each positive support sample of the one or more positive support samples and each negative support sample of the one or more negative support samples, present in each triplet of the plurality of triplets, using an embedding model with initial weights present in the machine learning model; finding a second set of positive support samples from the one or more positive support samples, in each triplet, wherein a Euclidean distance between the embedding of the query sample and the embedding of each positive support sample of the second set of positive support samples, is largest; finding a second set of negative support samples from the one or more negative support samples, in each triplet, wherein the Euclidean distance between the embedding of the query sample and the embedding of each negative support sample of the second set of negative support samples, is smallest; determining a triplet loss for each triplet based on (i) an average Euclidean distance (Dy) associated with the second set of positive support samples, (ii) an average Euclidean distance (Dz) associated with the second set of negative support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss

value (h); calculating the triplet loss for the corresponding episode, by adding the triplet loss for each triplet of the plurality of triplets present in the corresponding episode; and updating the weights of the embedding model using a backpropagation, based on the triplet loss for the corresponding episode, to train the machine learning model for a successive episode.
[012] In an embodiment, the average Euclidean distance (Dy) associated with the second set of positive support samples is obtained by calculating an average of the Euclidean distances associated with each positive support sample of the second set of positive support samples, with respect to the query sample.
[013] In an embodiment, the average Euclidean distance (Dz) associated with the second set of negative support samples is obtained by calculating an average of the Euclidean distances associated with each negative support sample of the second set of negative support samples, with respect to the query sample.
[014] In an embodiment, the triplet loss for each triplet is a maximum value out of: (i) Dy - Dz +m and (ii) the predefined triplet loss value (h).
[015] In an embodiment, assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, comprises: computing an embedding for the corresponding unlabeled sample, and each training support sample of the set of training support samples, using an embedding model of the pre-trained few-shot model; finding a second set of training support samples from the set of training support samples, wherein the embedding of each training support sample of the second set of training support samples is closest to the embedding of the corresponding unlabeled sample; identifying the unique class present in majority within the second set of training support samples; and assigning the identified unique class to the corresponding unlabeled sample.
[016] In an embodiment, re-training the pre-trained few-shot model with the plurality of training triplets for each training episode, comprises: computing an embedding for each training query sample, each positive training support sample of the one or more positive training support samples and each negative training support sample of the one or more negative training support samples, present in

each training triplet of the plurality of training triplets, using an embedding model of the pre-trained few-shot model; finding a second set of positive training support samples from the one or more positive training support samples, in each training triplet, wherein an Euclidean distance between the embedding of the training query sample and the embedding of each positive training support sample of the second set of positive training support samples, is largest; finding a second set of negative training support samples from the one or more negative training support samples, in each training triplet, wherein the Euclidean distance between the embedding of the training query sample and the embedding of each negative training support sample of the second set of negative training support samples, is smallest; determining a training triplet loss for each training triplet based on (i) an average training Euclidean distance (Eb) associated with the second set of positive training support samples, (ii) an average training Euclidean distance (Eg) associated with the second set of negative training support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h); calculating the training triplet loss for the corresponding training episode, by adding the training triplet loss for each training triplet of the plurality of training triplets present in the corresponding training episode; and updating weights of the embedding model using a backpropagation, based on the training triplet loss for the corresponding training episode, to re-train the pre-trained few-shot model for a successive training episode.
[017] In an embodiment, the average training Euclidean distance (Eb) associated with the second set of positive training support samples, is obtained by calculating an average of the Euclidean distances associated with each positive training support sample of the second set of positive training support samples, with respect to the training query sample.
[018] In an embodiment, the average training Euclidean distance (Eg) associated with the second set of negative training support samples, is obtained by calculating an average of the Euclidean distances associated with each negative training support sample of the second set of negative training support samples, with respect to the training query sample.

[019] In an embodiment, the training triplet loss for each training triplet is a maximum value out of: (i) Eb - Eg +m and (ii) the predefined triplet loss value (h).
[020] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of the present disclosure, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[021] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[022] FIG.1 is an exemplary block diagram of a system for building a semi-supervised few-shot model, in accordance with some embodiments of the present disclosure.
[023] FIG.2A through FIG.2C illustrate exemplary flow diagrams of a processor-implemented method for building a semi-supervised few-shot model, in accordance with some embodiments of the present disclosure.
[024] FIG.3 is a graph showing a performance comparison of the few-shot model of the present disclosure over the conventional few-shot model trained with a prototypical loss function, for 5-way 1-shot and 20-way 1-shot test setups, in a speaker recognition task, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [025] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is

intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[026] Supervised deep learning models rely heavily on the availability of a substantial amount of labeled data while training. However, acquiring labeled data may be a costly process as intense involvement of human effort is required. Sometimes, the availability is limited due to certain restrictions while collecting and distributing the labeled data. For example, in applications such as identification of rare objects such as unique species of birds, rare events such as uncommon diseases, speech biometry for authenticating a new employee in a large enterprise, and so on, it may be challenging to create reliable models using traditional deep neural networks. In contrast, humans are capable of learning new classes, even from very few samples of the training data. A generalization task utilizes past experiences to learn new concepts across different domains. In fact, humans are able to learn new classes and are good at generalizing these concepts by relating them to past experiences. On the other hand, most of the conventional deep learning models may be able to learn high-level features and extract complex characteristics when a sufficient amount of the labeled data is used for supervised learning based training.
[027] Few-shot learning makes use of fewer samples for the applications where obtaining a significant amount of samples is challenging, and few-shot models may be developed with a semi-supervised learning approach so that most of the available samples are utilized. Few-shot learning may generalize to new classes that are unseen during the training. The learning paradigm is re-framed in few-shot learning, such that the model is not trained to classify a test sample into one of the classes seen during the training, but to generate pairs of samples and optimize the model such that the model predicts whether the two samples are similar or dissimilar. This learning paradigm has the potential to create efficient models by learning to generalize to the samples that belong to the new classes, without re-training the model on training data from the new classes. An extreme case of few-shot learning is one-shot learning, where a single sample for each of the unseen classes is available for making an inference about the test sample.

[028] The few-shot learning may be divided into two categories, namely, meta-learning and metric learning. A meta-learning framework, in general, contains a meta-level learner and a base-level learner. The base-level learner is designed to have a specific task in hand, such as classification, regression, etc. The meta-level learner aims to learn prior knowledge across different tasks, which may be transferred to the base-level learner to quickly adapt to similar unseen tasks. On the other hand, a metric learning framework learns a mapping to an embedding space, where the samples belonging to the same classes are closer, as compared to those belonging to different classes.
[029] The prototypical network is a popular framework in metric learning. The prototypical network learns an embedding space, where the samples within the same class form clusters around a single prototypical point (centroid), which is represented by the mean of the individual samples within the cluster. The conventional prototypical network employs a prototypical loss concept while learning. During inference, a query sample is assigned the class (label) corresponding to the nearest prototype in the embedding space. However, variance in the training data may easily affect the relative locations (positions) of the prototypes since the model relies on an unweighted average of the samples. Also, for extreme cases like one-shot learning, the model may assume each individual sample as a separate cluster, with itself being the prototype. As a result, the performance of models trained on multiple shots may degrade drastically in a one-shot inference setup. Also, the conventional prototypical network may make use of triplets that are formed from the samples of the training data. However, conventional triplet selection during the training process is cumbersome since choosing all possible combinations may lead to over-fitting. Further, with the conventional triplet loss function present in few-shot models, the number of possible triplets to be passed through the embedding space may grow quadratically with the number of samples. Hence, convergence of the conventional triplet loss function is very slow, and this may also affect the performance of the few-shot model.
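By way of illustration only, the following is a minimal sketch (not part of the disclosed method) of the conventional nearest-prototype inference described above; the NumPy array representation of the support embeddings and the function name are assumptions made for this sketch.

    import numpy as np

    def nearest_prototype_label(support_embeddings, support_labels, query_embedding):
        """Conventional prototypical inference: assign the query sample the class
        whose prototype (unweighted mean of its support embeddings) is nearest."""
        support_labels = np.asarray(support_labels)
        prototypes = {}
        for label in np.unique(support_labels):
            # Prototype = unweighted average of the class's support embeddings.
            prototypes[label] = support_embeddings[support_labels == label].mean(axis=0)
        # Euclidean distance from the query embedding to every prototype.
        distances = {label: np.linalg.norm(query_embedding - proto)
                     for label, proto in prototypes.items()}
        return min(distances, key=distances.get)

    # In a one-shot setting each class contributes a single support sample, so every
    # prototype collapses onto that sample itself, which is the degradation noted above.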
[030] The present disclosure herein provides methods and systems that solve the technical problems in building an efficient few-shot model with a semi-

supervised learning approach based on the metric learning concept. An episodic triplet mining concept is introduced in the present disclosure, which mines the triplets for each episode, and the model is trained for a number of episodes. The triplets are mined such that each triplet includes semi-hard positive samples and semi-hard negative samples, rather than all the available samples, particularly to avoid the over-fitting that arises from the usual all-possible-triplet mining strategy. Also, an episodic triplet loss function is introduced by the present disclosure, in place of the conventional prototypical loss function, to solve the convergence problem.
[031] In the context of the present disclosure, terms such as an anchor sample and a query sample may be used interchangeably based on the context; however, the terms refer to the sample for which the triplet has to be mined during the training. In the present disclosure, the term episode may be referred to as an iteration, and the few-shot model is built after training the model for the number of iterations.
[032] Referring now to the drawings, and more particularly to FIG.1 through FIG.3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary systems and/or methods.
[033] FIG.1 is an exemplary block diagram of a system 100 for building the semi-supervised few-shot model, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.
[034] The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface(s) 106 may include a variety of software and hardware

interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.
[035] The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For the purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems with one another or to another server computer. Further, the I/O interface(s) 106 may include one or more ports for connecting a number of devices to one another or to another server.
[036] The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[037] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines,

programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
[038] The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG.1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
[039] The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such external database may be periodically updated. For example, new data may be added into the external database and/or existing data may be modified and/or non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.
[040] Referring to FIG.2A through FIG.2C, components and functionalities of the system 100 are described in accordance with an example

embodiment of the present disclosure. For example, FIG.2A through FIG.2C illustrate exemplary flow diagrams of a processor-implemented method 200 for building the semi-supervised few-shot model, in accordance with some embodiments of the present disclosure. Although steps of the method 200 including process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently.
[041] At step 202 of the method 200, the one or more hardware processors 104 of the system 100 are configured to receive a labeled dataset and an unlabeled dataset. The labeled dataset includes a predefined number of labeled samples associated with each unique class of a predefined number of unique classes. For example, x labeled samples associated with each unique class of c unique classes resemble an x-shot c-way setup. The unlabeled dataset includes a plurality of unlabeled samples. In an embodiment, the plurality of unlabeled samples includes a plurality of weakly labeled samples and a plurality of completely unlabeled samples. The plurality of completely unlabeled samples are the samples that do not contain any labels. The plurality of weakly labeled samples are the samples where the exact label is not present for each sample; however, some set of the weakly labeled samples out of the plurality of weakly labeled samples may be associated with one or more unique classes of the predefined number of unique classes.
[042] Each sample present in the labeled dataset and the unlabeled dataset may be associated with a type of application such as a classification task. For example, each sample present in the labeled dataset and the unlabeled dataset may be an audio sample in case of the audio event classification task. In an embodiment, the labeled dataset and the unlabeled dataset may be received from a user. In an

embodiment, the labeled dataset and the unlabeled dataset may be stored in the repository 102b present in the system 100.
[043] At step 204 of the method 200, the one or more hardware processors 104 of the system 100 are configured to train a machine learning model for a predefined number of episodes to obtain a pre-trained few-shot model. A plurality of triplets may be formed for each episode of the plurality of episodes. A triplet loss is calculated for each episode and based on the triplet loss the machine learning model is trained in a successive episode of the plurality of episodes.
[044] The plurality of triplets for each episode are mined by using the labeled samples present in the labeled dataset. This is referred to as episodic triplet mining. At step 204a of the method 200, the one or more hardware processors 104 of the system 100 are configured to randomly select a first set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a support set and a query set for the corresponding episode. In an embodiment, the first set of unique classes selected for one episode may be different from another episode.
[045] The support set present in each episode includes a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes. The query set present in each episode includes a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes. For example, if n labeled samples are chosen in the first set of labeled samples for each unique class of the k unique classes present in the first set of unique classes, then the support set S = {(x1, y1), (x2, y2), ..., (xNs, yNs)}, where Ns = n × k is the number of support samples. Similarly, the query set Q = {(x1, y1), (x2, y2), ..., (xNq, yNq)}, where Nq is the number of query samples.
[046] In an embodiment, the labeled samples present in the support set and the query set are distinct. For example, the labeled sample present in the support set may not be present in the query set.
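By way of illustration only, a minimal sketch of how the support and query sets of step 204a may be formed for one episode; the dictionary representation of the labeled dataset and the name form_episode are assumptions made for this sketch, not part of the disclosure.

    import random

    def form_episode(labeled_dataset, num_classes, num_support, num_query):
        """Randomly select classes and split their labeled samples into a
        disjoint support set and query set for one episode.

        labeled_dataset: dict mapping each unique class to its labeled samples.
        """
        episode_classes = random.sample(list(labeled_dataset), num_classes)
        support_set, query_set = [], []
        for cls in episode_classes:
            samples = random.sample(labeled_dataset[cls], num_support + num_query)
            # The first chunk goes to the support set, the rest to the query set,
            # so the two sets stay distinct within the episode.
            support_set += [(s, cls) for s in samples[:num_support]]
            query_set += [(s, cls) for s in samples[num_support:]]
        return support_set, query_set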
[047] At step 204b of the method 200, the one or more hardware processors 104 of the system 100 are configured to form a triplet for each query

sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode. The triplet for each query sample is formed by using the set of support samples present in the support set. The triplet for each query sample includes the associated query sample, one or more positive support samples and one or more negative support samples. The associated query sample present in the triplet may be termed the anchor sample.
[048] In an embodiment, the one or more positive support samples present in each triplet are obtained by identifying one or more support samples out of the set of support samples present in the support set. Each support sample identified out of the one or more support samples has the unique class which is the same as that of the associated query sample present in the triplet. Similarly, the one or more negative support samples present in each triplet are obtained by identifying one or more support samples out of the set of support samples present in the support set. Each support sample identified out of the one or more support samples has the unique class which is different from that of the associated query sample present in the triplet.
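Continuing the same illustrative sketch, step 204b may be expressed as follows; the helper reuses the (sample, class) tuples produced by form_episode above and is purely an assumption for illustration.

    def form_triplets(support_set, query_set):
        """For each query (anchor) sample, collect the support samples of the same
        class as positives and the remaining support samples as negatives."""
        triplets = []
        for query_sample, query_class in query_set:
            positives = [s for s, c in support_set if c == query_class]
            negatives = [s for s, c in support_set if c != query_class]
            triplets.append((query_sample, positives, negatives))
        return triplets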
[049] The machine learning model is trained with the plurality of triplets present in each episode, for the predefined number of episodes. For example, the predefined number of episodes may be ‘30’. Training the machine learning model with the plurality of triplets for each episode is explained in the sub-steps below. First, an embedding is computed for each query sample, each positive support sample of the one or more positive support samples and each negative support sample of the one or more negative support samples, present in each triplet of the plurality of triplets. An embedding model present in the machine learning model is used for computing the embedding for each sample present in each triplet of the plurality of triplets. The embedding model works as a feature extractor that extracts the features for each sample. So the embedding for each sample includes one or more features associated with the sample. The embedding model includes an embedding function parameterized with initial weights.
[050] Then, a Euclidean distance between the embedding of the query sample and the embedding of each positive support sample of the one or more

positive support samples present in each triplet is calculated. Similarly, the Euclidean distance between the embedding of the query sample and the embedding of each negative support sample of the one or more negative support samples present in each triplet is calculated. A second set of positive support samples from the one or more positive support samples, in each triplet, are identified based on the associated Euclidean distances. The second set of positive support samples are identified, in each triplet, such that the Euclidean distance between the embedding of the query sample and the embedding of each positive support sample of the second set of positive support samples, is largest. The positive support samples present in the second set of positive support samples are referred to as semi-hard positive support samples.
[051] Similarly, a second set of negative support samples from the one or more negative support samples, in each triplet, are identified based on the associated Euclidean distances. The second set of negative support samples are identified, in each triplet, such that the Euclidean distance between the embedding of the query sample and the embedding of each negative support sample of the second set of negative support samples, is smallest. The negative support samples present in the second set of negative support samples are referred to as semi-hard negative support samples. Overall, the one or more positive support samples that are farthest from the query sample present in the associated triplet are identified to form the second set of positive support samples. Similarly, the one or more negative support samples that are closest to the query sample present in the associated triplet are identified to form the second set of negative support samples.
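By way of illustration only, a sketch of the semi-hard selection just described, assuming NumPy embedding vectors and an illustrative hyper-parameter num_semi_hard that is not fixed by the disclosure; the returned averages correspond to the distances Dy and Dz used by the loss defined in the following paragraphs.

    import numpy as np

    def mine_semi_hard(query_emb, positive_embs, negative_embs, num_semi_hard=2):
        """Return the average distances Dy and Dz over the semi-hard samples:
        positives farthest from the query and negatives closest to the query."""
        pos_dist = np.linalg.norm(np.asarray(positive_embs) - query_emb, axis=1)
        neg_dist = np.linalg.norm(np.asarray(negative_embs) - query_emb, axis=1)
        hard_pos = np.argsort(pos_dist)[-num_semi_hard:]   # largest positive distances
        hard_neg = np.argsort(neg_dist)[:num_semi_hard]    # smallest negative distances
        d_y = pos_dist[hard_pos].mean()   # average distance to semi-hard positives
        d_z = neg_dist[hard_neg].mean()   # average distance to semi-hard negatives
        return d_y, d_z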
[052] Then, a triplet loss for each triplet is determined based on (i) an average Euclidean distance (Dy) associated with the second set of positive support samples, (ii) an average Euclidean distance (Dz) associated with the second set of negative support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h). In an embodiment, the average Euclidean distance (Dy) associated with the second set of positive support samples is obtained by calculating an average of the Euclidean distances associated with each positive support sample of the second set of positive support samples, with respect to the

query sample. Similarly, the average Euclidean distance (Dz) associated with the second set of negative support samples is obtained by calculating an average of the Euclidean distances associated with each negative support sample of the second set of negative support samples, with respect to the query sample. The predefined hyper-parameter (m) indicates a margin; for example, the value of m is ‘0.3’. The predefined triplet loss value (h) indicates a default triplet loss value, which may be, for example, ‘0’ (zero). In an embodiment, the triplet loss for each triplet is calculated using an episodic triplet loss function defined as the maximum value out of: (i) Dy - Dz + m and (ii) the predefined triplet loss value (h).
Episodic triplet loss function = maximum of (Dy - Dz +m, h)
[053] Further, the triplet loss for the corresponding episode is calculated by adding the triplet loss for each triplet of the plurality of triplets present in the corresponding episode. Then, based on the triplet loss for the corresponding episode, the weights of the embedding model are updated using a backpropagation algorithm, to train the machine learning model for a successive episode. The backpropagation algorithm may be a supervised learning algorithm for training artificial neural networks using gradient descent, which calculates the amount of change required in the weights of each layer. The embedding function of the embedding model projects each sample present in each triplet to an M-dimensional embedding in the latent space, and then the embedding function is optimized to reduce the distance between the embeddings of the query sample and the support samples belonging to the same class, and to increase the distance between the query sample and the support samples belonging to different classes. The embedding function is optimized with each triplet such that Dz - Dy > m. The pre-trained few-shot model is obtained after training the machine learning model for the predefined number of episodes.
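Combining the above, a condensed sketch of the episodic triplet loss and one pre-training episode; the choice of PyTorch, the Adam optimizer, the tensor representation of the samples, and the value k of semi-hard samples retained are assumptions made only for illustration.

    import torch

    def episode_loss(embedding_model, triplets, margin=0.3, default_loss=0.0, k=2):
        """Episodic triplet loss: the sum of max(Dy - Dz + m, h) over the episode's triplets."""
        total = torch.zeros(())
        for query, positives, negatives in triplets:
            q = embedding_model(query.unsqueeze(0))          # (1, M) query embedding
            p = embedding_model(torch.stack(positives))      # (P, M) positive embeddings
            n = embedding_model(torch.stack(negatives))      # (N, M) negative embeddings
            pos_dist = torch.cdist(q, p).squeeze(0)          # distances query -> positives
            neg_dist = torch.cdist(q, n).squeeze(0)          # distances query -> negatives
            # Semi-hard mining: farthest positives (Dy), closest negatives (Dz).
            d_y = pos_dist.topk(min(k, pos_dist.numel())).values.mean()
            d_z = neg_dist.topk(min(k, neg_dist.numel()), largest=False).values.mean()
            total = total + torch.clamp(d_y - d_z + margin, min=default_loss)
        return total

    # One episode of pre-training (step 204): form the support/query sets, mine the
    # triplets, compute the episodic triplet loss, and backpropagate.
    # optimizer = torch.optim.Adam(embedding_model.parameters(), lr=1e-3)
    # loss = episode_loss(embedding_model, triplets)
    # optimizer.zero_grad(); loss.backward(); optimizer.step()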
[054] At step 206 of the method 200, the one or more hardware processors 104 of the system 100 are configured to re-train the pre-trained few-shot model for a predefined number of training episodes to build the semi-supervised few-shot model. A plurality of training triplets may be formed for each training episode of the plurality of training episodes. A training triplet loss is calculated for each training

episode and based on the training triplet loss, the pre-trained few-shot model is re-trained in a successive training episode of the plurality of training episodes.
[055] The plurality of training triplets for each training episode are mined by using the labeled samples present in the labeled dataset as well as the unlabeled samples present in the unlabeled dataset. The pre-trained few-shot model obtained at step 204 is used to assign the class (labels) for each unlabeled sample present in the unlabeled dataset. The labeled samples present in the labeled dataset and the unlabeled samples along with assigned unique classes are combined to re-train the pre-trained few-shot model obtained at step 204, to build the semi-supervised few-shot model.
[056] For obtaining the plurality of triplets for each training episode, at step 206a of the method 200, the one or more hardware processors 104 of the system 100 are configured to randomly select a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode. In an embodiment, the second set of unique classes selected for one training episode may be different from another training episode.
[057] The training support set present in each training episode includes a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class of the second set of unique classes. The training query set present in each training episode includes a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes. The third set of labeled samples for each unique class of the second set of unique classes present in the training support set is distinct from the fourth set of labeled samples for each unique class of the second set of unique classes present in the training query set. This means the labeled samples present in the training support set may not be present in the training query set.
[058] At step 206b of the method 200, the one or more hardware processors 104 of the system 100 are configured to randomly select a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled

dataset. In an embodiment, if the unlabeled dataset includes only the plurality of weakly labeled samples, then the first set of unlabeled samples may be randomly selected from the plurality of weakly labeled samples. In another embodiment, if the unlabeled dataset includes only the plurality of completely unlabeled samples, then the first set of unlabeled samples may be randomly selected from the plurality of completely unlabeled samples. In yet another embodiment, if the unlabeled dataset includes both the plurality of weakly labeled samples and the plurality of completely unlabeled samples, then the first set of unlabeled samples may be randomly selected either from the plurality of weakly labeled samples or from the plurality of completely unlabeled samples, at a time.
[059] At step 206c of the method 200, the one or more hardware processors 104 of the system 100 are configured to assign the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set. The pre-trained few-shot model obtained at step 204 of the method 200 is used to infer or assign the unique class for each unlabeled sample of the first set of unlabeled samples. In an embodiment, a pseudo-labeling technique is employed for assigning the unique class to each unlabeled sample of the first set of unlabeled samples.
[060] To assign the unique class for each unlabeled sample of the first set of unlabeled samples, an embedding is computed for the corresponding unlabeled sample whose unique class is to be assigned or inferred, and for each training support sample of the set of training support samples, using the embedding model of the pre-trained few-shot model obtained at step 204 of the method 200. Then, a second set of training support samples is selected from the set of training support samples, such that the embedding of each training support sample of the second set of training support samples is closest to the embedding of the corresponding unlabeled sample. Next, the unique class present in majority within the second set of training support samples is identified. The identified unique class is then assigned to the corresponding unlabeled sample. Once the unique class is assigned to each unlabeled sample of the first set of unlabeled samples, the unlabeled samples present in the first set of unlabeled samples become labeled samples.
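By way of illustration only, a minimal sketch of this pseudo-labeling step, assuming NumPy embeddings and an illustrative neighbourhood size k.

    import numpy as np
    from collections import Counter

    def pseudo_label(unlabeled_emb, support_embs, support_classes, k=5):
        """Assign the class in the majority among the k training support samples
        whose embeddings are closest to the unlabeled sample's embedding."""
        distances = np.linalg.norm(np.asarray(support_embs) - unlabeled_emb, axis=1)
        nearest = np.argsort(distances)[:k]                 # k closest support samples
        votes = Counter(support_classes[i] for i in nearest)
        return votes.most_common(1)[0][0]                   # majority class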

[061] At step 206d of the method 200, the one or more hardware processors 104 of the system 100 are configured to add the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples. The second set of training support samples present in the revised training support set includes all the labeled samples.
[062] At step 206e of the method 200, the one or more hardware processors 104 of the system 100 are configured to form a training triplet for each training query sample of the set of training query samples present in the training query set, to obtain the plurality of training triplets for each training episode of the plurality of training episodes. The training triplet for each training query sample is formed by using the second set of training support samples present in the revised training support set. The training triplet for each training query sample includes the associated training query sample, one or more positive training support samples and one or more negative training support samples. The associated training query sample present in each training triplet may be termed a training anchor sample.
[063] In an embodiment, the one or more positive training support samples present in each training triplet are obtained by identifying one or more training support samples out of the second set of training support samples present in the revised training support set. Each training support sample identified out of the one or more training support samples has the unique class which is the same as that of the associated training query sample present in the training triplet. Similarly, the one or more negative training support samples present in each training triplet are obtained by identifying one or more training support samples out of the second set of training support samples present in the revised training support set. Each training support sample identified out of the one or more training support samples has the unique class which is different from that of the associated training query sample present in the training triplet.
[064] The pre-trained few-shot model obtained at step 204 of the method 200 is re-trained with the plurality of training triplets present in each training

episode of the predefined number of training episodes, to build the semi-supervised few-shot model. For example, the predefined number of training episodes may be ‘1000’. Re-training the pre-trained few-shot model obtained at step 204 of the method 200, with the plurality of training triplets present in each training episode, is explained in the sub-steps below.
[065] First, the embedding is computed for each training query sample, each positive training support sample of the one or more positive training support samples and each negative training support sample of the one or more negative training support samples, present in each training triplet of the plurality of training triplets. The embedding model present in the pre-trained few-shot model obtained at step 204 of the method 200 is used for computing the embedding for each sample present in each training triplet of the plurality of training triplets.
[066] Then, the Euclidean distance between the embedding of the training query sample and the embedding of each positive training support sample of the one or more positive support samples present in each training triplet is calculated. Similarly, the Euclidean distance between the embedding of the training query sample and the embedding of each negative training support sample of the one or more negative training support samples present in each training triplet is calculated. A second set of positive training support samples from the one or more positive training support samples, in each training triplet, is identified based on the associated Euclidean distances. The second set of positive training support samples is identified, in each training triplet, such that the Euclidean distance between the embedding of the training query sample and the embedding of each positive training support sample of the second set of positive training support samples, is largest. The positive training support samples present in the second set of positive training support samples are referred to as semi hard positive support samples.
[067] Similarly, a second set of negative training support samples from the one or more negative training support samples, in each training triplet, is identified based on the associated Euclidean distances. The second set of negative training support samples is identified, in each training triplet, such that the Euclidean distance between the embedding of the training query sample and the embedding of each negative training support sample of the second set of negative training support samples, is smallest. The negative training support samples present in the second set of negative training support samples are referred to as semi hard negative support samples. Overall, the one or more positive training support samples that are farthest from the training query sample present in the associated training triplet are identified to form the second set of positive training support samples. Similarly, the one or more negative training support samples that are nearest to the training query sample present in the associated training triplet are identified to form the second set of negative training support samples.
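A hedged NumPy sketch of the semi hard selection described in paragraphs [066]-[067]: the positives farthest from the anchor and the negatives nearest to it are retained. The counts `n_pos=3` and `n_neg=5` follow the experimental setting mentioned later in paragraph [077], and the embeddings are assumed to be NumPy arrays:

```python
import numpy as np

def mine_semi_hard(anchor_emb, pos_embs, neg_embs, n_pos=3, n_neg=5):
    """Keep the n_pos positives farthest from the anchor (semi hard positives)
    and the n_neg negatives nearest to the anchor (semi hard negatives)."""
    d_pos = np.linalg.norm(pos_embs - anchor_emb, axis=1)
    d_neg = np.linalg.norm(neg_embs - anchor_emb, axis=1)
    semi_hard_pos = pos_embs[np.argsort(d_pos)[-n_pos:]]   # largest distances
    semi_hard_neg = neg_embs[np.argsort(d_neg)[:n_neg]]    # smallest distances
    return semi_hard_pos, semi_hard_neg
```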
[068] Then, a training triplet loss for each training triplet is determined based on (i) an average Euclidean distance (Eb) associated with the second set of positive training support samples, (ii) an average Euclidean distance (Eg) associated with the second set of negative training support samples, (iii) the predefined hyper-parameter (m), and (iv) the predefined triplet loss value (h). In an embodiment, the average Euclidean distance (Eb) associated with the second set of positive training support samples is obtained by calculating an average of the Euclidean distances associated with each positive training support sample of the second set of positive training support samples, with respect to the training query sample. Similarly, the average Euclidean distance (Eg) associated with the second set of negative training support samples is obtained by calculating an average of the Euclidean distances associated with each negative training support sample of the second set of negative training support samples, with respect to the training query sample. The predefined hyper-parameter (m) indicates a margin, for example, a value of m is ‘0.3’. The predefined triplet loss value (h) indicates a default triplet loss value which may be, for example, ‘0’ (zero). In an embodiment, the training triplet loss for each training triplet is calculated using an episodic triplet loss function defined as the maximum of (i) Eb − Eg + m and (ii) the predefined triplet loss value (h):

Episodic triplet loss = max(Eb − Eg + m, h)
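The episodic triplet loss of paragraph [068] can be written compactly as follows; this is a sketch in which `anchor_emb`, `semi_hard_pos` and `semi_hard_neg` are assumed to be the anchor embedding and the semi hard positive and negative embeddings mined above:

```python
import numpy as np

def episodic_triplet_loss(anchor_emb, semi_hard_pos, semi_hard_neg, m=0.3, h=0.0):
    """Episodic triplet loss for one triplet: max(Eb - Eg + m, h), where Eb and Eg
    are the average Euclidean distances to the semi hard positives and negatives."""
    Eb = np.mean(np.linalg.norm(semi_hard_pos - anchor_emb, axis=1))
    Eg = np.mean(np.linalg.norm(semi_hard_neg - anchor_emb, axis=1))
    return max(Eb - Eg + m, h)
```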
[069] Further, the training triplet loss for the corresponding training episode is calculated by adding the training triplet loss for each training triplet of the plurality of training triplets present in the corresponding training episode. Then, based on the training triplet loss for the corresponding training episode, the weights of the embedding model of the pre-trained few-shot model are updated using backpropagation, to re-train the pre-trained few-shot model for a successive training episode. The embedding function of the embedding model projects each sample present in each training triplet to an M-dimensional embedding in the latent space, and the embedding function is then optimized to reduce the distance between the embeddings of the training query sample and the training support samples belonging to the same class, and to increase the distance between the training query sample and the training support samples belonging to different classes. The embedding function is optimized with each training triplet such that Eg − Eb > m. The semi-supervised few-shot model is obtained after re-training the pre-trained few-shot model for the predefined number of training episodes.
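A minimal TensorFlow/Keras sketch of one re-training episode under these definitions is given below, assuming `embedding_model` is a `tf.keras.Model`, `triplets` already holds the mined (anchor, semi hard positives, semi hard negatives) tensors for the episode, and the loss per triplet is max(Eb − Eg + m, h); this is illustrative, not the exact implementation of the disclosure:

```python
import tensorflow as tf

def train_one_episode(embedding_model, optimizer, triplets, m=0.3, h=0.0):
    """Sum the episodic triplet loss over all triplets of one training episode and
    update the embedding-model weights once by backpropagation."""
    with tf.GradientTape() as tape:
        episode_loss = 0.0
        for anchor, positives, negatives in triplets:
            a = embedding_model(anchor[tf.newaxis])[0]   # anchor (training query) embedding
            p = embedding_model(positives)               # semi hard positive embeddings
            n = embedding_model(negatives)               # semi hard negative embeddings
            Eb = tf.reduce_mean(tf.norm(p - a, axis=1))
            Eg = tf.reduce_mean(tf.norm(n - a, axis=1))
            episode_loss += tf.maximum(Eb - Eg + m, h)
    grads = tape.gradient(episode_loss, embedding_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, embedding_model.trainable_variables))
    return float(episode_loss)
```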
[070] In accordance with the present disclosure, the methods and systems make use of both the labeled samples and the unlabeled samples (after assigning the labels using the pre-trained few-shot model) during the training. Hence the few-shot model built in the present disclosure is efficient in few-shot classification and related applications. Since only the semi hard positive samples that are farthest from the anchor sample (query sample) and the semi hard negative support samples that are nearest to the anchor sample (query sample) are identified for each triplet while training, the problem of over-fitting may not occur. In the episodic triplet mining of the present disclosure, since the anchor sample (query sample) interacts with all the negative support samples present in the triplet, the episodic triplet loss function may converge quickly and the updates at each episode may be stable.
[071] In accordance with the present disclosure, though the methods and systems are provided to build the few-shot model with the semi-supervised learning approach, a version of the few-shot model of the present disclosure (the pre-trained few-shot model) may work as a supervised few-shot model when only the labeled dataset is available while training the model. Hence the present disclosure may hold good for building both the supervised few-shot model and the semi-supervised few-shot model.
Example scenario:

[072] To validate the few-shot model of the present disclosure, a set of experiments is conducted for three different tasks, namely, (1) Image Character Recognition (Omniglot dataset), (2) Speaker Recognition task (CSTR Voice Cloning Toolkit (VCTK) corpus), and (3) Audio Event Classification (Freesound Dataset, 2018 (FSD)).
[073] Character Recognition (from images) task: Omniglot is a handwritten character recognition dataset with 1623 unique characters derived from 50 alphabets. Each character was drawn by 20 people. Each image is augmented by rotating it by 90, 180 and 270 degrees, which yields 6492 unique classes. The classes are distributed into train, test and validation sets of 4112, 1692 and 688 classes, respectively. To extract features, convolution blocks stacked on top of each other are used, wherein each convolution block consists of a 2D convolution layer, followed by max-pooling and ReLU activation. Each convolution layer has 64 filters and a stride of size 2. As a result, the output of the embedding model is a 64-dimensional embedding for each sample. For the semi-supervised learning experiments, 10% of the samples from each class are randomly selected within the training data to obtain the labeled train set. The rest of the samples are used to obtain the unlabeled train set. In episode triplet construction, the number of support samples for each class (NS) is selected as 10, the number of query samples (NQ) is selected as 15 and the number of unique classes (NC) is selected as 15.
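A possible Keras realization of the embedding model described for the character recognition task is sketched below; the number of convolution blocks, the kernel size and the input image size are assumptions, since only the filter count (64), the stride (2), the max-pooling/ReLU structure and the 64-dimensional output are stated:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_omniglot_embedding(input_shape=(28, 28, 1), num_blocks=4):
    """Stacked convolution blocks (Conv2D with 64 filters and stride 2, max-pooling,
    ReLU); the 64 channels of the final feature map give the 64-dimensional embedding."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_blocks):
        x = layers.Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
        x = layers.MaxPooling2D(pool_size=2, padding="same")(x)
        x = layers.ReLU()(x)
    outputs = layers.GlobalAveragePooling2D()(x)    # 64-dimensional embedding per sample
    return tf.keras.Model(inputs, outputs)
```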
[074] Speaker Recognition task: The VCTK corpus is an English multi-speaker dataset, with 44 hours of audio spoken by 109 native English speakers. The dataset is divided into a 70:20:10 random train-test-validation split, such that the sets of speakers in the train, test and validation sets are completely disjoint. Each audio is downsampled to 16 kHz and split into audio segments of 3 seconds each. Mel-spectrograms are extracted as an initial feature from each segment and used as an input to the embedding model. The embedding model is constructed using two layers of 1-D convolutions, each with a kernel of size 3 and with 128 filters. The use of 1-D convolution helps learn the temporal contexts between adjacent frames. Each convolution layer is followed by a max-pooling layer with a kernel of size 3. Additionally, batch normalization is performed over the output of each convolution layer.
[075] A multi-head self-attention mechanism is applied over the output of the second convolution layer and the output of each head is averaged to obtain a 128-dimensional embedding. The semi-supervised learning experiments are conducted considering the availability of two different portions of the training data as the labeled train set: (a) 1/3rd (33.33%) of the samples of each speaker present in the training data and (b) 2/3rd (66.67%) of the samples of each speaker present in the training data. In both (a) and (b), the remaining samples from the training data are used as the unlabeled train set. In episode triplet construction, the number of support samples for each class (NS) is selected as 20, the number of query samples (NQ) is selected as 15 and the number of unique classes (NC) is selected as 5, to give the best performance across all testing configurations.
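A hedged Keras sketch of the speaker embedding model described in paragraphs [074]-[075] follows; the mel-spectrogram input shape and the number of attention heads are assumptions, and the built-in `MultiHeadAttention` layer followed by average pooling over time is used only as a stand-in for the per-head averaging described:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_speaker_embedding(n_frames=300, n_mels=80, num_heads=4):
    """Two 1-D convolution blocks (128 filters, kernel 3, max-pooling of size 3,
    batch normalization) over mel-spectrogram frames, followed by multi-head
    self-attention and averaging over time into a 128-dimensional embedding."""
    inputs = tf.keras.Input(shape=(n_frames, n_mels))
    x = inputs
    for _ in range(2):
        x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=3)(x)
        x = layers.BatchNormalization()(x)
    x = layers.MultiHeadAttention(num_heads=num_heads, key_dim=128 // num_heads)(x, x)
    outputs = layers.GlobalAveragePooling1D()(x)    # 128-dimensional utterance embedding
    return tf.keras.Model(inputs, outputs)
```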
[076] Audio Event Classification task: The Freesound dataset (2018) consists of 18,873 audio files, wherein each audio file is assigned one of 41 unique audio events from the Audioset Ontology of Google. In each of the 3 folds of the experiment, 10 classes are randomly selected with all their corresponding audio files as the test set, and the remaining classes are split into train classes and validation classes in a 90:10 ratio, such that the audio events in the train, test and validation sets are always disjoint. All audio files are downsampled to 16 kHz and split into 1-second chunks. A VGGish architecture is used as the embedding model, which outputs a 128-dimensional embedding for each chunk of audio. For the semi-supervised learning experiments, 50% of the samples are randomly selected from each audio event within the training data to obtain the labeled training set. The rest of the samples are used to obtain the unlabeled train set. The test set and validation set for the supervised and semi-supervised experiments for each task are kept fixed. In episode triplet construction, the number of support samples for each class (NS) is selected as 10, the number of query samples (NQ) is selected as 5 and the number of unique classes (NC) is selected as 5.
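For completeness, a small sketch of episode construction with the NC/NS/NQ values quoted above; `data_by_class` is a hypothetical dictionary mapping each unique class to its list of samples, and uniform random sampling is assumed:

```python
import numpy as np

def sample_episode(data_by_class, n_classes=5, n_support=10, n_query=5, rng=None):
    """Randomly select n_classes classes, then n_support support samples and
    n_query query samples per class, to form one episode (NC/NS/NQ construction)."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(data_by_class), size=n_classes, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(len(data_by_class[c]))
        support += [(data_by_class[c][i], c) for i in idx[:n_support]]
        query += [(data_by_class[c][i], c) for i in idx[n_support:n_support + n_query]]
    return support, query
```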
[077] In all the three tasks, the predefined hyper-parameter (m) is taken as ‘0.3’. Also, a single model is trained and tested against all test configurations (5-way 1-shot, 5-way 5-shot, and so on). For all the three tasks, the number of hard positive support samples is considered as 3 and the number of hard negative support samples is considered as 5. For the character recognition task, the model is trained for 5000 episodes. For the speaker recognition and audio event classification tasks, the model is trained for 10,000 episodes. An Adam optimizer is used with an initial learning rate of 10^-3, which is reduced by half after every 1000 episodes.
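The optimizer setting described above corresponds, under the assumption that one optimizer step is taken per episode, to the following Keras configuration (a sketch, not necessarily the exact implementation used):

```python
import tensorflow as tf

# Adam with an initial learning rate of 1e-3, halved after every 1000 episodes,
# assuming one optimizer step per training episode.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=1000,
    decay_rate=0.5,
    staircase=True,            # halve in discrete steps rather than continuously
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```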
[078] While testing, the number of query samples (NQ) is considered as 15 and the average accuracy over 1000 test episodes is reported. For the semi-supervised setup, the pre-trained few-shot model is trained for 20, 50 and 50 episodes for character recognition, speaker recognition and audio event classification, respectively. The number of unlabeled samples per class in each episode is varied from 1 to 5. A testing setup similar to the supervised setup is employed, keeping the test set unchanged. All the experiments are conducted using Keras (TensorFlow backend) on an Nvidia Tesla K40 GPU.
Performance results analysis:
[079] The performance of all the experiments is evaluated in terms of the average accuracy and variance across all test episodes. Table 1 shows the performance of the character recognition task for one-shot learning when 1/10th of the training data is labeled. The semi-supervised adaptation of the present disclosure using weakly labeled data outperforms the conventional methods by an absolute 0.24%. Also, choosing random samples in a completely unlabeled manner performs competitively.

Table 1: Character recognition task, one-shot learning, with 1/10th of the training data labeled

Method                                                                    Accuracy
Supervised (meta-learning for semi-supervised few-shot classification)    94.62 ± 0.09
Supervised (present disclosure)                                           93.98 ± 0.08
Soft k-means + cluster                                                    97.68 ± 0.07
Semi-supervised: weakly labeled (present disclosure)                      97.92 ± 0.11
Semi-supervised: completely unlabeled (present disclosure)                97.88 ± 0.07

[080] Table 2 shows results for the few-shot speaker recognition task on the entire training data. The disclosed episodic triplet loss approach is compared with the prototypical loss approach, using the same embedding model. The present disclosure surpasses the accuracy of the model trained using the prototypical loss in the 5-way 1-shot, 20-way 1-shot and 20-way 5-shot setups. Especially in the 20-way 1-shot and 5-way 1-shot setups, the present disclosure outperforms the model trained using the prototypical loss with an identical embedding model by an absolute 11.17% and 3.24%, respectively. This shows the effectiveness of the present disclosure especially for extreme few-shot learning such as one-shot.

Table 2: Few-shot speaker recognition with 100% labeled training data

Loss function                                5-way 1-shot   5-way 5-shot   20-way 1-shot   20-way 5-shot
Prototypical loss                            83.42          96.88          56.23           78.47
Episodic triplet loss (present disclosure)   86.66          93.04          67.40           79.15
[081] Table 3 shows the performance of the speaker recognition task with semi-supervised learning using weakly labeled and completely unlabeled data, where only 1/3rd of the training data is labeled. The present disclosure achieves significantly better results over the baseline (which performs supervised training using 1/3rd of the training data), thereby reducing the gap with respect to the top-line across all four setups by an average of 11.45%.

Table 3: Speaker recognition with semi-supervised learning, 1/3rd of the training data labeled

Method                                  5-way 1-shot   5-way 5-shot   20-way 1-shot   20-way 5-shot
Supervised (baseline)                   76.13          87.90          49.66           65.45
Semi-supervised: weakly labeled         85.88          91.37          64.44           76.90
Semi-supervised: completely unlabeled   85.41          92.13          63.55           75.39
Top-line (100% labeled data)            86.66          93.04          67.40           79.15

[082] Table 4 shows the performance of the speaker recognition task with semi-supervised learning using weakly labeled and completely unlabeled data, where 2/3rd of the training data is labeled. In this scenario also, the present disclosure achieves an average increase of 1.66% across all four setups. Specifically for 5-way 1-shot, the present disclosure achieves an absolute improvement of 0.39% over the model trained in a supervised manner using 100% labeled training data. The pseudo-labeling technique is an iterative process which, instead of diffusing the probability spread over multiple classes, assigns a high probability towards one particular class. This helps by reducing the density (or entropy) around the decision boundaries.

Table 4: Speaker recognition with semi-supervised learning using weakly labeled and completely unlabeled data

Method                                  5-way 1-shot   5-way 5-shot   20-way 1-shot   20-way 5-shot
Supervised (baseline)                   84.12          92.14          63.53           76.32
Semi-supervised: weakly labeled         85.28          92.34          65.10           77.59
Semi-supervised: completely unlabeled   87.05          92.84          65.00           78.37
Top-line (100% labeled data)            86.66          93.04          67.40           79.15
[083] Table 5 shows the performance when using the episodic triplet loss and the prototypical loss for the audio event classification task for various one-shot scenarios. The present disclosure achieves an average improvement of 3.72% in terms of accuracy when training the model using the episodic triplet loss technique across all three one-shot scenarios.

Table 5: Audio event classification, episodic triplet loss vs. prototypical loss

Loss function            5-way 1-shot   7-way 1-shot   10-way 1-shot
Prototypical loss        75.33          72.95          63.33
Episodic triplet loss    78.53          75.04          69.40
[084] Table 6 shows the performance of the audio event classification task with semi-supervised learning using weakly labeled and completely unlabeled data with the episodic triplet loss technique. For both the weakly labeled and completely unlabeled experiments, 50% of the samples belonging to each class in the train set are used as the unlabeled training set and the remaining samples as the labeled training set.

Table 6: Audio event classification with semi-supervised learning, 50% of the training data labeled

Method                                  5-way 1-shot   7-way 1-shot   10-way 1-shot
Supervised (baseline)                   76.56          72.34          61.44
Semi-supervised: weakly labeled         79.20          73.34          69.24
Semi-supervised: completely unlabeled   77.73          73.66          68.33
Top-line (100% labeled data)            78.53          75.04          69.33
[085] FIG. 3 is a graph showing a performance comparison of the few-shot model of the present disclosure over the conventional few-shot model trained with a prototypical loss function, for 5-way 1-shot and 20-way 1-shot test setups, in a speaker recognition task, in accordance with some embodiments of the present disclosure. In FIG. 3, the performance of the disclosed few-shot model is compared with the model trained with the prototypical loss. Here, the 5-way 1-shot and 20-way 1-shot test setups are implemented for calculating the episodic triplet loss of the present disclosure with different training sample sizes (33% of labeled samples, 66% of labeled samples, and 100% of labeled samples) and the conventional prototypical loss with 100% of labeled samples. From the graph of FIG. 3, in the case of extreme few-shot test setups such as 20-way 1-shot and 5-way 1-shot, the accuracy of the model trained with the prototypical loss degrades drastically when compared with the few-shot model trained with the episodic triplet loss of the present disclosure.
[086] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[087] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[088] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[089] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims (when included in the specification), the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[090] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[091] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

WE CLAIM:
1. A processor-implemented method (200) for building a semi-supervised few-shot model, the method (200) comprising the steps of:
receiving, via one or more hardware processors, a labeled dataset and an unlabeled dataset, wherein the labeled dataset comprises a predefined number of labeled samples associated with each unique class of a predefined number of unique classes, and the unlabeled dataset comprises a plurality of unlabeled samples (202);
training, via the one or more hardware processors, a machine learning model with a plurality of triplets for each episode of a predefined number of episodes, using a triplet loss for the corresponding episode, to obtain a pre-trained few-shot model (204), wherein the plurality of triplets for each episode are obtained by:
randomly selecting a first set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a support set and a query set for the corresponding episode, wherein the support set comprises a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes, and the query set comprises a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes (204a); and
forming a triplet for each query sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode, using the set of support samples present in the support set, wherein the triplet for each query sample comprises the associated query sample, one or more positive support samples and one or more negative support samples (204b); and
re-training, via the one or more hardware processors, the pre-trained few-shot model with a plurality of training triplets for each training episode of a predefined number of training episodes, using a training triplet loss for the corresponding training episode, to build the semi-supervised few-shot model (206), wherein the plurality of training triplets for each training episode are obtained by:
randomly selecting a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode, wherein the training support set comprises a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class of the second set of unique classes, and the training query set comprises a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes (206a);
randomly selecting a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled dataset (206b);
assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, to obtain corresponding labeled samples for the first set of unlabeled samples (206c);
adding the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples (206d); and
forming a training triplet for each training query sample of the set of training query samples present in the training query set, to obtain the plurality of training triplets for the corresponding training episode, using the second set of training support samples present in the revised training support set, wherein the training triplet for each training query sample comprises the associated training query sample, one or more positive training support samples and one or more negative training support samples (206e).
2. The method as claimed in claim 1, wherein the labeled samples present in the support set and the query set are distinct, and the labeled samples present in the training support set and the training query set are distinct.
3. The method as claimed in claim 1, wherein the one or more positive support samples for each triplet are obtained by identifying one or more support samples out of the set of support samples present in the support set, where each support sample of the one or more support samples having the unique class same as that of the associated query sample present in the triplet, and the one or more negative support samples for each triplet are obtained by identifying one or more support samples out of the set of support samples present in the support set, where each support sample of the one or more support samples having the unique class different as that of the associated query sample present in the triplet.
4. The method as claimed in claim 1, wherein the one or more positive training support samples for each training triplet are obtained by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples having the unique class same as that of the associated training query sample present in the training triplet, and the one or more negative training support samples for each training triplet are obtained by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples having the unique class different as that of the associated training query sample present in the training triplet.

5. The method as claimed in claim 1, wherein training the machine learning model with the plurality of triplets for each episode, comprises:
computing an embedding for each query sample, each positive support sample of the one or more positive support samples and each negative support sample of the one or more negative support samples, present in each triplet of the plurality of triplets, using an embedding model with initial weights present in the machine learning model;
finding a second set of positive support samples from the one or more positive support samples, in each triplet, wherein an Euclidean distance between the embedding of the query sample and the embedding of each positive support sample of the second set positive support samples, is largest;
finding a second set of negative support samples from the one or more negative support samples, in each triplet, wherein the Euclidean distance between the embedding of the query sample and the embedding of each negative support sample of the second set of negative support samples, is smallest;
determining a triplet loss for each triplet based on (i) an average Euclidean distance (Dy) associated with the second set of positive support samples, (ii) an average Euclidean distance (Dz) associated with the second set of negative support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h);
calculating the triplet loss for the corresponding episode, by adding the triplet loss for each triplet of the plurality of triplets present in the corresponding episode; and
updating the weights of the embedding model using a backpropagation, based on the triplet loss for the corresponding episode, to train the machine learning model for a successive episode.

6. The method as claimed in claim 5, wherein the average Euclidean distance (Dy) associated with the second set of positive support samples is obtained by calculating an average of the Euclidean distances associated with each positive support sample of the second set of positive support samples, with respect to the query sample.
7. The method as claimed in claim 5, wherein the average Euclidean distance (Dz) associated with the second set of negative support samples is obtained by calculating an average of the Euclidean distances associated with each negative support sample of the second set of negative support samples, with respect to the query sample.
8. The method as claimed in claim 5, wherein the triplet loss for each triplet is a maximum value out of: (i) Dy - Dz +m and (ii) the predefined triplet loss value (h).
9. The method as claimed in claim 1, wherein assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, comprises:
computing an embedding for the corresponding unlabeled sample, and each training support sample of the set of training support samples, using an embedding model of the pre-trained few-shot model;
finding a second set of training support samples from the set of support samples, wherein the embedding of each training support sample of the second set of training support sample is closest to the embedding of the corresponding unlabeled sample;
identifying the unique class present in majority within the second set of training support samples; and

assigning the identified unique class to the corresponding unlabeled sample.
10. The method as claimed in claim 1, wherein re-training the pre-trained few-shot model with the plurality of training triplets for each training episode, comprises:
computing an embedding for each training query sample, each positive training support sample of the one or more positive training support samples and each negative training support sample of the one or more negative training support samples, present in each training triplet of the plurality of training triplets, using an embedding model of the pre-trained few-shot model;
finding a second set of positive training support samples from the one or more positive training support samples, in each training triplet, wherein an Euclidean distance between the embedding of the training query sample and the embedding of each positive training support sample of the second set of positive training support samples, is largest;
finding a second set of negative training support samples from the one or more negative training support samples, in each training triplet, wherein the Euclidean distance between the embedding of the training query sample and the embedding of each negative training support sample of the second set of negative training support samples, is smallest;
determining a training triplet loss for each training triplet based on (i) an average training Euclidean distance (Eb) associated with the second set of positive training support samples, (ii) an average training Euclidean distance (Eg) associated with the second set of negative training support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h);

calculating the training triplet loss for the corresponding training episode, by adding the training triplet loss for each training triplet of the plurality of training triplets present in the corresponding training episode; and
updating weights of the embedding model using a backpropagation, based on the training triplet loss for the corresponding training episode, to re-train the pre-trained few-shot model for a successive training episode.
11. The method as claimed in claim 10, wherein the average training Euclidean distance (Eb) associated with the second set of positive training support samples, is obtained by calculating an average of the Euclidean distances associated with each positive training support sample of the second set of positive training support samples, with respect to the training query sample.
12. The method as claimed in claim 10, wherein the average training Euclidean distance (Eg) associated with the second set of negative training support samples, is obtained by calculating an average of the Euclidean distances associated with each negative training support sample of the second set of negative training support samples, with respect to the training query sample.
13. The method as claimed in claim 10, wherein the training triplet loss for each training triplet is a maximum value out of: (i) Eb - Eg +m and (ii) the predefined triplet loss value (h).
14. A system (100) for building a semi-supervised few-shot model, the system (100) comprising:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and

one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive a labeled dataset and an unlabeled dataset, wherein the labeled dataset comprises a predefined number of labeled samples associated with each unique class of a predefined number of unique classes, and the unlabeled dataset comprises a plurality of unlabeled samples;
train a machine learning model with a plurality of triplets for each episode of a predefined number of episodes, using a triplet loss for the corresponding episode, to obtain a pre-trained few-shot model, wherein the plurality of triplets for each episode are obtained by:
randomly selecting a first set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a support set and a query set for the corresponding episode, wherein the support set comprises a set of support samples and is formed by randomly choosing a first set of labeled samples for each unique class of the first set of unique classes, and the query set comprises a set of query samples and is formed by randomly choosing a second set of labeled samples for each unique class of the first set of unique classes; and
forming a triplet for each query sample of the set of query samples present in the query set, to obtain the plurality of triplets for the corresponding episode, using the set of support samples present in the support set, wherein the triplet for each query sample comprises the associated query sample, one or more positive support samples and one or more negative support samples; and
re-train the pre-trained few-shot model with a plurality of training triplets for each training episode of a predefined number of training episodes, using a training triplet loss for the corresponding training episode, to build the semi-supervised few-shot model, wherein the plurality of training triplets for each training episode are obtained by:

randomly selecting a second set of unique classes out of the predefined number of unique classes from the labeled dataset, to form a training support set and a training query set for the corresponding training episode, wherein the training support set comprises a set of training support samples and is formed by randomly choosing a third set of labeled samples for each unique class of the second set of unique classes, and the training query set comprises a set of training query samples and is formed by randomly choosing a fourth set of labeled samples for each unique class of the second set of unique classes;
randomly selecting a first set of unlabeled samples of the plurality of unlabeled samples present in the unlabeled dataset;
assigning the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, to obtain corresponding labeled samples for the first set of unlabeled samples;
adding the set of training support samples present in the training support set and the obtained labeled samples for the first set of unlabeled samples, to form a revised training support set with a second set of training support samples; and
forming a training triplet for each training query sample of the set of training query samples present in the training query set, to obtain the plurality of training triplets for the corresponding training episode, using the second set of training support samples present in the revised training support set, wherein the training triplet for each training query sample comprises the associated training query sample, one or more positive training support samples and one or more negative training support samples.

15. The system as claimed in claim 14, wherein the labeled samples present in the support set and the query set are distinct, and the labeled samples present in the training support set and the training query set are distinct.
16. The system as claimed in claim 14, wherein the one or more hardware processors (104) are further configured to:
obtain the one or more positive support samples for each triplet, by identifying one or more support samples out of the set of support samples present in the support set, where each support sample of the one or more support samples having the unique class same as that of the associated query sample present in the triplet; and
obtain the one or more negative support samples for each triplet, by identifying one or more support samples out of the set of support samples present in the support set, where each support sample of the one or more support samples having the unique class different as that of the associated query sample present in the triplet.
17. The system as claimed in claim 14, wherein the one or more hardware processors (104) are further configured to:
obtain the one or more positive training support samples for each training triplet, by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples having the unique class same as that of the associated training query sample present in the training triplet; and
obtain the one or more negative training support samples for each training triplet, by identifying one or more training support samples out of the set of training support samples present in the training support set, where each training support sample of the one or more training support samples having the unique class different as that of the associated training query sample present in the training triplet.
18. The system as claimed in claim 14, wherein the one or more hardware processors (104) are further configured to train the machine learning model with the plurality of triplets for each episode, by:
computing an embedding for each query sample, each positive support sample of the one or more positive support samples and each negative support sample of the one or more negative support samples, present in each triplet of the plurality of triplets, using an embedding model with initial weights present in the machine learning model;
finding a second set of positive support samples from the one or more positive support samples, in each triplet, wherein an Euclidean distance between the embedding of the query sample and the embedding of each positive support sample of the second set positive support samples, is largest;
finding a second set of negative support samples from the one or more negative support samples, in each triplet, wherein the Euclidean distance between the embedding of the query sample and the embedding of each negative support sample of the second set of negative support samples, is smallest;
determining a triplet loss for each triplet based on (i) an average Euclidean distance (Dy) associated with the second set of positive support samples, (ii) an average Euclidean distance (Dz) associated with the second set of negative support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h);

calculating the triplet loss for the corresponding episode, by adding the triplet loss for each triplet of the plurality of triplets present in the corresponding episode; and
updating the weights of the embedding model using a backpropagation, based on the triplet loss for the corresponding episode, to train the machine learning model for a successive episode.
19. The system as claimed in claim 18, wherein the one or more hardware processors (104) are further configured to obtain the average Euclidean distance (Dy) associated with the second set of positive support samples, by calculating an average of the Euclidean distances associated with each positive support sample of the second set of positive support samples, with respect to the query sample.
20. The system as claimed in claim 18, wherein the one or more hardware processors (104) are further configured to obtain the average Euclidean distance (Dz) associated with the second set of negative support samples, by calculating an average of the Euclidean distances associated with each negative support sample of the second set of negative support samples, with respect to the query sample.
21. The system as claimed in claim 18, wherein the one or more hardware processors (104) are further configured to determine the triplet loss for each triplet, from a maximum value out of: (i) Dy - Dz +m and (ii) the predefined triplet loss value (h).
22. The system as claimed in claim 14, wherein the one or more hardware processors (104) are further configured to assign the unique class for each unlabeled sample of the first set of unlabeled samples, based on the set of training support samples present in the training support set, using the pre-trained few-shot model, by:
computing an embedding for the corresponding unlabeled sample, and each training support sample of the set of training support samples, using an embedding model of the pre-trained few-shot model;
finding a second set of training support samples from the set of support samples, wherein the embedding of each training support sample of the second set of training support sample is closest to the embedding of the corresponding unlabeled sample;
identifying the unique class present in majority within the second set of training support samples; and
assigning the identified unique class to the corresponding unlabeled sample.
23. The system as claimed in claim 14, wherein the one or more hardware processors (104) are further configured to re-train the pre-trained few-shot model with the plurality of training triplets for each training episode, by:
computing an embedding for each training query sample, each positive training support sample of the one or more positive training support samples and each negative training support sample of the one or more negative training support samples, present in each training triplet of the plurality of training triplets, using an embedding model of the pre-trained few-shot model;
finding a second set of positive training support samples from the one or more positive training support samples, in each training triplet, wherein an Euclidean distance between the embedding of the training query sample and the embedding of each positive training support sample of the second set of positive training support samples, is largest;
finding a second set of negative training support samples from the one or more negative training support samples, in each training triplet, wherein the Euclidean distance between the embedding of the training query sample and the embedding of each negative training support sample of the second set of negative training support samples, is smallest;
determining a training triplet loss for each training triplet based on (i) an average training Euclidean distance (Eb) associated with the second set of positive training support samples, (ii) an average training Euclidean distance (Eg) associated with the second set of negative training support samples, (iii) a predefined hyper-parameter (m), and (iv) a predefined triplet loss value (h);
calculating the training triplet loss for the corresponding training episode, by adding the training triplet loss for each training triplet of the plurality of training triplets present in the corresponding training episode; and
updating weights of the embedding model using a backpropagation, based on the training triplet loss for the corresponding training episode, to re-train the pre-trained few-shot model for a successive training episode.
24. The system as claimed in claim 23, wherein the one or more hardware processors (104) are further configured to obtain the average training Euclidean distance (Eb) associated with the second set of positive training support samples, by calculating an average of the Euclidean distances associated with each positive training support sample of the second set of positive training support samples, with respect to the training query sample.
25. The system as claimed in claim 23, wherein the one or more hardware processors (104) are further configured to obtain the average training Euclidean distance (Eg) associated with the second set of negative training support samples, by calculating an average of the Euclidean distances associated with each negative training support sample of the second set of negative training support samples, with respect to the training query sample.
26. The system as claimed in claim 23, wherein the one or more hardware processors (104) are further configured to determine the training triplet loss for each training triplet, from a maximum value out of: (i) Eb - Eg +m and (ii) the predefined triplet loss value (h).

Documents

Application Documents

# Name Date
1 202021034689-STATEMENT OF UNDERTAKING (FORM 3) [12-08-2020(online)].pdf 2020-08-12
2 202021034689-REQUEST FOR EXAMINATION (FORM-18) [12-08-2020(online)].pdf 2020-08-12
3 202021034689-FORM 18 [12-08-2020(online)].pdf 2020-08-12
4 202021034689-FORM 1 [12-08-2020(online)].pdf 2020-08-12
5 202021034689-FIGURE OF ABSTRACT [12-08-2020(online)].jpg 2020-08-12
6 202021034689-DRAWINGS [12-08-2020(online)].pdf 2020-08-12
7 202021034689-DECLARATION OF INVENTORSHIP (FORM 5) [12-08-2020(online)].pdf 2020-08-12
8 202021034689-COMPLETE SPECIFICATION [12-08-2020(online)].pdf 2020-08-12
9 202021034689-FORM-26 [16-10-2020(online)].pdf 2020-10-16
10 202021034689-Proof of Right [11-02-2021(online)].pdf 2021-02-11
11 Abstract1.jpg 2022-06-08
12 202021034689-FER.pdf 2023-03-06
13 202021034689-OTHERS [08-08-2023(online)].pdf 2023-08-08
14 202021034689-FER_SER_REPLY [08-08-2023(online)].pdf 2023-08-08
15 202021034689-DRAWING [08-08-2023(online)].pdf 2023-08-08
16 202021034689-COMPLETE SPECIFICATION [08-08-2023(online)].pdf 2023-08-08
17 202021034689-CLAIMS [08-08-2023(online)].pdf 2023-08-08
18 202021034689-ABSTRACT [08-08-2023(online)].pdf 2023-08-08
19 202021034689-PatentCertificate30-10-2024.pdf 2024-10-30
20 202021034689-IntimationOfGrant30-10-2024.pdf 2024-10-30

Search Strategy

1 npl3E_06-03-2023.pdf
2 npl2E_06-03-2023.pdf
3 npl1E_06-03-2023.pdf

ERegister / Renewals

3rd: 08 Nov 2024

From 12/08/2022 - To 12/08/2023

4th: 08 Nov 2024

From 12/08/2023 - To 12/08/2024

5th: 08 Nov 2024

From 12/08/2024 - To 12/08/2025

6th: 07 Jul 2025

From 12/08/2025 - To 12/08/2026