Unsupervised Representation Learning By Deep Convolution Autoencoder For Image Clustering

Abstract: Despite significant advances in clustering methods in recent years, the outcome of clustering a natural image dataset is still unsatisfactory, owing to the need for both a good feature representation of an image and a method for discriminating these features into different clusters such that intra-class variance is low and inter-class variance is high. Often these two aspects are dealt with independently, and the resulting features are not sufficient to partition the data meaningfully. Embodiments of the present disclosure provide systems and methods that implement a Deep Convolutional Autoencoder to discover the features required for separating images into various clusters, wherein image representation features are learnt automatically for clustering, and wherein a coherent (positive) image and an incoherent (negative) image are simultaneously selected for a given image so as to learn discriminative features that group similar images in a cluster while at the same time separating dissimilar images across clusters.

Patent Information

Application #:
Filing Date: 28 December 2018
Publication Number: 27/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: kcopatents@khaitanco.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-11-28
Renewal Date:

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai 400021 Maharashtra, India

Inventors

1. DAS, Dipanjan
Tata Consultancy Services Limited Building 1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160 West Bengal, India
2. BHOWMICK, Brojeshwar
Tata Consultancy Services Limited Building 1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160 West Bengal, India
3. GHOSH, Ratul
Indian Institute of Information Technology IIIT Rd, Near Boys Hostel, Devghat, Jhalwa, Prayagraj 211015 Uttar Pradesh, India

Specification

TECHNICAL FIELD
[001] The disclosure herein generally relates to image clustering techniques, and, more particularly, to unsupervised representation learning by deep convolution autoencoder for image clustering.
BACKGROUND
[002] Despite significant advances in clustering methods in recent years, the outcome of clustering a natural image dataset is still unsatisfactory due to two important drawbacks. Firstly, clustering of images needs a good feature representation of an image, and secondly, a robust method is required which can discriminate these features and assign them to different clusters such that intra-class variance is low and inter-class variance is high. Often these two aspects are dealt with independently, and thus the features are not sufficient to partition the data meaningfully.
SUMMARY
[003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for image clustering. The method comprises sequentially receiving, by a Deep Convolution Autoencoder (DCA) via one or more hardware processors, an anchor image from a set of images comprised in a memory, wherein the anchor image comprises an object of interest; extrapolating each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images; receiving, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image; simultaneously (i) minimizing a distance between each of the anchor image and the corresponding positive image and (ii) maximizing the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively in an unsupervised manner; reconstructing, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss; and transforming the representation of each of the anchor image into a model assignment probability and automatically assigning, using a K-means algorithm, the model assignment probability of each of the anchor image to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image to obtain a deterministic model assignment probability and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters, wherein the DCA learns the uniform distribution of each of the anchor image to the plurality of clusters.
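The two entropy constraints recited above can be illustrated with a small NumPy sketch (an illustrative approximation only; the function names and exact loss formulation are assumptions, not the claimed implementation). Minimizing the sample oriented entropy drives each image's assignment toward a one-hot (deterministic) probability, while maximizing the batch oriented entropy drives the batch toward a uniform distribution over the plurality of clusters:

```python
import numpy as np

def sample_entropy(q):
    """Sample oriented entropy: mean per-image entropy of the model assignment
    probabilities q (n x K). Minimizing it pushes each row toward one-hot."""
    return float(-np.mean(np.sum(q * np.log(q + 1e-12), axis=1)))

def batch_entropy(q):
    """Batch oriented entropy: entropy of the batch-averaged assignment.
    Maximizing it pushes the batch toward a uniform use of the K clusters."""
    m = q.mean(axis=0)
    return float(-np.sum(m * np.log(m + 1e-12)))

# Confident and balanced: low sample entropy, high batch entropy (desired).
q_good = np.array([[0.98, 0.01, 0.01],
                   [0.01, 0.98, 0.01],
                   [0.01, 0.01, 0.98]])
# Degenerate: every image confidently assigned to the same single cluster.
q_collapsed = np.tile(np.array([0.98, 0.01, 0.01]), (3, 1))
```

A training objective along these lines would minimize the sample oriented term while maximizing (e.g., subtracting) the batch oriented term.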
[004] In an embodiment, the corresponding negative image may be identified and selected by: computing a corresponding statistic between each of the anchor image and the corresponding positive image; performing a comparison of a distance of a candidate negative image with the computed corresponding statistic in the representation space of the anchor image and the corresponding positive image; and identifying the candidate negative image as the corresponding negative image based on the comparison. In an embodiment, the candidate negative image is identified as the corresponding negative image when the distance of the candidate negative image is greater than the computed corresponding statistic.
[005] In an embodiment, local structure of one or more images from the set of images is preserved using the reconstruction loss.
[006] In an embodiment, the step of augmenting each of the anchor image to identify the corresponding positive image comprises at least one of a random rotation, a random scaling, a random erasing, a random flip, adjustment of brightness, and adjustment of contrast of each of the anchor image.

[007] In another aspect, there is provided a system for image clustering. The system comprises a memory storing instructions and a set of images; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to execute: a Deep Convolution Autoencoder (DCA) comprised in the memory, wherein the DCA is configured to: sequentially receive an anchor image from the set of images comprised in the memory, wherein the anchor image comprises an object of interest; extrapolate each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images; receive, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image; simultaneously (i) minimize a distance between each of the anchor image and the corresponding positive image and (ii) maximize the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively in an unsupervised manner; reconstruct, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss; and transform the representation of each of the anchor image into a model assignment probability and automatically assign, using a K-means algorithm, the model assignment probability of each of the anchor image to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image 
to obtain a deterministic model assignment probability and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters, wherein the DCA learns the uniform distribution of each of the anchor image to the plurality of clusters.

[008] In an embodiment, the corresponding negative image is identified and selected by: computing a corresponding statistic between each of the anchor image and the corresponding positive image; performing a comparison of a distance of a candidate negative image with the computed corresponding statistic in the representation space of the anchor image and the corresponding positive image; and identifying the candidate negative image as the corresponding negative image based on the comparison. In an embodiment, the candidate negative image is identified as the corresponding negative image when the distance of the candidate negative image is greater than the computed corresponding statistic.
[009] In an embodiment, local structure of one or more images from the set of images is preserved using the reconstruction loss.
[010] In an embodiment, each of the anchor image is augmented to identify the corresponding positive image by performing at least one of a random rotation, a random scaling, a random erasing, a random flip, adjustment of brightness, and adjustment of contrast of each of the anchor image.
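The augmentation operations listed above can be sketched in a toy NumPy routine (an illustration only; the rotation is restricted to 90-degree steps and all parameter ranges are assumptions, since the disclosure does not fix them):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_positive(anchor):
    """Toy augmentation of an anchor image (H x W array) to obtain a positive
    image: random flip, random 90-degree rotation, brightness jitter, and
    random erasing of a small patch."""
    img = anchor.astype(float).copy()
    if rng.random() < 0.5:                                  # random flip
        img = np.fliplr(img)
    img = np.rot90(img, k=int(rng.integers(0, 4)))          # random rotation
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255)      # brightness
    h, w = img.shape
    y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
    img[y:y + h // 4, x:x + w // 4] = 0                     # random erasing
    return img

positive = make_positive(np.ones((8, 8)) * 100)
```

A real pipeline would additionally use arbitrary-angle rotation, random scaling, and contrast adjustment as recited above.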
[011] In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause a method for image clustering. The method comprises sequentially receiving, by a Deep Convolution Autoencoder (DCA) via one or more hardware processors, an anchor image from a set of images comprised in a memory, wherein the anchor image comprises an object of interest; extrapolating each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images; receiving, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image; simultaneously (i) minimizing a distance between each of the anchor image and the corresponding positive image and (ii) maximizing the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively in an unsupervised manner; reconstructing, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss; and transforming the representation of each of the anchor image into a model assignment probability and automatically assigning, using a K-means algorithm, the model assignment probability of each of the anchor image to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image to obtain a deterministic model assignment probability and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters, wherein the DCA learns the uniform distribution of each of the anchor image to the plurality of clusters.
[012] In an embodiment, the corresponding negative image may be identified and selected by: computing a corresponding statistic between each of the anchor image and the corresponding positive image; performing a comparison of a distance of a candidate negative image with the computed corresponding statistic in the representation of the anchor image and the corresponding positive image; and identifying the candidate negative image as the corresponding negative image based on the comparison. In an embodiment, the candidate negative image is identified as the corresponding negative image when the distance of the candidate negative image is greater than the computed corresponding statistic.
[013] In an embodiment, local structure of one or more images from the set of images is preserved using the reconstruction loss.
[014] In an embodiment, the step of augmenting each of the anchor image to identify the corresponding positive image comprises at least one of a random rotation, a random scaling, a random erasing, a random flip, adjustment of brightness, and adjustment of contrast of each of the anchor image.

[015] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[016] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[017] FIG. 1 illustrates an exemplary block diagram of a system for unsupervised representation learning by a Deep Convolution Autoencoder (DCA) for image clustering, in accordance with an embodiment of the present disclosure.
[018] FIG. 2, with reference to FIG. 1, illustrates an exemplary flow diagram of a method for unsupervised representation learning by the DCA for image clustering using the system of FIG. 1 in accordance with an embodiment of the present disclosure.
[019] FIG. 3A illustrates an exemplary architecture of the DCA comprised in, and as implemented by the system of FIG. 1 in accordance with an embodiment of the present disclosure.
[020] FIG. 3B depicts an exemplary block diagram illustrating a method for identifying and selecting a negative image for a given anchor image, in accordance with an embodiment of the present disclosure.
[021] FIG. 4A depicts a graphical representation illustrating results of an ablation study on the MNIST dataset, in accordance with an example embodiment of the present disclosure.
[022] FIG. 4B depicts a graphical representation illustrating results of an ablation study on the CIFAR 10 dataset, in accordance with an example embodiment of the present disclosure.
[023] FIG. 4C and FIG. 4D depict the impact of a constraint (Lb) on the model assignment distribution of class 1 of the MNIST dataset, in accordance with an example embodiment of the present disclosure.

[024] FIGS. 5A through 5D depict graphical representations of hyper parameter selection in accordance with an example embodiment of the present disclosure.
[025] FIGS. 6A through 6G depict (uniform) distribution of images from MNIST dataset to a plurality of clusters in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[026] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[027] Clustering of images is one of the fundamental and challenging problems in computer vision and machine learning. Several applications, like 3D reconstruction from an image set, storyline reconstruction from photo streams, web scale fast image clustering, etc., use image clustering as one of the important tools in their methodology. While the literature on clustering algorithms is voluminous, there are roughly two types of approaches for clustering, viz. hierarchical clustering and centroid-based clustering. The most popular algorithm among the hierarchical methods is agglomerative clustering, which begins with many small clusters and then merges clusters gradually to arrive at a certain number of clusters without fixing the number of clusters beforehand. On the other hand, centroid-based methods (e.g., K-means) pick K samples from the input data to initialize cluster centroids, which are refined iteratively by minimizing the distance between the input data and the centroids. All such methods require the notion of a feature to represent image data so that a meaningful partition of an image dataset can be obtained. If the dimension of such a feature is very high, it becomes ineffective to compute distance metrics between the features due to the well-known curse of dimensionality. Hence, a variety of methods like principal component analysis (PCA), canonical correlation analysis (CCA), nonnegative matrix factorization (NMF) and sparse coding (dictionary learning) are used regularly to reduce the dimensionality of features before clustering, and hence the performance of the clustering methods crucially depends on the choice of these different feature representations.
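The centroid-based refinement described above can be sketched as a minimal K-means loop in NumPy (an illustration of the classical algorithm, not the disclosed system; the initial centroids are passed in explicitly for reproducibility):

```python
import numpy as np

def kmeans(X, init, iters=50):
    """Minimal K-means: starting from the given centroids, alternate between
    assigning each point to its nearest centroid and recomputing the means."""
    centroids = init.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centres = kmeans(X, init=X[[0, -1]])  # one seed point per blob
```

On raw pixel vectors of natural images this loop performs poorly, which is precisely the motivation for learning a representation first.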
[028] In recent years, deep learning (DL) has achieved huge success in producing good representations of images, which are required for different machine learning tasks. DL learns these powerful representations from the image through high-level non-linear mappings, and hence these representations can be used to partition the data into different clusters. K-means works better with such representations learned by DL models than with other traditional methods. There are broadly two approaches to using the deep representation of the data for clustering. The first, naïve approach is to use the hidden representations (features) of the data extracted from a well-trained deep network using supervision. However, this approach cannot fully exploit the power of a deep neural network for unsupervised clustering, because a deep network already trained for some other purpose lacks knowledge of the features required for partitioning the unknown data. In the second approach, some existing clustering method is embedded into a DL model. This association enables the DL model to learn cluster-oriented representations. For example, one of the existing approaches integrates the K-means algorithm into deep auto-encoders and performs cluster assignment on the bottleneck layers. Such an approach produces a better outcome, as the features learned by the DL models are relevant for clustering. Also, if the distribution of the cluster assignment of the features can be predicted through some auxiliary distribution, then, with the observed cluster assignment of a feature and the auxiliary distribution, the accuracy of the clustering can be improved. To this end, in a recent work, a clustering objective was proposed to learn deep representations by minimizing the KL divergence between the observed cluster assignment probability and an auxiliary target distribution derived from the current soft cluster assignment. For designing the auxiliary distribution, a function of the current model assignment distribution and the frequency per cluster were used. However, the cluster assignment probabilities change with training, and therefore, without any constraints, the auxiliary distribution also keeps changing, resulting in poor quality of representation. To improve the representation, a regularization on the DL model was proposed which enforces that feature representations of the same class should come close to each other. Along with constraints like feature coherence for images of the same class, clustering methods should also enforce that the separation between inter-class features is high. But in an unsupervised setting, detecting an image which does not belong to the same class is difficult.
[029] In the present disclosure, embodiments and systems and methods associated thereof introduce constraint(s) on a Deep Convolutional Autoencoder (DCA) model whose objective is to bring intra-class images closer and make inter-class images distant in their representation space (also referred to as latent representation, representation space, or representation, and interchangeably used herein) simultaneously. To this end, for every anchor image in the database, a positive image is selected which is an augmented version of the anchor image, and a negative image which is neither similar to the original anchor image nor to the augmented image in their latent representation space. The negative image is selected from the image database using the selection technique/algorithm described hereinafter. A constraint is put on clustering using this image triplet (anchor image, positive image and negative image) to minimize the distance between the anchor and the positive image and, at the same time, maximize the distance between the anchor and the negative image in their latent representation space (also referred to as representation). This constraint, as introduced in the present disclosure by the systems and methods thereof, allows the model to learn a better feature representation, which in turn produces a better auxiliary distribution (as this is a function of the feature representation) required for improved accuracy in clustering in an unsupervised setting. Additionally, the present disclosure describes a technique to avoid the degenerate condition which occurs due to an abrupt distribution of model assignment causing uncertainty in cluster assignment for a particular sample. For example, class 1 of the MNIST dataset may have a high probability of being assigned to multiple classes during training. So, if the probability of the other classes can be suppressed by making the probability distribution close to a one-hot vector, then the probability distribution leads to a correct cluster assignment (refer FIG. 4C). More specifically, FIG. 4C depicts a graphical representation illustrating the impact of a constraint (Lb) on the model assignment distribution of class 1 of the MNIST dataset, in accordance with an example embodiment of the present disclosure. Here the X-axis represents class and the Y-axis represents probability. FIG. 4C shows ambiguous cluster assignments of images of class 1, because the probabilities of assigning them to classes 9, 7 and 4 are also high when (Lb) is not used.
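The triplet constraint described above resembles a standard triplet margin loss on the latent representations; below is a minimal NumPy sketch (the margin value and the squared-L2 formulation are assumptions, not necessarily the disclosed loss):

```python
import numpy as np

def triplet_loss(z_a, z_p, z_n, margin=1.0):
    """Hinge-style triplet loss on latent representations: pull the anchor
    toward the positive and push it away from the negative by at least
    `margin` (in squared L2 distance)."""
    d_pos = np.sum((z_a - z_p) ** 2)   # anchor-positive distance
    d_neg = np.sum((z_a - z_n) ** 2)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

z_a = np.array([0.0, 0.0])
z_p = np.array([0.1, 0.0])   # close to the anchor -> small d_pos
z_n = np.array([3.0, 0.0])   # far from the anchor -> large d_neg
loss = triplet_loss(z_a, z_p, z_n)   # 0.01 - 9.0 + 1.0 < 0 -> loss = 0.0
```

Minimizing this term during training simultaneously reduces the anchor-positive distance and increases the anchor-negative distance, as the paragraph above requires.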
[030] Referring now to the drawings, and more particularly to FIGS. 1 through 6G, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[031] FIG. 1 illustrates an exemplary block diagram of a system 100 for unsupervised representation learning by a Deep Convolution Autoencoder (DCA) for image clustering, in accordance with an embodiment of the present disclosure. The system 100 may also be referred to as an ‘image clustering system’ and interchangeably used hereinafter. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[032] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[033] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 can be stored in the memory 102, wherein the database 108 may comprise, but is not limited to, images (for example, anchor images, positive images, negative images), associated representations thereof, information pertaining to the distance between each of the anchor image and the corresponding positive image and the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively, information pertaining to selection of negative image(s), one or more constraints such as reconstruction loss, sample oriented entropy, and batch oriented entropy, augmentation information pertaining to identification of a positive image based on the anchor image under consideration, and the like. In an embodiment, the memory 102 may store one or more technique(s) (e.g., distance computation techniques, reconstruction techniques, K-means clustering techniques, and the like) and the Deep Convolutional Autoencoder (DCA), which when executed by the one or more hardware processors 104 perform the methodology described herein. The memory 102 may further comprise information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure.
[034] FIG. 2, with reference to FIG. 1, illustrates an exemplary flow diagram of a method for unsupervised representation learning by a Deep Convolution Autoencoder (DCA) for image clustering using the system 100 of FIG. 1 in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, and the flow diagram as depicted in FIG. 2.
[035] In an embodiment of the present disclosure, at step 202, a Deep Convolution Autoencoder (DCA) is executed by the one or more hardware processors 104 to sequentially receive an anchor image from a set of images comprised in the memory 108. In other words, images from the set of images may be randomly chosen/selected, wherein the randomly chosen images are processed in an order (e.g., one after another), in one example embodiment. In the present disclosure, the literal meaning of sequentially receiving an anchor image from the set of images shall not be construed as processing each image (or anchor image) from the set of images as stored. The anchor image includes an object of interest, in one example embodiment. For instance, FIGS. 3A-3B depict the set of images comprised in the memory 108 (also referred to herein as the image database). More specifically, FIG. 3A depicts an exemplary architecture of the Deep Convolution Autoencoder (DCA) comprised in the system 100 which, when executed, performs image clustering, in accordance with an embodiment of the present disclosure. As can be seen from FIG. 3A, the DCA has an encoder layer which maps the input image x_i (also referred to as an anchor image and interchangeably used hereinafter) to its latent representation z_i = F_φ(x_i) using a stack of convolution layers followed by a fully connected layer. The decoder layer has a fully connected layer followed by a stack of deconvolutional layers to convert the latent representation back to the original image x'_i = G_θ(z_i) = G_θ(F_φ(x_i)). The systems and methods of the present disclosure attempt to minimize a reconstruction loss given by Equation (1). The purpose of using the reconstruction loss is to preserve the local structure of the input images in feature space. In other words, the local structure of one or more images from the set of images is preserved using the reconstruction loss.

L_r = (1/n) Σ_{i=1..n} || x_i − G_θ(F_φ(x_i)) ||²        ... (1)
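The reconstruction objective can be illustrated numerically with a toy linear stand-in for the convolutional encoder F_φ and decoder G_θ (an illustration only; the disclosed DCA uses convolution/deconvolution stacks, and the helper names here are assumptions):

```python
import numpy as np

def reconstruction_loss(X, encode, decode):
    """Mean squared reconstruction error over a batch, as in a standard
    autoencoder objective; this is what preserves local structure of inputs."""
    X_rec = decode(encode(X))
    return float(np.mean(np.sum((X - X_rec) ** 2, axis=1)))

# Toy linear stand-ins: project 3-D inputs to a 2-D latent and back.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
encode = lambda X: X @ W       # stands in for F_phi
decode = lambda Z: Z @ W.T     # stands in for G_theta
X = np.array([[1.0, 2.0, 0.0], [0.5, -1.0, 0.0]])
loss = reconstruction_loss(X, encode, decode)  # inputs lie in the plane -> 0.0
```

Inputs with a component outside the latent subspace incur a positive loss, which is the signal the decoder training minimizes.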

[036] In an embodiment of the present disclosure, at step 204, the Deep Convolution Autoencoder (DCA) is executed by the one or more hardware processors 104 to extrapolate each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images. FIGS. 3A-3B depict how the anchor image is augmented to obtain a corresponding positive image for each anchor image. FIGS. 3A-3B further depict a negative image selected from the set of images comprised in the memory 108. More specifically, FIG. 3A, with reference to FIGS. 1-2, illustrates an exemplary architecture of the DCA comprised in, and as implemented by, the system 100 of FIG. 1 in accordance with an embodiment of the present disclosure. FIG. 3B, with reference to FIGS. 1 through 3A, depicts an exemplary block diagram illustrating a method for identifying and selecting a negative image for a given anchor image, in accordance with an embodiment of the present disclosure. In an embodiment of the present disclosure, the corresponding negative image is identified and selected by: computing a corresponding statistic between each of the anchor image and the corresponding positive image; performing a comparison of a distance of a candidate negative image with the computed corresponding statistic in the representation of the anchor image and the corresponding positive image; and identifying the candidate negative image as the corresponding negative image based on the comparison. Below is an exemplary pseudo code for identification and selection of the negative image from the set of images comprised in the memory 108, and the pseudo code of the present disclosure shall not be construed as limiting the scope of the present disclosure:
[037] Pseudo code for identification and selection of a negative image:
Input: anchor image I_a, augmentations of I_a [ I_p^i, where i = 1 to 10 ], image dataset
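The body of the pseudo code can be sketched in Python from the textual description above (the choice of the maximum anchor-to-augmentation distance as the statistic, and returning the first qualifying candidate, are assumptions where details were elided):

```python
import numpy as np

def select_negative(z_anchor, z_positives, z_candidates):
    """Select as negative the first candidate whose distance to the anchor in
    representation space exceeds the anchor-positive statistic (assumed here
    to be the maximum anchor-to-augmentation distance)."""
    stat = max(np.linalg.norm(z_anchor - z_p) for z_p in z_positives)
    for z_c in z_candidates:
        if np.linalg.norm(z_anchor - z_c) > stat:
            return z_c
    return None  # no candidate is sufficiently dissimilar

z_a = np.zeros(2)
z_pos = [np.array([0.1, 0.0]), np.array([0.0, 0.2])]      # augmentations of I_a
z_cands = [np.array([0.05, 0.05]), np.array([2.0, 2.0])]  # dataset images
neg = select_negative(z_a, z_pos, z_cands)
```

Here the first candidate lies within the anchor-positive neighbourhood and is rejected; the second is farther than any augmentation and is selected as the negative.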

[038] Referring back to FIG. 2, in an embodiment of the present disclosure, at step 206, the Deep Convolution Autoencoder (DCA) is executed by the one or more hardware processors 104 to receive, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image.
[039] In an embodiment of the present disclosure, at step 208, the Deep Convolution Autoencoder (DCA) is executed by the one or more hardware processors 104 to simultaneously (i) minimize a distance between each of the anchor image and the corresponding positive image and (ii) maximize the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively, in an unsupervised manner.
[040] In an embodiment of the present disclosure, at step 210, the Deep Convolution Autoencoder (DCA) is executed by the one or more hardware processors 104 to reconstruct, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss. Steps 206 through 210 are better understood by way of the following detailed description:
[041] The clustering layer in the DCA of the system 100 consists of cluster centers µj, where j = 1, ..., K and K is the number of predefined clusters. In the present disclosure, the system 100 uses the Student's t-distribution to measure the similarity between a latent representation zi and the cluster center µj. The probability (model assignment) of assigning sample i to cluster j is given by equation (2) below by way of the following exemplary expression:

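The expression of equation (2) is not reproduced above. A plausible reconstruction, assuming the standard Student's t-kernel with one degree of freedom that matches the description, is:

```latex
q_{ij} = \frac{\left(1 + \lVert z_i - \mu_j \rVert^2\right)^{-1}}
              {\sum_{j'=1}^{K} \left(1 + \lVert z_i - \mu_{j'} \rVert^2\right)^{-1}}
\tag{2}
```

where $z_i$ is the latent representation of sample $i$ and $q_{ij}$ is the probability of assigning sample $i$ to cluster $j$.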
[042] In the absence of target labels in an unsupervised setup, an auxiliary probability is utilized by the system 100 for assigning data to a cluster, as given by equation (3) below by way of the following exemplary expression:

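The expression of equation (3) is not reproduced above. A plausible form of the auxiliary (target) distribution, assuming the sharpened assignment commonly paired with equation (2), is:

```latex
p_{ij} = \frac{q_{ij}^2 / f_j}{\sum_{j'=1}^{K} q_{ij'}^2 / f_{j'}},
\qquad f_j = \sum_{i} q_{ij}
\tag{3}
```

so that confident assignments are emphasized while each cluster's overall mass $f_j$ is normalized.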
[043] The clustering loss Lc is then computed using the KL divergence between the model assignment probability distribution and the auxiliary distribution, as given by equation (4) below by way of the following exemplary expression:
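The expression of equation (4) is not reproduced above. Writing P = {pij} and Q = {qij}, the standard KL form matching this description (an assumed reconstruction) is:

```latex
L_c = \mathrm{KL}(P \,\|\, Q) = \sum_{i} \sum_{j} p_{ij} \log \frac{p_{ij}}{q_{ij}}
\tag{4}
```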

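The soft assignment, auxiliary target, and KL clustering loss described in paragraphs [041] through [043] can be sketched numerically as follows; the functional forms, helper names, and toy latent codes are illustrative assumptions, not the disclosure's exact expressions:

```python
import numpy as np

def soft_assign(z, mu):
    # Student's t-kernel (one degree of freedom) between latents and centers
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)

def target_dist(q):
    # sharpened auxiliary distribution built from the soft assignments
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_loss(p, q):
    # clustering loss Lc = KL(P || Q)
    return np.sum(p * np.log(p / q))

z = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])   # toy latent codes
mu = np.array([[0.0, 0.0], [3.0, 3.0]])              # two cluster centers
q = soft_assign(z, mu)
p = target_dist(q)
print(round(kl_loss(p, q), 4))
```

The first two samples fall to the first center and the third sample to the second, with the target distribution sharpening each assignment.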
[044] The KL divergence loss given by equation (4) may not produce a sufficiently discriminative representation in an unsupervised setup where the actual target distribution is unavailable. In other words, there exist overlapping distributions of latent representations for different classes/clustering categories. This overlapping representation occurs due to change in the auxiliary probability distribution (equation 3) as training progresses. To mitigate this problem, the system 100 utilizes a new constraint on representation learning to learn more discriminating features by simultaneously minimizing the L2 distance between images of the same class and maximizing the L2 distance between images of different classes, as described in step 208. The new constraint, given by equation (5), is used in an unsupervised setup for learning the better feature representation required for clustering. This constraint enforces that the anchor image (Ia) and the positive image (Ip) should come close, and at the same time the distance between the feature of the anchor image (Ia) and the negative image (In) should increase.

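The expression of equation (5) is not reproduced above. A triplet-style form matching this description, with f(·) the encoder and the margin m an assumed hyper parameter, is:

```latex
L_t = \max\!\left(0,\; \lVert f(I_a) - f(I_p) \rVert_2^2
      \;-\; \lVert f(I_a) - f(I_n) \rVert_2^2 \;+\; m\right)
\tag{5}
```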
[045] As described above in step 204, the present disclosure and its systems and methods utilize different augmentation methods, for example, random rotation, random scaling, random erasing, random flipping, and the like, to produce the positive image (Ip) from the anchor image (Ia). It is a difficult task to select the negative image (In) for the anchor image (Ia) in an unsupervised setup where labels are unknown. Random selection of a negative image (In) from the image dataset cannot guarantee that it is truly negative with respect to the anchor image (Ia), and it may therefore fall into the same class as that of the anchor image (Ia). So, the present disclosure describes the pseudo code (refer above) to select a proper/appropriate negative image (In) and enforce that the negative image (In) does not belong to the same class as the anchor image (Ia), satisfying the necessary condition for equation (5).

[046] The above pseudo code, when executed by the system 100, tries to discover the distribution of the augmented images with respect to the anchor image in their latent representation space by using the L2 distance. It selects as a negative image an appropriate image (also referred to as a candidate negative image) from the dataset whose latent representation does not fall within the distribution of the augmented images of the anchor image. The DCA, as implemented and executed by the system 100, is pre-trained for three epochs with the reconstruction loss to initialize the parameters of the encoder, enabling the above pseudo code to use the latent representation for selecting negative images.
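The selection logic described in this paragraph can be sketched in NumPy as follows; the latent vectors, the mean-plus-k-standard-deviations threshold, and the function name are illustrative assumptions, not the disclosure's exact pseudo code:

```python
import numpy as np

def select_negative(z_anchor, z_augments, z_candidates, k=2.0):
    """Return indices of candidates whose latent representation falls outside
    the distribution of the anchor's augmented (positive) images."""
    # L2 distances between the anchor and each of its augments in latent space
    d_aug = np.linalg.norm(z_augments - z_anchor, axis=1)
    # assumed "statistics": mean + k standard deviations of augment distances
    threshold = d_aug.mean() + k * d_aug.std()
    # L2 distances between the anchor and each candidate negative
    d_cand = np.linalg.norm(z_candidates - z_anchor, axis=1)
    # a candidate qualifies as a negative only when it lies beyond the threshold
    return np.where(d_cand > threshold)[0]

# toy 8-dimensional latent vectors: ten augments close to the anchor,
# one near candidate (likely the same class) and one far candidate
z_a = np.zeros(8)
z_p = np.ones((10, 8)) * np.linspace(0.08, 0.12, 10)[:, None]
z_c = np.vstack([np.full(8, 0.01), np.full(8, 5.0)])
print(select_negative(z_a, z_p, z_c))   # only the far candidate is selected
```

The near candidate is rejected because its latent representation lies inside the augment distribution, mirroring the Horse/Horse2/Ship example of FIG. 3B.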
[047] FIG. 3B demonstrates the selection process of a negative image. For example, the Ship image is selected as a negative image for the anchor image Horse, instead of the Horse2 image, which is merely a different orientation of the Horse image. This method enables the DCA to learn a better representation, which helps in clustering. However, an image belonging to a class can have a considerable probability of belonging to multiple clusters during training, due to the unconstrained model assignment probability over classes as mentioned in equation (2). For instance, when experiments were conducted on the MNIST dataset, it was observed that 35% of the images from class 1 had high probabilities of belonging to their own class, and the rest of the samples from class 1 belonged to other classes during training after 10 epochs. This leads to incorrect clustering without an explicit constraint, as shown in an ablation study by the present disclosure in FIG. 4A. More specifically, FIG. 4A, with reference to FIGS. 1 through 3B, depicts a graphical representation illustrating the result of an ablation study on the MNIST dataset, in accordance with an example embodiment of the present disclosure. To mitigate this problem, as described in the ablation study, the system 100 utilizes new constraints, given by equations (6) and (7) respectively, based on the difference between the entropy of the average of the qij (equation (2)) and the entropy of the qij (equation (2)), so that the information of the input is retained in qij.
[048] The above is better understood by way of the following:

[049] As mentioned above, a sample oriented entropy, Ls, is introduced by the system as a new constraint to ensure each image i is predominantly assigned to a single cluster, as given by equation (6).

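The expression of equation (6) is not reproduced above. Writing n for the batch size and K for the number of clusters, a plausible form of this sample oriented entropy is:

```latex
L_s = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{K} q_{ij} \log q_{ij}
\tag{6}
```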
[050] Ls becomes minimum when qij is a deterministic distribution, i.e., when qij is the same as a one-hot vector. One trivial side effect of the sample oriented entropy is that the clustering layer may produce a constant one-hot vector qij for all inputs in the dataset, i.e., select a single cluster for all of the data. To avoid this local minimum, the system 100 utilizes another constraint, referred to as a batch oriented entropy Lb, given by equation (7). The motivation of this constraint is to distribute data across all the clusters equally.

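The expression of equation (7) is not reproduced above. A plausible form of the batch oriented entropy, penalizing a non-uniform batch-averaged assignment, is:

```latex
L_b = \sum_{j=1}^{K} \bar{q}_j \log \bar{q}_j,
\qquad \bar{q}_j = \frac{1}{n} \sum_{i=1}^{n} q_{ij}
\tag{7}
```

so that $L_b$ is minimized when the batch-averaged assignment is uniform across the $K$ clusters.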
[051] Both Ls and the batch oriented entropy Lb complement each other, and can therefore be combined and rewritten as the below expression:

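The combined expression is not reproduced above. One plausible combined form, with the per-sample entropy averaged over the batch and $\bar{q}_j$ the batch-averaged assignment, is:

```latex
L_e = L_s + L_b
    = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{K} q_{ij}\log q_{ij}
      \;+\; \sum_{j=1}^{K} \bar{q}_j \log \bar{q}_j
```

which, under these assumed forms, equals the negative of the mutual information between inputs and cluster assignments, so minimizing it keeps each assignment confident while retaining the information of the input in qij.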
[052] It was observed that the histogram of the images from class 1 of the MNIST dataset, after adding these constraints (equation 6), clearly indicated that the class assignment becomes significantly better during training, resulting in improved overall accuracy (refer to FIG. 4D). The corresponding ablation study is shown in FIG. 4B, wherein the clustering of images is depicted. More specifically, FIG. 4B, with reference to FIGS. 1 through 4A, depicts a graphical representation illustrating the result of an ablation study (CIFAR 10), in accordance with an example embodiment of the present disclosure. FIG. 4D depicts a graphical representation illustrating the impact of a constraint (Lb) on the model assignment distribution of class 1 of the MNIST dataset, in accordance with an example embodiment of the present disclosure. More specifically, FIG. 4D depicts a graphical representation illustrating that, after using (Lb), images of class 1 have a probability of belonging to only a single cluster, which is their own class.

[053] In other words, the representation of each of the anchor image is transformed into a model assignment probability, and the model assignment probability of each of the anchor image is automatically assigned to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image so that the model assignment probability becomes deterministic, and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters, at step 212 of FIG. 2. The DCA learns this uniform distribution of each of the anchor image to the plurality of clusters.
[054] Further, the final loss function for the DCA is given by the Equation (8).

with hyper parameters α, β, γ, δ, and w.
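The expression of equation (8) is not reproduced above. One plausible reading, pairing the five hyper parameters with the reconstruction loss Lr, the clustering loss Lc, a triplet-style term Lt for the constraint of equation (5), the sample oriented entropy Ls, and the batch oriented entropy Lb (the pairing of weights to terms is an assumption), is:

```latex
L = \alpha\, L_r + \beta\, L_c + \gamma\, L_t + \delta\, L_s + w\, L_b
\tag{8}
```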
[055] Alternatively, the sample oriented entropy Ls and the batch oriented entropy Lb can be viewed as a combined loss, wherein both complement each other, and expression (8) can be written as:


[056] Combining the sample oriented entropy (Ls) and the batch oriented entropy (Lb) maximizes the retention of information of the input (e.g., the anchor image) in the model assignment, producing better model assignments and thereby improving image clustering.
[057] FIGS. 6A through 6G, with reference to FIGS. 1 through 5B, depict the (uniform) distribution of images from the MNIST dataset to a plurality of clusters in accordance with an embodiment of the present disclosure. In FIGS. 6A through 6G, the various clusters are denoted by a color coding schema and numeric symbols. For instance, ‘0’ represents ‘black’ indicating a cluster 1, ‘1’ represents ‘dark blue’ indicating a cluster 2, ‘2’ represents ‘light green’ indicating a cluster 3, ‘3’ represents ‘aqua’ indicating a cluster 4, ‘4’ represents ‘red’ indicating a cluster 5, ‘5’ represents ‘pink’ indicating a cluster 6, ‘6’ represents ‘yellow’ indicating a cluster 7, ‘7’ represents ‘bluish gray’ indicating a cluster 8, ‘8’ represents ‘light purple’ indicating a cluster 9, and ‘9’ represents ‘dark green’ indicating a cluster 10, respectively. Due to the crowded clustering of scattered images and the huge overlap between the clusters, there is no depiction of the color coding schema and numeric symbols in FIG. 6A. However, it is to be understood and appreciated that such a color coding schema and numeric symbols can be depicted in FIG. 6A; for the sake of brevity, the same has been depicted in FIGS. 6B through 6G for better understanding of the embodiments of the present disclosure.
[058] Experiments
[059] Datasets:
[060] The image clustering technique, as described above in FIGS. 2 through 5, was evaluated on the following widely-used image datasets:
[061] MNIST: The dataset consisted of a total of 70,000 handwritten digits of 28x28 pixels. All images from the training and test sets were used, without their labels.
[062] USPS: It is a handwritten digits dataset from the USPS postal service, containing 11,000 samples of 16x16 images.
[063] CIFAR-10 contained 32x32 color images of ten different object classes. Here, all images of the training set were used by the present disclosure.
[064] CIFAR-100 contained 32x32 color images of hundred different object classes. Here, all images of the training set were used by the present disclosure.
[065] FRGC: 20 random subjects were selected from the original dataset and 2,462 face images were collected. Face regions were further cropped and resized into 32x32 images.
[066] Evaluation metric:
[067] The clustering method is evaluated by the unsupervised clustering accuracy (ACC) given by equation (9), where li is the ground-truth label, ci is the cluster assignment, i.e., ci = argmaxj(qij), and m ranges over all possible one-to-one mappings between clusters and labels.
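The expression of equation (9) is not reproduced above. The standard form of this metric, with $\mathbb{1}\{\cdot\}$ the indicator function and $M$ the set of one-to-one mappings between clusters and labels (the best mapping is computable via the Hungarian algorithm), is:

```latex
\mathrm{ACC} = \max_{m \in M} \frac{\sum_{i=1}^{n} \mathbb{1}\{\, l_i = m(c_i) \,\}}{n}
\tag{9}
```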
[068] Experiment Result
[069] The method of the present disclosure was implemented in TensorFlow and executed on a Linux-based workstation equipped with a Graphical Processing Unit (GPU). TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. Results are reported for two types of networks, viz., a shallow network suitable for sparse datasets (like MNIST and USPS), and a deeper variant of the shallow network with more parameters for datasets like CIFAR10, CIFAR100, and FRGC, which have more features than MNIST or USPS. The encoder of the shallow network consisted of three convolutional layers, with kernel size 3x3 and stride 1, followed by a fully connected layer. All the convolutional layers were associated with a max-pooling layer with stride 2. The decoder of the shallow network consisted of a fully connected layer followed by deconvolutional layers corresponding to the encoder's convolution layers. The deeper variant of the shallow network uses two additional convolution and two additional deconvolution layers for the encoder and decoder respectively. For both networks, zero padding was used for all convolutional and max-pooling layers. The Adam optimizer with a learning rate of 0.001 was used. ReLU (rectified linear units) was used for all hidden activations, and batch normalization was applied to each layer. The weights of all the convolution and fully connected layers were initialized using the Xavier approach known in the art. A mini-batch size of 128 was used, along with 64-dimensional and 10-dimensional representation vectors for the deeper and shallow networks respectively. Images of all the datasets were normalized to zero mean and unit variance, and the network (DCA) was pre-trained with only the reconstruction loss; cluster centers were initialized using the K-means clustering method (or K-means algorithm).

[070] In order to find suitable values for the hyper parameters mentioned in equation (8), extensive experiments were conducted on the MNIST dataset. Hyper parameters were sampled in the range [0, 1], and experiments on the shallow network were conducted ‘N’ times (e.g., 100 times) to find suitable values. Accuracy was computed based on the evaluation metric given by equation (9), by keeping one hyper parameter fixed and changing the rest of the hyper parameters, as shown in FIGS. 5A-5D. The X-axis in FIGS. 5A-5D represents the hyper parameter value and the Y-axis represents the accuracy of the DCA implemented by the system 100, using equation (9). The vertical line shows the range of accuracy achieved by the DCA by keeping the corresponding hyper parameter (X-axis) unchanged and changing the rest of the hyper parameters. More specifically, FIGS. 5A through 5D, with reference to FIGS. 1 through 4B, depict graphical representations of a hyper parameter selection study in accordance with an example embodiment of the present disclosure.
[071] Based on this experiment, the hyper parameters for MNIST and USPS were chosen as α=1, β=0.8, γ=0.01, δ=0.001, and w=0.3. The same experiment was conducted for the deeper version of the network for CIFAR10, CIFAR100, and FRGC, and the hyper parameters were found to be α=1, β=0.7, γ=0.01, and δ=0.001, for the shallow and deeper networks respectively.
[072] The complete results are summarized in Table 1 below. Specifically, Table 1 depicts clustering performance based on accuracy (%) of different algorithms (higher is better). The mean of 20 runs is reported.

Table 1
Methods              MNIST   USPS    CIFAR10   CIFAR100   FRGC    ImageNet
K-means on pixels    53.49   46      20.4      16.5       24.3    12.65
DEC                  84      62*     -         -          37.8    -
VaDE                 94.46   -       -         -          -       -
JULE                 96.4    95*     -         -          46.1    -
DEPICT               96.5    96.4    -         -          47      -
IMSAT                98.4    -       45.6ϯ     27.5ϯ      -       -
Present disclosure   98.93   97.63   44.19     25.4       47.28   35.69
ϯ denotes use of the weights of pre-trained deep residual networks. Results marked with * are excerpted from [DEPICT]. A dash (-) is shown where a method does not provide a result for the dataset.
[073] In the above Table 1, K-means on pixels, DEC, VaDE, JULE, DEPICT, and IMSAT are all traditional approaches, wherein DEC refers to ‘Deep embedding for clustering analysis’, VaDE refers to ‘Variational Deep Embedding’, JULE refers to ‘Joint unsupervised learning of deep representations and image clusters’, DEPICT refers to ‘Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization’, and IMSAT refers to ‘Learning discrete representations via information maximizing self-augmented training’, respectively.
[074] The shallow network was used to produce results for the MNIST and USPS datasets. The deeper version of the network was used to produce results for CIFAR10, CIFAR100, and FRGC. It was observed that the experimental results of the present disclosure exceeded the state-of-the-art (traditional method(s)) accuracy on MNIST, USPS, and FRGC, with competitive results achieved on CIFAR10 and CIFAR100. IMSAT achieved the best accuracy on CIFAR10 and CIFAR100 by using features from an already pre-trained model of deep residual networks. This deep residual network was trained on the ImageNet dataset of natural images, which is very similar to CIFAR (also a dataset of natural images). Features extracted for CIFAR10 and CIFAR100 from this model have good representations, which is very helpful for clustering. When the same pre-trained model was used for clustering on FRGC, it gave very poor accuracy (24.31%). In contrast, the DCA of the present disclosure was used for FRGC, CIFAR10, and CIFAR100, and it was observed that the experimental results exceeded the state-of-the-art accuracy on FRGC, with reasonably good accuracy on CIFAR. This shows that, in general, the method of the present disclosure produces significantly better results with more generalization across datasets.

[075] Effect of imbalanced data
[076] The performance of the present disclosure's model is quite good for balanced datasets. In order to check the performance on imbalanced datasets, a retention rate was used to keep the samples for a given class. For example, if the minimum retention rate is rmin, samples (e.g., image samples) from class 0 were randomly selected with a probability of rmin and class 9 with a probability of 1; samples from other classes fall linearly in between rmin and 1. Table 2 below shows that the approach of the present disclosure is robust enough for imbalanced datasets as well. Specifically, Table 2 depicts clustering accuracy (%) on imbalanced sub-samples of the MNIST dataset.

Table 2
Methods              Minimum retention rate (rmin)
                     0.1     0.3     0.5     0.7     0.9
K-means on pixels    46.96   48.73   52.86   53.16   53.39
DEC                  70.10   80.92   82.68   84.69   85.41
Present disclosure   88.02   94.5    96.12   97.2    97.91
[077] Embodiments of the present disclosure implement a deep learning based clustering network system and method thereof to learn inter-class separation in a latent representation space in an unsupervised setup. The deep feature representations are characterized by high intra-class similarity and low inter-class similarity at the same time, by utilizing the above pseudo code to choose the negative image in the unsupervised setup. Further, the experiments showed that the DCA of the system 100 is more generalized and achieves superior results across a variety of datasets compared to other alternative methods.
[078] As mentioned above, the present disclosure and its systems and methods apply constraints (e.g., sample oriented entropy and batch oriented entropy) on deep representation learning to enforce that feature representations are able to learn both intra-class similarities and inter-class separation at the same time in an unsupervised setup, which helps to improve clustering. Moreover, the present disclosure and its method use an entropy minimization objective for the model assignment probability. This new objective makes the model assignment probability distribution deterministic, which helps in better clustering. In contrast to some other traditional approaches, which use features from already pre-trained models for feature-rich datasets such as CIFAR10, the present disclosure and its method do not depend on such existing pre-trained models.
[079] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[080] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[081] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[082] The illustrated steps are set out to explain the exemplary
embodiments shown, and it should be anticipated that ongoing technological
development will change the manner in which particular functions are performed.
These examples are presented herein for purposes of illustration, and not
limitation. Further, the boundaries of the functional building blocks have been
arbitrarily defined herein for the convenience of the description. Alternative
boundaries can be defined so long as the specified functions and relationships
thereof are appropriately performed. Alternatives (including equivalents,
extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[083] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[084] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

WE CLAIM:
1. A processor implemented method for clustering images, comprising:
sequentially receiving (202), by a Deep Convolution Autoencoder (DCA) via one or more hardware processors, an anchor image from a set of images comprised in a memory, wherein the anchor image comprises an object of interest;
extrapolating each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images (204);
receiving, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image (206);
simultaneously (i) minimizing a distance between each of the anchor image and the corresponding positive image and (ii) maximizing the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively in an unsupervised manner (208);
reconstructing, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss (210); and
transforming the representation of each of the anchor image into a model assignment probability and automatically assigning, using a k-means clustering method, the model assignment probability of each of the anchor image to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image to obtain a deterministic model assignment probability and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters (212), wherein the DCA learns the uniform distribution of each of the anchor image to the plurality of clusters.

2. The processor implemented method of claim 1, wherein the corresponding
negative image is identified and selected by:
computing a corresponding statistics between each of the anchor image and the corresponding positive image;
performing a comparison of a distance of a candidate negative image with the computed corresponding statistics in the representation of the anchor image and the corresponding positive image; and
identifying the candidate negative image as the corresponding negative image based on the comparison.
3. The processor implemented method of claim 2, wherein the candidate negative image is identified as the corresponding negative image when the distance of the candidate negative image is greater than the computed corresponding statistics.
4. The processor implemented method of claim 1, wherein local structure of one or more images from the set of images is preserved using the reconstruction loss.
5. The processor implemented method of claim 1, wherein the step of augmenting each of the anchor image to identify the corresponding positive image comprises at least one of a random rotation, a random scaling, a random erasing, a random flip, adjustment of brightness, and adjustment of contrast of each of the anchor image.
6. A system (100) for clustering images, comprising:
a memory (102) storing instructions and a set of images;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to execute:

a Deep Convolution Autoencoder (DCA) that is configured to:
sequentially receive an anchor image from the set of images comprised in the memory, wherein the anchor image comprises an object of interest;
extrapolate each of the anchor image to identify (i) a corresponding positive image by augmenting the anchor image, and (ii) a corresponding negative image comprised in the set of images;
receive, in an encoder of the DCA, each of the anchor image, the corresponding positive image and the corresponding negative image to obtain a first representation of each of the anchor image, and a second representation of the corresponding positive image, and a third representation of the corresponding negative image;
simultaneously (i) minimize a distance between each of the anchor image and the corresponding positive image and (ii) maximize the distance between each of the anchor image and the corresponding negative image in the first representation, the second representation and the third representation respectively in an unsupervised manner;
reconstruct, in a decoder of the DCA, each of the anchor image using the representation of the anchor image to obtain a corresponding reconstructed anchor image and an associated reconstruction loss; and
transform the representation of each of the anchor image into a model assignment probability and automatically assign, using a k-means clustering method, the model assignment probability of each of the anchor image to a single cluster by applying (i) a sample oriented entropy on the model assignment probability of each of the anchor image to obtain a deterministic model assignment probability and (ii) a batch oriented entropy on the model assignment probability to obtain a uniform distribution of anchor images to a plurality of clusters, wherein the DCA learns the uniform distribution of each of the anchor image to the plurality of clusters.

7. The system of claim 6, wherein the corresponding negative image is
identified and selected by:
computing a corresponding statistics between each of the anchor image and the corresponding positive image;
performing a comparison of a distance of a candidate negative image with the computed corresponding statistics in the representation of the anchor image and the corresponding positive image; and
identifying the candidate negative image as the corresponding negative image based on the comparison.
8. The system of claim 7, wherein the candidate negative image is identified as the corresponding negative image when the distance of the candidate negative image is greater than the computed corresponding statistics.
9. The system of claim 6, wherein local structure of one or more images from the set of images is preserved using the reconstruction loss.
10. The system of claim 6, wherein each of the anchor image is augmented to identify the corresponding positive image by performing at least one of a random rotation, a random scaling, a random erasing, a random flip, adjustment of brightness, and adjustment of contrast of each of the anchor image.

Documents

Application Documents

# Name Date
1 201821049728-STATEMENT OF UNDERTAKING (FORM 3) [28-12-2018(online)].pdf 2018-12-28
2 201821049728-REQUEST FOR EXAMINATION (FORM-18) [28-12-2018(online)].pdf 2018-12-28
3 201821049728-FORM 18 [28-12-2018(online)].pdf 2018-12-28
4 201821049728-FORM 1 [28-12-2018(online)].pdf 2018-12-28
5 201821049728-FIGURE OF ABSTRACT [28-12-2018(online)].jpg 2018-12-28
6 201821049728-DRAWINGS [28-12-2018(online)].pdf 2018-12-28
7 201821049728-DECLARATION OF INVENTORSHIP (FORM 5) [28-12-2018(online)].pdf 2018-12-28
8 201821049728-COMPLETE SPECIFICATION [28-12-2018(online)].pdf 2018-12-28
9 201821049728-FORM-26 [14-02-2019(online)].pdf 2019-02-14
10 Abstract1.jpg 2019-03-28
11 201821049728-Proof of Right (MANDATORY) [20-06-2019(online)].pdf 2019-06-20
12 201821049728-ORIGINAL UR 6(1A) FORM 1-210619.pdf 2019-07-16
13 201821049728-ORIGINAL UR 6(1A) FORM 26-210219.pdf 2019-12-09
14 201821049728-OTHERS [26-07-2021(online)].pdf 2021-07-26
15 201821049728-FER_SER_REPLY [26-07-2021(online)].pdf 2021-07-26
16 201821049728-CLAIMS [26-07-2021(online)].pdf 2021-07-26
17 201821049728-FER.pdf 2021-10-18
18 201821049728-PatentCertificate28-11-2023.pdf 2023-11-28
19 201821049728-IntimationOfGrant28-11-2023.pdf 2023-11-28

Search Strategy

1 2021-01-1215-34-01E_27-01-2021.pdf

ERegister / Renewals

3rd: 04 Dec 2023 (From 28/12/2020 - To 28/12/2021)
4th: 04 Dec 2023 (From 28/12/2021 - To 28/12/2022)
5th: 04 Dec 2023 (From 28/12/2022 - To 28/12/2023)
6th: 04 Dec 2023 (From 28/12/2023 - To 28/12/2024)
7th: 19 Nov 2024 (From 28/12/2024 - To 28/12/2025)
8th: 20 Nov 2025 (From 28/12/2025 - To 28/12/2026)