Abstract: A method for retrieving images from a large image database given image and text based queries, the method comprising, with a computing system comprising one or more computing devices, which is capable of extracting a plurality of features from an image to semantically segment the image, recognizing the objects and attributes present in the image and converting the image into a semantic representation, converting the image and sketch queries into the semantic representation, mapping the semantic representation of the query into a binary code, converting the database images into the semantic representation and then mapping them to the binary codes for storage, performing a fast similarity search for the query code in the database and retrieving and displaying the most similar matches. Ref: Fig. 1
FIELD OF INVENTION
The present invention relates to technologies for efficiently retrieving images from a large database in response to an image or complex sketch based query, using a physical hardware based data source and multi-view hashing algorithms that map the queries to binary codes.
BACKGROUND OF THE INVENTION
[001] The amount of visual data such as images and videos has increased exponentially over the last few years and there is a need to develop techniques that are capable of efficiently organizing, searching and exploiting these massive collections. In order to effectively do so, a system should be capable of searching and organizing images based on complex and descriptive queries.
BRIEF DESCRIPTION OF THE DRAWINGS
[002] This disclosure is illustrated by way of example and not by way of limitation in the accompanying figures. The figures may, alone or in combination, illustrate one or more embodiments of the disclosure. Elements illustrated in the figures are not necessarily drawn to scale. Reference labels may be repeated among the figures to indicate corresponding or analogous elements.
[003] FIG. 1 is a simplified module diagram of at least one embodiment of an environment of a computing system including components for a semantic segmentation module, a multi-view hashing module and an image database as disclosed herein.
[004] FIG. 2 is a simplified schematic diagram of embodiments of the semantic segmentation module.
[005] FIG. 3 is a simplified schematic diagram of embodiments of the sketch conversion module.
[006] FIG. 4 is a simplified schematic diagram of embodiments of the image conversion module.
[007] FIG. 5 is a simplified schematic diagram of embodiments of the multi-view hashing module.
[008] FIG. 6 is a simplified flow diagram of at least one embodiment of retrieving images from the image database using the similarity search module.
[009] FIG. 7 is a simplified flow diagram of at least one embodiment of building the image database from a large set of images.
[0010] FIG. 8 is a simplified block diagram of an exemplary computing environment in connection with which at least one embodiment of the system of FIG. 1 may be implemented.
DETAILED DESCRIPTION OF THE INVENTION
[0011] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are described in detail below. It should be understood that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed. On the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
[0012] A paper titled “Multi-Modal Image Retrieval for Complex Queries using Small Codes”, authored by Siddiquie, Behjat et al., is relevant to the present patent and the contents of the same are hereby incorporated by reference. The aforesaid paper proposes a unified framework for image retrieval capable of handling complex and descriptive queries in a scalable manner. The framework supports query specification in terms of objects, attributes and spatial relationships, thereby allowing substantially complex queries. Furthermore, these queries can be specified in three different modalities – images, sketches and structured text. The framework also includes a multi-view hashing algorithm capable of mapping queries of different modalities to the same binary representation, enabling efficient and scalable retrieval of multimodal queries.
[0013] This disclosure relates to the technical field of machine learning-based image retrieval, in particular retrieving relevant or similar images based on an image or a sketch query. This is done by mapping the query to a pre-defined semantic representation. In the case of an image based query, semantic segmentation algorithms are used to extract features and map the image to the semantic representation. Similarly, sketch based queries are also converted into the semantic representation. Multi-view hashing algorithms are then used to convert the semantic representation into a binary code. The binary code based representation of the query can then be used to search, in a very efficient manner, through a large scale image database that has previously been mapped to the binary representation. The disclosed examples focus mainly on retrieval of images (such as professional and/or amateur images uploaded to the World Wide Web). However, it should be understood that these examples are illustrative only, and aspects of the disclosed techniques can be applied equally well to videos or collections of multimedia content (e.g., collections that include images, videos, and text).
[0014] Aspects of this disclosure include:
[0015] 1. Semantic Representation – may comprise binary masks of the objects and attributes present in the sketch or the image. The semantic representation encodes the objects and attributes present in the image or sketch based query as well as the spatial relationships between them.
[0016] 2. Image to Semantic – converts an image into the semantic representation. This is done by semantically segmenting the image, assigning an object or attribute label to each pixel. State-of-the-art semantic segmentation approaches are utilized to extract features from the images and assign the pixel-wise object and attribute labels.
[0017] 3. Multi-View Hashing – to map queries of different modalities to the same hash code, enabling efficient and scalable image retrieval based on multi-modal queries of images and text.
[0018] Referring now to FIG. 1, an embodiment of the computing system 100 is shown in the context of an image retrieval system 101 (e.g., a physical or virtual execution or runtime environment). The illustrative computing system 100 may include at least one data source of image data from data source 102, at least one data source of sketch data from data source 104, one or more sets of images 106, a semantic segmentation module 112 (which may include a number of subcomponents, described below), a sketch conversion module 114, an image conversion module 116, a multi-view hashing module 140, a similarity search module 150, and an image database 160 (database). The system 101 may further include one or more other requesting applications/devices/systems 170. Each of the components of the computing system 100 and their respective subcomponents may be embodied as hardware, software, a combination of hardware and software, or another type of physical component.
[0019] The semantic segmentation module 112 uses automated techniques, such as computer vision algorithms, to capture and extract objects, colors and textures from the input source image 102 during operation of the computing system 100. To extract these concepts, deep learning based features that identify semantic concepts may be used. For example, deep learning models such as Convolutional Neural Networks (CNNs), which are variants of multilayer perceptrons consisting of multiple convolutional and pooling layers followed by fully connected layers, may be used to identify semantic concepts from extracted lower level features. The illustrative data source 102 may be embodied as any hardware, software, or combination of hardware and software capable of performing the functions described herein. For instance, the data source 102 may include one or more images or a video and/or others. Alternatively or in addition, the data source 102 may include computers, computer networks, smart phones, mobile phones or camera devices, memory, storage devices, or any other types of devices capable of capturing and/or recording and/or storing and/or transmitting stored or recorded multimodal data such as audio files, digital image files, video clips, and/or other types of data files.
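By way of non-limiting illustration, the per-pixel labeling step described above might be sketched as follows. This is a minimal sketch assuming an off-the-shelf pretrained segmentation CNN from torchvision as a stand-in for the module's actual model; the category set and preprocessing belong to the pretrained network, not to the disclosure.

```python
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Minimal sketch: per-pixel object labels from a pretrained CNN.
# DeepLabV3 is used here only as a stand-in for the "deep learning
# based features" described above. (Newer torchvision versions use
# the weights= argument instead of pretrained=True.)
model = deeplabv3_resnet50(pretrained=True).eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segment(image_path):
    """Return an (H, W) tensor of per-pixel object-class indices."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)      # (1, 3, H, W)
    with torch.no_grad():
        logits = model(batch)["out"]          # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)    # (H, W) label map
```

The resulting label map is the kind of semantic segmentation that the image conversion module described below would consume.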
[0020] The illustrative semantic segmentation module 112 and each of its sub-components, submodules, and data structures may be embodied as any hardware, software, or combination of hardware and software capable of performing the functions described herein. For example, the semantic segmentation module 112 may include data acquisition and extraction routines to perform object, color and texture segmentation. The module 112 may include object extraction module(s) 210, color extraction module(s) 212, and texture extraction module(s) 214, which together provide the information used to generate a semantic segmentation 120.
[0021] The module 112 may be any computing process that runs on photographic images, videos and/or sketches, or on all of them together. The result of the compute process is information about the objects in the image, including but not limited to location (from module(s) 210), color (from module(s) 212), shape, count, segmentation and texture (from module(s) 214).
[0022] Referring now to FIG. 3, a simplified illustration of an embodiment 200 of the sketch conversion module 114 is shown. In FIG. 3, a human generated sketch data source 104 is used as an input. The sketch conversion module 114 generates a sketch semantic representation 130. The sketch semantic representation permits easy encoding of the spatial relationships between the objects in an image. The sketches are converted into Co binary masks representing each object category (whether it appears in the sketch or not), Cc masks representing each color and Ct masks representing each texture. The binary mask corresponding to object o has value 1 at pixel (i, j) if the sketch contains the corresponding object class at pixel (i, j), and similarly for colors and textures. These binary masks are then resized to d x d, leading to each sketch being represented by a vector of dimension (Co + Cc + Ct)d^2.
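As a concrete illustration of this mask-and-flatten step, the following minimal sketch converts per-pixel label maps into the (Co + Cc + Ct)d^2 vector described above; the function name, the use of three separate object/color/texture label maps, and the nearest-neighbor resize are illustrative assumptions rather than the disclosure's prescribed implementation.

```python
import numpy as np

def semantic_representation(obj_map, color_map, tex_map,
                            Co, Cc, Ct, d=32):
    """Convert per-pixel label maps into a (Co + Cc + Ct) * d^2 vector.
    Each input is an (H, W) integer label map; -1 can mark pixels with
    no label (e.g. blank sketch regions)."""
    def masks(label_map, num_classes):
        # One binary d x d mask per class, via nearest-neighbor resize.
        H, W = label_map.shape
        rows = np.arange(d) * H // d
        cols = np.arange(d) * W // d
        small = label_map[np.ix_(rows, cols)]            # (d, d)
        return np.stack([(small == c).astype(np.float32)
                         for c in range(num_classes)])   # (num_classes, d, d)

    parts = [masks(obj_map, Co), masks(color_map, Cc), masks(tex_map, Ct)]
    return np.concatenate([p.ravel() for p in parts])    # ((Co+Cc+Ct)*d*d,)
```

The same routine serves both modalities, which is what lets image and sketch queries share one representation.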
[0023] Referring now to FIG. 4, a simplified illustration of an embodiment 300 of the image conversion module 116 is shown. In FIG. 4, a semantic segmentation generated by the semantic segmentation module 112 is used as an input. The image conversion module 116 generates an image semantic representation 132. The image semantic representation, like the sketch semantic representation, permits easy encoding of the spatial relationships between the objects in an image. The semantic segmentations are converted into Co binary masks representing each object category (whether it appears in the image or not), Cc masks representing each color and Ct masks representing each texture. The binary mask corresponding to object o has value 1 at pixel (i, j) if the image contains the corresponding object class at pixel (i, j), and similarly for colors and textures. These binary masks are then resized to d x d, leading to each image being represented by a vector of dimension (Co + Cc + Ct)d^2.
[0024] Referring now to FIG. 5, a simplified illustration of an embodiment 400 of the multi-view hashing module 140 is shown. In FIG. 5, an image semantic representation or a sketch semantic representation can be used as an input. In the case of an image semantic representation, the image projection module 310 is used to project the image semantic representation to a compact binary code. In the case of a sketch semantic representation, the sketch projection module is used to project the sketch semantic representation to a compact binary code.
[0025] In order to initially learn the image projection matrix and the sketch projection matrix used in the image projection and the sketch projection modules respectively, a learning procedure with annotated images and sketches is used. We are given a set of n data points, for which we have two different modalities X = {xi}, i = 1, …, n, and Y = {yi}, i = 1, …, n. For example, X could consist of the semantic representations computed from the images and Y could be the representations computed from the corresponding sketches. In general, one can have more than two modalities. Projection matrices Wx and Wy that can convert the data into a compact binary code are learnt. The binary code hx for a feature vector x is computed as hx = sgn(xWx). Like most other hashing approaches, one learns Wx (and similarly Wy) such that very similar data points x1 and x2 are assigned the same binary codes hx1 and hx2. However, there is the additional constraint that an image xi and a sketch yj which are semantically similar should be mapped to similar binary codes hxi and hyj by Wx and Wy respectively. This is addressed by following a two stage procedure: the first stage projects the different modalities of the data to a common low dimensional linear subspace, while the second stage applies an orthogonal transformation to the linear subspace so as to minimize the quantization error when mapping this linear subspace to a binary code. A Partial Least Squares (PLS) based approach is adopted to map the different modalities of the data into a common latent linear subspace. Let X be an (n x Dx) matrix containing one modality of the training data X, and Y be an (n x Dy) matrix containing the corresponding instances from a different modality of the training data Y. PLS decomposes X and Y such that:

X = TP^T + E; Y = UQ^T + F; U = TD + H

where T and U are (n x p) matrices containing the p extracted latent vectors, the (Dx x p) matrix P and the (Dy x p) matrix Q represent the loadings, and the (n x Dx) matrix E, the (n x Dy) matrix F and the (n x p) matrix H are the residuals. D is a (p x p) matrix that relates the latent scores of X and Y. The PLS method iteratively constructs projection vectors Wx = {wx1, wx2, …, wxp} and Wy = {wy1, wy2, …, wyp} in a greedy manner. Each stage of the iterative process involves computing:

[cov(ti, ui)]^2 = max_{|wxi| = |wyi| = 1} [cov(Xwxi, Ywyi)]^2

where ti and ui are the ith columns of the matrices T and U respectively, and cov(ti, ui) is the sample covariance between the latent vectors ti and ui. This process is repeated until the desired number of latent vectors p have been determined. PLS produces the projection matrices Wx and Wy that project the different modalities of the data into a common orthogonal basis. The first few principal directions computed by PLS contain most of the covariance, hence encoding each direction with a single bit distorts the Hamming distance, resulting in poor retrieval performance. This problem can be mitigated by computing a rotated projection matrix Ŵx = WxR, where R is a randomly generated (p x p) orthogonal rotation matrix. Doing so distributes the information content across the directions in a more balanced manner, so that the Hamming distance in the binary space better approximates the Euclidean distance in the joint subspace induced by PLS. A more principled and effective approach, called Iterative Quantization (ITQ), uses an iterative optimization procedure to compute the optimal rotation matrix R that minimizes the quantization error Q, given by:

Q(H, R) = ||H - XWxR||F

where H is the (n x p) binary code matrix representing X and ||.||F denotes the Frobenius norm. We employ ITQ to modify the joint linear subspace for the multiple modalities produced by PLS and learn more efficient binary codes. The final projection matrices are given by Ŵx = WxR and Ŵy = WyR.
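As a rough illustration of this two-stage procedure, the following sketch learns the shared subspace with scikit-learn's PLSCanonical and then an ITQ-style orthogonal rotation. The variable names mirror the description above, but the code length p, the iteration count, and the helper-function interfaces are assumptions for illustration, not the disclosure's actual settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

def learn_multiview_hash(X, Y, p=64, n_iter=50, seed=0):
    """Stage 1: PLS maps both modalities (X: n x Dx image representations,
    Y: n x Dy sketch representations) to a common p-dimensional subspace.
    Stage 2: ITQ-style alternating optimization finds an orthogonal
    rotation R minimizing the quantization error ||H - T R||_F."""
    pls = PLSCanonical(n_components=p, scale=False).fit(X, Y)
    T, U = pls.transform(X, Y)                        # latent scores of X, Y
    rng = np.random.default_rng(seed)
    R = np.linalg.qr(rng.standard_normal((p, p)))[0]  # random orthogonal init
    for _ in range(n_iter):
        H = np.sign(T @ R)                    # fix R, update binary codes
        Usvd, _, Vh = np.linalg.svd(T.T @ H)  # fix H: orthogonal Procrustes
        R = Usvd @ Vh                         # optimal rotation for this H
    return pls, R, X.mean(axis=0), Y.mean(axis=0)

def encode_image(pls, R, x, x_mean):
    """Map an image semantic representation to its binary code."""
    t = (x - x_mean) @ pls.x_rotations_       # project to joint subspace
    return np.sign(t @ R)

def encode_sketch(pls, R, y, y_mean):
    """Map a sketch semantic representation to its binary code."""
    u = (y - y_mean) @ pls.y_rotations_
    return np.sign(u @ R)
```

Because both encoders share the rotation R and the joint subspace, semantically matched image/sketch pairs land on nearby binary codes, which is the cross-modal constraint discussed above.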
[0026] Referring now to Figure 6, an illustrative method for retrieving relevant images is shown. The method 600 may be embodied as computerized programs, routines, logic, and/or instructions, to be executed using hardware, firmware, software, or any combination thereof. The method receives/accesses binary codes corresponding to image or sketch based queries from the multi-view hashing module 140. At block 150, the similarity search module searches for similar binary codes in the image database 160, using fast similarity search algorithms. The retrieved images are then sent to the requesting applications and devices 170 and displayed for the user.
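One common way to realize such a fast similarity search is an exhaustive Hamming-distance scan over bit-packed codes, sketched below; the packing scheme and top-k interface are illustrative choices, not the disclosure's prescribed algorithm, and sub-linear indexes could be substituted for very large databases.

```python
import numpy as np

# Lookup table: number of set bits for each possible byte value.
POPCOUNT = np.array([bin(b).count("1") for b in range(256)], dtype=np.uint16)

def pack_codes(codes):
    """Pack {-1, +1} codes of shape (n, p) into uint8 rows of p/8 bytes
    (p is assumed to be a multiple of 8)."""
    return np.packbits(codes > 0, axis=1)

def hamming_search(packed_query, packed_db, k=10):
    """Return indices of the k database codes nearest to the query in
    Hamming distance (packed_query: (1, p/8), packed_db: (n, p/8))."""
    xor = np.bitwise_xor(packed_db, packed_query)   # differing bits per byte
    dists = POPCOUNT[xor].sum(axis=1)               # Hamming distance per row
    return np.argsort(dists)[:k]
```

For example, a query encoded by the multi-view hashing module would be packed with pack_codes and the indices returned by hamming_search used to fetch the corresponding images from the database for display.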
[0027] Referring now to Figure 7, an illustrative method for building the image database is shown. The method 700 may be embodied as computerized programs, routines, logic, and/or instructions, to be executed using hardware, firmware, software, or any combination thereof. At block 106, the method receives/accesses images from the large database that one wants to retrieve images from. In block 710, deep learning based algorithms are used to extract the objects, colors and textures present in the images and generate their semantic representations. In block 712, a set of human generated sketches is created and annotated for some of these images. In block 714, given the associated sets of images and sketches, the multi-view hashing algorithm learns the projection matrices for projecting the semantic representations of the images and the sketches. Next, the projection matrix for the images is used to project the semantic representations of all the database images into binary codes, which are stored in the image database 160.
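Putting the pieces together, an offline indexing pass might look like the following sketch. It reuses the hypothetical helpers from the earlier snippets (segment, semantic_representation, learn_multiview_hash, encode_image, pack_codes), so every name and shape here is an assumption for illustration only.

```python
import numpy as np

def build_database(image_paths, sketch_reps, sketch_image_reps,
                   Co, Cc, Ct, d=32, p=64):
    """Offline indexing: learn the hash functions from an annotated
    image/sketch subset (sketch_reps paired with sketch_image_reps),
    then encode and store every database image as a packed binary code."""
    # Block 710: semantic representations for all database images.
    image_reps = []
    for path in image_paths:
        labels = segment(path).numpy()               # per-pixel labels
        # For brevity a single label map stands in for the separate
        # object / color / texture maps used in the full system.
        rep = semantic_representation(labels, labels, labels, Co, Cc, Ct, d)
        image_reps.append(rep)
    image_reps = np.stack(image_reps)

    # Block 714: learn projections from the annotated image/sketch pairs.
    pls, R, x_mean, y_mean = learn_multiview_hash(
        sketch_image_reps, sketch_reps, p=p)

    # Final step: encode every database image and pack for compact storage.
    codes = np.stack([encode_image(pls, R, x, x_mean) for x in image_reps])
    return pack_codes(codes)
```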
EXAMPLE USAGE SCENARIOS
[0028] The components of the image retrieval system 101 have a number of different applications. Embodiments of the system 101 may enable retrieval of multimedia content using a semantic segmentation module residing in a computer, computer network, smart phone, mobile phone or camera device, memory, storage device, or any other type of device capable of capturing and/or recording and/or storing and/or transmitting stored or recorded multimodal content, e.g. digital image files, video clips or similar data files, and may further enable training of the multi-view hashing module using a set of images and sketches from an image database, such as images downloaded from the web. For instance, the image retrieval system 101 may be used by an online shopper to search a large set of clothing items for items similar to a picture or a sketch of a clothing item, to select or organize content, to proactively recommend content that may be relevant or of interest to a certain user or set of users, or by a search engine or other content delivery mechanism to rank or arrange content on a display.
IMPLEMENTATION EXAMPLES
[0029] Referring now to FIG. 8, a simplified block diagram of an exemplary computing environment 800 for the computing system 100, in which the multi-view hashing module 140 and the image database 160 may be implemented, is shown. The illustrative implementation 800 includes a computing device 810, which may be in communication with one or more other computing systems or devices 832 via one or more networks 830. The computing device 810 includes storage media 820 on which the multi-view hashing module 140 and the image database 160 are stored.
[0030] The illustrative computing device 810 includes at least one processor 812 (e.g. a microprocessor, microcontroller, digital signal processor or any such processor), memory 814, and an input/output (I/O) subsystem 816. The computing device 810 may be embodied as any type of computing device such as a personal computer (e.g., a desktop, laptop, tablet, smart phone, mobile phone, camera, wearable or body-mounted device or any such device capable of capturing and/or transmitting data), a server, an enterprise computer system, a network of computers, a combination of computers and other electronic devices, or other electronic devices. Although not specifically shown, it should be understood that the I/O subsystem 816 typically includes, among other things, an I/O controller, a memory controller, and one or more I/O ports. The processor 812 and the I/O subsystem 816 are communicatively coupled to the memory 814. The memory 814 may be embodied as any type of suitable computer memory device (e.g., volatile memory such as various forms of random access memory).
[0031] The I/O subsystem 816 is communicatively coupled to a number of components including one or more user input devices 818 (e.g., a touchscreen, keyboard, virtual keypad, microphone and such input devices), one or more storage media 820, one or more output devices 822 (e.g., speakers, LEDs, screens and other such output devices), one or more sensing devices 824, one or more camera or other sensor applications 826 (e.g., software-based sensor controls) and one or more network interfaces 828.
[0032] The storage media 820 may include one or more hard drives or other suitable data storage devices (e.g., flash memory, memory cards, memory sticks, and/or others). In some embodiments, portions of systems software (e.g., an operating system) and framework/middleware (e.g., APIs, object libraries, and the like) reside in the storage media 820. Portions of systems software or framework/middleware may be copied to the memory 814 during operation of the computing device 810, for faster processing or other reasons.
[0033] The one or more network interfaces 828 may communicatively couple the computing device 810 to a network, such as a local area network, wide area network, personal cloud, enterprise cloud, public cloud, and/or the Internet, for example. Accordingly, the network interfaces 828 may include one or more wired or wireless network interface cards or adapters, for example, as may be needed pursuant to the specifications and/or design of the particular computing system 800. The network interface(s) 828 may provide short-range wireless or optical communication capabilities using, e.g., Near Field Communication (NFC), wireless fidelity (Wi-Fi), radio frequency identification (RFID), infrared (IR), or other suitable technology.
[0034] The other computing system(s) 832 may be embodied as any suitable type of computing system or device such as any of the aforementioned types of devices or other electronic devices or systems. For example, in some embodiments, the other computing systems 832 may include one or more server computers used to store portions of the image database 160. The computing system 800 may include other components, sub-components, and devices not illustrated in FIG. 8 for clarity of the description. In general, the components of the computing system 800 are communicatively coupled as shown in FIG. 8 by electronic signal paths, which may be embodied as any type of wired or wireless signal paths capable of facilitating communication between the respective devices and components.
ADDITIONAL EXAMPLES
[0035] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
[0036] A method for retrieving images relevant to an image or a sketch based query may comprise extracting the objects, colors and textures present in the image, along with their spatial locations, using the semantic segmentation module. The semantic segmentation is further converted into an image semantic representation using the image conversion module. In the case of a sketch based query, the sketch is converted into a sketch semantic representation using the sketch conversion module. The image and sketch semantic representations are converted into binary codes using the multi-view hashing module. The image projection module is used to map the image semantic representation to a binary code, while the sketch projection module is used to map the sketch semantic representation to the binary code. Subsequently, the similarity search module is used to search for the binary code in the image database and retrieve similar images. The method may further comprise accessing a large set of images, extracting objects, colors and textures from them and converting them to the image semantic representation. An additional set of human created sketches associated with some of these images may also be used; these sketches are converted into the sketch semantic representation. The method may further comprise learning a multi-view hashing function comprising an image projection matrix and a sketch projection matrix. The projection matrices are used to map the image semantic representations of the database images to binary codes. The method may further comprise retrieving images relevant to the query from the database and displaying these relevant images as an output.
GENERAL CONSIDERATIONS
[0037] In the foregoing description, numerous specific details, examples, and scenarios are set forth in order to provide a more thorough understanding of the present disclosure. It will be appreciated, however, that embodiments of the disclosure may be practiced without such specific details. Further, such examples and scenarios are provided for illustration, and are not intended to limit the disclosure in any way. Those of ordinary skill in the art, with the included descriptions, should be able to implement appropriate functionality without undue experimentation.
[0038] References in the specification to “an embodiment” indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated.
[0039] Embodiments in accordance with the disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a “virtual machine” running on one or more computing devices). For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory.
[0040] Modules, data structures, and the like defined herein are defined as such for ease of discussion, and are not intended to imply that any specific implementation details are required. For example, any of the described modules and/or data structures may be combined or divided into sub-modules, sub-processes or other units of computer code or data as may be required by a particular design or implementation.
[0041] In the drawings, specific arrangements or orderings of schematic elements may be shown for ease of description. However, the specific ordering or arrangement of such elements is not meant to imply that a particular order or sequence of processing, or separation of processes, is required in all embodiments. In general, schematic elements used to represent instruction blocks or modules may be implemented using any suitable form of machine-readable instruction, and each such instruction may be implemented using any suitable programming language, library, application-programming interface (API), and/or other software development tools or frameworks. Similarly, schematic elements used to represent data or information may be implemented using any suitable electronic arrangement or data structure. Further, some connections, relationships or associations between elements may be simplified or not shown in the drawings so as not to obscure the disclosure.
[0042] This disclosure is to be considered as exemplary and not restrictive in character, and all changes and modifications that come within the spirit of the disclosure are desired to be protected.
WE CLAIM:
1. A method for retrieving images from a large image database given image and text based queries, the method comprising, with a computing system comprising one or more computing devices, which is capable of:
extracting a plurality of features from an image to semantically segment the image, recognizing the objects and attributes present in the image and converting the image into a semantic representation;
converting the image and sketch queries into the semantic representation;
mapping the semantic representation of the query into a binary code;
converting the database images into the semantic representation and then mapping them to the binary codes for storage;
performing a fast similarity search for the query code in the database; and
retrieving and displaying the most similar matches.
2. The method of claim 1, wherein a Convolutional Neural Network (CNN) is used to extract features from the image, the extracted features representing the objects and the color and texture attributes present in the image along with their relative spatial configurations, thereby semantically segmenting the image.
3. The method of claim 1, wherein the semantic segmentation generated from the image is converted into a pre-defined semantic representation.
4. The method of claim 1, wherein a sketch based query is converted into a pre-defined semantic representation.
5. The method of claim 1, comprising:
semantically segmenting the database images to generate the semantic representation for each image in the database;
learning a projection matrix to map the semantic representation of the images to a binary code;
learning a multi-view hashing function to map images and sketches to binary codes using an annotated set of images and sketches; and
storing all images in the database in the form of these binary codes in a compact manner.
6. The method of claim 5, wherein a multi-view hashing algorithm is used to map image and sketch based queries to a common low dimensional binary code, the binary code preserves semantic and spatial similarities of the query images and sketches.
7. The method of claim 1, wherein the semantic representation from the query image or sketch is mapped into a compact binary code.
8. The method of claim 1, wherein a fast similarity based search algorithm is used to search the image database for images similar to the binary codes obtained from the image or sketch query, and wherein these images are displayed for the user.