Method And System For Semantic Change Detection In Images

Abstract: Change detection systems process at least two images of an object or a Field of View (FoV), captured at two different time instances, and identify one or more changes to the object(s) or the scene captured in the FoV. Existing change detection systems detect changes between scenes by processing images. The disclosure herein generally relates to image processing and, more particularly, to a method and system for semantic level change detection in images. The system, by comparing a test image and a reference image of a FoV or of a specific object, captured over different time instances, identifies changes at a semantic level and communicates to the user what the detected changes are.

Patent Information

Application #: 201821026492
Filing Date: 16 July 2018
Publication Number: 03/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status: —
Email: ip@legasis.in
Parent Application: —
Patent Number: —
Legal Status: —
Grant Date: 2024-02-23
Renewal Date: —

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai - 400021, Maharashtra, India

Inventors

1. GUBBI LAKSHMINARASIMHA, Jayavardhana Rama
Tata Consultancy Services Limited, Gopalan Global Axis SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore - 560066, Karnataka, India
2. VARGHESE, Ashley
Tata Consultancy Services Limited, Gopalan Global Axis SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore - 560066, Karnataka, India
3. PURUSHOTHAMAN, Balamuralidhar
Tata Consultancy Services Limited, Gopalan Global Axis SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore - 560066, Karnataka, India

Specification

Claims:

1. A processor implemented method for change detection, comprising:
capturing, via one or more hardware processors, at least one test image of a Field of View (FOV); and
processing the at least one test image with a corresponding reference image, via the one or more hardware processors, comprising:
identifying at least one domain that matches the at least one test image and the corresponding reference image;
extracting a plurality of domain specific features at a plurality of different scales, separately from the test image and the reference image;
generating at least one feature map at each of the plurality of different scales, separately for the test image and the reference image;
transforming the feature maps to reduce number of classes from a current number to a pre-determined number, separately for the test image and the reference image;
up-sampling spatial-dimension of the feature maps to an image spatial dimension, wherein a plurality of semantic features that form a semantic feature set each corresponding to the test image and the reference image separately, are extracted separately from the test image and the reference image by means of the up-sampling;
comparing the semantic feature set of the test image with the semantic feature set of the reference image to detect at least one change between the test image and the reference image, at a coarse level and a finer level;
generating a combined learned deep representation at the plurality of different scales; and
generating a labelled change map for the test image and the reference image, based on the combined learned deep representation, wherein the labelled change map represents the at least one change at a semantic level.

2. The method as claimed in claim 1, wherein the plurality of different scales are pre-configured.
3. The method as claimed in claim 1, wherein the feature map at a scale represents one or more features extracted at that scale.
4. A system (100) for change detection, comprising:
a memory module (101) storing instructions;
one or more communication interfaces (103); and
one or more hardware processors (102) coupled to the memory module (101) via the one or more communication interfaces (103), wherein the one or more hardware processors (102) are configured by the instructions to:
capture at least one test image of a Field of View (FOV); and
process the at least one test image with a corresponding reference image, comprising:
identifying at least one domain that matches the at least one test image and the corresponding reference image;
extracting a plurality of domain specific features at a plurality of different scales, separately from the test image and the reference image;
generating at least one feature map at each of the plurality of different scales, separately for the test image and the reference image;
transforming the feature maps to reduce number of classes from a current number to a pre-determined number, separately for the test image and the reference image;
up-sampling spatial-dimension of the feature maps to an image spatial dimension, wherein a plurality of semantic features that form a semantic feature set each corresponding to the test image and the reference image separately, are extracted separately from the test image and the reference image by means of the up-sampling;
comparing the semantic feature set of the test image with the semantic feature set of the reference image to detect at least one change between the test image and the reference image, at a coarse level and a finer level;
generating a combined learned deep representation at the plurality of different scales; and
generating a labelled change map for the test image and the reference image, based on the combined learned deep representation, wherein the labelled change map represents the at least one change at a semantic level.
5. The system as claimed in claim 4, wherein said system is configured to have the plurality of different scales pre-configured.
6. The system as claimed in claim 4, wherein said system is configured to represent using the feature map at a scale, one or more features extracted at that scale.
7. The system as claimed in claim 4, wherein said system is configured to perform the change detection by using history data pertaining to change detection as a reference, wherein the system is trained based on the history data by:
providing a plurality of test image-reference image pairs as input to the system;
providing data pertaining to one or more changes identified for each of the plurality of test image-reference image pairs; and
training the system to learn based on each test image-reference image pair and the corresponding one or more changes identified, to generate one or more domain specific data models, wherein the one or more domain specific data models are used by the system, to determine:
at least one domain matching a given test image-reference image pair; and
semantic output representing change for the given test image-reference image pair.

Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:

METHOD AND SYSTEM FOR SEMANTIC CHANGE DETECTION IN IMAGES

Applicant

Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.


TECHNICAL FIELD
[001] The disclosure herein generally relates to image processing, and, more particularly, to a method and system for semantic level change detection in images.

BACKGROUND
[002] A change detection mechanism, as the name implies, is used to analyze two or more images and identify/detect change(s) between the images. Here, one of the two images being compared is a real-time input (a present image), whereas the second image could be a historical image of the same Field of View (FoV) as that of the other image, which however may have change(s) in terms of illumination, pose, image resolution and so on. The images being compared for change detection may be of the same object, or of objects in a specific FoV. Such a change detection mechanism has a variety of applications. For example, field inspection using unmanned vehicles, especially drones, has gained popularity as sophisticated drones having advanced surveillance capabilities are available for inspection. In this approach, a drone that has one or more cameras fitted to it flies over a target or a FoV, captures one or more images, and the captured one or more images are analyzed using an appropriate image processing schema to perform the change detection. This mechanism eliminates the need for performing manual inspection, which could be a cumbersome task. If there is damage, the method can quantify the progress of the damage; this is accomplished by comparing the present image with the historical image.
[003] The existing image processing schemas being used for change detection are capable of detecting change(s) at different levels. Most of the existing mechanisms identify changes in terms of pixels between two or more images being compared, and present the information to a user as the results of the change detection being performed. However, it is up to the users to further interpret the results.

SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor implemented method for change detection is provided. The method involves capturing, via one or more hardware processors, at least one test image of a Field of View (FOV); and then processing the at least one test image with a corresponding reference image, via the one or more hardware processors. Processing of the at least one test image and the corresponding reference image involves: extracting a plurality of domain specific features at a plurality of different scales, separately from the test image and the reference image; generating at least one feature map at each of the different scales, separately for the test image and the reference image; transforming the feature maps to reduce number of classes from a current number to a pre-determined number, separately for the test image and the reference image; up-sampling spatial-dimension of the feature maps to an image spatial dimension, wherein a plurality of semantic features that form a semantic feature set each corresponding to the test image and the reference image separately, are extracted separately from the test image and the reference image by means of the up-sampling; comparing the semantic feature set of the test image with the semantic feature set of the reference image to detect at least one change between the test image and the reference image, at a coarse level and a finer level; generating a combined learned deep representation at the different scales; and generating a labelled change map for the test image and the reference image, based on the combined learned deep representation, wherein the labelled change map represents the at least one change at a semantic level, provided a change exists in the test image.
[005] In another aspect, a system (100) for change detection is provided. The system (100) includes a memory module (101) storing instructions; one or more communication interfaces (103); and one or more hardware processors (102) coupled to the memory module (101) via the one or more communication interfaces (103). The one or more hardware processors (102) are configured by the instructions to capture at least one test image of a Field of View (FOV); and process the at least one test image with a corresponding reference image. Processing the at least one test image with a corresponding reference image involves: extracting a plurality of domain specific features at a plurality of different scales, separately from the test image and the reference image; generating at least one feature map at each of the different scales, separately for the test image and the reference image; transforming the feature maps to reduce number of classes from a current number to a pre-determined number, separately for the test image and the reference image; up-sampling spatial-dimension of the feature maps to an image spatial dimension, wherein a plurality of semantic features that form a semantic feature set each corresponding to the test image and the reference image separately, are extracted separately from the test image and the reference image by means of the up-sampling; comparing the semantic feature set of the test image with the semantic feature set of the reference image to detect at least one change between the test image and the reference image, at a coarse level and a finer level; generating a combined learned deep representation at the different scales; and generating a labelled change map for the test image and the reference image, based on the combined learned deep representation, wherein the labelled change map represents the at least one change at a semantic level, provided a change exists in the test image.
[006] In yet another aspect, one or more non-transitory machine readable information storage mediums comprising one or more instructions is provided. The instructions, which when executed by one or more hardware processors, cause capturing of at least one test image of a Field of View (FOV); and processing of the at least one test image with a corresponding reference image. Processing of the at least one test image and the corresponding reference image involves: extracting a plurality of domain specific features at a plurality of different scales, separately from the test image and the reference image; generating at least one feature map at each of the different scales, separately for the test image and the reference image; transforming the feature maps to reduce number of classes from a current number to a pre-determined number, separately for the test image and the reference image; up-sampling spatial-dimension of the feature maps to an image spatial dimension, wherein a plurality of semantic features that form a semantic feature set each corresponding to the test image and the reference image separately, are extracted separately from the test image and the reference image by means of the up-sampling; comparing the semantic feature set of the test image with the semantic feature set of the reference image to detect at least one change between the test image and the reference image, at a coarse level and a finer level; generating a combined learned deep representation at the different scales; and generating a labelled change map for the test image and the reference image, based on the combined learned deep representation, wherein the labelled change map represents the at least one change at a semantic level, provided a change exists in the test image.
[007] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[009] FIG. 1 illustrates an exemplary system for semantic change detection of images, according to some embodiments of the present disclosure.
[010] FIG. 2 is a flow diagram depicting steps involved in the process of performing change detection of images, using the system of FIG. 1, according to some embodiments of the present disclosure.
[011] FIG. 3 illustrates a flow diagram depicting steps involved in the process of processing a test image and a corresponding reference image, for semantic level change detection, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[012] FIG. 4 illustrates an example mode of implementation of the system of FIG. 1 for performing the semantic change detection, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
[013] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[014] Referring now to the drawings, and more particularly to FIGS. 1 through 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[015] FIG. 1 illustrates an exemplary system for semantic change detection of images, according to some embodiments of the present disclosure. In an embodiment, the system 100 includes one or more hardware processors 102, communication interface(s) or input/output (I/O) interface(s) 103, and one or more data storage devices or memory module 101 operatively coupled to the one or more hardware processors 102. The one or more hardware processors 102 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, graphics controllers, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[016] The communication interface(s) 103 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the communication interface(s) 103 can include one or more ports for connecting a number of devices to one another or to another server.
[017] The memory module(s) 101 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system 100 can be stored in the memory 101.
[018] The system 100 may be realized in the form of a parallel weight-tied network, as in FIG. 4. In this mode of implementation, the system 100 includes two parallel networks, out of which one network is used to process the test image, while the other processes the reference image, simultaneously. In this mode of implementation, both the networks learn the same features from the images being processed, which in turn makes feature comparison convenient. Further, outputs from different convolution layers of the networks are combined to facilitate capturing of coarse and finer details of objects in the images being compared. The networks shown in FIG. 4 are weight-tied networks, which means both the networks have the same number of parameters and weight values.
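By way of illustration, a minimal sketch of such a weight-tied arrangement in PyTorch (the specification names no framework; the layer counts and channel widths here are illustrative assumptions, not the architecture of FIG. 4) reuses a single branch module for both images, so that both networks necessarily share every parameter and weight value:

    import torch
    import torch.nn as nn

    class Branch(nn.Module):
        # One branch of the parallel network: convolution + max-pool blocks
        # whose outputs are tapped at several scales (cf. CP-3, CP-4, CP-5).
        def __init__(self, in_channels=3):
            super().__init__()
            self.block1 = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block2 = nn.Sequential(
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block3 = nn.Sequential(
                nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

        def forward(self, x):
            f1 = self.block1(x)   # finer scale
            f2 = self.block2(f1)  # intermediate scale
            f3 = self.block3(f2)  # coarser scale
            return [f1, f2, f3]

    class ParallelWeightTiedNetwork(nn.Module):
        # Reusing the SAME Branch instance for both images ties the weights:
        # both networks have the same number of parameters and weight values.
        def __init__(self):
            super().__init__()
            self.branch = Branch()

        def forward(self, test_img, ref_img):
            return self.branch(test_img), self.branch(ref_img)

    net = ParallelWeightTiedNetwork()
    test_feats, ref_feats = net(torch.randn(1, 3, 256, 256),
                                torch.randn(1, 3, 256, 256))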
[019] FIG. 2 is a flow diagram depicting steps involved in the process of performing change detection of images, using the system of FIG. 1, according to some embodiments of the present disclosure. The system 100 is initially trained based on ‘training data’ pertaining to semantic level change detection, using an appropriate machine learning approach such as, but not limited to, a deep learning approach, which allows the system 100 to perform change detection at a semantic level in real-time scenarios. The system 100 generates one or more data models pertaining to different domains, based on the training and the associated learning process. Further, the appropriate data models (matching a test image-reference image pair) are picked and used by the system for the semantic change detection purpose. An example of the semantic level change detection is given below:
[020] Consider that in order to inspect a particular construction site periodically, an Unmanned Aerial Vehicle (drone) is used. The drone flies over the construction site and captures one or more pictures (reference images) of the construction site and/or of specific objects in the construction site. In a different instance (maybe after a few hours/days/weeks/months, matching a defined periodicity), the drone flies to the same construction site and again takes one or more pictures (test images). Now, between the first and second instances of image capture, many changes may have happened in the construction site. For example, some structures may have been demolished, while some new structures may have been built. Also, the positions of equipment being used may have changed. By processing the test images and the corresponding reference images, the system 100 identifies one or more changes at a semantic level. Here, ‘changes at a semantic level’ refers to the capability of the system 100 to specify what has changed/what change has been identified. In the aforementioned example, considering that a structure has been newly built (between the first and second instances of taking images) near ‘building A’ in the reference image, the system 100 identifies the newly built structure, determines the ‘category of change’, and may inform the user of a ‘newly built structure adjacent to building A’, along with marking the same. Here, the ‘category of change’ is domain specific. For example, if the ‘domain’ matching the test image is ‘construction’, then the category of change may be at least one of barrier, bin, sign board, and so on. Similarly, if the domain is traffic, then some of the possible categories of changes are vehicle, road, traffic sign, and so on. Such categories are also termed ‘domain entities’, and such domain entities are semantically recognized by the system 100. In various embodiments, data pertaining to such categories may be pre-configured in one or more databases associated with the system 100, or may be dynamically configured with the system 100 by at least one user, using an appropriate user interface. In an embodiment, the system 100 may be trained to ignore certain changes while performing the semantic level change detection. For example, if a reference image was captured in winter (which means snow is visible in the image) and the test image was captured in summer (which means snow is not visible in the image), the system 100, while performing the semantic level change detection, can be configured to ignore the ‘changes’ caused due to weather/season changes. The system 100 may ignore other similar changes as ‘background changes’.
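The training described in paragraph [019] (and in claim 7) might be sketched as a single training step, assuming a model that maps a test image-reference image pair to per-pixel class logits, with a labelled change map as ground truth; the cross-entropy/SoftMax loss follows the description later in paragraph [024], while the shapes and helper names are assumptions:

    import torch.nn.functional as F

    def train_step(model, optimizer, test_img, ref_img, change_labels):
        # change_labels: (B, H, W) integer map of domain-specific change
        # categories, i.e. the "changes identified" for each test
        # image-reference image pair (claim 7).
        optimizer.zero_grad()
        logits = model(test_img, ref_img)              # (B, n_classes, H, W), assumed
        loss = F.cross_entropy(logits, change_labels)  # cross-entropy loss per [024]
        loss.backward()
        optimizer.step()
        return loss.item()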
[021] In this process, initially at least one image of a Field of View (FoV) is captured (202) as test image(s), by the system 100. The image of the FoV may capture multiple objects in the FoV. Reference image(s) corresponding to the captured at least one test image are selected (204), from a repository associated with the system 100. Further, at least one domain that matches the test image (and in turn the corresponding reference image) is determined (206) by the system 100. Determining the one or more domains may include comparing one or more features extracted from the test image with a plurality of data models pertaining to the same or different domains, and selecting one or more domains in which a match is found. For example, the domain may be construction, traffic, warehouse surveillance and so on. In an embodiment, the system 100 maintains, in the associated memory module 101, one or more data models corresponding to each domain. Upon identifying the one or more domains matching the test image, the corresponding data model(s) are selected (208) and used for further processing the test and reference images. The at least one test image and the corresponding reference image(s) are processed (210) by the system 100 to generate (212) a labelled change map which represents at least one change between each test image-reference image pair. In an embodiment, if the test and reference images are identical, then the system 100 conveys to the users that no change has been detected between the test and reference images. In various embodiments, the steps in method 200 may be performed in the order described or in a different order. In another embodiment, one or more of the steps in method 200 may be skipped.
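The flow of method 200 may be sketched as follows; every helper used below (select_reference, matches, process) is a hypothetical stand-in for a component the description names, not an interface defined by the specification:

    def detect_changes(test_image, repository, domain_models):
        # Steps 202-212 of method 200, under the assumptions noted above.
        reference_image = repository.select_reference(test_image)      # step 204
        domain = next(d for d, m in domain_models.items()
                      if m.matches(test_image))                        # step 206
        model = domain_models[domain]                                  # step 208
        change_map = model.process(test_image, reference_image)        # step 210
        if change_map is None:                                         # identical images
            print("No change detected between the test and reference images")
        return change_map                                              # step 212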
[022] FIG. 3 illustrates a flow diagram depicting steps involved in the process of processing a test image and a corresponding reference image, for semantic level change detection, using the system of FIG. 1, in accordance with some embodiments of the present disclosure. The semantic level change detection is explained by assuming that the system 100 is implemented in the form of the parallel weight-tied network as in FIG. 4.
[023] While processing a test image – reference image pair using the parallel weight-tied network, both the images are processed separately by different networks, simultaneously, to generate separate semantic feature sets for the test image and the reference image. Steps 302, 304, 306, and 308 refer to processing of the test image to generate the semantic feature set corresponding to the test image, whereas steps 310, 312, 314, and 316 cover processing of the reference image to generate the semantic feature set corresponding to the reference image.
[024] At 302, a plurality of domain specific features are extracted from the test image, by a group of convolution blocks comprising both convolutional and max-pool layers. This enables capturing image features with smaller receptive fields, which is specifically useful where large field of view images are present. In one instance, shown in FIG. 4, three convolutional layers CP-3, CP-4 and CP-5 are used. The convolutional layers could be outputs tapped from a pre-trained model, or the layers can be constructed and trained from scratch. The number of convolutional layers is ‘10’ until a spatial dimension ‘N’ is reached, wherein ‘N’ is empirically chosen. For the change detection application, ‘N’ is controlled by the reconstruction error in the de-convolutional layer. For changed object detection tasks (i.e., to detect objects changed between the test and the reference images), a higher ‘N’ is preferred, and hence ‘N’ acts as a trade-off value between change detection and changed object detection. The pooling layer helps in increasing the receptive field, and the choice is domain dependent. Once the domain is known, the pooling layer is used to down-sample the spatial dimension of the images. For instance, in a street view application, the down-sampling may be by a factor of 2, and in satellite images, the down-sampling may be by a factor of 4. This results in capturing image features at different scales, which is step 304. The feature map now captures changes at each scale, which is important in ensuring network robustness to pose and zoom variations when the images are captured. Further, at 306, the feature map at each scale is transformed/reduced from a current number of classes to a pre-determined limit/number of classes. For example, the number of classes in the street-view application is 10. The number of feature parameters resulting from step 304 is too large; in order to reduce the dimensionality, at 306, feature transformation is performed using a single convolution layer or a group of convolution layers and non-linear activation functions. For instance, in one implementation shown in FIG. 4, a 1 x 1 convolution layer is used that achieves feature mapping into a lower dimension. Using a 1 x 1 convolution layer followed by a non-linear activation function, the final change representation specific to the domain is achieved. Until step 306, the feature weights are tied to ensure similar representations are learnt. In one implementation shown in FIG. 4, the feature transformation reduces the dimension of the feature maps to the required number of classes, that is 10. This implies that 10 neurons are connected to the same receptive field, instead of connecting 512 or 1024 neurons to the same receptive field. Neurons are part of a neural network used in the weight-tied network, used for the purpose of semantic change detection. An advantage is that each neuron focuses only on the feature that is specific to a category of change. Each pixel is thereby mapped to a group of 10 neurons; that is, one of the 10 neurons gets activated for each pixel indicating change. The choice of a loss function (which is a cross entropy error function) is critical in detection of the change map. In the instance shown in FIG. 4, a SoftMax loss function is employed. At 308, the feature map at each scale is up-sampled to the original image dimension, using a de-convolution layer. At this step, the learned image representation is mapped onto the input image dimension, and a semantic feature set corresponding to the test image is generated.
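A minimal sketch of the per-scale head described above, assuming PyTorch: a 1 x 1 convolution followed by a non-linear activation reduces the feature map to the pre-determined number of classes (10 in the street-view example), and a de-convolution layer up-samples the result back to the image spatial dimension. The kernel sizes and the exact up-sampling factor are illustrative assumptions:

    import torch
    import torch.nn as nn

    NUM_CLASSES = 10  # pre-determined number of classes (street-view example)

    class SemanticHead(nn.Module):
        # Steps 306 and 308 for one scale: a 1 x 1 convolution plus a
        # non-linear activation transforms the feature map down to
        # NUM_CLASSES channels, and a de-convolution layer up-samples the
        # result back to the input image's spatial dimension.
        def __init__(self, in_channels, upsample_factor):
            super().__init__()
            self.reduce = nn.Sequential(
                nn.Conv2d(in_channels, NUM_CLASSES, kernel_size=1),
                nn.ReLU())
            self.upsample = nn.ConvTranspose2d(
                NUM_CLASSES, NUM_CLASSES,
                kernel_size=upsample_factor, stride=upsample_factor)

        def forward(self, feature_map):
            return self.upsample(self.reduce(feature_map))

    # Hypothetical usage: a 256-channel feature map at 1/8th resolution.
    head = SemanticHead(in_channels=256, upsample_factor=8)
    semantic_features = head(torch.randn(1, 256, 32, 32))  # -> (1, 10, 256, 256)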
[025] The corresponding reference image is processed in the same way (steps 310, 312, 314, and 316), and a semantic feature set corresponding to the reference image is generated. In an embodiment, for both the test image as well as the reference image, the semantic feature set is generated at each of the plurality of scales. In an embodiment, by tying weights of only the first block of the networks that are processing the test image and the reference image, the networks are made to extract the same features (from the test image and the reference image). As only the first blocks are weight-tied, the remaining blocks of the networks can freely process the extracted features, which in turn allows the networks to learn the features at different levels.
[026] The semantic feature sets generated at each scale of the test image are compared (318) with the corresponding semantic feature sets of the reference image, to achieve coarse level and finer level change detection. The coarse and finer level changes detected at each scale, at step 318, are then combined to generate (320) a combined learned deep representation. Further, from the combined change map, a non-linear activation function is used at the end of the network to classify each output neuron into one of ‘n’ classes, which generates the label and the change map, wherein the value of ‘n’ may be pre-configured (say, 10). A labelled change map is generated (322) based on the generated label and the change map, wherein the labelled change map captures at least one change at a semantic level. In various embodiments, the steps in method 300 may be performed in the order described or in a different order. In another embodiment, one or more of the steps in method 300 may be skipped.
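Steps 318 to 322 may be sketched as follows; the absolute difference is one plausible comparison operator (the description only states that the feature sets are compared), and fuse_conv stands in for whichever learned layer combines the scales:

    import torch

    def labelled_change_map(test_feats, ref_feats, fuse_conv):
        # test_feats / ref_feats: per-scale semantic feature sets, each
        # already up-sampled to the image spatial dimension (steps 308/316).
        diffs = [torch.abs(t - r) for t, r in zip(test_feats, ref_feats)]  # step 318
        combined = torch.cat(diffs, dim=1)     # combined deep representation (step 320)
        logits = fuse_conv(combined)           # e.g. nn.Conv2d(3 * 10, 10, kernel_size=1)
        probs = torch.softmax(logits, dim=1)   # non-linear activation at the network end
        labels = probs.argmax(dim=1)           # per-pixel change category (step 322)
        return labels, probs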
[027] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[028] The embodiments of the present disclosure herein address the unresolved problem of semantic change detection in images. The embodiments thus provide a mechanism for detecting and outputting changes at a semantic level, by comparing a test image and a reference image.
[029] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[030] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[031] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[032] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[033] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

Documents

Application Documents

# Name Date
1 201821026492-STATEMENT OF UNDERTAKING (FORM 3) [16-07-2018(online)].pdf 2018-07-16
2 201821026492-REQUEST FOR EXAMINATION (FORM-18) [16-07-2018(online)].pdf 2018-07-16
3 201821026492-FORM 18 [16-07-2018(online)].pdf 2018-07-16
4 201821026492-FORM 1 [16-07-2018(online)].pdf 2018-07-16
5 201821026492-FIGURE OF ABSTRACT [16-07-2018(online)].jpg 2018-07-16
6 201821026492-DRAWINGS [16-07-2018(online)].pdf 2018-07-16
7 201821026492-COMPLETE SPECIFICATION [16-07-2018(online)].pdf 2018-07-16
8 201821026492-FORM-26 [05-09-2018(online)].pdf 2018-09-05
9 201821026492-Proof of Right (MANDATORY) [06-09-2018(online)].pdf 2018-09-06
10 Abstract1.jpg 2018-09-07
11 201821026492-ORIGINAL UR 6(1A) FORM 1 & FORM 26-120918.pdf 2019-02-13
12 201821026492-OTHERS [15-04-2021(online)].pdf 2021-04-15
13 201821026492-FER_SER_REPLY [15-04-2021(online)].pdf 2021-04-15
14 201821026492-COMPLETE SPECIFICATION [15-04-2021(online)].pdf 2021-04-15
15 201821026492-CLAIMS [15-04-2021(online)].pdf 2021-04-15
16 201821026492-FER.pdf 2021-10-18
17 201821026492-US(14)-HearingNotice-(HearingDate-08-02-2024).pdf 2024-01-18
18 201821026492-FORM-26 [07-02-2024(online)].pdf 2024-02-07
19 201821026492-FORM-26 [07-02-2024(online)]-2.pdf 2024-02-07
20 201821026492-FORM-26 [07-02-2024(online)]-1.pdf 2024-02-07
21 201821026492-Correspondence to notify the Controller [07-02-2024(online)].pdf 2024-02-07
22 201821026492-Written submissions and relevant documents [22-02-2024(online)].pdf 2024-02-22
23 201821026492-FORM-26 [22-02-2024(online)].pdf 2024-02-22
24 201821026492-PatentCertificate23-02-2024.pdf 2024-02-23
25 201821026492-IntimationOfGrant23-02-2024.pdf 2024-02-23

Search Strategy

1 2021-05-0713-08-05AE_07-05-2021.pdf
2 2020-10-0911-49-01E_09-10-2020.pdf

ERegister / Renewals

3rd: 23 May 2024

From 16/07/2020 - To 16/07/2021

4th: 23 May 2024

From 16/07/2021 - To 16/07/2022

5th: 23 May 2024

From 16/07/2022 - To 16/07/2023

6th: 23 May 2024

From 16/07/2023 - To 16/07/2024

7th: 23 May 2024

From 16/07/2024 - To 16/07/2025

8th: 10 Jul 2025

From 16/07/2025 - To 16/07/2026