Abstract: The present invention provides an image processing method that is capable of accurately evaluating salient regions of a single still image without the need for any previous knowledge. A target detection framework is claimed that approximates the foreground of an image and overlaps with visually conspicuous image locations, by means of a saliency algorithm based on the image signature. The image processing method addresses the foreground separation problem and includes a saliency map generation approach using a binary, holistic “image descriptor”. This simple descriptor preferentially contains information about the foreground of an image, a property which underlies its usefulness for detecting salient image regions. The target detection algorithm predicts human fixation points better than comparable methods and requires much shorter processing time.
Field of the Invention
The present invention relates to the field of digital image processing and, more particularly, to a method for detecting significant targets in an image using saliency regions.
Background of the Invention
Conventionally, in the field of image processing, there is a well-known technology for detecting (extracting) from an image a region to which a human is expected to pay attention, or an otherwise noteworthy image region (hereinafter referred to as a salient region). Using salient region detecting technology, a saliency measure of each pixel in the image is calculated, and a saliency map indicating the saliency measure of each pixel in the image is produced.
Various prior art documents disclose different methods of finding saliency regions in a digital image. Different methods of localizing and detecting the main objects in an image are also described.
For example, salient region detecting technology can be used to detect the main subject of an image. Generally, a learning-based algorithm is used to detect the salient region. For example, a type of feature is learned and decided in advance based on data from a plurality of images used as a learning target, and the feature of each portion of target image data is extracted based on the decided feature type and the target image data used as the calculation target of a saliency measure. According to this prior technology, a saliency measure closer to human perception can be determined by considering the learning effect as a form of human experience or memory. However, the above learning-based algorithm requires a plurality of pieces of image data to be prepared in advance to obtain the learning target that serves as previous knowledge for the target image data. Therefore, the saliency measure cannot be evaluated in cases where no previous knowledge exists.
Therefore, there is a need in the art for a method and system for detecting significant targets in an image using saliency regions, to overcome the above-mentioned limitations.
Summary of the Invention
The present invention addresses the above disadvantage with an object of providing an image processing system and an image processing method which are capable of accurately evaluating the saliency measure of even a single still image without the need for any previous knowledge.
An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, one aspect of the present invention relates to a method for detecting targets using salient regions. The method includes the steps of: capturing a scene using an imaging sensor based on a region of interest; resizing the captured image and dividing it into RGB image planes; obtaining saliency regions for all the image planes by generating a saliency map in order to measure fixation points on each image plane, wherein the map of the scene expresses the saliency of all locations in the image plane and the strongest location in the generated map corresponds to the most salient location with respect to its neighbourhood; separating the foreground of the image to suppress the effect, on the obtained saliency map, of one or more foreground objects that occupy a majority of the boundary region; combining the foreground mask and the saliency map to find the mean of all feature maps in the saliency regions over the whole image; filtering the combined saliency map with a Gaussian kernel and resizing it to the original image resolution; finding the perceptual distance of the filtered output to confirm the detected targets/objects in the input image; and displaying the detected targets/objects of the input image on a display unit.
Another aspect of the present invention relates to a system for detecting targets using salient regions. The system includes an imaging sensor configured to capture the scene, wherein the imaging sensor comprises an optical camera; an image capturing module configured to capture a scene with the imaging sensor based on an ROI (region of interest); a saliency map generation and foreground separation module configured to obtain feature maps based on color, orientation, texture, motion and depth of the objects/targets; a map combining module configured to find the mean of all feature maps in the saliency regions over the whole image; an image filtering module configured to filter the combined saliency map with a Gaussian kernel; and a display module configured to display the detected targets/objects on a screen, wherein the saliency map generation module addresses the figure-ground separation problem using a binary, holistic “image descriptor”, i.e., the sign function of the Discrete Cosine Transform of an image.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Brief description of the drawings
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
Figure 1 illustrates an exemplary block diagram indicating functional modules of the disclosed target detection system using salient regions in accordance with embodiments of the present invention.
Figure 2 illustrates an exemplary process flow diagram of the disclosed target detection process in accordance with embodiments of the present invention.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure. Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
Detailed description of the invention
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic is intended to provide.
Figs. 1 through 2, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged communications system. The terms used to describe various embodiments are exemplary; they are provided merely to aid understanding of the description, and their use and definitions in no way limit the scope of the invention. The terms first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless explicitly stated otherwise. A set is defined as a non-empty set including at least one element.
The present invention describes a computer-based implementation that allows automatic detection of salient parts of image information. Another embodiment separates the foreground using block-wise processing. This combined model is used to develop a system for detecting targets of interest in digital images. An aim of the invention is that the distance between images induced by the image signature should closely match human perceptual distance.
A first attribute concerns the higher-order statistical analysis of image information to compute saliency. In this step, grayscale or individual color image planes are used to obtain saliency maps. A map is generated by applying the sign function to the discrete cosine transform of each image plane; these maps are then combined into a single signature saliency map of the scene, as sketched below.
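A minimal sketch of this per-plane signature map in Python, assuming SciPy's standard DCT routines; the blur width and function name are illustrative choices rather than parameters fixed by the description:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def signature_saliency(plane, sigma_frac=0.05):
    """Saliency map of one image plane from its image signature.

    plane: 2-D float array (a grayscale image or one color plane).
    sigma_frac: Gaussian blur width as a fraction of the plane width
    (an illustrative default, not prescribed by the description).
    """
    signature = np.sign(dctn(plane, norm='ortho'))  # binary, holistic image descriptor
    recon = idctn(signature, norm='ortho')          # reconstruct from the signature alone
    raw = recon * recon                             # element-wise dot product of the IDCT result
    return gaussian_filter(raw, sigma_frac * plane.shape[1])  # smooth the raw map
```

The per-plane maps returned by such a function are then combined, for example by averaging, into the single signature saliency map of the scene.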
Another attribute discloses the detection of extended but interrupted contours within the image information that can contribute to image saliency. A further aspect relates to improving the computation of saliency for target detection by filtering the saliency maps.
Another attribute relates to the ability of the method to provide specific feedback on how to improve the saliency of specific objects or locations in the scene. This element uses a distance function to select the image regions of the targets.
Another element discloses foreground detection of the input image by block-level processing. Block-wise processing involves finding the summation of the absolute difference between each block and its transpose. The processed blocks are then normalized and converted into a binary image. This ensures the foreground separation used to find the targets.
Embodiments of the present disclosure include various steps, the steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, firmware and/or by human operators.
Embodiments of the present disclosure may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, semiconductor memories such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present disclosure with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (or one or more
processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the disclosure could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
Although the present disclosure has been described with the system and method for target detection using saliency maps it should be appreciated that the same has been done merely to illustrate the disclosure in an exemplary manner and any other purpose or function for which the explained structure or configuration can be used, is covered within the scope of the present disclosure.
Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this disclosure. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this disclosure. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named.
Aspects of the present disclosure relate to a system and method for automatic target detection and identification of target regions in a scene. Figure 1 shows a conceptual diagram of the target detection system using saliency maps. With reference to Figure 1, the image capture module 1 consists of an imaging sensor which takes snapshots of a scene and an acquisition unit 2 which receives the images; these images are then fed to the saliency map generation and foreground separation block 3 for processing. After the saliency maps are normalized and overlapped with the foreground mask, they can easily be combined into a single map 4. Next, the image filtering block 5 uses correlation to filter the combined saliency map, and the display 7 is used to show the outcome of the entire process. Although the exemplary embodiment shows a complete view of the target detection system, one of its key features is that the process is feasible and its software is reliable and robust.
One embodiment of the present invention provides an imaging sensor wherein the sensors comprise optical cameras; the sensor is a digital area scan camera 1. The digital area scan camera typically uses a linear array of Charge Coupled Devices (CCDs) to build up a series of single-pixel lines, thereby creating a final image. In addition, the digital area scan camera provides exceptional resolution, allowing the target detection system to consider the fine details of a scene. The camera or imaging sensor may be mounted on a fixed or movable platform. The image acquisition unit acquires the image data and selects or crops the image based on the ROI (region of interest) with respect to where the camera is mounted.
Further, the present invention system includes a saliency map generation and foreground separation block 3, which determines a two-dimensional map that encodes salient objects in a visual environment. The map of the scene expresses the saliency of all locations in the image. This map is the result of competitive interactions among feature maps for image features including color, orientation, texture, motion, depth, and so on, which interact within and across each map. At any time, the currently strongest location in the saliency map corresponds to the most salient location with respect to its neighbourhood. By default, the system directs attention towards the most salient region. As discussed earlier, the reconstructed image (the inverse DCT of the sign function of the discrete cosine transform of an image) detects spatially sparse signals embedded in spectrally sparse backgrounds. The saliency map formed from the reconstruction greatly overlaps with regions of human overt attentional interest, measured as fixation points on an input image.
Further, the present invention system includes the saliency map generation and foreground separation block 3, which separates the foreground in an image. This procedure uses block-wise operations to perform foreground detection. In this embodiment, the absolute difference between each block and its transpose is found. The mean of the absolute difference is then taken, and normalization is applied across the entire image (all blocks). The image is then binarized, and morphological operations are applied to obtain the foreground mask of the input image, as sketched below.
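A minimal sketch of this block-wise foreground separation, assuming square blocks, a simple global threshold, and morphological opening/closing; the block size and threshold are illustrative, and since the description mentions both the sum and the mean of the absolute difference, the sketch uses the per-block sum:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def foreground_mask(gray, block=16, thresh=0.5):
    """Binary foreground mask from block-wise asymmetry.

    gray: 2-D float array (grayscale input image).
    block: side length of the square blocks (illustrative).
    thresh: binarization threshold on the normalized map (illustrative).
    """
    h = gray.shape[0] - gray.shape[0] % block   # crop to a whole number of blocks
    w = gray.shape[1] - gray.shape[1] % block
    score = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = gray[i:i + block, j:j + block]
            # replace each block by the summed absolute difference with its transpose
            score[i:i + block, j:j + block] = np.abs(b - b.T).sum()
    score = (score - score.min()) / (score.max() - score.min() + 1e-12)  # normalize
    mask = score > thresh                        # binarize the image
    return binary_closing(binary_opening(mask))  # morphological clean-up
```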
Further, the present invention system includes combining the saliency maps 4, where the saliency value is computed as the mean (or sum) pixel intensity of the object region in the saliency map of the original image. This block normalizes the features to extract salient image locations from the raw center-surround maps and to discard inconspicuous locations. This process may be critical to the operation of the system. This operation follows the flow chart of Figure 2. Each feature map is first normalized to a fixed dynamic range, such as between 0 and 1. This may eliminate feature-dependent amplitude differences due to different feature extraction mechanisms. Image filtering 5 is then applied to the obtained saliency map. Finally, the foreground mask and saliency map are combined to accomplish the target detection.
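A minimal sketch of this normalize-then-combine step; the small epsilon guard against flat maps is an illustrative detail:

```python
import numpy as np

def combine_maps(maps):
    """Combine per-feature saliency maps into one master map.

    maps: list of 2-D arrays of equal shape (e.g. one signature map per
    color plane). Each map is first normalized to the fixed dynamic
    range [0, 1] to remove feature-dependent amplitude differences,
    then the normalized maps are averaged.
    """
    normed = [(m - m.min()) / (m.max() - m.min() + 1e-12) for m in maps]
    return np.mean(normed, axis=0)
```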
In image filtering, the combined map is convolved with a large difference-of-Gaussians kernel and the result is added to the current contents of the map. This additional input implements the short-range excitation and long-range inhibition processes between neighbouring visual locations. Highlighted parts of the image represent the targets, and these will be shown on the display 6.
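A difference-of-Gaussians convolution can be realized as the difference of two Gaussian blurs; the sketch below follows that reading, and all widths and gains are illustrative constants rather than values fixed by the description:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(saliency, sigma_ex=2.0, sigma_in=25.0, c_ex=0.5, c_in=1.5):
    """Apply one difference-of-Gaussians pass to the combined map.

    The narrow Gaussian models short-range excitation, the broad one
    long-range inhibition; the difference is added back to the current
    contents of the map and clamped at zero.
    """
    excite = c_ex * gaussian_filter(saliency, sigma_ex)
    inhibit = c_in * gaussian_filter(saliency, sigma_in)
    return np.clip(saliency + excite - inhibit, 0.0, None)
```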
The exact details of the saliency flow are shown in Figure 2. First, the input image is captured from the image sensor 11, and the input color image is resized to a desired map width 12 (e.g. a coarse 64 × 48 pixel representation). Then each color channel 13 is processed: the saliency map is formed from the image reconstructed from the image signature. The image signature is created by applying the sign function to the Discrete Cosine Transform (DCT) of an image plane 14, 15. Further, the inverse Discrete Cosine Transform (IDCT) is performed 16. The outcome of the IDCT is processed to obtain the saliency map by applying an element-wise dot product to the IDCT result 17. This process is followed for each image plane to obtain saliency regions for all image planes 18. The above flow ensures the saliency region generation of an input image in all color planes.
The obtained saliency regions correspond to feature maps for image features including color, orientation, texture, motion, depth, and so on. The combining of all saliency regions 19 is attained by finding the mean of the pixel intensity of the object region in the saliency maps of the original image planes. Further, image filtering is performed on the combined saliency regions using a Gaussian kernel 20. The final saliency region is resized to the original size by applying the bicubic interpolation method 21, as sketched below.
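Putting the flow of Figure 2 together, a minimal end-to-end sketch might look as follows. It reuses signature_saliency() and combine_maps() from the earlier sketches, uses order-3 spline interpolation as a stand-in for bicubic resampling, and follows the 64-pixel coarse width given above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def detect_saliency(rgb, map_width=64):
    """Sketch of the saliency flow of Figure 2 for an H x W x 3 image."""
    h, w, _ = rgb.shape
    scale = map_width / w
    small = zoom(rgb.astype(float), (scale, scale, 1), order=3)    # resize input 12
    maps = [signature_saliency(small[:, :, c]) for c in range(3)]  # per color plane 13-18
    combined = combine_maps(maps)                                  # combine regions 19
    smoothed = gaussian_filter(combined, 3.0)                      # Gaussian kernel 20 (width illustrative)
    # bicubic-style (order-3) interpolation back to the original resolution 21
    return zoom(smoothed, (h / smoothed.shape[0], w / smoothed.shape[1]), order=3)
```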
The heatmap overlay 22 highlights the prominent targets in a particular color map/patch. This is done by comparing the perceptual distance between the image signature descriptor of the original image and that of the modified image to obtain accurate maps. This distance is a sensitive one when images share a background, as they do in the case of a change-blindness pair: the distance between the descriptors should be related to the extent of the difference in their salient, or foreground, regions.
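The perceptual distance between two image signatures can be read as a normalized Hamming distance between their sign patterns; that reading is an assumption of this sketch, since the description does not spell out the metric:

```python
import numpy as np
from scipy.fft import dctn

def signature_distance(img_a, img_b):
    """Distance between two equally sized grayscale images via their
    binary image signatures (sign of the DCT). The fraction of
    disagreeing signs serves as the perceptual distance here."""
    sig_a = np.sign(dctn(img_a, norm='ortho'))
    sig_b = np.sign(dctn(img_b, norm='ortho'))
    return float(np.mean(sig_a != sig_b))  # fraction of disagreeing signs
```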
Another process flow shown in Fig. 2 separates the background and foreground. This is a block-level process for finding an accurate foreground. First, segment the image into equal-size blocks 23 and find each block's transpose 24. Next, find the absolute difference 25 between each individual block and its transpose, and replace each block's pixels with the summation of all pixels in that particular block 26. Normalize the image 27 and convert it into a binary image 28. Then apply morphological operations to attain the foreground mask 29. Finally, merge the foreground mask and the heatmap overlay map 30; this process ensures accurate object/target regions 31. The final output of the salient map represents the targets in the input image, which will be shown on the display 32.
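The final merge step 30 can be read as masking the heatmap with the binary foreground; the element-wise product below is one plausible reading, since the text does not fix the exact operator:

```python
def merge_targets(heatmap, mask):
    """Keep salient responses only inside the detected foreground.

    heatmap: 2-D float saliency/heatmap overlay (step 22).
    mask: 2-D binary foreground mask of the same shape (step 29).
    """
    return heatmap * mask  # zero out responses outside the foreground
```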
Those skilled in this technology can make various alterations and modifications without departing from the scope and spirit of the invention.
Therefore, the scope of the invention shall be defined and protected by the following claims and their equivalents.
FIGS. 1-2 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. FIGS. 1-2 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used
as the plain-English equivalents of the terms “comprising” and “wherein,” respectively.
We Claim:
1. A method for detecting targets using salient regions, the method
comprising:
capturing a scene using an imaging sensor based on region of interest;
resizing the captured image and dividing the same into RGB image planes;
obtaining saliency regions for all the image planes by generating a saliency map in order to measure fixation points on the image plane, wherein the map of the scene expresses the saliency of all locations in the image plane, and the strongest location in the generated map corresponds to the most salient location with respect to its neighbourhood;
separating the foreground of the image to suppress the effect, on the obtained saliency map, of one or more foreground objects that occupy a majority of the boundary region;
combining the foreground mask and saliency map to find the mean of all feature maps in saliency regions all over the image;
filtering the combined saliency map with a Gaussian kernel and resizing it to the original image resolution; finding the perceptual distance of the filtered output to confirm the detected targets/objects in the input image; and displaying the detected targets/objects of the input image.
2. The method as claimed in claim 1, wherein the step of retrieving the
saliency regions comprising:
finding the discrete cosine transform of each image plane and then applying the sign function to the transformed image plane, further finding the inverse discrete cosine transform and applying a dot product to the outcome of the inverse transform.
3. The method as claimed in claim 1, wherein the step of detecting the foreground mask from the input image is based on block-wise processing of the original image.
4. The method as claimed in claim 1, wherein the step of combining the saliency regions of all image planes comprises finding the mean of pixel intensity of the object region in the saliency map of the original image planes.
5. The method as claimed in claim 1, wherein the saliency regions of all image planes combined by finding the mean of pixel intensity of the object region in the saliency map of the original image planes.
6. The method as claimed in claim 1, wherein the step of filtering further comprising:
merging foreground mask and heatmap overlay map to detect the region of the targets based on concentrated color map.
7. A system for detecting targets using salient regions, the system
comprising:
an imaging sensor configured to capture the scene, wherein the imaging sensor comprises an optical camera;
an image capturing module configured to capture a scene by the imaging sensor based on an ROI (region of interest);
a saliency map generation and foreground separation module configured to obtain feature maps based on color, orientation, texture, motion and depth of objects/targets;
a map combining module configured to find the mean of all feature maps in saliency regions all over the image;
an image filtering module configured to filter the combined saliency map with Gaussian kernel; and
a display module configured to display the detected targets/objects on the screen,
wherein the saliency map generation module addresses the figure-ground separation problem using a binary, holistic “image descriptor”, i.e., the sign function of the Discrete Cosine Transform of an image.
8. The system as claimed in claim 7, wherein the image filtering module is configured to perform correlation on the combined saliency regions using a Gaussian kernel, which highlights the more prominent regions, and wherein the image filtering module is further configured to resize the filtered salient regions to the original size of the input image.