Abstract: The present invention relates to methods and systems for processing at least one live image of an object, the method comprising capturing the at least one live image of the object, via an electronic device (100), and identifying a base portion of the live image, said base portion being a part of said object. The method further includes rendering a mesh to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen (110) of the electronic device (100); and overlaying at least one object filter on the rendered mesh, wherein the mesh hides at least one segment of the at least one object filter when the at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the at least one live image. Fig. 1
FIELD OF THE INVENTION
The present invention generally relates to the field of augmented reality, and more particularly, to methods and systems for processing a live image of an object to facilitate real-time visualization of object filters on the live image.
BACKGROUND
This section is intended to provide information relating to the field of the invention and thus any approach/functionality described below should not be assumed to qualify as prior art merely by its inclusion in this section.
With the rapid growth of social networking applications, the sharing of media content on social networks, including images, videos, live videos, etc., has also increased. Users typically wish to enhance the appearance of their media content before sharing it with other users. To facilitate this, many social networking applications allow users to add filters or overlay objects to images and videos. Typically, users of social networking applications modify images/videos by adding filters or overlay objects to a specific portion of the image/video. For instance, in the user’s own image, the user may wish to apply an overlay object such as a hat to their head in the image. In order to apply filters/overlays to these base objects (such as the face in this example), it is essential to first detect and identify these objects in the image. Typically, base objects in an image/video are detected by using techniques such as face-point recognition, average 3-D face, etc., which provide the location and scale of the base objects in the image/video. Subsequently, the overlay objects are overlaid on the detected base object at the appropriate location over the image/video frame. These overlay objects therefore form a top layer over the image/video frame. A critical drawback of this methodology is that the overlay objects remain completely visible even when the position or orientation of the base object changes in a way that should partially hide them. For instance, in the above hat example, when the orientation of the user’s face changes in the video (i.e. the user turns left or right), the position of the hat does not automatically change in accordance with this change in orientation.
SUMMARY OF THE INVENTION
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In view of the drawbacks and limitations of the prior art systems, one object of the present invention is to provide systems and methods for image processing that facilitate implementing overlay filters on images and videos while maintaining the illusion of the objects’ existence in the real world. Another object of the invention is to provide systems and methods for image processing that provide an enhanced user experience.
In view of these and other objects, one aspect of the invention relates to a method for processing at least one live image of an object, the method comprising capturing the at least one live image of the object, via an electronic device, and identifying a base portion of the at least one live image, said base portion being a part of said object. The method further includes rendering a mesh to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen of the electronic device. Subsequently, at least one object filter is overlaid on the rendered mesh, wherein the mesh hides at least one segment of the at least one object filter when the at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the at least one live image.
Another aspect of the invention relates to a portable electronic device for processing at least one live image of an object, the device comprising a camera to capture the at least one live image of the object, and a tracking module coupled to the camera for identifying a base portion of the at least one live image, said base portion being a part of said object. The device also comprises an overlay module coupled to the camera and the tracking module, said overlay module configured to render a mesh to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen, and to overlay at least one object filter on the rendered mesh. This mesh hides at least one segment of the at least one object filter when said at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the live image.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components. The connections between the sub-components of a component have not been shown in the drawings for the sake of clarity; therefore, all sub-components shall be assumed to be connected to each other unless explicitly stated otherwise in the disclosure herein.
Fig. 1 illustrates a block diagram of a system for processing a live image of an object, in accordance with exemplary embodiments of the present disclosure.
Fig. 2 illustrates a method for processing a live image of an object, in accordance with exemplary embodiments of the present disclosure.
Fig. 3 illustrates the mesh as used in accordance with exemplary embodiments of the present disclosure.
Fig. 4 illustrates a first exemplary live image processed in accordance with exemplary embodiments of the present disclosure.
Fig. 5 illustrates a second exemplary live image processed in accordance with exemplary embodiments of the present disclosure.
Fig. 6 illustrates a third exemplary live image processed in accordance with exemplary embodiments of the present disclosure.
The foregoing shall be more apparent from a more detailed description of the invention below.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, that embodiments of the present invention may be practiced without these specific details or with additional details that may be obvious to a person skilled in the art. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
As used herein, a “social networking application” refers to a mobile or web application used for the purpose of social networking wherein users can interact with each other by means of text, audio, video or a combination thereof. In a preferred embodiment, the social networking application is an instant messaging application. The social networking application provides many other features such as viewing and sharing media content, reading news, playing games, shopping, making payments, and any other features as may be obvious to a person skilled in the art.
As used herein, a “portable electronic device” refers to any electrical, electronic, electromechanical or computing device. A portable electronic device may include but is not limited to a mobile phone, a smartphone, a laptop, a personal digital assistant, a tablet computer, a general purpose computer, or any other computing device as may be obvious to a person skilled in the art.
As used herein, “connect”, “configure”, “couple” and their cognate terms, such as “connects”, “connected”, “configured” and “coupled”, may include a physical connection (such as a wired/wireless connection), a logical connection (such as through the logical gates of a semiconducting device), other suitable connections, or a combination of such connections, as may be obvious to a skilled person.
The present invention is directed to systems and methods for enhancing overlay filters or augmented reality on media items such as live images in a social networking application stored or residing on a portable electronic device. As used herein, a “live image” refers to
a live (i.e. real-time) camera stream or a live video stream, said live image being captured using an image/video capturing device such as a camera. A live image may include one or more real-world objects, for instance, a human figure. The object in the live image may include one or more “base portions” over which an object filter is to be applied or overlaid. For instance, where the live image includes a human figure, the base portion may be the face of the user.
As used herein, “object filters” or “overlay filters” refer to media objects such as 3D models, images, emoticons, GIFs, doodles, etc. that may be overlaid on a live image such that they partially cover the one or more base portions in the live image. In a preferred embodiment, the overlay filter or item is smaller in size as compared to the base media item. “Object filters” and “overlay filters” have been used interchangeably throughout the description and may be construed accordingly.
In order to obviate the prior art problem of complete rendering of the object filters even when the position or orientation of the “base portion” changes, the present invention provides a unique invisible culling mask/mesh implemented over the live image. Such a mask orients itself with respect to the one or more base portions in the live image and prevents anything behind the mask from being rendered. This gives the effect that the object filter (which is a virtual object) and the real-world object co-exist in the same space, i.e. as if the object in the live image (for instance, the user’s face) is covered by the object filter (for instance, a hat) in reality.
As shown in Fig. 1, the system 100 for processing a live image of an object comprises a camera 102, a tracking module 104, an overlay module 106, a processor 108, a display 110 and a memory 112, wherein all the components of the system are coupled to/interconnected with each other. In an embodiment, the system 100 includes a combination of software components and hardware components, wherein the hardware components are embedded in the portable electronic device and the software components are executed at the portable electronic device. While Fig. 1 depicts various separate components, it will be appreciated that the tracking module 104, the overlay module 106 and the processor 108 may be integrated together in an electronic package or chip.
The camera 102 is configured to capture at least one live image of an object, wherein the camera may be a camera integrated in the portable electronic device or a custom camera of the social networking application. The camera 102 is configured to provide the live image to the tracking module 104, which is configured to identify a base portion of the at least one live image over which an object filter is to be applied, wherein said base portion is a part of said object in the live image. In an embodiment, the base portion in the live image over which the object filter is to be applied is a face of the user, wherein such face in the live image is detected using techniques such as face-point recognition or the average face model. Detection of the base portion in the live image is explained further in detail with reference to Fig. 2. The tracking module 104 is further configured to provide the information relating to the identified base portion to the overlay module 106.
The overlay module 106 is configured to render a mesh/mask to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen 110. The overlay module 106 is further configured to overlay at least one object filter on the rendered mesh, wherein the mesh hides at least one segment of the at least one object filter when said at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the live image. In an embodiment, the overlay module 106 comprises a render mask module and a render filter module, wherein the render mask module is configured to render the mesh and the render filter module is configured to render the object filters. In a preferred embodiment, the overlay module 106 is configured to enable the depth test and disable colour render, thereby making the mesh invisible to the user. The ‘depth test’ refers to the comparison of the z-depth of each pixel rendered on the display screen, and ‘colour render’ refers to the ability to render colour to each pixel rendered on the display screen. Since colour render is disabled for the mask/mesh, no colour is rendered on the mesh.
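By way of illustration only, this enable-depth-test/disable-colour-render combination maps directly onto standard graphics APIs such as OpenGL. The following is a minimal sketch of such a two-pass occlusion render; the draw_face_mesh and draw_object_filter helpers are hypothetical placeholders, since the disclosure does not fix a particular rendering back-end:

```cpp
#include <GL/gl.h>

// Hypothetical draw helpers standing in for the render mask module and
// the render filter module respectively; real draw calls would go here.
void draw_face_mesh()     { /* issue draw calls for the occlusion mesh */ }
void draw_object_filter() { /* issue draw calls for e.g. the spectacles */ }

void render_frame() {
    // Pass 1: draw the mask with colour writes disabled but depth writes
    // enabled. Nothing visible is produced, yet the depth buffer now
    // records the surface of the base portion (the face).
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_face_mesh();

    // Pass 2: restore colour writes and draw the object filter. Any
    // fragment of the filter that lies behind the mask fails the depth
    // test and is discarded, hiding that segment.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    draw_object_filter();
}
```

Because the mask writes depth but no colour, it never appears on screen, yet any filter fragment deeper than the mask surface is culled, which is exactly the invisible-mesh behaviour described above.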
The display screen/module 110 is configured to display the overlaid object filter on the live image of the object, to the user. In an embodiment, the display screen is the integrated display screen of the personal electronic device. The display screen may include but is not limited to a Cathode ray tube display (CRT), Light-emitting diode display (LED), Electroluminescent display (ELD), Electronic paper, E Ink, Plasma display panel (PDP), Liquid crystal display (LCD), High-Performance Addressing display (HPA), Thin-film
transistor display (TFT), Organic light-emitting diode display (OLED), or any other display as may be obvious to a person skilled in the art. The display module 110 is further configured to receive user input in the form of touch, pressing a button on the portable electronic device, etc.; and to display data/information to the user. The display module 110 is configured to accept one or more inputs from the user relating to the application of one or more object filters to the live image, wherein inputs may include the type of object filter, the object filter itself, the position of the object filter, the one or more base portions over which said object filter is desired to be placed or overlaid, etc. The display module 110 is further configured to provide said inputs to the overlay module 106.
The memory 112 is configured to store the live images captured by the camera 102, the base portions identified by the tracking module 104, the one or more object filters applied by the overlay module 106, the live images with overlaid object filters, etc. The memory 112 is also configured to store a collection of object filters that may be provided to the user or recommended to the user via the display 110.
The invention encompasses a system 100 that resides within a portable electronic device, wherein the memory 112 and the display 110 may be a memory and display of the electronic device respectively. In this case, the memory of the portable electronic device also stores the social networking application. The invention also encompasses a system 100 that resides as a separate entity, i.e. outside the portable electronic device.
The portable electronic device as used herein may be a specialized computing device. For example, the portable electronic device may be a video infrastructure or audio infrastructure device that is optimized for services such as image/video capturing, etc. The portable electronic device may include a processor, a speaker/microphone coupling, a keypad, a display/touchpad, non-removable memory, removable memory, a power source, and other peripherals.
The processor 108 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), any other type of integrated circuit (IC), a state machine, and the
like. The processor 108 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the portable electronic device to operate in a wireless environment.
The non-removable memory may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
Figure 2 illustrates an exemplary method for processing a live image of an object, in accordance with exemplary embodiments of the present disclosure. The method includes, at step 202, capturing at least one live image of the object via the electronic device. In an embodiment, the live image is captured by the camera of the electronic device. The invention encompasses capturing a live image of an object in response to a user request to capture the image, wherein said user request may be received at the display screen 110 of the system 100 in the form of a touch, press of a key, voice command, etc. This user request may be received when the user is accessing the social networking application. For instance, the user, when using a social networking application, may press/touch the camera button in the application to initiate capturing of a live image of himself. The camera 102 captures the live image of the user and displays the same on the display screen 110, wherein this live image includes the user’s face, shoulders, etc.
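Purely as an illustrative sketch, and not the claimed implementation, such a live camera stream could be captured and displayed with a library like OpenCV:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // open the default (front) camera
    if (!cap.isOpened()) return 1;        // camera unavailable

    cv::Mat frame;
    while (cap.read(frame)) {             // grab live frames in real time
        cv::imshow("live image", frame);  // show the stream on the display
        if (cv::waitKey(1) == 27) break;  // ESC stops the capture loop
    }
    return 0;
}
```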
Next, at step 204, a base portion of the at least one live image is identified, wherein said base portion is a part of said object. The identification of said one or more base portions includes identifying/detecting the scale, position and orientation of these portions with reference to the object in the live image. The invention encompasses automatically identifying one or more base portions of the object in the live image. For instance, when the user’s image is captured, the invention identifies base portions such as the face, shoulders, eyes, etc. The invention also encompasses identifying a particular base portion of the object based on a user input, wherein the input may be received at the display screen 110 in the form of a touch/press, etc. on the particular base portion displayed on the display screen 110. For instance, when the user’s image is captured, the user may press/touch on
his face in the live image on the display screen 110 to indicate that he wishes the system to identify this particular base portion.
For instance, in the above example when a user’s live image is captured, the face of the user (base portion) may be identified in this step. In an embodiment, where the face of the user is required to be identified in this step, any face recognition technique may be used, such as Principal Component Analysis (PCA), Kernel PCA, Weighted PCA, Linear Discriminant Analysis (LDA), Kernel LDA, Semi-supervised Discriminant Analysis, Independent Component Analysis (ICA), Neural Network based methods, Multidimensional Scaling (MDS), etc.
Identifying a face of a user may include face detection, feature extraction (such as eyes, ears, etc.), and face classification (based on shape, such as round, oval, etc.; based on orientation, such as side view, front view, etc.).
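As one illustrative possibility (the disclosure does not mandate any particular detector), OpenCV’s Haar-cascade face detector returns bounding boxes giving the location and scale of each face in a frame:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect faces (base portions) in a frame; each returned rectangle
// gives the position and scale of one detected face.
std::vector<cv::Rect> detect_faces(const cv::Mat& frame) {
    // The Haar-cascade model file ships with standard OpenCV installs.
    static cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY); // detector expects grayscale
    cv::equalizeHist(gray, gray);                  // normalize contrast first
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces);
    return faces;
}
```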
Next, at step 206, a mesh is rendered to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen of the electronic device. The mesh may be a pre-created mesh or a dynamically created mesh. Before or after rendering the mesh, the mesh may be appropriately scaled to the scale of the base portion of the object in the at least one image. For instance, in the above example, an invisible mesh is rendered on the face of the user, wherein the mesh covers the entire face of the user in the live image. This mesh is, however, invisible to the user on the display screen 110. This mesh is scaled appropriately so as to cover the entire face of the user, wherein scaling includes increasing or decreasing the size of the mesh, altering the shape of the mesh, etc. The mesh comprises a plurality of pixels that automatically forms a shape based on the contours of the base portion, i.e. the shape of the mesh is altered so that the contour of the mesh exactly or most closely matches the contours of the base portion (i.e. the face of the user in this example). This is illustrated in Fig. 3 of this disclosure.
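A simplified sketch of this scaling step is given below; the Mesh structure and its fields are hypothetical, standing in for whatever mesh representation an implementation chooses:

```cpp
#include <opencv2/core.hpp>

// Hypothetical mesh representation; the disclosure does not fix one.
struct Mesh {
    float x, y;          // screen-space position of the mesh centre
    float width, height; // current extent of the mesh
};

// Scale and position the invisible mesh so that it covers the base
// portion detected by the tracking module (here, a face bounding box).
void fit_mesh_to_face(Mesh& mesh, const cv::Rect& face) {
    mesh.x = face.x + face.width * 0.5f;
    mesh.y = face.y + face.height * 0.5f;
    mesh.width  = static_cast<float>(face.width);
    mesh.height = static_cast<float>(face.height);
    // A full implementation would additionally deform the mesh vertices
    // to follow the face contours, as described above; omitted here.
}
```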
Subsequently, at step 208, at least one object filter is overlaid on the rendered mesh, wherein the mesh hides at least one segment of the at least one object filter when the at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the at least one live image. The invention also encompasses providing a plurality of pre-existing object filters to the user, via the display screen 110,
and in response to this, receiving a selection of an object filter that is to be overlaid on the object in the live image. For instance, continuing with the above example, the user may be provided with multiple options, in response to which the user may select a “spectacles” object filter to be applied on his face. This filter is then overlaid on the user’s face over the rendered mesh. The invention also encompasses recommending one or more object filters based on the size, shape, etc. of the identified base portion.
The segment of the object filter may occur behind the mesh based on a movement of the base portion in the live image. For instance, in the above example, when the user moves his face sideways, one of the edges of the spectacles in the live image falls behind the mesh, and therefore this segment is not rendered on the display screen 110.
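Combining the hypothetical helpers from the earlier sketches, a plausible per-frame loop would re-detect the face position on every frame so that the mask follows the head, after which the depth test hides whichever filter segments fall behind it:

```cpp
// Assumes detect_faces, fit_mesh_to_face, Mesh and render_frame from
// the earlier illustrative sketches are in scope.
void on_new_camera_frame(const cv::Mat& frame, Mesh& mask) {
    // Re-detect the base portion so the mask tracks the head movement.
    std::vector<cv::Rect> faces = detect_faces(frame);
    if (!faces.empty()) {
        fit_mesh_to_face(mask, faces[0]); // re-align mask with the face
    }
    // Two-pass occlusion render: as the face turns, different segments
    // of the filter now lie behind the mask and fail the depth test.
    render_frame();
}
```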
Fig. 4 illustrates a first exemplary live image processed in accordance with exemplary embodiments of the present disclosure. Fig. 4 shows the face 402 of a user captured by the camera 102 in a front view, wherein the “spectacles” filter 404 is applied to the user’s face.
Fig. 5 illustrates a second exemplary live image processed in accordance with exemplary embodiments of the present disclosure. Fig. 5 shows the side view of the face 402 of the user when the user in the live image has turned his face sideways (towards the left). As can be clearly seen, the first edge 502 of the spectacles filter 404 is visible, while the second edge 504 of the spectacles filter 404 is hidden since it falls/occurs behind the mesh. Similarly, Fig. 6 illustrates a third exemplary live image processed in accordance with exemplary embodiments of the present disclosure. Fig. 6 shows the side view of the face 402 of the user. As can be clearly seen, the second edge 504 of the spectacles filter 404 is visible, while the first edge 502 of the spectacles filter 404 is hidden since it falls/occurs behind the mesh.
With the existing solutions, both the first edge 502 and the second edge 504 would be visible in the side views, since the object filters of the prior art are overlaid on the objects directly, wherein the filters are incapable of being altered in case the orientation of the object or a portion thereof changes. The present invention provides a significant technical advancement as compared to techniques and solutions known in the prior art, since it is able to facilitate real-time visualization of the object filters on live images and is able to
hide at least those segments that occur behind the mask. Thus, the invention is capable of giving the user the appearance that the overlaid object filter and the object itself co-exist in the same space.
The invention encompasses simultaneously implementing or applying multiple object filters to a single live image. The invention also encompasses simultaneously applying the same or different object filters to different live images. Although the invention has been explained with reference to applying object filters on a user’s face, it will be appreciated by those skilled in the art that such references are only exemplary and the invention encompasses applying object filters to a live image containing any kind of objects, any kind and number of base portions, etc. While the present invention has been described with reference to certain preferred embodiments and examples thereof, other embodiments, equivalents and modifications are possible and are also encompassed by the scope of the present disclosure.
We Claim
1. A method for processing at least one live image of an object, the method comprising:
capturing the at least one live image of the object, via an electronic device (100);
identifying a base portion of the at least one live image, said base portion being a part of said object;
rendering a mesh to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen (110) of the electronic device (100); and
overlaying at least one object filter on the rendered mesh, wherein the mesh hides at least one segment of the at least one object filter when the at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the at least one live image.
2. The method as claimed in claim 1, wherein the object filter is selected by a user from a plurality of pre-existing object filters.
3. The method as claimed in claim 1, wherein the at least one segment of the at least one object filter occurs behind the mesh based on a movement of the base portion in the live image.
4. The method as claimed in claim 1, wherein the mesh comprises a plurality of pixels that automatically forms a shape based on the contours of the base portion.
5. The method as claimed in claim 1, further comprising disabling colour of each of said pixels for providing transparency to the pre-created mesh.
6. The method as claimed in claim 1, further comprising scaling the mesh based on a scale of the base portion of the at least one live image.
7. A portable electronic device for processing at least one live image of an object, the device comprising:
a camera (102) to capture the at least one live image of the object;
a tracking module (104) for identifying a base portion of the at least one live image, said base portion being a part of said object; and
an overlay module (106) for:
rendering a mesh to cover the identified base portion, the mesh being a transparent mesh invisible on a display screen (110), and
overlaying at least one object filter on the rendered mesh,
wherein the mesh hides at least one segment of the at least one object filter when said at least one segment occurs behind the mesh, thereby facilitating real-time visualization of the object filter overlaid on the live image.
8. The portable electronic device of claim 7 wherein the mesh is one of a pre-created mesh and a dynamically created mesh.