Abstract: A METHOD OF AUTOMATICALLY GENERATING DOLLY ZOOM EFFECT BY A USER DEVICE The present invention describes a method of generating dolly zoom effect by a user device. The method comprises capturing, by a capturing module, one or more frames, processing in the capturing module, the one or more captured frames to create a mosaic image and a depth mosaic, and processing, by a viewing module, the mosaic image and the depth mosaic, thereby generating dolly zoom effect. The capturing module further comprises creating, by a mosaic engine module, the mosaic image of the one or more captured frames, generating, by a depth estimation module, a depth map of the one or more captured frames, registering, by a registration module, the mosaic image and the depth map to create a depth mosaic, and encoding, by a muxer module, the mosaic image, the depth mosaic and a meta data of a region of interest into a file. Figure 1
FORM 2
THE PATENTS ACT, 1970
[39 of 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(Section 10; Rule 13)
A METHOD OF AUTOMATICALLY GENERATING DOLLY ZOOM EFFECT BY A USER DEVICE
SAMSUNG ELECTRONICS CO., LTD
129, Samsung-ro, Yeongtong-gu, Suwon-si,
Gyeonggi-do 443-742,
Republic of Korea
A Korean Company
The following specification particularly describes the invention and the manner in which it is to be performed
RELATED APPLICATION
The present invention claims benefit of the Indian Provisional Application No. 6326/CHE/2015 titled "METHOD OF AUTOMATICALLY GENERATING DOLLY ZOOM EFFECT USING A WIRELESS COMMUNICATION DEVICE" by Samsung R&D Institute India – Bangalore Private Limited, filed on 24th November 2015, which is incorporated herein in its entirety by reference for all purposes.
FIELD OF THE INVENTION
The present invention generally relates to media content and, more particularly, relates to a method and system for automatically generating a dolly zoom effect on an image captured by a camera in a user device.
BACKGROUND OF THE INVENTION
Generally, events captured using a camera are stored as media content, such as an image, a video, an audio recording and/or the like. Media content is increasingly captured by a camera or other image capturing device attached to a mobile device. However, mobile devices typically do not provide the hardware or software capability to perform cinematographic effects.
A cinematographic effect such as the dolly zoom, in which a zoom lens adjusts the angle of view while the camera moves toward or away from a subject so that the subject retains the same size throughout the shot, is difficult to achieve. The dolly zoom effect is a camera technique that causes a foreground element to remain the same size while the size of background elements changes. This technique is widely used in the film industry.
Currently, a high end camera is needed to achieve the effect, and even with professional equipment a non-professional will need to attempt the shot multiple times. Most user interfaces of camera-equipped mobile phones do not allow continuous zoom, but instead employ a particular set of fixed zoom steps. Manually adjusting a zoom setting with the available user interfaces is not flexible enough. In general, matching the speed of the camera motion with the correct change in focal length is very challenging with high end cameras and impossible with mobile phone cameras.
The dolly zoom effect can be created using light field cameras. A light field camera has a user interface that allows the user to provide input for managing the operations of the camera. Thus, expertise is required to control the working of the camera so as to create the dolly zoom effect. Also, the light field camera requires external hardware, such as a zooming lens, for creating the dolly zoom effect. Hence, it is not possible for a user device to create the dolly zoom effect without using a zooming lens or an image editing tool.
Thus, there is a need for a system and method that allows a user to automatically create the dolly zoom effect without requiring any special hardware in a user device.
The above-mentioned shortcomings, disadvantages and problems are addressed herein, as will be understood by reading and studying the following specification.
SUMMARY OF THE INVENTION
Various embodiments herein describe a method of automatically generating dolly zoom effect by a user device. The method comprises capturing, by a capturing module, one or more frames, processing in the capturing module, the one or more captured frames to create a mosaic image and a depth mosaic, and processing, by a viewing module, the mosaic image and the depth mosaic, thereby generating dolly zoom effect.
According to an embodiment herein, capturing comprises: moving a camera in one of a left direction and a right direction on detecting an object at a corner of the frame, and stopping the movement of the camera on detecting the object at the center of the frame. Alternatively, capturing comprises moving a camera in both left and right directions on detecting an object at a corner of the frame and stopping the movement of the camera on detecting the object at the center of the frame; moving a camera in a circular direction and stopping the movement of the camera on reaching the start position; or placing an object at the center of a pre-defined area while taking a single frame.
According to an embodiment herein, the method of processing, by the capturing module, the at least one captured frame to create the mosaic image and the depth mosaic, comprises: creating, by a mosaic engine module, the mosaic image of the one or more captured frames, generating, by a depth estimation module, a depth map of the one or more captured frames, registering, by a registration module, the mosaic image and the depth map to create a depth mosaic, and encoding, by a muxer module, the mosaic image, the depth mosaic and a meta data of a region of interest into a file.
According to an embodiment herein, the method further comprises receiving one or more user gestures on one or more objects for rendering dolly zoom effect, wherein the user gesture comprises at least one of a pinch zoom, hovering, pull and push action using an external device or fingers, and head movement towards or away from the user device.
According to an embodiment herein, processing, by the viewing module, the mosaic image and the depth map comprises: decoding, by a demuxer module, the mosaic image, the depth mosaic and the meta data from the file; segmenting the region of interest from a background image in the decoded mosaic image based on the decoded depth mosaic and the decoded meta data, wherein the region of interest is either identified based on a user input or determined during image capturing; generating, by a warping module, different perspectives such that the decoded mosaic image is divided into multiple regions of interest, each one bigger than the previous region of interest; and overlaying, by a composition module, the segmented mosaic images onto the multiple regions of interest based on the meta data of the region of interest.
According to an embodiment herein, registering, by the registration module, further comprises aligning the created mosaic image with the generated depth map to provide the same scale of rotation and translation. Creating, by the mosaic engine module, the mosaic image of the one or more captured camera frames further comprises: determining an amount of overlap between at least two consecutive frames from the one or more captured frames; comparing the determined amount of overlap with a predetermined threshold; storing the compared frames if the amount of overlap between the at least two consecutive frames is less than the predetermined threshold; and discarding the compared frames if the amount of overlap is greater than the predetermined threshold. The method further comprises determining an amount of overlap between a newly captured frame and the last stored frame.
According to an embodiment herein, the depth estimation module comprises at least one of an infrared sensor and a stereo camera, as a depth sensor. The decoding further comprises providing the mosaic image as input to the warping module, and the depth mosaic as input to the composition module.
According to an embodiment herein, a system for generating dolly zoom effect by a user device, comprises: a capturing module for capturing and processing the one or more captured frames to create a mosaic image and a depth mosaic, and a viewing module connected to the capturing module for processing the mosaic image and the depth mosaic, thereby generating dolly zoom effect.
According to an embodiment herein, the capturing module comprises: a mosaic engine module for creating the mosaic image of the one or more captured frames, a depth estimation module for generating a depth map of the one or more captured frames, a registration module connected to the mosaic engine module and the depth estimation module, for creating a depth mosaic using the mosaic image and the depth map, and a muxer module connected to the registration module, for encoding the mosaic image, the depth mosaic and a meta data of a region of interest into a file.
According to an embodiment herein, the viewing module comprises: a demuxer module for decoding the mosaic image, the depth mosaic and the meta data from the file, a segmentation module connected to the demuxer module, for segmenting the region of interest from a background image in the decoded mosaic image, a warping module connected to the segmentation module for generating different perspectives of the background such that each region of interest is bigger than the previous one, and a composition module connected to the segmentation module and the warping module, for overlaying the segmented mosaic images onto the multiple regions of interest based on the meta data of the region of interest.
The foregoing has outlined, in general, the various aspects of the invention and is to serve as an aid to better understanding the more complete detailed description which is to follow. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or application of use described and illustrated herein. It is intended that any other advantages and objects of the present invention that become apparent or obvious from the detailed description or illustrations contained herein are within the scope of the present invention.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
Figure 1 is a schematic block diagram of a capturing module for capturing an object to generate a dolly zoom effect, according to an embodiment of the present invention.
Figures 2A to 2D are schematic diagrams illustrating exemplary methods of capturing an object, according to an embodiment of the present invention.
Figures 3A to 3D are schematic diagrams illustrating exemplary movements of camera with respect to the object as illustrated in Figures 2A to 2D, according to an embodiment of the present invention.
Figure 4 is a schematic block diagram of a viewing module for viewing an object on generating dolly zoom effect, according to an embodiment of the present invention.
Figure 5 is a schematic diagram illustrating user interaction such as pinching for generating dolly zoom effect, according to an embodiment of the present invention.
Figure 6 is a schematic diagram illustrating selection of region of interest (ROI) using touch co-ordinates and associating the zoom action with pinch, according to an embodiment of the present invention.
Although specific features of the present invention are shown in some drawings and not in others, this is done for convenience only, as each feature may be combined with any or all of the other features in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides a method of automatically generating dolly zoom effect by a user device. In the following detailed description of the embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The specification may refer to “an”, “one” or “some” embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention provides a system and method of automatically generating dolly zoom effect by a user device. The system comprises a capturing module and a viewing module. The capturing module captures and processes the one or more captured frames to create a mosaic image and a depth mosaic. The viewing module, which is connected to the capturing module, processes the mosaic image and the depth mosaic received from the capturing module, in order to generate dolly zoom effect.
In one embodiment, an image that is being captured is passed through a mosaic engine module, which generates a wide field of view through a mosaic of selected camera frames. As the frames pass through, an amount of overlap between two consecutive camera frames is determined. The frames whose amount of overlap is less than a threshold value are stored/registered for mosaic creation. The registered frames are stitched to produce a mosaic. A depth map corresponding to each input camera frame is generated and scaled along with the created mosaic to create a one-to-one correspondence between the mosaic and the depth mosaic. Both the generated mosaic image and the depth mosaic are written into a file using a muxer module. At the viewing module, a demuxer module, a segmentation module, a warping module and a composition module are used to display the output frame to the user.
Figure 1 is a schematic block diagram of a capturing module for capturing an object to generate a dolly zoom effect, according to an embodiment of the present invention. According to Figure 1, the capturing module comprises a mosaic engine module 101, a depth estimation module 102, a registration module 103 and a muxer module 104. The capturing module assists in capturing images and generating the dolly zoom effect in a user device. The mosaic engine module 101 is adapted for generating a wide field of view through a mosaic of selected camera frames. The frames captured by a camera of the user device are passed to the mosaic engine module 101, which determines an amount of overlap between two consecutive frames. If the determined amount of overlap between two consecutive frames is less than a pre-determined threshold, then those frames are selected and stored for mosaic creation. Otherwise, if the determined amount of overlap is more than the pre-determined threshold, those frames are discarded. Further, the next captured camera frame is compared with the last stored frame to determine the amount of overlap.
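The store/discard rule of the mosaic engine module can be illustrated with a short sketch. This is purely illustrative and not part of the specification: the one-dimensional frame offsets, the overlap measure and the threshold value are all assumptions.

```python
def frame_overlap(pos_a, pos_b, frame_width):
    """Fraction of horizontal overlap between two frames whose left edges
    sit at pixel offsets pos_a and pos_b along the panning direction."""
    return max(0.0, 1.0 - abs(pos_a - pos_b) / frame_width)

def select_frames(positions, frame_width, threshold=0.8):
    """Store a frame only when its overlap with the LAST STORED frame
    drops below the threshold; otherwise discard it, mirroring the
    store/discard logic of the mosaic engine module."""
    stored = [0]  # the first frame is always kept
    for i in range(1, len(positions)):
        if frame_overlap(positions[i], positions[stored[-1]], frame_width) < threshold:
            stored.append(i)
    return stored
```

For a 640-pixel-wide frame panned through a set of offsets, only frames that have moved far enough from the last stored frame are retained; frames that still overlap heavily are dropped.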
Further, once the frames are selected, the whole set of frames is registered locally (between two frames) and globally (across all frames) so as to keep the motion component perpendicular to the direction of travel as small as possible. These registered frames are stitched to produce a mosaic. The depth estimation module 102 generates a depth map of the captured frames. The depth map could be binary, such as foreground and background, or could contain many levels of depth. The depth is estimated using an infrared sensor, which measures the amount of distortion in a projected pattern; a stereo camera, which measures the amount of disparity in the direction of motion; or image segmentation techniques.
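The stereo-disparity option can be sketched as a brute-force per-pixel match along a single scanline. This is a toy example under assumed inputs; a practical depth estimator would use windowed matching over full 2-D images rather than single-pixel costs.

```python
def disparity_1d(left, right, max_disp=4):
    """Per-pixel disparity along one scanline: left[x] is compared
    against right[x - d] for each candidate shift d, and the shift
    with the lowest absolute intensity difference wins. Larger
    disparity corresponds to a closer scene point."""
    disp = []
    for x in range(len(left)):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            cost = abs(left[x] - right[x - d])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp.append(best_d)
    return disp
```

With a right scanline that is the left scanline shifted by two pixels, the recovered disparity settles at 2 once the search range permits it.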
The registration module 103 is adapted for aligning the created mosaic with the generated depth map to provide the same scale of rotation and translation. Since the selected frames from the mosaic engine module 101 were aligned for the directed motion, the same alignment has to be performed for the depth map images, so as to create a one-to-one correspondence between the generated mosaic and the depth mosaic. The warping matrices created in the mosaic engine module 101 are used to register the selected depth images to create the depth mosaic.
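Reusing the colour-frame registration for the depth images can be illustrated with a translation-only warp. This is a deliberate simplification: the module described above uses full warping matrices, but the principle shown, applying the identical transform to the depth map that was applied to its colour frame, is the same.

```python
def warp_translate(image, dx, dy, fill=0):
    """Apply an integer (dx, dy) translation to a 2-D image stored as
    nested lists, using inverse mapping with nearest-neighbour lookup.
    Pixels with no source fall back to the fill value. The same (dx, dy)
    computed for a colour frame is reused for its depth map."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy  # where this output pixel came from
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out
```

Shifting a 3x3 depth map one pixel to the right leaves a fill column on the left, exactly as the registered colour frame would.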
The resultant mosaic is sent to the muxer module 104, which is adapted for writing the generated mosaic along with the depth mosaic into the file. Also, the localized region of interest calculated during generation of the depth map is written into the file as metadata.
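The specification does not fix a container format for the muxer output. The sketch below uses a ZIP archive with JSON metadata purely as an illustrative stand-in, to show the round trip between the muxer here and the demuxer in the viewing module; the entry names are assumptions.

```python
import io
import json
import zipfile

def mux(mosaic_bytes, depth_bytes, roi_meta):
    """Pack the mosaic, the depth mosaic and the region-of-interest
    metadata into one container blob (illustrative ZIP stand-in)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("mosaic.raw", mosaic_bytes)
        z.writestr("depth.raw", depth_bytes)
        z.writestr("roi.json", json.dumps(roi_meta))
    return buf.getvalue()

def demux(blob):
    """Inverse of mux: recover the three streams, playing the role of
    the viewing module's demuxer."""
    with zipfile.ZipFile(io.BytesIO(blob)) as z:
        return (z.read("mosaic.raw"),
                z.read("depth.raw"),
                json.loads(z.read("roi.json")))
```

A round trip through mux and demux returns the original streams and metadata unchanged.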
Figures 2A to 2D are schematic diagrams illustrating exemplary methods of capturing an object, according to an embodiment of the present invention. The frames can be captured by the camera in various ways. According to an embodiment as shown in Figure 2A, if the object is at an extreme corner of the frame, the user should move the camera either to the left or to the right of the object until the object of interest is at the center of a camera frame. The overlap determined between the frames is passed to the user interface so as to intimate the user when to stop, or to indicate when the capture will stop automatically. This guidance helps the user keep motion in unintended directions to a minimum.
According to another embodiment as shown in Figure 2B, the limited wide view can be further increased when the user moves in both directions, left and right of the object, until the object of interest is at the center of a camera frame. According to yet another embodiment as shown in Figure 2C, the user rotates the camera along its axis, such that the first frame captured at 0 degrees and the last frame captured at 360 degrees have some degree of overlap. According to yet another embodiment as shown in Figure 2D, a single captured image is used to create the dolly zoom effect. The user is requested to confine the object of interest within the highlighted rectangle shown in Figure 2D during capture.
Figures 3A to 3D are schematic diagrams illustrating exemplary movements of the camera with respect to the object as illustrated in Figures 2A to 2D, according to an embodiment of the present invention. Figures 3A to 3D depict the camera movements used to capture frames in line with the methods described in Figures 2A to 2D.
Figure 4 is a schematic block diagram of a viewing module for viewing an object on generating dolly zoom effect, according to an embodiment of the present invention. According to Figure 4, the viewing module comprises a demuxer module 401, a segmentation module 402, a warping module 403, a composition module 404 and a display 405. According to the present invention, the demuxer module 401 decodes the generated mosaic or single image and the depth mosaic from the file, along with the metadata associated with them. The decoded mosaic or single image and the depth mosaic are fed to the segmentation module 402. The segmentation module 402 separates the region of interest from the background of the mosaic. The region of interest could be fed interactively by the user or determined during the capture. The segmentation module 402 performs the segmentation based on depth data, color data, or both depth and color data.
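When the depth mosaic is binary or can be thresholded, the depth-based segmentation reduces to a per-pixel comparison. The sketch below is a minimal illustration under that assumption; the threshold value and the nested-list image representation are not from the specification.

```python
def segment_roi(depth, threshold):
    """Binary foreground/background split of a depth map: pixels closer
    than the threshold are marked 1 (region of interest), the rest 0
    (background)."""
    return [[1 if d < threshold else 0 for d in row] for row in depth]
```

Applying it to a small depth map marks the near pixels as the region of interest and the far pixels as background.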
Further, the warping module 403 generates the different fields of view seen in a dolly zoom. The complete mosaic or single image is divided into multiple regions of interest such that each one is bigger than the previous one. The spacing of the regions of interest could be linear or nonlinear. In order to achieve the effect of an increasing field of view with the background receding, a piecewise warp is used to map a wider region into a smaller region of interest. The piecewise warp could converge to one point or to multiple points. With a single converging point, all the regions move uniformly toward that point. With multiple converging points, the rate of movement of the regions is variable: a region near the object of interest appears to converge to a point more quickly than a region far from it. The piecewise warp parameters are used to warp each individual region of interest, which is then passed on to the composition module 404.
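The division into progressively larger regions of interest with linear spacing can be sketched as follows. The (x, y, w, h) rectangle format and the interpolation endpoints (final region of interest out to the full frame) are assumptions for illustration.

```python
def nested_rois(frame_w, frame_h, roi, steps):
    """Linearly interpolate from the final region of interest (x, y, w, h)
    out to the full frame, yielding one crop rectangle per output frame.
    Each rectangle is bigger than the previous one, which produces the
    growing field of view of the dolly zoom background."""
    x, y, w, h = roi
    rois = []
    for i in range(steps):
        t = i / (steps - 1)  # t = 0 is the final ROI, t = 1 is the full frame
        rois.append((round(x * (1 - t)),
                     round(y * (1 - t)),
                     round(w + (frame_w - w) * t),
                     round(h + (frame_h - h) * t)))
    return rois
```

For a 100x100 frame and a 20x20 final region of interest, three steps give the final crop, an intermediate crop, and the full frame. A nonlinear spacing would simply replace the linear parameter t with, for example, t squared.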
The composition module 404 facilitates the output frame to be displayed. The composition module 404 overlays the segmented object of interest from the depth map mosaic onto the frames generated from the warping module 403. The object of interest may be of constant size or of increasing size, and the increase could be linear or non-linear. The object of interest is overlaid on the warped frames using weighted alpha blending, Poisson blending, Laplacian blending or techniques involving integration of smooth seams. Even though the frame was captured with the object at the center of the first frame, the zoom characteristic is not limited to that object alone; it can be extended to any other object in the frame with a varying degree of zoom. The generated frames are passed on to the display 405.
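The weighted alpha blending option can be shown with a minimal per-pixel sketch. Single-channel images as nested lists and a global alpha weight are simplifying assumptions; a practical compositor would feather alpha only along the seam between object and background.

```python
def alpha_blend(fg, bg, mask, alpha=1.0):
    """Overlay the segmented object (fg where mask is 1) onto a warped
    background frame. Inside the mask, the output is the alpha-weighted
    mix of foreground and background; outside it, the background passes
    through unchanged."""
    h, w = len(bg), len(bg[0])
    return [[alpha * fg[y][x] + (1 - alpha) * bg[y][x] if mask[y][x] else bg[y][x]
             for x in range(w)]
            for y in range(h)]
```

With alpha set to 1 the object replaces the background inside the mask; intermediate alphas give a semi-transparent overlay.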
Figure 5 is a schematic diagram illustrating user interaction, such as pinching, for generating dolly zoom effect, according to an embodiment of the present invention. The dolly zoom effect could be associated with user interactions such as pinch zoom, hovering, pull and push actions using an external device such as a stylus or fingers, and head movement towards or away from the device.
Figure 6 is a schematic diagram illustrating selection of a region of interest (ROI) using touch co-ordinates and associating the zoom action with a pinch, according to an embodiment of the present invention. The present invention also allows a user to create the dolly zoom effect on one object or multiple objects in the same image, based on the objects selected by the user. The objects of interest could be selected using user interaction such as pinching.
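One hypothetical way to couple a pinch gesture to a precomputed dolly zoom frame sequence is to map the ratio of the current finger distance to the initial finger distance onto a frame index. The mapping and the clamping range below are assumptions; the specification only states that the effect could be associated with such gestures.

```python
def pinch_to_step(start_dist, current_dist, num_steps):
    """Map a pinch gesture onto an index into the precomputed dolly zoom
    sequence. Doubling the finger distance (ratio 2.0) plays the whole
    sequence; pinching in or not moving stays at frame 0."""
    ratio = current_dist / start_dist
    t = min(max(ratio - 1.0, 0.0), 1.0)  # clamp ratio to the [1, 2] range
    return round(t * (num_steps - 1))
```

Spreading the fingers from 100 to 150 pixels on an 11-frame sequence lands halfway through it, while spreading past double the start distance saturates at the last frame.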
Various embodiments of the present invention are adapted to provide automatic creation of dolly zoom effect from a single image or a mosaic. The present invention allows an amateur user using a mobile device to create a dolly zoom type of effect automatically. It further provides a capture user interface (UX) that assists the user in capturing the perfect mosaic or single image needed for dolly zoom creation. The present invention does not require any special support from hardware such as optical zoom and also does not require any other external editing tools or expertise in photography.
Although the embodiments herein are described with various specific embodiments, it will be obvious for a person skilled in the art to practice the invention with modifications. However, all such modifications are deemed to be within the scope of the claims. It is also to be understood that the following claims are intended to cover all of the generic and specific features of the embodiments described herein and all the statements of the scope of the embodiments which as a matter of language might be said to fall there between.
CLAIMS:
We claim:
1. A method of generating dolly zoom effect by a user device, comprising:
capturing, by a capturing module, one or more frames;
processing in the capturing module, the one or more captured frames to create a mosaic image and a depth mosaic; and
processing, by a viewing module, the mosaic image and the depth mosaic, thereby generating dolly zoom effect.
2. The method of claim 1, wherein capturing comprises:
moving a camera in one of left direction and right direction on detecting an object at corner of the frame; and
stopping the movement of the camera on detecting the object at center of the frame.
3. The method of claim 1, wherein capturing comprises:
moving a camera in both left and right direction on detecting an object at corner of the frame;
stopping the movement of the camera on detecting the object at center of the frame.
4. The method of claim 1, wherein capturing comprises:
moving a camera in circular direction;
stopping the movement of the camera on reaching a start position.
5. The method of claim 1, wherein capturing comprises:
placing an object at the center of a pre-defined area while taking a single frame.
6. The method of claim 1, wherein processing, by the capturing module, the at least one captured frame to create the mosaic image and the depth mosaic, comprises:
creating, by a mosaic engine module, the mosaic image of the one or more captured frames;
generating, by a depth estimation module, a depth map of the one or more captured frames;
registering, by a registration module, the mosaic image and the depth map to create a depth mosaic; and
encoding, by a muxer module, the mosaic image, the depth mosaic and a meta data of a region of interest into a file.
7. The method of claim 1, further comprises receiving one or more user gestures on one or more objects for rendering dolly zoom effect,
wherein the user gesture comprises at least one of a pinch zoom, hovering, pull and push action using an external device or fingers, and head movement towards or away from the user device.
8. The method of claim 6, wherein processing, by the viewing module, the mosaic image and the depth map, comprises:
decoding, by a demuxer module, the mosaic image, the depth mosaic and the meta data from the file;
segmenting the region of interest from a background image in the decoded mosaic image based on the decoded depth mosaic and the decoded meta data, the region of interest is identified based on one of a user input and determined during image capturing;
generating different perspective by a warping module such that the decoded mosaic image is divided into multiple region of interests so that each one is bigger than previous region of interest; and
overlaying, by a composition module, the segmented mosaic images on to the multiple region of interest based on the meta data of a region of interest.
9. The method of claim 6, wherein registering, by the registration module, further comprises aligning the created mosaic image with the generated depth map to provide same scale of rotation and translation.
10. The method of claim 6, wherein creating, by the mosaic engine module, the mosaic image of the one or more captured camera frames further comprises:
determining an amount of overlap between at least two consecutive frames from the one or more captured frames;
comparing the determined amount of overlap between at least two consecutive frames with a predetermined threshold;
storing the one or more compared frames, if the amount of overlap between the at least two consecutive frames is less than the predetermined threshold; and
discarding the one or more compared frames, if the amount of overlap between the at least two consecutive frames is greater than the predetermined threshold.
11. The method of claim 10, further comprising determining an amount of overlap between a newly captured frame and the last stored frame.
12. The method of claim 6, wherein the depth estimation module comprises at least one of an infrared sensor and a stereo camera, as a depth sensor.
13. The method of claim 8, wherein decoding further comprises providing the mosaic image as input to the warping module, and the depth mosaic as input to the composition module.
14. A system for generating dolly zoom effect by a user device, comprising:
a capturing module for capturing and processing the one or more captured frames to create a mosaic image and a depth mosaic; and
a viewing module connected to the capturing module for processing the mosaic image and the depth mosaic, thereby generating dolly zoom effect.
15. The system of claim 14, wherein the capturing module comprises:
a mosaic engine module for creating the mosaic image of the one or more captured frames;
a depth estimation module for generating a depth map of the one or more captured frames;
a registration module connected to the mosaic engine module and the depth estimation module, for creating a depth mosaic using the mosaic image and the depth map; and
a muxer module connected to the registration module, for encoding the mosaic image, the depth mosaic and a meta data of a region of interest into a file.
16. The system of claims 14 and 15, wherein the viewing module comprises:
a demuxer module for decoding the mosaic image, the depth mosaic and the meta data from the file;
a segmentation module connected to the demuxer module, for segmenting the region of interest from a background image in the decoded mosaic image;
a warping module connected to the segmentation module for generating different perspective of background such that each one is bigger than previous region of interest; and
a composition module connected to the segmentation module and the warping module, for overlaying the segmented mosaic images on to the multiple region of interest based on the meta data of the region of interest.
Dated this the 25th day of October 2016
Signature
SANTOSH VIKRAM SINGH
Patent agent
Agent for the applicant