Specification
CODING AND DECODING OF OMNIDIRECTIONAL VIDEO
1. Field of the invention
The present invention relates generally to the field of omnidirectional videos, such as in particular 360°, 180° videos, etc. More particularly, the invention relates to the encoding and decoding of 360°, 180°, etc. views which are captured in order to generate such videos, as well as to the synthesis of non-captured intermediate viewpoints.
The invention can in particular, but not exclusively, be applied to the video coding implemented in the current AVC and HEVC video coders and their extensions (MVC, 3D-AVC, MV-HEVC, 3D-HEVC, etc.), and to the corresponding video decoding.
2. Prior art
To generate an omnidirectional video, such as for example a 360° video, it is common to use a 360° camera. Such a 360° camera typically consists of several 2D (two-dimensional) cameras installed on a spherical platform. Each 2D camera captures a particular angle of a 3D (three-dimensional) scene, the set of views captured by the cameras making it possible to generate a video representing the 3D scene with a 360°x180° field of view. It is also possible to use a single 360° camera to capture the 3D scene with a 360°x180° field of view. Such a field of view can of course be smaller, for example 270°x135°.
Such 360° videos then allow the user to view the scene as if placed at its center and to look all around, through 360°, thus providing a new way of watching videos. Such videos are generally rendered on virtual reality headsets, also known as HMDs ("Head-Mounted Devices"). However, they can also be displayed on 2D screens equipped with suitable user-interaction means. The number of 2D cameras needed to capture a 360° scene varies depending on the platform used.
To generate a 360° video, the divergent views captured by the various 2D cameras are put end to end, taking into account the overlaps between views, to create a 2D panoramic image. This step is also known as "stitching". For example, an equirectangular projection (ERP) is one possible projection for obtaining such a panoramic image. According to this projection, the views captured by each of the 2D cameras are projected onto a spherical surface. Other types of projection are also possible, such as a cube mapping type projection (projection onto the faces of a cube). The views projected onto a surface are then projected onto a 2D plane to obtain a 2D panoramic image comprising, at a given instant, all the views of the scene which have been captured.
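By way of illustration only, the following sketch (in Python, with hypothetical helper names that are not drawn from the specification) shows the equirectangular mapping between a panorama pixel and spherical angles on which such an ERP projection relies:

```python
import math

def erp_pixel_to_sphere(u, v, width, height):
    """Map an ERP panorama pixel (u, v) to spherical angles.

    Returns (longitude, latitude) in radians, longitude spanning
    [-pi, pi] (360 degrees) and latitude [-pi/2, pi/2] (180 degrees).
    """
    longitude = (u / width - 0.5) * 2.0 * math.pi
    latitude = (0.5 - v / height) * math.pi
    return longitude, latitude

def sphere_to_erp_pixel(longitude, latitude, width, height):
    """Inverse mapping: spherical angles back to ERP pixel coordinates."""
    u = (longitude / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - latitude / math.pi) * height
    return u, v
```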
In order to increase the feeling of immersion, several 360° cameras of the aforementioned type can be used simultaneously to capture a scene, these cameras being positioned in the scene in an arbitrary manner. A 360° camera can be a real camera, that is to say a physical object, or a virtual camera, in which case the view is obtained by view-generation software. In particular, such a virtual camera makes it possible to generate views representative of points of view of the 3D scene which have not been captured by real cameras.
The image of the 360° view obtained using a 360° camera, or the images of 360° views obtained using several 360° cameras (real and virtual), are then encoded using, for example:
- a conventional 2D video encoder, for example an encoder conforming to the HEVC standard (abbreviation of "High Efficiency Video Coding"),
- a conventional 3D video encoder, for example an encoder conforming to the MV-HEVC and 3D-HEVC standards.
Such encoders are not sufficiently efficient in terms of compression, given the very large amount of data in the image of a 360° view to be encoded, and a fortiori in the images of several 360° views to be encoded, and given the particular geometry of the 360° representation of the 3D scene using such 360° views. Moreover, since the views captured by the 2D cameras of a 360° camera are divergent, the aforementioned encoders are not well suited to encoding the different images of 360° views, because inter-image prediction will be little used, or not used at all, by these encoders. Indeed, between two views captured respectively by two 2D cameras, there is little similar content that can be predicted. Consequently, all 360° view images are compressed in the same way.
3. Subject matter and summary of the invention
One of the aims of the invention is to remedy the drawbacks of the aforementioned state of the art.
To this end, an object of the present invention relates to a method for coding an image of a view forming part of a plurality of views, the plurality of views simultaneously representing a 3D scene according to different positions or different viewing angles, implemented by an encoding device, comprising the following:
- select a first encoding method or a second encoding method to encode the image of the view,
- generate a data signal containing information indicating whether it is the first encoding method or the second encoding method which is selected,
- if the first encoding method is selected, encoding the original data of the image of the view, the first encoding method providing original encoded data,
- if the second coding method is selected:
• encoding processed data of the image of the view, these data having been obtained by means of an image processing applied to the original data of the image of the view, the encoding providing processed encoded data,
• code information describing the image processing that has been applied,
- the generated data signal containing in addition:
• the original coded data of the image of the view, if the first coding method was selected,
• the processed encoded data of the image of the view, as well as the encoded information describing the image processing, if the second encoding method has been selected.
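As a minimal, non-normative sketch of this selection logic (the callables `encode_mc1`, `encode_mc2` and `process`, and the dict-based signal layout, are assumptions made purely for illustration):

```python
def encode_view_image(image, use_second_method, encode_mc1, encode_mc2, process):
    """Sketch of the two-branch coding scheme: one flag selects between
    direct coding of the original data (first method) and coding of
    processed data plus a processing description (second method)."""
    if not use_second_method:
        # First encoding method: code the original data of the view image.
        return {"flag_proc": 0, "coded_data": encode_mc1(image)}
    # Second encoding method: apply an image processing, then code the
    # processed data together with the description of that processing.
    processed, description = process(image)
    return {
        "flag_proc": 1,
        "coded_data": encode_mc2(processed),
        "processing_description": description,
    }
```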
Thanks to the invention, among several current view images to be coded of the aforementioned type, said images representing a very large amount of data to be coded and therefore to be signaled, it is possible to combine two coding techniques for each view image to be coded:
- a first coding technique, according to which the images of one or more views are conventionally encoded (HEVC, MV-HEVC, 3D-HEVC, for example), so as to obtain reconstructed images forming views of very good quality,
- a second, innovative coding technique, according to which processed image data of one or more other views are encoded, so as to obtain, on decoding, processed image data which therefore do not correspond to the original data of these images, but with the benefit of a significant reduction in the signaling cost of the coded processed data of these images.
On decoding, for each image of each other view whose processed data have been encoded according to the second encoding method, the corresponding processed data of the view image will then be found, together with the information describing the image processing applied, at encoding, to the original view image data. Such processed data can then be processed using the corresponding image-processing description information, in order to constitute a view image which, used with at least one image of a view reconstructed according to the first, conventional decoding method, will make it possible to synthesize non-captured intermediate view images in a particularly efficient and high-performance manner.
The present invention also relates to a method of decoding a data signal representative of an image of a view forming part of a plurality of views, the plurality of views simultaneously representing a 3D scene from different positions or different viewing angles, implemented by a decoding device, comprising the following:
- from the data signal, read an item of information indicating whether the image of the view is to be decoded according to a first or a second decoding method,
- if it is the first decoding method:
• read, in the data signal, coded data associated with the image of the view,
• reconstruct an image of the view from the encoded data read, the image of the reconstructed view containing the original data of the image of the view,
- if it is the second decoding method:
• read, in the data signal, coded data associated with the image of the view,
• reconstruct an image of the view from the encoded data read, the image of the reconstructed view containing processed data of the image of the view, in association with information describing an image processing used to obtain the processed data.
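Correspondingly, a minimal decoder-side sketch (using the same hypothetical signal layout and decoder callables as in the encoding sketch above):

```python
def decode_view_image(signal, decode_mc1, decode_mc2):
    """Read the flag and reconstruct the view image accordingly.

    With flag_proc == 0 the reconstructed image contains the original
    data; with flag_proc == 1 it contains processed data, returned in
    association with the image-processing description information.
    """
    if signal["flag_proc"] == 0:
        return decode_mc1(signal["coded_data"]), None
    processed = decode_mc2(signal["coded_data"])
    return processed, signal["processing_description"]
```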
According to a particular embodiment:
- the processed data of the view image are the data of the view image that have not been deleted following the application of a cropping of the view image,
the image processing description information is location information, in the image of the view, of one or more cropped areas.
Such cropping processing applied to the image of said view makes it possible not to encode part of the original data of the latter, to the benefit of a significant reduction in the transmission rate of the coded data associated with the image of said view, since the data belonging to the zone(s) which have been cropped are neither coded nor signaled to the decoder. The reduction in throughput will depend on the size of the cropped zone(s). The image of the view which will be reconstructed after decoding, then possibly after processing of its processed data using the corresponding image-processing description information, will therefore not contain all of its original data, or at least will be different from the original view image. Obtaining such a cropped view image does not, however, call into question the efficiency of the synthesis of an intermediate image which would use such an image of said cropped view, once reconstructed. Indeed, since such a synthesis uses one or more images reconstructed using a conventional decoder (HEVC, MV-HEVC, 3D-HEVC, for example), it is possible to recover the original area in the intermediate view, thanks to the image of said view and the conventionally reconstructed images.
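A possible cropping pre-processing could look like the following sketch (NumPy-based, assuming border-style cropping for simplicity; the actual cropped zones could be arbitrary):

```python
import numpy as np

def crop_view_image(image, top, bottom, left, right):
    """Remove border zones from a view image before encoding.

    Returns the cropped pixel array and the location information of
    the cropped zones, to be coded as the processing description.
    """
    h, w = image.shape[:2]
    cropped = image[top:h - bottom, left:w - right]
    description = {"top": top, "bottom": bottom, "left": left, "right": right}
    return cropped, description

# Example: drop 64 rows at the top and at the bottom of a 1080x1920 image.
view = np.zeros((1080, 1920, 3), dtype=np.uint8)
cropped, info = crop_view_image(view, top=64, bottom=64, left=0, right=0)
assert cropped.shape == (952, 1920, 3)
```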
According to another particular embodiment:
the processed data of the image of the view are the data of at least one zone of the image of the view which has undergone sampling, according to a given sampling factor and in at least one given direction,
the image processing description information comprises at least one item of information on the location, in the image of the view, of the at least one sampled zone.
Such processing favors a homogeneous degradation of the image of said view, again with the aim of optimizing the reduction in the data rate resulting from the sampling applied before coding. The subsequent reconstruction of such an image of the view thus sampled, even if it provides a reconstructed image of the view which is degraded/different from the original image of the view, whose original data were sampled and then encoded, does not call into question the efficiency of the synthesis of an intermediate image which would use such an image of said reconstructed sampled view. Indeed, since such a synthesis uses one or more images reconstructed using a conventional decoder (HEVC, MV-HEVC, 3D-HEVC, for example), it is possible to recover the original zone in the intermediate view, thanks to the sampled zone of the image of said view and the conventionally reconstructed images.
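A sketch of such a zone subsampling (decimation by strided slicing; the helper name and dict-based description are illustrative assumptions):

```python
import numpy as np

def subsample_zone(image, y0, y1, x0, x1, factor_y=1, factor_x=2):
    """Subsample one zone of a view image by the given factors, in one
    or both directions, and return the location/factor description."""
    zone = image[y0:y1:factor_y, x0:x1:factor_x]
    description = {
        "zone": (y0, y1, x0, x1),  # location of the sampled zone
        "factor_y": factor_y,      # sampling factor, vertical direction
        "factor_x": factor_x,      # sampling factor, horizontal direction
    }
    return zone, description
```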
According to another particular embodiment:
- the processed data of the image of the view are the data of at least one area of the image of the view which has undergone filtering,
the image processing description information comprises at least one item of information on the location, in the image of the view, of the at least one filtered zone.
Such processing favors the removal of the data of the image of said view which are considered not useful to code, with a view to optimizing the bit-rate reduction of the coded data, which advantageously consist only of the filtered data of the image.
The subsequent reconstruction of such an image of the view thus filtered, even if it provides a reconstructed image of the view which is degraded/different from the original image of the view, whose original data were filtered and then encoded, does not call into question the efficiency of the synthesis of an intermediate image which would use such an image of said reconstructed filtered view. Indeed, since such a synthesis uses one or more images reconstructed using a conventional decoder (HEVC, MV-HEVC, 3D-HEVC, for example), it is possible to recover the original zone in the intermediate view, thanks to the filtered area of the image of said view and the conventionally reconstructed images.
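One way to realize such zone filtering is a simple separable low-pass (box) filter, as in this sketch for a single-channel view image (illustrative only; the specification does not mandate a particular filter):

```python
import numpy as np

def filter_zone(image, y0, y1, x0, x1, kernel_size=5):
    """Low-pass filter one zone of a single-channel view image.

    Only the filtered zone is kept for coding; the description records
    where the zone sits in the image and which filter was applied.
    """
    zone = image[y0:y1, x0:x1].astype(np.float32)
    kernel = np.ones(kernel_size, dtype=np.float32) / kernel_size
    # Separable box filter: filter along rows, then along columns.
    zone = np.apply_along_axis(np.convolve, 1, zone, kernel, mode="same")
    zone = np.apply_along_axis(np.convolve, 0, zone, kernel, mode="same")
    description = {"zone": (y0, y1, x0, x1), "kernel_size": kernel_size}
    return zone.astype(image.dtype), description
```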
According to another particular embodiment:
- the processed data of the image of the view are pixels of the image of the view, corresponding to an occlusion detected using an image of another view of the plurality,
the image processing description information includes an indicator of the pixels of the image of the view which are found in the image of another view.
Similarly to the previous embodiment, such processing favors the removal of the data of the image of said view which are considered not useful to code, with a view to optimizing the bit-rate reduction of the coded data, which advantageously consist only of those pixels of the image of said view whose absence has been detected in an image of another current view of said plurality.
The subsequent reconstruction of such an image of the view, even if it provides a reconstructed image of the view which is degraded/different from the original image of the view, of which only the occluded area was encoded, does not call into question the efficiency of the synthesis of an intermediate image which would use such an image of said reconstructed view. Indeed, since such a synthesis uses one or more images reconstructed using a conventional decoder (HEVC, MV-HEVC, 3D-HEVC, for example), it is possible to recover the original zone in the intermediate view, thanks to the image of said current view and the conventionally reconstructed images.
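A simplified sketch of how such occluded pixels might be isolated, assuming the other view has already been warped into the geometry of the current view (the threshold test is an illustrative stand-in for a real occlusion detector):

```python
import numpy as np

def extract_occluded_pixels(image, other_view_warped, threshold=10):
    """Keep only the pixels of `image` that are absent from another view.

    Pixels whose values closely match the warped other view are
    considered visible there and are dropped; the remaining pixels
    form the occluded area to be encoded.
    """
    diff = np.abs(image.astype(np.int32) - other_view_warped.astype(np.int32))
    mask = diff > threshold               # True where the other view lacks the pixel
    processed = np.where(mask, image, 0)  # keep occluded pixels only
    return processed, {"occlusion_indicator": mask}
```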
According to another particular embodiment:
- the processed data of the view image, which have been encoded/decoded, are pixels which are calculated:
- from the original data of the view image,
- from the original data of an image of at least one other view which is encoded/decoded using the first encoding/decoding method,
- and possibly from the original data of an image of at least one other view, for which processed data are encoded/decoded using the second encoding/decoding method,
- the description information of said image processing comprises:
- an indicator of the pixels of the image of the view which have been calculated,
- location information, in the image of at least one other view which has been encoded/decoded using the first encoding/decoding method, of the original data which have been used to calculate the pixels of the view image,
- and optionally, location information, in the image of at least one other view for which processed data have been encoded/decoded, of the original data which have been used to calculate the pixels of the image of the view.
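The description information of this embodiment could be grouped in a small container such as the following sketch (field names are assumptions; a location is given here as a (view index, row, column) triple):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

@dataclass
class ProcessingDescription:
    """Mirrors the items listed above: an indicator of the computed
    pixels, plus locations of the original data used, in views coded
    with the first method and, optionally, with the second method."""
    computed_pixel_indicator: np.ndarray  # boolean mask over the view image
    locations_in_first_method_views: List[Tuple[int, int, int]] = field(default_factory=list)
    locations_in_second_method_views: List[Tuple[int, int, int]] = field(default_factory=list)
```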
According to another particular embodiment, the processed data of an image of a first view and the processed data of an image of at least one second view are combined into a single image.
Correspondingly to the above embodiment, the processed data of the image of the view which is obtained according to the second decoding method comprise the processed data of an image of a first view and the processed data of an image of at least one second view.
According to a particular embodiment:
- the processed encoded/decoded data of the image of the view are data of image type,
- the coded/decoded information describing the image processing are data of image type and/or of textual type.
The invention also relates to a device for encoding an image of a view forming part of a plurality of views, the plurality of views simultaneously representing a 3D scene according to different positions or different viewing angles, the encoding device comprising a processor that is configured to implement the following, at a current time:
- select a first encoding method or a second encoding method to encode the image of the view,
- generate a data signal containing information indicating whether it is the first encoding method or the second encoding method which is selected,
- if the first encoding method is selected, encoding the original data of the image of the view, the first encoding method providing original encoded data,
- if the second coding method is selected:
• encoding processed data of the image of the view, the processed data having been obtained by means of an image processing applied to the original data of the image of the view, the encoding providing encoded processed data,
• code information describing the image processing that has been applied,
- the generated data signal containing in addition:
• the original coded data of the image of the view, if the first coding method was selected,
• the processed encoded data of the image of the view, as well as the encoded information describing the image processing, if the second encoding method has been selected.
Such a coding device is in particular capable of implementing the aforementioned coding method.
The invention also relates to a device for decoding a data signal representative of an image of a view forming part of a plurality of views, the plurality of views simultaneously representing a 3D scene from different positions or different viewing angles, the decoding device comprising a processor which is configured to implement the following, at a current instant:
- read, in the data signal, information indicating whether the image of the view is to be decoded according to a first or a second decoding method,
- if it is the first decoding method:
• read, in the data signal, coded data associated with the image of the view,
• reconstruct an image of the view from the encoded data read, the image of the reconstructed view containing the original data of the image of the view,
- if it is the second decoding method:
• read, in the data signal, coded data associated with the image of the view,
• reconstruct an image of the view from the encoded data read, the image of the reconstructed view containing processed data of the image of the view, in association with information describing an image processing used to obtain the processed data.
Such a decoding device is in particular capable of implementing the aforementioned decoding method.
The invention also relates to a data signal containing data encoded according to the aforementioned encoding method.
The invention also relates to a computer program comprising instructions for implementing the coding method or the decoding method according to the invention, according to any one of the particular embodiments described above, when said program is executed by a processor.
This program can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as in a partially compiled form, or in any other desirable form.
The invention also relates to a recording medium or information medium readable by a computer, and comprising instructions of a computer program as mentioned above.
The recording medium can be any entity or device capable of storing the program. For example, the medium may comprise a storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or else a magnetic recording means, for example a USB key or a hard disk.
On the other hand, the recording medium can be a transmissible medium such as an electrical or optical signal, which can be conveyed via an electrical or optical cable, by radio or by other means. The program according to the invention can in particular be downloaded from an Internet type network.
Alternatively, the recording medium can be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the aforementioned encoding or decoding method.
4. Brief description of the drawings
Other characteristics and advantages will emerge more clearly on reading several preferred embodiments, given by way of simple illustrative and non-limiting examples, and described below with reference to the appended drawings, in which:
- FIG. 1 represents the main actions performed by the coding method according to one embodiment of the invention,
- FIG. 2A represents a first type of data signal capable of being generated following the implementation of the encoding method of FIG. 1,
- FIG. 2B represents a second type of data signal capable of being generated following the implementation of the encoding method of FIG. 1,
- FIG. 2C represents a third type of data signal capable of being generated following the implementation of the encoding method of FIG. 1,
- FIG. 3A represents a first embodiment of a method for coding all the view images available at a current instant,
- FIG. 3B represents a second embodiment of a method for coding all the view images available at a current instant,
- FIGS. 4A to 4E each represent an example of a processing applied to the image of a view, according to a first embodiment,
- FIGS. 5A to 5D each represent an example of a processing applied to the image of a view, according to a second embodiment,
- FIG. 6 represents an example of a processing applied to the image of a view, according to a third embodiment,
- FIG. 7 represents an example of a processing applied to the image of a view, according to a fourth embodiment,
- FIG. 8 represents an example of a processing applied to the image of a view, according to a fifth embodiment,
- FIG. 9 represents an example of a processing applied to the image of a view, according to a sixth embodiment,
- FIG. 10 represents an encoding device implementing the encoding method of FIG. 1,
- FIG. 11 represents the main actions performed by the decoding method according to one embodiment of the invention,
- FIG. 12A represents a first embodiment of a method for decoding all the view images available at a current instant,
- FIG. 12B represents a second embodiment of a method for decoding all the view images available at a current instant,
- FIG. 13 represents a decoding device implementing the decoding method of FIG. 11,
- FIG. 14 represents an embodiment of a view image synthesis, in which view images reconstructed according to the decoding method of FIG. 11 are used,
- FIGS. 15A to 15D each represent an example of a processing applied to the image of a view after reconstruction of the latter, according to a first embodiment,
- FIG. 16 represents an example of a processing applied to the image of a view after reconstruction of the latter, according to a second embodiment,
- FIG. 17 represents an example of a processing applied to the image of a view after reconstruction of the latter, according to a third embodiment,
- FIG. 18 represents an example of a processing applied to the image of a view after reconstruction of the latter, according to a fourth embodiment.
5. Description of the general principle of the invention
The invention mainly proposes a scheme for encoding a plurality of current images of, respectively, a plurality of views, the plurality of views representing, at the current instant, a 3D scene according to a given position or a given viewing angle, in which two coding techniques are available:
- a first coding technique, according to which at least one current image of a view is coded using a conventional coding mode, such as for example HEVC, MV-HEVC or 3D-HEVC,
- a second, innovative coding technique, according to which the processed data of at least one current image of a view, resulting from the application of a particular image processing to the original data of that image, are coded using a conventional coding mode of the aforementioned type and/or any other suitable coding mode, so as to significantly reduce the cost of signaling the coded data of this image, owing to the processing implemented before the encoding step.
Correspondingly, the invention proposes a decoding scheme which makes it possible to combine two decoding techniques:
- a first decoding technique, according to which at least one current image of an encoded view is reconstructed using a conventional decoding mode, such as for example HEVC, MV-HEVC or 3D-HEVC, corresponding to the conventional coding mode used for coding and signaled to the decoder, so as to obtain at least one reconstructed image of a view which is of very good quality,
- a second, innovative decoding technique, according to which the encoded processed data of at least one image of a view are decoded using a decoding mode corresponding to the encoding mode signaled to the decoder, namely either the conventional encoding mode and/or the other suitable encoding mode, so as to obtain processed image data together with information describing the image processing from which the obtained processed data originate. The processed data obtained on decoding for this image therefore do not correspond to the original data thereof, unlike the image data decoded according to the first decoding technique.
The image of the view which will subsequently be reconstructed from such decoded processed image data and image-processing description information will be different from the original image of the view, i.e. the image before processing and coding of its original data. However, such a reconstructed image of the view will constitute the image of a view which, used with images of other views reconstructed according to the first, conventional decoding technique, will make it possible to synthesize images of intermediate views in a particularly efficient and high-performance manner.
6. Examples of coding scheme implementation
Described below is a method for encoding 360°, 180° or other omnidirectional videos, which can use any type of multi-view video encoding, for example conforming to the 3D-HEVC or MV-HEVC standard, or the like.
With reference to FIG. 1, such a coding method applies to a current image of a view which is part of a plurality of views V1, ..., VN, the plurality of views representing a 3D scene according, respectively, to a plurality of viewing angles or a plurality of positions/orientations.
According to a common example, in the case where three omnidirectional cameras are used to generate a video, for example a 360° video:
- a first omnidirectional camera can for example be placed at the center of the 3D scene, with a 360°x180° viewing angle,
- a second omnidirectional camera can for example be placed on the left in the 3D scene, with a 360°x180° viewing angle,
- a third omnidirectional camera can for example be placed on the right in the 3D scene, with a 360°x180° viewing angle.
According to another, more atypical example, in the case where three omnidirectional cameras are used to generate an α° video, with 0° < α < 360°:
- a first omnidirectional camera can for example be placed at the center of the 3D scene, with a 360°x180° viewing angle,
- a second omnidirectional camera can for example be placed on the left in the 3D scene, with a 270°x135° viewing angle,
- a third omnidirectional camera can for example be placed on the right in the 3D scene, with a 180°x90° viewing angle.
Other configurations are of course possible.
At least two views of said plurality of views may or may not represent the 3D scene from the same viewing angle.
The coding method according to the invention consists in coding, at a current instant:
- an image IV1 of a view V1,
- an image IV2 of a view V2,
- ...,
- an image IVk of a view Vk,
- ...,
- an image IVN of a view VN.
An image of a given view may equally well be a texture image or a depth image. The image of a given view, for example the image IVk, contains a number Q (Q > 1) of original data items (d1k, ..., dQk), such as for example Q pixels.
The coding method then comprises the following, for at least one image IVk of a view Vk to be coded:
In C1, a first coding method MC1 or a second coding method MC2 is selected for coding the image IVk.
If the first coding method MC1 is selected, in C10, an item of information flag_proc is coded, for example on one bit set to 0, to indicate that the coding method MC1 is selected.
In C11a, the Q original data (pixels) d1k, ..., dQk of the image IVk are encoded using a conventional encoder, for example one conforming to the HEVC, MV-HEVC, 3D-HEVC, etc. standards. At the end of the coding C11a, a coded image IVCk of the view Vk is obtained. The coded image IVCk then contains Q coded original data dc1k, dc2k, ..., dcQk.
In C12a, a data signal F1k is generated. As shown in FIG. 2A, the data signal F1k contains the information flag_proc = 0 relating to the selection of the first encoding method MC1, as well as the coded original data dc1k, dc2k, ..., dcQk.
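At the byte level, the signal F1k could be sketched as follows (a deliberately naive serialization; a real bitstream would be bit-exact and standard-dependent):

```python
def build_signal_f1(coded_original_data: bytes) -> bytes:
    """F1k: a flag byte set to 0 (first method) followed by the coded
    original data dc1k, ..., dcQk."""
    return bytes([0]) + coded_original_data
```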
If the second encoding method MC2 is selected, in C10, the information flag_proc is encoded, for example on one bit set to 1, to indicate that the encoding method MC2 is selected.
In C11b, the coding method MC2 is applied to data DTk resulting from a processing of the image IVk carried out before the coding step.
Such data DTk include:
- image-type data (pixels) corresponding to all or part of the original data of the image IVk which have been processed using a particular image processing before the coding step; various detailed examples will be described later in the description,
- information describing the image processing applied to the image IVk before the coding step C11b, such description information being for example of textual and/or image type.
At the end of the coding C11b, coded processed data DTCk are obtained. They are representative of a coded processed image IVTCk.
Thus, the processed data DTk do not correspond to the original data of the image IVk.
For example, these processed data DTk correspond to an image whose resolution is greater or smaller than that of the image IVk before processing. Thus, the processed image IVk could for example be larger, since it was obtained from images of other views, or on the contrary smaller, since it results from the removal of one or more original pixels of the image IVk.
According to another example, these processed data DTk correspond to an image whose representation format (YUV, RGB, etc.) differs from the original format of the image IVk before processing, as may the number of bits used to represent a pixel (16 bits, 10 bits, 8 bits, etc.).
According to yet another example, these processed data DTk correspond to a color or texture component which is degraded with respect to the original texture or color component of the image IVk before processing.
According to yet another example, these processed data DTk correspond to a particular representation of the original content of the image IVk before processing, for example a representation of the filtered original content of the image IVk.
In the case where the processed data DTk are only image data, that is to say for example in the form of a grid of pixels, the encoding method MC2 can be implemented by an encoder similar to the encoder implementing the first encoding method MC1. It can be a lossy or lossless encoder. In the case where the processed data DTk are different from image data, such as for example data of textual type, or else comprise both image data and data of a type other than image data, the encoding method MC2 can be implemented:
- by a lossless encoder to specifically encode the textual data,
- by a lossy or lossless encoder to specifically encode the image data, such an encoder possibly being identical to the encoder implementing the first encoding method MC1, or else different.
In C12b, a data signal F2k is generated. As represented in FIG. 2B, the data signal F2k contains the information flag_proc = 1 relating to the selection of the second encoding method MC2 and the coded processed data DTCk, in the case where these data are all image data.
As an alternative, in C12c, in the case where the coded processed data DTCk include both image data and data of textual type, two signals F3k and F'3k are generated.
As shown in FIG. 2C:
- the data signal F3k contains the information flag_proc = 1 relating to the selection of the second coding method MC2 and the coded processed data DTCk of image type,
- the data signal F'3k contains the coded processed data DTCk of textual type.
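Continuing the same naive serialization sketch for the second method (again, an assumption for illustration, not the normative signal syntax):

```python
def build_signal_f2(coded_processed_data: bytes) -> bytes:
    """F2k: a flag byte set to 1 (second method) followed by the coded
    processed data DTCk, when these are all image data."""
    return bytes([1]) + coded_processed_data

def build_signals_f3(coded_image_data: bytes, coded_text_data: bytes):
    """F3k carries the flag and the image-type data; F'3k carries the
    textual-type data in a separate signal."""
    return bytes([1]) + coded_image_data, coded_text_data
```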
The coding method which has just been described can then be implemented for each image IV1, IV2, ..., IVN of the N available views to be coded, for only some of them, or can even be limited to the image IVk, with for example k = 1.
According to two exemplary embodiments shown in FIGS. 3A and 3B, it is assumed for example that among the N images IV1, ..., IVN to be coded:
- the first n images IV1, ..., IVn are coded using the first coding method MC1: the first n views V1 to Vn are called master views because, once reconstructed, the images of the n master views will contain all of their original data and will be relevant for use with one or more of the N−n other views in order to synthesize the arbitrary view images required by a user,
- the N−n other images IVn+1, ..., IVN are processed before being encoded using the second encoding method MC2: these N−n other processed images belong to so-called additional views.
If n = 0, all the images IV1, ..., IVN of all the views are processed. If n = N−1, the image of a single view among the N is processed, for example the image of the first view.
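The partition into master and additional views can be sketched as follows (an illustrative helper; `images` is assumed to be the list IV1, ..., IVN in view order):

```python
def split_views(images, n):
    """Partition the N view images: the first n are master views, coded
    directly with the first method MC1; the remaining N-n are processed
    before being coded with the second method MC2."""
    master = images[:n]      # coded with MC1; keep all original data once reconstructed
    additional = images[n:]  # processed, then coded with MC2
    return master, additional
```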
At the end of the processing of the N−n other images IVn+1, ..., IVN, M−n processed data items are obtained. If M = N, there are as many processed data items as there are views to be processed. If M