Abstract: Apparatus for predicting a predetermined block (18) of a picture using a plurality of reference samples (17a, c). The apparatus is configured to form (100) a sample value vector (102, 400) out of the plurality of reference samples, derive from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, compute a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predict samples of the predetermined block on the basis of the prediction vector.
Block-based Prediction
Description
The present application concerns the field of block-based prediction. Embodiments are related to an advantageous way for determining a prediction vector.
Today there exist different block-based intra and inter prediction modes. Samples neighboring a block to be predicted or samples obtained from other pictures can form a sample vector which can undergo a matrix multiplication to determine a prediction signal for the block to be predicted.
The matrix multiplication should preferably be carried out in integer arithmetic and a matrix derived by some machine-learning based training algorithm should be used for the matrix multiplication.
However, such a training algorithm usually only results in a matrix that is given in floating point precision. Thus, one is faced with the problem of specifying integer operations such that the matrix multiplication is well approximated using these integer operations, and/or of achieving a computational efficiency improvement, and/or of rendering the prediction more effective in terms of implementation.
This is achieved by the subject matter of the independent claims of the present application.
Further embodiments according to the invention are defined by the subject matter of the dependent claims of the present application.
Summary of the Invention
In accordance with a first aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to determine a prediction vector by an encoder or decoder is the possible inability to use integer arithmetic for calculating a prediction vector for a predetermined block. According to the first aspect of the present application, this difficulty is overcome by deriving from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, so that the sample value vector is not directly applied in a matrix-vector product calculating the prediction vector. Instead, the matrix-vector product is computed between the further vector and a predetermined prediction matrix to calculate the prediction vector. The further vector is, for example, derived such that samples of the predetermined block can be predicted by the apparatus using integer arithmetic operations and/or fixed point arithmetic operations. This is based on the idea that components of the sample value vector are correlated, whereby an advantageous predetermined invertible linear transform can be used, for example, to obtain a further vector with mainly small entries, which enables the usage of an integer matrix and/or a matrix with fixed point values and/or a matrix with small expected quantization errors as the predetermined prediction matrix.
Accordingly, in accordance with a first aspect of the present application, an apparatus for predicting a predetermined block of a picture using a plurality of reference samples, is configured to form a sample value vector out of the plurality of reference samples. The reference samples are, for example, samples neighboring the predetermined block at intra prediction or samples in another picture at inter prediction. According to an embodiment, the reference samples can be reduced, for example, by averaging to obtain a sample value vector with a reduced number of values. Furthermore, the apparatus is configured to derive from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, compute a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predict samples of the predetermined block on the basis of the prediction vector. Based on the further vector, the prediction of samples of the predetermined block can represent an integer approximation of a direct matrix-vector-product between the sample value vector and a matrix to obtain predicted samples of the predetermined block.
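As an illustration of the reduction of reference samples by averaging mentioned above, the following sketch averages groups of adjacent boundary samples to form a shortened sample value vector. The function name and the group size of two are purely illustrative assumptions, not part of the claimed subject matter:

```python
# Illustrative sketch: reduce boundary reference samples to a shorter
# sample value vector by averaging groups of adjacent samples.
# A group size of 2 is an assumption chosen for illustration only.
def form_sample_value_vector(reference_samples, group_size=2):
    vector = []
    for i in range(0, len(reference_samples), group_size):
        group = reference_samples[i:i + group_size]
        vector.append(sum(group) // len(group))  # integer average per group
    return vector
```

For example, the eight boundary samples [100, 102, 98, 96, 50, 52, 48, 46] would be reduced to the four-component sample value vector [101, 97, 51, 47].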
The direct matrix-vector-product between the sample value vector and the matrix can equal a second matrix-vector-product between the further vector and a second matrix. The second matrix and/or the matrix are, for example, machine learning prediction matrices. According to an embodiment, the second matrix can be based on the predetermined prediction matrix and an integer matrix. The second matrix equals, for example, a sum of the predetermined prediction matrix and an integer matrix. In other words, the second matrix-vector-product between the further vector and the second matrix can be represented by the matrix-vector product between the further vector and the predetermined prediction matrix and a further matrix-vector product between the integer matrix and the further vector. The integer matrix is, for example, a matrix with a predetermined column i0 consisting of ones and with columns i ≠ i0 being zero. Thus a good integer approximation and/or a good fixed-point value approximation of the first and/or the second matrix-vector-product can be achieved by the apparatus. This is based on the idea that the predetermined prediction matrix can be quantized or is already a quantized matrix, since the further vector comprises mainly small values, resulting in a marginal impact of possible quantization errors in the approximation of the first and/or second matrix-vector-product.
According to an embodiment, the invertible linear transform times a sum of the predetermined prediction matrix and an integer matrix can correspond to a quantized version of a machine learning prediction matrix. The integer matrix is, for example, a matrix with a predetermined column i0 consisting of ones and with columns i ≠ i0 being zero.
According to an embodiment, the invertible linear transform is defined such that a predetermined component of the further vector becomes a, and each of the other components of the further vector, except the predetermined component, equals a corresponding component of the sample value vector minus a, wherein a is a predetermined value. Thus a further vector with small values can be realized, enabling a quantization of the predetermined prediction matrix and resulting in a marginal impact of the quantization error on the predicted samples of the predetermined block. With this further vector it is possible to predict the samples of the predetermined block by integer arithmetic operations and/or fixed point arithmetic operations.
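A minimal sketch of this transform, assuming the predetermined value a is chosen as the integer mean of the sample value vector (one of the options discussed below), could look as follows; the function name and the integer-mean choice are illustrative assumptions:

```python
# Illustrative sketch of the predetermined invertible linear transform:
# the predetermined component i0 of the further vector is set to a,
# every other component becomes the corresponding sample value minus a.
# Here a is taken as the (floored) integer mean of the sample values,
# which is one option; a default or signalled value could be used instead.
def derive_further_vector(sample_vector, i0):
    a = sum(sample_vector) // len(sample_vector)  # predetermined value a
    further = [s - a for s in sample_vector]      # small residual components
    further[i0] = a                               # component i0 carries a
    return further
```

Applied to the sample value vector [100, 102, 98, 96] with i0 = 0, this yields the further vector [99, 3, -1, -3], whose components other than the predetermined component are small, as intended.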
According to an embodiment, the predetermined value is one of an average, such as an arithmetic mean or weighted average, of components of the sample value vector, a default value, a value signalled in a data stream into which the picture is coded, and a component of the sample value vector corresponding to the predetermined component. The sample value vector is, for example, composed of the plurality of reference samples or of averages of groups of reference samples out of the plurality of reference samples. A group of reference samples comprises, for example, at least two reference samples, preferably adjacent reference samples.
The predetermined value is, for example, the arithmetic mean or weighted average of some components (e.g., of at least two components) of the sample value vector or of all components of the sample value vector. This is based on the idea that the components of the sample value vector are correlated, i.e. values of components may be similar and/or at least some of the components may have equal values, whereby components of the further vector not equal to the predetermined component of the further vector, i.e. components i for i ≠ i0, wherein i0 represents the predetermined component, probably have an absolute value smaller than the corresponding component of the sample value vector. Thus a further vector with small values can be realized.
The predetermined value can be a default value, wherein the default value is, for example, chosen from a list of default values or is the same for all block sizes, prediction modes and so on. The components of the list of default values can be associated with different block sizes, prediction modes, sample value vector sizes, averages of values of sample value vectors and so on. Thus, for example, depending on the predetermined block, i.e. depending on decoding or encoding settings associated with the predetermined block, an optimized default value is chosen from the list of default values by the apparatus.
Alternatively, the predetermined value can be a value signalled in a data stream into which the picture is coded. In this case, for example, an apparatus for encoding determines the predetermined value. The determination of the predetermined value can be based on the same considerations as described above in the context of the default value.
Components of the further vector not equal to the predetermined component of the further vector, i.e. components i for i ≠ i0, wherein i0 represents the predetermined component, have, for example, an absolute value smaller than the corresponding component of the sample value vector with the usage of the default value or the value signalled in the data stream as the predetermined value.
According to an embodiment, the predetermined value can be a component of the sample value vector corresponding to the predetermined component. In other words, the value of a component of the sample value vector corresponding to the predetermined component does not change by applying the invertible linear transform. Thus the value of a component of the sample value vector corresponding to the predetermined component equals, for example, the value of the predetermined component of the further vector.
The predetermined component is, for example, chosen by default, e.g. as described above with regard to the predetermined value as default value. It is clear that the predetermined component can be chosen by an alternative procedure. The predetermined component is, for example, chosen similarly to the predetermined value. According to an embodiment, the predetermined component is chosen such that a value of a corresponding component of the sample value vector is equal to or has only a marginal deviation from an average of values of the sample value vector.
According to an embodiment, matrix components of the predetermined prediction matrix within a column of the predetermined prediction matrix which corresponds to the predetermined component of the further vector are all zero. The apparatus is configured to compute the matrix-vector product, i.e. the matrix-vector product between the further vector and the predetermined prediction matrix, by performing multiplications by computing a matrix-vector product between a reduced prediction matrix, resulting from the predetermined prediction matrix by leaving away the column, i.e. the column consisting of zeros, and an even further vector, resulting from the further vector by leaving away the predetermined component. This is based on the idea that the predetermined component of the further vector is set to the predetermined value and that this predetermined value is exactly or close to sample values in a prediction signal for the predetermined block, if the values of the sample value vector are correlated. Thus the prediction of the samples of the predetermined block is optionally based on the predetermined prediction matrix times the further vector, or rather the reduced prediction matrix times the even further vector, plus an integer matrix, whose column i0, which column corresponds to the predetermined component, consists of ones and all of whose other columns i ≠ i0 are zero, times the further vector. In other words, a transformed machine learning prediction matrix, e.g. transformed by an inverse transform of the predetermined invertible linear transform, can be split, based on the further vector, into the predetermined prediction matrix, or rather the reduced prediction matrix, and the integer matrix.
Thus only the prediction matrix needs to be quantized to obtain an integer approximation of the machine learning prediction matrix and/or of the transformed machine learning prediction matrix, which is advantageous since the even further vector does not comprise the predetermined component and all other components have much smaller absolute values than the corresponding components of the sample value vector, enabling a marginal impact of the quantization error on the resulting quantization of the machine learning prediction matrix and/or of the transformed machine learning prediction matrix. Furthermore, with the reduced prediction matrix and the even further vector fewer multiplications have to be performed to obtain the prediction vector, reducing the complexity and resulting in a higher computing efficiency. Optionally, a vector with all components being the predetermined value a can be added to the prediction vector at the prediction of the samples of the predetermined block. This vector can be obtained, as described above, by the matrix-vector product between the integer matrix and the further vector.
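The split just described, a reduced prediction matrix applied to the even further vector plus the predetermined value a contributed by the integer matrix, can be sketched as follows; the function name and the matrix values in the usage example are illustrative, not trained parameters:

```python
# Illustrative sketch: prediction vector = reduced prediction matrix
# (column i0 removed) times the even further vector (component i0
# removed), plus the predetermined value a added to every component,
# which is the contribution of the integer matrix whose column i0
# consists of ones while all other columns are zero.
def predict_with_reduced_matrix(reduced_matrix, further, i0):
    a = further[i0]                                # predetermined value a
    even_further = further[:i0] + further[i0 + 1:]
    prediction = []
    for row in reduced_matrix:
        acc = sum(m * y for m, y in zip(row, even_further))
        prediction.append(acc + a)                 # add the integer-matrix part
    return prediction
```

With the further vector [99, 3, -1, -3] (i0 = 0) and, purely for illustration, a reduced matrix whose rows pick out single components, the result is [3 + 99, -1 + 99] = [102, 98].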
According to an embodiment, a matrix, which results from summing each matrix component of the predetermined prediction matrix within a column of the predetermined prediction matrix, which corresponds to the predetermined component of the further vector, with one, times the predetermined invertible linear transform corresponds to a quantized version of a machine learning prediction matrix. The summing of each matrix component of the predetermined prediction matrix within a column of the predetermined prediction matrix, which corresponds to the predetermined component of the further vector, with one, represents, for example, the transformed machine learning prediction matrix. The transformed machine learning prediction matrix represents, for example, a machine learning prediction matrix transformed by an inverse transform of the predetermined invertible linear transform. The summation can correspond to a summation of the predetermined prediction matrix with an integer matrix, whose column i0, which column corresponds to the predetermined component, consists of ones and all of whose other columns i ≠ i0 are zero.
According to an embodiment, the apparatus is configured to represent the predetermined prediction matrix using prediction parameters and to compute the matrix-vector product by performing multiplications and summations on the components of the further vector and the prediction parameters and intermediate results resulting therefrom. Absolute values of the prediction parameters are representable by an n-bit fixed point number representation with n being equal to or lower than 14, or, alternatively, 10, or, alternatively, 8. In other words, prediction parameters are multiplied and/or summed to elements of the matrix-vector product, like the further vector, the predetermined prediction matrix and/or the prediction vector. By the multiplication and summation operations a fixed point format, e.g. of the predetermined prediction matrix, of the prediction vector and/or of the predicted samples of the predetermined block can be obtained.
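The n-bit fixed point representation of the prediction parameters can be sketched as follows; the choice of n = 8 with 6 fractional bits, the rounding mode and the function names are illustrative assumptions within the bounds stated above:

```python
# Illustrative sketch: quantize floating point prediction parameters to
# an n-bit signed fixed point representation with f fractional bits and
# evaluate the matrix-vector product purely in integer arithmetic.
def quantize_fixed_point(value, n=8, f=6):
    scaled = round(value * (1 << f))   # scale by 2**f and round
    hi = (1 << (n - 1)) - 1            # largest n-bit signed value
    return max(-hi - 1, min(hi, scaled))

def fixed_point_matvec(float_matrix, vector, n=8, f=6):
    q = [[quantize_fixed_point(v, n, f) for v in row] for row in float_matrix]
    # integer multiply-accumulate per row, then shift the f fractional bits out
    return [sum(m * x for m, x in zip(row, vector)) >> f for row in q]
```

For instance, the row [0.5, 0.25] applied to [8, 4] yields (32·8 + 16·4) >> 6 = 5, matching the exact result 0.5·8 + 0.25·4 = 5.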
An embodiment according to the invention is related to an apparatus for encoding a picture, comprising an apparatus for predicting a predetermined block of the picture using a plurality of reference samples according to any of the herein described embodiments, to obtain a prediction signal. Furthermore, the apparatus comprises an entropy encoder configured to encode a prediction residual for the predetermined block for correcting the prediction signal. For the prediction of the predetermined block to obtain the prediction signal, the apparatus is, for example, configured to form a sample value vector out of the plurality of reference samples, derive from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, compute a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predict samples of the predetermined block on the basis of the prediction vector.
An embodiment according to the invention is related to an apparatus for decoding a picture, comprising an apparatus for predicting a predetermined block of the picture using a plurality of reference samples according to any of the herein described embodiments, to obtain a prediction signal. Furthermore, the apparatus comprises an entropy decoder configured to decode a prediction residual for the predetermined block, and a prediction corrector configured to correct the prediction signal using the prediction residual. For the prediction of the predetermined block to obtain the prediction signal, the apparatus is, for example, configured to form a sample value vector out of the plurality of reference samples, derive from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, compute a matrix-vector product between the further vector and a predetermined
prediction matrix so as to obtain a prediction vector, and predict samples of the predetermined block on the basis of the prediction vector.
An embodiment according to the invention is related to a method for predicting a predetermined block of a picture using a plurality of reference samples, comprising forming a sample value vector out of the plurality of reference samples, deriving from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, computing a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predicting samples of the predetermined block on the basis of the prediction vector.
An embodiment according to the invention is related to a method for encoding a picture, comprising predicting a predetermined block of the picture using a plurality of reference samples according to the above described method, to obtain a prediction signal, and entropy encoding a prediction residual for the predetermined block for correcting the prediction signal.
An embodiment according to the invention is related to a method for decoding a picture, comprising predicting a predetermined block of the picture using a plurality of reference samples according to one of the methods described above, to obtain a prediction signal, entropy decoding a prediction residual for the predetermined block, and correcting the prediction signal using the prediction residual.
An embodiment according to the invention is related to a data stream having a picture encoded thereinto using a herein described method for encoding a picture.
An embodiment according to the invention is related to a computer program having a program code for performing, when running on a computer, a method of any of the herein described embodiments.
Brief Description of the Drawings
The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
Fig. 1 shows an embodiment of an encoding into a data stream;
Fig. 2 shows an embodiment of an encoder;
Fig. 3 shows an embodiment of a reconstruction of a picture;
Fig. 4 shows an embodiment of a decoder;
Fig. 5 shows a schematic diagram of a prediction of a block for encoding and/or decoding, according to an embodiment;
Fig. 6 shows a matrix operation for a prediction of a block for encoding and/or decoding according to an embodiment;
Fig. 7.1 shows a prediction of a block with a reduced sample value vector according to an embodiment;
Fig. 7.2 shows a prediction of a block using an interpolation of samples according to an embodiment;
Fig. 7.3 shows a prediction of a block with a reduced sample value vector, wherein only some boundary samples are averaged, according to an embodiment;
Fig. 7.4 shows a prediction of a block with a reduced sample value vector, wherein groups of four boundary samples are averaged, according to an embodiment;
Fig. 8 shows a schematic diagram of an apparatus for predicting a block according to an embodiment;
Fig. 9 shows matrix operations performed by an apparatus according to an embodiment;
Fig. 10a-c show detailed matrix operations performed by an apparatus according to an embodiment;
Fig. 11 shows detailed matrix operations performed by an apparatus using offset and scaling parameters, according to an embodiment;
Fig. 12 shows detailed matrix operations performed by an apparatus using offset and scaling parameters, according to a different embodiment; and
Fig. 13 shows a block diagram of a method for predicting a predetermined block, according to an embodiment.
Detailed Description of the Embodiments
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals even if occurring in different figures.
In the following description, a plurality of details is set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other, unless specifically noted otherwise.
1 Introduction
In the following, different inventive examples, embodiments and aspects will be described. At least some of these examples, embodiments and aspects refer, inter alia, to methods and/or apparatus for video coding and/or for performing block-based predictions, e.g. using linear or affine transforms with neighboring sample reduction, and/or for optimizing video delivery (e.g., broadcast, streaming, file playback, etc.), e.g., for video applications and/or for virtual reality applications.
Further, examples, embodiments and aspects may refer to High Efficiency Video Coding (HEVC) or successors. Also, further embodiments, examples and aspects will be defined by the enclosed claims.
It should be noted that any embodiments, examples and aspects as defined by the claims can be supplemented by any of the details (features and functionalities) described in the following chapters.
Also, the embodiments, examples and aspects described in the following chapters can be used individually, and can also be supplemented by any of the features in another chapter, or by any feature included in the claims.
Also, it should be noted that individual examples, embodiments and aspects described herein can be used individually or in combination. Thus, details can be added to each of said individual aspects without adding details to another one of said examples, embodiments and aspects.
It should also be noted that the present disclosure describes, explicitly or implicitly, features of a decoding and/or encoding system and/or method.
Moreover, features and functionalities disclosed herein relating to a method can also be used in an apparatus. Furthermore, any features and functionalities disclosed herein with respect to an apparatus can also be used in a corresponding method. In other words, the methods disclosed herein can be supplemented by any of the features and functionalities described with respect to the apparatuses.
Also, any of the features and functionalities described herein can be implemented in hardware or in software, or using a combination of hardware and software, as will be described in the section "implementation alternatives".
Moreover, any of the features described in parentheses ("(...)" or "[...]") may be considered as optional in some examples, embodiments, or aspects.
2 Encoders, decoders
In the following, various examples are described which may assist in achieving a more effective compression when using block-based prediction. Some examples achieve high compression efficiency by spending a set of intra-prediction modes. The latter ones may be added to other intra-prediction modes heuristically designed, for instance, or may be provided exclusively. And even other examples make use of both of the just-discussed specialties. As a variation of these embodiments, however, intra prediction may be turned into an inter prediction by using reference samples in another picture instead.
In order to ease the understanding of the following examples of the present application, the description starts with a presentation of possible encoders and decoders fitting thereto into which the subsequently outlined examples of the present application could be built. Fig. 1 shows an
apparatus for block-wise encoding a picture 10 into a datastream 12. The apparatus is indicated using reference sign 14 and may be a still picture encoder or a video encoder. In other words, picture 10 may be a current picture out of a video 16 when the encoder 14 is configured to encode video 16 including picture 10 into datastream 12, or encoder 14 may encode picture 10 into datastream 12 exclusively.
As mentioned, encoder 14 performs the encoding in a block-wise manner or on a block basis. To this end, encoder 14 subdivides picture 10 into blocks, in units of which encoder 14 encodes picture 10 into datastream 12. Examples of possible subdivisions of picture 10 into blocks 18 are set out in more detail below. Generally, the subdivision may end up in blocks 18 of constant size, such as an array of blocks arranged in rows and columns, or in blocks 18 of different block sizes, such as by use of a hierarchical multi-tree subdivisioning, with starting the multi-tree subdivisioning from the whole picture area of picture 10 or from a pre-partitioning of picture 10 into an array of tree blocks, wherein these examples shall not be treated as excluding other possible ways of subdivisioning picture 10 into blocks 18.
Further, encoder 14 is a predictive encoder configured to predictively encode picture 10 into datastream 12. For a certain block 18 this means that encoder 14 determines a prediction signal for block 18 and encodes the prediction residual, i.e. the prediction error by which the prediction signal deviates from the actual picture content within block 18, into datastream 12.
Encoder 14 may support different prediction modes so as to derive the prediction signal for a certain block 18. The prediction modes, which are of importance in the following examples, are intra-prediction modes according to which the inner of block 18 is predicted spatially from neighboring, already encoded samples of picture 10. The encoding of picture 10 into datastream 12 and, accordingly, the corresponding decoding procedure, may be based on a certain coding order 20 defined among blocks 18. For instance, the coding order 20 may traverse blocks 18 in a raster scan order such as row-wise from top to bottom with traversing each row from left to right, for instance. In case of hierarchical multi-tree based subdivisioning, raster scan ordering may be applied within each hierarchy level, wherein a depth-first traversal order may be applied, i.e. leaf nodes within a block of a certain hierarchy level may precede blocks of the same hierarchy level having the same parent block according to coding order 20. Depending on the coding order 20, neighboring, already encoded samples of a block 18 may be located usually at one or more sides of block 18. In case of the examples presented herein, for instance, neighboring, already encoded samples of a block 18 are located to the top of, and to the left of block 18.
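The raster-scan coding order 20 described above can be illustrated by a small sketch enumerating the block positions; the function name is an assumption chosen for illustration only:

```python
# Illustrative sketch: raster-scan coding order over the blocks of a
# picture, row-wise from top to bottom, each row traversed left to right.
def raster_scan_order(num_block_rows, num_block_cols):
    return [(r, c) for r in range(num_block_rows)
                   for c in range(num_block_cols)]
```

For a picture subdivided into 2×3 blocks this yields (0,0), (0,1), (0,2), (1,0), (1,1), (1,2); neighboring, already encoded samples of a given block then lie above it and to its left.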
Intra-prediction modes may not be the only ones supported by encoder 14. In case of encoder 14 being a video encoder, for instance, encoder 14 may also support inter-prediction modes according to which a block 18 is temporally predicted from a previously encoded picture of video 16. Such an inter-prediction mode may be a motion-compensated prediction mode according to which a motion vector is signaled for such a block 18, indicating a relative spatial offset of the portion from which the prediction signal of block 18 is to be derived as a copy. Additionally, or alternatively, other non-intra-prediction modes may be available as well, such as inter-view prediction modes in case of encoder 14 being a multi-view encoder, or non-predictive modes according to which the inner of block 18 is coded as is, i.e. without any prediction.
Before focusing the description of the present application on intra-prediction modes, a more specific example for a possible block-based encoder, i.e. for a possible implementation of encoder 14, is described with respect to Fig. 2, followed by two corresponding examples for decoders fitting to Figs. 1 and 2, respectively.
Fig. 2 shows a possible implementation of encoder 14 of Fig. 1, namely one where the encoder is configured to use transform coding for encoding the prediction residual, although this is merely an example and the present application is not restricted to that sort of prediction residual coding. According to Fig. 2, encoder 14 comprises a subtractor 22 configured to subtract from the inbound signal, i.e. picture 10 or, on a block basis, current block 18, the corresponding prediction signal 24 so as to obtain the prediction residual signal 26, which is then encoded by a prediction residual encoder 28 into datastream 12. The prediction residual encoder 28 is composed of a lossy encoding stage 28a and a lossless encoding stage 28b. The lossy stage 28a receives the prediction residual signal 26 and comprises a quantizer 30 which quantizes the samples of the prediction residual signal 26. As already mentioned above, the present example uses transform coding of the prediction residual signal 26 and, accordingly, the lossy encoding stage 28a comprises a transform stage 32 connected between subtractor 22 and quantizer 30 so as to spectrally decompose the prediction residual signal 26, with the quantization of quantizer 30 taking place on the transform coefficients representing the residual signal 26. The transform may be a DCT, DST, FFT, Hadamard transform or the like. The transformed and quantized prediction residual signal 34 is then subject to lossless coding by the lossless encoding stage 28b, which is an entropy coder entropy coding the quantized prediction residual signal 34 into datastream 12. Encoder 14 further comprises a prediction residual signal reconstruction stage 36 connected to the output of quantizer 30 so as to reconstruct from the transformed and quantized prediction residual signal 34 the prediction residual signal in a manner also available at the decoder, i.e. taking the coding loss introduced by quantizer 30 into account.
To this end, the prediction residual reconstruction stage 36 comprises a dequantizer 38 which performs the inverse of the quantization of quantizer 30, followed by an inverse transformer 40 which performs the inverse transformation relative to the transformation performed by transformer 32, i.e. the inverse of the spectral decomposition, such as the inverse of any of the above-mentioned specific transformation examples. Encoder 14 comprises an adder 42 which adds the reconstructed prediction residual signal as output by inverse transformer 40 and the prediction signal 24 so as to output a reconstructed signal, i.e. reconstructed samples. This output is fed into a predictor 44 of encoder 14 which then determines the prediction signal 24 based thereon. It is predictor 44 which supports all the prediction modes already discussed above with respect to Fig. 1. Fig. 2 also illustrates that in case of encoder 14 being a video encoder, encoder 14 may also comprise an in-loop filter 46 which filters completely reconstructed pictures which, after having been filtered, form reference pictures for predictor 44 with respect to inter-predicted blocks.
As already mentioned above, encoder 14 operates block-based. For the subsequent description, the block basis of interest is the one subdividing picture 10 into blocks for which the intra-prediction mode is selected out of a set or plurality of intra-prediction modes supported by predictor 44 or encoder 14, respectively, and for which the selected intra-prediction mode is performed individually. Other sorts of blocks into which picture 10 is subdivided may, however, exist as well. For instance, the above-mentioned decision whether picture 10 is inter-coded or intra-coded may be done at a granularity or in units of blocks deviating from blocks 18. For instance, the inter/intra mode decision may be performed at a level of coding blocks into which picture 10 is subdivided, and each coding block is subdivided into prediction blocks. Prediction blocks within coding blocks for which it has been decided that intra-prediction is used are each subject to an intra-prediction mode decision. That is, for each of these prediction blocks, it is decided which supported intra-prediction mode should be used for the respective prediction block. These prediction blocks are the blocks 18 of interest here. Prediction blocks within coding blocks associated with inter-prediction would be treated differently by predictor 44. They would be inter-predicted from reference pictures by determining a motion vector and copying the prediction signal for this block from a location in the reference picture pointed to by the motion vector. Another block subdivision pertains to the subdivision into transform blocks, in units of which the transformations by transformer 32 and inverse transformer 40 are performed. Transform blocks may, for instance, be the result of further subdividing coding blocks. Naturally, the examples set out herein should not be treated as limiting and other examples exist as well.
For the sake of completeness only, it is noted that the subdivision into coding blocks may, for instance, use multi-tree subdivision, and prediction blocks and/or transform blocks may be obtained by further subdividing coding blocks using multi-tree subdivision, as well.
A decoder 54 or apparatus for block-wise decoding fitting to the encoder 14 of Fig. 1 is depicted in Fig. 3. This decoder 54 does the opposite of encoder 14, i.e. it decodes from datastream 12 picture 10 in a block-wise manner and supports, to this end, a plurality of intra-prediction modes. The decoder 54 may comprise a residual provider 156, for example. All the other possibilities discussed above with respect to Fig. 1 are valid for the decoder 54, too. That is, decoder 54 may be a still picture decoder or a video decoder, and all the prediction modes and prediction possibilities are supported by decoder 54 as well. The difference between encoder 14 and decoder 54 lies, primarily, in the fact that encoder 14 chooses or selects coding decisions according to some optimization such as, for instance, in order to minimize some cost function which may depend on coding rate and/or coding distortion. One of these coding options or coding parameters may involve a selection of the intra-prediction mode to be used for a current block 18 among available or supported intra-prediction modes. The selected intra-prediction mode may then be signaled by encoder 14 for current block 18 within datastream 12, with decoder 54 redoing the selection using this signalization in datastream 12 for block 18. Likewise, the subdivision of picture 10 into blocks 18 may be subject to optimization within encoder 14, and corresponding subdivision information may be conveyed within datastream 12, with decoder 54 recovering the subdivision of picture 10 into blocks 18 on the basis of the subdivision information. Summarizing the above, decoder 54 may be a predictive decoder operating on a block basis and, besides intra-prediction modes, decoder 54 may support other prediction modes such as inter-prediction modes in case of, for instance, decoder 54 being a video decoder. In decoding, decoder 54 may also use the coding order 20 discussed with respect to Fig. 1 and, as this coding order 20 is obeyed both at encoder 14 and decoder 54, the same neighboring samples are available for a current block 18 both at encoder 14 and decoder 54. Accordingly, in order to avoid unnecessary repetition, the description of the mode of operation of encoder 14 shall also apply to decoder 54 as far as the subdivision of picture 10 into blocks is concerned, for instance, as far as prediction is concerned and as far as the coding of the prediction residual is concerned. Differences lie in the fact that encoder 14 chooses, by optimization, some coding options or coding parameters and signals within, or inserts into, datastream 12 the coding parameters which are then derived from the datastream 12 by decoder 54 so as to redo the prediction, subdivision and so forth.
Fig. 4 shows a possible implementation of the decoder 54 of Fig. 3, namely one fitting to the implementation of encoder 14 of Fig. 1 as shown in Fig. 2. As many elements of the decoder 54 of Fig. 4 are the same as those occurring in the corresponding encoder of Fig. 2, the same reference signs, provided with an apostrophe, are used in Fig. 4 in order to indicate these elements. In particular, adder 42', optional in-loop filter 46' and predictor 44' are connected into a prediction loop in the same manner as in the encoder of Fig. 2. The reconstructed, i.e. dequantized and retransformed, prediction residual signal applied to adder 42' is derived by a sequence of entropy decoder 56, which inverts the entropy encoding of entropy encoder 28b, followed by the residual signal reconstruction stage 36', which is composed of dequantizer 38' and inverse transformer 40', just as is the case on the encoding side. The decoder's output is the reconstruction of picture 10. The reconstruction of picture 10 may be available directly at the output of adder 42' or, alternatively, at the output of in-loop filter 46'. Some post-filter may be arranged at the decoder's output in order to subject the reconstruction of picture 10 to some post-filtering in order to improve the picture quality, but this option is not depicted in Fig. 4.
Again, with respect to Fig. 4 the description brought forward above with respect to Fig. 2 shall be valid for Fig. 4 as well with the exception that merely the encoder performs the optimization tasks and the associated decisions with respect to coding options. However, all the description with respect to block-subdivisioning, prediction, dequantization and retransforming is also valid for the decoder 54 of Fig. 4.
3 ALWIP (Affine Linear Weighted Intra Predictor)
Some non-limiting examples regarding ALWIP are herewith discussed, even if ALWIP is not always necessary to embody the techniques discussed here.
The present application is concerned, inter alia, with an improved block-based prediction mode concept for block-wise picture coding such as usable in a video codec such as HEVC or any successor of HEVC. The prediction mode may be an intra prediction mode, but theoretically the concepts described herein may be transferred onto inter prediction modes as well where the reference samples are part of another picture.
A block-based prediction concept allowing for an efficient implementation such as a hardware friendly implementation is sought.
This object is achieved by the subject-matter of the independent claims of the present application.
Intra-prediction modes are widely used in picture and video coding. In video coding, intraprediction modes compete with other prediction modes such as inter-prediction modes such as motion-compensated prediction modes. In intra-prediction modes, a current block is predicted on the basis of neighboring samples, i.e. samples already encoded as far as the encoder side is concerned, and already decoded as far as the decoder side is concerned. Neighboring sample values are extrapolated into the current block so as to form a prediction signal for the current block with the prediction residual being transmitted in the datastream for the current block. The better the prediction signal is, the lower the prediction residual is and, accordingly, a lower number of bits is necessary to code the prediction residual.
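As a concrete illustration of such extrapolation from neighboring samples, a conventional DC-style mode can be sketched as follows. This is a simplified, non-normative example under stated assumptions: real codecs additionally apply rounding, clipping and boundary filtering, and the function name is illustrative.

```python
import numpy as np

def dc_intra_predict(top, left):
    """Predict every sample of the current block as the mean of the
    already reconstructed reference samples above (`top`) and to the
    left (`left`); the block size is len(left) x len(top)."""
    top = np.asarray(top, dtype=float)
    left = np.asarray(left, dtype=float)
    dc = (top.sum() + left.sum()) / (top.size + left.size)
    return np.full((left.size, top.size), dc)
```

The closer the block's true content is to this flat extrapolation, the smaller the prediction residual that has to be transmitted.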
In order to be effective, several aspects should be taken into account in order to form an effective frame work for intra-prediction in a block-wise picture coding environment. For instance, the larger the number of intra-prediction modes supported by the codec, the larger the side information rate consumption is in order to signal the selection to the decoder. On the other hand, the set of supported intra-prediction modes should be able to provide a good prediction signal, i.e. a prediction signal resulting in a low prediction residual.
In the following, there is disclosed - as a comparison embodiment or basis example - an apparatus (encoder or decoder) for block-wise decoding a picture from a data stream, the apparatus supporting at least one intra-prediction mode according to which the intra-prediction signal for a block of a predetermined size of the picture is determined by applying a first template of samples which neighbours the current block onto an affine linear predictor which, in the sequel, shall be called Affine Linear Weighted Intra Predictor (ALWIP).
The apparatus may have at least one of the following properties (the same may apply to a method or to another technique, e.g. implemented in a non-transitory storage unit storing instructions which, when executed by a processor, cause the processor to implement the method and/or to operate as the apparatus):
3.1 Predictors may be complementary to other predictors
The intra-prediction modes which might form the subject of the implementational improvements described further below may be complementary to other intra-prediction modes of the codec. Thus, they may be complementary to the DC, Planar, or Angular prediction modes defined in the HEVC codec or the JEM reference software, respectively. The latter three types of intra-prediction modes shall be called conventional intra-prediction modes from now on. Thus, for a given block in intra mode, a flag needs to be parsed by the decoder which indicates whether one of the intra-prediction modes supported by the apparatus is to be used or not.
3.2 More than one proposed prediction mode
The apparatus may contain more than one ALWIP mode. Thus, in case that the decoder knows that one of the ALWIP modes supported by the apparatus is to be used, the decoder needs to parse additional information that indicates which of the ALWIP modes supported by the apparatus is to be used.
The signalization of the mode supported may have the property that the coding of some ALWIP modes may require less bins than other ALWIP modes. Which of these modes require less bins and which modes require more bins may either depend on information that can be extracted from the already decoded bitstream or may be fixed in advance.
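One purely illustrative way to obtain such unequal bin counts fixed in advance is a truncated unary binarization; this is an assumption for illustration only, as the actual binarization is not specified here.

```python
def truncated_unary_bins(mode_index, num_modes):
    """Truncated unary binarization: mode 0 costs a single bin, each
    later mode one more, and the last mode drops the terminating '0'."""
    if mode_index < num_modes - 1:
        return "1" * mode_index + "0"
    return "1" * mode_index
```

With four modes the codewords are 0, 10, 110 and 111, so modes placed earlier in the list require fewer bins; which modes are placed earlier may, as stated above, depend on already decoded bitstream information or be fixed in advance.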
4 Some aspects
Fig. 3 shows the decoder 54 for decoding a picture from a data stream 12. The decoder 54 may be configured to decode a predetermined block 18 of the picture. In particular, the predictor 44 may be configured for mapping a set of P neighboring samples neighboring the predetermined block 18 using a linear or affine linear transformation [e.g., ALWIP] onto a set of Q predicted values for samples of the predetermined block.
As shown in Fig. 5, a predetermined block 18 comprises Q values to be predicted (which, at the end of the operations, will be "predicted values"). If the block 18 has M rows and N columns, Q = M·N. The Q values of the block 18 may be in the spatial domain (e.g., pixels) or in the transform domain (e.g., DCT, Discrete Wavelet Transform, etc.). The Q values of the block 18 may be predicted on the basis of P values taken from the neighboring blocks 17a-17c, which are in general adjacent to the block 18. The P values of the neighboring blocks 17a-17c may be in the closest positions (e.g., adjacent) to the block 18. The P values of the neighboring blocks 17a-17c have already been processed and predicted. The P values are indicated as values in portions 17'a-17'c, to distinguish them from the blocks they are part of (in some examples, 17'b is not used).
As shown in Fig. 6, in order to perform the prediction, it is possible to operate with a first vector 17P with P entries (each entry being associated with a particular position in the neighboring portions 17'a-17'c), a second vector 18Q with Q entries (each entry being associated with a particular position in the block 18), and a mapping matrix 17M (each row being associated with a particular position in the block 18, each column being associated with a particular position in the neighboring portions 17'a-17'c). The mapping matrix 17M therefore performs the prediction of the P values of the neighboring portions 17'a-17'c into values of the block 18 according to a predetermined mode.
The entries in the mapping matrix 17M may therefore be understood as weighting factors. In the following passages, we will refer to the neighboring portions of the boundary using the signs 17a-17c instead of 17'a-17'c.
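The mapping of vector 17P onto vector 18Q via matrix 17M amounts to a single matrix-vector product followed by a reshape; a minimal sketch is given below (the function name is illustrative, and floating point is used for clarity).

```python
import numpy as np

def map_boundary_to_block(matrix_17M, vector_17P, M, N):
    """Matrix 17M has Q = M*N rows (one per position in block 18) and
    P columns (one per position in the neighboring portions); the
    product yields the prediction vector 18Q, reshaped to the M x N
    block 18."""
    vector_18Q = matrix_17M @ vector_17P
    return vector_18Q.reshape(M, N)
```

Each row of matrix 17M thus holds the P weighting factors that produce one predicted value of block 18 from the boundary.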
Several conventional modes are known in the art, such as the DC mode, the planar mode and 65 directional prediction modes; in total, for example, 67 modes may be known.
However, it has been noted that it is also possible to make use of different modes, which are here called linear or affine linear transformations. The linear or affine linear transformation comprises P·Q weighting factors, among which at least ¼·P·Q weighting factors are non-zero weighting values, and which comprise, for each of the Q predicted values, a series of P weighting factors relating to the respective predicted value. The series, when arranged one below the other according to a raster scan order among the samples of the predetermined block, form an envelope which is omnidirectionally non-linear.
Consider the P positions of the neighboring values 17'a-17'c (the template), the Q positions of the samples of the predetermined block 18, and the values of the P·Q weighting factors of the matrix 17M. A plane is an example of the envelope of the series for a DC transformation: its envelope is evidently planar and is therefore excluded by the definition of the linear or affine linear transformation (ALWIP). Another example is a matrix resulting in an emulation of an angular mode: its envelope would likewise be excluded from the ALWIP definition and would, figuratively speaking, look like a hill leading obliquely from top to bottom along a direction in the P/Q plane. The planar mode and the 65 directional prediction modes would have different envelopes, which would, however, be linear in at least one direction, namely in all directions for the exemplified DC mode and in the hill direction for an angular mode, for example.
In contrast, the envelope of the linear or affine transformation will not be omnidirectionally linear. It has been understood that such a kind of transformation may be optimal, in some situations, for performing the prediction for the block 18. It has been noted that it is preferable that at least ¼ of the weighting factors are different from zero (i.e., at least 25% of the P·Q weighting factors are different from 0).
The weighting factors may be unrelated to each other, not following any regular mapping rule. Hence, a matrix 17M may be such that the values of its entries have no apparent recognizable relationship. For example, the weighting factors cannot be described by any analytical or differential function.
In examples, an ALWIP transformation is such that a mean of maxima of cross correlations between a first series of weighting factors relating to the respective predicted value and a second series of weighting factors relating to predicted values other than the respective predicted value, or a reversed version of the latter series, whichever leads to a higher maximum, may be lower than a predetermined threshold (e.g., 0.2 or 0.3 or 0.35 or 0.1, e.g., a threshold in a range between 0.05 and 0.035). For example, for each couple (i1, i2) of rows of the ALWIP matrix 17M, a cross correlation may be calculated by multiplying the P values of the i1-th row by the P values of the i2-th row. For each obtained cross correlation, the maximum value may be obtained. Hence, a mean (average) may be obtained for the whole matrix 17M (i.e., the maxima of the cross correlations in all combinations are averaged). After that, the threshold may be, e.g., 0.2 or 0.3 or 0.35 or 0.1, e.g., a threshold in a range between 0.05 and 0.035.
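The mean-of-maxima criterion described above can be sketched as follows. Note one assumption: the rows are normalized to unit energy before correlating, since the text leaves the scaling of the cross correlations open; the function name is likewise illustrative.

```python
import numpy as np

def mean_of_max_cross_correlations(mat):
    """For every ordered pair (i1, i2) of distinct rows of the matrix,
    compute the cross-correlation of row i1 with row i2 and with the
    reversed row i2, keep whichever maximum is higher, and average
    these maxima over all pairs."""
    rows = [r / (np.linalg.norm(r) or 1.0) for r in np.asarray(mat, dtype=float)]
    maxima = []
    for i1, r1 in enumerate(rows):
        for i2, r2 in enumerate(rows):
            if i1 == i2:
                continue
            m_fwd = np.max(np.correlate(r1, r2, mode="full"))
            m_rev = np.max(np.correlate(r1, r2[::-1], mode="full"))
            maxima.append(max(m_fwd, m_rev))
    return float(np.mean(maxima))
```

The resulting mean would then be compared against the predetermined threshold mentioned above.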
Claims
1. Apparatus (1000) for predicting a predetermined block (18) of a picture (10) using a plurality of reference samples (17a, c), configured to
form (100) a sample value vector (102, 400) out of the plurality of reference samples (17a, c),
derive from the sample value vector (102, 400) a further vector (402) onto which the sample value vector (102, 400) is mapped by a predetermined invertible linear transform (403),
compute a matrix-vector product (404) between the further vector (402) and a predetermined prediction matrix (405) so as to obtain a prediction vector (406), and
predict samples of the predetermined block (18) on the basis of the prediction vector (406).
2. Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that
a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and
each of other components of the further vector (402), except the predetermined component (1500), equal a corresponding component of the sample value vector (102, 400) minus a,
wherein a is a predetermined value (1400).
3. Apparatus (1000) of claim 2, wherein the predetermined value (1400) is one of
an average, such as an arithmetic mean or weighted average, of components of the sample value vector (102, 400),
a default value,
a value signalled in a data stream into which the picture (10) is coded, and
a component of the sample value vector (102, 400) corresponding to the predetermined component (1500).
4. Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that
a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and
each of other components of the further vector (402), except the predetermined component (1500), equal a corresponding component of the sample value vector (102, 400) minus a,
wherein a is an arithmetic mean of components of the sample value vector (102, 400).
5. Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that
a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and
each of other components of the further vector (402), except the predetermined component (1500), equal a corresponding component of the sample value vector (102, 400) minus a,
wherein a is a component of the sample value vector (102, 400) corresponding to the predetermined component (1500),
wherein the apparatus (1000) is configured to
comprise a plurality of invertible linear transforms, each of which is associated with one component of the further vector (402),
select the predetermined component (1500) out of the components of the sample value vector (102, 400) and
use the invertible linear transform out of the plurality of invertible linear transforms which is associated with the predetermined component (1500) as the predetermined invertible linear transform (403).
6. Apparatus (1000) of any of claims 2 to 5, wherein matrix components of the predetermined prediction matrix (405) within a column of the predetermined prediction matrix (405) which corresponds to the predetermined component (1500) of the further vector (402) are all zero and the apparatus (1000) is configured to
compute the matrix-vector product (404) by computing a matrix-vector product (407) between a reduced prediction matrix (405), resulting from the predetermined prediction matrix (405) by leaving away the column (412), and an even further vector (410) resulting from the further vector (402) by leaving away the predetermined component (1500).
7. Apparatus (1000) of any of claims 2 to 6, configured to, in predicting the samples of the predetermined block (18) on the basis of the prediction vector (406),
compute for each component of the prediction vector (406) a sum of the respective component and a.
8. Apparatus (1000) of any of claims 2 to 7, wherein a matrix, which results from summing each matrix component of the predetermined prediction matrix (405) within a column of the predetermined prediction matrix (405), which corresponds to the predetermined component (1500) of the further vector (402), with one, times the predetermined invertible linear transform (403) corresponds to a quantized version of a machine learning prediction matrix (1 100).
9. Apparatus (1000) of any of the previous claims, configured to
form (100) the sample value vector (102, 400) out of the plurality of reference samples (17a, c) by, for each component of the sample value vector (102, 400),
adopting one reference sample of the plurality of reference samples (17a,c) as the respective component of the sample value vector (102, 400), and/or
averaging two or more components of the sample value vector (102, 400) to obtain the respective component of the sample value vector (102, 400).
10. Apparatus (1000) of any of the previous claims, wherein the plurality of reference samples (17a, c) is arranged within the picture (10) alongside an outer edge of the predetermined block
(18).
11. Apparatus (1000) of any of the previous claims, configured to compute the matrix-vector product (404) using fixed point arithmetic operations.
12. Apparatus (1000) of any of the previous claims, configured to compute the matrix-vector product (404) without floating point arithmetic operations.
13. Apparatus (1000) of any of the previous claims, configured to store a fixed point number representation of the predetermined prediction matrix (405).
14. Apparatus (1000) of any of the previous claims, configured to represent the predetermined prediction matrix (405) using prediction parameters and to compute the matrix-vector product (404) by performing multiplications and summations on the components of the further vector (402) and the prediction parameters and intermediate results resulting therefrom, wherein absolute values of the prediction parameters are representable by an n-bit fixed point number representation with n being equal to or lower than 14, or, alternatively, 10, or, alternatively, 8.
15. Apparatus (1000) of claim 14, wherein the prediction parameters comprise
weights each of which is associated with a corresponding matrix component of the predetermined prediction matrix (405).
16. Apparatus (1000) of claim 15, wherein the prediction parameters further comprise
one or more scaling factors each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix (405) for scaling the weight associated with the one or more corresponding matrix component of the predetermined prediction matrix (405), and/or
one or more offsets each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix (405) for offsetting the weight associated with the one or more corresponding matrix component of the predetermined prediction matrix (405).
17. Apparatus (1000) of any of the previous claims, configured to, in predicting the samples of the predetermined block (18) on the basis of the prediction vector (406),
use interpolation to compute at least one sample position of the predetermined block (18) based on the prediction vector (406) each component of which is associated with a corresponding position within the predetermined block (18).
18. Apparatus for encoding a picture comprising,
an apparatus for predicting a predetermined block (18) of the picture using a plurality of reference samples (17a,c) according to any of the previous claims, to obtain a prediction signal, and an entropy encoder configured to encode a prediction residual for the predetermined block for correcting the prediction signal.
19. Apparatus for decoding a picture comprising,
an apparatus for predicting a predetermined block (18) of the picture using a plurality of reference samples (17a,c) according to any of claims 1 to 17, to obtain a prediction signal,
an entropy decoder configured to decode a prediction residual for the predetermined block, and a prediction corrector configured to correct the prediction signal using the prediction residual.
20. Method (2000) for predicting a predetermined block (18) of a picture using a plurality of reference samples (17a, c), comprising
forming (2100, 100) a sample value vector (102, 400) out of the plurality of reference samples,
deriving (2200) from the sample value vector a further vector (402) onto which the sample value vector is mapped by a predetermined invertible linear transform (403),
computing (2300) a matrix-vector product (404) between the further vector (402) and a predetermined prediction matrix (405) so as to obtain a prediction vector (406), and
predicting (2400) samples of the predetermined block on the basis of the prediction vector
(406).
21. Method for encoding a picture comprising,
predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to the method (2000) of claim 20, to obtain a prediction signal, and
entropy encoding a prediction residual for the predetermined block for correcting the prediction signal.
22. Method for decoding a picture comprising,
predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to the method (2000) of claim 20, to obtain a prediction signal,
entropy decoding a prediction residual for the predetermined block, and
correcting the prediction signal using the prediction residual.
23. Data stream having a picture encoded thereinto using a method of claim 21.
24. Computer program having a program code for performing, when running on a computer, a method of any of claims 20 to 22.
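The method of claims 1 to 7 can be sketched end to end as follows. This is a non-normative floating-point illustration under stated assumptions: component 0 is taken as the predetermined component, a is the arithmetic mean of the sample value vector (claim 4), and the reduced-matrix shortcut of claim 6 is applied; a real implementation would use the fixed-point arithmetic of claims 11 to 13, and the function name is illustrative.

```python
import numpy as np

def predict_block(ref_samples, reduced_matrix):
    """Sketch of the prediction pipeline of claims 1 to 7."""
    v = np.asarray(ref_samples, dtype=float)   # sample value vector (claim 1)
    a = v.mean()                               # predetermined value a (claim 4)
    further = v - a                            # other components become v_i - a (claim 2)
    further[0] = a                             # predetermined component becomes a
    # Claim 6: the column of the prediction matrix corresponding to the
    # predetermined component is all zero, so the product is computed
    # with the reduced matrix and the "even further" vector that omits
    # component 0.
    prediction = reduced_matrix @ further[1:]
    return prediction + a                      # claim 7: add a to each component
```

Since the other components carry only the deviations from a, they tend to be small for correlated reference samples, which is what permits the small-magnitude, quantization-friendly prediction matrices discussed in the summary.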
| # | Name | Date |
|---|---|---|
| 1 | 202127050571-IntimationOfGrant29-02-2024.pdf | 2024-02-29 |
| 2 | 202127050571.pdf | 2021-11-03 |
| 3 | 202127050571-PatentCertificate29-02-2024.pdf | 2024-02-29 |
| 4 | 202127050571-STATEMENT OF UNDERTAKING (FORM 3) [03-11-2021(online)].pdf | 2021-11-03 |
| 5 | 202127050571-REQUEST FOR EXAMINATION (FORM-18) [03-11-2021(online)].pdf | 2021-11-03 |
| 6 | 202127050571-FORM 3 [23-11-2023(online)].pdf | 2023-11-23 |
| 7 | 202127050571-NOTIFICATION OF INT. APPLN. NO. & FILING DATE (PCT-RO-105-PCT Pamphlet) [03-11-2021(online)].pdf | 2021-11-03 |
| 8 | 202127050571-CLAIMS [29-11-2022(online)].pdf | 2022-11-29 |
| 9 | 202127050571-FORM 18 [03-11-2021(online)].pdf | 2021-11-03 |
| 10 | 202127050571-COMPLETE SPECIFICATION [29-11-2022(online)].pdf | 2022-11-29 |
| 11 | 202127050571-FORM 1 [03-11-2021(online)].pdf | 2021-11-03 |
| 12 | 202127050571-DRAWING [29-11-2022(online)].pdf | 2022-11-29 |
| 13 | 202127050571-FIGURE OF ABSTRACT [03-11-2021(online)].jpg | 2021-11-03 |
| 14 | 202127050571-FER_SER_REPLY [29-11-2022(online)].pdf | 2022-11-29 |
| 15 | 202127050571-OTHERS [29-11-2022(online)].pdf | 2022-11-29 |
| 16 | 202127050571-DRAWINGS [03-11-2021(online)].pdf | 2021-11-03 |
| 17 | 202127050571-DECLARATION OF INVENTORSHIP (FORM 5) [03-11-2021(online)].pdf | 2021-11-03 |
| 18 | 202127050571-FORM 4(ii) [22-09-2022(online)].pdf | 2022-09-22 |
| 19 | 202127050571-COMPLETE SPECIFICATION [03-11-2021(online)].pdf | 2021-11-03 |
| 20 | 202127050571-FORM 3 [10-08-2022(online)].pdf | 2022-08-10 |
| 21 | 202127050571-FER.pdf | 2022-03-22 |
| 22 | 202127050571-FORM-26 [30-12-2021(online)].pdf | 2021-12-30 |
| 23 | 202127050571-FORM 3 [15-03-2022(online)].pdf | 2022-03-15 |
| 24 | Abstract1.jpg | 2022-03-01 |
| 25 | 202127050571-POA [04-03-2022(online)].pdf | 2022-03-04 |
| 26 | 202127050571-Proof of Right [15-03-2022(online)].pdf | 2022-03-15 |
| 27 | 202127050571-AMENDED DOCUMENTS [04-03-2022(online)].pdf | 2022-03-04 |
| 28 | 202127050571-FORM 13 [04-03-2022(online)].pdf | 2022-03-04 |
| 29 | SearchHistoryE_11-03-2022.pdf | |