Abstract: The invention concerns an encoder, a decoder and methods for applying and varying a strength of a deblocking or deringing filter (110, 120) for filtering a block (1000) of a picture (12), wherein the deblocking filter (110, 120) is configured to determine, for each of at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) of a border (1010) of the block (1000), a dissimilarity between an unfiltered content (1015) of the block (1000) and a surrounding picture content (1016) around the block (1000) along the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), the eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) including four corner border portions (1021, 1022, 1023, 1024), each arranged at a corner of the block (1000), and four edge border portions (1011, 1012, 1013, 1014), each arranged at intermediary portions of the border (1010) between the corners of the block (1000). Furthermore, the deblocking filter is configured to parametrize a deblocking filtering of the block (1000) using the dissimilarities determined for the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) in order to obtain a filtered content of the block (1000).
DEBLOCKING OR DERINGING FILTER AND ENCODER, DECODER AND METHOD FOR APPLYING AND VARYING A STRENGTH OF A DEBLOCKING OR DERINGING FILTER
Embodiments of the present invention relate to a deblocking or deringing filter as well as to an encoder, a decoder and respective methods for block-based encoding and decoding of picture data using a deblocking or a deringing filter, wherein the strength of said deblocking or deringing filter may be varied depending on one or more conditions. Some embodiments may particularly concern a selective signaling of a respective filter control parameter.
Contemporary perceptual (i.e., lossy) block transform image and video codecs (coders/decoders) can reach very good visual reconstruction quality even at relatively low bit-rates.
At very low bitrates, however, artifacts such as blurring and discontinuities around the block boundaries, often referred to as "blocking", appear. To mitigate these typically annoying artifacts, deblocking postprocessing algorithms are utilized in modern codecs such as H.265 / HEVC, H.266 / VVC, and AV1.
In video coding, a typical deblocking post-processor operates as an in-loop filter on each decoded image or frame, i.e., on each inter-picture prediction (also known as motion compensation) source image/frame before coding of the next image/frame in the encoding loop. The deblocking postfilter analyzes the boundary pixel values of each reconstructed sub-block of the decoded image in terms of potential discontinuities. If a weak discontinuity is found, it is assumed to be caused by the low-rate coding itself and not to be part of the original image and, thus, this discontinuity is reduced by way of smoothing of the pixel values (e.g., the addition of adaptive pixel value offsets).
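The boundary analysis just described can be sketched as follows. This is a purely illustrative simplification in Python, not any standardized filter: the threshold, the offset scale of 0.25 and the one-pixel filter reach are all assumptions.

```python
import numpy as np

def deblock_vertical_edge(frame, x, strength_threshold, offset_scale=0.25):
    """Smooth weak discontinuities across the vertical block edge at column x.

    Hypothetical one-pixel-reach filter: only the two columns adjacent to the
    edge are modified, by adaptive offsets derived from the measured step.
    """
    p = frame[:, x - 1].astype(np.int32)  # boundary pixels left of the edge
    q = frame[:, x].astype(np.int32)      # boundary pixels right of the edge
    step = q - p                          # discontinuity across the edge
    # Weak steps are assumed to stem from low-rate coding; strong steps are
    # treated as original image content and left untouched.
    weak = np.abs(step) < strength_threshold
    offset = (step * offset_scale).astype(np.int32)
    frame[:, x - 1] = np.where(weak, p + offset, p).astype(frame.dtype)
    frame[:, x] = np.where(weak, q - offset, q).astype(frame.dtype)
    return frame
```

A step of 4 below the threshold is halved to 2, while a step of 100 is classified as real content and preserved.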
A similar in-loop filter is the Sample Adaptive Offset (SAO) method used in HEVC, which classifies the decoded pixels per (sub-)block based on their values and determines additive offsets for each pixel class. These additive offsets are then signaled to, and applied in, the decoder per (sub-)block. In doing so, the SAO filter acts as a deringing filter.
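A minimal band-offset sketch in the spirit of the SAO classification follows. The equal-width intensity bands and the `offsets` dictionary are illustrative assumptions; the actual HEVC SAO additionally defines edge-offset classes and its own band layout.

```python
import numpy as np

def sao_band_offset(block, offsets, num_bands=32):
    """Toy band-offset deringing: classify pixels by intensity band and add
    the per-band offset an encoder would have signaled.

    `offsets` maps band index -> additive offset (the signaled values).
    Assumes 8-bit samples.
    """
    band_width = 256 // num_bands   # equal-width intensity bands
    bands = block // band_width     # class index per pixel
    out = block.astype(np.int32)
    for band, off in offsets.items():
        out[bands == band] += off   # apply the class's additive offset
    return np.clip(out, 0, 255).astype(np.uint8)
```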
Details on the HEVC deblocking filter are given in A. Norkin et al., "HEVC Deblocking Filter," IEEE Trans. Circ. Syst. Video Tech. (CSVT), vol. 22, 2012, https://ieeexplore.ieee.org/document/6324414; an overview of the SAO in-loop filter is provided in C. M. Fu et al., "Sample Adaptive Offset in the HEVC Standard," IEEE Trans. CSVT, vol. 22, 2012, https://ieeexplore.ieee.org/document/6324411.
In H.266 / VVC, the maximum transform block size has doubled compared to the size allowed in HEVC, which was found to necessitate the use of stronger deblocking filters (i.e., deblocking postprocessors modifying a wider range of pixels) especially around the large-block boundaries. Such more aggressive deblocking filters, however, increase the risk of smoothing out, and thus potentially erasing, original image content which was not caused by the low-rate coding.
It is, therefore, concluded that very strong deblocking filtering is desirable for some low-rate coded high-resolution image and video content and that it is essential to allow for highly selective control of the application of said strong deblocking filtering. Naturally, a bit-flag may be signaled for each sub-block (e.g., each coding tree unit, CTU) to indicate to the receiver (i.e., decoder) whether to allow the application of the strong deblocking filter. This approach, however, would lead to many additional signaling bits being included in the bit-stream, thereby increasing the coding bit-rate to an unacceptable level especially at very low bit-rates.
Thus, a more efficient solution is required. Accordingly, it is an object of the present invention to improve existing artefact-filtering and to provide an efficient signaling of varying filtering strengths without the drawbacks mentioned above.
According to a first aspect of the invention, this problem is solved with a decoder having the features of claim 1, an encoder having the features of claim 14, a method for decoding according to claim 26, a method for encoding according to claim 27, a computer readable digital storage medium according to claim 28 and a data stream according to claim 29.
According to a second aspect of the invention, this problem is solved with a deblocking filter having the features of claim 30, a decoder having the features of claim 48, an encoder having the features of claim 50, a method for deblocking according to claim 52, a computer readable digital storage medium according to claim 53 and a data stream according to claim 54.
The inventive decoder of the first aspect is configured for block-based decoding of picture data using a deblocking or deringing filter. The decoder is configured to reconstruct, in a blockwise manner, a picture from a data stream using prediction and using a prediction residual coded in the data stream to obtain a reconstructed version of the picture. Predictive coding may, for instance, be executed by means of a spatial intra-picture prediction and/or by means of a temporal inter-picture prediction. Intra-picture prediction may be applied to still images and moving images, while inter-picture prediction may only be applied to moving images. For images with low visual activity, e.g. with few image details, the prediction typically works very efficiently. As a result, the corresponding prediction residuals may comprise very little signal energy and may therefore often be fully quantized to zero.
In doing so, these zero-coded prediction residuals can be exempt from transmission. For images with higher visual activity, e.g. with more image details, the prediction may typically exhibit relatively high signal variance in its prediction residual, thus requiring a transmission of at least one (coarsely) quantized prediction residual which is not fully zero. This may also be referred to as non-zero coding of the respective prediction residual. Said non-zero coded prediction residuals may be candidates for causing visible blocking or ringing in the reconstructed (i.e. decoded) version of the picture. Thus, the decoder is configured to apply the deblocking or deringing filter to the reconstructed version of the picture.
In this regard, the inventive decoder is furthermore configured to locally vary a strength of the deblocking or deringing filter. In other words, the decoder may control the amount of the deblocking or deringing that is applied to the decoded picture. This may result in an improved image quality over conventional deblocking or deringing filters without said control. Said strength of the deblocking or deringing filter may be quantitatively measured. Strength measures may, for instance, be a width of a block's circumferential portions which are affected by the filter, or, differently speaking, a measure for a reach up to which the filter causes filtering from the block border of the blocks, wherein the larger the strength, the larger the width. Additionally or alternatively, a mean energy of a difference between the filtered version and the unfiltered version of the reconstructed picture to which the filter is applied may be used to measure the filter strength, wherein the larger the strength, the larger the mean energy. The inventive decoder may selectively decide whether to apply said filter control, i.e. whether to vary the filter strength or not, depending on a pre-selection of candidate pictures or candidate picture areas (e.g. blocks), respectively. Said candidate pictures or candidate picture areas (e.g. blocks) may be selected depending on a first measure locally measuring a mean block size, and a second measure locally measuring a frequency of non-zero coding of the prediction residual. The frequency of non-zero coding is meant to describe how often a non-zero coding of a prediction residual in the respective picture or picture area (e.g. block) was applied. In other words, depending on the number of non-zero coded prediction residuals and depending on the mean block size (e.g. a number of blocks or sub-blocks) the decoder may vary the filter strength of the deblocking or deringing filter.
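The described pre-selection based on the two measures may be sketched as follows. The concrete thresholds and the returned labels are hypothetical illustrations; the text leaves the exact decision rule open.

```python
def vary_filter_strength(mean_block_size, nonzero_residual_count, total_blocks,
                         size_threshold=32.0, density_threshold=0.25):
    """Select candidate areas for strong filtering from the two measures:
    the first measure is the locally measured mean block size, the second
    the frequency of non-zero coded prediction residuals.

    A picture area is a candidate for the strong deblocking/deringing filter
    when its blocks are large on average and non-zero residuals are rare.
    Thresholds are illustrative assumptions.
    """
    nonzero_frequency = nonzero_residual_count / max(total_blocks, 1)
    if mean_block_size >= size_threshold and nonzero_frequency <= density_threshold:
        return "strong"   # large, sparsely coded area: allow strong filtering
    return "normal"
```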
The inventive encoder of the first aspect is configured for block-based encoding of picture data using a deblocking or deringing filter as an in-loop filter. The encoder is configured to encode, in a blockwise manner, a picture into a data stream using prediction and to encode a prediction residual into the data stream while providing a reconstructed version of the picture in a prediction loop of the encoder. The encoder is further configured to apply the deblocking or deringing filter onto the reconstructed version of the picture, and to locally vary a strength of the deblocking or deringing filter depending on a first measure locally measuring a mean block size, and a second measure locally measuring a frequency of non-zero coding of the prediction residual. In other words, the encoder may compute an optimum block-based partitioning of the picture in a rate-distortion loop. Based on this computation, the encoder may select a candidate picture or a candidate picture area (e.g. block), respectively, based on the mean block size and the number of non-zero coded prediction residuals. These selected candidate pictures or candidate picture areas (e.g. blocks) may then be subject to the varying filter strength. In other words, if a candidate picture or candidate picture area (e.g. block) was selected by the encoder, it may apply the deblocking or deringing filter with varying filter strength to said selected candidate picture or candidate picture area (e.g. block), i.e. the quantity of deblocking or deringing may be selectively controlled by the encoder and, thus, the quality of picture coding may be improved compared to conventional encoders.
According to a second aspect of the invention, a deblocking filter is suggested, wherein said deblocking filter is configured to filter a block of a picture in order to reduce blocking or ringing artefacts. Accordingly, the deblocking filter may also be referred to as a deringing filter. The deblocking filter according to the second aspect may be combined with the encoder and/or the decoder and/or the methods according to the first aspect. Alternatively, the deblocking filter according to the second aspect may be combined with encoders and/or decoders and/or methods being different from the first aspect.
The deblocking filter according to the second aspect may be configured to filter a block of a picture that is processed in a block-based manner. Said filtering may be exploited for reducing blocking or ringing artefacts which may appear upon block-based coding of the picture. The picture may be partitioned into several blocks and subblocks. The deblocking filter may be applied to one or more of said blocks and subblocks for reducing blocking or ringing artefacts upon coding of the picture. Each block may have a block border, which may correspond to the outer circumferential borderline of said block. The blocks may be square or generally rectangular, depending on the applied partitioning scheme. Accordingly, also the border of each block may be square or rectangular, respectively. The border may comprise several portions, for example, portions extending along an edge (also referred to as edge border portions) and portions extending around a corner (also referred to as corner border portions). If a picture is partitioned into a plurality of blocks, said blocks may be contiguously arranged, i.e. the blocks may abut each other. Accordingly, a first block may be surrounded by one or more other blocks. The content (e.g. pixels contained in a block) of neighboring blocks may differ from each other, for instance if there is a transition from a dark picture region into a light picture region. Accordingly, there may be a dissimilarity between a picture content contained inside a first block and a picture content contained outside said first block. Said picture content outside the first block may be contained inside a surrounding second block and may therefore also be referred to as a surrounding picture content. The dissimilarities may represent a difference between the picture content contained inside the first block and the surrounding picture content contained outside the first block.
Said dissimilarities may also be referred to as an offset between the picture content contained inside the first block and the surrounding picture content contained outside the first block. The higher the dissimilarities, the higher the magnitude of the offset value. The dissimilarities may cause blocking or ringing artefacts upon coding the picture. Thus, they have to be smoothed, which may also be referred to as deblocking or deringing, and which may be performed by the inventive deblocking filter. Therefore, the inventive deblocking filter may be configured to determine, for each of at least eight border portions of a border of the block, a dissimilarity between an unfiltered content of the block and a surrounding picture content around the block across the respective border portion. Said at least eight border portions include four corner border portions, each arranged at a corner of the block, and four edge border portions, each arranged at intermediary portions of the border between the corners of the block. The deblocking filter may perform a deblocking filtering using different filter characteristics depending on the current picture content, i.e. depending on the aforementioned dissimilarities. Said different filter characteristics may be adjusted by means of adjustable parameters, which may depend on the current picture dissimilarities. Accordingly, the inventive deblocking filter may be configured to parametrize a deblocking filtering process of the block using the dissimilarities determined for the at least eight border portions in order to obtain a filtered content of the block.
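The determination of one dissimilarity per border portion may be sketched as follows. The mean absolute sample difference and the single-sample corner portions are assumed metrics; the text leaves the dissimilarity measure and the exact extent of the portions open.

```python
import numpy as np

def border_dissimilarities(picture, y, x, size):
    """For the size x size block at (y, x), compute a dissimilarity per
    border portion: four edge portions and four corner portions.

    Assumes the block does not touch the picture boundary, so that a
    surrounding picture content exists on all sides.
    """
    blk = picture[y:y + size, x:x + size].astype(np.int32)
    pic = picture.astype(np.int32)
    d = {}
    # Edge portions: boundary row/column of the block vs. the adjacent
    # outer row/column of the surrounding picture content.
    d["top"] = np.mean(np.abs(blk[0, :] - pic[y - 1, x:x + size]))
    d["bottom"] = np.mean(np.abs(blk[-1, :] - pic[y + size, x:x + size]))
    d["left"] = np.mean(np.abs(blk[:, 0] - pic[y:y + size, x - 1]))
    d["right"] = np.mean(np.abs(blk[:, -1] - pic[y:y + size, x + size]))
    # Corner portions: corner sample vs. its diagonal outer neighbor.
    d["top_left"] = abs(int(blk[0, 0]) - int(pic[y - 1, x - 1]))
    d["top_right"] = abs(int(blk[0, -1]) - int(pic[y - 1, x + size]))
    d["bottom_left"] = abs(int(blk[-1, 0]) - int(pic[y + size, x - 1]))
    d["bottom_right"] = abs(int(blk[-1, -1]) - int(pic[y + size, x + size]))
    return d
```

The returned dictionary of at least eight values could then parametrize the filtering of the block.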
In the following, embodiments of the present invention are described in more detail with reference to the figures, in which
Fig. 1 shows a schematic block diagram of an apparatus for predictively coding a picture as an example for an encoder where an intra prediction concept according to embodiments of the present application could be implemented,
Fig. 2 shows a schematic block diagram of an apparatus for predictively decoding a picture, which fits to the apparatus of Fig. 1, as an example for a decoder where an intra prediction concept according to embodiments of the present application could be implemented,
Fig. 3 shows a schematic diagram illustrating an example for a relationship between the prediction residual signal, the prediction signal and the reconstructed signal so as to illustrate possibilities of setting subdivisions for coding mode selection, transform selection and transform performance, respectively,
Fig. 4 shows a schematic block diagram of a decoder according to an embodiment,
Fig. 5 shows a schematic view of a picture being pre-partitioned into blocks and being sub-partitioned into sub-blocks,
Figs. 6A-6F show schematic views of a picture being partitioned into blocks using different multi-tree sub-divisioning schemes,
Fig. 7 shows a schematic block diagram of an encoder according to an embodiment,
Fig. 8 shows a block diagram of a method for block-based decoding of picture data using a deblocking or deringing filter according to an embodiment,
Fig. 9 shows a block diagram of a method for block-based encoding of picture data using a deblocking or deringing filter as an in-loop filter according to an embodiment,
Fig. 10 shows a block comprising border portions onto which a deblocking filter according to an embodiment may be applied,
Fig. 11 shows the block of Figure 10, wherein edge boundary sample vectors are depicted, for applying the deblocking filter according to an embodiment,
Fig. 12 shows the block of Figure 10, wherein corner boundary sample vectors are depicted, for applying the deblocking filter according to an embodiment,
Figs. 13A-13D show an upper left corner of a block with different spatial positions of edge boundary vectors and corner boundary vectors according to an embodiment,
Fig. 14 shows a block comprising a boundary band for applying the deblocking filter according to an embodiment,
Fig. 15 shows a block being separated into several processing regions according to an embodiment,
Fig. 16 shows a block comprising several partitioning portions according to an embodiment,
Fig. 17 shows an exemplary application of the deblocking filter onto a block according to an embodiment,
Fig. 18 shows a block, wherein the size of the edge boundary vectors and the size of the corner boundary vectors may depend on the size of the block, and
Fig. 19 shows a schematic block diagram of a method for filtering a block 1000 of a block-based coded picture 12 by applying a deblocking filter according to an embodiment.
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.
Method steps which are depicted by means of a block diagram and which are described with reference to said block diagram may also be executed in an order different from the depicted and/or described order. Furthermore, method steps concerning a particular feature of a device may be replaceable with said feature of said device, and the other way around.
In this document, the first aspect of the invention will first be described with reference to Figures 1 to 9. Afterwards, the second aspect of the invention will be described with reference to Figures 10 to 19.
Introduction to block-based coding
The following description of the figures starts with a description of an encoder and a decoder of a block-based predictive codec for coding pictures of a video, in order to form an example for a coding framework into which embodiments of the present invention may be built. The respective encoder and decoder are described with respect to Figures 1 to 3. Thereinafter, the description of embodiments of the concept of the present invention is presented along with a description as to how such concepts could be built into the encoder and decoder of Figures 1 and 2, respectively, although the embodiments described with the subsequent Figures 4 and following may also be used to form encoders and decoders not operating according to the coding framework underlying the encoder and decoder of Figures 1 and 2.
Figure 1 shows an apparatus for predictively coding a picture 12 into a data stream 14 exemplarily using transform-based residual coding. The apparatus, or encoder, is indicated using reference sign 10. Figure 2 shows a corresponding decoder 20, i.e. an apparatus 20 configured to predictively decode the picture 12' from the data stream 14 also using transform-based residual decoding, wherein the apostrophe has been used to indicate that the picture 12' as reconstructed by the decoder 20 deviates from picture 12 originally encoded by apparatus 10 in terms of coding loss introduced by a quantization of the prediction residual signal. Figure 1 and Figure 2 exemplarily use transform based prediction residual coding, although embodiments of the present application are not restricted to this kind of prediction residual coding. This is true for other details described with respect to Figures 1 and 2, too, as will be outlined hereinafter.
The encoder 10 is configured to subject the prediction residual signal to spatial-to-spectral transformation and to encode the prediction residual signal, thus obtained, into the data stream 14. Likewise, the decoder 20 is configured to decode the prediction residual signal from the data stream 14 and subject the prediction residual signal thus obtained to spectral-to-spatial transformation.
Internally, the encoder 10 may comprise a prediction residual signal former 22 which generates a prediction residual 24 so as to measure a deviation of a prediction signal 26 from the original signal, i.e. from the picture 12. The prediction residual signal former 22 may, for instance, be a subtractor which subtracts the prediction signal from the original signal, i.e. from the picture 12. The encoder 10 then further comprises a transformer 28 which subjects the prediction residual signal 24 to a spatial-to-spectral transformation to obtain a spectral-domain prediction residual signal 24' which is then subject to quantization by a quantizer 32, also comprised by the encoder 10. The thus quantized prediction residual signal 24'' is coded into bitstream 14. To this end, encoder 10 may optionally comprise an entropy coder 34 which entropy codes the prediction residual signal as transformed and quantized into data stream 14. The prediction signal 26 is generated by a prediction stage 36 of encoder 10 on the basis of the prediction residual signal 24'' encoded into, and decodable from, data stream 14. To this end, the prediction stage 36 may internally, as is shown in Figure 1, comprise a dequantizer 38 which dequantizes prediction residual signal 24'' so as to gain spectral-domain prediction residual signal 24''', which corresponds to signal 24' except for quantization loss, followed by an inverse transformer 40 which subjects the latter prediction residual signal 24''' to an inverse transformation, i.e. a spectral-to-spatial transformation, to obtain prediction residual signal 24'''', which corresponds to the original prediction residual signal 24 except for quantization loss. A combiner 42 of the prediction stage 36 then recombines, such as by addition, the prediction signal 26 and the prediction residual signal 24'''' so as to obtain a reconstructed signal 46, i.e. a reconstruction of the original signal 12. Reconstructed signal 46 may correspond to signal 12'.
A prediction module 44 of prediction stage 36 then generates the prediction signal 26 on the basis of signal 46 by using, for instance, spatial prediction, i.e. intra-picture prediction, and/or temporal prediction, i.e. inter-picture prediction.
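The residual coding path formed by elements 22, 32, 38 and 42 may be sketched as follows. For brevity the Identity Transformation from the supported-transform list further below is assumed, so the spatial-to-spectral step (transformer 28) and its inverse (inverse transformer 40) are no-ops; the uniform scalar quantizer is likewise an illustrative assumption.

```python
import numpy as np

def encode_block(original, prediction, qstep):
    """Sketch of one pass through the residual coding loop of Fig. 1.

    Returns the quantized residual (what would be entropy-coded into the
    stream) and the reconstructed block that feeds the prediction loop.
    """
    residual = original.astype(np.int32) - prediction        # former 22 (signal 24)
    quantized = np.round(residual / qstep).astype(np.int32)  # quantizer 32 (24'')
    dequantized = quantized * qstep                          # dequantizer 38 (24''')
    reconstruction = prediction + dequantized                # combiner 42 (signal 46)
    return quantized, reconstruction
```

As the text states, the reconstruction deviates from the original only by the quantization loss introduced by quantizer 32.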
Likewise, decoder 20, as shown in Figure 2, may be internally composed of components corresponding to, and interconnected in a manner corresponding to, prediction stage 36.
In particular, entropy decoder 50 of decoder 20 may entropy decode the quantized spectral-domain prediction residual signal 24'' from the data stream, whereupon dequantizer 52, inverse transformer 54, combiner 56 and prediction module 58, interconnected and cooperating in the manner described above with respect to the modules of prediction stage 36, recover the reconstructed signal on the basis of prediction residual signal 24'' so that, as shown in Figure 2, the output of combiner 56 results in the reconstructed signal, namely picture 12'.
Although not specifically described above, it is readily clear that the encoder 10 may set some coding parameters including, for instance, prediction modes, motion parameters and the like, according to some optimization scheme such as, for instance, in a manner optimizing some rate and distortion related criterion, i.e. coding cost. For example, encoder 10 and decoder 20 and the corresponding modules 44, 58, respectively, may support different prediction modes such as intra-coding modes and inter-coding modes. The granularity at which encoder and decoder switch between these prediction mode types
may correspond to a subdivision of picture 12 and 12', respectively, into coding segments or coding blocks. In units of these coding segments, for instance, the picture may be subdivided into blocks being intra-coded and blocks being inter-coded. Intra-coded blocks are predicted on the basis of a spatial, already coded/decoded neighborhood of the respective block as is outlined in more detail below. Several intra-coding modes may exist and be selected for a respective intra-coded segment, including directional or angular intra-coding modes according to which the respective segment is filled by extrapolating the sample values of the neighborhood along a certain direction which is specific for the respective directional intra-coding mode, into the respective intra-coded segment. The intra-coding modes may, for instance, also comprise one or more further modes such as a DC coding mode, according to which the prediction for the respective intra-coded block assigns a DC value to all samples within the respective intra-coded segment, and/or a planar intra-coding mode according to which the prediction of the respective block is approximated or determined to be a spatial distribution of sample values described by a two-dimensional linear function over the sample positions of the respective intra-coded block, with deriving tilt and offset of the plane defined by the two-dimensional linear function on the basis of the neighboring samples. Compared thereto, inter-coded blocks may be predicted, for instance, temporally. For inter-coded blocks, motion vectors may be signaled within the data stream, the motion vectors indicating the spatial displacement of the portion of a previously coded picture of the video to which picture 12 belongs, at which the previously coded/decoded picture is sampled in order to obtain the prediction signal for the respective inter-coded block.
This means, in addition to the residual signal coding comprised by data stream 14, such as the entropy-coded transform coefficient levels representing the quantized spectral-domain prediction residual signal 24'', data stream 14 may have encoded thereinto coding mode parameters for assigning the coding modes to the various blocks, prediction parameters for some of the blocks, such as motion parameters for inter-coded segments, and optional further parameters such as parameters for controlling and signaling the subdivision of picture 12 and 12', respectively, into the segments. The decoder 20 uses these parameters to subdivide the picture in the same manner as the encoder did, to assign the same prediction modes to the segments, and to perform the same prediction to result in the same prediction signal.
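The DC intra-coding mode mentioned above may be sketched as follows. The reference samples above and left of the block are assumed to be already decoded neighborhood samples; the exact set of reference samples and the rounding rule are assumptions of this sketch.

```python
import numpy as np

def dc_intra_prediction(above, left, size):
    """DC intra-coding mode sketch: one DC value, the mean of the already
    coded/decoded neighboring samples, is assigned to all samples of the
    size x size intra-coded block.
    """
    dc = int(np.round((np.sum(above) + np.sum(left)) / (len(above) + len(left))))
    return np.full((size, size), dc, dtype=np.int32)
```

The planar mode would instead fit a two-dimensional linear function (a plane) to the same neighboring samples rather than a single constant.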
Figure 3 illustrates the relationship between the reconstructed signal, i.e. the reconstructed picture 12', on the one hand, and the combination of the prediction residual signal 24'''' as signaled in the data stream 14, and the prediction signal 26, on the other hand. As already denoted above, the combination may be an addition. The prediction signal 26 is
illustrated in Figure 3 as a subdivision of the picture area into intra-coded blocks which are illustratively indicated using hatching, and inter-coded blocks which are illustratively indicated not-hatched. The subdivision may be any subdivision, such as a regular subdivision of the picture area into rows and columns of square blocks or non-square blocks, or a multi-tree subdivision of picture 12 from a tree root block into a plurality of leaf blocks of varying size, such as a quadtree subdivision or the like, wherein a mixture thereof is illustrated in Figure 3 in which the picture area is first subdivided into rows and columns of tree root blocks which are then further subdivided in accordance with a recursive multi-tree subdivisioning into one or more leaf blocks.
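The recursive multi-tree subdivisioning of a tree root block may be sketched for the quadtree case as follows. The `split_decision` callback is a hypothetical stand-in for the split flags an encoder would select and signal.

```python
def quadtree_leaves(x, y, size, split_decision, min_size=4):
    """Recursively subdivide a tree root block into leaf blocks of varying
    size (quadtree case of the multi-tree subdivisioning).

    `split_decision(x, y, size)` returns True if the block at (x, y) of the
    given size is to be split into four quadrants.
    """
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):       # visit the four quadrants
            for dx in (0, half):
                leaves += quadtree_leaves(x + dx, y + dy, half,
                                          split_decision, min_size)
        return leaves
    return [(x, y, size)]          # leaf block: no further split
```

Splitting only the root of a 16-sample tree root block, for instance, yields four leaf blocks of size 8.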
Again, data stream 14 may have an intra-coding mode coded thereinto for intra-coded blocks 80, which assigns one of several supported intra-coding modes to the respective intra-coded block 80. For inter-coded blocks 82, the data stream 14 may have one or more motion parameters coded thereinto. Generally speaking, inter-coded blocks 82 are not restricted to being temporally coded. Alternatively, inter-coded blocks 82 may be any block predicted from previously coded portions beyond the current picture 12 itself, such as previously coded pictures of a video to which picture 12 belongs, or a picture of another view or a hierarchically lower layer in the case of encoder and decoder being scalable encoders and decoders, respectively.
The prediction residual signal 24'''' in Figure 3 is also illustrated as a subdivision of the picture area into blocks 84. These blocks might be called transform blocks in order to distinguish same from the coding blocks 80 and 82. In effect, Figure 3 illustrates that encoder 10 and decoder 20 may use two different subdivisions of picture 12 and picture 12', respectively, into blocks, namely one subdivisioning into coding blocks 80 and 82, respectively, and another subdivision into transform blocks 84. Both subdivisions might be the same, i.e. each coding block 80 and 82 may concurrently form a transform block 84, but Figure 3 illustrates the case where, for instance, a subdivision into transform blocks 84 forms an extension of the subdivision into coding blocks 80, 82 so that any border between two blocks of blocks 80 and 82 overlays a border between two blocks 84, or, alternatively speaking, each block 80, 82 either coincides with one of the transform blocks 84 or coincides with a cluster of transform blocks 84. However, the subdivisions may also be determined or selected independently from each other so that transform blocks 84 could alternatively cross block borders between blocks 80, 82. As far as the subdivision into transform blocks 84 is concerned, similar statements are thus true as those brought forward with respect to the subdivision into blocks 80, 82, i.e. the blocks 84 may be the result of a regular subdivision of the picture area into blocks (with or without arrangement into rows and columns), the result of a recursive multi-tree subdivisioning of the picture area, or a combination thereof or any other sort of block partitioning. Just as an aside, it is noted that blocks 80, 82 and 84 are not restricted to being quadratic or rectangular, or to any other particular shape.
Figure 3 further illustrates that the combination of the prediction signal 26 and the prediction residual signal 24'''' directly results in the reconstructed signal 12'. However, it should be noted that more than one prediction signal 26 may be combined with the prediction residual signal 24'''' to result into picture 12' in accordance with alternative embodiments.
In Figure 3, the transform blocks 84 shall have the following significance. Transformer 28 and inverse transformer 54 perform their transformations in units of these transform blocks 84. For instance, many codecs use some sort of DST or DCT for all transform blocks 84. Some codecs allow for skipping the transformation so that, for some of the transform blocks 84, the prediction residual signal is coded in the spatial domain directly. However, in accordance with embodiments described below, encoder 10 and decoder 20 are configured in such a manner that they support several transforms. For example, the transforms supported by encoder 10 and decoder 20 could comprise:
o DCT-II (or DCT-III), where DCT stands for Discrete Cosine Transform
o DST-IV, where DST stands for Discrete Sine Transform
o DCT-IV
o DST-VII
o Identity Transformation (IT)
Naturally, while transformer 28 would support all of the forward transform versions of these transforms, the decoder 20 or inverse transformer 54 would support the corresponding backward or inverse versions thereof:
o Inverse DCT-II (or inverse DCT-III)
o Inverse DST-IV
o Inverse DCT-IV
o Inverse DST-VII
o Identity Transformation (IT)
The subsequent description provides more details on which transforms could be supported by encoder 10 and decoder 20. In any case, it should be noted that the set of supported transforms may comprise merely one transform such as one spectral-to-spatial or spatial-to-spectral transform.
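To make the forward/inverse pairing mentioned above concrete, the following minimal sketch implements a one-dimensional DCT-II and inverts it via a scaled DCT-III. This is an illustration only: real codecs use two-dimensional, integer-approximated transforms with particular normalizations, and the function names here are chosen freely.

```python
import math

def dct_ii(x):
    """Forward DCT-II (unnormalized) of a 1-D sample vector."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_iii(X):
    """DCT-III; scaled by 2/N it inverts the DCT-II above."""
    N = len(X)
    return [X[0] / 2 + sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                           for k in range(1, N))
            for n in range(N)]

def inverse_dct_ii(X):
    """Inverse of dct_ii: a DCT-III scaled by 2/N."""
    N = len(X)
    return [2.0 / N * v for v in dct_iii(X)]

# Round trip over a small residual vector reproduces the input.
residual = [3.0, -1.0, 4.0, 1.5]
coeffs = dct_ii(residual)
roundtrip = inverse_dct_ii(coeffs)
```

The round trip illustrates why a decoder supporting the inverse DCT-II (or, equivalently, a scaled DCT-III) can reconstruct residuals transformed by a DCT-II at the encoder.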
As already outlined above, Figures 1 to 3 have been presented as an example where the inventive concept described further below may be implemented in order to form specific examples for encoders and decoders according to the present application. Insofar, the encoder and decoder of Figures 1 and 2, respectively, may represent possible implementations of the encoders and decoders described herein below. Figures 1 and 2 are, however, only examples. An encoder according to embodiments of the present application may perform block-based encoding of a picture 12 using the concept outlined in more detail below while being different from the encoder of Figure 1 in that, for instance, same is no video encoder but a still picture encoder, in that same does not support inter-prediction, or in that the sub-division into blocks 80 is performed in a manner different than exemplified in Figure 3. Likewise, decoders according to embodiments of the present application may perform block-based decoding of picture 12’ from data stream 14 using the coding concept further outlined below, but may differ, for instance, from the decoder 20 of Figure 2 in that same is no video decoder but a still picture decoder, in that same does not support intra-prediction, in that same sub-divides picture 12’ into blocks in a manner different than described with respect to Figure 3, and/or in that same does not derive the prediction residual from the data stream 14 in transform domain, but in spatial domain, for instance.
First Aspect
Figure 4 shows a decoder 20 according to an exemplary embodiment of the present application according to the first aspect of the invention. The decoder 20 may use the above described concept of block based decoding of picture data, i.e. of a still picture or a moving picture 12’.
The decoder 20 as depicted in Figure 4 may comprise a similar internal structure as the one described above with reference to Figure 2. Thus, equal or equivalent elements or elements with equal or equivalent functionality are denoted in Figures 2 and 4 by equal or equivalent reference numerals. However, the decoder 20 of Figure 4 may differ from the decoder of Figure 2 in that it may additionally comprise a deblocking or deringing filter 110 for filtering and attenuating blocking and/or ringing artefacts, wherein blocking may be regarded as a particular case of a ringing artefact.
As described above, the reconstructed version of the picture, i.e. the decoded picture 12’, may be obtained by combining the prediction signal 26 and the prediction residual 24” in the combiner 56. The decoder 20 of Figure 4 may additionally apply the deblocking or deringing filter 110 to the reconstructed version of the picture, i.e. to the decoded picture 12’, upon combination of the prediction signal 26 and the prediction residual 24”.
According to the inventive principle, the decoder 20 may locally vary a strength of the deblocking or deringing filter 110. In other words, the decoder 20 may decide about the filter strength that shall be applied to the decoded picture 12’, e.g. whether a weak or a strong deblocking or deringing filter function shall be applied, or even if a deblocking or deringing filter shall be applied at all.
This decision about the applicable filter strength of the deblocking or deringing filter 110 may be based on a first and a second measure. The first measure may represent a locally measured mean block size. The second measure may represent a frequency of non-zero coding of the prediction residual 24”, i.e. the number of non-zero coded prediction residuals 24”.
The decoder 20 may determine the variable filter strength on a block-wise basis. Therefore, the decoder 20 may be configured to partition the picture 12 into blocks and to perform the reconstruction of the picture 12 by using said blocks, similar as described above with reference to Figure 3.
Figure 5 shows an example, wherein the picture 12 may be partitioned into one or more blocks 181, 182, 183, 184. These blocks 181, 182, 183, 184 may also be referred to as coding blocks. The decoder 20 may perform the reconstruction of the picture 12, i.e. the decoding of the picture 12, by using said coding blocks 181, 182, 183, 184.
Furthermore, the coding blocks 181, 182, 183, 184 may be subject to sub-partitioning into one or more sub-blocks 181a-181g and 182a-182d, respectively. The term “blocks” in general as used herein may accordingly refer to coding blocks 181, 182, 183, 184 and/or to sub-blocks 181a-181g, 182a-182d. The above mentioned first measure may be designed to locally measure the size of the blocks. Accordingly, the size of the blocks may, for instance, be measured in terms of coding blocks 181, 182, 183, 184 and/or in terms of sub-blocks 181a-181g, 182a-182d.
The partitioning mode for partitioning the coding blocks 181, 182, 183, 184 into one or more sub-blocks 181a-181g, 182a-182d may be signaled in the data stream 14 by means of a coding tree, which may also be referred to as a partitioning tree or a split tree. A tree root block, which may correspond to a coding block 181, 182, 183, 184, may be split into one or more leaf blocks, which may correspond to the sub-blocks 181a-181g and 182a-182d.
Accordingly, as exemplarily depicted in Figure 5, the decoder 20 may be configured to perform the partitioning of the picture 12 into blocks by subjecting each of the plurality of tree root blocks 181, 182, 183, 184 to a recursive multi-tree sub-divisioning so that the blocks form leaf blocks 181a-181g, 182a-182d of the plurality of tree root blocks 181, 182, 183, 184. The decoder 20 may determine the first measure and the second measure locally for each tree root block 181, 182, 183, 184.
For example, in HEVC the coding blocks 181, 182, 183, 184 may also be referred to as Coding Tree Units (CTU), and the sub-blocks 181a-181g, 182a-182d may also be referred to as Coding Units (CU). A non-limiting exemplary embodiment shall be described in the following using the HEVC standard. However, the principle of the present application is not restricted to the HEVC standard.
Let us assume a usage of the principle of this application in an image or video codec defining a block size of L x L as the largest possible coding block size. Such a coding block 181, 182, 183, 184, also called CTU above, can be subject to sub-partitioning into multiple square or rectangular sub-blocks 181a-181g, 182a-182d, each of size M x N pixel units. Examples are depicted in Figures 6A-6F which show, as non-limiting examples, several possibilities of partitioning a coding block 181.
For example, Figure 6A shows an example in which the coding block 181 is not further partitioned into sub-blocks. Accordingly, coding block 181 may only comprise one single sub-block (sub-block 1) and may, thus, be the same as sub-block 1. Figure 6A therefore implies the absence of sub-partitioning.
Some further examples for CTU segmentations, including rectangular sub-blocks, are shown in Figures 6B to 6E. At both the encoder and decoder side, the case of partitioning a CTU 181, 182, 183, 184 into one or more sub-blocks (CUs) 181a-181g, 182a-182d can be identified by way of the CTU’s coding tree signaled in the bit-stream. For example, splitting a block by a quad tree may lead to four square sub-blocks, while splitting a block by a (generalized) binary tree may lead to two (generalized) rectangular sub-blocks.
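The geometric effect of the two split types can be sketched as follows. This is an illustrative model only: representing a block as an `(x, y, width, height)` tuple and the helper names are assumptions made here, not part of any standard.

```python
def quad_split(block):
    """Quad-tree split: four equally sized sub-blocks (squares if the input is square)."""
    x, y, w, h = block
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

def binary_split(block, horizontal):
    """Binary-tree split: two rectangular sub-blocks."""
    x, y, w, h = block
    if horizontal:  # split along a horizontal line
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

ctu = (0, 0, 64, 64)
print(len(quad_split(ctu)))          # 4 sub-blocks
print(len(binary_split(ctu, True)))  # 2 sub-blocks
```

Applying such splits recursively, as signaled by the coding tree, yields mixed partitionings like those of Figures 6C to 6E.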
Figure 6B shows an exemplary partitioning of coding block 181 into four square sub-blocks (1 to 4) split by a quad tree. Figure 6C shows an exemplary partitioning of coding block 181 into seven square sub-blocks (1 to 7) split by a quad tree. Figure 6D shows an exemplary partitioning of coding block 181 into seven sub-blocks, wherein sub-block 7 is a square sub-block split by a quad tree, wherein sub-blocks 1 to 4 are generalized rectangular sub-blocks vertically split by a binary tree, and wherein sub-blocks 5 and 6 are generalized rectangular sub-blocks horizontally split by a binary tree. Figure 6E shows an exemplary partitioning of coding block 181 into eight sub-blocks, wherein sub-blocks 1 and 2 are generalized rectangular sub-blocks vertically split by a binary tree, and wherein sub-blocks 3 to 8 are square sub-blocks split by a quad tree.
In other words, the decoder 20 may be configured to read partitioning information (e.g. quad tree, binary tree) from the data stream 14. The decoder 20 may further be configured to perform the subjecting of the tree root blocks (CTUs) 181, 182, 183, 184 to the recursive multi-tree sub-divisioning depending on said partitioning information. The decoder 20 may further be configured to determine the first measure depending on said partitioning information.
According to an embodiment, the decoder 20 may be configured to determine the first measure by determining, for each tree root block (CTU) 181, 182, 183, 184, the number of leaf blocks (sub-blocks) 181a-181g, 182a-182d into which the respective tree root block (CTU) 181, 182, 183, 184 is split. In the following, this first measure may be referenced with capital letter A. That is, A may represent, for each tree root block (CTU), the number of leaf blocks or sub-blocks (CUs), respectively.
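Determining the first measure A amounts to counting the leaves of a CTU's coding tree. In the illustrative sketch below the coding tree is modelled as nested Python lists (an internal node is a list of its child subtrees, a leaf is any non-list payload); this representation is an assumption made for illustration, not a bitstream format.

```python
def count_leaf_blocks(tree):
    """Measure A for one CTU: the number of leaf blocks (CUs) of its coding tree.

    tree: nested lists modelling the coding tree; any non-list node is a leaf (CU).
    """
    if not isinstance(tree, list):
        return 1  # a leaf block counts once
    return sum(count_leaf_blocks(child) for child in tree)

# Figure 6A: CTU not split at all -> A = 1
print(count_leaf_blocks("cu"))                       # 1
# Figure 6B: one quad split -> A = 4
print(count_leaf_blocks(["cu", "cu", "cu", "cu"]))   # 4
# Figure 6C: quad split, one child quad-split again -> A = 7
print(count_leaf_blocks([["cu"] * 4, "cu", "cu", "cu"]))  # 7
```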
CTUs 181, 182, 183, 184 with low visual activity (i.e. few image details) are typically not sub-partitioned or are sub-partitioned into only a few relatively large sub-blocks, as shown in Figure 6A, for example. Moreover, for these low-activity CTUs 181, 182, 183, 184, the spatial intra-picture prediction (and temporal inter-picture prediction, if applicable) typically works very efficiently. As a result, the prediction residuals 24” in said CTUs 181, 182, 183, 184 may comprise very little signal energy and, thus, can often be fully quantized to zero and, in doing so, can be exempt from transmission.
Sometimes, though, at least one sub-block in such a low-activity CTU 181, 182, 183, 184 may exhibit relatively high signal variance in its prediction residual 24”, thus requiring a transmission of at least one (coarsely) quantized residual which is not fully zero and which is likely to cause visible blocking in the decoded picture 12’.
Residual coefficient signals, which are also referred to as residual transform units (TU) in HEVC, are each associated with one sub-block. In other words, each sub-block (CU) may comprise a transform unit (TU) for performing a piecewise transformation of the prediction
residual with at least one transform unit per block, i.e. per coding block or per sub-block depending on the granularity of partitioning. Accordingly, in the coding tree, for each tree root block (CTU), a number of leaf blocks (CUs) and a number of coefficient blocks (TUs) may be determined.
A coded block flag (CBF) may indicate whether a residual coefficient signal (TU) has been fully quantized to zero (CBF = 0) or whether a residual coefficient signal (TU) has not been fully quantized to zero (CBF = 1). The latter may also be referred to as a non-zero coded block flag, or non-zero CBF. The number of non-zero coded block flags (CBF = 1) may be signaled in the bitstream for each CTU.
In the following, the number of non-zero coded block flags (CBF = 1) may be referenced with capital letter B. In other words, the capital letter B may represent the number of coefficient blocks being not fully quantized to zero. According to the inventive principle, this number B of non-zero coded blocks may represent the second measure.
According to such an embodiment, the decoder 20 may be configured to decode the prediction residual from the data stream 14 in units of coefficient blocks (TUs) representing a piecewise transformation of the prediction residual with at least one coefficient block (TU) per block (CTU or CU). The decoder 20 may further be configured to determine the second measure B by determining, for each tree root block (CTU), the number of coefficient blocks (TUs) being not fully quantized to zero. This may be managed by counting the number of non-zero coded block flags (CBF = 1) in the CTU, for example.
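The determination of measure B described above can be sketched as follows; the list-of-lists representation of the quantized TU coefficients is an assumption made purely for illustration.

```python
def measure_b(tu_coefficients):
    """Measure B for one CTU: the number of coefficient blocks (TUs) not fully
    quantized to zero, i.e. the count of non-zero coded block flags (CBF = 1).

    tu_coefficients: one list of quantized coefficients per TU of the CTU.
    """
    return sum(1 for tu in tu_coefficients if any(c != 0 for c in tu))

tus = [[0, 0, 0, 0],   # fully quantized to zero -> CBF = 0
       [5, 0, -1, 0],  # non-zero coded          -> CBF = 1
       [0, 0, 0, 2]]   # non-zero coded          -> CBF = 1
print(measure_b(tus))  # 2
```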
As a non-limiting example, CTUs 181, 182, 183, 184 which are sub-partitioned into fewer than nine (i.e. A < 9) sub-blocks (CUs) with, at the same time, the non-zero coding and transmission of B > 0 residual coefficient signals (TUs, each associated with one sub-block), may benefit most from the application of very strong deblocking or deringing post-filters. Accordingly, these CTUs may be candidate blocks for being subject to very strong deblocking or deringing.
For example, Figure 6F shows a partitioning of coding block 185 into nine sub-blocks, i.e. the number of sub-blocks in this example is A = 9. Thus, the above mentioned condition of A < 9 would, for example, not be met. Hence, the coding block 185 of Figure 6F may not be subject to very strong deblocking post-filters. Accordingly, this CTU 185 may not be a candidate block for being subject to very strong deblocking or deringing.
Again, at both the encoder and decoder side, the case of partitioning into fewer than A sub-blocks can be identified by way of the CTU’s coding tree signaled in the bit-stream, whereas the presence of B non-zero residual coefficient signals can be noticed by counting the number of non-zero coded block flags (CBFs) in the CTU, which are also signaled in the bit-stream.
As mentioned above, it may be checked whether a block (CTU) is a potential candidate block for being subject to a very strong deblocking or deringing using the deblocking or deringing filter, or whether this block shall rather be subject to a lower strength of deblocking or deringing. This corresponds to the herein described principle of a highly selective control of the application of said strong deblocking filtering. In other words, the strength of the deblocking or deringing filter may be locally varied.
This local variation of the filter strength may depend on two measures, namely a first measure A representing the number of sub-blocks into which the respective coding block is split, and a second measure B representing the number of non-zero coded residuals. If a block fulfils the criteria imposed on these two measures A and B, then this block is a potential candidate block for being subject to strong deblocking or deringing. This may be indicated in the bit stream by means of a filter control parameter (FCP).
Thus, according to such an embodiment, the decoder 20 may perform the local variation of the filter strength by, for first portions of the picture (i.e. for candidate blocks), where the first and second measures A, B fulfill a predetermined criterion (e.g. A < 9, B > 0), reading strength information (FCP) from the data stream 14 indicative of a first strength of the deblocking or deringing filter 110 to be applied at the respective portion (i.e. block). For second portions of the picture (i.e. for non-candidate blocks), where the first and second measures A, B do not fulfill the predetermined criterion (e.g. A < 9, B > 0), the decoder 20 may be configured to set the strength of the deblocking or deringing filter 110 to be applied at the respective portion (block) to a second strength which is lower than the first strength.
Thus, at least for the above mentioned non-limiting example, it can be summarized that
Condition 1: a desired in-loop filtering (e.g. very strong deblocking) shall be allowed in a CTU if
● the signaled coding tree indicates a partitioning of said CTU into fewer than A sub-blocks, and/or
● the number of non-zero valued CBFs (i.e. CBF = 1) signaled in said CTU is B, where B > 0.
In other words, if condition 1 is not met, said desired in-loop filtering shall be disallowed and shall, therefore, always be disabled in the affected CTU at both the encoder and decoder side. If, on the other hand, condition 1 is met in a CTU, the desired in-loop filtering is allowed, but this does not necessarily mean that said in-loop filtering is also enabled.
In the above described non-limiting example, the predetermined criterion, i.e. Condition 1, was met when A < 9 and B > 0. However, stated in more general terms, the predetermined criterion is fulfilled if the first measure A falls below a first predetermined threshold, and if the second measure B exceeds or is equal to a second predetermined threshold.
For example, the first predetermined threshold is p with p fulfilling 1 < p < 17 for each of the tree root blocks (CTUs), i.e. A < p. Additionally or alternatively, the second predetermined threshold is q with q fulfilling -1 < q < 51, i.e. B ≥ q.
In fact, as discussed above, it is highly desirable to provide a means for realizing highly selective control of the application of super-strong in-loop filters such as very strong deblocking filters. One exemplary way to provide this means is to
Condition 2: signal an in-loop filter control parameter (FCP), e.g. via transmission in a bit stream, in a CTU if
• condition 1 is met for said CTU.
In other words, if condition 1 is not met, said in-loop filter control parameter (FCP) is not signaled. If, on the other hand, condition 1 is met in a CTU, said filter control parameter (FCP) - e.g., an additional single-bit element - is written to the bit-stream by the encoder and read from said bit-stream by the decoder.
If the filter control parameter is present in the bit-stream (i.e., condition 1 is met) for a given CTU, then the value of this control parameter determines whether the decoder is to enable the desired in-loop filtering (e.g. value 1) or to disable it (e.g. value 0) in said CTU. In this way, the encoder can control - and signal - the desired application of, e.g. very strong deblocking.
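The decoder-side interplay of conditions 1 and 2 can be sketched as follows. This is an illustrative, non-normative sketch: the conjunctive form of condition 1 and the defaults p = 9, q = 1 follow the A < 9, B > 0 example above, the outcome labels are freely chosen, and `read_fcp_bit` stands in for whatever entropy-decoding routine actually reads the single-bit FCP.

```python
def in_loop_filter_decision(A, B, read_fcp_bit, p=9, q=1):
    """Per-CTU in-loop filter control at the decoder (illustrative sketch).

    A: number of leaf blocks (CUs) of the CTU's coding tree (first measure).
    B: number of non-zero coded block flags in the CTU (second measure).
    read_fcp_bit: callable reading the single-bit FCP when it is present.
    Returns the filter strength to apply in this CTU.
    """
    if A < p and B >= q:        # condition 1: CTU is a candidate block
        fcp = read_fcp_bit()    # condition 2: FCP is signaled for this CTU
        return "very_strong" if fcp == 1 else "default"
    return "default"            # non-candidate: FCP absent, lower strength

print(in_loop_filter_decision(4, 2, lambda: 1))  # very_strong
print(in_loop_filter_decision(9, 2, lambda: 1))  # default (A not < 9, FCP not read)
```

Note that the FCP bit is only consumed from the bit-stream when condition 1 holds, which is what makes the signaling selective.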
Summarizing, the concept of the present application may suggest a selective signaling of an in-loop filter control parameter per coding block (e.g., coding tree unit, CTU), to disable or attenuate the application of said in-loop filter in said coding block. The in-loop filter control parameter may only be signaled if the coding block is partitioned into fewer than A sub-blocks or if residual coefficient coding (i.e. non-zero coding) is applied in B of the sub-blocks.
Figure 7 shows an encoder 10 which may be applied according to the concept of the present application according to the first aspect of the invention. The encoder 10 as depicted in Figure 7 may comprise a similar internal structure as the one described above with reference to Figure 1. Thus, equal or equivalent elements or elements with equal or equivalent functionality are denoted in Figures 1 and 7 by equal or equivalent reference numerals. However, the encoder 10 of Figure 7 may differ from the encoder of Figure 1 in that it may additionally comprise a deblocking or deringing filter 120 for filtering and attenuating blocking and/or ringing artefacts, wherein blocking may be regarded as a particular case of a ringing artefact. The deblocking or deringing filter 120 may be an in-loop filter.
The encoder 10 is configured for block-based encoding of picture data using a deblocking or deringing filter 120 as an in-loop filter. The encoder 10 may further be configured to encode, in a blockwise manner, a picture 12 into a data stream 14 using prediction and by coding a prediction residual into the data stream 14 while providing a reconstructed version of the picture in a prediction loop 136 of the encoder 10. The prediction loop 136 may be a part of the prediction stage 36 which was already explained above with reference to Figure 1.
In said prediction loop 136, the reconstruction of the picture 12 and the application of the deblocking or deringing filter 120 may be simulated. Accordingly, the encoder 10 may be configured to apply the deblocking or deringing filter 120 onto the reconstructed version 12’ of the picture 12.
In said prediction loop 136, the encoder 10 may further try different filter strengths of the deblocking or deringing filter 120 in a similar fashion as explained above for the decoder side. In particular, the filter strength may be varied depending on the above described first measure A and second measure B. Accordingly, the encoder 10 may be configured to locally vary a strength of the deblocking or deringing filter 120 depending on a first measure A locally measuring a mean block size, and a second measure B locally measuring a frequency of non-zero coding of the prediction residual.
The encoder 10 is further configured to partition the picture 12 into blocks (CTUs) 181, 182, 183, 184, as described in Figure 3 and Figures 6A to 6F above. The encoder 10 is further configured to perform the encoding using the blocks 181, 182, 183, 184, wherein the first measure A is designed to locally measure a size of the blocks 181, 182, 183, 184.
In particular with reference to Figures 6A to 6F, the encoder 10 may also split the blocks 181, 182, 183, 184 into one or more sub-blocks by using a multi-tree subdivisioning, wherein the coding tree may, for instance, be a quad tree or a (generalized) binary tree.
Thus, the encoder 10 may be configured to perform the partitioning by subjecting each of a plurality of tree root blocks (CTUs) 181, 182, 183, 184 into which the picture 12 is pre-partitioned to recursive multi-tree sub-divisioning so that the blocks 181, 182, 183, 184 form leaf blocks (sub-blocks or CUs) of the plurality of tree-root blocks 181, 182, 183, 184. Furthermore, the encoder 10 may be configured to determine the first measure A and the second measure B locally for each tree root block 181, 182, 183, 184.
To do so, the encoder 10 may try, in the prediction loop 136, one or more different types of multi-tree sub-divisioning. Once the encoder 10 has found a multi-tree sub-divisioning which works well with the respective CTU, the encoder 10 selects this multi-tree sub-divisioning and adds corresponding partitioning information into the bit stream, based on the selected multi-tree sub-divisioning. Depending on said inserted partitioning information, the encoder 10 may determine the first measure A.
For example, as shown in Figure 6E, the encoder 10 may select a combined quad tree and binary tree scheme for splitting the CTU 181 into six square sub-blocks (sub-blocks 3 to 8) and into two rectangular sub-blocks (sub-blocks 1 and 2). Accordingly, the encoder 10 may split the coding block (CTU) 181 into eight sub-blocks (CUs), which corresponds to a first measure A of A = 8.
Stated in terms of the coding tree, the encoder 10 may be configured to perform the subjecting of each of the plurality of tree root blocks to the recursive multi-tree sub-divisioning based on partitioning information (e.g. quad tree / binary tree). The encoder 10 may insert the partitioning information into the data stream 14, and the encoder 10 may determine the first measure A depending on the partitioning information.
The encoder 10 may determine the first measure A on a block-wise basis. That is, the encoder 10 may determine the number of sub-blocks (CUs) for each coding block (CTU) 181, 182, 183, 184 separately.
Thus, in terms of the coding tree, the encoder 10 may be configured to determine the first measure A by determining, for each tree root block (CTU) 181, 182, 183, 184, the number of leaf blocks (CUs or sub-blocks) into which the respective tree root block (CTU) 181,
182, 183, 184 is split.
As described above with respect to the decoder 20, also the encoder 10 may be configured to predictively code picture data using one or more prediction residual signals.
Residual signals, which are also referred to as residual transform units (TU) in HEVC, are each associated with one sub-block (CU). In other words, each sub-block (CU) may comprise a transform unit (TU) for performing a piecewise transformation of the prediction residual with at least one transform unit per block, i.e. per coding block or per sub-block depending on the granularity of partitioning. Accordingly, in the coding tree, for each tree root block (CTU), a number of leaf blocks (CUs) and a number of coefficient blocks (TUs) may be determined.
CLAIMS
1. Decoder (20) for block-based decoding of picture data using a deblocking or deringing filter (110), configured to
reconstruct, in a blockwise manner, a picture (12) from a data stream (14) using prediction and using a prediction residual coded in the data stream (14) to obtain a reconstructed version (12’) of the picture (12),
apply the deblocking or deringing filter (110) to the reconstructed version (12’) of the picture (12), and
locally vary a strength of the deblocking or deringing filter (110) depending on a first measure (A) locally measuring a mean block size, and a second measure (B) locally measuring a frequency of non-zero coding of the prediction residual.
2. Decoder (20) according to claim 1, configured to
partition the picture (12) into blocks (181, 182, 183, 184; 181a-181g; 182a-182d), and
perform the reconstruction of the picture (12) using the blocks (181, 182, 183, 184; 181a-181g; 182a-182d),
wherein the first measure (A) is designed to locally measure a size of the blocks (181, 182, 183, 184; 181a-181g; 182a-182d).
3. Decoder (20) according to claim 2, configured to, in the reconstruction of the picture (12), assign to each block (181, 182, 183, 184; 181a-181g; 182a-182d) one of a plurality of prediction modes, the plurality of prediction modes comprising one or more intra-prediction modes and/or one or more inter prediction modes.
4. Decoder (20) according to claim 2 or 3, configured to
perform the partitioning of the picture (12) into blocks (181, 182, 183, 184; 181a-181g; 182a-182d) by subjecting each of a plurality of tree root blocks (181, 182, 183, 184) into which the picture (12) is pre-partitioned to recursive multi-tree sub-divisioning so that the blocks form leaf blocks (181a-181g; 182a-182d) of the plurality of tree-root blocks (181, 182, 183, 184), and
determine the first measure (A) and the second measure (B) locally for each tree root block (181, 182, 183, 184).
5. Decoder (20) according to claim 4, configured to
read partitioning information from the data stream (14),
perform the subjecting of each of the plurality of tree root blocks (181, 182, 183, 184) to the recursive multi-tree sub-divisioning depending on the partitioning information, and
determine the first measure (A) depending on the partitioning information.
6. Decoder (20) according to claim 4 or 5, configured to determine the first measure (A) by determining, for each tree root block (181, 182, 183, 184), the number of leaf blocks (181a-181g; 182a-182d) into which the respective tree root block (181, 182, 183, 184) is split.
7. Decoder (20) according to any one of claims 4 to 6, configured to
decode the prediction residual from the data stream (14) in units of coefficient blocks representing a piecewise transformation of the prediction residual with at least one coefficient block per block (181, 182, 183, 184; 181a-181g; 182a-182d), and
determine the second measure (B) by determining, for each tree root block (181, 182, 183, 184), the number of coefficient blocks being not fully quantized to zero.
8. Decoder (20) according to claim 7, configured to, for each coefficient block, select one of a plurality of inverse transformations, optionally including an identity transformation, and use the selected inverse transformation to obtain a corresponding block (181, 182, 183, 184) or a sub-block (181a-181g; 182a-182d) thereof.
9. Decoder (20) according to any of claims 1 to 8, configured to perform the local variation of the strength of the deblocking or deringing filter (110) by
for first portions of the picture (12), where the first and second measures (A, B) fulfill a predetermined criterion, reading strength information from the data stream (14) indicative of a first strength of the deblocking or deringing filter (110) to be applied at the respective portion, and
for second portions of the picture (12), where the first and second measures (A, B) do not fulfill the predetermined criterion, setting a strength of the deblocking or deringing filter (110) to be applied at the respective portion to a lower second strength.
10. Decoder (20) according to claim 9, wherein the predetermined criterion is fulfilled if the first measure (A) falls below a first predetermined threshold, and if the second measure (B) exceeds or is equal to a second predetermined threshold.
11. Decoder (20) according to claim 10, wherein
the first predetermined threshold is p with p fulfilling 1 < p < 17 for each of the tree root blocks (181, 182, 183, 184), and/or
the second predetermined threshold is q with q fulfilling -1 < q < 51 for each of the tree root blocks (181, 182, 183, 184).
12. Decoder (20) according to any one of claims 9 to 11, wherein the first and second portions are tree root blocks (181, 182, 183, 184) into which the picture (12) is pre-partitioned and which are further subject to multi-tree sub-divisioning to result into blocks (181, 182, 183, 184; 181a-181g; 182a-182d) using which the reconstruction is performed.
13. Decoder (20) according to any one of claims 1 to 12, wherein the picture data comprises a video and the decoder (20) is a video decoder, an in-loop filter of which is formed by the deblocking or deringing filter (110).
14. Encoder (10) for block-based encoding of picture data using a deblocking or deringing filter (120) as an in-loop filter, configured to
encode, in a blockwise manner, a picture (12) into a data stream (14) using prediction and by coding a prediction residual into the data stream (14) with
providing a reconstructed version (12’) of the picture (12) in a prediction loop (136) of the encoder (10),
apply the deblocking or deringing filter (120) onto the reconstructed version (12') of the picture (12), and
locally vary a strength of the deblocking or deringing filter (120) depending on a first measure (A) locally measuring a mean block size, and a second measure (B) locally measuring a frequency of non-zero coding of the prediction residual.
15. Encoder (10) according to claim 14, configured to
partition the picture into blocks (181, 182, 183, 184; 181a-181g; 182a-182d), and
perform the encoding using the blocks (181, 182, 183, 184; 181a-181g; 182a-182d),
wherein the first measure (A) is designed to locally measure a size of the blocks (181, 182, 183, 184; 181a-181g; 182a-182d).
16. Encoder (10) according to claim 15, configured to, in the encoding, assign to each block (181, 182, 183, 184; 181a-181g; 182a-182d) one of a plurality of prediction modes, the plurality of prediction modes comprising one or more intra-prediction modes and/or one or more inter prediction modes.
17. Encoder (10) according to claim 15 or 16, configured to perform the partitioning by subjecting each of a plurality of tree root blocks (181, 182, 183, 184) into which the picture (12) is pre-partitioned to recursive multi-tree sub-divisioning so that the blocks form leaf blocks (181a-181g; 182a-182d) of the plurality of tree-root blocks (181, 182, 183, 184), and
determine the first measure (A) and the second measure (B) locally for each tree root block (181, 182, 183, 184).
18. Encoder (10) according to claim 17, configured to
perform the subjecting of each of the plurality of tree root blocks (181, 182, 183, 184) to the recursive multi-tree sub-divisioning based on partitioning information,
insert the partitioning information into the data stream (14), and determine the first measure (A) depending on the partitioning information.
19. Encoder (10) according to claim 17 or 18, configured to determine the first measure (A) by determining, for each tree root block (181, 182, 183, 184), the number of leaf blocks (181a-181g; 182a-182d) into which the respective tree root block (181, 182, 183, 184) is split.
20. Encoder (10) according to any one of claims 17 to 19, configured to
encode the prediction residual into the data stream (14) in units of coefficient blocks representing a piecewise transformation of the prediction residual with at least one coefficient block per block (181, 182, 183, 184; 181a-181g; 182a-182d), and
determine the second measure (B) by determining, for each tree root block (181, 182, 183, 184), the number of coefficient blocks being not fully quantized to zero.
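The two measures of claims 19 and 20 can be sketched as follows per tree root block; the function names and the flat-list data layout are illustrative assumptions, not part of the claims:

```python
def first_measure(leaf_blocks):
    """A (claim 19): the number of leaf blocks into which the tree root
    block is split by the recursive multi-tree sub-division."""
    return len(leaf_blocks)

def second_measure(coefficient_blocks):
    """B (claim 20): the number of coefficient blocks that are not fully
    quantized to zero, i.e. contain at least one non-zero coefficient."""
    return sum(1 for cb in coefficient_blocks if any(c != 0 for c in cb))

# Example: a tree root block split into four leaf blocks, whose quantized
# coefficient blocks are given as flat lists of coefficients.
leaves = ["leaf0", "leaf1", "leaf2", "leaf3"]
coeffs = [[0, 0, 0], [5, 0, 1], [0, 0, 0], [2, 0, 0]]
A = first_measure(leaves)   # 4
B = second_measure(coeffs)  # 2
```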
21. Encoder (10) according to claim 20, configured to, for each coefficient block, select one of a plurality of transformations, optionally including an identity transformation, and use the inverse of the selected transformation to obtain a corresponding block (181, 182, 183, 184) or a sub-block (181a-181g; 182a-182d) thereof.
22. Encoder (10) according to any of claims 14 to 21, configured to perform the local variation of the strength of the deblocking or deringing filter (120) by
for first portions of the picture (12), where the first and second measures (A, B) fulfill a predetermined criterion, inserting strength information into the data stream (14) indicative of a first strength of the deblocking or deringing filter (120) to be applied at the respective portion, and
for second portions of the picture (12), where the first and second measures (A, B) do not fulfill the predetermined criterion, setting a strength of the deblocking or deringing filter (120) to be applied at the respective portion to a lower second strength.
23. Encoder (10) according to claim 22, wherein the predetermined criterion is fulfilled if the first measure (A) falls below a first predetermined threshold, and if the second measure (B) exceeds or is equal to a second predetermined threshold.
24. Encoder (10) according to claim 23, wherein
the first predetermined threshold is p with p fulfilling 1 < p < 17 for each of the tree root blocks (181, 182, 183, 184), and/or
the second predetermined threshold is q with q fulfilling -1 < q < 51 for each of the tree root blocks (181, 182, 183, 184).
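The predetermined criterion of claims 23 and 24 can be sketched as follows; the concrete values of p and q below are illustrative picks from the claimed ranges, not values fixed by the claims:

```python
def criterion_fulfilled(A, B, p=8, q=4):
    """Claim 23: the criterion is fulfilled if the first measure A falls
    below the first threshold p and the second measure B exceeds or is
    equal to the second threshold q. Claim 24 only bounds the thresholds
    by 1 < p < 17 and -1 < q < 51; p=8, q=4 are illustrative choices."""
    return A < p and B >= q

def filter_strength(A, B, signaled_strength, low_strength):
    """Claim 22: where the criterion holds, strength information is
    inserted into the data stream; elsewhere a lower strength is used."""
    return signaled_strength if criterion_fulfilled(A, B) else low_strength
```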
25. Encoder (10) according to any of claims 22 to 24, wherein the first and second portions are tree root blocks (181, 182, 183, 184) into which the picture (12) is pre-partitioned and which are further subject to multi-tree sub-divisioning to result into blocks (181a-181g; 182a-182d) using which the encoding is performed.
26. A method for block-based decoding of picture data using a deblocking or deringing filter (110), the method comprising the steps of
reconstructing, in a blockwise manner, a picture (12) from a data stream (14) using prediction and using a prediction residual coded in the data stream (14) to obtain a reconstructed version (12’) of the picture (12),
applying the deblocking or deringing filter (110) to the reconstructed version (12’) of the picture (12), and
locally varying a strength of the deblocking or deringing filter (110) depending on a first measure (A) locally measuring a mean block size, and a second measure (B) locally measuring a frequency of non-zero coding of the prediction residual.
27. A method for block-based encoding of picture data using a deblocking or deringing filter (120) as an in-loop filter, the method comprising the steps of
encoding, in a blockwise manner, a picture (12) into a data stream (14) using prediction and coding a prediction residual into the data stream (14) with providing a reconstructed version (12’) of the picture (12) in a prediction loop (136), applying the deblocking or deringing filter (120) onto the reconstructed version (12’) of the picture (12), and
locally varying a strength of the deblocking or deringing filter (120) depending on a first measure (A) locally measuring a mean block size, and a second measure (B) locally measuring a frequency of non-zero coding of the prediction residual.
28. A computer readable digital storage medium having stored thereon a computer program having a program code for performing, when running on a computer, a method according to claim 26 or 27.
29. A data stream obtained by a method according to claim 26 or 27.
30. A deblocking filter (110, 120) for filtering a block (1000) of a picture (12), wherein the deblocking filter (110, 120) is configured to
determine, for each of at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) of a border (1010) of the block (1000), a dissimilarity between an unfiltered content (1015) of the block (1000) and a surrounding picture content (1016) around the block (1000) along the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), the eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) including four corner border portions (1021, 1022, 1023, 1024), each arranged at a corner of the block (1000), and four edge border portions (1011, 1012, 1013, 1014), each arranged at intermediary portions of the border (1010) between the corners of the block (1000), and
parametrize a deblocking filtering of the block (1000) using the dissimilarities determined for the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) in order to obtain a filtered content of the block (1000).
31. The deblocking filter (110, 120) of claim 30 configured to
determine, for each of the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), the dissimilarity by computing a mean difference
between first samples and second samples, said first samples being located inside the block (1000) and adjoining the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), and said second samples being located outside the block (1000) and adjoining the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024).
32. The deblocking filter (110, 120) of claim 30 or 31 configured to
determine, for each of the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), the dissimilarity by computing a difference between a first sum over first samples and a second sum over second samples, said first samples being located inside the block (1000) and adjoining the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), and said second samples being located outside the block (1000) and adjoining the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024).
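The two dissimilarity measures of claims 31 and 32 can be sketched as follows, under the assumption that the first (inner) and second (outer) samples along a border portion are given as equal-length lists of sample values:

```python
def dissimilarity_mean(inner, outer):
    """Claim 31: mean difference between first samples (inside the block,
    adjoining the border portion) and second samples (outside)."""
    return sum(i - o for i, o in zip(inner, outer)) / len(inner)

def dissimilarity_sums(inner, outer):
    """Claim 32: difference between a first sum over the first samples
    and a second sum over the second samples."""
    return sum(inner) - sum(outer)

# A constant step of height 4 across the border yields a dissimilarity
# of 4 (mean difference) or 16 (sum difference, four sample pairs).
inside = [100, 104, 102, 98]
outside = [96, 100, 98, 94]
```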
33. The deblocking filter (110, 120) of any of claims 30 to 32 configured to
set widths of the at least eight border portions (1011, 1012, 1013, 1014,
1021, 1022, 1023, 1024) depending on a size of the block (1000) so that, at least for one of the eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), a width of the respective border portion (1011, 1012, 1013, 1014, 1021,
1022, 1023, 1024) equals a fraction of a length of the block’s border (1010) which varies for different block sizes.
34. The deblocking filter (110, 120) of any of claims 30 to 33 configured to
set widths of the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) depending on a size of the block (1000) so that,
if the size of the block (1000) is smaller than a first predetermined amount, the four edge border portions (1011, 1012, 1013, 1014) and the four corner border portions (1021, 1022, 1023, 1024) mutually overlap, and if the size of the block (1000) is greater than the first predetermined amount, the four edge border portions (1011, 1012, 1013, 1014) and the four corner border portions (1021, 1022, 1023, 1024) do not overlap.
35. The deblocking filter (110, 120) of any of claims 30 to 34 configured to
set widths of the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) depending on a size of the block (1000) so that,
if the size of the block (1000) is smaller than a first predetermined amount, the four edge border portions (1011, 1012, 1013, 1014) and the four corner border portions (1021, 1022, 1023, 1024) mutually overlap, and if the size of the block (1000) is between a first predetermined amount and a second predetermined amount, the four edge border portions (1011, 1012, 1013, 1014) and the four corner border portions (1021, 1022, 1023, 1024) mutually abut each other, and
if the size of the block (1000) is greater than the second predetermined amount, the four edge border portions (1011, 1012, 1013, 1014) and the four corner border portions (1021, 1022, 1023, 1024) are mutually spaced apart from each other.
36. The deblocking filter (110, 120) of any of claims 33 to 35 configured to
set the widths of the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) depending on the size of the block (1000) separately along horizontal and vertical axes.
37. The deblocking filter (110, 120) of any of claims 30 to 36 configured to perform the deblocking filtering of the block (1000) by
offsetting each sample within a boundary band (1033) of the block (1000) extending along the border (1010) of the block (1000) by using an offset value, and setting said offset value so that
the offset value is, for each line (1031, 1032) of samples being equally shaped to the block's border (1010) and having a constant sample offset thereto, constant within each of boundary portions (1041a, 1041b, 1051, 1061a, 1061b) into which the boundary band (1033) is circumferentially partitioned,
the offset value of samples within the respective boundary portion (1041a, 1041b, 1051, 1061a, 1061b) is subject to an
attenuation from the border (1010) to a middle of the block (1000), and so that
the offset value of samples within the respective boundary portion (1041a, 1041b, 1051, 1061a, 1061b) is computed based on the dissimilarity determined for one or more of the border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) being circumferentially nearest to the respective boundary portion (1041a, 1041b, 1051, 1061a, 1061b).
38. The deblocking filter (110, 120) of claim 37 configured to set a band width of the boundary band monotonically increasing with respect to a size of the block (1000).
39. The deblocking filter (110, 120) of claim 37 or 38 configured so that the boundary band comprises a constant circumferential width.
40. Deblocking filter (110, 120) of any of claims 37 to 39 configured so that the boundary portions of the boundary band (1033) at least comprise
at each corner of the block (1000), a corner boundary portion (1041a, 1041b) diagonally extending from the respective corner towards an inside (1030) of the block (1000), and
between each pair of neighboring corner boundary portions (1041a, 1041b) of the block (1000), two or three boundary portions (1051, 1061a, 1061b).
41. The deblocking filter (110, 120) of claim 40 configured so that the two or three boundary portions (1051, 1061a, 1061b) between each pair of neighboring corner boundary portions (1041a, 1041b) of the block (1000) comprise
a first section (1061a) circumferentially neighboring a first corner boundary portion (1041a) of the respective pair,
a second section (1061b) circumferentially neighboring a second corner boundary portion (1041b) of the respective pair, and
a middle boundary portion (1051) circumferentially arranged between the first and second corner boundary portions (1041a, 1041b) of the respective pair.
42. The deblocking filter (110, 120) of claim 40 configured so that the two or three boundary portions (1051, 1061a, 1061b) between each pair of neighboring corner boundary portions (1041a, 1041b) of the block (1000) comprise,
if the block size measured along a direction extending between the corners from which the respective pair of neighboring corner boundary portions (1041a,
1041b) extends towards the inside of the block (1000) is greater than two times a width of the boundary band (1033),
a first section (1061a) circumferentially neighboring a first corner boundary portion (1041a) of the respective pair,
a second section (1061b) circumferentially neighboring a second corner boundary portion (1041b) of the respective pair, and a middle boundary portion (1051) circumferentially extending between the first and second corner boundary portions (1041a, 1041b) of the respective pair, and
if the block size measured along a direction extending between the corners from which the respective pair of neighboring corner boundary portions (1041a, 1041b) extends towards the inside of the block (1000) is not greater than two times a width of the boundary band (1033),
a first section (1061a) circumferentially neighboring a first corner boundary portion (1041a) of the respective pair, and
a second section (1061b) circumferentially neighboring a second corner boundary portion (1041b) of the respective pair,
wherein the first and second sections (1061a, 1061b) abut each other.
43. The deblocking filter (110, 120) of claim 41 or 42 configured so that the middle boundary portion (1051) is circumferentially as wide as the block (1000) minus two times the width of the boundary band (1033).
44. The deblocking filter (110, 120) of any one of claims 40 to 43 configured to
set, for each of the corner boundary portions (1041), the offset value (oc) for samples within the respective corner boundary portion (1041) so that the offset value (oc) is determined based on the dissimilarity determined for the corner border portion (1021, 1022, 1023, 1024) from which the respective corner boundary portion (1041) diagonally extends towards the inside of the block (1000) and is subject to an attenuation from the border (1010) of the block (1000) towards the inside (1030) of the block (1000).
45. The deblocking filter (110, 120) of any one of claims 37 to 44, wherein the attenuation from the border (1010) of the block (1000) towards the inside (1030) of the block (1000) is a linear or exponential attenuation.
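The attenuation of claim 45 can be sketched as follows; the concrete decay rates (one band-width linear ramp, halving per sample line) are illustrative assumptions, as the claims only require a linear or exponential decay from the border towards the inside:

```python
def attenuated_offset(offset, distance, band_width, mode="linear"):
    """Scale an offset value by a weight that is 1 at the border
    (distance 0) and decays towards the inside of the block.
    Claim 45 permits a linear or an exponential attenuation."""
    if mode == "linear":
        weight = max(0.0, 1.0 - distance / band_width)
    else:
        weight = 0.5 ** distance  # illustrative exponential decay
    return offset * weight
```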
46. The deblocking filter (110, 120) of any of claims 40 to 45 configured to
set, for each of the corner boundary portions (1041), the offset value (oc) for samples within the respective corner boundary portion (1041) so that the offset value (oc) of the samples within the respective corner boundary portion (1041) varies from the border (1010) of the block (1000) towards the inside (1030) of the block (1000) according to a weighted average over
a first offset value (offsetc) determined based on the dissimilarity determined for the corner border portion (1021, 1022, 1023, 1024) from which the respective corner boundary portion (1041) diagonally extends towards the inside of the block (1000) and
a second offset value (mc) determined based on the dissimilarities determined for the edge border portions (1011, 1012, 1013, 1014) circumferentially adjacent to the respective corner border portion (1021, 1022, 1023, 1024) from which the respective corner boundary portion (1041) diagonally extends towards the inside of the block (1000), wherein weights of the weighted average depend on the samples’ distance from the border (1010) of the block (1000) in a manner so that, at increasing distance, the weighted average depends monotonically less on the first offset value (offsetc) compared to the second offset value (mc).
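The weighted average of claim 46 can be sketched with a linear weight as an illustrative choice; the claim only requires that, with increasing distance from the border, the result depends monotonically less on offsetc relative to mc:

```python
def corner_sample_offset(offset_c, m_c, distance, band_width):
    """offset_c: first offset value, from the corner border portion's
    dissimilarity. m_c: second offset value, from the adjacent edge
    border portions' dissimilarities. The weight on offset_c decreases
    with the sample's distance from the border (linear is illustrative)."""
    w = max(0.0, 1.0 - distance / band_width)
    return w * offset_c + (1.0 - w) * m_c
```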
47. The deblocking filter (110, 120) of any of claims 40 to 46 configured so that the two or three boundary portions (1051, 1061a, 1061b) between each pair of neighboring corner boundary portions (1041a, 1041b) of the block (1000) comprise
a first section (1061a) circumferentially neighboring a first corner boundary portion (1041a) of the respective pair,
a second section (1061b) circumferentially neighboring a second corner boundary portion (1041b) of the respective pair, and
a middle boundary portion (1051) circumferentially between the first and second corner boundary portions (1041a, 1041b) of the respective pair, and wherein the deblocking filter (110, 120) is further configured to
set, for the first section (1061a), the offset value for samples within the first section (1061a) so that
for each line (1031, 1032), the offset value of samples within the respective line (1031, 1032) is interpolated between the offset value of the first corner boundary portion (1041a) in the respective line (1031, 1032) and the offset value of the middle boundary portion (1051) in the respective line (1031, 1032), and to
set, for the second section (1061b), the offset value for samples within the second section (1061b) so that
for each line (1031, 1032), the offset value of samples within the respective line (1031, 1032) is interpolated between the offset value of the second corner boundary portion (1041b) in the respective line (1031, 1032) and the offset value of the middle boundary portion (1051) in the respective line (1031, 1032).
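The per-line interpolation of claim 47 can be sketched as follows, assuming linear interpolation over the section width (the claim does not fix the interpolation kernel):

```python
def section_offset(corner_offset, middle_offset, pos, section_width):
    """Offset for a sample at position pos within a first or second
    section of a line, with pos = 0 at the side of the corner boundary
    portion and pos = section_width at the middle boundary portion."""
    t = pos / section_width
    return (1.0 - t) * corner_offset + t * middle_offset
```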
48. A decoder (20) for block-based decoding of a picture (12), the decoder (20) comprising a deblocking filter (110, 120) according to any one of claims 30 to 45.
49. The decoder (20) of claim 48 configured to
decode a prediction residual from the data stream (14) in units of coefficient blocks representing transform blocks into which the prediction residual is partitioned, wherein the deblocking filter (110, 120) is configured to deblock the picture (12) in units of transform blocks.
50. An encoder (10) for block-based encoding of a picture (12), the encoder (10) comprising a deblocking filter (110, 120) according to any one of claims 30 to 45.
51. The encoder (10) of claim 50 configured to
encode a prediction residual into the data stream (14) in units of coefficient blocks representing transform blocks into which the prediction residual is partitioned, wherein the deblocking filter (110, 120) is configured to deblock the picture (12) in units of transform blocks.
52. A method for filtering a block (1000) of a block-based coded picture (12), the method comprising steps of:
determining, for each of at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) of a border (1010) of the block (1000), a dissimilarity between an unfiltered content (1015) of the block (1000) and a surrounding picture content (1016) around the block (1000) along the respective border portion (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024), the eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) including four corner border portions (1021, 1022, 1023, 1024), each arranged at a corner of the block (1000), and four edge border portions (1011, 1012, 1013, 1014), each arranged at intermediary portions of the border (1010) between the corners of the block (1000), and
parametrizing a deblocking filtering of the block (1000) using the dissimilarities determined for the at least eight border portions (1011, 1012, 1013, 1014, 1021, 1022, 1023, 1024) in order to obtain a filtered content of the block (1000).
53. A computer readable digital storage medium having stored thereon a computer program having a program code for performing, when running on a computer, a method according to claim 52.
54. A data stream obtained by a method according to claim 52.
| # | Name | Date |
|---|---|---|
| 1 | 202117018937-Correspondence to notify the Controller [12-03-2025(online)]-1.pdf | 2025-03-12 |
| 2 | 202117018937-FORM 3 [14-03-2024(online)].pdf | 2024-03-14 |
| 3 | 202117018937-STATEMENT OF UNDERTAKING (FORM 3) [23-04-2021(online)].pdf | 2021-04-23 |
| 4 | 202117018937-Written submissions and relevant documents [23-04-2025(online)].pdf | 2025-04-23 |
| 5 | 202117018937-Correspondence to notify the Controller [12-03-2025(online)].pdf | 2025-03-12 |
| 6 | 202117018937-Information under section 8(2) [05-03-2024(online)].pdf | 2024-03-05 |
| 7 | 202117018937-REQUEST FOR EXAMINATION (FORM-18) [23-04-2021(online)].pdf | 2021-04-23 |
| 8 | 202117018937-Information under section 8(2) [31-10-2023(online)].pdf | 2023-10-31 |
| 9 | 202117018937-NOTIFICATION OF INT. APPLN. NO. & FILING DATE (PCT-RO-105) [23-04-2021(online)].pdf | 2021-04-23 |
| 10 | 202117018937-US(14)-ExtendedHearingNotice-(HearingDate-08-04-2025)-1100.pdf | 2025-03-11 |
| 11 | 202117018937-FORM 18 [23-04-2021(online)].pdf | 2021-04-23 |
| 12 | 202117018937-FORM 3 [15-09-2023(online)].pdf | 2023-09-15 |
| 13 | 202117018937-FORM-26 [07-03-2025(online)].pdf | 2025-03-07 |
| 14 | 202117018937-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [05-03-2025(online)].pdf | 2025-03-05 |
| 15 | 202117018937-Information under section 8(2) [11-05-2023(online)].pdf | 2023-05-11 |
| 16 | 202117018937-FORM 1 [23-04-2021(online)].pdf | 2021-04-23 |
| 17 | 202117018937-FORM 3 [17-03-2023(online)].pdf | 2023-03-17 |
| 18 | 202117018937-FORM 3 [04-03-2025(online)].pdf | 2025-03-04 |
| 19 | 202117018937-DRAWINGS [23-04-2021(online)].pdf | 2021-04-23 |
| 20 | 202117018937-Correspondence to notify the Controller [10-02-2025(online)].pdf | 2025-02-10 |
| 21 | 202117018937-DECLARATION OF INVENTORSHIP (FORM 5) [23-04-2021(online)].pdf | 2021-04-23 |
| 22 | 202117018937-Information under section 8(2) [03-03-2023(online)].pdf | 2023-03-03 |
| 23 | 202117018937-CLAIMS [18-11-2022(online)].pdf | 2022-11-18 |
| 24 | 202117018937-COMPLETE SPECIFICATION [23-04-2021(online)].pdf | 2021-04-23 |
| 25 | 202117018937-US(14)-HearingNotice-(HearingDate-10-03-2025).pdf | 2025-02-10 |
| 26 | 202117018937-FER_SER_REPLY [18-11-2022(online)].pdf | 2022-11-18 |
| 27 | 202117018937-Proof of Right [05-05-2021(online)].pdf | 2021-05-05 |
| 28 | 202117018937-FORM-26 [28-05-2021(online)].pdf | 2021-05-28 |
| 29 | 202117018937-OTHERS [18-11-2022(online)].pdf | 2022-11-18 |
| 30 | 202117018937-FORM 3 [07-09-2022(online)].pdf | 2022-09-07 |
| 31 | 202117018937-FORM 3 [23-09-2021(online)].pdf | 2021-09-23 |
| 32 | 202117018937-Information under section 8(2) [07-09-2022(online)].pdf | 2022-09-07 |
| 33 | 202117018937.pdf | 2021-10-19 |
| 34 | 202117018937-FORM 4(ii) [04-08-2022(online)].pdf | 2022-08-04 |
| 35 | 202117018937-FER.pdf | 2022-02-18 |
| 36 | 202117018937-FORM 3 [23-03-2022(online)].pdf | 2022-03-23 |
| 37 | 202117018937E_16-02-2022.pdf | |