
Inter Prediction Concept Using Tile Independency Constraints

Abstract: Different concepts for improving video coding efficiency are described. Many of them allow for video coding in a manner realizing tile-independent coding while reducing, however, the coding efficiency losses otherwise associated with the tile-dependency disruptions, while nevertheless merely marginally, if at all, modifying the codec behavior alongside the tile boundaries.


Patent Information

Application #
Filing Date
24 May 2021
Publication Number
01/2022
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
mail@lexorbis.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-07-11
Renewal Date

Applicants

FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München

Inventors

1. SKUPIN, Robert
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
2. SÁNCHEZ DE LA FUENTE, Yago
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
3. HELLGE, Cornelius
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
4. WIECKOWSKI, Adam
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
5. GEORGE, Valeri
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10365 Berlin
6. BROSS, Benjamin
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
7. SCHIERL, Thomas
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
8. SÜHRING, Karsten
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin
9. WIEGAND, Thomas
c/o Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI Einsteinufer 37 10587 Berlin

Specification

(EXTRACTED FROM WIPO)
Inter-Prediction Concept Using Tile-Independency Constraints
Description
The present application is concerned with an inter coding concept for use in a block-based codec such as, for example, a hybrid video codec, especially with a concept allowing for tile-based coding, i.e. independent coding of tiles into which a video is spatially subdivided.
Existing applications such as 360° video services based on the MPEG OMAF standard heavily rely on spatial video partitioning or segmentation techniques. In such applications, spatial video segments are transmitted to the client and jointly decoded in a manner adaptive to the current client viewing direction. Another relevant application that relies on spatial segmentation of the video plane is the parallelization of encoding and decoding operations, e.g. to exploit the multi-core capabilities of modern computing platforms.
One such spatial segmentation technique is implemented in HEVC and known as tiles, which divide the picture plane into segments forming a rectangular grid. The resulting spatial segments are coded independently with respect to entropy coding and intra-prediction. Furthermore, there exist means to indicate that the spatial segments are also coded independently with respect to state-of-the-art inter-prediction. For some applications, such as the ones listed above, having constraints in all three areas, i.e. entropy coding, intra- and inter-prediction, is vital.
However, as the ever-evolving art of video coding brings along new coding tools of which many relate to the field of inter-prediction, i.e. many tools incorporate new dependencies to previously coded pictures or different areas within the currently coded picture, appropriate care has to be taken on how to guarantee independence in all these mentioned areas.
Until now, it has been the encoder which takes care that, with the just-mentioned coding tools being available, the coding parameters are set in such a manner that the independent coding of the tiles of the video is adhered to. The decoder "relies on" a respective guarantee signaled by the encoder to the decoder via the bitstream.
It would be worthwhile to have a concept at hand which enables tile-independent coding in a manner leading to smaller coding efficiency losses due to the coding dependency disruption caused by the tile partitioning, while nevertheless causing merely marginal modifications of the codec behavior alongside the boundaries.
Thus, it would be favorable to have a concept at hand which allows for video coding in a manner realizing tile-independent coding while reducing, however, the coding efficiency losses otherwise associated with the tile-dependency disruptions, while nevertheless merely marginally, if at all, modifying the codec behavior alongside the tile boundaries.
This object is achieved by the subject-matter of the independent claims of the present application.
Generally speaking, it is a finding of the present application that a more efficient way of allowing for tile-independent coding of video material is enabled if the obligation to adhere to the tile-independent coding of the video is partially handed over from the encoder to the decoder or, in other words, is partially co-attended to by the decoder, so that the encoder may make use of that co-attention. To be more precise, in accordance with embodiments of the present application, the decoder is provided with a tile-boundary awareness. That is, the decoder acts in a manner depending on the position of boundaries between tiles into which the video is spatially partitioned. In particular, this tile-boundary awareness also relates to the decoder's derivation of motion information from the data stream. This "awareness" leads to the decoder recognizing signaled motion information conveyed in the data stream which would, if applied as signaled, compromise the tile-independency requirement, and, accordingly, to the decoder mapping such signaled motion information onto allowed motion information states, i.e. motion information which, when used for inter-prediction, does not compromise the tile-independency. The encoder may rely on this behavior, i.e. it is aware of the decoder's awareness and, especially, of the redundancy of signalable motion information states resulting from the decoder's obeyance or enforcement of the tile-independency constraints. In particular, the encoder may exploit the decoder's tile-independency constraint enforcement/obeyance in order to select, among signalable motion information states leading, due to the decoder behavior, to the same motion information at the decoder side, the one requiring the least bitrate, such as, for instance, one associated with a signaled motion information prediction residual of zero.
Thus, at the encoder side, the motion information for a certain inter-predicted block is determined so as to conform to the constraint that the patch from which the certain inter-predicted block is to be predicted lies within, and does not cross, the boundaries of the tile which comprises the certain inter-predicted block, i.e. within which the certain inter-predicted block is located. When encoding the motion information of that certain inter-predicted block into the data stream, however, the encoder exploits the fact that the derivation thereof from the data stream is performed depending on the tile boundaries, i.e. necessitates the just-outlined tile-boundary awareness.
In accordance with embodiments of the present application, the motion information comprises a motion vector for a certain inter-predicted block and the tile-boundary awareness of the decoder relates to this motion vector. In particular, in accordance with embodiments of the present application, the decoder enforces the tile-independency constraints with respect to predictively coded motion vectors. That is, according to these embodiments, the decoder obeys or enforces the constraint on the motion vector, so as to prevent the patch from which the inter-predicted block is to be predicted from exceeding the boundaries of the tile within which the inter-predicted block is located, at the time of determining the motion vector on the basis of a motion vector predictor on the one hand and the motion information prediction residual transmitted in the data stream for the inter-predicted block on the other hand. That is, the decoder performs the just-mentioned obeyance/enforcement by use of a mapping which is non-invertible: instead of, for instance, mapping all combinations of motion information prediction and motion information prediction residual to the sum thereof in order to yield the motion vector to be finally used, this mapping redirects all possible combinations of motion vector prediction and motion vector prediction residual the sum of which would lead to a motion vector associated with a patch exceeding the current tile's boundaries, i.e. the boundaries of the tile within which the current inter-predicted block is located, towards motion vectors the associated patches of which do not exceed the current tile's boundaries. As a consequence, the encoder may exploit the resulting ambiguity in signaling certain motion vectors for a predetermined inter-predicted block and may, for instance, choose for this inter-predicted block the signaling of the motion vector prediction residual leading to the lowest bitrate. This might be, for instance, a motion vector difference of zero.
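The non-invertible mapping described above can be illustrated with a small sketch. The function name, the integer-pel simplification, and the omission of any interpolation-filter margin are illustrative assumptions, not the codec's actual arithmetic:

```python
def clip_mv_to_tile(mv, block_x, block_y, block_w, block_h,
                    tile_x0, tile_y0, tile_x1, tile_y1):
    """Redirect a motion vector so the referenced patch stays inside the
    current tile (hypothetical integer-pel model; interpolation margins
    are ignored for simplicity)."""
    mvx, mvy = mv
    # Clamp so that [block_x+mvx, block_x+mvx+block_w-1] lies in
    # [tile_x0, tile_x1], and analogously for the vertical component.
    mvx = max(tile_x0 - block_x, min(mvx, tile_x1 - (block_x + block_w - 1)))
    mvy = max(tile_y0 - block_y, min(mvy, tile_y1 - (block_y + block_h - 1)))
    return (mvx, mvy)
```

Note that the mapping is non-invertible: the signaled vectors (-10, 5) and (-4, 5) for an 8x8 block at (4, 4) in a 64x64 tile both map to (-4, 5), which is exactly the signaling redundancy the encoder may exploit by choosing the cheaper residual.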
In accordance with a variant of the above-outlined idea of providing the decoder with the capability of at least partially adopting the tile-boundary awareness for enforcing the tile-independency constraints, for which otherwise the encoder alone is responsible, the decoder applies the tile-independency constraint enforcement onto one or more motion information predictors for a certain inter-predicted block, rather than onto the motion information resulting from combining the motion information prediction and the motion information prediction residual. Both encoder and decoder perform the tile-independency enforcement on the motion information predictor(s), so that both use the same motion information predictor(s). Signalization ambiguity, and the possibility of exploiting the latter in order to minimize bitrate, is not an issue here. However, preparing the motion information predictor(s) for a certain inter-predicted block in advance, i.e. before using same for motion information predictive coding/decoding, makes it possible to tailor or "focus" the available motion information predictor(s) for a certain inter-predicted block so as to solely point to patch locations within the current tile, instead of wasting one or more motion information predictor(s) on conflicting patch locations, i.e. patch locations exceeding the current tile's boundaries, the usage of which for predictively encoding the motion information of the inter-predicted block would anyway require the signalization of a non-zero motion information prediction residual so as to redirect the conflicting motion information predictor to a patch location in the interior of the current tile. Here, too, the motion information may be a motion vector.
Related to the latter variant, but nevertheless different therefrom, further embodiments of the present application aim at avoiding motion information prediction candidates for a certain inter-predicted block the application of which directly, i.e. with a zero motion information prediction residual, would compromise the tile-independency constraint. That is, other than in the previous variant, such a motion information prediction candidate would simply not be used for populating the motion information prediction candidate list for a currently predicted inter-predicted block. Encoder and decoder act in the same way. No redirection is performed; these predictors are simply left out. That is, the establishment of the motion information prediction candidate list is made in the same tile-boundary-aware manner. In this manner, all members signalable for the currently coded inter-predicted block concentrate on non-conflicting motion information prediction candidates. Thereby, the complete list is signalable by way of a pointer into the data stream, as no signalable state of such a pointer has to be "wasted" on motion information prediction candidates forbidden to be signaled in order to conform to the tile-independency constraint, be it because the motion information prediction candidate pointed to, or any candidate preceding it in rank position, would be in conflict with this constraint.
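A minimal sketch of such tile-boundary-aware list construction might look as follows; the block and tile geometry conventions, the maximum list length, and all names are illustrative assumptions:

```python
def patch_inside_tile(mv, block, tile):
    """True if the patch referenced by mv for this block stays in the tile
    (hypothetical integer-pel geometry, no interpolation margin)."""
    bx, by, bw, bh = block
    tx0, ty0, tx1, ty1 = tile
    x0, y0 = bx + mv[0], by + mv[1]
    return (tx0 <= x0 and ty0 <= y0
            and x0 + bw - 1 <= tx1 and y0 + bh - 1 <= ty1)

def build_candidate_list(candidates, block, tile, max_len=5):
    """Populate a candidate list keeping only non-conflicting, non-duplicate
    candidates; conflicting ones are simply left out, not redirected."""
    out = []
    for mv in candidates:
        if patch_inside_tile(mv, block, tile) and mv not in out:
            out.append(mv)
        if len(out) == max_len:
            break
    return out
```

Because encoder and decoder run the same construction, every signalable pointer value refers to a usable candidate.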
Similarly, further embodiments of the present application aim at avoiding populating the motion information prediction candidate list with candidates the origin of which resides in blocks located outside the current tile, i.e. the tile within which the current inter-predicted block is located. Accordingly, and in accordance with these embodiments, the decoder as well as the encoder checks whether the inter-predicted block adjoins a predetermined side, such as the lower and/or right-hand side, of the current tile. If so, a first block in a motion information reference picture is identified and the list is populated with a motion information prediction candidate derived from the motion information of this first block; if not, a second block in the motion information reference picture is identified and the motion information prediction candidate list is populated, instead, with a motion information prediction candidate derived from the second block's motion information. For instance, the first block may be the one containing a location co-located to a first predetermined location inside the current inter-predicted block, while the second block is the one containing a location co-located to a second predetermined location lying outside the inter-predicted block, namely offset relative to the inter-predicted block along a direction perpendicular to the predetermined side.
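The side-dependent choice of the co-located position may be sketched as follows; picking the bottom-right neighbour position as the "second" (outside) location and the block center as the "first" (inside) fallback is an illustrative assumption:

```python
def colocated_position(block, tile):
    """Choose the position used to identify the co-located reference block.
    The usual position lies just outside the block's bottom-right corner;
    if that position would leave the current tile, fall back to a position
    inside the block (its center), so the candidate's origin stays in-tile."""
    bx, by, bw, bh = block
    tx0, ty0, tx1, ty1 = tile
    outside = (bx + bw, by + bh)  # bottom-right neighbour position
    if outside[0] > tx1 or outside[1] > ty1:
        return (bx + bw // 2, by + bh // 2)  # center: always inside block
    return outside
```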
In accordance with a further variant of the above-outlined idea of the present application, the motion information prediction candidate list construction/establishment is made in a manner so as to shift motion information prediction candidates the origin of which is liable to be in conflict with the tile-independency constraint, i.e. the origin of which lies outside the current tile, towards the end of the motion information prediction candidate list. In this manner, the signalization of pointers into the motion information prediction candidate list at the encoder side is not restricted too much. In other words, the pointer sent for a certain inter-predicted block, signalizing the motion information prediction candidate actually to be used for the current inter-predicted block, indicates this candidate by its rank position in the motion information prediction candidate list. By shifting motion information prediction candidates which might be unavailable, as their origin is located outside the current tile, towards the end of the list, or at least to later, i.e. higher, ranks, all the motion information prediction candidates preceding the latter in the list remain signalable by the encoder and, accordingly, the repertoire of motion information prediction candidates available for predicting the current block's motion information is larger than it would be without shifting such "problematic" motion information prediction candidates towards the end of the list. In accordance with some embodiments relating to the just-outlined aspect, the populating of the motion information prediction candidate list in a manner so as to shift "problematic" motion information prediction candidates towards the end of the list is performed in the tile-border-aware manner at encoder and decoder. In this manner, the slight coding efficiency penalty associated with this shifting of the more likely affected motion information prediction candidates towards the end of the list is restricted to areas of the pictures of the video alongside the tile boundaries. In accordance with an alternative, however, the shifting of "problematic" motion information prediction candidates towards the end of the list is performed irrespective of whether the current block lies alongside any tile boundary. While slightly reducing the coding efficiency, the latter alternative might improve the robustness and ease the coding/decoding procedure. The "problematic" motion information prediction candidates might be ones derived from blocks in the reference picture or ones derived from a motion information history management.
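The reordering described above amounts to a stable partition of the candidate list; a sketch, with the predicate for "problematic" left abstract since it depends on the candidate's origin:

```python
def reorder_candidates(cands, is_problematic):
    """Stable partition: candidates flagged as problematic (origin liable
    to lie outside the current tile) move to the end of the list while the
    relative order within each group is preserved."""
    safe = [c for c in cands if not is_problematic(c)]
    risky = [c for c in cands if is_problematic(c)]
    return safe + risky
```

All "safe" candidates keep low rank positions and thus stay cheaply signalable even when the risky ones turn out to be unavailable.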
In accordance with a further embodiment of the present application, decoder and encoder determine a temporal motion information prediction candidate in a tile-boundary-aware manner by enforcing tile-independency constraints with respect to a predicted motion vector which is used, in turn, to point to a block in the motion information reference picture, the motion information of which is used to form a temporal motion information prediction candidate in the list. The predicted motion vector is clipped so as to stay within the current tile, i.e. to point to a position within the current tile. The availability of such a candidate within the current tile is, accordingly, guaranteed, so that the list's conformity with the tile-independency constraint is preserved. In accordance with an alternative concept, instead of clipping the motion vector, a second motion vector is used if the first motion vector points outside the current tile. That is, the second motion vector is used to locate the block on the basis of the motion information of which the temporal motion information prediction candidate is formed, if the first motion vector points outside the tile.
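The substitute-vector alternative can be sketched as follows; reducing the tile test to a single point and all names are assumed simplifications:

```python
def locate_colocated_block(mv_first, mv_second, block_pos, tile):
    """Return the vector used to locate the reference block for the
    temporal candidate: mv_first unless the position it points to lies
    outside the current tile, in which case mv_second substitutes it."""
    bx, by = block_pos
    tx0, ty0, tx1, ty1 = tile
    x, y = bx + mv_first[0], by + mv_first[1]
    if tx0 <= x <= tx1 and ty0 <= y <= ty1:
        return mv_first
    return mv_second
```

The clipping variant of the same embodiment would instead clamp `mv_first` into the tile, as in the earlier clipping sketch.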
In accordance with even further embodiments of the above-outlined idea of providing a decoder with a tile-boundary awareness in order to assist in enforcing tile-independency constraints, a decoder which supports motion-compensated prediction, according to which motion information is coded in the data stream for a certain inter-predicted block and the decoder derives therefrom a motion vector for each of the sub-blocks into which this inter-predicted block is partitioned, performs the derivation of the sub-block motion vectors, or the prediction of each sub-block using the derived motion vectors, or both, depending on the position of boundaries between the tiles. In this manner, the cases where such an effective coding mode could not be used by the encoder, as it would conflict with the tile-independency constraint, are fairly reduced.
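For orientation, a sub-block motion vector derivation according to the 4-parameter affine model of Fig. 5 may be sketched as follows; floating-point arithmetic replaces the fixed-point arithmetic a real codec would use, and the tile-aware clamping of the per-sub-block vectors (as in the earlier clipping sketch) is left out:

```python
def affine_subblock_mvs(v0, v1, w, h, sub=4):
    """Derive one motion vector per sub-block from the two control-point
    vectors v0 (top-left) and v1 (top-right) of a 4-parameter affine
    model, evaluated at each sub-block's center (simplified sketch)."""
    a = (v1[0] - v0[0]) / w  # zoom/rotation parameters of the model
    b = (v1[1] - v0[1]) / w
    mvs = {}
    for y in range(0, h, sub):
        for x in range(0, w, sub):
            cx, cy = x + sub / 2, y + sub / 2  # sub-block center
            mvs[(x, y)] = (a * cx - b * cy + v0[0],
                           b * cx + a * cy + v0[1])
    return mvs
```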
In accordance with a further aspect of the present application, a codec which supports motion-compensated bi-directional prediction and involves a bi-directional optical flow tool at encoder and decoder for improving the motion-compensated bi-directional prediction is made compliant with tile-independent coding either by providing encoder and decoder with an automatic deactivation of the bi-directional optical flow tool in cases where the application of the tool would lead to a conflict with the tile-independency constraint, or by using boundary padding in order to determine those regions of a patch, from which a certain bi-predictively inter-predicted block is predicted using the bi-directional optical flow tool, which lie outside the current tile.
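The automatic-deactivation variant can be sketched as a simple geometric test on both reference patches; the gradient-window margin of 2 samples and all names are illustrative assumptions:

```python
def bio_allowed(block, tile, mv0, mv1, margin=2):
    """Decide whether the bi-directional optical flow tool may be applied:
    both extended patches (the BIO gradient window adds a margin around
    the motion-compensated patch) must stay inside the current tile;
    otherwise the tool is automatically deactivated."""
    bx, by, bw, bh = block
    tx0, ty0, tx1, ty1 = tile
    for mv in (mv0, mv1):  # check both prediction directions
        x0, y0 = bx + mv[0] - margin, by + mv[1] - margin
        x1 = bx + mv[0] + bw - 1 + margin
        y1 = by + mv[1] + bh - 1 + margin
        if x0 < tx0 or y0 < ty0 or x1 > tx1 or y1 > ty1:
            return False
    return True
```

Since encoder and decoder evaluate the same test, no extra signaling is needed to switch the tool off near tile boundaries.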
Another aspect of the present application relates to the population of a motion information predictor candidate list using a motion information history list storing previously used motion information. This aspect may be used irrespective of whether tile-based coding is used. It seeks to provide a video codec of higher compression efficiency by rendering the selection of the motion information predictor candidate out of the motion information history list, by way of which an entry currently to be populated in the candidate list is to be filled, dependent on those motion information predictor candidates by which the motion information predictor candidate list has been populated so far. The aim of this dependency is to more likely select entries in the history list the motion information of which is further away from the motion information predictor candidates by which the candidate list has been populated so far. An appropriate distance measure may be defined on the basis of, for instance, the motion vectors comprised by the motion information entries in the history list and the motion information predictor candidates in the candidate list, respectively, and/or the reference picture indices comprised by same. In this manner, the population of the candidate list using a history-based candidate leads to a higher degree of "refreshment" of the resulting candidate list, so that the likelihood that the encoder finds a good candidate in the candidate list in terms of rate-distortion optimization is higher than when selecting the history-based candidate purely according to its rank in the motion information history list, i.e. according to how recently same has been entered into the motion information history list. This concept may in fact also be applied to any other motion information predictor candidate which is currently to be selected out of a set of motion information predictor candidates in order to populate the motion information predictor candidate list.
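A minimal sketch of such a distance-driven selection, using the L1 distance between motion vectors as the assumed distance measure (reference picture indices, mentioned above as a further option, are ignored here):

```python
def select_history_candidate(history, list_so_far):
    """Pick the history entry whose motion vector is farthest, in terms of
    its minimum L1 distance, from all candidates already in the list, so
    the added candidate 'refreshes' the list rather than duplicating it."""
    def min_dist(mv):
        return min(abs(mv[0] - c[0]) + abs(mv[1] - c[1])
                   for c in list_so_far)
    return max(history, key=min_dist)
```

Compared to taking the most recent history entry regardless of content, this favors entries that broaden the repertoire of signalable predictors.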
Advantageous aspects of the present invention are the subject of dependent claims. Preferred embodiments of the present application are described below with respect to the figures, among which:
Fig. 1 shows a block diagram of a block-based video encoder as an example for an encoder where inter prediction concepts according to embodiments of the present application could be implemented;
Fig. 2 shows a block diagram of a block-based video decoder, which fits to the encoder of Fig. 1, as an example for a decoder where inter prediction concepts according to embodiments of the present application could be implemented;
Fig. 3 shows a schematic diagram illustrating an example for a relationship between the prediction residual signal, the prediction signal and the reconstructed signal so as to illustrate possibilities of setting subdivisions for coding mode selection, transform selection and transform performance, respectively;
Fig. 4 shows a schematic diagram illustrating an exemplary partitioning of a video into tiles and the general aim of tile-independent coding;
Fig. 5 shows a schematic diagram illustrating the JEM (joint exploration codec mode) affine motion model;
Fig. 6 shows a schematic diagram illustrating the ATMVP (Alternative Temporal Motion Vector Prediction) procedure;
Fig. 7 shows a schematic diagram illustrating the Optical Flow trajectory used in the BIO (Bi-directional Optical Flow) tool;
Fig. 8 shows a schematic diagram illustrating the tile-awareness or tile-aware motion information derivation with respect to a predetermined inter-predicted block by the decoder according to some embodiments of the present application, and the encoder's utilization of that decoder behavior for the sake of a more effective tile-independence codec;
Fig. 8 shows a schematic diagram illustrating the decoder’s obeyance / enforcement feature according to respective embodiments;
Fig. 9 shows a schematic diagram illustrating the decoder's obeyance / enforcement feature and the usage of a non-invertible mapping for realizing the associated redirection of motion data according to some embodiments of the present application;
Fig. 10 shows a schematic diagram illustrating predictive motion information coding/decoding to illustrate the possibility of applying the decoder’s obeyance / enforcement feature onto the signaled or finally reconstructed motion information state or a motion information prediction;
Fig. 11 shows a schematic diagram illustrating the possibility of patch size variations and/or patch size enlargements due to mathematical sample combinations to compute individual samples of the inter-predicted block;
Fig. 12a and b show schematic diagrams illustrating different possibilities for realizing the motion redirection or the non-invertible mapping in connection with motion-accuracy-dependent patch size due to motion-accuracy-dependent activation of an interpolation filter, with aiming at allowing patches to get as close as possible to the current tile's boundary in case of Fig. 12a and with realizing a safety margin for the motion vectors in case of Fig. 12b;
Fig. 13 shows a schematic diagram illustrating the possibility of applying the decoder’s obeyance / enforcement functionality onto the derivation of motion information prediction, with additionally illustrating the optional construction of a motion information predictor candidate list;
Fig. 14 shows a schematic diagram illustrating the non-invertible motion vector mapping involved in the decoder's obeyance / enforcement functionality and a resulting exemplary set of motion information predictor candidates when additionally, optionally constructing a motion information predictor candidate list;
Fig. 15 shows a schematic diagram illustrating the possibility of rendering the candidate availabilities in populating the motion information predictor candidate list dependent on whether certain motion information predictor candidates conflict with the tile independence constraint;
Fig. 16 shows a schematic diagram illustrating the possibility of varying the reference block identification with respect to the derivation of a temporal motion information predictor candidate dependent on whether the inter-predicted block adjoins certain tile sides;
Fig. 17 shows a schematic diagram illustrating the possibility of rendering the population order in populating the motion information predictor candidate list dependent on whether the inter-predicted block adjoins certain tile sides, here with respect to a juxtaposition of temporal motion information predictor candidate and one or more spatial motion information predictor candidates;
Fig. 18 shows a schematic diagram illustrating the possibility of applying the decoder’s obeyance / enforcement functionality onto predicted motion vectors used for the derivation of a temporal motion information predictor candidate;
Fig. 19 shows a schematic diagram illustrating an alternative to the concept of Fig. 18, where a substitute predicted motion vector is used in case of a conflict of the first predicted motion vector with the tile independence constraint;
Fig. 20 shows a schematic diagram illustrating motion information derivation for sub-blocks of a current inter-predicted block according to an affine motion model for illustrating different concepts for the decoder assisting in attaining the tile independence constraint;
Fig. 21 shows a schematic diagram illustrating the selection of a history-based motion information candidate for populating a motion information candidate list depending on the motion information of the candidates by which the list has been populated so far, for improving the resulting candidate list; and
Fig. 22 shows a schematic diagram illustrating the functionality of a BIO (Bi-directional Optical Flow) tool built into encoder and decoder according to an embodiment, for illustrating different concepts for the decoder assisting in attaining the tile independence constraint.
The following description of the figures starts with a description of encoder and decoder of a block-based predictive codec for coding pictures of a video, in order to form an example of a coding framework into which embodiments of the inter-prediction concepts may be built. This encoder and decoder are described with respect to Figs. 1 to 3. Thereafter, embodiments of the inter-prediction concepts of the present application are presented. They may be used individually or in combination. In particular, all concepts described later may be built into the encoder and decoder of Figs. 1 and 2, respectively, although the embodiments described with respect to Fig. 4 and the following figures may also be used to form encoders and decoders not operating according to the coding framework underlying the encoder and decoder of Figs. 1 and 2.
Fig. 1 shows an apparatus for predictively coding a video 11 composed of a sequence of pictures 12 into a data stream 14 using, exemplarily, transform-based residual coding. The apparatus, or encoder, is indicated using reference sign 10. Fig. 2 shows a corresponding decoder 20, i.e. an apparatus 20 configured to predictively decode a video 11' composed of a sequence of pictures 12' from the data stream 14, also using transform-based residual decoding, wherein the apostrophe has been used to indicate that the video 11' and pictures 12' as reconstructed by decoder 20 deviate from the pictures 12 originally encoded by apparatus 10 in terms of the coding loss introduced by the quantization of the prediction residual signal. Fig. 1 and Fig. 2 exemplarily use transform-based prediction residual coding, although embodiments of the present application are not restricted to this kind of prediction residual coding. This is true for other details described with respect to Figs. 1 and 2, too, as will be outlined hereinafter.
The encoder 10 is configured to subject the prediction residual signal to spatial-to-spectral transformation and to encode the prediction residual signal, thus obtained, into the data stream 14. Likewise, the decoder 20 is configured to decode the prediction residual signal from the data stream 14 and subject the prediction residual signal thus obtained to spectral-to-spatial transformation.
Internally, the encoder 10 may comprise a prediction residual signal former 22 which generates a prediction residual signal 24 so as to measure a deviation of a prediction signal 26 from the original signal, i.e. the current picture 12. The prediction residual signal former 22 may, for instance, be a subtractor which subtracts the prediction signal from the original signal, i.e. current picture 12. The encoder 10 further comprises a transformer 28 which subjects the prediction residual signal 24 to a spatial-to-spectral transformation to obtain a spectral-domain prediction residual signal 24' which is then subject to quantization by a quantizer 32, also comprised by encoder 10. The thus quantized prediction residual signal 24'' is coded into bitstream 14. To this end, encoder 10 may optionally comprise an entropy coder 34 which entropy codes the prediction residual signal as transformed and quantized into data stream 14. The prediction signal 26 is generated by a prediction stage 36 of encoder 10 on the basis of the prediction residual signal 24'' encoded into, and decodable from, data stream 14. To this end, the prediction stage 36 may internally, as shown in Fig. 1, comprise a dequantizer 38 which dequantizes prediction residual signal 24'' so as to gain spectral-domain prediction residual signal 24''', which corresponds to signal 24' except for quantization loss, followed by an inverse transformer 40 which subjects the latter prediction residual signal 24''' to an inverse transformation, i.e. a spectral-to-spatial transformation, to obtain prediction residual signal 24'''', which corresponds to the original prediction residual signal 24 except for quantization loss. A combiner 42 of the prediction stage 36 then recombines, such as by addition, the prediction signal 26 and the prediction residual signal 24'''' so as to obtain a reconstructed signal 46, i.e. a reconstruction of the original signal 12. Reconstructed signal 46 may correspond to signal 12'.
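The residual path just described can be mimicked by a toy scalar model; the transform stage (28, 40) is omitted, scalar quantization stands in for the real quantizer 32, and all names are illustrative:

```python
def encode_decode_residual(original, prediction, qstep=4):
    """Toy scalar model of the residual path of Figs. 1 and 2: the encoder
    quantizes the prediction residual, the decoder adds the dequantized
    residual back onto the prediction (transform stage omitted)."""
    residual = [o - p for o, p in zip(original, prediction)]    # signal 24
    quantized = [round(r / qstep) for r in residual]            # signal 24''
    dequantized = [q * qstep for q in quantized]                # signal 24''''
    # combiner 42 / 56: prediction + dequantized residual -> reconstruction
    return [p + d for p, d in zip(prediction, dequantized)]
```

With a quantization step that divides all residuals exactly, the reconstruction is lossless; otherwise the quantization loss described above appears.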
A prediction module 44 of prediction stage 36 then generates the prediction signal 26 on the basis of signal 46 by using, for instance, spatial prediction, i.e. intra prediction, and/or temporal prediction, i.e. inter prediction.
Likewise, decoder 20 may be internally composed of components corresponding to, and inter-connected in a manner corresponding to, prediction stage 36. In particular, entropy decoder 50 of decoder 20 may entropy decode the quantized spectral-domain prediction residual signal 24” from the data stream, whereupon dequantizer 52, inverse transformer 54, combiner 56 and prediction module 58, interconnected and cooperating in the manner described above with respect to the modules of prediction stage 36, recover the reconstructed signal on the basis of prediction residual signal 24” so that, as shown in Fig. 2, the output of combiner 56 results in the reconstructed signal, namely picture 12’.
Although not specifically described above, it is readily clear that the encoder 10 may set some coding parameters including, for instance, prediction modes, motion parameters and the like, according to some optimization scheme such as, for instance, in a manner optimizing some rate and distortion related criterion, i.e. coding cost. For example, encoder 10 and decoder 20 and the corresponding modules 44, 58, respectively, may support different prediction modes such as intra-coding modes and inter-coding modes. The granularity at which encoder and decoder switch between these prediction mode types may correspond to a subdivision of picture 12 and 12’, respectively, into coding segments or coding blocks. In units of these coding segments, for instance, the picture may be subdivided into blocks being intra-coded and blocks being inter-coded. Intra-coded blocks are predicted on the basis of a spatial, already coded/decoded neighborhood of the respective block. Several intra-coding modes may exist and be selected for a respective intra-coded segment including directional or angular intra-coding modes according to which the respective segment is filled by extrapolating the sample values of the neighborhood along a certain direction which is specific for the respective directional intra-coding mode, into the respective intra-coded segment. 
The intra-coding modes may, for instance, also comprise one or more further modes such as a DC coding mode, according to which the prediction for the respective intra-coded block assigns a DC value to all samples within the respective intra-coded segment, and/or a planar intra coding mode according to which the prediction of the respective block is approximated or determined to be a spatial distribution of sample values described by a two-dimensional linear function over the sample positions of the respective intra-coded block with deriving tilt and offset of the plane defined by the two-dimensional linear function on the basis of the neighboring samples. Compared thereto, inter-coded blocks may be predicted, for instance, temporally. For inter-coded blocks, motion information may be signaled within the data stream: The motion information may comprise vectors indicating the spatial displacement of the portion of a previously coded picture of the video to which picture 12 belongs, at which the previously coded/decoded picture is sampled in order to obtain the prediction signal for the respective inter-coded block. More complex motion models may be used as well. This means, in addition to the residual signal coding comprised by data stream 14, such as the entropy-coded transform coefficient levels representing the quantized spectral-domain prediction residual signal 24”, data stream 14 may have encoded thereinto coding mode parameters for assigning the coding modes to the various blocks, prediction parameters for some of the blocks, such as motion parameters for inter-coded blocks, and optional further parameters such as parameters controlling and signaling the subdivision of picture 12 and 12’, respectively, into the blocks. The decoder 20 uses these parameters to subdivide the picture in the same manner as the encoder did, to assign the same prediction modes to the blocks, and to perform the same prediction to result in the same prediction signal.
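As one concrete instance of the intra-coding modes above, a DC mode can be sketched as follows; the averaging rule over the top and left neighborhood is an illustrative assumption, not the exact derivation of any particular codec.

```python
# Hedged sketch of a DC intra-coding mode: every sample of the block is
# assigned one DC value derived from the already reconstructed neighborhood
# (here: the row above and the column to the left of the block).

def dc_intra_predict(above, left, width, height):
    neighbors = list(above) + list(left)
    dc = round(sum(neighbors) / len(neighbors))   # single DC value
    return [[dc] * width for _ in range(height)]  # fill the whole block

# 4x2 block: the prediction is the mean of the neighboring samples
pred = dc_intra_predict(above=[100, 102, 98, 100], left=[99, 101],
                        width=4, height=2)
```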
Claims
1. Block-based video decoder supporting motion-compensated prediction configured to
derive motion information for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11), which locates a patch (130) in a reference picture (12b), from which the predetermined inter-predicted block (104) is to be predicted, from a data stream (14) into which the video (11) is coded, depending on a position of boundaries (102) between tiles (100), into which the video (11) is spatially partitioned, and
predict the predetermined inter-predicted block (104) using the motion information from the patch (130) of the reference picture (12b).
2. Block-based video decoder of claim 1, configured to
check whether a signaled state of the motion information in the data stream leads to the patch (130b) not being within boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised, and, if so, redirect (142) the motion information from the signaled state to a redirected state leading to the patch (130b’) being within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised.
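The redirection (142) of claim 2 can be sketched for purely translational motion as a clamp of the motion vector: if the signaled vector would place the patch outside the tile containing the block, it is moved to the nearest vector that keeps the patch inside. The coordinate convention and all names are illustrative assumptions.

```python
# Sketch of redirecting a signaled motion vector so that the patch (130)
# stays within the boundaries (102) of the tile (100a) containing the
# inter-predicted block (104). Purely translational motion is assumed.

def redirect_mv(mv, block_xy, block_wh, tile_rect):
    """Clamp (mvx, mvy) so the displaced block footprint stays in the tile."""
    (mvx, mvy), (bx, by), (bw, bh) = mv, block_xy, block_wh
    tx0, ty0, tx1, ty1 = tile_rect  # tile boundaries, right/bottom exclusive
    mvx = max(tx0 - bx, min(mvx, tx1 - bw - bx))
    mvy = max(ty0 - by, min(mvy, ty1 - bh - by))
    return (mvx, mvy)

# 8x8 block at (48, 48) in a 64x64 tile: the signaled state (20, -60) would
# leave the tile and is redirected to the nearest admissible state
mv = redirect_mv((20, -60), block_xy=(48, 48), block_wh=(8, 8),
                 tile_rect=(0, 0, 64, 64))
```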
3. Block-based video decoder of claim 1 or 2, configured to
predict the motion information for the predetermined inter-predicted block (104) to obtain a motion information prediction (150), and
decode a motion information prediction residual (152) from the data stream (14),
determine the motion information for the predetermined inter-predicted block (104) on the basis of the motion information prediction (150) and motion information prediction residual (152) with obeying a constraint on the motion information so that the patch does not exceed boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised.
4. Block-based video decoder of claim 3, configured to
obey the constraint on the motion information so that the patch (130) does not exceed the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, by, in determining the motion information for the predetermined inter-predicted block on the basis of the motion information prediction (150) and motion information prediction residual (152), using a non-invertible mapping from the motion information prediction (150) and the motion information prediction residual (152) to the motion information which maps all possible combinations for the motion information prediction (150) and the motion information prediction residual (152) exclusively onto a respective possible motion information so that, if the patch (130) was located using the respective possible motion information, the patch (130) would not exceed the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and which maps more than one possible combination for the motion information prediction (150) and the motion information prediction residual (152), equaling in the motion information prediction and differing in the motion information prediction residual, onto one possible motion information.
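The non-invertible mapping described above amounts, in the simplest translational one-dimensional case, to clamping the sum of prediction and residual to the admissible vector range: every input pair maps to an admissible vector, and several pairs with the same prediction but different residuals collapse onto the same vector. The value ranges below are illustrative assumptions.

```python
# 1-D sketch: motion information = prediction (150) + residual (152),
# clamped to [mv_min, mv_max], the assumed range of vectors whose patch
# stays inside the tile. Clamping is surjective onto the admissible range
# but not injective, i.e. non-invertible.

def map_mv(prediction, residual, mv_min, mv_max):
    return max(mv_min, min(prediction + residual, mv_max))

# two different residuals for the same prediction collapse onto mv_max
a = map_mv(prediction=3, residual=10, mv_min=-8, mv_max=8)
b = map_mv(prediction=3, residual=20, mv_min=-8, mv_max=8)
```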
5. Block-based video decoder of claim 3 or 4, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block by means of a mathematical combination of samples (164) within a respective portion of the patch (130) and/or by deriving two motion vectors (306a, 306b) from the motion information which are defined for two different corners of the predetermined inter-predicted block (104) and computing for each of the subblocks into which the predetermined inter-predicted block (104) is partitioned, a subblock’s motion vector (304) locating a respective subblock patch within the reference picture (12b), wherein the subblock patches of all subblocks form the patch of the predetermined inter-predicted block, and
so that the constraint on the motion information is selected so that all samples of the portion for all samples (162) of the predetermined inter-predicted block lie within the boundaries of the tile (100a) by which the predetermined inter-predicted block is comprised.
6. Block-based video decoder of claim 3, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block
by means of a weighted sum of samples (164) within a respective portion of the patch, so that the patch is widened compared to the predetermined inter-predicted block by an extension edge portion (172), if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample (164) of the patch, so that the patch is as wide as the predetermined inter-predicted block (104), if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information is selected so that all samples (162) of the patch lie within the boundaries of the tile (100a) by which the predetermined inter-predicted block is comprised and so that, if the motion information falls within the second subset of the value domain of the motion information, the constraint allows the patch to get closer to the boundaries of the tile by which the predetermined inter-predicted block is comprised than a width (170) of the extension edge portion (172).
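The two regimes above can be sketched as an admissibility check: motion falling in the first subset widens the patch by the extension edge portion (172) and therefore needs a margin of its width (170) to the tile boundary, while motion in the second subset leaves the patch block-sized and may bring it up to the boundary. Quarter-pel units and the margin value are illustrative assumptions.

```python
# 1-D admissibility check for a motion vector in quarter-pel units.
# EDGE models the width (170) of the extension edge portion (172) by which
# interpolation widens the patch; its value is an assumption.

EDGE = 3  # assumed extension edge width in full-pel samples

def patch_allowed(mv_qpel, left, right, tile_x0, tile_x1):
    """left/right: first/last sample column of the block; tile_x1 inclusive."""
    full, sub = mv_qpel >> 2, mv_qpel & 3
    margin = EDGE if sub != 0 else 0  # widening only for sub-pel motion
    return tile_x0 <= left + full - margin and right + full + margin <= tile_x1

# a block hugging the left tile boundary: full-pel motion is admissible,
# sub-pel motion is not, because the widened patch would cross the boundary
ok_full = patch_allowed(0, left=0, right=7, tile_x0=0, tile_x1=63)  # True
ok_sub = patch_allowed(1, left=0, right=7, tile_x0=0, tile_x1=63)   # False
```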
7. Block-based video decoder of claim 3, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block
by means of a weighted sum of samples within a respective portion of the patch, so that the patch is widened compared to the predetermined inter-predicted block by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch, so that the patch is as wide as the predetermined inter-predicted block, if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information is selected so that
all samples (164) of the patch (130) lie within the boundaries of the tile (100a) by which the predetermined inter-predicted block is comprised, if the motion information falls within a first subset of a value domain of the motion information,
all samples (164) of the patch (130) are distanced from the boundaries of the tile by which the predetermined inter-predicted block is comprised, by at least a distance (171) accommodating the extension edge portion (172), if the motion information falls within a second subset of the value domain of the motion information.
8. Block-based video decoder of claim 1 or 2, wherein the motion information comprises a motion vector (154) indicating a translational displacement between the patch and the predetermined inter-predicted block and the block-based video decoder is configured to
predict the motion vector for the predetermined inter-predicted block to obtain a motion vector prediction (150), and
decode a motion vector prediction residual (152) from the data stream,
determine the motion vector (154) for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual with obeying a constraint on the motion vector so that the patch does not exceed boundaries (102) of a tile (100a) by which the predetermined inter-predicted block is comprised.
9. Block-based video decoder of claim 8, configured to
obey the constraint on the motion vector (154) so that the patch does not exceed boundaries of a tile by which the predetermined inter-predicted block is comprised, by, in determining the motion vector for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual, using a non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector which maps all possible combinations for the motion vector prediction and the motion vector prediction residual exclusively onto a respective possible motion vector so that, if the patch (130) was located using the respective possible motion vector, the patch would not exceed the boundaries of the tile (100a) by which the predetermined inter-predicted block is comprised, and which maps more than one possible combination for the motion vector prediction and the motion vector prediction residual, equaling in the motion vector prediction and differing in the motion vector prediction residual, onto one possible motion vector.
10. Block-based video decoder of claim 1 or 2, wherein the motion information comprises a motion vector indicating a translational displacement between the patch and the predetermined inter-predicted block and the block-based video decoder is configured to
predict the motion vector for the predetermined inter-predicted block to obtain a motion vector prediction, and
decode a motion vector prediction residual from the data stream,
determine the motion vector for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual,
predict the predetermined inter-predicted block using the motion vector by predicting samples (162) of the predetermined inter-predicted block
by means of a convolution of a filter kernel with the patch so that the patch is widened compared to the predetermined inter-predicted block owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined inter-predicted block to one corresponding sample (164) of the patch so that the patch is as wide as the predetermined inter-predicted block, if the motion vector has a sub-pel part which is zero, and
perform the determination of the motion vector for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual so that all samples of the patch lie within the boundaries of the tile by which the predetermined inter-predicted block is comprised.
11. Block-based video decoder of claim 10, configured to
perform the determination of the motion vector for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual by
computing a preliminary version of the motion vector for the predetermined inter-predicted block on the basis of the motion vector prediction and motion vector prediction residual,
checking whether a distance (171) of a footprint (160) of the predetermined inter-predicted block, displaced relative to the predetermined inter-predicted block according to the preliminary version of the motion vector, from the boundaries of the tile by which the predetermined inter-predicted block is comprised, is below a widening reach (170) at which the patch is widened compared to the predetermined inter-predicted block owing to a width of the filter kernel, and
setting to zero the sub-pel part of the preliminary version of the motion vector to obtain the motion vector if the distance of the footprint is below the widening reach.
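The check and zeroing steps above can be sketched in one dimension: if the full-pel footprint of the block is closer to the tile boundary than the widening reach of the interpolation filter, the sub-pel part of the preliminary vector is set to zero (here simply by keeping the floored full-pel part; the nearest-rounding and clipping refinements of the subsequent claims are not modeled). Quarter-pel units and the reach value are illustrative assumptions.

```python
# 1-D sketch: zero the sub-pel part of the preliminary motion vector when
# the displaced footprint (160) is closer to the tile boundary than the
# widening reach (170) of the interpolation filter. REACH is assumed.

REACH = 3  # assumed widening reach in full-pel samples

def constrain_mv(prelim_qpel, left, right, tile_x0, tile_x1):
    full = prelim_qpel >> 2  # floored full-pel part
    distance = min(left + full - tile_x0, tile_x1 - (right + full))
    if distance < REACH:     # footprint too close to the boundary
        return full << 2     # sub-pel part set to zero
    return prelim_qpel       # far enough: vector left unamended

clipped = constrain_mv(-30, left=8, right=15, tile_x0=0, tile_x1=63)
kept = constrain_mv(10, left=32, right=39, tile_x0=0, tile_x1=63)
```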
12. Block-based video decoder of claim 11, configured to
leave the sub-pel part of the preliminary version of the motion vector unamended to obtain the motion vector if the distance (171) of the footprint (160) is not below the widening reach.
13. Block-based video decoder of claim 11 or 12, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, adapt a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the adaptation of the full-pel part results in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to the preliminary version of the motion vector before zero-setting the sub-pel part and the adaptation of the full-pel part.
14. Block-based video decoder of claim 11 or 12, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, clip a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the clipping of the full-pel part results in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to and smaller than the preliminary version of the motion vector before zero-setting the sub-pel part and the clipping of the full-pel part.
15. Block-based video decoder of any of claims 1 to 14, configured to
check whether a predicted state (192c’) of the motion information would lead, if used as is, to the patch not being within boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised, and, if not, use the predicted state as a motion information prediction candidate (192) for the predetermined inter-predicted block, and if so, redirect (142) the motion information from the predicted state (192c’) to a redirected state (192c) leading to the patch being within the boundaries of the tile by which the predetermined inter-predicted block is comprised to obtain the motion information prediction candidate (192),
decode a motion information prediction residual from the data stream,
determine the motion information for the predetermined inter-predicted block on the basis of the motion information prediction candidate and the motion information prediction residual.
16. Block-based video decoder of any of claims 1 to 14, configured to
derive, by prediction, a motion information prediction candidate (192c) for the predetermined inter-predicted block (104) with obeying a constraint on the motion information prediction candidate (192c) so that, if the motion information was set to be equal to the motion information prediction candidate (192c), the patch would not exceed boundaries of a tile by which the predetermined inter-predicted block is comprised, and
decode a motion information prediction residual (152) from the data stream,
determine the motion information for the predetermined inter-predicted block on the basis of the motion information prediction candidate and the motion information prediction residual.
17. Block-based video decoder of claim 16, configured to
obey the constraint on the motion information prediction candidate (192c) so that, if the motion information was set to be equal to the motion information prediction candidate, the patch would not exceed boundaries of the tile by which the predetermined inter-predicted block is comprised by, in deriving the motion information prediction candidate (192c), using a non-invertible mapping to map a preliminary motion information prediction candidate (192c’) to the motion information prediction candidate (192c), which maps each preliminary motion information prediction candidate exclusively onto a respective mapped motion information candidate so that, if the patch was located using the respective mapped motion information candidate, the patch would not exceed
boundaries of the tile by which the predetermined inter-predicted block is comprised, and which maps different settings for the preliminary motion information prediction candidate onto one possible setting for the respective mapped motion information candidate.
18. Block-based video decoder of claim 16 or 17, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) including the motion information prediction candidate (192).
19. Block-based video decoder of claim 18, configured to
derive, by prediction, each motion information prediction candidate (192) of the motion information prediction candidate list (190) with obeying, for the respective motion information prediction candidate, a constraint on the motion information prediction candidate so that, if the motion information was set to be equal to the respective motion information prediction candidate, the patch would not exceed boundaries of a tile by which the predetermined inter-predicted block is comprised.
20. Block-based video decoder of claim 18 or 19, configured to
derive for the predetermined inter-predicted block a pointer (193) into the motion information prediction candidate list (190) in order to obtain the motion information prediction candidate (192) on the basis of which the motion information is determined using which the predetermined inter-predicted block is predicted.
21. Block-based video decoder of any of claims 16 to 20, configured to
predict the predetermined inter-predicted block using the motion information by predicting each of samples of the predetermined inter-predicted block by means of a mathematical combination of samples within a respective portion of the patch and/or by deriving two motion vectors (306a, 306b) from the motion information which are defined for two different corners of the predetermined inter-predicted block (104) and computing for each of the subblocks into which the predetermined inter-predicted block (104) is partitioned, a subblock’s motion vector (304) locating a respective subblock patch within the reference picture (12b), wherein the subblock patches of all subblocks form the patch of the predetermined inter-predicted block, and
so that the constraint on the motion information prediction candidate (192c) is selected so that, if the motion information was set to be equal to the motion information prediction candidate, all samples of the portion of the patch for all samples of the predetermined inter-predicted block would lie within the boundaries of the tile by which the predetermined inter-predicted block is comprised.
22. Block-based video decoder of any of claims 16 to 20, configured
to predict the predetermined inter-predicted block using the motion information by predicting each of samples of the predetermined inter-predicted block
by means of a weighted sum of samples within a respective portion of the patch, so that the patch is widened compared to the predetermined inter-predicted block by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch, so that the patch is as wide as the predetermined inter-predicted block, if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information prediction candidate is selected so that, if the motion information was set to be equal to the motion information prediction candidate, all samples of the patch would lie within the boundaries of the tile by which the predetermined inter-predicted block is comprised and so that, if the motion information falls within the second subset of the value domain of the motion information, the constraint allows the patch to get closer to the boundaries of the tile by which the predetermined inter-predicted block is comprised than a width of the extension edge portion.
23. Block-based video decoder of any of claims 16 to 20, configured
to predict the predetermined inter-predicted block using the motion information by predicting each of samples of the predetermined inter-predicted block
by means of a weighted sum of samples within a respective portion of the patch, so that the patch is widened compared to the predetermined inter-predicted block by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch, so that the patch is as wide as the predetermined inter-predicted block, if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information prediction candidate is selected so that, if the motion information was set to be equal to the motion information prediction candidate,
all samples of the patch would lie within the boundaries of the tile by which the predetermined inter-predicted block is comprised, if the motion information falls within a first subset of a value domain of the motion information,
all samples of the patch would be distanced from the boundaries of the tile by which the predetermined inter-predicted block is comprised, by at least a distance accommodating the extension edge portion, if the motion information falls within a second subset of the value domain of the motion information.
24. Block-based video decoder of any of claims 16 to 20, wherein the motion information comprises a motion vector indicating a translational displacement between the patch and the predetermined inter-predicted block and the block-based video decoder is configured to
predict the predetermined inter-predicted block using the motion vector by predicting samples of the predetermined inter-predicted block
by means of a convolution of a filter kernel with the patch so that the patch is widened compared to the predetermined inter-predicted block owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined inter-predicted block to one corresponding sample of the patch so that the patch is as wide as the predetermined inter-predicted block, if the motion vector has a sub-pel part which is zero, and
perform the derivation of the motion vector prediction candidate for the predetermined inter-predicted block so that, if the motion vector was set to be equal to the motion vector prediction candidate, all samples of the patch would lie within the boundaries of the tile by which the predetermined inter-predicted block is comprised.
25. Block-based video decoder of any of claims 16 to 20, configured to
perform the derivation of the motion vector prediction candidate for the predetermined inter-predicted block by
deriving, by prediction, a preliminary version of the motion vector prediction candidate for the predetermined inter-predicted block,
checking whether a distance of a footprint of the predetermined inter-predicted block, displaced relative to the predetermined inter-predicted block according to the preliminary version of the motion vector prediction candidate, from the boundaries of the tile by which the predetermined inter-predicted block is comprised, is below a widening reach at which the patch is widened compared to the predetermined inter-predicted block owing to a width of the filter kernel, and
setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate if the distance of the footprint is below the widening reach.
26. Block-based video decoder of claim 25, configured to
leave the sub-pel part of the preliminary version of the motion vector prediction candidate unamended to obtain the motion vector prediction candidate if the distance (171) of the footprint (160) is not below the widening reach.
27. Block-based video decoder of claim 25 or 26, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate, adapt a full-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate in a manner so that the zero-setting of the sub-pel part and the adaptation of the full-pel part results in a rounding of the preliminary version of the motion vector prediction candidate to a nearest full-pel motion vector prediction candidate, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to the preliminary version of the motion vector prediction candidate before zero-setting the sub-pel part and the adaptation of the full-pel part.
28. Block-based video decoder of claim 25 or 26, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate, clip a full-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate in a manner so that the zero-setting of the sub-pel part and the clipping of the full-pel part results in a rounding of the preliminary version of the motion vector prediction candidate to a nearest full-pel motion vector prediction candidate, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to and smaller than the preliminary version of the motion vector prediction candidate before zero-setting the sub-pel part and the clipping of the full-pel part.
29. Block-based video decoder of any of claims 1 to 28, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by selectively populating the motion information prediction candidate list (190) with at least one motion information prediction candidate out of a plurality (200) of motion information prediction candidates by
checking whether, if the motion information was set to be equal to the at least one motion information prediction candidate, the patch (130) would exceed the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block is comprised,
if yes, do not populate the motion information prediction candidate list with the at least one motion information prediction candidate, and
if not, populate the motion information prediction candidate list with the at least one motion information prediction candidate.
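The selective population of claim 29 can be sketched as a filter over the plurality (200) of candidates, here for one-dimensional translational motion; the admissibility test and all names are illustrative assumptions.

```python
# Sketch of claim 29: only candidates whose patch would stay inside the
# tile containing the block enter the candidate list (190). 1-D motion.

def build_candidate_list(candidates, left, right, tile_x0, tile_x1):
    """left/right: first/last sample column of the block; tile_x1 inclusive."""
    return [mv for mv in candidates
            if tile_x0 <= left + mv and right + mv <= tile_x1]

# candidates -16 and 40 would push the 8-sample patch out of the 32-wide
# tile and are therefore not entered into the list
cands = build_candidate_list([-16, -4, 0, 6, 40], left=8, right=15,
                             tile_x0=0, tile_x1=31)
```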
30. Block-based video decoder of claim 29, configured to
if the patch (130) would not exceed the boundaries of the tile (100a) by which the predetermined inter-predicted block is comprised if the motion information was set to be equal to the at least one motion information prediction candidate, and if the at least one motion information prediction candidate is part of a bi-predictive motion information the other part of which relates to another reference picture and, if the motion information was set to be equal to the other part, the patch would exceed the boundaries of the tile by which the predetermined inter-predicted block is comprised, enter the at least one motion information prediction candidate into a preliminary motion information list of motion information prediction candidates and use same to populate the motion information prediction candidate list with a pair of entries in the preliminary motion information list of motion information prediction candidates or a combination of one entry in the preliminary motion information list and a default motion information in case of lack of availability of further motion information prediction candidates.
31. Block-based video decoder of claim 29 or 30, configured to
perform a selection out of the motion information prediction candidate list (190) to obtain a motion information prediction (150), and
decode a motion information prediction residual (152) from the data stream,
determine the motion information for the predetermined inter-predicted block on the basis of the motion information prediction (150) and motion information prediction residual (152).
32. Block-based video decoder of claim 31, configured to
derive for the predetermined inter-predicted block a pointer (193) into the motion information prediction candidate list (190) and
use the pointer (193) for the selection.
33. Block-based video decoder of any of claims 1 to 32, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by selectively populating the motion information prediction candidate list (190) with at least one motion information prediction candidate (192) out of a plurality (200) of motion information prediction candidates by
checking whether the predetermined inter-predicted block (104) in the current picture adjoins a predetermined side of a tile (100a) by which the predetermined inter-predicted block is comprised,
if yes, identify a first block (212) out of blocks of the reference picture (12b) or a further reference picture (12b’) and populate the motion information prediction candidate list (190) with reference motion information using which the first block (212) has been predicted, and
if not, identify a second block (206) out of blocks of the reference picture (12b) or the further reference picture (12b’) and populate the motion information prediction candidate list (190) with reference motion information using which the second block (206) has been predicted.
34. Block-based video decoder of claim 33, configured to
identify the first block (212) in the reference picture (12b) or further reference picture (12b’) as one of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a first predetermined location (210), collocated to a first alignment location (208) in the current picture (12a), having a first predetermined locational relationship to the predetermined inter-predicted block (104),
wherein the first alignment location (208) lies inside the predetermined inter-predicted block.
35. Block-based video decoder of claim 34, wherein the first alignment location (208) is a sample position in the current picture (12a) centered in the predetermined inter-predicted block (104).
36. Block-based video decoder of any of claims 33 to 35, configured to
identify the second block (206) as one of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a second predetermined location (204’), collocated to a second alignment location (204) in the current picture, having a second predetermined locational relationship to the predetermined inter-predicted block (104),
wherein the second alignment location (204) lies outside the predetermined inter-predicted block (104) and is offset relative to the predetermined inter-predicted block (104) along a direction perpendicular to the predetermined side.
37. Block-based video decoder of claim 36, wherein the second alignment location (204) is a sample position in the current picture lying outside the predetermined inter-predicted block (104) and diagonally neighboring a corner sample of the predetermined inter-predicted block which adjoins the predetermined side.
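Claims 33 to 37 switch the sampling position whose collocated block supplies the temporal candidate: a location inside the block (here, its center) when the block adjoins the predetermined tile side, otherwise a location diagonally outside its bottom-right corner. A minimal sketch, assuming integer sample coordinates and hypothetical names:

```python
def tmvp_alignment_location(block_x, block_y, block_w, block_h, adjoins_tile_side):
    """Pick the alignment location whose collocated sample in the reference
    picture identifies the block supplying the temporal candidate:
    - adjoining the predetermined tile side: a location inside the block
      (its center), so the collocated block cannot lie beyond the tile,
    - otherwise: the sample diagonally outside the bottom-right corner."""
    if adjoins_tile_side:
        return block_x + block_w // 2, block_y + block_h // 2  # inside the block
    return block_x + block_w, block_y + block_h                # diagonal neighbor
```

For a 16×16 block at (32, 32), the center (40, 40) is used at the tile side, and (48, 48) otherwise.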
38. Block-based video decoder of any of claims 33 to 37, configured to
perform a selection out of the motion information prediction candidate list (190) to obtain a motion information prediction (150), and
decode a motion information prediction residual (152) from the data stream,
determine the motion information for the predetermined inter-predicted block (104) on the basis of the motion information prediction and motion information prediction residual.
39. Block-based video decoder of claim 38, configured to
derive for the predetermined inter-predicted block a pointer (193) into the motion information prediction candidate list (190) and
use the pointer (193) for the selection.
40. Block-based video decoder of any of claims 1 to 39, configured to
identifying an aligned block (206; 212) in the reference picture (12b) or further reference picture (12b’), spatially aligned to the predetermined inter-predicted block,
identifying spatially neighboring blocks (220) in the current picture, spatially neighboring the predetermined inter-predicted block (104),
checking whether the predetermined inter-predicted block (104) in the current picture (12a) adjoins a predetermined side of a tile (100a) by which the predetermined inter-predicted block is comprised,
if so, establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by populating the motion information prediction candidate list by one or more spatial motion information prediction candidates (200b) derived from first reference motion information using which the spatially neighboring blocks in the current picture have been predicted and a temporal motion information prediction candidate (200a) derived from second reference motion information using which the aligned block (206; 212) has been predicted with positioning the one or more spatial motion information prediction candidates (200b) in the motion information prediction candidate list (190) at ranks preceding the temporal motion information prediction candidate,
if not, establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by populating the motion information prediction candidate list by one or more spatial motion information prediction candidates (200b) derived from first reference motion information using which the spatially neighboring blocks (220) in the current picture have been predicted and a temporal motion information prediction candidate (200a) derived from second reference motion information using which the aligned block (206; 212) has been predicted with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list at ranks following the temporal motion information prediction candidate,
derive for the predetermined inter-predicted block a pointer (193) pointing, in rank order (195), into the motion information prediction candidate list (190),
perform a selection out of the motion information prediction candidate list (190) using the pointer to obtain the motion information prediction (150).
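Claim 40's rank ordering — spatial candidates before the temporal one when the block adjoins the predetermined tile side, after it otherwise — can be sketched as below. Names are hypothetical and candidates are modeled as opaque tuples:

```python
def build_candidate_list(spatial, temporal, adjoins_tile_side):
    """Order candidates per claim 40: spatial candidates precede the
    temporal candidate when the block adjoins the predetermined tile
    side, and follow it otherwise."""
    return spatial + [temporal] if adjoins_tile_side else [temporal] + spatial

def select(candidates, pointer):
    """The pointer derived for the block indexes the list in rank order."""
    return candidates[pointer]
```

The reordering puts the candidate expected to be more reliable at the tile boundary at the lowest (cheapest-to-signal) ranks.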
41. Block-based video decoder of claim 40, wherein
the one or more spatial motion information prediction candidates (200b) derived from first reference motion information using which the spatially neighboring blocks (220) in the current picture have been predicted, comprise at least one equaling the first reference motion information using which one of the spatially neighboring blocks (220) in the current picture has been predicted and at least one equaling a combination of the first reference motion information using which different ones of the spatially neighboring blocks (220) in the current picture have been predicted.
42. Block-based video decoder of claim 40 or 41, configured to
check whether,
if the predetermined inter-predicted block in the current picture adjoins the predetermined side of the current picture, identify the aligned block out of blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a first predetermined location (210), collocated to a first alignment location (208) in the current picture, having a first predetermined locational relationship to the predetermined inter-predicted block (104), wherein the first alignment location (208) lies inside the predetermined inter-predicted block, and
if the predetermined inter-predicted block in the current picture does not adjoin the predetermined side of the current picture, identify the aligned block out of blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a second predetermined location (204’) in the reference picture, collocated to a second alignment location (204) in the current picture, having a second predetermined locational relationship to the predetermined inter-predicted block, wherein the second alignment location lies outside the predetermined inter-predicted block and is offset relative to the predetermined inter-predicted block along a direction perpendicular to the predetermined side.
43. Block-based video decoder of any of claims 40 to 42, configured to, if the predetermined inter-predicted block in the current picture adjoins the predetermined side of the current picture,
identify the aligned block in the reference picture out of blocks into which the reference picture is partitioned by
identifying a first candidate block of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a second predetermined location (204’), collocated to a second alignment location (204) in the current picture, having a second predetermined locational relationship to the predetermined inter-predicted block, wherein the second alignment location lies outside the predetermined inter-predicted block and is offset relative to the predetermined inter-predicted block along a direction perpendicular to the predetermined side, and
checking whether the first candidate block is inter-prediction coded,
if yes, appoint the first candidate block as the aligned block, and
if not, identify the aligned block out of the blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a first predetermined location (210) in the reference picture, collocated to a first alignment location (208) in the current picture, having a first predetermined locational relationship to the predetermined inter-predicted block, wherein the first alignment location lies inside the predetermined inter-predicted block.
44. Block-based video decoder of claim 42 or 43,
wherein the first alignment location is a sample position in the current picture centered in the predetermined inter-predicted block.
45. Block-based video decoder of any of claims 40 to 44,
wherein the second alignment location is a sample position in the current picture lying outside the predetermined inter-predicted block and diagonally neighboring a corner sample of the predetermined inter-predicted block which adjoins the predetermined side.
46. Block-based video decoder of any of claims 40 to 45, configured to
decode a motion information prediction residual (152) from the data stream,
determine the motion information (154) for the predetermined inter-predicted block on the basis of the motion information prediction (150) and motion information prediction residual (152).
47. Block-based video decoder of any of claims 1 to 46, configured to
derive, by prediction, a temporal motion information prediction candidate for the predetermined inter-predicted block by
predict a motion vector (240) for the predetermined inter-predicted block,
clip the predicted motion vector (240) so as to, starting from the predetermined inter-predicted block, stay within boundaries of a tile (100a) by which the predetermined inter-predicted block is comprised, to obtain a clipped motion vector (248), and
derive the temporal motion information prediction candidate for the predetermined inter-predicted block from reference motion information using which a block (248) of the reference picture (12b) or further reference picture (12b’) has been predicted, into which the clipped motion vector (248) points.
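The clipping step of claim 47 can be sketched as a per-component clamp that keeps the displaced block footprint inside the tile. This is illustrative only; integer-sample motion vectors and (x0, y0, x1, y1) tile bounds are assumptions:

```python
def clip_mv_to_tile(mv, block, tile):
    """Clip a predicted motion vector so that, starting from the block
    position, the displaced footprint stays within the tile boundaries."""
    bx, by, bw, bh = block                 # block position and size
    x0, y0, x1, y1 = tile                  # tile bounds, right/bottom exclusive
    mvx = max(x0 - bx, min(mv[0], x1 - bw - bx))
    mvy = max(y0 - by, min(mv[1], y1 - bh - by))
    return mvx, mvy
```

A vector already inside the tile passes through unchanged; one reaching beyond is pulled back to the nearest admissible displacement.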
48. Block-based video decoder of claim 47, configured to
decode a motion information prediction residual (152) from the data stream,
determine the motion information (154) for the predetermined inter-predicted block on the basis of the temporal motion information prediction candidate (150) and the motion information prediction residual (152).
49. Block-based video decoder of claim 47 or 48, configured to
in determining the motion information for the predetermined inter-predicted block on the basis of the temporal motion information prediction candidate and the motion information prediction residual, derive from the data stream for the predetermined inter-predicted block a pointer (193) into a motion information prediction candidate list containing the temporal motion information prediction candidate and a further motion information prediction candidate comprising the predicted motion vector, and use a motion information prediction candidate pointed to by the pointer along with the motion information prediction residual for the determination.
50. Block-based video decoder of any of claims 1 to 49, configured to
derive, by prediction, a temporal motion information prediction candidate for the predetermined inter-predicted block by
derive first and second predicted motion vectors (240, 260) for the predetermined inter-predicted block,
check whether the first predicted motion vector, starting from the predetermined inter-predicted block, ends within boundaries of a tile by which the predetermined inter-predicted block is comprised, and
if yes, derive the temporal motion information prediction candidate for the predetermined inter-predicted block from reference motion information using which a block (242) of the reference picture (12b) or a further reference picture (12b’) has been predicted, to which the first predicted motion vector points, and
if not, derive the temporal motion information prediction candidate for the predetermined inter-predicted block from reference motion information using which a further block (262) of the reference picture (12b) or further reference picture (12b’) has been predicted, to which the second predicted motion vector points.
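The fallback of claim 50 — take the reference motion at the position addressed by the first predicted vector if it ends within the tile, otherwise at the position addressed by the second — might look like this sketch; `ref_motion_at` is a hypothetical lookup into the reference picture's stored motion field:

```python
def temporal_candidate_with_fallback(mv1, mv2, block, tile, ref_motion_at):
    """Use the block pointed to by the first predicted vector if its
    displaced footprint ends within the tile; otherwise fall back to
    the second predicted vector."""
    bx, by, bw, bh = block
    x0, y0, x1, y1 = tile

    def ends_in_tile(mv):
        px, py = bx + mv[0], by + mv[1]
        return x0 <= px and y0 <= py and px + bw <= x1 and py + bh <= y1

    target = mv1 if ends_in_tile(mv1) else mv2
    return ref_motion_at(bx + target[0], by + target[1])
```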
51. Block-based video decoder of claim 50, configured to
decode a motion information prediction residual (152) from the data stream,
determine the motion information for the predetermined inter-predicted block on the basis of the temporal motion information prediction candidate and the motion information prediction residual.
52. Block-based video decoder of claim 50 or 51, configured to
in determining the motion information for the predetermined inter-predicted block on the basis of the temporal motion information prediction candidate and the motion information prediction residual, derive from the data stream for the predetermined inter-predicted block a pointer (193) into a motion information prediction candidate list containing a first temporal motion information prediction candidate comprising the first predicted motion vector, a second motion information prediction candidate comprising the second predicted motion vector and the temporal motion information prediction candidate, and use a motion information prediction candidate pointed to by the pointer along with the motion information prediction residual for the determination.
53. Block-based video decoder supporting motion-compensated prediction configured to
decode motion information (306a, 306b) for a predetermined inter-predicted block of a current picture of a video from a data stream into which the video is coded,
derive from the motion information a motion vector (304) for each sub-block of sub-blocks (300) into which the predetermined inter-predicted block is partitioned, the motion vector indicating a translational displacement between the respective sub-block and a patch (302) in a reference picture (12b), from which the respective sub-block is to be predicted,
predict the predetermined inter-predicted block by predicting each sub-block using the motion vector for the respective sub-block,
wherein the block-based video decoder is configured to perform the derivation and/or the prediction depending on a position of boundaries between tiles, into which the video is spatially partitioned.
54. Block-based video decoder of claim 53, configured to
in predicting the predetermined inter-predicted block by predicting each sub-block using the motion vector for the respective sub-block, predict samples of a predetermined sub-block
by means of a convolution of a filter kernel with the patch so that the patch is widened compared to the predetermined sub-block owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined sub-block to one corresponding sample of the patch so that the patch is as wide as the predetermined sub-block, if the motion vector has a sub-pel part which is zero, and
perform the derivation with respect to the predetermined sub-block by
computing a preliminary version of the motion vector for the predetermined sub-block on the basis of the motion information,
checking whether a distance of a footprint of the predetermined sub-block, displaced relative to the predetermined inter-predicted block according to the preliminary version of the motion vector, from the boundaries of a tile by which the predetermined inter-predicted block is comprised, is below a widening reach at which the patch is widened compared to the predetermined sub-block owing to a width of the filter kernel, and
setting to zero the sub-pel part of the preliminary version of the motion vector to obtain the motion vector if the distance of the footprint is below the widening reach.
55. Block-based video decoder of claim 54, configured to
leaving the sub-pel part of the preliminary version of the motion vector unamended to obtain the motion vector if the distance of the footprint is not below the widening reach.
56. Block-based video decoder of claim 54, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, adapt a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the adaptation of the full-pel part result in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to the preliminary version of the motion vector before the zero-setting of the sub-pel part and the adaptation of the full-pel part.
57. Block-based video decoder of claim 54, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, clip a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the clipping of the full-pel part result in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch lying within the boundaries of the tile by which the predetermined inter-predicted block is comprised and being nearest to and smaller than the preliminary version of the motion vector before the zero-setting of the sub-pel part and the clipping of the full-pel part.
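Claims 56 and 57 differ only in how the full-pel part is adapted once the sub-pel part is zeroed: rounding to the nearest full-pel vector versus clipping to the nearest not-larger one. A per-component sketch, assuming quarter-pel motion-vector precision (the claims fix no precision):

```python
import math

PEL = 4  # assumed quarter-pel precision: 4 motion-vector units per sample

def round_to_full_pel(mv_component):
    """Claim 56: zero the sub-pel part and adapt the full-pel part so the
    result is the nearest full-pel value (half rounds away from zero here)."""
    q, r = divmod(abs(mv_component), PEL)
    full = q + (1 if r * 2 >= PEL else 0)
    return (full if mv_component >= 0 else -full) * PEL

def clip_to_full_pel(mv_component):
    """Claim 57: zero the sub-pel part and clip the full-pel part, i.e.
    take the nearest full-pel value not larger than the preliminary one."""
    return math.floor(mv_component / PEL) * PEL
```

Both variants guarantee a zero sub-pel part, so no interpolation widening occurs and the patch footprint does not grow toward the tile boundary.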
58. Block-based video decoder of any of claims 53 to 57, configured to
in predicting the predetermined inter-predicted block by predicting each sub-block using the motion vector for the respective sub-block,
check whether the patch from which a predetermined sub-block is predicted using the motion vector for the predetermined sub-block, is within boundaries of a tile by which the predetermined inter-predicted block is comprised,
if not, spatially predict the predetermined sub-block, and
if yes, predict the predetermined sub-block from the patch.
59. Block-based video decoder of claim 58, configured to
in spatially predicting the predetermined sub-block, also use samples of one or more sub-blocks of the predetermined inter-predicted block having been predicted from the patch the translational displacement of which to the one or more sub-blocks is indicated by the motion vector of the one or more sub-blocks.
60. Block-based video decoder of any of claims 53 to 59, wherein the motion information comprises a first motion vector (306a) and a second motion vector (306b) which define a motion field at different corners of the predetermined inter-predicted block.
61. Block-based video decoder, configured to
establish, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block by
identifying an aligned block in a reference picture, spatially aligned to the predetermined inter-predicted block,
identifying spatially neighboring blocks in the current picture, spatially neighboring the predetermined inter-predicted block,
populating the motion information prediction candidate list by one or more spatial motion information prediction candidates derived from first reference motion information using which the spatially neighboring blocks in the current picture have been predicted and a temporal motion information prediction candidate derived from second reference motion information
using which the aligned block in the reference picture has been predicted with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list at ranks preceding the temporal motion information prediction candidate,
derive for the predetermined inter-predicted block a pointer pointing, in rank order, into the motion information prediction candidate list,
perform a selection out of the motion information prediction candidate list using the pointer to obtain a motion information prediction for the predetermined inter-predicted block.
62. Block-based video decoder, configured to
establish, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block of a current picture by
identifying spatially neighboring blocks in the current picture, spatially neighboring the predetermined inter-predicted block,
populating the motion information prediction candidate list by one or more spatial motion information prediction candidates derived from first reference motion information using which the spatially neighboring blocks in the current picture have been predicted and a history based motion information prediction candidate derived from a history-based temporal motion information prediction candidate list with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list at ranks preceding the history based motion information prediction candidate,
derive for the predetermined inter-predicted block a pointer pointing, in rank order, into the motion information prediction candidate list,
perform a selection out of the motion information prediction candidate list using the pointer to obtain a motion information prediction for the predetermined inter-predicted block.
63. Block-based video decoder of claim 62, configured to
manage the history-based temporal motion information prediction candidate list by inserting thereinto motion information most recently used for predicting previous blocks preceding the predetermined inter-predicted block.
64. Block-based video decoder of claim 62 or 63, configured to
decode a motion information prediction residual from the data stream,
determine the motion information for the predetermined inter-predicted block on the basis of the motion information prediction and motion information prediction residual.
65. Block-based video decoder supporting motion-compensated bi-directional prediction and comprising a bi-directional optical flow tool for improving the motion-compensated bi-directional prediction, wherein the block-based video decoder is configured to
deactivate the bi-directional optical flow tool depending on whether at least one of first and second patches (130₁, 130₂) of a predetermined inter-predicted block of a current picture (12a) to be subject to motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block according to first and second motion vectors signaled in the data stream for the predetermined inter-predicted block, crosses boundaries of a tile of the current picture by which the predetermined inter-predicted block is comprised, or
use boundary padding so as to fill a portion of first and second patches of a predetermined inter-predicted block of a current picture to be subject to the motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block according to first and second motion vectors signaled in the data stream for the predetermined inter-predicted block, which portion lies beyond boundaries of a tile of the current picture, by which the predetermined inter-predicted block is comprised.
66. Block-based video decoder of claim 65, configured to, for each of the first and second motion vectors,
check whether a signaled state of the respective motion vector in the data stream leads to the respective patch (130b) exceeding a tile (100a) by which the predetermined inter-predicted block (104) is comprised, by more than a predetermined sample width (n) associated with the bi-directional optical flow tool and, if so, redirect (142) the respective motion vector from the signaled state to a redirected state leading to the respective patch (130b’) to be within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, or by no more than the predetermined sample width (n) associated with the bi-directional optical flow tool.
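The redirection of claim 66 can be sketched as a clamp that lets the patch exceed the tile by at most the n-sample width used by the optical flow tool. Names and the integer-sample coordinate convention are assumptions:

```python
def redirect_mv_for_bdof(mv, block, tile, n):
    """If the signaled vector makes the patch exceed the tile by more than
    the n-sample width associated with the bi-directional optical flow
    tool, redirect it so the patch stays within the tile or exceeds it
    by no more than n samples on each side."""
    bx, by, bw, bh = block
    x0, y0, x1, y1 = tile
    mvx = max(x0 - n - bx, min(mv[0], x1 + n - bw - bx))
    mvy = max(y0 - n - by, min(mv[1], y1 + n - bh - by))
    return mvx, mvy
```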
67. Block-based video decoder of claim 65 or 66, configured to, for each of the first and second motion vectors,
predict the respective motion vector for the predetermined inter-predicted block (104) to obtain a respective motion information prediction (150), and
decode a respective motion vector prediction residual (152) from the data stream (14),
determine the respective motion vector for the predetermined inter-predicted block (104) on the basis of the respective motion vector prediction (150) and respective motion vector prediction residual (152) with obeying a constraint on the respective motion vector so that the respective patch does not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, or by no more than a predetermined sample width (n) associated with the bi-directional optical flow tool.
68. Block-based video decoder of any of claims 65 to 67, configured to, for each of the first and second motion vectors,
obtain a respective preliminary predictor (402₁, 402₂) for the predetermined inter-predicted block using the respective motion vector
by means of a convolution of a filter kernel with the respective patch so that the respective patch is widened compared to the predetermined inter-predicted block by a width of the filter kernel and an n-sample wide extension, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined inter-predicted block to one corresponding sample (164) of the patch so that the patch is widened compared to the predetermined inter-predicted block by the n-sample wide extension, if the motion vector has a sub-pel part which is zero, and
wherein the bi-directional optical flow tool is configured to bi-predictively predict the predetermined inter-predicted block by
locally determine a luminance gradient over the preliminary predictors and combine (436) same to derive a predictor (438) for the inter-predicted block in a manner locally varying according to the luminance gradient.
69. Block-based video decoder configured to
establish a motion information prediction candidate list for predetermined inter-predicted blocks of a current picture by
populating the motion information prediction candidate list with one or more motion information prediction candidates (192),
selecting a predetermined motion information prediction candidate out of a reservoir (502) of further motion information prediction candidates (504) depending on a dissimilarity of each of the further motion information prediction candidates from the one or more motion information prediction candidates,
populating the motion information prediction candidate list with the predetermined motion information prediction candidate,
wherein the selection depends on the dissimilarity such that a mutual motion information prediction candidate dissimilarity within the motion information
prediction candidate list is increased compared to performing the selection using an equal selection probability among the further motion information prediction candidates,
derive for a predetermined inter-predicted block a pointer pointing into the motion information prediction candidate list, and
perform a selection out of the motion information prediction candidate list using the pointer to obtain a motion information prediction for the predetermined inter-predicted block.
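The dissimilarity-driven selection of claim 69 can be illustrated with a farthest-point choice: take from the reservoir the candidate whose minimum distance to the already-listed candidates is largest. The Manhattan distance on 2-D motion vectors is an assumption; the claim only requires that the selection increase the mutual dissimilarity within the list relative to an equal-probability selection:

```python
def most_dissimilar_candidate(reservoir, current_list):
    """Pick from the reservoir the candidate whose minimum Manhattan
    distance to the candidates already in the list is largest, which
    increases the mutual dissimilarity within the final list."""
    def min_dist(cand):
        return min(abs(cand[0] - c[0]) + abs(cand[1] - c[1]) for c in current_list)
    return max(reservoir, key=min_dist)
```

Here a near-duplicate of an existing candidate would never be picked ahead of a clearly distinct one, so list entries cover the motion space more broadly.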
70. Block-based video decoder of claim 69, wherein the reservoir (502) of further motion information prediction candidates (504) is a history-based temporal motion information prediction candidate list and the block-based video decoder is configured to
manage the history-based temporal motion information prediction candidate list by inserting thereinto motion information most recently used for predicting previous inter-predicted blocks.
71. Block-based video encoder for encoding a video (11) into a data stream (14) and supporting motion-compensated prediction configured to
determine motion information for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11), which locates a patch (130) in a reference picture (12b), from which the predetermined inter-predicted block (104) is to be predicted, in a manner so that the patch (130) is within, and does not cross, boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised,
predict the predetermined inter-predicted block (104) using the motion information from the patch (130) of the reference picture (12b),
encode the motion information into the data stream (14), so that a signalization thereof into the data stream (14) is performed depending on a position of boundaries (102) between tiles (100), into which the video (11) is spatially partitioned.
72. Block-based video encoder of claim 71, configured to, in encoding the motion information into the data stream (14),
predict the motion information for the predetermined inter-predicted block (104) to obtain a motion information prediction (150), and
encode a motion information prediction residual (152) into the data stream (14),
so that the motion information prediction (150) and the motion information prediction residual (152) are mapped onto the motion information for the predetermined inter-predicted block (104) by way of a non-invertible mapping which maps all possible combinations for the motion information prediction and the motion information prediction residual (152) exclusively onto a respective possible motion information so that, if the patch (130) was located using the respective possible motion information, the patch (130) would not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and which maps more than one possible combination for the motion information prediction and the motion information prediction residual (152), equaling in the motion information prediction (150) and differing in the motion information prediction residual (152), onto one possible motion information.
73. Block-based video encoder of claim 72, configured to, in encoding the motion information prediction residual (152) into the data stream (14),
encode the motion information prediction residual (152) as being zero if the non-invertible mapping maps the motion information prediction (150) combined with the motion information prediction residual (152) being zero and the motion information prediction (150) combined with the motion information prediction residual (152) being a predetermined non-zero value onto the motion information of the predetermined inter-predicted block (104).
74. Block-based video encoder of claim 72 or 73, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block (104) by means of a mathematical combination of samples (164) within a respective portion of the patch (130) and/or by deriving two motion vectors (306a, 306b) from the motion information which are defined for two different corners of the predetermined inter-predicted block (104) and computing, for each of the subblocks into which the predetermined inter-predicted block (104) is partitioned, a subblock’s motion vector (304) locating a respective subblock patch within the reference picture (12b), wherein the subblock patches of all subblocks form the patch of the predetermined inter-predicted block, and
so that the non-invertible mapping maps all possible combinations for the motion information prediction (150) and the motion information prediction residual (152) exclusively onto the respective possible motion information so that, if the patch (130) was located using the respective possible motion information, all samples of the portion of the patch (130) for all samples (162) of the predetermined inter-predicted block (104) lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised.
75. Block-based video encoder of claim 72, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block (104)
by means of a weighted sum of samples (164) within a respective portion of the patch, so that the patch (130) is widened compared to the predetermined inter-predicted block (104) by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch, so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion information falls within a second subset of the value domain of the motion information, and
wherein the non-invertible mapping maps all possible combinations for the motion information prediction (150) and the motion information prediction residual (152) exclusively onto a respective possible motion information so that, if the patch (130) was located using the respective possible motion information, all samples of the patch (130) lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised and so that, if the respective possible motion information falls within the second subset of the value domain of the motion information, the constraint allows the patch (130) to get closer to the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised than a width of the extension edge portion.
76. Block-based video encoder of claim 72, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples (162) of the predetermined inter-predicted block (104)
by means of a weighted sum of samples (164) within a respective portion of the patch, so that the patch (130) is widened compared to the predetermined inter-predicted block (104) by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch, so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion information falls within a second subset of the value domain of the motion information, and
wherein the non-invertible mapping maps all possible combinations for the motion information prediction (150) and the motion information prediction residual (152) exclusively onto a respective possible motion information so that, if the patch (130) was located using the respective possible motion information,
all samples (164) of the patch (130) lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, if the motion information falls within the second subset of the value domain of the motion information,
all samples (164) of the patch (130) are distanced from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, by at least a distance accommodating the extension edge portion, if the motion information falls within the first subset of the value domain of the motion information.
77. Block-based video encoder of claim 71, wherein the motion information comprises a motion vector indicating a translational displacement between the patch (130) and the predetermined inter-predicted block (104) and the block-based video encoder is configured to
predict the motion vector for the predetermined inter-predicted block (104) to obtain a motion vector prediction, and
encode a motion vector prediction residual into the data stream (14),
so that the motion vector prediction and the motion vector prediction residual are mapped onto the motion vector for the predetermined inter-predicted block (104) by way of a non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector which maps all possible combinations for the motion vector prediction and the motion vector prediction residual exclusively onto a respective possible motion vector so that, if the patch (130) was located using the respective possible motion vector, the patch (130) would not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and which maps more than one possible combination for the motion vector prediction and the motion vector prediction residual, equaling in the motion vector prediction and differing in the motion vector prediction residual, onto one possible motion vector.
78. Block-based video encoder of claim 77, configured to, in encoding the motion vector prediction residual into the data stream (14),
encode the motion vector prediction residual as being zero if the non-invertible mapping maps the motion vector prediction combined with the motion vector prediction residual being zero and the motion vector prediction combined with the motion vector prediction residual being a predetermined non-zero value onto the motion vector of the predetermined inter-predicted block (104).
79. Block-based video encoder of claim 71, wherein the motion information comprises a motion vector indicating a translational displacement between the patch (130) and the predetermined inter-predicted block (104) and the block-based video encoder is configured to
predict the motion vector for the predetermined inter-predicted block (104) to obtain a motion vector prediction, and
encode a motion vector prediction residual into the data stream (14),
predict the predetermined inter-predicted block (104) using the motion vector by predicting samples (162) of the predetermined inter-predicted block (104)
by means of a convolution of a filter kernel with the patch (130) so that the patch (130) is widened compared to the predetermined inter-predicted block owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined inter-predicted block (104) to one corresponding sample of the patch (130) so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion vector has a sub-pel part which is zero, and
so that the motion vector prediction and the motion vector prediction residual are mapped onto the motion vector for the predetermined inter-predicted block (104) by way of a non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector which maps all possible combinations for the motion vector prediction and the motion vector prediction residual exclusively onto a respective possible motion vector so that, if the patch (130) was located using the respective possible motion vector, all samples of the patch (130) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and which maps more than one possible combination for the motion vector prediction and the motion vector prediction residual, differing in the motion vector prediction residual, onto one possible motion vector.
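The patch geometry described in claim 79 can be sketched in a one-dimensional model; the function name, the default 8-tap kernel, and the asymmetric support split are assumptions for illustration only:

```python
# Illustrative 1-D model of the patch extent in claim 79: a non-zero
# sub-pel motion component widens the reference patch by the
# interpolation filter support, while a zero sub-pel component leaves
# the patch exactly as wide as the block.

def patch_extent(block_x, block_w, mv_full, mv_sub, kernel_taps=8):
    x0 = block_x + mv_full
    if mv_sub == 0:
        return (x0, x0 + block_w)                   # patch as wide as the block
    half = kernel_taps // 2
    return (x0 - (half - 1), x0 + block_w + half)   # widened patch

assert patch_extent(96, 8, -4, 0) == (92, 100)   # full-pel: no widening
assert patch_extent(96, 8, -4, 1) == (89, 104)   # sub-pel: widened patch
```

The widened extent is what must not cross the tile boundary, which is why zeroing the sub-pel part near a boundary removes the widening entirely.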
80. Block-based video encoder of claim 79, configured
so that the non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector maps the possible combinations for the motion vector prediction and the motion vector prediction residual onto the respective possible motion vector so that the respective possible motion vector is the sum of the motion vector prediction and the motion vector prediction residual, as long as, if the patch (130) was located using the respective possible motion vector, all samples of the patch (130) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and maps possible combinations for the motion vector prediction and the motion vector prediction residual for which the sum of the motion vector prediction and the motion vector prediction residual would result in a distance of a footprint of the predetermined inter-predicted block (104), displaced relative to the predetermined inter-predicted block (104) according to the sum of the motion vector prediction and the motion vector prediction residual, from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, that is below a widening reach at which the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel, to a respective motion vector a sub-pel part of which is set to zero.
81. Block-based video encoder of claim 80, configured to
leave the sub-pel part of the preliminary version of the motion vector unamended to obtain the motion vector if the distance (171) of the footprint (160) is not below the widening reach.
82. Block-based video encoder of claim 80, configured
so that the non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector maps the possible combinations for the motion vector prediction and the motion vector prediction residual for which the sum of the motion vector prediction and the motion vector prediction residual would result in a distance (171) of a footprint (160) of the predetermined inter-predicted block (104), displaced relative to the predetermined inter-predicted block (104) according to the sum of the motion vector prediction and the motion vector prediction residual, from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, that is below a widening reach at which the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel, to a respective motion vector a sub-pel part of which is set to zero and a full-pel part of which is equal to the full-pel part of the sum.
83. Block-based video encoder of any of claims 80 to 82, configured
so that the non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector maps the possible combinations for the motion vector prediction and the motion vector prediction residual for which the sum of the motion vector prediction and the motion vector prediction residual would result in a distance (171) of a footprint (160) of the predetermined inter-predicted block (104), displaced relative to the predetermined inter-predicted block (104) according to the sum of the motion vector prediction and the motion vector prediction residual, from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, that is below a widening reach at which the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel, to a respective motion vector a sub-pel part of which is set to zero and a full-pel part of which is set in a manner so that the respective motion vector corresponds to the sum rounded to the nearest full-pel motion vector.
84. Block-based video encoder of any of claims 80 to 82, configured
so that the non-invertible mapping from the motion vector prediction and the motion vector prediction residual to the motion vector maps the possible combinations for the motion vector prediction and the motion vector prediction residual for which the sum of the motion vector prediction and the motion vector prediction residual would result in a distance (171) of a footprint (160) of the predetermined inter-predicted block (104), displaced relative to the predetermined inter-predicted block (104) according to the sum of the motion vector prediction and the motion vector prediction residual, from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, that is below a widening reach at which the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel, to a respective motion vector a sub-pel part of which is set to zero and a full-pel part of which is set in a manner so that the respective motion vector corresponds to the sum rounded to the full-pel motion vector being nearest to, and smaller than, the sum.
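The boundary-dependent sub-pel handling in the preceding claims can be sketched as follows; all names, the one-dimensional model, quarter-pel precision, and the choice of flooring toward the nearest smaller full-pel vector are assumptions of this sketch:

```python
# Hedged sketch: when the displaced block footprint lies closer to the
# tile boundary than the widening reach of the interpolation filter,
# the sub-pel part of the summed motion vector is set to zero, here by
# flooring toward the nearest smaller full-pel vector.

PEL = 4  # quarter-pel units per full pel (assumption)

def constrain_mv(mv_sum, block_x, block_w, tile_lo, tile_hi, reach):
    full = mv_sum // PEL           # full-pel part (floor division)
    sub = mv_sum - full * PEL      # sub-pel part
    left = (block_x + full) - tile_lo                # distance to left edge
    right = tile_hi - (block_x + block_w + full)     # distance to right edge
    if sub != 0 and min(left, right) < reach:
        return full * PEL          # sub-pel part zeroed near the boundary
    return mv_sum

# Block at x=66, width 8, tile [64, 128), widening reach of 3 samples:
# -5 quarter-pel units (-1.25 pel) would need filter taps outside the
# tile and is floored to the full-pel vector -8 (-2 pel); +5 stays.
assert constrain_mv(-5, 66, 8, 64, 128, 3) == -8
assert constrain_mv(5, 66, 8, 64, 128, 3) == 5
```

Since the mapping discards the sub-pel part only near the boundary, motion vectors well inside the tile pass through unchanged, which keeps the codec behavior away from tile boundaries unmodified.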
85. Block-based video encoder of any of claims 71 to 84, configured to
signal, by prediction, a motion information prediction candidate (192c) for the predetermined inter-predicted block (104) with obeying, for the motion information prediction candidate (192c), a constraint on the motion information prediction candidate so that, if the motion information was set to be equal to the motion information prediction candidate (192c), the patch (130) would not exceed boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised, and
encode a motion information prediction residual (152) into the data stream (14),
so that the motion information for the predetermined inter-predicted block (104) may be signaled by correcting the motion information prediction candidate using the motion information prediction residual (152).
86. Block-based video encoder of claim 85, configured to
obey the constraint on the motion information prediction candidate (192c) so that, if the motion information was set to be equal to the motion information prediction candidate, the patch (130) would not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised by, in signaling the motion information prediction candidate (192c), using a non-invertible mapping to map a preliminary motion information prediction candidate (192c’) to the motion information prediction candidate (192c), which maps the preliminary motion information prediction candidate exclusively onto a respective mapped motion information candidate so that, if the patch (130) was located using the respective mapped motion information candidate, the patch (130) would not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and which maps different settings for the preliminary motion information prediction candidate onto one possible setting for the respective mapped motion information candidate.
87. Block-based video encoder of claim 85 or 86, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) including the motion information prediction candidate (192).
88. Block-based video encoder of claim 87, configured to
signal, by prediction, each motion information prediction candidate (192) of the motion information prediction candidate list (190) with obeying, for the respective motion information prediction candidate, a constraint on the motion information prediction candidate so that, if the motion information was set to be equal to the respective motion information prediction candidate, the patch (130) would not exceed boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised.
89. Block-based video encoder of claim 87 or 88, configured to
insert into the data stream (14) for the predetermined inter-predicted block (104) a pointer (193) into the motion information prediction candidate list (190) which points to the motion information prediction candidate (192) on the basis of which the motion information is determined using which the predetermined inter-predicted block (104) is predicted.
90. Block-based video encoder of any of claims 85 to 89, configured to
predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples of the predetermined inter-predicted block (104) by means of a mathematical combination of samples (164) within a respective portion of the patch (130) and/or by deriving two motion vectors (306a, 306b) from the motion information which are defined for two different corners of the predetermined inter-predicted block (104) and computing, for each of the subblocks into which the predetermined inter-predicted block (104) is partitioned, a subblock’s motion vector (304) locating a respective subblock patch within the reference picture (12b), wherein the subblock patches of all subblocks form the patch of the predetermined inter-predicted block, and
so that the constraint on the motion information prediction candidate is selected so that, if the motion information was set to be equal to the motion information prediction candidate, all samples of the portion of the patch (130) for all samples of the predetermined inter-predicted block (104) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised.
91. Block-based video encoder of any of claims 85 to 89, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples of the predetermined inter-predicted block (104)
by means of a weighted sum of samples (164) within a respective portion of the patch (130), so that the patch (130) is widened compared to the predetermined inter-predicted block (104) by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch (130), so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information prediction candidate is selected so that, if the motion information was set to be equal to the motion information prediction candidate, all samples of the patch (130) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised and so that, if the motion information falls within the second subset of the value domain of the motion information, the constraint allows the patch (130) to get closer to the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised than a width of the extension edge portion.
92. Block-based video encoder of any of claims 85 to 89, configured
to predict the predetermined inter-predicted block (104) using the motion information by predicting each of samples of the predetermined inter-predicted block (104)
by means of a weighted sum of samples (164) within a respective portion of the patch (130), so that the patch (130) is widened compared to the predetermined inter-predicted block (104) by an extension edge portion, if the motion information falls within a first subset of a value domain of the motion information,
by setting same equal to one corresponding sample of the patch (130), so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion information falls within a second subset of the value domain of the motion information, and
so that the constraint on the motion information prediction candidate is selected so that, if the motion information was set to be equal to the motion information prediction candidate,
all samples of the patch (130) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, if the motion information falls within the second subset of the value domain of the motion information,
all samples of the patch (130) would be distanced from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, by at least a distance accommodating the extension edge portion, if the motion information falls within the first subset of the value domain of the motion information.
93. Block-based video encoder of any of claims 85 to 89, wherein the motion information comprises a motion vector indicating a translational displacement between the patch (130) and the predetermined inter-predicted block (104) and the block-based video encoder is configured to
predict the predetermined inter-predicted block (104) using the motion vector by predicting samples of the predetermined inter-predicted block (104)
by means of a convolution of a filter kernel with the patch (130) so that the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined inter-predicted block (104) to one corresponding sample of the patch (130) so that the patch (130) is as wide as the predetermined inter-predicted block (104), if the motion vector has a sub-pel part which is zero, and
perform the signalization of the motion vector prediction candidate for the predetermined inter-predicted block (104) so that, if the motion vector was set to be equal to the motion vector prediction candidate, all samples of the patch (130) would lie within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised.
94. Block-based video encoder of any of claims 85 to 89, configured to
perform the signalization of the motion vector prediction candidate for the predetermined inter-predicted block (104) by
signaling, by prediction, a preliminary version of the motion vector prediction candidate for the predetermined inter-predicted block (104),
checking whether a distance (171) of a footprint (160) of the predetermined inter-predicted block (104), displaced relative to the predetermined inter-predicted block (104) according to the preliminary version of the motion vector prediction candidate, from the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, is below a widening reach at which the patch (130) is widened compared to the predetermined inter-predicted block (104) owing to a width of the filter kernel and
setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate if the distance (171) of the footprint (160) is below the widening reach.
95. Block-based video encoder of claim 94, configured to
leave the sub-pel part of the preliminary version of the motion vector prediction candidate unamended to obtain the motion vector prediction candidate if the distance (171) of the footprint (160) is not below the widening reach.
96. Block-based video encoder of claim 94 or 95, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate, adapt a full-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate in a manner so that the zero-setting of the sub-pel part and the adaptation of the full-pel part result in a rounding of the preliminary version of the motion vector prediction candidate to a nearest full-pel motion vector prediction candidate, resulting in all samples of the patch (130) lying within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised and being nearest to the preliminary version of the motion vector prediction candidate before the zero-setting of the sub-pel part and the adaptation of the full-pel part.
97. Block-based video encoder of claim 94 or 95, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector prediction candidate, clip a full-pel part of the preliminary version of the motion vector prediction candidate to obtain the motion vector prediction candidate in a manner so that the zero-setting of the sub-pel part and the clipping of the full-pel part result in a rounding of the preliminary version of the motion vector prediction candidate to a nearest full-pel motion vector prediction candidate, resulting in all samples of the patch (130) lying within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised and being nearest to and smaller than the preliminary version of the motion vector prediction candidate before the zero-setting of the sub-pel part and the clipping of the full-pel part.
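The two full-pel adaptation variants above (rounding to the nearest full-pel candidate versus clipping toward the nearest smaller one) can be contrasted in a short sketch; the function names and the assumed quarter-pel precision are illustrative only:

```python
# Sketch of the two full-pel adaptations: rounding the candidate to
# the nearest full-pel vector versus clipping it toward the nearest
# full-pel vector not larger than the candidate.

PEL = 4  # quarter-pel units per full pel (assumption)

def round_nearest_full_pel(mv):
    # nearest-variant: round to the closest full-pel candidate
    return ((mv + PEL // 2) // PEL) * PEL

def clip_smaller_full_pel(mv):
    # smaller-variant: floor to the closest smaller full-pel candidate
    return (mv // PEL) * PEL

assert round_nearest_full_pel(7) == 8   # 1.75 pel -> 2 pel
assert clip_smaller_full_pel(7) == 4    # 1.75 pel -> 1 pel
```

Clipping toward the smaller full-pel vector is the conservative choice near a boundary, since it never moves the patch further toward the tile edge than the unrounded candidate would have.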
98. Block-based video encoder of any of claims 71 to 97, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by selectively populating the motion information prediction candidate list (190) with at least one motion information prediction candidate out of a plurality (200) of motion information prediction candidates by
checking whether, if the motion information was set to be equal to the at least one motion information prediction candidate, the patch (130) would exceed the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised,
if yes, do not populate the motion information prediction candidate list (190) with the at least one motion information prediction candidate, and
if not, populate the motion information prediction candidate list (190) with the at least one motion information prediction candidate.
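The selective list population above admits a compact sketch; all names and the one-dimensional full-pel model are assumptions introduced for illustration:

```python
# Minimal sketch: a candidate only enters the motion information
# prediction candidate list if the patch it locates stays inside the
# current tile (1-D, full-pel simplification).

def patch_inside_tile(mv, block_x, block_w, tile_lo, tile_hi):
    x0 = block_x + mv
    return tile_lo <= x0 and x0 + block_w <= tile_hi

def build_candidate_list(candidates, block_x, block_w, tile_lo, tile_hi):
    return [c for c in candidates
            if patch_inside_tile(c, block_x, block_w, tile_lo, tile_hi)]

# Block at x=96, width 8, tile [64, 128): the candidate -40 would
# locate a patch outside the tile and is not entered into the list.
assert build_candidate_list([-40, -16, 0, 20], 96, 8, 64, 128) == [-16, 0, 20]
```

Filtering at list-construction time keeps the list indices identical on encoder and decoder side, so the pointer into the list remains meaningful without extra signaling.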
99. Block-based video encoder of claim 98, configured to
if, if the motion information was set to be equal to the at least one motion information prediction candidate, the patch (130) would not exceed the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, and if the at least one motion information prediction candidate is part of a bi-predictive motion information the other part of which relates to another reference picture and, if the motion information was set to be equal to the other part, the patch (130) would exceed the boundaries (102) of the tile by which the predetermined inter-predicted block (104) is comprised, enter the at least one motion information prediction candidate into a preliminary motion information list of motion information prediction candidates and use same to populate the motion information prediction candidate list (190) with a pair of entries in the preliminary motion information list of motion information prediction candidates or a combination of one entry in the preliminary motion information list and a default motion information in case of a lack of availability of further motion information prediction candidates.
100. Block-based video encoder of claim 98 or 99, configured to
perform a selection out of the motion information prediction candidate list (190) to obtain a motion information prediction (150), and
encode a motion information prediction residual (152) into the data stream (14),
determine the motion information for the predetermined inter-predicted block (104) on the basis of the motion information prediction (150) and motion information prediction residual (152).
101. Block-based video encoder of claim 100, configured to
signal for the predetermined inter-predicted block (104) a pointer (193) into the motion information prediction candidate list (190) and
use the pointer (193) for the selection.
102. Block-based video encoder of any of claims 71 to 101, configured to
establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by selectively populating the motion information prediction candidate list (190) with at least one motion information prediction candidate (192) out of a plurality (200) of motion information prediction candidates by
checking whether the predetermined inter-predicted block (104) in the current picture (12a) adjoins a predetermined side of a tile (100a) by which the predetermined inter-predicted block (104) is comprised,
if yes, identify a first block (212) out of blocks of the reference picture (12b) or a further reference picture (12b’) and populate the motion information prediction candidate list (190) with reference motion information using which the first block (212) has been predicted, and
if not, identify a second block (206) out of blocks of the reference picture (12b) or the further reference picture (12b’) and populate the motion information prediction candidate list (190) with reference motion information using which the second block (206) has been predicted.
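The boundary-dependent choice between the two alignment locations above can be sketched as follows; the coordinate conventions and function name are assumptions of this illustration:

```python
# Hedged sketch: the block used for the temporal candidate is found via
# an alignment location that normally lies diagonally outside the block
# (bottom-right corner neighbor), but falls back to a location inside
# the block (its center) when the block adjoins the predetermined tile
# side, so no motion data beyond the tile boundary is referenced.

def alignment_location(block_x, block_y, block_w, block_h, adjoins_side):
    if adjoins_side:
        # first alignment location: center sample inside the block
        return (block_x + block_w // 2, block_y + block_h // 2)
    # second alignment location: diagonal neighbor of the bottom-right
    # corner sample, outside the block
    return (block_x + block_w, block_y + block_h)

assert alignment_location(96, 32, 8, 8, True) == (100, 36)
assert alignment_location(96, 32, 8, 8, False) == (104, 40)
```

Falling back to an inside location at the tile side ensures the collocated motion data is always read from within the co-located tile region of the reference picture.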
103. Block-based video encoder of claim 102, configured to
identify the first block (212) in the reference picture (12b) or further reference picture (12b’) as one of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a first predetermined location (210), collocated to a first alignment location (208) in the current picture (12a), having a first predetermined locational relationship to the predetermined inter-predicted block (104),
wherein the first alignment location (208) lies inside the predetermined inter-predicted block (104).
104. Block-based video encoder of claim 103,
wherein the first alignment location (208) is a sample position in the current picture (12a) centered in the predetermined inter-predicted block (104).
105. Block-based video encoder of any of claims 102 to 104, configured to
identify the second block (206) as one of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a second predetermined location (204’), collocated to a second alignment location (204) in the current picture (12a), having a second predetermined locational relationship to the predetermined inter-predicted block (104),
wherein the second alignment location (204) lies outside the predetermined inter-predicted block (104) and is offset relative to the predetermined inter-predicted block (104) along a direction perpendicular to the predetermined side.
106. Block-based video encoder of claim 105,
wherein the second alignment location (204) is a sample position in the current picture (12a) lying outside the predetermined inter-predicted block (104) and diagonally neighboring a corner sample of the predetermined inter-predicted block (104) which adjoins the predetermined side.
107. Block-based video encoder of any of claims 102 to 106, configured to
perform a selection out of the motion information prediction candidate list (190) to obtain a motion information prediction (150), and
encode a motion information prediction residual (152) into the data stream (14),
determine the motion information for the predetermined inter-predicted block (104) on the basis of the motion information prediction (150) and motion information prediction residual (152).
108. Block-based video encoder of claim 99, configured to
signal for the predetermined inter-predicted block (104) a pointer (193) into the motion information prediction candidate list (190) and
use the pointer (193) for the selection.
109. Block-based video encoder of any of claims 71 to 108, configured to
identify an aligned block (206; 212) in the reference picture (12b) or further reference picture (12b’), spatially aligned to the predetermined inter-predicted block (104),
identify spatially neighboring blocks (220) in the current picture (12a), spatially neighboring the predetermined inter-predicted block (104),
check whether the predetermined inter-predicted block (104) in the current picture (12a) adjoins a predetermined side of a tile (100a) by which the predetermined inter-predicted block (104) is comprised,
if so, establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates (200b) signaled into first reference motion information using which the spatially neighboring blocks (220) in the current picture (12a) have been predicted and a temporal motion information prediction candidate (200a) signaled into second reference motion information using which the aligned block (206; 212) has been predicted, with positioning the one or more spatial motion information prediction candidates (200b) in the motion information prediction candidate list (190) at ranks preceding the temporal motion information prediction candidate,
if not, establish, by prediction, a motion information prediction candidate list (190) for the predetermined inter-predicted block (104) by populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates (200b) signaled into first reference motion information using which the spatially neighboring blocks (220) in the current picture (12a) have been predicted and a temporal motion information prediction candidate (200a) signaled into second reference motion information using which the aligned block (206; 212) has been predicted, with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list (190) at ranks following the temporal motion information prediction candidate,
signal for the predetermined inter-predicted block (104) a pointer (193) pointing, in rank order (195), into the motion information prediction candidate list (190),
perform a selection out of the motion information prediction candidate list (190) using the pointer (193) to obtain the motion information prediction (150).
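As a loose illustration of the rank ordering described in the claim above, the following sketch orders the spatial candidates before the temporal candidate when the block adjoins the predetermined tile side, and after it otherwise. This is hypothetical Python; the function and variable names are illustrative and not taken from the claims.

```python
def build_candidate_list(spatial_cands, temporal_cand, adjoins_tile_side):
    """Populate a candidate list; the rank of the temporal candidate
    depends on tile-side adjacency (sketch only)."""
    if adjoins_tile_side:
        # spatial candidates at ranks preceding the temporal candidate
        return list(spatial_cands) + [temporal_cand]
    # spatial candidates at ranks following the temporal candidate
    return [temporal_cand] + list(spatial_cands)

def select_candidate(cand_list, pointer):
    # the signaled pointer indexes the list in rank order
    return cand_list[pointer]
```

A decoder building the same list in the same order can then resolve the signaled pointer with `select_candidate`.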
110. Block-based video encoder of claim 109, wherein
the one or more spatial motion information prediction candidates (200b) signaled into first reference motion information using which the spatially neighboring blocks (220) in the current picture (12a) have been predicted comprise at least one equaling the first reference motion information using which one of the spatially neighboring blocks (220) in the current picture (12a) has been predicted and at least one equaling a combination of the first reference motion information using which different ones of the spatially neighboring blocks (220) in the current picture (12a) have been predicted.
111. Block-based video encoder of claim 109 or 110, configured to
if the predetermined inter-predicted block (104) in the current picture (12a) adjoins the predetermined side of the current picture (12a), identify the aligned block out of blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a first predetermined location (210), collocated to a first alignment location (208) in the current picture (12a), having a first predetermined locational relationship to the predetermined inter-predicted block (104), wherein the first alignment location (208) lies inside the predetermined inter-predicted block (104), and
if the predetermined inter-predicted block (104) in the current picture (12a) does not adjoin the predetermined side of the current picture (12a), identify the aligned block out of blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a second predetermined location (204’) in the reference picture, collocated to a second alignment location (204) in the current picture (12a), having a second predetermined locational relationship to the predetermined inter-predicted block, wherein the second alignment location lies outside the predetermined inter-predicted block (104) and is offset relative to the predetermined inter-predicted block (104) along a direction perpendicular to the predetermined side.
112. Block-based video encoder of any of claims 109 to 111, configured to, if the predetermined inter-predicted block (104) in the current picture (12a) adjoins the predetermined side of the current picture (12a),
identify the aligned block in the reference picture out of blocks into which the reference picture is partitioned by
identifying a first candidate block of the blocks of the reference picture (12b) or further reference picture (12b’) which includes a second predetermined location (204’), collocated to a second alignment location (204) in the current picture (12a), having a second predetermined locational relationship to the predetermined inter-predicted block (104), wherein the second alignment location lies outside the predetermined inter-predicted block (104) and is offset relative to the predetermined inter-predicted block (104) along a direction perpendicular to the predetermined side, and
checking whether the first candidate block is inter-prediction coded,
if yes, appoint the first candidate block as the aligned block, and
if not, identify the aligned block out of the blocks into which the reference picture (12b) or further reference picture (12b’) is partitioned as one of the blocks which includes a first predetermined location (210) in the reference picture, collocated to a first alignment location (208) in the current picture (12a), having a first predetermined locational relationship to the predetermined inter-predicted block (104), wherein the first alignment location lies inside the predetermined inter-predicted block (104).
113. Block-based video encoder of claim 111 or 112,
wherein the first alignment location is a sample position in the current picture (12a) centered in the predetermined inter-predicted block (104).
114. Block-based video encoder of any of claims 109 to 113,
wherein the second alignment location is a sample position in the current picture (12a) lying outside the predetermined inter-predicted block (104) and diagonally neighboring a corner sample of the predetermined inter-predicted block (104) which adjoins the predetermined side.
115. Block-based video encoder of any of claims 109 to 114, configured to
encode a motion information prediction residual (152) into the data stream (14),
determine the motion information (154) for the predetermined inter-predicted block (104) on the basis of the motion information prediction (150) and motion information prediction residual (152).
116. Block-based video encoder of any of claims 71 to 115, configured to
signal, by prediction, a temporal motion information prediction candidate for the predetermined inter-predicted block (104) by
predicting a motion vector (240) for the predetermined inter-predicted block (104),
clipping the predicted motion vector (240) so as to, starting from the predetermined inter-predicted block (104), stay within boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised, to obtain a clipped motion vector (248), and
signaling the temporal motion information prediction candidate for the predetermined inter-predicted block (104) into reference motion information using which a block (248) of the reference picture (12b) or further reference picture (12b’) has been predicted, into which the clipped motion vector (248) points.
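The clipping step above can be pictured with a minimal integer-pel sketch. This is hypothetical Python: real codecs clip in sub-pel units and account for the interpolation filter reach, and the names are illustrative only.

```python
def clip_mv_to_tile(block_x, block_y, block_w, block_h, mv, tile):
    """Clip (mvx, mvy) so the displaced block footprint stays inside the
    tile; tile = (x0, y0, x1, y1) with exclusive right/bottom edges."""
    x0, y0, x1, y1 = tile
    mvx, mvy = mv
    # clamp each component so the displaced footprint cannot leave the tile
    mvx = max(x0 - block_x, min(mvx, x1 - block_w - block_x))
    mvy = max(y0 - block_y, min(mvy, y1 - block_h - block_y))
    return (mvx, mvy)
```

A vector already ending inside the tile passes through unchanged; only out-of-tile components are pulled back to the boundary.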
117. Block-based video encoder of claim 116, configured to
encode a motion information prediction residual (152) into the data stream (14),
determine the motion information (154) for the predetermined inter-predicted block (104) on the basis of the temporal motion information prediction candidate (150) and the motion information prediction residual (152).
118. Block-based video encoder of claim 116 or 117, configured to,
in determining the motion information for the predetermined inter-predicted block (104) on the basis of the temporal motion information prediction candidate and the motion information prediction residual (152), signal into the data stream (14) for the predetermined inter-predicted block (104) a pointer (193) into a motion information prediction candidate list containing the temporal motion information prediction candidate and a further motion information prediction candidate comprising the predicted motion vector, and use a motion information prediction candidate pointed to by the pointer (193) along with the motion information prediction residual (152) for the determination.
119. Block-based video encoder of any of claims 71 to 118, configured to
signal, by prediction, a temporal motion information prediction candidate for the predetermined inter-predicted block (104) by
signaling first and second predicted motion vectors (240, 260) for the predetermined inter-predicted block (104),
checking whether the first predicted motion vector, starting from the predetermined inter-predicted block (104), ends within boundaries (102) of a tile by which the predetermined inter-predicted block (104) is comprised, and
if yes, signaling the temporal motion information prediction candidate for the predetermined inter-predicted block (104) into reference motion information using which a block (242) of the reference picture (12b) or a further reference picture (12b’) has been predicted, to which the first predicted motion vector points, and
if not, signaling the temporal motion information prediction candidate for the predetermined inter-predicted block (104) into reference motion information using which a further block (262) of the reference picture (12b) or further reference picture (12b’) has been predicted, to which the second predicted motion vector points.
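The two-vector fallback above can be sketched as follows, treating blocks and tiles as axis-aligned rectangles. This is a hypothetical Python illustration; the names are not taken from the claims.

```python
def mv_ends_in_tile(block, mv, tile):
    """True if the block, displaced by mv, lies inside the tile.
    block = (x, y, w, h); tile = (x0, y0, x1, y1), exclusive edges."""
    x, y, w, h = block
    x0, y0, x1, y1 = tile
    return (x0 <= x + mv[0] and x + mv[0] + w <= x1 and
            y0 <= y + mv[1] and y + mv[1] + h <= y1)

def pick_temporal_source_mv(block, mv_first, mv_second, tile):
    # prefer the first predicted vector; fall back to the second one
    # when the first would point outside the tile
    return mv_first if mv_ends_in_tile(block, mv_first, tile) else mv_second
```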
120. Block-based video encoder of claim 119, configured to
encode a motion information prediction residual (152) into the data stream (14),
determine the motion information for the predetermined inter-predicted block (104) on the basis of the temporal motion information prediction candidate and the motion information prediction residual (152).
121. Block-based video encoder of claim 119 or 120, configured to,
in determining the motion information for the predetermined inter-predicted block (104) on the basis of the temporal motion information prediction candidate and the motion information prediction residual (152), signal into the data stream (14) for the predetermined inter-predicted block (104) a pointer (193) into a motion information prediction candidate list containing a first temporal motion information prediction candidate comprising the first predicted motion vector, a second motion
information prediction candidate comprising the second predicted motion vector and the temporal motion information prediction candidate and use a motion information prediction candidate pointed to by the pointer (193) along with the motion information prediction residual (152) for the determination.
122. Block-based video encoder supporting motion-compensated prediction configured to
encode motion information (306a, 306b) for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11) into a data stream (14) into which the video (11) is coded,
signal into the motion information a motion vector (304) for each sub-block of sub-blocks (300) into which the predetermined inter-predicted block (104) is partitioned, the motion vector indicating a translational displacement between the respective sub-block and a patch (302) in a reference picture (12b), from which the respective sub-block is to be predicted,
predict the predetermined inter-predicted block (104) by predicting each sub-block using the motion vector for the respective sub-block,
wherein the block-based video encoder is configured to perform the signalization and/or the prediction depending on a position of boundaries (102) between tiles, into which the video (11) is spatially partitioned.
123. Block-based video encoder of claim 122, configured to
in predicting the predetermined inter-predicted block (104) by predicting each sub-block using the motion vector for the respective sub-block, predict samples of a predetermined sub-block
by means of a convolution of a filter kernel with the patch (302) so that the patch (302) is widened compared to the predetermined sub-block owing to a width of the filter kernel, if the motion vector has a sub-pel part which is non-zero, and
by setting each sample of the predetermined sub-block to one corresponding sample of the patch (302) so that the patch (302) is as wide as the predetermined sub-block, if the motion vector has a sub-pel part which is zero, and
perform the signalization with respect to the predetermined sub-block by
computing a preliminary version of the motion vector for the predetermined sub-block on the basis of the motion information,
checking whether a distance (171) of a footprint (160) of the predetermined sub-block, displaced relative to the predetermined inter-predicted block (104) according to the preliminary version of the motion vector, from the boundaries (102) of a tile by which the predetermined inter-predicted block (104) is comprised, is below a widening reach at which the patch (302) is widened compared to the predetermined sub-block owing to a width of the filter kernel, and
setting to zero the sub-pel part of the preliminary version of the motion vector to obtain the motion vector if the distance (171) of the footprint (160) is below the widening reach.
124. Block-based video encoder of claim 123, configured to
leave the sub-pel part of the preliminary version of the motion vector unchanged to obtain the motion vector if the distance (171) of the footprint (160) is not below the widening reach.
125. Block-based video encoder of claim 123, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, adapt a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the adaptation of the full-pel part results in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch (302) lying within the boundaries (102) of the tile by which the predetermined inter-predicted block (104) is comprised and being nearest to the preliminary version of the motion vector before zero-setting the sub-pel part and the adaptation of the full-pel part.
126. Block-based video encoder of claim 123, configured to
when setting to zero the sub-pel part of the preliminary version of the motion vector, clip a full-pel part of the preliminary version of the motion vector to obtain the motion vector in a manner so that the zero-setting of the sub-pel part and the clipping of the full-pel part results in a rounding of the preliminary version of the motion vector to a nearest full-pel motion vector, resulting in all samples of the patch (302) lying within the boundaries (102) of the tile by which the predetermined inter-predicted block (104) is comprised and being nearest to and smaller than the preliminary version of the motion vector before zero-setting the sub-pel part and clipping the full-pel part.
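Claims 123 to 126 amount to rounding a sub-pel motion vector to full-pel precision whenever the interpolation filter's widening reach would cross the tile boundary. A hypothetical quarter-pel sketch (4 units = one full pel; names illustrative, and claim 126's variant, clipping toward the smaller full-pel value, is not shown):

```python
def restrict_mv_near_tile(mv_qpel, dist_to_boundary, widening_reach):
    """If the displaced footprint is closer to the tile boundary than the
    filter's widening reach, zero the sub-pel part by rounding each
    component to the nearest full-pel position (multiple of 4)."""
    if dist_to_boundary >= widening_reach:
        return mv_qpel  # sub-pel part left unchanged (as in claim 124)

    def to_full_pel(c):
        # round to nearest multiple of 4, ties away from zero
        return ((c + 2) // 4) * 4 if c >= 0 else -(((-c) + 2) // 4) * 4

    return (to_full_pel(mv_qpel[0]), to_full_pel(mv_qpel[1]))
```

Zeroing the sub-pel part avoids the interpolation convolution, so no filter taps reach across the tile boundary.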
127. Block-based video encoder of any of claims 123 to 126, configured to,
in predicting the predetermined inter-predicted block (104) by predicting each sub-block using the motion vector for the respective sub-block,
check whether the patch (302) from which a predetermined sub-block is predicted using the motion vector for the predetermined sub-block, is within boundaries (102) of a tile by which the predetermined inter-predicted block (104) is comprised,
if not, spatially predict the predetermined sub-block, and
if yes, predict the predetermined sub-block from the patch (302).
128. Block-based video encoder of claim 127, configured to,
in spatially predicting the predetermined sub-block, also use samples of one or more sub-blocks of the predetermined inter-predicted block (104) having been predicted from the patch (302), the translational displacement of which to the one or more sub-blocks is indicated by the motion vector of the one or more sub-blocks.
129. Block-based video encoder of any of claims 122 to 128,
wherein the motion information comprises a first motion vector (306a) and a second motion vector (306b) which define a motion field at different corners of the predetermined inter-predicted block (104).
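A motion field defined by two corner motion vectors corresponds to a 4-parameter (rotation/zoom) affine model from which per-sub-block vectors are derived. A simplified floating-point sketch, assuming mv0 at the top-left and mv1 at the top-right corner (hypothetical; real codecs use fixed-point arithmetic):

```python
def subblock_motion_field(mv0, mv1, block_w, block_h, sb=4):
    """Derive one MV per sub-block from the two corner vectors,
    evaluated at each sub-block centre."""
    a = (mv1[0] - mv0[0]) / block_w  # zoom component
    b = (mv1[1] - mv0[1]) / block_w  # rotation component
    field = {}
    for y in range(0, block_h, sb):
        for x in range(0, block_w, sb):
            cx, cy = x + sb / 2, y + sb / 2  # sub-block centre
            field[(x, y)] = (mv0[0] + a * cx - b * cy,
                             mv0[1] + b * cx + a * cy)
    return field
```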
130. Block-based video encoder, configured to
establish, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block (104) by
identifying an aligned block in a reference picture (12b), spatially aligned to the predetermined inter-predicted block (104),
identifying spatially neighboring blocks in the current picture (12a), spatially neighboring the predetermined inter-predicted block (104),
populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates signaled into first reference motion information using which the spatially neighboring blocks in the current picture (12a) have been predicted and a temporal motion information prediction candidate signaled into second reference motion information using which the aligned block in the reference picture (12b) has been predicted with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list (190) at ranks preceding the temporal motion information prediction candidate,
signal for the predetermined inter-predicted block (104) a pointer (193) pointing, in rank order, into the motion information prediction candidate list,
perform a selection out of the motion information prediction candidate list (190) using the rank to obtain a motion information prediction for the predetermined inter-predicted block (104).
131. Block-based video encoder, configured to
establish, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block (104) of a current picture (12a) by
identifying spatially neighboring blocks in the current picture (12a), spatially neighboring the predetermined inter-predicted block (104),
populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates signaled into first reference motion information using which the spatially neighboring blocks in the current picture (12a) have been predicted and a history-based motion information prediction candidate signaled into a history-based temporal motion information prediction candidate list with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list (190) at ranks preceding the history-based motion information prediction candidate,
signal for the predetermined inter-predicted block (104) a pointer (193) pointing, in rank order, into the motion information prediction candidate list (190),
perform a selection out of the motion information prediction candidate list (190) using the rank to obtain a motion information prediction for the predetermined inter-predicted block (104).
132. Block-based video encoder of claim 131 , configured to
manage the history-based temporal motion information prediction candidate list by inserting thereinto motion information most recently used for predicting previous blocks preceding the predetermined inter-predicted block (104).
133. Block-based video encoder of claim 131 or 132, configured to
encode a motion information prediction residual (152) into the data stream (14), and
determine the motion information for the predetermined inter-predicted block (104) on the basis of the motion information prediction (150) and the motion information prediction residual (152).
134. Block-based video encoder supporting motion-compensated bi-directional prediction and comprising a bi-directional optical flow tool for improving the motion- compensated bi-directional prediction, wherein the block-based video encoder is configured to
deactivate the bi-directional optical flow tool depending on whether at least one of first and second patches (130₁, 130₂) of a predetermined inter-predicted block (104) of a current picture (12a) to be subject to motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block (104) according to first and second motion vectors signaled in the data stream (14) for the predetermined inter-predicted block (104), crosses boundaries (102) between tiles, into which the video (11) is spatially partitioned, or
use boundary padding so as to fill a portion of first and second patches of a predetermined inter-predicted block (104) of a current picture to be subject to the motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block (104) according to first and second motion vectors signaled in the data stream (14) for the predetermined inter-predicted block (104), which portion lies beyond boundaries (102) of a tile of the current picture, by which the predetermined inter-predicted block is comprised.
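The first alternative of claim 134, deactivating the bi-directional optical flow tool when a displaced reference patch crosses a tile boundary, can be sketched as follows (hypothetical Python; the boundary-padding alternative is not shown, and rectangles and names are illustrative):

```python
def crosses_tile(patch, tile):
    """patch/tile = (x0, y0, x1, y1) rectangles, exclusive right/bottom."""
    px0, py0, px1, py1 = patch
    tx0, ty0, tx1, ty1 = tile
    return px0 < tx0 or py0 < ty0 or px1 > tx1 or py1 > ty1

def bdof_enabled(patch_l0, patch_l1, tile):
    # deactivate the optical flow refinement if either of the two
    # displaced reference patches crosses the tile boundary
    return not (crosses_tile(patch_l0, tile) or crosses_tile(patch_l1, tile))
```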
135. Block-based video encoder of claim 134, configured to, for each of the first and second motion vectors,
check whether a signaled state of the respective motion vector in the data stream leads to the respective patch (130b) exceeding a tile (100a) by which the predetermined inter-predicted block (104) is comprised, by more than a predetermined sample width (n) associated with the bi-directional optical flow tool and, if so, redirect (142) the respective motion vector from the signaled state to a redirected state leading to the respective patch (130b’) to be within the boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, or by no more than the predetermined sample width (n) associated with the bi-directional optical flow tool.
136. Block-based video encoder of claim 134 or 135, configured to, for each of the first and second motion vectors,
predict the respective motion vector for the predetermined inter-predicted block (104) to obtain a respective motion information prediction (150), and
encode a respective motion vector prediction residual (152) into the data stream (14),
so that the motion information prediction (150) and the motion information prediction residual (152) are mapped onto the motion information for the predetermined inter-predicted block (104) by way of a non-invertible mapping which maps all possible combinations for the motion information prediction and the motion information prediction residual (152) exclusively onto a respective possible motion information so that, if the patch (130) was located using the respective possible motion information, the patch (130) would not exceed boundaries (102) of the tile (100a) by which the predetermined inter-predicted block (104) is comprised, or by no more than a predetermined sample width (n) associated with the bi-directional optical flow tool, and which maps more than one possible combination for the motion information prediction and the motion information prediction residual (152), equaling in the motion information prediction (150) and differing in the motion information prediction residual (152), onto one possible motion information.
137. Block-based video encoder of any of claims 134 to 136, configured to, for each of the first and second motion vectors,
obtain a respective preliminary predictor (402₁, 402₂) for the predetermined inter-predicted block using the respective motion vector
by means of a convolution of a filter kernel with the respective patch so that the respective patch is widened compared to the predetermined inter-predicted block by a width of the filter kernel and an n-sample wide extension, if the motion vector has a sub-pel part which is non-zero, and by setting each sample of the predetermined inter-predicted block to one corresponding sample (164) of the patch so that the patch is widened compared to the predetermined inter-predicted block by the n-sample wide extension, if the motion vector has a sub-pel part which is zero, and
wherein the bi-directional optical flow tool is configured to bi-predictively predict the predetermined inter-predicted block by
locally determine a luminance gradient over the preliminary predictors and combine (436) same to derive a predictor (438) for the inter-predicted block in a manner locally varying according to the luminance gradient.
138. Block-based video encoder configured to
establish a motion information prediction candidate list for predetermined inter-predicted blocks of a current picture by
populating the motion information prediction candidate list with one or more motion information prediction candidates (192),
selecting a predetermined motion information prediction candidate out of a reservoir (502) of further motion information prediction candidates (504) depending on a dissimilarity of each of the further motion information prediction candidates from the one or more motion information prediction candidates,
populating the motion information prediction candidate list with the predetermined motion information prediction candidate,
wherein the selection depends on the dissimilarity such that a mutual motion information prediction candidate dissimilarity within the motion information prediction candidate list is increased compared to performing the selection using an equal selection probability among the further motion information prediction candidates,
derive for a predetermined inter-predicted block a pointer pointing into the motion information prediction candidate list, and
perform a selection out of the motion information prediction candidate list using the rank to obtain a motion information prediction for the predetermined inter-predicted block.
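The dissimilarity-driven selection of claim 138 favors reservoir candidates that differ most from those already listed. A hypothetical sketch using an L1 motion-vector distance (the actual dissimilarity measure is not fixed by the claim):

```python
def pick_most_dissimilar(listed, reservoir):
    """Return the reservoir candidate whose minimum L1 distance to the
    already-listed candidates is largest, increasing the mutual
    dissimilarity within the final candidate list."""
    def min_dist(cand):
        return min(abs(cand[0] - e[0]) + abs(cand[1] - e[1]) for e in listed)
    return max(reservoir, key=min_dist)
```

Compared with picking reservoir entries with equal probability, this biases the list toward diverse predictors, which is exactly the property the claim states.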
139. Block-based video encoder of claim 138, wherein the reservoir (502) of further motion information prediction candidates (504) is a history-based temporal motion information prediction candidate list and the block-based video encoder is configured to
manage the history-based temporal motion information prediction candidate list by inserting thereinto motion information most recently used for predicting previous inter-predicted blocks.
140. Method for block-based video decoding that supports motion-compensated prediction, comprising:
deriving motion information for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11), which locates a patch (130) in a reference picture (12b), from which the predetermined inter-predicted block (104) is to be predicted, from a data stream (14) into which the video (11) is coded, depending on a position of boundaries (102) between tiles (100), into which the video (11) is spatially partitioned, and
predicting the predetermined inter-predicted block (104) using the motion information from the patch (130) of the reference picture (12b).
141. Method for block-based video decoding that supports motion-compensated prediction, comprising:
decoding motion information (306a, 306b) for a predetermined inter-predicted block of a current picture of a video from a data stream into which the video is coded,
deriving from the motion information a motion vector (304) for each sub-block of sub-blocks (300) into which the predetermined inter-predicted block is partitioned, the motion vector indicating a translational displacement between the respective sub-block and a patch (302) in a reference picture (12b), from which the respective sub-block is to be predicted,
predicting the predetermined inter-predicted block by predicting each sub-block using the motion vector for the respective sub-block,
wherein the derivation and/or the prediction is performed depending on a position of boundaries between tiles, into which the video is spatially partitioned.
142. Method for block-based video decoding, comprising:
establishing, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block by
identifying an aligned block in a reference picture, spatially aligned to the predetermined inter-predicted block,
identifying spatially neighboring blocks in the current picture, spatially neighboring the predetermined inter-predicted block,
populating the motion information prediction candidate list by one or more spatial motion information prediction candidates derived from first reference motion information using which the spatially neighboring blocks in the current picture have been predicted and a temporal motion information prediction candidate derived from second reference motion information using which the aligned block in the reference picture has been predicted with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list at ranks preceding the temporal motion information prediction candidate,
deriving for the predetermined inter-predicted block a pointer pointing, in rank order, into the motion information prediction candidate list,
performing a selection out of the motion information prediction candidate list using the rank to obtain a motion information prediction for the predetermined inter-predicted block.
143. Method for block-based video decoding, comprising:
establishing, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block of a current picture by
identifying spatially neighboring blocks in the current picture, spatially neighboring the predetermined inter-predicted block,
populating the motion information prediction candidate list by one or more spatial motion information prediction candidates derived from first reference motion information using which the spatially neighboring blocks in the current picture have been predicted and a history-based motion information prediction candidate derived from a history-based temporal motion information prediction candidate list with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list at ranks preceding the history-based motion information prediction candidate,
deriving for the predetermined inter-predicted block a pointer pointing, in rank order, into the motion information prediction candidate list,
performing a selection out of the motion information prediction candidate list using the rank to obtain a motion information prediction for the predetermined inter-predicted block.
144. Method for block-based video decoding that supports motion-compensated bi-directional prediction and comprises a bi-directional optical flow tool for improving the motion-compensated bi-directional prediction, the method comprising:
deactivating the bi-directional optical flow tool depending on whether at least one of first and second patches (130₁, 130₂) of a predetermined inter-predicted block of a current picture (12a) to be subject to motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block according to first and second motion vectors signaled in the data stream for the predetermined inter-predicted block, crosses boundaries of a tile of the current picture by which the predetermined inter-predicted block is comprised, or
using boundary padding so as to fill a portion of first and second patches of a predetermined inter-predicted block of a current picture to be subject to the motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block according to first and second motion vectors signaled in the data stream for the predetermined inter-predicted block, which portion lies beyond boundaries of a tile of the current picture, by which the predetermined inter-predicted block is comprised.
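The two alternatives of this claim — switching the optical-flow refinement off when a patch crosses the tile boundary, or filling the out-of-tile portion by boundary padding — can be sketched as follows. The tile representation (x0, y0, x1, y1) with exclusive right/bottom edges and the helper names are illustrative assumptions:

```python
def patch_in_tile(patch_x, patch_y, patch_w, patch_h, tile):
    """True iff the displaced patch lies fully inside the tile."""
    x0, y0, x1, y1 = tile
    return (patch_x >= x0 and patch_y >= y0 and
            patch_x + patch_w <= x1 and patch_y + patch_h <= y1)

def bdof_enabled(patch1, patch2, tile):
    """First alternative: deactivate the bi-directional optical flow tool
    as soon as either of the two patches crosses the tile boundary."""
    return patch_in_tile(*patch1, tile) and patch_in_tile(*patch2, tile)

def padded_sample(ref, x, y, tile):
    """Second alternative: clamp sample coordinates to the tile, so the
    portion of a patch beyond the boundary is filled by boundary padding
    (replication of the nearest in-tile sample)."""
    x0, y0, x1, y1 = tile
    return ref[min(max(y, y0), y1 - 1)][min(max(x, x0), x1 - 1)]
```

With either alternative the prediction never reads samples of a neighboring tile, which is what keeps the tiles independently decodable.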
145. Method for block-based video decoding, comprising:
establishing a motion information prediction candidate list for predetermined inter-predicted blocks of a current picture by
populating the motion information prediction candidate list with one or more motion information prediction candidates (192),
selecting a predetermined motion information prediction candidate out of a reservoir (502) of further motion information prediction candidates (504) depending on a dissimilarity of each of the further motion information prediction candidates from the one or more motion information prediction candidates,
populating the motion information prediction candidate list with the predetermined motion information prediction candidate,
wherein the selection depends on the dissimilarity such that a mutual motion information prediction candidate dissimilarity within the motion information prediction candidate list is increased compared to performing the selection using an equal selection probability among the further motion information prediction candidates,
deriving for a predetermined inter-predicted block a pointer pointing into the motion information prediction candidate list, and
performing a selection out of the motion information prediction candidate list using the rank to obtain a motion information prediction for the predetermined inter-predicted block.
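The reservoir selection of this claim — preferring the further candidate that is most dissimilar to the candidates already in the list, so the finished list is more diverse than a uniform random pick would make it — can be sketched as a max-min choice. The L1 distance used as the dissimilarity metric is an illustrative assumption:

```python
def mv_dissimilarity(a, b):
    # illustrative metric: L1 distance between (x, y) motion vectors
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def select_from_reservoir(list_so_far, reservoir):
    """Pick the reservoir candidate whose minimum dissimilarity to the
    candidates already in the list is largest, increasing the mutual
    dissimilarity within the finished candidate list."""
    return max(reservoir,
               key=lambda c: min(mv_dissimilarity(c, e) for e in list_so_far))
```

For instance, with (0, 0) and (1, 0) already listed, the reservoir entry (8, 8) is chosen over near-duplicates such as (1, 1), because a list of near-duplicates wastes signaling ranks on candidates that predict almost identically.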
146. Method for block-based video encoding for encoding a video (11) into a data stream (14) and supporting motion-compensated prediction, comprising:
determining motion information for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11), which locates a patch (130) in a reference picture (12b), from which the predetermined inter-predicted block (104) is to be predicted, in a manner so that the patch (130) is within, and does not cross, boundaries (102) of a tile (100a) by which the predetermined inter-predicted block (104) is comprised,
predicting the predetermined inter-predicted block (104) using the motion information from the patch (130) of the reference picture (12b),
encoding the motion information into the data stream (14), so that a signalization thereof into the data stream (14) is to be performed depending on a position of boundaries (102) between tiles (100), into which the video (11) is spatially partitioned.
147. Method for block-based video encoding that supports motion-compensated prediction, comprising:
encoding motion information (306a, 306b) for a predetermined inter-predicted block (104) of a current picture (12a) of a video (11) into a data stream (14) into which the video (11) is coded,
signaling into the motion information a motion vector (304) for each sub-block of sub-blocks (300) into which the predetermined inter-predicted block (104) is partitioned, the motion vector indicating a translational displacement between the respective sub-block and a patch (302) in a reference picture (12b), from which the respective sub-block is to be predicted,
predicting the predetermined inter-predicted block (104) by predicting each sub-block using the motion vector for the respective sub-block,
wherein the signalization and/or the prediction is performed depending on a position of boundaries (102) between tiles, into which the video (11) is spatially partitioned.
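The sub-block prediction of this claim — one translational motion vector per sub-block of the partitioned block, each locating its own patch in the reference picture — can be sketched as below. The sub-block grid keyed by in-block offsets and the single-sample "prediction" are illustrative simplifications:

```python
def predict_subblocks(bx, by, bw, bh, sub, mvs, ref):
    """Predict each sub x sub sub-block of the block at (bx, by) from its
    own motion vector; mvs maps the sub-block offset (sx, sy) inside the
    block to its (mvx, mvy). For brevity only the top-left sample of each
    sub-block's patch is fetched from the reference picture."""
    pred = {}
    for sy in range(0, bh, sub):
        for sx in range(0, bw, sub):
            mvx, mvy = mvs[(sx, sy)]
            pred[(sx, sy)] = ref[by + sy + mvy][bx + sx + mvx]
    return pred
```

Making this signalization and prediction depend on tile-boundary positions then means, per sub-block, either constraining or padding the per-sub-block patches just as for whole-block motion compensation.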
148. Method for block-based video encoding, comprising:
establishing, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block (104) by
identifying an aligned block in a reference picture (12b), spatially aligned to the predetermined inter-predicted block (104),
identifying spatially neighboring blocks in the current picture (12a), spatially neighboring the predetermined inter-predicted block (104),
populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates signaled into first reference motion information using which the spatially neighboring blocks in the current picture (12a) have been predicted and a temporal motion information prediction candidate signaled into second reference motion information using which the aligned block in the reference picture (12b) has been predicted with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list (190) at ranks preceding the temporal motion information prediction candidate,
signaling for the predetermined inter-predicted block (104) a pointer (193) pointing, in rank order, into the motion information prediction candidate list,
performing a selection out of the motion information prediction candidate list (190) using the rank to obtain a motion information prediction for the predetermined inter-predicted block (104).
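The ranking rule of this claim — spatial candidates ahead of the temporal candidate taken from the spatially aligned (collocated) block in the reference picture — can be sketched as follows; function names, the list size, and tuple-valued motion vectors are illustrative assumptions:

```python
def build_list_with_tmvp(spatial_mvs, collocated_mv, max_size=5):
    """Rank the spatial candidates first; append the temporal candidate
    derived from the aligned block of the reference picture last, skipping
    unavailable (None) and duplicate entries."""
    lst = []
    for mv in spatial_mvs:
        if mv is not None and mv not in lst and len(lst) < max_size:
            lst.append(mv)
    if (collocated_mv is not None and collocated_mv not in lst
            and len(lst) < max_size):
        lst.append(collocated_mv)
    return lst
```

Placing the temporal candidate at the trailing rank keeps the cheaper (low) pointer values for the spatial candidates, which tend to be the better predictors.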
149. Method for block-based video encoding, comprising:
establishing, by prediction, a motion information prediction candidate list for a predetermined inter-predicted block (104) of a current picture (12a) by
identifying spatially neighboring blocks in the current picture (12a), spatially neighboring the predetermined inter-predicted block (104),
populating the motion information prediction candidate list (190) by one or more spatial motion information prediction candidates signaled into first reference motion information using which the spatially neighboring blocks in the current picture (12a) have been predicted and a history-based motion information prediction candidate signaled into a history-based temporal motion information prediction candidate list with positioning the one or more spatial motion information prediction candidates in the motion information prediction candidate list (190) at ranks preceding the history-based motion information prediction candidate,
signaling for the predetermined inter-predicted block (104) a pointer (193) pointing, in rank order, into the motion information prediction candidate list (190),
performing a selection out of the motion information prediction candidate list (190) using the rank to obtain a motion information prediction for the predetermined inter-predicted block (104).
150. Method for block-based video encoding that supports motion-compensated bi-directional prediction and comprising a bi-directional optical flow tool for improving the motion-compensated bi-directional prediction, the method comprising:
deactivating the bi-directional optical flow tool depending on whether at least one of first and second patches (1301, 1302) of a predetermined inter-predicted block (104) of a current picture (12a) to be subject to motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block (104) according to first and second motion vectors signaled in the data stream (14) for the predetermined inter-predicted block (104), crosses boundaries (102) between tiles, into which the video (11) is spatially partitioned, or
using boundary padding so as to fill a portion of first and second patches of a predetermined inter-predicted block (104) of a current picture to be subject to the motion-compensated bi-directional prediction, which are displaced relative to the predetermined inter-predicted block (104) according to first and second motion vectors signaled in the data stream (14) for the predetermined inter-predicted block (104), which portion lies beyond boundaries (102) of a tile of the current picture, by which the predetermined inter-predicted block is comprised.
151. Method for block-based video encoding, comprising:
establishing a motion information prediction candidate list for predetermined inter-predicted blocks of a current picture by
populating the motion information prediction candidate list with one or more motion information prediction candidates (192),
selecting a predetermined motion information prediction candidate out of a reservoir (502) of further motion information prediction candidates (504) depending on a dissimilarity of each of the further motion information prediction candidates from the one or more motion information prediction candidates,
populating the motion information prediction candidate list with the predetermined motion information prediction candidate,
wherein the selection depends on the dissimilarity such that a mutual motion information prediction candidate dissimilarity within the motion information prediction candidate list is increased compared to performing the selection using an equal selection probability among the further motion information prediction candidates,
deriving for a predetermined inter-predicted block a pointer pointing into the motion information prediction candidate list, and
performing a selection out of the motion information prediction candidate list using the rank to obtain a motion information prediction for the predetermined inter-predicted block.
152. Data stream encoded by a method according to any one of claims 146 to 151.
153. Computer program having a program code for executing a method according to any one of claims 140 to 151, when the program runs on one or several computers.

Documents

Application Documents

# Name Date
1 202127023141-STATEMENT OF UNDERTAKING (FORM 3) [24-05-2021(online)].pdf 2021-05-24
2 202127023141-REQUEST FOR EXAMINATION (FORM-18) [24-05-2021(online)].pdf 2021-05-24
3 202127023141-FORM 18 [24-05-2021(online)].pdf 2021-05-24
4 202127023141-FORM 1 [24-05-2021(online)].pdf 2021-05-24
5 202127023141-FIGURE OF ABSTRACT [24-05-2021(online)].jpg 2021-05-24
6 202127023141-DRAWINGS [24-05-2021(online)].pdf 2021-05-24
7 202127023141-DECLARATION OF INVENTORSHIP (FORM 5) [24-05-2021(online)].pdf 2021-05-24
8 202127023141-COMPLETE SPECIFICATION [24-05-2021(online)].pdf 2021-05-24
9 202127023141-Proof of Right [01-07-2021(online)].pdf 2021-07-01
10 202127023141-FORM-26 [05-07-2021(online)].pdf 2021-07-05
11 202127023141-FORM 3 [14-10-2021(online)].pdf 2021-10-14
12 202127023141.pdf 2021-10-19
13 202127023141-ORIGINAL UR 6(1A) FORM 26-230821.pdf 2021-10-25
14 202127023141-ORIGINAL UR 6(1A) FORM 1-230821.pdf 2021-10-29
15 Abstract1.jpg 2022-01-03
16 202127023141-POA [24-02-2022(online)].pdf 2022-02-24
17 202127023141-FORM 13 [24-02-2022(online)].pdf 2022-02-24
18 202127023141-AMENDED DOCUMENTS [24-02-2022(online)].pdf 2022-02-24
19 202127023141-FER.pdf 2022-03-07
20 202127023141-FORM 3 [16-03-2022(online)].pdf 2022-03-16
21 202127023141-Certified Copy of Priority Document [06-05-2022(online)].pdf 2022-05-06
22 202127023141-FORM 4(ii) [06-09-2022(online)].pdf 2022-09-06
23 202127023141-Information under section 8(2) [07-09-2022(online)].pdf 2022-09-07
24 202127023141-FORM 3 [07-09-2022(online)].pdf 2022-09-07
25 202127023141-OTHERS [06-12-2022(online)].pdf 2022-12-06
26 202127023141-FER_SER_REPLY [06-12-2022(online)].pdf 2022-12-06
27 202127023141-COMPLETE SPECIFICATION [06-12-2022(online)].pdf 2022-12-06
28 202127023141-CLAIMS [06-12-2022(online)].pdf 2022-12-06
29 202127023141-PatentCertificate11-07-2024.pdf 2024-07-11
30 202127023141-IntimationOfGrant11-07-2024.pdf 2024-07-11

Search Strategy

1 SearchPattern202127023141E_03-03-2022.pdf

ERegister / Renewals

3rd: 15 Jul 2024 (from 25/11/2021 to 25/11/2022)
4th: 15 Jul 2024 (from 25/11/2022 to 25/11/2023)
5th: 15 Jul 2024 (from 25/11/2023 to 25/11/2024)
6th: 15 Jul 2024 (from 25/11/2024 to 25/11/2025)
7th: 12 Nov 2025 (from 25/11/2025 to 25/11/2026)