Abstract: Apparatus for predicting a predetermined block (18) of a picture using a plurality of reference samples (17a, c). The apparatus is configured to form (100) a sample value vector (102, 400) out of the plurality of reference samples, derive from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, compute a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predict samples of the predetermined block on the basis of the prediction vector.
18, it is possible to use mode < n0, n1, n2, where n0, n1, n2 are the numbers of matrixes in the sets of matrixes S0, S1, S2, respectively). Further, the sets may have different numbers of matrixes each (for example, it may be that S0 has 16 matrixes, S1 has eight matrixes, and S2 has six matrixes). The mode and transposed information are not necessarily stored and/or transmitted as one combined mode index 'mode': in some examples there is the possibility of signalling explicitly a transposed flag and the matrix index (0-15 for S0, 0-7 for S1 and 0-5 for S2). In some cases, the combination of the transposed flag and the matrix index may be interpreted as a set index. For example, there may be one bit operating as transposed flag, and some bits indicating the matrix index, collectively indicated as "set index".

6.3 Generation of the reduced prediction signal by matrix vector multiplication

Here, features are provided regarding step 812. Out of the reduced input vector bdryred (boundary vector 17P), one may generate a reduced prediction signal predred. The latter signal may be a signal on the downsampled block of width Wred and height Hred. Here, Wred and Hred may be defined as:

Wred = 4, Hred = 4, if max(W, H) ≤ 8;
Wred = min(W, 8), Hred = min(H, 8), else.

The reduced prediction signal predred may be computed by calculating a matrix-vector product and adding an offset:

predred = A · bdryred + b.

Here, A is a matrix (e.g., prediction matrix 17M) that may have Wred * Hred rows and 4 columns if W = H = 4 and 8 columns in all other cases, and b is a vector that may be of size Wred * Hred. If W = H = 4, then A may have 4 columns and 16 rows and thus 4 multiplications per sample may be needed in that case to compute predred. In all other cases, A may have 8 columns and one may verify that in these cases one has 8 * Wred * Hred ≤ 4 * W * H, i.e. also in these cases, at most 4 multiplications per sample are needed to compute predred.
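The reduced prediction step just described can be sketched as follows. This is a minimal Python sketch under the stated definitions; the helper names `reduced_size` and `reduced_prediction` are illustrative, not taken from any specification.

```python
# Hypothetical sketch of the reduced prediction step (helper names are
# illustrative; matrices A and offsets b would come from the trained sets).

def reduced_size(W, H):
    """Width and height of the downsampled prediction block."""
    if max(W, H) <= 8:
        return 4, 4
    return min(W, 8), min(H, 8)

def reduced_prediction(A, bdry_red, b):
    """pred_red = A * bdry_red + b, computed with plain Python lists."""
    assert all(len(row) == len(bdry_red) for row in A)
    return [sum(a * x for a, x in zip(row, bdry_red)) + off
            for row, off in zip(A, b)]
```

For a 16x16 block this yields an 8x8 reduced block, consistent with the multiplication counts given above.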
The matrix A and the vector b may be taken from one of the sets S0, S1, S2 as follows. One defines an index idx = idx(W, H) by setting idx(W, H) = 0, if W = H = 4, idx(W, H) = 1, if max(W, H) = 8, and idx(W, H) = 2 in all other cases. Moreover, one may put m = mode, if mode < 18, and m = mode - 17, else. Then, if idx ≤ 1 or idx = 2 and min(W, H) > 4, one may put A = A_idx^m and b = b_idx^m. In the case that idx = 2 and min(W, H) = 4, one lets A be the matrix that arises by leaving out every row of A_idx^m that, in the case W = 4, corresponds to an odd x-coordinate in the downsampled block, or, in the case H = 4, corresponds to an odd y-coordinate in the downsampled block. If mode ≥ 18, one replaces the reduced prediction signal by its transposed signal. In alternative examples, different strategies may be carried out. For example, instead of reducing the size of a larger matrix ("leave out"), a smaller matrix of S1 (idx = 1) with Wred = 4 and Hred = 4 is used, i.e., such blocks are now assigned to S1 instead of S2. Other strategies may be carried out. In other examples, the mode index 'mode' is not necessarily in the range 0 to 35 (other ranges may be defined). Further, it is not necessary that each of the three sets S0, S1, S2 has 18 matrixes (hence, instead of expressions like mode < 18, it is possible to use mode < n0, n1, n2, where n0, n1, n2 are the numbers of matrixes in the sets of matrixes S0, S1, S2, respectively). Further, the sets may have different numbers of matrixes each (for example, it may be that S0 has 16 matrixes, S1 has eight matrixes, and S2 has six matrixes).

6.4 Linear interpolation to generate the final prediction signal

Here, features are provided regarding step 812. For the interpolation of the subsampled prediction signal, on large blocks a second version of the averaged boundary may be needed. Namely, if min(W, H) > 8 and W ≥ H, one writes W = 8 * 2^l, and for 0 ≤ i < 8 defines the i-th value of a second averaged top boundary by averaging the 2^l consecutive original top boundary samples covered by position i. If min(W, H) > 8 and H > W, one defines the second averaged left boundary analogously.
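The set-index and mode-index derivation of section 6.3 above can be sketched compactly. A minimal sketch, assuming the combined mode index convention described there; function names are illustrative.

```python
# Sketch of the set-index and matrix-index derivation (illustrative names).

def set_index(W, H):
    """idx(W, H): 0 for 4x4 blocks, 1 if max(W, H) = 8, 2 otherwise."""
    if W == 4 and H == 4:
        return 0
    if max(W, H) == 8:
        return 1
    return 2

def matrix_index(mode):
    """Split the combined mode index into (m, transposed-flag)."""
    if mode < 18:
        return mode, False   # m = mode, no transposition
    return mode - 17, True   # m = mode - 17, transposed output
```

As noted in the text, the same information may instead be signalled as an explicit transposed flag plus a matrix index.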
In addition or alternatively, it is possible to have a "hard downsampling", in which the second version of the averaged boundary is obtained by subsampling rather than averaging; the boundary for the other direction can be defined analogously. At the sample positions that were left out in the generation of predred, the final prediction signal may arise by linear interpolation from predred (e.g., step 813 in the examples of Figs. 7.2-7.4). This linear interpolation may be unnecessary in some examples, e.g. if W = H = 4 (e.g., example of Fig. 7.1). The linear interpolation may be given as follows (other examples are notwithstanding possible). It is assumed that W ≥ H. Then, if H > Hred, a vertical upsampling of predred may be performed. In that case, predred may be extended by one line to the top as follows. If W = 8, predred may have width Wred = 4 and may be extended to the top by the averaged boundary signal, e.g. as defined above. If W > 8, predred is of width Wred = 8 and it is extended to the top by the second averaged boundary signal, e.g. as defined above. One may write predred[x][-1] for the first line of predred. Then the signal on a block of width Wred and height 2 * Hred may be given as

pred[x][2y + 1] = predred[x][y],
pred[x][2y] = (predred[x][y - 1] + predred[x][y] + 1) / 2,

where 0 ≤ x < Wred and 0 ≤ y < Hred. The latter process may be carried out k times until 2^k * Hred = H. Thus, if H = 8 or H = 16, it may be carried out at most once. If H = 32, it may be carried out twice. If H = 64, it may be carried out three times. Next, a horizontal upsampling operation may be applied to the result of the vertical upsampling. The latter upsampling operation may use the full boundary left of the prediction signal. Finally, if H > W, one may proceed analogously by first upsampling in the horizontal direction (if required) and then in the vertical direction. This is an example of an interpolation using reduced boundary samples for the first interpolation (horizontally or vertically) and original boundary samples for the second interpolation (vertically or horizontally). Depending on the block size, only the second or no interpolation is required.
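One vertical upsampling pass of the kind described above can be sketched as follows, assuming the reduced prediction sits at odd vertical positions and the line above the block supplies the value at y = -1. A hedged sketch with illustrative names, using rounded integer averaging.

```python
# Hedged sketch of one vertical upsampling pass: the reduced prediction
# (rows of samples) is doubled in height; `top_line` plays the role of the
# boundary row extended above the block (predred[x][-1]).

def upsample_vertical(pred_red, top_line):
    """pred_red: H_red rows of W_red samples each.
    Returns 2*H_red rows: odd rows copy pred_red, even rows are the
    rounded means of the two vertically adjacent reduced rows."""
    w = len(top_line)
    out = []
    prev = top_line
    for row in pred_red:
        assert len(row) == w
        out.append([(p + c + 1) // 2 for p, c in zip(prev, row)])  # interpolated
        out.append(list(row))                                      # original row
        prev = row
    return out
```

Repeating the pass k times doubles the height k times, matching the "carried out k times until 2^k * Hred = H" statement.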
If both horizontal and vertical interpolation is required, the order depends on the width and height of the block. However, different techniques may be implemented: for example, original boundary samples may be used for both the first and the second interpolation, and the order may be fixed, e.g. first horizontal then vertical (in other cases, first vertical then horizontal). Hence, the interpolation order (horizontal/vertical) and the use of reduced/original boundary samples may be varied.

6.5 Illustration of an example of the entire ALWIP process

The entire process of averaging, matrix-vector multiplication and linear interpolation is illustrated for different shapes in Figs. 7.1-7.4. Note that the remaining shapes are treated as in one of the depicted cases.

1. Given a 4 x 4 block, ALWIP may take two averages along each axis of the boundary by using the technique of Fig. 7.1. The resulting four input samples enter the matrix-vector multiplication. The matrices are taken from the set S0. After adding an offset, this may yield the 16 final prediction samples. Linear interpolation is not necessary for generating the prediction signal. Thus, a total of (4 * 16)/(4 * 4) = 4 multiplications per sample are performed. See, for example, Fig. 7.1.

2. Given an 8 x 8 block, ALWIP may take four averages along each axis of the boundary by using the technique of Fig. 7.2. The resulting eight input samples enter the matrix-vector multiplication. The matrices are taken from the set S1. This yields 16 samples on the odd positions of the prediction block. Thus, a total of (8 * 16)/(8 * 8) = 2 multiplications per sample are performed. After adding an offset, these samples may be interpolated, e.g., vertically by using the top boundary and, e.g., horizontally by using the left boundary. See, for example, Fig. 7.2.

3.
Given an 8 x 4 block, ALWIP may take four averages along the horizontal axis of the boundary and the four original boundary values on the left boundary by using the technique of Fig. 7.3. The resulting eight input samples enter the matrix-vector multiplication. The matrices are taken from the set S1. This yields 16 samples on the odd horizontal and each vertical position of the prediction block. Thus, a total of (8 * 16)/(8 * 4) = 4 multiplications per sample are performed. After adding an offset, these samples are interpolated horizontally by using the left boundary, for example. See, for example, Fig. 7.3. The transposed case is treated accordingly.

4. Given a 16 x 16 block, ALWIP may take four averages along each axis of the boundary. The resulting eight input samples enter the matrix-vector multiplication by using the technique of Fig. 7.2. The matrices are taken from the set S2. This yields 64 samples on the odd positions of the prediction block. Thus, a total of (8 * 64)/(16 * 16) = 2 multiplications per sample are performed. After adding an offset, these samples are interpolated vertically by using the top boundary and horizontally by using the left boundary, for example. See, for example, Fig. 7.4.

For larger shapes, the procedure may be essentially the same, and it is easy to check that the number of multiplications per sample is less than two. For W x 8 blocks, only horizontal interpolation is necessary, as the samples are given at the odd horizontal and each vertical position. Thus, at most (8 * 64)/(16 * 8) = 4 multiplications per sample are performed in these cases. Finally, for W x 4 blocks with W > 8, let A_k be the matrix that arises by leaving out every row that corresponds to an odd entry along the horizontal axis of the downsampled block. Thus, the output size may be 32 and, again, only horizontal interpolation remains to be performed. At most (8 * 32)/(16 * 4) = 4 multiplications per sample may be performed.
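The multiplication counts quoted in the enumerated cases above can be checked mechanically. A small sketch, assuming the reduced-size rules of section 6.3 and the stated column counts of A (4 columns only for the 4x4 block, 8 otherwise); the helper name is illustrative.

```python
# Quick check of the multiplications-per-sample figures quoted above.

def mults_per_sample(W, H):
    """Matrix-multiply cost per output sample, under the stated shapes of A."""
    if max(W, H) <= 8:
        W_red, H_red = 4, 4
    else:
        W_red, H_red = min(W, 8), min(H, 8)
    n_cols = 4 if (W == 4 and H == 4) else 8
    return n_cols * W_red * H_red / (W * H)
```

For every block shape considered, the count stays at or below four multiplications per sample, as claimed.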
The transposed cases may be treated accordingly.

6.6 Number of parameters needed and complexity assessment

The parameters needed for all possible proposed intra prediction modes may be comprised by the matrices and offset vectors belonging to the sets S0, S1, S2. All matrix coefficients and offset vectors may be stored as 10-bit values. Thus, according to the above description, a total number of 14400 parameters, each in 10-bit precision, may be needed for the proposed method. This corresponds to 0.018 megabytes of memory. It is pointed out that currently, a CTU of size 128 x 128 in the standard 4:2:0 chroma-subsampling consists of 24576 values, each in 10 bits. Thus, the memory requirement of the proposed intra-prediction tool does not exceed the memory requirement of the current picture referencing tool that was adopted at the last meeting. Also, it is pointed out that the conventional intra prediction modes require four multiplications per sample due to the PDPC tool or the 4-tap interpolation filters for the angular prediction modes with fractional angle positions. Thus, in terms of operational complexity, the proposed method does not exceed the conventional intra prediction modes.

6.7 Signalization of the proposed intra prediction modes

For luma blocks, 35 ALWIP modes are proposed, for example (other numbers of modes may be used). For each Coding Unit (CU) in intra mode, a flag indicating whether an ALWIP mode is to be applied on the corresponding Prediction Unit (PU) or not is sent in the bitstream. The signalization of the latter index may be harmonized with MRL in the same way as for the first CE test. If an ALWIP mode is to be applied, the index predmode of the ALWIP mode may be signaled using an MPM-list with 3 MPMs. Here, the derivation of the MPMs may be performed using the intra-modes of the above and the left PU as follows. There may be tables, e.g.
three fixed tables map_angular_to_alwip_idx, idx ∈ {0, 1, 2}, that may assign to each conventional intra prediction mode predmodeAngular an ALWIP mode

predmodeALWIP = map_angular_to_alwip_idx[predmodeAngular].

For each PU of width W and height H, one defines an index idx(PU) = idx(W, H) ∈ {0, 1, 2} that indicates from which of the three sets the ALWIP-parameters are to be taken, as in section 4 above. If the above Prediction Unit PUabove is available, belongs to the same CTU as the current PU and is in intra mode, if idx(PU) = idx(PUabove) and if ALWIP is applied on PUabove with a certain ALWIP-mode, one puts the ALWIP-mode of PUabove as the above candidate mode. If the above PU is available, belongs to the same CTU as the current PU and is in intra mode and if a conventional intra prediction mode is applied on the above PU, one puts the mapped mode map_angular_to_alwip_idx applied to that conventional mode as the above candidate mode. In all other cases, one puts -1 as the above candidate mode, which means that this mode is unavailable. In the same way, but without the restriction that the left PU needs to belong to the same CTU as the current PU, one derives a left candidate mode. Finally, three fixed default lists list_idx, idx ∈ {0, 1, 2}, are provided, each of which contains three distinct ALWIP modes. Out of the default list list_idx(PU) and the above and left candidate modes, one constructs three distinct MPMs by substituting -1 by default values as well as eliminating repetitions. The herein described embodiments are not limited by the above described signalization of the proposed intra prediction modes. According to an alternative embodiment, no MPMs and/or mapping tables are used for MIP (ALWIP).

6.8 Adapted MPM-list derivation for conventional luma and chroma intra-prediction modes

The proposed ALWIP-modes may be harmonized with the MPM-based coding of the conventional intra-prediction modes as follows.
The luma and chroma MPM-list derivation processes for the conventional intra-prediction modes may use fixed tables map_lwip_to_angular_idx, idx ∈ {0, 1, 2}, mapping an ALWIP-mode predmodeLWIP on a given PU to one of the conventional intra-prediction modes

predmodeAngular = map_lwip_to_angular_idx(PU)[predmodeLWIP].

For the luma MPM-list derivation, whenever a neighboring luma block is encountered which uses an ALWIP-mode predmodeLWIP, this block may be treated as if it was using the conventional intra-prediction mode predmodeAngular. For the chroma MPM-list derivation, whenever the current luma block uses an LWIP-mode, the same mapping may be used to translate the ALWIP-mode to a conventional intra prediction mode. It is clear that the ALWIP-modes can be harmonized with the conventional intra-prediction modes also without the usage of MPMs and/or mapping tables. It is, for example, possible that for the chroma block, whenever the current luma block uses an ALWIP-mode, the ALWIP-mode is mapped to a planar intra prediction mode.

7 Implementation efficient embodiments

Let us briefly summarize the above examples, as they might form a basis for further extending the embodiments described herein below. For predicting a predetermined block 18 of the picture 10, a plurality of neighboring samples 17a, c is used. A reduction 100, by averaging, of the plurality of neighboring samples has been done to obtain a reduced set 102 of sample values lower, in number of samples, than the plurality of neighboring samples. This reduction is optional in the embodiments herein and yields the so-called sample value vector mentioned in the following. The reduced set of sample values is then subject to a linear or affine linear transformation 19 to obtain predicted values for predetermined samples 104 of the predetermined block.
It is this transformation, later on indicated using matrix A and offset vector b, which has been obtained by machine learning (ML) and should be performed in an implementation-efficient manner. By interpolation, prediction values for further samples 108 of the predetermined block are derived on the basis of the predicted values for the predetermined samples and the plurality of neighboring samples. It should be said that, theoretically, the outcome of the affine/linear transformation could be associated with non-full-pel sample positions of block 18, so that all samples of block 18 might be obtained by interpolation in accordance with an alternative embodiment. No interpolation might be necessary at all, too. The plurality of neighboring samples might extend one-dimensionally along two sides of the predetermined block, the predetermined samples being arranged in rows and columns, wherein, along at least one of the rows and columns, the predetermined samples may be positioned at every nth position from a sample (112) of the predetermined samples adjoining the two sides of the predetermined block. Based on the plurality of neighboring samples, for each of the at least one of the rows and the columns, a support value for one (118) of the plurality of neighboring positions might be determined, which is aligned to the respective one of the at least one of the rows and the columns, and by interpolation, the prediction values for the further samples 108 of the predetermined block might be derived on the basis of the predicted values for the predetermined samples and the support values for the neighboring samples aligned to the at least one of the rows and columns.
The predetermined samples may be positioned at every nth position from the sample 112 of the predetermined samples which adjoins the two sides of the predetermined block along the rows, and the predetermined samples may be positioned at every mth position from the sample 112 of the predetermined samples which adjoins the two sides of the predetermined block along the columns, wherein n, m > 1. It might be that n = m. Along at least one of the rows and columns, the determination of the support values may be done by averaging (122), for each support value, a group 120 of neighboring samples within the plurality of neighboring samples which includes the neighboring sample 118 for which the respective support value is determined. The plurality of neighboring samples may extend one-dimensionally along two sides of the predetermined block, and the reduction may be done by grouping the plurality of neighboring samples into groups 110 of one or more consecutive neighboring samples and performing an averaging on each of the groups of one or more neighboring samples which has more than two neighboring samples. For the predetermined block, a prediction residual might be transmitted in the data stream. It might be derived therefrom at the decoder, and the predetermined block be reconstructed using the prediction residual and the predicted values for the predetermined samples. At the encoder, the prediction residual is encoded into the data stream.
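The grouping-and-averaging reduction described above can be sketched as follows, assuming equally sized groups of consecutive boundary samples; the function name and integer-mean rounding are illustrative assumptions.

```python
# Illustrative sketch of reducing boundary samples by averaging groups of
# consecutive samples (equal group sizes assumed for simplicity).

def reduce_boundary(samples, n_out):
    """Average len(samples)/n_out consecutive samples into each output value."""
    assert len(samples) % n_out == 0
    g = len(samples) // n_out
    return [sum(samples[i * g:(i + 1) * g]) // g for i in range(n_out)]
```

For example, eight boundary samples reduce to four averages, as in the 8 x 8 case of section 6.5.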
The picture might be subdivided into a plurality of blocks of different block sizes, which plurality comprises the predetermined block. Then, it might be that the linear or affine linear transformation for block 18 is selected depending on a width W and height H of the predetermined block such that the linear or affine linear transformation selected for the predetermined block is selected out of a first set of linear or affine linear transformations as long as the width W and height H of the predetermined block are within a first set of width/height pairs, and out of a second set of linear or affine linear transformations as long as the width W and height H of the predetermined block are within a second set of width/height pairs which is disjoint to the first set of width/height pairs. Again, later on it becomes clear that the affine/linear transformations are represented by way of other parameters, namely weights of C and, optionally, offset and scale parameters. Decoder and encoder may be configured to subdivide the picture into a plurality of blocks of different block sizes, which comprises the predetermined block, and to select the linear or affine linear transformation depending on a width W and height H of the predetermined block such that the linear or affine linear transformation selected for the predetermined block is selected out of a first set of linear or affine linear transformations as long as the width W and height H of the predetermined block are within a first set of width/height pairs, a second set of linear or affine linear transformations as long as the width W and height H of the predetermined block are within a second set of width/height pairs which is disjoint to the first set of width/height pairs, and a third set of linear or affine linear transformations as long as the width W and height H of the predetermined block are within a third set of one or more width/height pairs, which is disjoint to the first and second sets of width/height pairs.
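The selection among disjoint transformation sets by block dimensions can be sketched as a simple lookup. The concrete width/height pair groupings below are hypothetical examples only; the text deliberately leaves them open.

```python
# Illustrative sketch of selecting among three disjoint transformation sets
# by block width/height. The pair groupings are hypothetical examples.

FIRST_SET = {(4, 4)}
SECOND_SET = {(8, 8), (8, 4), (4, 8)}
THIRD_SET = {(16, 16), (16, 8), (8, 16)}

def select_set(W, H):
    if (W, H) in FIRST_SET:
        return "S0"
    if (W, H) in SECOND_SET:
        return "S1"
    if (W, H) in THIRD_SET:
        return "S2"
    raise ValueError("no set defined for this block size in this sketch")
```

The disjointness of the three pair sets guarantees that the selection is unambiguous for every supported block size.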
The third set of one or more width/height pairs may merely comprise one width/height pair, W', H', and each linear or affine linear transformation within the third set of linear or affine linear transformations may be for transforming N' sample values to W'*H' predicted values for a W'xH' array of sample positions. Each of the first and second sets of width/height pairs may comprise a first width/height pair Wp, Hp with Wp being unequal to Hp and a second width/height pair Wq, Hq with Hq = Wp and Wq = Hp. Each of the first and second sets of width/height pairs may additionally comprise a third width/height pair Wp, Hp with Wp being equal to Hp and Hp > Hq. For the predetermined block, a set index might be transmitted in the data stream, which indicates which linear or affine linear transformation is to be selected for block 18 out of a predetermined set of linear or affine linear transformations. The plurality of neighboring samples may extend one-dimensionally along two sides of the predetermined block, and the reduction may be done by, for a first subset of the plurality of neighboring samples, which adjoin a first side of the predetermined block, grouping the first subset into first groups 110 of one or more consecutive neighboring samples and, for a second subset of the plurality of neighboring samples, which adjoin a second side of the predetermined block, grouping the second subset into second groups 110 of one or more consecutive neighboring samples, and performing an averaging on each of the first and second groups of one or more neighboring samples which has more than two neighboring samples, so as to obtain first sample values from the first groups and second sample values from the second groups.
Then, the linear or affine linear transformation may be selected depending on the set index out of a predetermined set of linear or affine linear transformations such that two different states of the set index result in a selection of one of the linear or affine linear transformations of the predetermined set of linear or affine linear transformations. In case of the set index assuming a first of the two different states, the reduced set of sample values may be subject to the predetermined linear or affine linear transformation in form of a first vector, so as to yield an output vector of predicted values, and the predicted values of the output vector are distributed along a first scan order onto the predetermined samples of the predetermined block. In case of the set index assuming a second of the two different states, the reduced set of sample values may be subject to the predetermined linear or affine linear transformation in form of a second vector, the first and second vectors differing so that components populated by one of the first sample values in the first vector are populated by one of the second sample values in the second vector, and components populated by one of the second sample values in the first vector are populated by one of the first sample values in the second vector, so as to yield an output vector of predicted values, and the predicted values of the output vector are distributed along a second scan order onto the predetermined samples of the predetermined block which is transposed relative to the first scan order.
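The transposed handling described above can be sketched in two parts: swapping the two boundary halves of the input vector, and writing the output in a transposed scan order. A minimal sketch, assuming both halves have equal length; the helper names are illustrative.

```python
# Sketch of the transposed-mode handling: swap the two halves of the input
# vector (left/above boundary parts), and place the output transposed.

def swap_halves(v):
    """Exchange the first and second halves of an even-length vector."""
    h = len(v) // 2
    return v[h:] + v[:h]

def place_output(pred, w, h, transposed):
    """Distribute a length w*h prediction vector onto a w x h block either
    row by row (first scan order) or in a transposed scan order."""
    if not transposed:
        return [pred[y * w:(y + 1) * w] for y in range(h)]
    cols = [pred[x * h:(x + 1) * h] for x in range(w)]
    return [[cols[x][y] for x in range(w)] for y in range(h)]
```

This way a single stored matrix serves both orientations, which is the motivation for signalling the transposed state in the set index.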
Each linear or affine linear transformation within the first set of linear or affine linear transformations may be for transforming N1 sample values to w1*h1 predicted values for a w1xh1 array of sample positions, and each linear or affine linear transformation within the second set of linear or affine linear transformations may be for transforming N2 sample values to w2*h2 predicted values for a w2xh2 array of sample positions, wherein, for a first predetermined one of the first set of width/height pairs, w1 may exceed the width of the first predetermined width/height pair or h1 may exceed the height of the first predetermined width/height pair, and, for a second predetermined one of the first set of width/height pairs, neither w1 may exceed the width of the second predetermined width/height pair nor h1 the height of the second predetermined width/height pair. The reducing (100), by averaging, the plurality of neighboring samples to obtain the reduced set (102) of sample values might then be done so that the reduced set 102 of sample values has N1 sample values both if the predetermined block is of the first predetermined width/height pair and if the predetermined block is of the second predetermined width/height pair. The subjecting of the reduced set of sample values to the selected linear or affine linear transformation might be performed by using only a first sub-portion of the selected linear or affine linear transformation, which is related to a subsampling of the w1xh1 array of sample positions along the width dimension if w1 exceeds the width of the one width/height pair, or along the height dimension if h1 exceeds the height of the one width/height pair, if the predetermined block is of the first predetermined width/height pair, and by using the selected linear or affine linear transformation completely if the predetermined block is of the second predetermined width/height pair.
Each linear or affine linear transformation within the first set of linear or affine linear transformations may be for transforming N1 sample values to w1*h1 predicted values for a w1xh1 array of sample positions with w1 = h1, and each linear or affine linear transformation within the second set of linear or affine linear transformations may be for transforming N2 sample values to w2*h2 predicted values for a w2xh2 array of sample positions with w2 = h2. All of the above described embodiments are merely illustrative in that they may form the basis for the embodiments described herein below. That is, the above concepts and details shall serve to understand the following embodiments and shall serve as a reservoir of possible extensions and amendments of the embodiments described herein below. In particular, many of the above described details are optional, such as the averaging of neighboring samples, the fact that the neighboring samples are used as reference samples, and so forth. More generally, the embodiments described herein assume that a prediction signal on a rectangular block is generated out of already reconstructed samples, such as an intra prediction signal on a rectangular block being generated out of neighboring, already reconstructed samples left and above the block. The generation of the prediction signal is based on the following steps.

1. Out of the reference samples, called boundary samples now, without, however, excluding the possibility of transferring the description to reference samples positioned elsewhere, samples may be extracted by averaging. Here, the averaging is carried out either for both the boundary samples left and above the block or only for the boundary samples on one of the two sides. If no averaging is carried out on a side, the samples on that side are kept unchanged.

2.
A matrix vector multiplication, optionally followed by addition of an offset, is carried out, where the input vector of the matrix vector multiplication is either the concatenation of the averaged boundary samples left of the block and the original boundary samples above the block, if averaging was applied only on the left side, or the concatenation of the original boundary samples left of the block and the averaged boundary samples above the block, if averaging was applied only on the above side, or the concatenation of the averaged boundary samples left of the block and the averaged boundary samples above the block, if averaging was applied on both sides of the block. Again, alternatives would exist, such as ones where averaging isn't used at all.

3. The result of the matrix vector multiplication and the optional offset addition may optionally be a reduced prediction signal on a subsampled set of samples in the original block. The prediction signal at the remaining positions may be generated from the prediction signal on the subsampled set by linear interpolation.

The computation of the matrix vector product in Step 2 should preferably be carried out in integer arithmetic. Thus, if x = (x1, ..., xn) denotes the input for the matrix vector product, i.e. x denotes the concatenation of the (averaged) boundary samples left and above the block, then out of x, the (reduced) prediction signal computed in Step 2 should be computed using only bit shifts, the addition of offset vectors, and multiplications with integers. Ideally, the prediction signal in Step 2 would be given as Ax + b, where b is an offset vector that might be zero and where A is derived by some machine-learning based training algorithm. However, such a training algorithm usually only results in a matrix A = Afloat that is given in floating point precision.
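The integer-arithmetic requirement for Step 2 can be sketched as a fixed-point matrix-vector product using only integer multiplications, additions, and a final bit shift. A minimal sketch: the shift amount (Q-format) and the rounding-offset convention are assumptions for illustration, not values taken from the text.

```python
# Sketch of a fixed-point matrix-vector product with only integer
# multiplications, additions and bit shifts. SHIFT (the number of
# fractional bits) is an assumed, illustrative value.

SHIFT = 6  # fixed-point fractional bits of the quantized matrix entries

def fixed_point_mv(C, y, b):
    """Compute ((C * y + rounding) >> SHIFT) + b, entirely in integers.
    C holds integer matrix entries scaled by 2**SHIFT; b is an integer offset."""
    out = []
    for row, off in zip(C, b):
        acc = sum(c * t for c, t in zip(row, y))       # integer multiplies/adds
        out.append(((acc + (1 << (SHIFT - 1))) >> SHIFT) + off)  # round, shift
    return out
```

With entries of C scaled by 2^SHIFT, a matrix whose real-valued entries are 1.0 reproduces the input exactly, which is a convenient sanity check for such a quantization.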
Thus, one is faced with the problem of specifying integer operations in the aforementioned sense such that the expression Afloat·x is well approximated using these integer operations. Here, it is important to mention that these integer operations are not necessarily chosen such that they approximate the expression Afloat·x assuming a uniform distribution of the vector x, but typically take into account that the input vectors x for which the expression Afloat·x is to be approximated are (averaged) boundary samples from natural video signals, where one can expect some correlations between the components xi of x. Fig. 8 shows an embodiment of an apparatus 1000 for predicting a predetermined block 18 of a picture 10 using a plurality of reference samples 17. The plurality of reference samples 17 can depend on a prediction mode used by the apparatus 1000 to predict the predetermined block 18. If the prediction mode is, for example, an intra prediction, reference samples 17₁ neighboring the predetermined block can be used. In other words, the plurality of reference samples 17 is, for example, arranged within the picture 10 alongside an outer edge of the predetermined block 18. If the prediction mode is, for example, an inter prediction, reference samples 17₂ out of another picture 10' can be used. The apparatus 1000 is configured to form 100 a sample value vector 400 out of the plurality of reference samples 17. The sample value vector can be obtained by different techniques. The sample value vector can, for example, comprise all reference samples 17. Optionally, the reference samples can be weighted. According to another example, the sample value vector 400 can be formed as described with regard to one of Figs. 7.1 to 7.4 for the sample value vector 102. In other words, the sample value vector 400 can be formed by averaging or downsampling. Thus, for example, groups of reference samples can be averaged to obtain the sample value vector 400 with a reduced set of values.
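The forming of the sample value vector just described (adopting single reference samples and/or averaging groups of them) can be sketched as follows; the grouping representation and the function name are illustrative assumptions.

```python
# Hedged sketch of forming the sample value vector: each component either
# adopts a single reference sample or averages a group of them.

def form_sample_vector(refs, groups):
    """refs: reference sample values; groups: list of index lists into refs.
    A singleton group adopts the sample; larger groups are averaged."""
    out = []
    for g in groups:
        vals = [refs[i] for i in g]
        out.append(sum(vals) // len(vals))
    return out
```

Adopting every sample as its own singleton group reproduces the "comprise all reference samples" variant; coarser groupings yield the reduced (averaged or downsampled) variants.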
In other words, the apparatus is, for example, configured to form 100 the sample value vector 102 out of the plurality of reference samples 17 by, for each component of the sample value vector 400, adopting one reference sample of the plurality of reference samples 17 as the respective component of the sample value vector, and/or averaging two or more reference samples of the plurality of reference samples 17 to obtain the respective component of the sample value vector 400. The apparatus 1000 is configured to derive 401 from the sample value vector 400 a further vector 402 onto which the sample value vector 400 is mapped by a predetermined invertible linear transform 403. The further vector 402 comprises, for example, only integer and/or fixed-point values. The invertible linear transform 403 is, for example, chosen such that a prediction of samples of the predetermined block 18 can be performed by integer arithmetic or fixed-point arithmetic. Furthermore, the apparatus 1000 is configured to compute a matrix-vector product 404 between the further vector 402 and a predetermined prediction matrix 405 so as to obtain a prediction vector 406, and to predict samples of the predetermined block 18 on the basis of the prediction vector 406. Based on the advantageous further vector 402, the predetermined prediction matrix can be quantized to enable integer and/or fixed-point operations with only marginal impact of a quantization error on the predicted samples of the predetermined block 18. According to an embodiment, the apparatus 1000 is configured to compute the matrix-vector product 404 using fixed-point arithmetic operations. Alternatively, integer operations can be used. According to an embodiment, the apparatus 1000 is configured to compute the matrix-vector product 404 without floating point arithmetic operations.
According to an embodiment, the apparatus 1000 is configured to store a fixed-point number representation of the predetermined prediction matrix 405. Additionally or alternatively, an integer representation of the predetermined prediction matrix 405 can be stored. According to an embodiment, the apparatus 1000 is configured to, in predicting the samples of the predetermined block 18 on the basis of the prediction vector 406, use interpolation to compute at least one sample position of the predetermined block 18 based on the prediction vector 406, each component of which is associated with a corresponding position within the predetermined block 18. The interpolation can be performed as described with regard to one of the embodiments shown in Figs. 7.1 to 7.4. Fig. 9 shows the idea of the herein described invention. Samples of a predetermined block can be predicted based on a first matrix-vector product between a matrix A 1100 derived by some machine-learning based training algorithm and a sample value vector 400. Optionally, an offset b 1110 can be added. To achieve an integer approximation or a fixed-point approximation of this first matrix-vector product, the sample value vector can undergo an invertible linear transformation 403 to determine a further vector 402. A second matrix-vector product between a further matrix B 1200 and the further vector 402 can equal the result of the first matrix-vector product. Because of the features of the further vector 402, the second matrix-vector product can be integer-approximated by a matrix-vector product 404 between a predetermined prediction matrix C 405 and the further vector 402 plus a further offset 408. The further vector 402 and the further offset 408 can consist of integer or fixed-point values. All components of the further offset are, for example, the same. The predetermined prediction matrix 405 can be a quantized matrix or a matrix to be quantized.
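The invertible linear transformation 403 mapping the sample value vector 400 to the further vector 402 can be illustrated with its mean-based variant (see Fig. 10a). The following is a minimal Python sketch under the assumption that the predetermined component is i0 = 0; function names are illustrative:

```python
# Mean-based variant of the invertible transform 403: the predetermined
# component (here i0 = 0) becomes mean(x), and mean(x) is subtracted
# from all other components.
def forward_transform(x, i0=0):
    m = sum(x) / len(x)
    return [m if i == i0 else x[i] - m for i in range(len(x))]

def inverse_transform(y, i0=0):
    # mean(x) is stored in y[i0]; x[i0] is recovered from the fact that
    # the components of x - mean(x) sum to zero.
    m = y[i0]
    x = [y[i] + m for i in range(len(y))]
    x[i0] = m - sum(y[i] for i in range(len(y)) if i != i0)
    return x

x = [4.0, 8.0, 12.0, 16.0]
y = forward_transform(x)          # [10.0, -2.0, 2.0, 6.0]
assert inverse_transform(y) == x  # the transform is invertible
```

The components of y other than y[i0] are differences to the mean and hence typically small, which is what makes the subsequent quantization of the matrix benign.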
The result of the matrix-vector product 404 between the predetermined prediction matrix 405 and the further vector 402 can be understood as a prediction vector 406. In the following, more details regarding this integer approximation are provided. Possible solution according to an embodiment I: Subtracting and adding mean values One possible incorporation of an integer approximation of an expression A_float x useable in a scenario above is to replace the i0-th component x_i0, i.e. a predetermined component 1500, of x, i.e. the sample value vector 400, by the mean value mean(x), i.e. a predetermined value 1400, of the components of x and to subtract this mean value from all other components. In other words, the invertible linear transform 403, as shown in Fig. 10a, is defined such that a predetermined component 1500 of the further vector 402 becomes a, and each of the other components of the further vector 402, except the predetermined component 1500, equals a corresponding component of the sample value vector minus a, wherein a is a predetermined value 1400 which is, for example, an average, such as an arithmetic mean or weighted average, of components of the sample value vector 400. This operation on the input is given by an invertible transform T 403 that has an obvious integer implementation, in particular if the dimension n of x is a power of two. Since A_float x = (A_float T^-1)(T x), if one does such a transformation on the input x, one has to find an integral approximation of the matrix-vector product B y, where B = A_float T^-1 and y = T x. Since the matrix-vector product A_float x represents a prediction on a rectangular block, i.e. a predetermined block, and since x is comprised of (e.g., averaged) boundary samples of that block, one should expect that in the case where all sample values of x are equal, i.e. where x_i = mean(x) for all i, each sample value in the prediction signal A_float x should be close to mean(x) or be exactly equal to mean(x).
This means that one should expect that the i0-th column, i.e. the column corresponding to the predetermined component, of B, i.e. of a further matrix 1200, is very close or equal to a column that consists only of ones. Thus, if M(i0), i.e. an integer matrix 1300, is the matrix whose i0-th column consists of ones and all of whose other columns are zero, writing B y = C y + M(i0) y with C = B - M(i0), one should expect that the i0-th column of C, i.e. the predetermined prediction matrix 405, has rather small entries or is zero, as shown in Fig. 10b. Moreover, since the components of x are correlated, one can expect that for each i ≠ i0, the i-th component y_i = x_i - mean(x) of y often has a much smaller absolute value than the i-th component of x. Since the matrix M(i0) is an integer matrix, an integer approximation of B y is achieved if an integer approximation of C y is given and, by the above arguments, one can expect that the quantization error that arises by quantizing each entry of C in a suitable way should only marginally impact the error in the resulting quantization of B y resp. of A_float x. The predetermined value 1400 is not necessarily the mean value mean(x). The herein described integer approximation of the expression A_float x can also be achieved with the following alternative definitions of the predetermined value 1400: In another possible incorporation of an integer approximation of an expression A_float x, the i0-th component x_i0 of x remains unaltered and the same value x_i0 is subtracted from all other components. That is, y_i0 = x_i0 and y_i = x_i - x_i0 for each i ≠ i0. In other words, the predetermined value 1400 can be a component of the sample value vector 400 corresponding to the predetermined component 1500. Alternatively, the predetermined value 1400 is a default value or a value signaled in a data stream into which a picture is coded. The predetermined value 1400 equals, for example, 2^(bitdepth-1).
In this case, the further vector 402 can be defined by y_0 = 2^(bitdepth-1) and y_i = x_i - x_0 for i > 0. Alternatively, the predetermined component 1500 becomes a constant minus the predetermined value 1400. The constant equals, for example, 2^(bitdepth-1). According to an embodiment, the predetermined component y_i0 1500 of the further vector y 402 equals 2^(bitdepth-1) minus a component x_i0 of the sample value vector 400 corresponding to the predetermined component 1500, and all other components of the further vector 402 equal the corresponding component of the sample value vector 400 minus the component of the sample value vector 400 corresponding to the predetermined component 1500. It is, for example, advantageous if the predetermined value 1400 has a small deviation from prediction values of samples of the predetermined block. According to an embodiment, the apparatus 1000 is configured to comprise a plurality of invertible linear transforms 403, each of which is associated with one component of the further vector 402. Furthermore, the apparatus is, for example, configured to select the predetermined component 1500 out of the components of the sample value vector 400 and use the invertible linear transform 403 out of the plurality of invertible linear transforms which is associated with the predetermined component 1500 as the predetermined invertible linear transform. This is, for example, due to different positions of the i0-th row, i.e. a row of the invertible linear transform 403 corresponding to the predetermined component, dependent on a position of the predetermined component in the further vector 402. If, for example, the first component, i.e. y_1, of the further vector 402 is the predetermined component, the i0-th row would replace the first row of the invertible linear transform. As shown in Fig. 10b, matrix components 414 of the predetermined prediction matrix C 405 within a column 412, i.e.
an i0-th column, of the predetermined prediction matrix 405 which corresponds to the predetermined component 1500 of the further vector 402 are, for example, all zero. In this case, the apparatus is, for example, configured to compute the matrix-vector product 404 by performing multiplications by computing a matrix-vector product 407 between a reduced prediction matrix C' 405 resulting from the predetermined prediction matrix C 405 by leaving away the column 412 and an even further vector 410 resulting from the further vector 402 by leaving away the predetermined component 1500, as shown in Fig. 10c. Thus, a prediction vector 406 can be calculated with fewer multiplications. As shown in Figs. 9, 10b and 10c, the apparatus 1000 can be configured to, in predicting the samples of the predetermined block on the basis of the prediction vector 406, compute for each component of the prediction vector 406 a sum of the respective component and a, i.e. the predetermined value 1400. This summation can be represented by a sum of the prediction vector 406 and a vector 409 with all components of the vector 409 being equal to the predetermined value 1400, as shown in Fig. 9 and Fig. 10c. Alternatively, the summation can be represented by a sum of the prediction vector 406 and a matrix-vector product 1310 between an integer matrix M 1300 and the further vector 402, as shown in Fig. 10b, wherein matrix components of the integer matrix 1300 are 1 within a column, i.e. an i0-th column, of the integer matrix 1300 which corresponds to the predetermined component 1500 of the further vector 402, and all other components are, for example, zero. A result of a summation of the predetermined prediction matrix 405 and the integer matrix 1300 equals or approximates, for example, the further matrix 1200 shown in Fig. 9. In other words, a matrix, i.e. the further matrix B 1200, which results from summing each matrix component of the predetermined prediction matrix C 405 within a column 412, i.e.
the i0-th column, of the predetermined prediction matrix 405, which corresponds to the predetermined component 1500 of the further vector 402, with one (i.e. matrix B), times the invertible linear transform 403, corresponds, for example, to a quantized version of a machine learning prediction matrix A 1100, as shown in Fig. 9, Fig. 10a and Fig. 10b. The summing of each matrix component of the predetermined prediction matrix C 405 within the i0-th column 412 with one can correspond to the summation of the predetermined prediction matrix 405 and the integer matrix 1300, as shown in Fig. 10b. As shown in Fig. 9, the machine learning prediction matrix A 1100 can equal the result of the further matrix 1200 times the invertible linear transform 403. This is due to A x = (B T) x = B y, since B = A T^-1 and y = T x. The predetermined prediction matrix 405 is, for example, a quantized matrix, an integer matrix and/or a fixed-point matrix, whereby the quantized version of the machine learning prediction matrix A 1100 can be realized. Matrix multiplication using integer operations only For a low complexity implementation (in terms of complexity of adding and multiplying scalar values, as well as in terms of storage required for the entries of the partaking matrix), it is desirable to perform the matrix multiplications 404 using integer arithmetic only. To calculate an approximation of z = C y, i.e. using operations on integers only, the real values must be mapped to integer values according to an embodiment. This can be done, for example, by uniform scalar quantization, or by taking into account specific correlations between the values y_i. The integer values represent, for example, fixed-point numbers that can each be stored with a fixed number of bits n_bits, for example n_bits=8. The matrix-vector product 404 with a matrix, i.e.
the predetermined prediction matrix 405, of size m x n can then be carried out as shown in this pseudo code, where <<, >> are arithmetic binary left- and right-shift operations and +, - and * operate on integer values only:

(1)
final_offset = 1 << (right_shift_result - 1);
for i in 0...m-1 {
  accumulator = 0
  for j in 0...n-1 {
    accumulator := accumulator + y[j]*C[i,j]
  }
  z[i] = (accumulator + final_offset) >> right_shift_result;
}

Here, the array C, i.e. the predetermined prediction matrix 405, stores the fixed-point numbers, for example, as integers. The final addition of final_offset and the right-shift operation with right_shift_result reduce precision by rounding to obtain a fixed-point format required at the output. To allow for an increased range of real values representable by the integers in C, two additional matrices offset_i,j and scale_i,j can be used, as shown in the embodiments of Fig. 11 and Fig. 12, such that each coefficient b_i,j of y_j in the matrix-vector product is given by b_i,j = (C[i,j] - offset_i,j) * scale_i,j. The values offset_i,j and scale_i,j are themselves integer values. For example, these integers can represent fixed-point numbers that can each be stored with a fixed number of bits, for example 8 bits, or for example the same number of bits n_bits that is used to store the values C[i,j]. In other words, the apparatus 1000 is configured to represent the predetermined prediction matrix 405 using prediction parameters, e.g. the integer values C[i,j] and the values offset_i,j and scale_i,j, and to compute the matrix-vector product 404 by performing multiplications and summations on the components of the further vector 402 and the prediction parameters and intermediate results resulting therefrom, wherein absolute values of the prediction parameters are representable by an n-bit fixed-point number representation with n being equal to or lower than 14, or, alternatively, 10, or, alternatively, 8.
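As a minimal illustration of pseudo code (1), the following Python sketch performs the integer-only product with a rounding offset and right shift; the weight values and shift are example values, not normative:

```python
# Integer-only matrix-vector product in the spirit of pseudo code (1):
# C holds fixed-point weights (here scaled by 2**right_shift_result),
# y holds integer input components.
def matvec_fixed_point(C, y, right_shift_result):
    final_offset = 1 << (right_shift_result - 1)  # rounding offset
    z = []
    for row in C:
        accumulator = 0
        for j in range(len(y)):
            accumulator += y[j] * row[j]
        z.append((accumulator + final_offset) >> right_shift_result)
    return z

# weights [[64, 32], [16, 48]] represent [[1.0, 0.5], [0.25, 0.75]] at shift 6;
# the exact real-valued product of that matrix with [2, 4] is [4.0, 3.5]
print(matvec_fixed_point([[64, 32], [16, 48]], [2, 4], 6))  # [4, 4]
```

Note how the second component 3.5 rounds up to 4 due to the added final_offset, i.e. the shift implements round-to-nearest rather than truncation.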
For instance, the components of the further vector 402 are multiplied with the prediction parameters to yield products as intermediate results which, in turn, are subject to, or form addends of, a summation. According to an embodiment, the prediction parameters comprise weights, each of which is associated with a corresponding matrix component of the prediction matrix. In other words, the predetermined prediction matrix is, for example, replaced or represented by the prediction parameters. The weights are, for example, integer and/or fixed-point values. According to an embodiment, the prediction parameters further comprise one or more scaling factors, e.g. the values scale_i,j, each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix 405, for scaling the weight, e.g. an integer value, associated with the one or more corresponding matrix components of the predetermined prediction matrix 405. Additionally or alternatively, the prediction parameters comprise one or more offsets, e.g. the values offset_i,j, each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix 405, for offsetting the weight, e.g. an integer value, associated with the one or more corresponding matrix components of the predetermined prediction matrix 405. In order to reduce the amount of storage necessary for offset_i,j and scale_i,j, their values can be chosen to be constant for particular sets of indices i,j. For example, their entries can be constant for each column, or they can be constant for each row, or they can be constant for all i,j, as shown in Fig. 11. For example, in one preferred embodiment, offset_i,j and scale_i,j are constant for all values of a matrix of one prediction mode, as shown in Fig. 12. Thus, when there are K prediction modes with k = 0..K-1, only a single value o_k and a single value s_k is required to calculate the prediction for mode k.
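The way a single per-mode offset o_k and scale s_k can enter an integer-only product can be sketched as follows. This is an illustrative Python sketch, assuming that each effective coefficient is b[i][j] = (C[i][j] - offset) * scale, so that the offset contribution factors out of the inner loop:

```python
# Integer-only product where the stored weights C are related to the
# effective coefficients via b[i][j] = (C[i][j] - offset) * scale.
# The term -offset*scale*sum(y) is hoisted out of the per-row loop.
def matvec_single_offset_scale(C, y, offset, scale, right_shift_result):
    final_offset = -sum(y) * offset * scale
    final_offset += 1 << (right_shift_result - 1)  # rounding offset
    z = []
    for row in C:
        accumulator = sum(yj * cij for yj, cij in zip(y, row))
        z.append((accumulator * scale + final_offset) >> right_shift_result)
    return z

# effective coefficients: (40-32)*3 = 24 and (50-32)*3 = 54, so the exact
# product with y = [2, 4] is 264, i.e. 8.25 at shift 5, rounding to 8
print(matvec_single_offset_scale([[40, 50]], [2, 4], 32, 3, 5))  # [8]
```

Storing only one offset and one scale per mode keeps the per-weight storage at n_bits while still covering an extended range of real coefficient values.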
According to an embodiment, offset_i,j and/or scale_i,j are constant, i.e. identical, for all matrix-based intra prediction modes. Additionally or alternatively, it is possible that offset_i,j and/or scale_i,j are constant, i.e. identical, for all block sizes. With offset representing o_k and scale representing s_k, the calculation in (1) can be modified to be:

(2)
final_offset = 0;
for i in 0...n-1 {
  final_offset := final_offset - y[i];
}
final_offset = final_offset * offset * scale;
final_offset += 1 << (right_shift_result - 1);
for i in 0...m-1 {
  accumulator = 0
  for j in 0...n-1 {
    accumulator := accumulator + y[j]*C[i,j]
  }
  z[i] = (accumulator*scale + final_offset) >> right_shift_result;
}

Broadened embodiments arising from that solution The above solution implies the following embodiments: 1. A prediction method as in Section I, where in Step 2 of Section I, the following is done for an integer approximation of the involved matrix vector product: Out of the (averaged) boundary samples x = (x_1, ..., x_n), for a fixed i_0 with 1 ≤ i_0 ≤ n, the vector y = (y_1, ..., y_n) is computed, where y_i = x_i - mean(x) for i ≠ i_0 and where y_i0 = mean(x) and where mean(x) denotes the mean value of x. The vector y then serves as an input for (an integer realization of) a matrix vector product C y such that the (downsampled) prediction signal pred from Step 2 of Section I is given as pred = C y + mean_pred(x). In this equation, mean_pred(x) denotes the signal that is equal to mean(x) for each sample position in the domain of the (downsampled) prediction signal (see, e.g., Fig. 10b). 2. A prediction method as in Section I, where in Step 2 of Section I, the following is done for an integer approximation of the involved matrix vector product: Out of the (averaged) boundary samples x = (x_1, ..., x_n), for a fixed i_0 with 1 ≤ i_0 ≤ n, the vector y = (y_1, ..., y_{n-1}) is computed, where y_i = x_i - mean(x) for i < i_0 and where y_i = x_{i+1} - mean(x) for i ≥ i_0 and where mean(x) denotes the mean value of x.
The vector y then serves as an input for (an integer realization of) a matrix vector product C y such that the (downsampled) prediction signal pred from Step 2 of Section I is given as pred = C y + mean_pred(x). In this equation, mean_pred(x) denotes the signal that is equal to mean(x) for each sample position in the domain of the (downsampled) prediction signal (see, e.g., Fig. 10c). 3. A prediction method as in Section I, where the integer realization of the matrix vector product C y is given by using coefficients b_i,j in the matrix-vector product z_i = Σ_j b_i,j * y_j (see, e.g., Fig. 11). 4. A prediction method as in Section I, where Step 2 uses one of K matrices, such that multiple prediction modes can be calculated, each using a different matrix C_k with k = 0...K-1, where the integer realization of the matrix vector product C_k y is given by using coefficients b_i,j in the matrix-vector product z_i = Σ_j b_i,j * y_j (see, e.g., Fig. 12). That is, in accordance with embodiments of the present application, encoder and decoder act as follows in order to predict a predetermined block 18 of a picture 10, see Fig. 9. For predicting, a plurality of reference samples is used. As outlined above, embodiments of the present application would not be restricted to intra-coding and, accordingly, reference samples would not be restricted to be neighboring samples, i.e. samples of picture 10 neighboring block 18. In particular, the reference samples would not be restricted to the ones arranged alongside an outer edge of block 18 such as samples abutting the block's outer edge. However, this circumstance is certainly one embodiment of the present application. In order to perform the prediction, a sample value vector 400 is formed out of the reference samples such as reference samples 17a and 17c. A possible formation has been described above.
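The reduced vector of broadened embodiment 2 above, which drops the i_0-th component entirely, can be sketched as follows (Python, 0-based indexing, illustrative function name):

```python
# Embodiment 2: build y of length n-1 by dropping component i0 of x and
# subtracting mean(x) from the remaining components (0-based i0 here,
# whereas the text uses 1-based indices).
def build_reduced_input(x, i0):
    m = sum(x) / len(x)
    return [x[i] - m for i in range(len(x)) if i != i0]

print(build_reduced_input([4.0, 8.0, 12.0, 16.0], 0))  # [-2.0, 2.0, 6.0]
```

The dropped component would multiply the zero column of C anyway (compare Fig. 10c), so omitting it saves one multiplication per output sample.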
The formation may involve an averaging, thereby reducing the number of samples 102 or the number of components of vector 400 compared to the reference samples 17 contributing to the formation. The formation may also, as described above, somehow depend on the dimension or size of block 18, such as its width and height. It is this vector 400 which is to be subjected to an affine or linear transform in order to obtain the prediction of block 18. Different nomenclatures have been used above. Using the most recent one, it is the aim to perform the prediction by applying vector 400 to matrix A by way of a matrix-vector product, together with a summation with an offset vector b. The offset vector b is optional. The affine or linear transformation, determined by A or A and b, might be determined by encoder and decoder or, to be more precise, for the sake of prediction, on the basis of the size and dimension of block 18, as already described above. However, in order to achieve the above-outlined computational efficiency improvement or render the prediction more effective in terms of implementation, the affine or linear transform has been quantized, and encoder and decoder, or the predictor thereof, use the above-mentioned C and T in order to represent and perform the linear or affine transformation, with C and T, applied in the manner described above, representing a quantized version of the affine transformation. In particular, instead of applying vector 400 directly to a matrix A, the predictor in encoder and decoder applies vector 402, resulting from the sample value vector 400 by way of subjecting same to a mapping via a predetermined invertible linear transform T. It might be that transform T as used here is the same as long as vector 400 has the same size, i.e. does not depend on the block's dimensions, i.e. width and height, or is at least the same for different affine/linear transformations. In the above, vector 402 has been denoted y.
The exact matrix in order to perform the affine/linear transform as determined by machine learning would have been B. However, instead of exactly applying B, the prediction in encoder and decoder is done by way of an approximation or quantized version thereof. In particular, the representation is done via appropriately representing C in the manner outlined above, with C + M representing the quantized version of B. Accordingly, the prediction in encoder and decoder proceeds by computing the matrix-vector product 404 between vector 402 and the predetermined prediction matrix C, appropriately represented and stored at encoder and decoder in the manner described above. The vector 406 which results from this matrix-vector product is then used for predicting the samples 104 of block 18. As described above, for the sake of prediction, each component of vector 406 might be subject to a summation with parameter a, as indicated at 408, in order to compensate for the corresponding definition of C. The optional summation of vector 406 with offset vector b may also be involved in the derivation of the prediction of block 18 on the basis of vector 406. It might be that, as described above, each component of vector 406, and accordingly, each component of the summation of vector 406, the vector of all a's indicated at 408 and the optional vector b, might directly correspond to samples 104 of block 18 and, thus, indicate the predicted values of the samples. It may also be that only a sub-set of the block's samples 104 is predicted in that manner and that the remaining samples of block 18, such as 108, are derived by interpolation. As described above, there are different embodiments for setting a. For instance, it might be the arithmetic mean of the components of vector 400. For that case, see Fig. 10. The invertible linear transform T may be as indicated in Fig. 10.
i0 is the predetermined component of the sample value vector and vector 402, respectively, which is replaced by a. However, as also indicated above, there are other possibilities. However, as far as the representation of C is concerned, it has also been indicated above that same may be embodied differently. For instance, the matrix-vector product 404 may, in its actual computation, end up in the actual computation of a smaller matrix-vector product with lower dimensionality. In particular, as indicated above, it might be that, owing to the definition of C, its whole i0-th column 412 gets 0, so that the actual computation of product 404 may be done on a reduced version of vector 402 which results from vector 402 by the omission of component y_i0, namely by multiplying this reduced vector 410 with the reduced matrix C' resulting from C by leaving out the i0-th column 412. The weights of C or the weights of C', i.e. the components of this matrix, may be represented and stored in fixed-point number representation. These weights 414 may, however, also, as described above, be stored in a manner related to different scales and/or offsets. Scale and offset might be defined for the whole matrix C, i.e. be equal for all weights 414 of matrix C or matrix C', or may be defined in a manner so as to be constant or equal for all weights 414 of the same row or all weights 414 of the same column of matrix C and matrix C', respectively. Fig. 11 illustrates, in this regard, that the computation of the matrix-vector product, i.e. the result of the product, may in fact be performed slightly differently, namely, for instance, by shifting the multiplication with the scale(s) towards the vector 402 or 404, thereby further reducing the number of multiplications having to be performed. Fig. 12 illustrates the case of using one scale and one offset for all weights 414 of C or C', such as done in the above calculation (2).
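That dropping the zero i0-th column changes nothing in the result can be checked numerically. A minimal sketch with illustrative values (C, y and i0 are example data, not normative):

```python
# Verify that C y equals C' y' when column i0 of C is all zero, where C'
# and y' result from leaving away column i0 and component i0 (Fig. 10c).
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

C = [[0, 5, -2],
     [0, 3, 1]]          # i0-th (first) column is all zero
i0 = 0
C_red = [row[:i0] + row[i0 + 1:] for row in C]  # leave away column i0
y = [7, 2, 4]
y_red = y[:i0] + y[i0 + 1:]                     # leave away component i0
assert matvec(C, y) == matvec(C_red, y_red)     # same result, fewer multiplications
print(matvec(C_red, y_red))  # [2, 10]
```

For a block with Wred * Hred outputs this saves one multiplication per output sample, which is exactly the reduction discussed above.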
According to an embodiment, the herein described apparatus for predicting a predetermined block of a picture can be configured to use a matrix-based intra sample prediction comprising the following features: The apparatus is configured to form a sample value vector pTemp[x] 400 out of the plurality of reference samples 17. Assuming pTemp[x] to be of size 2 * boundarySize, pTemp[x] might be populated, e.g. by direct copying or by sub-sampling or pooling, by the neighboring samples located at the top of the predetermined block, redT[ x ] with x = 0..boundarySize - 1, followed by the neighboring samples located to the left of the predetermined block, redL[ x ] with x = 0..boundarySize - 1 (e.g. in case of isTransposed=0), or vice versa in case of the transposed processing (e.g. in case of isTransposed=1). The input values p[ x ] with x = 0..inSize - 1 are derived, i.e. the apparatus is configured to derive from the sample value vector pTemp[x] a further vector p[x] onto which the sample value vector pTemp[x] is mapped by a predetermined invertible linear transform, or to be more specific, a predetermined invertible affine linear transform, as follows: - If mipSizeId is equal to 2, the following applies: p[ x ] = pTemp[ x + 1 ] - pTemp[ 0 ] - Otherwise (mipSizeId is less than 2), the following applies: p[ 0 ] = ( 1 << ( BitDepth - 1 ) ) - pTemp[ 0 ] p[ x ] = pTemp[ x ] - pTemp[ 0 ] for x = 1..inSize - 1 Here, the variable mipSizeId is indicative of the size of the predetermined block. That is, according to the present embodiment, the invertible transform using which the further vector is derived from the sample value vector depends on the size of the predetermined block. The dependency might be given according to a predefined mapping, where predSize is indicative of the number of predicted samples within the predetermined block, and 2*boundarySize is indicative of the size of the sample value vector and is related to inSize, i.e.
the further vector's size, according to inSize = ( 2 * boundarySize ) - ( ( mipSizeId = = 2 ) ? 1 : 0 ). To be more precise, inSize indicates the number of those components of the further vector which actually participate in the computation. inSize is as large as the size of the sample value vector for smaller block sizes, and one component smaller for larger block sizes. In the latter case, one component may be disregarded, namely the one which would correspond to the predetermined component of the further vector, as in the matrix vector product to be computed later on, the contribution of the corresponding vector component would yield zero anyway and, thus, need not actually be computed. The dependency on the block size might be left off in case of alternative embodiments, where merely one of the two alternatives is used inevitably, i.e. irrespective of the block size (the option corresponding to mipSizeId less than 2, or the option corresponding to mipSizeId equal to 2). In other words, the predetermined invertible linear transform is, for example, defined such that a predetermined component of the further vector p becomes a, while all others correspond to a component of the sample value vector minus a, wherein e.g. a = pTemp[0]. In case of the first option, corresponding to mipSizeId equal to 2, this is readily visible and only the differentially formed components of the further vector are further taken into account. That is, in the case of the first option, the further vector is actually {p[0...inSize]; pTemp[0]}, wherein pTemp[0] is a, and the actually computed part of the matrix vector multiplication to yield the matrix vector product, i.e. the result of the multiplication, is restricted to only inSize components of the further vector and the corresponding columns of the matrix, as the matrix has a zero column which needs no computation.
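The derivation of the input values p[ x ] described above can be sketched in Python as follows; pTemp, mipSizeId and BitDepth are example values, and the function name is illustrative:

```python
# Sketch of the input derivation above: pTemp has 2*boundarySize entries.
def derive_input_vector(pTemp, mipSizeId, BitDepth):
    if mipSizeId == 2:
        # inSize = 2*boundarySize - 1: the predetermined component is dropped
        return [pTemp[x + 1] - pTemp[0] for x in range(len(pTemp) - 1)]
    # otherwise inSize = 2*boundarySize; p[0] is a constant minus pTemp[0]
    p = [(1 << (BitDepth - 1)) - pTemp[0]]
    p += [pTemp[x] - pTemp[0] for x in range(1, len(pTemp))]
    return p

print(derive_input_vector([100, 104, 96, 120], 0, 10))  # [412, 4, -4, 20]
print(derive_input_vector([100, 104, 96, 120], 2, 10))  # [4, -4, 20]
```

In the first case p[0] = 512 - 100 = 412 carries the constant-minus-a component; in the second case the vector is one component shorter, matching the inSize relation above.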
In the other case, corresponding to mipSizeId smaller than 2, a = pTemp[0] is chosen, as all components of the further vector except p[0], i.e. each of the other components p[x] (for x = 1..inSize - 1) of the further vector p, except the predetermined component p[0], equal a corresponding component of the sample value vector pTemp[x] minus a, but p[0] is chosen to be a constant minus a. The matrix vector product is then computed. The constant is the mean of representable values, i.e. 2^(x-1) (i.e. 1 << ( BitDepth - 1 )), with x denoting the bit depth of the computational representation used. It should be noted that, if p[0] were selected to be pTemp[0] instead, then the product computed would simply deviate from the one computed using p[0] as indicated above (p[ 0 ] = ( 1 << ( BitDepth - 1 ) ) - pTemp[ 0 ]) by a constant vector which could be taken into account when predicting the block inner based on the product, i.e. the prediction vector. The value a is, thus, a predetermined value, e.g., pTemp[0]. The predetermined value pTemp[0] is, in this case, for example, a component of the sample value vector pTemp corresponding to the predetermined component p[0]. It might be the neighboring sample to the top of the predetermined block, or to the left of the predetermined block, nearest to the upper left corner of the predetermined block. For the intra sample prediction process according to predModeIntra, e.g., specifying the intra prediction mode, the apparatus is, for example, configured to apply the following steps, e.g. perform at least the first step: 1. The matrix-based intra prediction samples predMip[ x ][ y ], with x = 0..predSize - 1, y = 0..predSize - 1 are derived as follows: - The variable modeId is set equal to predModeIntra. The weight matrix mWeight[ x ][ y ] with x = 0..inSize - 1, y = 0..predSize * predSize - 1 is derived by invoking the MIP weight matrix derivation process with mipSizeId and modeId as inputs.
The matrix-based intra prediction samples predMip[ x ][ y ], with x = 0..predSize - 1, y = 0..predSize - 1 are derived as follows: In other words, the apparatus is configured to compute a matrix-vector product between the further vector p[i] or, in case of mipSizeId equal to 2, {p[i]; pTemp[0]}, and a predetermined prediction matrix mWeight or, in case of mipSizeId equal to 2, the prediction matrix mWeight having an additional zero weight line corresponding to the omitted component of p, so as to obtain a prediction vector, which, here, has already been assigned to an array of block positions {x,y} distributed in the inner of the predetermined block so as to result in the array predMip[ x ][ y ]. The prediction vector would correspond to a concatenation of the rows of predMip[ x ][ y ] or the columns of predMip[ x ][ y ], respectively. According to an embodiment, or according to a different interpretation, only the matrix-vector product itself is understood as the prediction vector, and the apparatus is configured to, in predicting the samples of the predetermined block on the basis of the prediction vector, compute for each component of the prediction vector a sum of the respective component and a, e.g. pTemp[0]. The apparatus can optionally be configured to additionally perform the following steps in predicting the samples of the predetermined block on the basis of the prediction vector, e.g. predMip: 2. The matrix-based intra prediction samples predMip[ x ][ y ], with x = 0..predSize - 1, y = 0..predSize - 1 are, for example, clipped as follows: predMip[ x ][ y ] = Clip1( predMip[ x ][ y ] ) 3. When isTransposed is equal to TRUE, the predSize x predSize array predMip[ x ][ y ] with x = 0..predSize - 1, y = 0..predSize - 1 is, for example, transposed as follows: predTemp[ y ][ x ] = predMip[ x ][ y ] predMip = predTemp 4.
The predicted samples predSamples[ x ][ y ], with x = 0..nTbW - 1, y = 0..nTbH - 1, are, for example, derived as follows: - If nTbW, specifying the transform block width, is greater than predSize, or nTbH, specifying the transform block height, is greater than predSize, the MIP prediction upsampling process is invoked with the input block size predSize, the matrix-based intra prediction samples predMip[ x ][ y ] with x = 0..predSize - 1, y = 0..predSize - 1, the transform block width nTbW, the transform block height nTbH, the top reference samples refT[ x ] with x = 0..nTbW - 1, and the left reference samples refL[ y ] with y = 0..nTbH - 1 as inputs, and the output is the predicted sample array predSamples. - Otherwise, predSamples[ x ][ y ], with x = 0..nTbW - 1, y = 0..nTbH - 1, is set equal to predMip[ x ][ y ]. In other words, the apparatus is configured to predict samples predSamples of the predetermined block on the basis of the prediction vector predMip. Fig. 13 shows a method 2000 for predicting a predetermined block of a picture using a plurality of reference samples, comprising forming 2100 a sample value vector out of the plurality of reference samples, deriving 2200 from the sample value vector a further vector onto which the sample value vector is mapped by a predetermined invertible linear transform, computing 2300 a matrix-vector product between the further vector and a predetermined prediction matrix so as to obtain a prediction vector, and predicting 2400 samples of the predetermined block on the basis of the prediction vector.
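The four steps of method 2000 can be sketched end to end as follows. This is a simplified model under stated assumptions: the sample value vector is taken to be already reduced to the matrix input size, A and b stand for the predetermined prediction matrix and offset vector, and the MIP upsampling stage is omitted:

```python
import numpy as np

def method_2000(boundary, A, b, W_red, H_red, bit_depth=10):
    # 2100: form the sample value vector out of the reference samples
    # (here assumed already reduced/averaged to the matrix input size).
    pTemp = np.asarray(boundary, dtype=np.int64)

    # 2200: derive the further vector via the predetermined invertible
    # linear transform (the a = pTemp[0] variant described above).
    p = pTemp - pTemp[0]
    p[0] = (1 << (bit_depth - 1)) - pTemp[0]

    # 2300: matrix-vector product yielding the prediction vector.
    pred = np.asarray(A) @ p + np.asarray(b)

    # 2400: predict the (downsampled) block samples from that vector.
    pred = np.clip(pred, 0, (1 << bit_depth) - 1)
    return pred.reshape(H_red, W_red)
```

With a zero matrix A the result equals the clipped offset vector b arranged over the W_red x H_red block, which makes the data flow of the four steps easy to trace.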
Further embodiments and examples Generally, examples may be implemented as a computer program product with program instructions, the program instructions being operative for performing one of the methods when the computer program product runs on a computer. The program instructions may for example be stored on a machine readable medium. Other examples comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier. In other words, an example of the methods is, therefore, a computer program having program instructions for performing one of the methods described herein, when the computer program runs on a computer. A further example of the methods is, therefore, a data carrier medium (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier medium, the digital storage medium or the recorded medium are tangible and/or non-transitory, rather than signals which are intangible and transitory. A further example of the methods is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be transferred via a data communication connection, for example via the Internet. A further example comprises a processing means, for example a computer or a programmable logic device, performing one of the methods described herein. A further example comprises a computer having installed thereon the computer program for performing one of the methods described herein. A further example comprises an apparatus or a system transferring (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like.
The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. In some examples, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some examples, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any appropriate hardware apparatus. The above described examples are merely illustrative for the principles discussed above. It is understood that modifications and variations of the arrangements and the details described herein will be apparent. It is the intent, therefore, to be limited by the scope of the appended claims and not by the specific details presented by way of description and explanation of the examples herein. Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals even if occurring in different figures. Claims 1. Apparatus (1000) for predicting a predetermined block (18) of a picture (10) using a plurality of reference samples (17a, c), configured to form (100) a sample value vector (102, 400) out of the plurality of reference samples (17a, c), derive from the sample value vector (102, 400) a further vector (402) onto which the sample value vector (102, 400) is mapped by a predetermined invertible linear transform (403), compute a matrix-vector product (404) between the further vector (402) and a predetermined prediction matrix (405) so as to obtain a prediction vector (406), and predict samples of the predetermined block (18) on the basis of the prediction vector (406). 2.
Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and each of the other components of the further vector (402), except the predetermined component (1500), equals a corresponding component of the sample value vector (102, 400) minus a, wherein a is a predetermined value (1400). 3. Apparatus (1000) of claim 2, wherein the predetermined value (1400) is one of an average, such as an arithmetic mean or weighted average, of components of the sample value vector (102, 400), a default value, a value signalled in a data stream into which the picture (10) is coded, and a component of the sample value vector (102, 400) corresponding to the predetermined component (1500). 4. Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and each of the other components of the further vector (402), except the predetermined component (1500), equals a corresponding component of the sample value vector (102, 400) minus a, wherein a is an arithmetic mean of components of the sample value vector (102, 400). 5.
Apparatus (1000) of claim 1, wherein the predetermined invertible linear transform (403) is defined such that a predetermined component (1500) of the further vector (402) becomes a or a constant minus a, and each of the other components of the further vector (402), except the predetermined component (1500), equals a corresponding component of the sample value vector (102, 400) minus a, wherein a is a component of the sample value vector (102, 400) corresponding to the predetermined component (1500), wherein the apparatus (1000) is configured to comprise a plurality of invertible linear transforms, each of which is associated with one component of the further vector (402), select the predetermined component (1500) out of the components of the sample value vector (102, 400), and use the invertible linear transform out of the plurality of invertible linear transforms which is associated with the predetermined component (1500) as the predetermined invertible linear transform (403). 6. Apparatus (1000) of any of claims 2 to 5, wherein matrix components of the predetermined prediction matrix (405) within a column of the predetermined prediction matrix (405) which corresponds to the predetermined component (1500) of the further vector (402) are all zero, and the apparatus (1000) is configured to compute the matrix-vector product (404) by performing multiplications by computing a matrix-vector product (407) between a reduced prediction matrix (405) resulting from the predetermined prediction matrix (405) by leaving away the column (412) and an even further vector (410) resulting from the further vector (402) by leaving away the predetermined component (1500). 7. Apparatus (1000) of any of claims 2 to 6, configured to, in predicting the samples of the predetermined block (18) on the basis of the prediction vector (406), compute for each component of the prediction vector (406) a sum of the respective component and a. 8.
Apparatus (1000) of any of claims 2 to 7, wherein a matrix, which results from summing each matrix component of the predetermined prediction matrix (405) within a column of the predetermined prediction matrix (405), which corresponds to the predetermined component (1500) of the further vector (402), with one, times the predetermined invertible linear transform (403), corresponds to a quantized version of a machine learning prediction matrix (1100). 9. Apparatus (1000) of any of the previous claims, configured to form (100) the sample value vector (102, 400) out of the plurality of reference samples (17a, c) by, for each component of the sample value vector (102, 400), adopting one reference sample of the plurality of reference samples (17a, c) as the respective component of the sample value vector (102, 400), and/or averaging two or more reference samples of the plurality of reference samples (17a, c) to obtain the respective component of the sample value vector (102, 400). 10. Apparatus (1000) of any of the previous claims, wherein the plurality of reference samples (17a, c) is arranged within the picture (10) alongside an outer edge of the predetermined block (18). 11. Apparatus (1000) of any of the previous claims, configured to compute the matrix-vector product (404) using fixed point arithmetic operations. 12. Apparatus (1000) of any of the previous claims, configured to compute the matrix-vector product (404) without floating point arithmetic operations. 13. Apparatus (1000) of any of the previous claims, configured to store a fixed point number representation of the predetermined prediction matrix (405). 14.
Apparatus (1000) of any of the previous claims, configured to represent the predetermined prediction matrix (405) using prediction parameters and to compute the matrix-vector product (404) by performing multiplications and summations on the components of the further vector (402) and the prediction parameters and intermediate results resulting therefrom, wherein absolute values of the prediction parameters are representable by an n-bit fixed point number representation with n being equal to or lower than 14, or, alternatively, 10, or, alternatively, 8. 15. Apparatus (1000) of claim 14, wherein the prediction parameters comprise weights each of which is associated with a corresponding matrix component of the predetermined prediction matrix (405). 16. Apparatus (1000) of claim 15, wherein the prediction parameters further comprise one or more scaling factors, each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix (405), for scaling the weight associated with the one or more corresponding matrix components of the predetermined prediction matrix (405), and/or one or more offsets, each of which is associated with one or more corresponding matrix components of the predetermined prediction matrix (405), for offsetting the weight associated with the one or more corresponding matrix components of the predetermined prediction matrix (405). 17. Apparatus (1000) of any of the previous claims, configured to, in predicting the samples of the predetermined block (18) on the basis of the prediction vector (406), use interpolation to compute at least one sample position of the predetermined block (18) based on the prediction vector (406), each component of which is associated with a corresponding position within the predetermined block (18). 18.
Apparatus for encoding a picture, comprising an apparatus for predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to any of the previous claims, to obtain a prediction signal, and an entropy encoder configured to encode a prediction residual for the predetermined block for correcting the prediction signal. 19. Apparatus for decoding a picture, comprising an apparatus for predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to any of claims 1 to 17, to obtain a prediction signal, an entropy decoder configured to decode a prediction residual for the predetermined block, and a prediction corrector configured to correct the prediction signal using the prediction residual. 20. Method (2000) for predicting a predetermined block (18) of a picture using a plurality of reference samples (17a, c), comprising forming (2100, 100) a sample value vector (102, 400) out of the plurality of reference samples, deriving (2200) from the sample value vector a further vector (402) onto which the sample value vector is mapped by a predetermined invertible linear transform (403), computing (2300) a matrix-vector product (404) between the further vector (402) and a predetermined prediction matrix (405) so as to obtain a prediction vector (406), and predicting (2400) samples of the predetermined block on the basis of the prediction vector (406). 21. Method for encoding a picture, comprising predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to the method (2000) of claim 20, to obtain a prediction signal, and entropy encoding a prediction residual for the predetermined block for correcting the prediction signal. 22.
Method for decoding a picture, comprising predicting a predetermined block (18) of the picture using a plurality of reference samples (17a, c) according to the method (2000) of claim 20, to obtain a prediction signal, entropy decoding a prediction residual for the predetermined block, and correcting the prediction signal using the prediction residual. 23. Data stream having a picture encoded thereinto using a method of claim 21. 24. Computer program having a program code for performing, when running on a computer, a method of any of claims 20 to 22.
| # | Name | Date |
|---|---|---|
| 1 | 202228068977-FER.pdf | 2022-12-31 |
| 2 | 202228068977-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [30-11-2022(online)].pdf | 2022-11-30 |
| 3 | 202228068977.pdf | 2022-12-14 |
| 4 | 202228068977-STATEMENT OF UNDERTAKING (FORM 3) [30-11-2022(online)].pdf | 2022-11-30 |
| 5 | Abstract1.jpg | 2022-12-14 |
| 6 | 202228068977-REQUEST FOR EXAMINATION (FORM-18) [30-11-2022(online)].pdf | 2022-11-30 |
| 7 | 202228068977-COMPLETE SPECIFICATION [30-11-2022(online)].pdf | 2022-11-30 |
| 8 | 202228068977-PROOF OF RIGHT [30-11-2022(online)].pdf | 2022-11-30 |
| 9 | 202228068977-POWER OF AUTHORITY [30-11-2022(online)].pdf | 2022-11-30 |
| 10 | 202228068977-DECLARATION OF INVENTORSHIP (FORM 5) [30-11-2022(online)].pdf | 2022-11-30 |
| 11 | 202228068977-FORM 18 [30-11-2022(online)].pdf | 2022-11-30 |
| 12 | 202228068977-DRAWINGS [30-11-2022(online)].pdf | 2022-11-30 |
| 13 | 202228068977-FORM 1 [30-11-2022(online)].pdf | 2022-11-30 |
| 14 | SearchHistoryE_30-12-2022.pdf | |