Abstract: A video decoder (151) for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video is provided. The video decoder (151) comprises an interface (160) configured for receiving the encoded video signal, and a data decoder (170) configured for reconstructing the picture of the video by decoding the encoded picture data using the indication data. The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein, using the indication data, the data decoder (170) is configured to decode the encoded picture data depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas. One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein, using the indication data, the data decoder (170) is configured to decode the encoded picture data depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Encoder and Decoder, Encoding Method and Decoding Method
for Versatile Spatial Partitioning of Coded Pictures
Description
The present invention relates to video encoding and video decoding and, in particular, to an encoder and a decoder, to an encoding method and to a decoding method for versatile spatial partitioning of coded pictures.
H.265/HEVC is a video codec which already provides tools for elevating or even enabling parallel processing at the encoder and/or decoder. For instance, HEVC supports a sub-division of pictures into an array of tiles which are encoded independently from each other. Another concept supported by HEVC pertains to WPP, according to which CTU rows or CTU-lines of the pictures may be processed in parallel from left to right, e.g. in stripes, provided that some minimum CTU offset is obeyed in the processing of consecutive CTU lines (CTU = coding tree unit). It would be favorable, however, to have a video codec at hand which supports the parallel processing capabilities of video encoders and/or video decoders even more efficiently.
In the following section “VCL partitioning according to the state-of-the-art”, an introduction to VCL partitioning according to the state-of-the-art is described (VCL = video coding layer).
Typically, in video coding, the coding process of picture samples requires smaller partitions, where samples are divided into rectangular areas for joint processing such as prediction or transform coding. Therefore, a picture is partitioned into blocks of a particular size that is constant during encoding of the video sequence. In the H.264/AVC standard, fixed-size blocks of 16×16 samples, so-called macroblocks, are used. In the state-of-the-art HEVC standard (see [1]), there are Coding Tree Blocks (CTB) or Coding Tree Units (CTU) with a maximum size of 64 × 64 samples. In the further description of HEVC, for such blocks, the more common term CTU is used.
CTUs are processed in raster scan order, starting with the top-left CTU, processing CTUs in the picture line-wise, down to the bottom-right CTU.
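The raster scan order described above can be sketched as follows (an illustrative Python sketch; the function name and the 0-based (x, y) CTU indexing are assumptions made for illustration, not syntax elements of any standard):

```python
def ctu_raster_scan(pic_width_ctus, pic_height_ctus):
    """Yield (x, y) CTU positions in raster scan order:
    left to right within a CTU line, CTU lines from top to bottom."""
    for y in range(pic_height_ctus):
        for x in range(pic_width_ctus):
            yield (x, y)

# A picture of 10 x 5 CTUs, as in the figures discussed later, is
# traversed starting at the top-left CTU (0, 0) and ending at the
# bottom-right CTU (9, 4).
order = list(ctu_raster_scan(10, 5))
```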
The coded CTU data is organized into a kind of container called a slice. Originally, a slice meant a segment comprising one or more consecutive CTUs of a picture.
In the subsection “picture partitioning with slices" below, it is explained, how slices are employed for a segmentation of coded data. From another point of view, the complete picture can also be defined as one big segment and hence, historically, the term slice is still applied. Besides the coded picture samples, slices also comprise additional information related to the coding process of the slice itself which is placed into a so-called slice header.
According to the state-of-the-art, a VCL (video coding layer) also comprises techniques for fragmentation and spatial partitioning. Such partitioning may, e.g., be applied in video coding for various reasons, among which are processing load-balancing in parallelization, CTU size matching in network transmission, error-mitigation etc., as described in more detail in the following.
In the following subsection “picture partitioning with slices”, picture partitioning with slices is described.
Beginning with the H.263 standard, the sequence of data representing contiguous blocks in a particular scan order can be organized into groups called slices. Typically, the dependencies between CTUs of different slices of a picture, e.g. in terms of prediction and entropy coding, are prohibited, so individual slices within a picture can be independently reconstructed.
Fig. 2 illustrates picture segmentation with slices in raster scan order. The size of a slice is determined by the number of CTUs (coding tree units) and the size of each coded CTU that belongs to the slice, as illustrated in Fig. 2. The picture in Fig. 2 comprises 50 CTUs, for example, CTU 21, CTU 24 and CTU 51.
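The segmentation of a picture into slices of consecutive CTUs can be sketched as follows (an illustrative sketch; the chosen slice sizes are assumptions for the example, not taken from Fig. 2):

```python
def partition_into_slices(num_ctus, slice_sizes):
    """Split the CTU indices 0..num_ctus-1 (taken in raster scan
    order) into consecutive slices, where slice_sizes gives the
    number of CTUs per slice; the sizes must cover the picture
    exactly."""
    assert sum(slice_sizes) == num_ctus
    slices, start = [], 0
    for size in slice_sizes:
        slices.append(list(range(start, start + size)))
        start += size
    return slices

# A picture of 50 CTUs split into three slices of 20, 14 and 16 CTUs.
slices = partition_into_slices(50, [20, 14, 16])
```

Because each slice is simply a run of consecutive CTUs in scan order, individual slices can be reconstructed independently once inter-slice dependencies are prohibited.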
In the following subsection “picture partitioning with tiles”, picture partitioning with tiles is described with reference to Fig. 3. The picture in Fig. 3 comprises 50 CTUs, for example, CTU 23, CTU 27 and CTU 41.
Tiles are a concept introduced in HEVC, although the concept is quite similar to Flexible Macroblock Ordering (FMO), which was added to H.264/AVC. The concept of tiles allows dividing the picture into several rectangular regions.
Tiles are thus a result of dividing the original picture into a given number of rows and columns with a specified height and width respectively as illustrated in Fig. 3. As a result of that, tiles within an HEVC bitstream are required to have common boundaries that form a
regular grid.
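The derivation of such a regular tile grid from a given number of rows and columns can be sketched as follows (an illustrative sketch; the particular column widths and row heights are assumptions, not read from Fig. 3):

```python
def tile_grid(col_widths, row_heights):
    """Return, for each tile, its top-left CTU position and size in
    CTUs, given the tile column widths and row heights. Because all
    tiles in a row share the same height and all tiles in a column
    share the same width, the tile boundaries necessarily form a
    regular grid over the picture."""
    tiles = []
    y = 0
    for h in row_heights:
        x = 0
        for w in col_widths:
            tiles.append({"x": x, "y": y, "w": w, "h": h})
            x += w
        y += h
    return tiles

# A picture of 10 x 5 CTUs divided into 3 tile columns and 2 tile
# rows, yielding six rectangular tiles.
tiles = tile_grid([4, 3, 3], [2, 3])
```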
In the following sub-section “partitioning use cases and shortcomings of the state-of-the-art”, partitioning use cases and shortcomings of the state-of-the-art are described with reference to Fig. 4 and to Fig. 5.
Fig. 4 illustrates tile-based 360º video streaming with unequal resolution. In 360º video streaming, a viewport-adaptive streaming technique called tile-based 360º video streaming is gaining traction. The main idea is to provide the 360º video comprising several tiles at different resolutions. Thus, depending on the current viewport, the client downloads some tiles matching the viewport at high resolution and other tiles that are located outside the viewport at lower resolutions, as shown in Fig. 4.
At the client side, the receiver combines these downloaded tiles at different resolutions into a single HEVC bitstream for which the downloaded tiles cannot be described as tiles in HEVC syntax anymore. The reason for this is that these downloaded tiles do not share the same boundaries across the whole picture, and the tiling granularity of the original streams is different. See, for example, the top and bottom dashed lines for the low-resolution part in Fig. 4, which can only be expressed as slice boundaries in HEVC syntax as there is no corresponding boundary in the high-resolution part on the left-hand side.
Another scenario in which a more flexible picture partitioning would be beneficial is low-delay conversational applications. Picture partitioning is also used for decoding/encoding parallelization. In some cases, this parallelization is the only way to achieve real-time decoding/encoding. One could imagine a video conferencing scenario where two persons appear in the picture.
Fig. 5 illustrates flexible coding area partitioning.
Probably, a good picture partitioning that could be used to achieve a load balancing that is fair in terms of coding complexity is the one shown in Fig. 5. However, in this scenario, tiling lacks the desirable flexibility due to the need for a rectangular tiling grid.
Another main purpose of segmentation (with slices) is error robustness. If some slices of the picture are lost, this does not prevent the reconstruction of the successfully received slices, and so the partially reconstructed picture can be output and can further be used as the reference picture in the temporal prediction process for the following pictures. This feature is very important in transmission systems with higher error rates or where the content of each picture is of extreme importance. On the other hand, there also exist transmission schemes, e.g., dynamic adaptive streaming via HTTP as in MPEG DASH, that feature error detection and mitigation techniques on the transport layer (TCP/IP) and for which buffer stalling until successful retransmission is the solution to transmission errors. In such scenarios, the introduction of additional dependencies, which would otherwise be damaging to error resilience, can be allowed. The state-of-the-art slice and tile partitioning lacks flexibility in this respect as well.
The object of the present invention is to provide improved concepts for video encoding and video decoding.
The object of the present invention is solved by a video encoder according to claim 1, by a video decoder according to claim 57, by a system according to claim 113, by a video encoding method according to claim 114, by a video decoding method according to claim 115, by a computer program according to claim 116, and by an encoded video signal according to claim 117.
A video encoder for encoding a picture by generating an encoded video signal is provided. The video encoder comprises a data encoder configured for encoding a picture of a video into encoded picture data, wherein the data encoder is moreover configured to generate indication data. Moreover, the video encoder comprises a signal generator configured for generating the encoded video signal comprising the encoded picture data and the indication data. The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein the data encoder is configured to encode the picture depending on the plurality of coding areas, and wherein the data encoder is configured to generate the indication data such that the indication data comprises information on the plurality of coding areas. One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein the data encoder is configured to encode the picture depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein the data encoder is configured to generate the indication data such that the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Moreover, a video decoder for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video is provided. The video decoder comprises an interface configured for receiving the encoded video signal, and a data decoder configured for reconstructing the picture of the video by decoding the encoded picture data using the indication data. The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein, using the indication data, the data decoder is configured to decode the encoded picture data depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas. One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein, using the indication data, the data decoder is configured to decode the encoded picture data depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Furthermore, a method for encoding a picture by generating an encoded video signal is provided. The method comprises:
- Encoding a picture of a video into encoded picture data.
- Generating indication data. And:
- Generating the encoded video signal comprising the encoded picture data and the indication data.
The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein encoding the picture is conducted depending on the plurality of coding areas, and wherein generating the indication data is conducted such that the indication data comprises information on the plurality of coding areas. One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas
which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein encoding the picture is conducted depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein generating the indication data is conducted such that the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Moreover, a method for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video is provided. The method comprises:
- Receiving the encoded video signal. And:
- Reconstructing the picture of the video by decoding the encoded picture data using the indication data.
The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein, using the indication data, decoding the encoded picture data is conducted depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas. One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein, using the indication data, decoding the encoded picture data is conducted depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Moreover, a computer program is provided for implementing one of the above-described methods when being executed on a computer or signal processor.
Furthermore, an encoded video signal encoding a picture is provided, wherein the encoded video signal comprises encoded picture data and indication data, wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located
within the picture, wherein the picture is encoded depending on the plurality of coding areas, and wherein the indication data comprises information on the plurality of coding areas, wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein the picture is encoded depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
In the following, embodiments of the present invention are described in more detail with reference to the figures, in which:
Fig. 1a illustrates a video encoder for encoding a picture by generating an encoded video signal according to an embodiment.
Fig. 1b illustrates a video decoder for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video according to an embodiment.
Fig. 1c illustrates a system according to an embodiment.
Fig. 2 illustrates picture segmentation with slices in raster scan order.
Fig. 3 illustrates picture partitioning with tiles.
Fig. 4 illustrates tile-based 360º video streaming with unequal resolution.
Fig. 5 illustrates flexible coding area partitioning.
Fig. 6 illustrates a bitstream comprising a single picture with one implicit coding area according to an embodiment.
Fig. 7 illustrates a bitstream comprising a single picture with four coding areas according to another embodiment.
Fig. 8 illustrates spatial subdivision of a picture with a single CA at the top and three CAs at the bottom according to an embodiment.
Fig. 9 illustrates spatial subdivision of a picture with five coding areas according to another embodiment.
Fig. 10 illustrates two coding areas of which the one comprising the picture boundary CTUs consists of non-contiguous CTUs according to an embodiment.
Fig. 11 illustrates coding areas signaling at sub-area (CTU) level according to embodiments.
Fig. 12 illustrates a CTU scan-order and spatial references for CTU_dependency_offset_id = 1 according to an embodiment.
Fig. 13 illustrates another CTU scan-order and spatial references for CTU_dependency_offset_id = 2 according to another embodiment.
Fig. 14 illustrates coding areas with Z-Scan CTU order according to an embodiment.
Fig. 15 illustrates implicit CTU scan direction derivation according to another embodiment.
Fig. 16 illustrates coding areas with different CTU-scan directions according to an embodiment.
Fig. 17 illustrates dependent coding areas with inter-region prediction options according to an embodiment.
Fig. 18 illustrates parallel processing of dependent coding areas according to an embodiment ( (A) CTU Raster Scan; (B) CTU Diagonal Scan ).
Fig. 19 illustrates an execution order and inter CA dependencies according to an embodiment ( (A) Dependency Driven; (B) Lockstep ).
Fig. 20 illustrates a deblocking filter process on CA boundaries with respect to the CA order according to an embodiment.
Fig. 21 illustrates a deblocking filter employing inter-CA filtering with hatched filter regions according to an embodiment.
Fig. 22 illustrates a bitstream comprising a single picture with one Coding Area, wherein the CA is fragmented into multiple transport units.
Fig. 23 illustrates a bitstream comprising a single picture with multiple Coding Areas, wherein each CA has its own transport unit.
Fig. 24 illustrates a bitstream comprising a single picture with multiple Coding Areas, wherein each CA is fragmented into multiple transport units.
Fig. 25 illustrates a generalized presentation of a block surrounded by regions according to an embodiment.
Fig. 26 illustrates an example of a picture partitioned into tiles, bricks, and rectangular slices according to an embodiment, where the picture is divided into 4 tiles, 11 bricks, and 4 rectangular slices.
Fig. 27 illustrates the picture being split hierarchically according to an embodiment, in a first step, in the horizontal and in the vertical direction to obtain a first partitioning of the picture, and, in a second step, only in the horizontal direction, to obtain a second partitioning of the picture.
Fig. 28 illustrates the picture being split hierarchically according to another embodiment, in a first step, in the horizontal and in the vertical direction to obtain a first partitioning of the picture, and, in a second step, only in the vertical direction, to obtain a second partitioning of the picture.
Fig. 29 illustrates the picture being split hierarchically according to a further embodiment, in a first step, only in the horizontal direction to obtain a first partitioning of the picture, and, in a second step, only in the vertical direction, to obtain a second partitioning of the picture.
Fig. 30 illustrates the picture being split hierarchically according to a yet further embodiment, in a first step, only in the vertical direction to obtain a first partitioning of the picture, and, in a second step, only in the horizontal direction, to obtain a second partitioning of the picture.
Fig. 1a illustrates a video encoder 101 for encoding a picture by generating an encoded video signal according to an embodiment.
The video encoder 101 comprises a data encoder 110 configured for encoding a picture of a video into encoded picture data. The data encoder 110 is moreover configured to generate indication data.
Moreover, the video encoder 101 comprises a signal generator 120 configured for generating the encoded video signal comprising the encoded picture data and the indication data.
The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture. The data encoder 110 is configured to encode the picture depending on the plurality of coding areas, and wherein the data encoder 110 is configured to generate the indication data such that the indication data comprises information on the plurality of coding areas.
One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area. The data encoder 110 is configured to encode the picture depending on the coding order of the one or more coding areas which comprise two or more coding tree units. Moreover, the data encoder 110 is configured to generate the indication data such that the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
Fig. 1b illustrates a video decoder 151 for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video according to an embodiment.
The video decoder 151 comprises an interface 160 configured for receiving the encoded video signal.
Moreover, the video decoder 151 comprises a data decoder 170 configured for reconstructing the picture of the video by decoding the encoded picture data using the indication data.
The picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture. Using the indication data, the data decoder 170 is configured to decode the encoded picture data depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas.
One or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area. Using the indication data, the data decoder 170 is configured to decode the encoded picture data depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
According to an embodiment, the video decoder 151 of Fig. 1b may, e.g., be configured to output the picture of the video on an output device, for example, on a display of e.g., a TV, a computer, a mobile phone, etc.
Fig. 1c illustrates a system according to an embodiment. The system comprises a video encoder 101 according to Fig. 1a. Moreover, the system comprises a video decoder 151 according to Fig. 1b.
The video encoder 101 of Fig. 1a is configured to generate the encoded video signal, and the video decoder 151 of Fig. 1b is configured to decode the encoded video signal to reconstruct the picture of the video.
In the following section “versatile picture partitioning with coding areas", versatile picture partitioning with coding areas is described.
In the following subsection “coding areas”, coding areas are described.
Beyond the current state-of-the art partitioning schemes, such as tiles, embodiments provide a more flexible spatial region definition concept, which may, e.g., be referred to as Coding Area (CA). This is an advantageous concept for the spatial subdivision of pictures
into rectangular regions. With Coding Areas, the partitioning is more flexible and individual regions are allowed to have their own, area specific, coding characteristics.
The Coding Area is defined by the dimensions and location (width, height, position) of a particular region as well as by how the data of the region is to be processed. Signalling can be implemented either in terms of a low-level coding process specification or in terms of high-level parameters such as scan order, scan direction, scan start, etc.
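The high-level parameters that define a Coding Area can be sketched as follows (an illustrative sketch; the field and method names are assumptions for illustration and do not correspond to signalled syntax element names):

```python
from dataclasses import dataclass

@dataclass
class CodingArea:
    """High-level parameters of one Coding Area: its location and
    dimensions in CTUs, plus how its CTUs are to be scanned."""
    x: int                        # horizontal position of the top-left CTU
    y: int                        # vertical position of the top-left CTU
    width: int                    # width in CTUs
    height: int                   # height in CTUs
    scan_order: str = "raster"    # e.g. "raster", "diagonal", "z-scan"
    scan_direction: str = "right-down"  # CTU scan direction

    def ctus(self):
        """All CTU positions covered by this rectangular area."""
        return [(self.x + dx, self.y + dy)
                for dy in range(self.height)
                for dx in range(self.width)]

# The single top CA of Fig. 8: the full 10-CTU width, two CTU rows.
ca1 = CodingArea(x=0, y=0, width=10, height=2)
```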
Fig. 6 illustrates a bitstream comprising a single picture with one implicit coding area according to an embodiment. (NAL = Network Abstraction Layer; NALUH = Network Abstraction Layer Unit Header; SH = Slice Header)
Fig. 7 illustrates a bitstream comprising a single picture with four coding areas according to another embodiment.
If no partitioning is applied, then the picture implicitly comprises one Coding Area (CA), see Fig. 6. This can be a default Coding Area with pre-defined functionality. Fig. 7 illustrates a sub-divisioning of a picture into multiple CAs.
In an embodiment, the data encoder 110 of Fig. 1a may, e.g., be configured to partition the picture into the plurality of coding areas.
Fig. 8 illustrates spatial subdivision of a picture with a single coding area (CA) at the top and three CAs at the bottom according to an embodiment. Each square in Fig. 8 and Fig. 9 represents a coding tree unit (CTU). Although Fig. 8 and Fig. 9 illustrate CTUs that have the shape of a square, a CTU may, e.g., in other examples, have a rectangular shape or any other shape.
As can be seen in Fig. 8 and Fig. 9, each of the coding areas in Fig. 8 and Fig. 9 extends rectangularly within the picture. Moreover, in Fig. 8 and Fig. 9, each CTU extends rectangularly within the picture.
Thus, according to an embodiment, each coding area of the plurality of coding areas may, e.g., extend rectangularly within the picture. Each coding tree unit of the one or more coding tree units of each of the plurality of coding areas may, e.g., extend rectangularly within the picture.
Fig. 9 illustrates spatial subdivision of a picture with five coding areas according to another embodiment.
One important advantage of CA partitioning is demonstrated in the following. In Fig. 8 and in Fig. 9, two examples of the new partitioning concept are provided.
For some use cases, it is possible to achieve partitionings with CAs that are impossible with tiles, and where CA partitioning results in fewer partitions. As can be seen from the tile-based partitioning in Fig. 3, creating three separate regions at the bottom of the picture (Tile4, Tile5, Tile6) requires encoding three additional regions at the top of the picture (Tile1, Tile2, Tile3). Using Coding Areas, the region at the top can be encoded as one Coding Area CA1, as seen in Fig. 8. The partitioning shown in Fig. 9 can also not be achieved with tiles, as CA1, CA2 and CA3 do not have the same height.
According to an embodiment, each of the plurality of coding tree units may, e.g., have a horizontal position within the picture and a vertical position within the picture. Fig. 8 and Fig. 9 illustrate horizontal positions of the CTUs from 1 - 10 and vertical positions of the CTUs from 1 - 5. It is, of course, not necessary in such an embodiment that the positions start with 1 and that the step from one horizontal or vertical position to the next is 1. Instead, other start positions and other steps from CTU to CTU are also possible. The step size from one CTU to the next for the vertical positions may be different from the step size from one CTU to the next for the horizontal positions.
The new partitioning of Fig. 9 may, e.g., be characterized as follows.
In such an embodiment (alternative 1): a first coding area of the plurality of coding areas may, e.g., comprise a first coding tree unit having a first vertical position being identical to a second vertical position of a different second coding tree unit of a different second coding area of the plurality of coding areas, and a third coding tree unit of the first coding area has a third vertical position being different from the vertical position of any other coding tree unit of the second coding area, and a fourth coding tree unit of the second coding area has a fourth vertical position being different from the vertical position of any other coding tree unit of the first coding area.
Or (alternative 2:), the first coding area of the plurality of coding areas may, e.g., comprise the first coding tree unit having a first horizontal position being identical to a second horizontal position of the different second coding tree unit of the different second coding area of the plurality of coding areas, and the third coding tree unit of the first coding area has a third horizontal position being different from the horizontal position of any other coding tree unit of the second coding area, and the fourth coding tree unit of the second coding area has a fourth horizontal position being different from the horizontal position of any other coding tree unit of the first coding area.
Fig. 9 fulfills alternative 1:
CA2 comprises CTU 91. CTU 91 has a vertical position 3. CTU 96 of CA4 also has the vertical position 3. CTU 92 of CA2 has a vertical position 2. None of the CTUs of CA4 has the vertical position 2 (as the vertical positions of CA4 are in the range from 3 - 5). Moreover, CTU 97 of CA4 has a vertical position 4. None of the CTUs of CA2 has the vertical position 4 (as the vertical positions of CA2 are in the range from 1 - 3).
In contrast, Fig. 8 does not fulfill alternative 1 and does not fulfill alternative 2:
Alternative 1:
Only CA2, CA3 and CA4 have CTUs with the same vertical positions (in the range from 3 to 5). CA1 does not share any vertical position with any other coding area. However, CA2 does not have a vertical position that is different from every vertical position in the coding areas CA3 or CA4. The same is true for CA3 and CA4, respectively, with respect to CA2 and CA4, and CA2 and CA3, respectively.
Alternative 2:
CA2, CA3 and CA4 have no CTUs with the same horizontal positions.
In contrast, CTU 81 of CA1 has the same horizontal position (3) as CTU 86 of CA2. Moreover, none of the CTUs of CA2 has the horizontal position 6 of CTU 82 of CA1, as the horizontal positions of CA2 are in the range from 1 to 4. However, there is no CTU in CA2 whose horizontal position differs from the horizontal positions of all CTUs of CA1, as the horizontal positions of the CTUs of CA2 are in the range from 1 to 4, and as the horizontal positions of the CTUs of CA1 are in the range from 1 to 10.
The same is true for CA3 and CA4 with respect to CA1 for analogous reasons.
Therefore, the partitioning of Fig. 8 does not fulfill alternative 1 and does not fulfill alternative 2.
A signaling mechanism for a number of CA parameters related to fragmentation and scan order is presented in sub-subsection "general properties of coding areas”.
Spatially, the Coding Area covers a particular number of CTUs. The flexible arrangement of the CAs is one of the features of some of the embodiments. We suggest several variants for this: explicit signaling as well as a hierarchical method, both used outside of the VCL. Also, if no bitstream fragmentation is employed, the signaling within the VCL can be used. The detailed description of the CA arrangement is provided in sub-subsection "size and arrangement of coding areas".
Besides the arrangement of segments, embodiments provide a new flexible processing or scan order of CTUs within the Coding Area. The details of this technique are presented in sub-subsection “CTU scan order”.
Another feature provided by some embodiments is that, within a single picture, one CA can be independent of and/or dependent on other Coding Areas in some respects, as explained in the sub-subsection "dependent coding areas".
Using a combination of the new proposed methods leads to new opportunities in high-level parallelism. The parallel processing of spatial picture regions can be done more efficiently, as detailed in sub-subsection "parallel processing of coding areas".
Aspects of error resilience are described in sub-subsection “error resiliency aspects”.
Summarizing the concepts above:
According to an embodiment, each coding area of the plurality of coding areas may, e.g., exhibit a spatial characteristic comprising a position, a width and a height of said coding area, wherein the width and the height of said coding area depend on the rectangular extension of said coding area, and wherein the position of said coding area depends on the location of said coding area within the picture.
In an embodiment, a first height of a first one of the plurality of coding areas may, e.g., be different from a second height of a second one of the plurality of coding areas. Or, a first width of the first one of the plurality of coding areas is different from a second width of the second one of the plurality of coding areas.
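The spatial characteristic described above (position, width and height of a coding area) can, e.g., be modeled as in the following sketch. The class and field names are illustrative assumptions, not part of the signaled syntax.

```python
from dataclasses import dataclass

# Illustrative model of a coding area's spatial characteristic:
# position (top-left CTU coordinates within the picture) plus
# width and height describing the rectangular extension in CTUs.
@dataclass
class CodingArea:
    x: int       # horizontal position of the top-left CTU
    y: int       # vertical position of the top-left CTU
    width: int   # rectangular extension in CTUs, horizontal
    height: int  # rectangular extension in CTUs, vertical

ca1 = CodingArea(x=0, y=0, width=10, height=2)
ca2 = CodingArea(x=0, y=2, width=4, height=3)
# As stated above, two coding areas may differ in height and/or width:
print(ca1.height != ca2.height, ca1.width != ca2.width)  # True True
```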
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises information on the spatial characteristic of each coding area of the plurality of coding areas.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises the position, the width and the height of each coding area of the plurality of coding areas.
According to an embodiment, the data encoder 110 may, e.g., be configured to encode image data of a picture portion of each of the plurality of coding areas independently from encoding the image data of the picture portion of any other coding area of the plurality of coding areas to obtain the encoded picture data.
In an embodiment, the data encoder 110 may, e.g., be configured to encode the picture by encoding image data of a picture portion within each coding area of the plurality of coding areas to obtain the encoded picture data. The data encoder 110 may, e.g., be configured to encode the image data of the picture portion of at least one of the plurality of coding areas such that encoding the image data of said at least one of the plurality of coding areas depends on the encoding of the image data of at least another one of the plurality of coding areas.
In an embodiment, the data encoder 110 may, e.g., be configured to determine the coding order for each one of the one or more coding areas which comprise two or more coding tree units.
According to an embodiment, the data encoder 110 may, e.g., be configured to determine the coding order for each one of the one or more coding areas by selecting a scan order from two or more scan orders for each one of the one or more coding areas.
In an embodiment, the signal generator 120 may, e.g., be configured to generate the encoded video signal, such that the encoded video signal comprises a bitstream, wherein the bitstream comprises the encoded picture data and the indication data.
Likewise, according to an embodiment, each coding area of the plurality of coding areas may, e.g., exhibit a spatial characteristic comprising a position, a width and a height of said coding area, wherein the width and the height of said coding area depend on the rectangular extension of said coding area, and wherein the position of said coding area depends on the location of said coding area within the picture. The data decoder 170 may, e.g., be
configured to decode the encoded picture data depending on the spatial characteristic of the plurality of coding areas.
In an embodiment, a first height of a first one of the plurality of coding areas may, e.g., be different from a second height of a second one of the plurality of coding areas. Or a first width of the first one of the plurality of coding areas is different from a second width of the second one of the plurality of coding areas.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information within the indication data on the spatial characteristic of the plurality of coding areas.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information within the indication data on the plurality of coding areas, which comprises the position, the width and the height of each coding area of the plurality of coding areas.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data of each of the plurality of coding areas independently from decoding the encoded picture data of any other coding area of the plurality of coding areas.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data of at least one of the plurality of coding areas such that decoding the encoded picture data of said at least one of the plurality of coding areas depends on the decoding of the encoded picture data of at least another one of the plurality of coding areas.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using a coding order for each one of the one or more coding areas which comprise two or more coding tree units, said coding order for each one of the one or more coding areas being indicated by the indication data.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using an indication on a scan order from two or more scan orders for each one of the one or more coding areas, wherein the indication data may, e.g., comprise the indication on the scan order for each one of the one or more coding areas.
According to an embodiment, the interface 160 may, e.g., be configured to receive a bitstream, wherein the bitstream comprises the encoded picture data and the indication data.
In the following sub-subsection “general properties of coding areas”, general properties of coding areas are described.
The general information about the Coding Areas is concentrated into a so-called Coding Areas Set (CAS). The CAS can be placed outside of the VCL in the high-level parameter sets; it then has impact on the whole sequence or a part of the sequence, e.g., a picture.
x = video, sequence, picture...
Table 2-1
coding_area_explicit_positioning_flag if true, coding_area_top_left_CTU_address is present in the bitstream; otherwise, default implicit CA positioning is used
num_coding_areas_minus1 the number of coding areas signaled in the bitstream, minus 1
dependent_coding_areas_enabled_flag if true, indicates that the CAs following in the bitstream depend on each other in their order of appearance in the bitstream; otherwise, the CAs are treated as independent regions.
coding_areas_CTU_wise_dependent_flag if true, indicates that dependencies of CTUs between adjacent CAs are treated in a CTU-wise processing interleaving the CA processing; otherwise, the CAs are processed individually in a fixed order.
coding_area_no_slices_flag if true, indicates one slice per CA. This implies that EOS_flag and CTU_start_address are not present in the bitstream. The CAs are addressed by CA_idx instead, which is a fixed-length code of ceil(log2(num_coding_areas_minus1 + 1)) bits. Otherwise, default slice syntax is present in the bitstream.
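The fixed-length code derivation for CA_idx can, e.g., be computed as in the following sketch (the function name is illustrative):

```python
from math import ceil, log2

def ca_idx_code_length(num_coding_areas_minus1):
    # Fixed-length code for CA_idx, derived as
    # ceil(log2(num_coding_areas_minus1 + 1)) per the table above.
    return ceil(log2(num_coding_areas_minus1 + 1))

print(ca_idx_code_length(3))  # 2 bits suffice to address 4 coding areas
print(ca_idx_code_length(4))  # 3 bits are needed for 5 coding areas
```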
coding_area_CTU_scan_flag if true, indicates that CTU scan parameters are present in the bitstream. The flexible CTU scan technique is described in the sub-subsection "CTU scan order".
coding_area_CTU_scan_type_idx maps into a scan type table (raster scan, diagonal scan).
coding_area_CTU_scan_start_left_flag[i] present if the i-th CA comprises more than one CTU per CTU row; if true, indicates that the CTU scan starts at the leftmost CTU within the CA. Otherwise, the CTU scan of the i-th CA starts with the rightmost CTU.
coding_area_CTU_scan_start_top_flag[i] present if the i-th CA comprises more than one CTU per CTU column; if true, indicates that the CTU scan starts in the top CTU row within the CA. Otherwise, the CTU scan of the i-th CA begins in the bottom CTU row of the CA.
coding_area_CTU_scan_direction_flag[i] present if the i-th CA contains at least two CTU rows and two CTU columns; indicates the start scan direction of the CTU scan. If true, the scan of the i-th CA starts in the horizontal direction; otherwise, the scan is a vertical scan.
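A minimal sketch of how the three scan flags above could steer a raster-type CTU scan within a CA follows. The function name and the exact interaction of the flags are assumptions for illustration; the actual scan types (including the diagonal scan) are defined in sub-subsection "CTU scan order".

```python
def ctu_scan_order(width, height, start_left=True, start_top=True, horizontal=True):
    # Raster-type CTU scan within a CA of width x height CTUs, steered by
    # coding_area_CTU_scan_start_left_flag (start_left),
    # coding_area_CTU_scan_start_top_flag (start_top) and
    # coding_area_CTU_scan_direction_flag (horizontal).
    cols = range(width) if start_left else range(width - 1, -1, -1)
    rows = range(height) if start_top else range(height - 1, -1, -1)
    if horizontal:
        return [(x, y) for y in rows for x in cols]
    return [(x, y) for x in cols for y in rows]

# 2x2 CA, scan starting at the rightmost CTU of the bottom row, horizontally:
print(ctu_scan_order(2, 2, start_left=False, start_top=False, horizontal=True))
# [(1, 1), (0, 1), (1, 0), (0, 0)]
```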
Summarizing the above concepts:
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data may, e.g., comprise information for each coding area on whether an address of a top left coding tree unit is specified. A particular example may, e.g., be the coding_area_explicit_positioning_flag. If true, the coding_area_top_left_CTU_address is present in the bitstream, otherwise default implicit CA positioning is used.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data may, e.g., comprise information on a number of the plurality of coding areas or the number of the plurality of coding areas minus 1 or the number of the plurality of coding areas minus 2. A particular example may, e.g., be the num_coding_areas_minus1 field described above.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas which succeeds another one of the plurality of coding areas whether said one of the plurality of coding areas depends on said another one of the plurality of coding areas. A particular example may, e.g., be the dependent_coding_areas_enabled_flag, which, if true, may, e.g., indicate that the CAs following in the bitstream depend on each other in the order of appearance in the bitstream, otherwise the CAs are treated as independent regions.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates whether exactly one slice of a plurality of slices is assigned to exactly one coding area of the plurality of coding areas. A particular example may, e.g., be the coding_area_no_slices_flag described above.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates whether the indication data may, e.g., comprise information on how to scan within each of the one or more coding tree units of the plurality of coding areas. A particular example may, e.g., be the coding_area_CTU_scan_flag described above.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates how to scan within each of the one or more coding tree units of the plurality of coding areas. A particular example may, e.g., be coding_area_CTU_scan_type_idx which maps into a scan type table. Scan types may, e.g., be raster scan and/or, e.g., diagonal scan, etc.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for each of the plurality of coding areas whether the coding area comprises more than one coding tree unit.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a leftmost coding tree unit or whether the coding tree unit scan is started with a rightmost coding tree unit. A particular example may, e.g., be the coding_area_CTU_scan_start_left_flag[i] described above.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a top coding tree unit row of the coding area or whether the coding tree unit scan is started with a bottom coding tree unit row of the coding area. A particular example may, e.g., be the coding_area_CTU_scan_start_top_flag[i] described above.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started in a horizontal direction or whether a coding tree unit scan is started in a vertical direction. A particular example may, e.g., be the coding_area_CTU_scan_direction_flag[i], which may, e.g., be present if the i-th CA contains at least two CTU rows and two CTU columns, indicating the start scan direction of the CTU scan. If true, the scan direction of the i-th CA may, e.g., start in the horizontal direction, otherwise the scan may, e.g., be a vertical scan.
Likewise, according to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information for each coding area on whether an address of a top left coding tree unit is specified, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on a number of the plurality of coding areas or the number of the plurality of coding areas minus 1 or the number of the plurality of coding areas minus 2, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas which succeeds another one of the plurality of coding areas whether said one of the plurality of coding areas depends on said another one of the plurality of coding areas, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates whether exactly one slice of a plurality of slices is assigned to exactly one coding area of the plurality of coding areas, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on whether the indication data may, e.g., comprise information on how to scan within each of the one or more coding tree units of the plurality of coding areas, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on how to scan within each of the one or more coding tree units of the plurality of coding areas, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information that indicates for each of the plurality of coding areas whether the coding area comprises more than one coding tree unit, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a leftmost coding tree unit or whether the coding tree unit scan is started with a rightmost coding tree unit, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a top coding tree unit row of the coding area or whether the coding tree unit scan is started with a bottom coding tree unit row of the coding area, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started in a horizontal direction or whether a coding tree unit scan is started in a vertical direction, wherein the indication data may, e.g., comprise said information.
In the following sub-subsection “size and arrangement of coding areas”, a size and an arrangement of coding areas are described.
It is crucial to signal how coding areas are arranged within the picture. Depending on the application, different targets are pursued that might affect the desired flexibility of the picture partitioning, e.g., the CA sizes and positions. The layout can also be transmitted in the high-level section of the bitstream in order to use a temporally fixed CA layout, or to adapt the layout to the content over time.
For instance, in the example provided in subsection “picture partitioning with tiles” for 360º video streaming the CAs are typically constant for the whole CVS (coding video sequence) and their size only needs to accommodate partitions that are generated from different resolutions of the same content, e.g., as illustrated in Fig. 4. For such a case, the most proper place for CA arrangement signaling is at a high-level syntax structure (see for instance VPS, SPS, PPS for HEVC) as described in sub-sub-subsection “embodiment A” and sub-sub-subsection “embodiment B”, later on.
However, in other examples as in subsection “picture partitioning with tiles” for load balancing (e.g., videoconferencing scenario), dynamic per picture partitioning might be desirable. In addition, the initial configuration selected for the CAs might be found not optimal and therefore in-picture modification (while encoding) of the partitioning might be required. For such a case, where the CA arrangement is not known a priori, in-VCL (Video
Coding Layer, within the slice payload) signaling is preferred over high-level signaling (within the parameter sets) as described in sub-sub-subsection “embodiment C”.
In the following sub-sub-subsection “embodiment A", sample- or unit-wise position and size signaling is described.
Explicit CA positioning using CTU start address to select the according area:
Table 2-2
Fig. 10 illustrates two coding areas of which the one comprising the picture boundary CTUs consists of non-contiguous CTUs according to an embodiment.
In principle the syntax above, num_coding_areas_minus2, could define all CAs but one and have the last CA as the remaining part not covered by previous CAs. However, this would allow for CAs not having contiguous CTUs as shown in Fig. 10, which might add some complexity to the encoding/decoding process.
Therefore, a constraint should be added to prohibit such a case and require the syntax to fulfill some constraints: for instance, given four CTUs (CTUa, CTUb, CTUc, CTUd) corresponding to the CTUs at the top-left, top-right, bottom-left and bottom-right of a CA with CTUb_addr - CTUa_addr = CAWidthInNumCTUs_minus1 and CTUc_addr - CTUa_addr = CAHeightInNumCTUs_minus1, all CTUs and only those that have the following CTU address shall belong to the remaining CA: CTUx_addr = CTUa_addr + j + k * ( CAWidthInNumCTUs_minus1 + 1 ) with j = 0 ... CAWidthInNumCTUs_minus1 and k = 0 ... CAHeightInNumCTUs_minus1.
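The constraint can, e.g., be checked by enumerating exactly the CTU addresses the remaining CA is allowed to cover, following the formula above. The function names below are illustrative assumptions:

```python
def remaining_ca_ctu_addresses(ctu_a_addr, width_minus1, height_minus1):
    # Enumerate the CTU addresses that shall belong to the remaining CA:
    # CTUx_addr = CTUa_addr + j + k * (CAWidthInNumCTUs_minus1 + 1),
    # with j = 0..width_minus1 and k = 0..height_minus1.
    return {ctu_a_addr + j + k * (width_minus1 + 1)
            for k in range(height_minus1 + 1)
            for j in range(width_minus1 + 1)}

def remaining_ca_is_valid(actual_addrs, ctu_a_addr, width_minus1, height_minus1):
    # The constraint holds only if the remaining CA covers exactly these
    # addresses ("all CTUs and only those").
    return actual_addrs == remaining_ca_ctu_addresses(
        ctu_a_addr, width_minus1, height_minus1)

# A 3x2 remaining CA starting at address 10 must cover exactly addresses 10..15:
print(sorted(remaining_ca_ctu_addresses(10, 2, 1)))  # [10, 11, 12, 13, 14, 15]
```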
Alternatively, a syntax element could be added (e.g., "non_contiguous_remaining_coding_unit_enabled_flag") that could indicate/constrain the CAs to fulfill such a condition. Thus, some profiles could mandate that this flag not be enabled, to prevent the complexity added by non-contiguous CAs.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data comprises non-contiguous information which indicates whether at least one of the plurality of coding areas encloses another one of the plurality of coding areas, or whether none of the plurality of coding areas encloses another one of the coding areas. Fig. 10 is a particular example, where two coding areas CA1, CA2 are illustrated. The outer coding area CA1 encloses the inner coding area CA2.
In Fig. 10, CA1 encloses CA2. The non_contiguous_remaining_coding_unit_enabled_flag may be used to provide a respective indication.
Likewise, in an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using non-contiguous information which indicates whether at least one of the plurality of coding areas encloses another one of the plurality of coding areas, or whether none of the plurality of coding areas encloses another one of the coding areas, wherein the indication data comprises said information.
The syntax in the table above allows for flexible signaling with CAs whose sizes are not a multiple of the maximum coding unit size (e.g., 128x128), which is beneficial for some applications. However, signaling CAs in sample units might require many bits. Alternatively, the signaling could be done in CTUs to save bits. The problem is that sizes that are not a multiple of the CTU size cannot then be signaled. With the proposed syntax below and some additional information, this problem could be solved.
Table 2-3
The proposed signaling indicates whether the last CTU row and column of a CA comprise a CTU of the indicated maximum size or whether the last CTU is smaller. In case it is smaller, the size of the last CTU row and column of the CA is indicated.
However, in the case that a CA does not have a size that is a multiple of the maximum CTU size (i.e., the last CTU in a row or column is smaller), this could lead to a picture that has a different number of CTUs per row or column, as some CAs might have CTUs that are smaller than those of other CAs. Such a misalignment might be undesirable, since the storage of variables that are used for prediction within a picture is usually addressed with the CTU address, and a varying number of CTUs per row or column can make the CTU addressing not viable or too complex. Therefore, a constraint should be added that whenever a CA ends before a full CTU (or a CTU of the maximum CTU size), other neighboring CAs should be aligned within the dimension (horizontal and/or vertical) which does not comprise a full CTU.
Implicit positioning recursively starting at the leftmost position in the uppermost CTU-row that has not been covered by a Coding Area. In other words, for example, if coding_area_explicit_positioning_flag, as shown in Table 2-4, is set to 0, then the start address of the CA is not indicated but only the width and height and a decoder would have to derive the start address of the i-th CA by looking for the smallest CTU address that is not yet covered by the 1..(i-1)-th CAs.
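The implicit derivation just described can, e.g., be sketched as follows: the decoder looks for the smallest CTU address not yet covered by the 1..(i-1)-th CAs. The function name is an illustrative assumption:

```python
def derive_implicit_start(covered_ctu_addrs, num_ctus_in_picture):
    # Implicit CA positioning: the start address of the i-th CA is the
    # smallest CTU address not yet covered by the 1..(i-1)-th CAs
    # (CTU addresses assumed to run 0..num_ctus_in_picture-1 in raster order).
    for addr in range(num_ctus_in_picture):
        if addr not in covered_ctu_addrs:
            return addr
    return None  # picture fully covered, no further CA to place

# CAs 1..(i-1) cover addresses 0..5 and 8..9 of a 12-CTU picture,
# so the i-th CA implicitly starts at address 6:
print(derive_implicit_start(set(range(6)) | {8, 9}, 12))  # 6
```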
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for a coding area whether a start address indicating one of the one or more coding tree units of said coding area is indicated, or whether the start address indicating said one of the one or more coding tree units of said coding area is not indicated. A particular example, may, e.g., be the coding_area_CTU_start_address described above.
Likewise, according to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using an indication for a coding area whether a start address indicating one of the one or more coding tree units of said coding area is indicated, or whether the start address indicating said one of the one or more coding tree units of said coding area is not indicated. If the start address indicating said one of the one or more coding tree units of said coding area is not indicated, the data decoder 170 may, e.g., be configured to determine the start address indicating said one of the one or more coding tree units of said coding area depending on a coding area width and depending on a coding height of said coding area and depending on a coding area scan direction of said coding area.
The syntax in both tables above comprises coding_area_start_address[i], which in principle simplifies the operation of the parser but is to some extent "redundant" information that could simply be derived from the signaled sizes. Therefore, both tables could be the same as provided but without that syntax element, where the address is derived as "the leftmost position in the uppermost CTU row that has not been covered by a Coding Area".
For higher picture resolutions an additional step size might be useful to scale the width and height of the CA partitions and, if available, the position codes.
Table 2-4
The final CA position and size shall be derived as follows:

coding_area_width_in_CTU = coding_area_width_in_units * ( coding_area_unit_scaling_factor_minus1 + 1 )

coding_area_height_in_CTU = coding_area_height_in_units * ( coding_area_unit_scaling_factor_minus1 + 1 )

pic_width_in_units = ( pic_width_in_CTU + coding_area_unit_scaling_factor_minus1 ) / ( coding_area_unit_scaling_factor_minus1 + 1 )

coding_area_CTU_start_address = ( coding_area_unit_scaling_factor_minus1 + 1 ) * ( ( coding_area_start_address_in_units % pic_width_in_units ) + ( coding_area_start_address_in_units / pic_width_in_units ) * pic_width_in_CTU )
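The unit-to-CTU derivation can, e.g., be sketched with integer arithmetic as follows (the function name and parameter order are illustrative assumptions; divisions are integer divisions):

```python
def derive_ca_geometry(width_units, height_units, start_units,
                       scaling_minus1, pic_width_in_ctu):
    # Unit-to-CTU scaling per the derivation above, with
    # s = coding_area_unit_scaling_factor_minus1 + 1.
    s = scaling_minus1 + 1
    width_ctu = width_units * s
    height_ctu = height_units * s
    # Ceiling division: pictures whose CTU width is not a multiple of s
    # still get a full number of units.
    pic_width_in_units = (pic_width_in_ctu + scaling_minus1) // s
    start_ctu = s * ((start_units % pic_width_in_units)
                     + (start_units // pic_width_in_units) * pic_width_in_ctu)
    return width_ctu, height_ctu, start_ctu

# Picture 8 CTUs wide, units of 2 CTUs (scaling_minus1 = 1): a CA of
# 2x1 units starting at unit address 5 (unit row 1, unit column 1)
# becomes 4x2 CTUs starting at CTU address 18 (CTU row 2, column 2).
print(derive_ca_geometry(2, 1, 5, 1, 8))  # (4, 2, 18)
```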
Table 2-5
coding_area_scaling_factor_minus1 scaling factor used to scale the CA position and size parameters
coding_area_top_left_CTU_address_in_units[i] address of the CTU at the top-left boundary of the CA. The address is given in units and has to be scaled according to coding_area_scaling_factor_minus1 + 1 to obtain the CTU address.
coding_area_width_in_units[i] width of the CA in units; has to be scaled according to coding_area_scaling_factor_minus1 + 1 to obtain the CA width in CTUs.
coding_area_height_in_units[i] height of the CA in units; has to be scaled according to coding_area_scaling_factor_minus1 + 1 to obtain the CA height in CTUs.
Summarizing the above concepts:
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas a coding area width in coding tree units that specifies a number of coding tree units that are arranged in a horizontal direction within one of the coding areas. A particular example may, e.g., be the coding_area_width_in_CTU described above.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas a coding area height in coding tree units that specifies a number of coding tree units that are arranged in a vertical direction within one of the coding areas. A particular example may, e.g., be the coding_area_height_in_CTU described above.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in horizontal direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the horizontal direction.
In a particular embodiment, said coding area may, e.g., comprise a plurality of last coding tree units in the horizontal direction, said last coding tree unit in the horizontal direction being one of said plurality of last coding tree units in the horizontal direction. If said last coding tree unit in the horizontal direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the horizontal direction, each of the plurality of last coding tree units in the horizontal direction may, e.g., have same width.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in vertical direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the vertical direction.
In a particular embodiment, said coding area may, e.g., comprise a plurality of last coding tree units in the vertical direction, said last coding tree unit in the vertical direction being one of said plurality of last coding tree units in the vertical direction. If said last coding tree unit in the vertical direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the vertical direction, each of the plurality of last coding tree units in the vertical direction may, e.g., have same height.
Likewise, in an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas a coding area width in coding tree units that specifies a number of coding tree units that are arranged in a horizontal direction within one of the coding areas, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas a coding area height in coding tree units that specifies a number of coding tree units that are arranged in a vertical direction within one of the coding areas, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for a coding area of the plurality of coding areas whether a last coding tree unit in horizontal direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the horizontal direction, wherein the indication data may, e.g., comprise said information.
In a particular embodiment, said coding area may, e.g., comprise a plurality of last coding tree units in the horizontal direction, said last coding tree unit in the horizontal direction being one of said plurality of last coding tree units in the horizontal direction. If said last coding tree unit in the horizontal direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the horizontal direction, each of the plurality of last coding tree units in the horizontal direction may, e.g., have same width.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates for a coding area of the plurality of coding areas whether a last coding tree unit in vertical direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the vertical direction, wherein the indication data may, e.g., comprise said information.
In a particular embodiment, said coding area may, e.g., comprise a plurality of last coding tree units in the vertical direction, said last coding tree unit in the vertical direction being one of said plurality of last coding tree units in the vertical direction. If said last coding tree unit in the vertical direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the vertical direction, each of the plurality of last coding tree units in the vertical direction may, e.g., have same height.
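For illustration, the width (or height) of such a smaller last coding tree unit simply follows from the remainder of the coding area dimension with respect to the CTU size, and all last CTUs of the area share that reduced size. The following is a minimal sketch (the helper name `last_ctu_width` is hypothetical and not part of any codec specification):

```python
def last_ctu_width(area_width, ctu_size):
    """Width of the last (rightmost) CTU column of a coding area.

    If area_width is not a multiple of ctu_size, the last column is
    narrower than the preceding full-size CTUs; every last CTU in the
    horizontal direction shares this same reduced width. The same rule
    applies analogously to heights in the vertical direction.
    """
    remainder = area_width % ctu_size
    return remainder if remainder != 0 else ctu_size

# A 200-sample-wide coding area with 64-sample CTUs has three full
# columns of width 64 and a narrower last column of 200 - 3 * 64 = 8.
```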
In the following sub-sub-subsection “embodiment B”, hierarchical sub-divisioning through splitting is described.
In particular, two variants for hierarchical CA partitioning are provided.
The first variant is to transmit the coding area split position values hierarchically in scaled CTU units. Here, the granularity of partitioning depends on the CTU size.
Table 2-6
Table 2-7
The second variant of hierarchical CA partitioning is to transmit the coding area split position values in general uniform units. The unit dimension can be derived by scaling the already parsed original picture size with a particular factor, which is also signaled in the parameter set. Here, the granularity of partitioning varies and depends on the unit size chosen at the encoder side.
Table 2-8
Table 2-9
The final CA position and size shall be derived as follows:
UnitWidth = pic_width_in_luma_samples / PicWidthInUnits;
UnitHeight = pic_height_in_luma_samples / PicHeightInUnits;
CodingAreaWidth[AreaIdx] = CodingAreaWidthInUnits[AreaIdx] * UnitWidth
CodingAreaHeight[AreaIdx] = CodingAreaHeightInUnits[AreaIdx] * UnitHeight
CodingAreaPosX[AreaIdx] = CodingAreaPosUnitX[AreaIdx] * UnitWidth
CodingAreaPosY[AreaIdx] = CodingAreaPosUnitY[AreaIdx] * UnitHeight
coding_area_start_address[AreaIdx] = CodingAreaPosY[AreaIdx] * PicWidthInCtbsY + CodingAreaPosX[AreaIdx]
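The derivation above can be transcribed directly. The following sketch reuses the variable names of the pseudocode and assumes integer division for the unit dimensions, as is usual in such derivations (the function itself is only an illustration, not normative text):

```python
def derive_coding_area(pic_width_in_luma_samples, pic_height_in_luma_samples,
                       PicWidthInUnits, PicHeightInUnits, PicWidthInCtbsY,
                       CodingAreaWidthInUnits, CodingAreaHeightInUnits,
                       CodingAreaPosUnitX, CodingAreaPosUnitY, AreaIdx):
    # Unit dimensions derived by scaling the parsed picture size.
    UnitWidth = pic_width_in_luma_samples // PicWidthInUnits
    UnitHeight = pic_height_in_luma_samples // PicHeightInUnits
    # CA size and position follow by multiplying the signaled unit counts.
    CodingAreaWidth = CodingAreaWidthInUnits[AreaIdx] * UnitWidth
    CodingAreaHeight = CodingAreaHeightInUnits[AreaIdx] * UnitHeight
    CodingAreaPosX = CodingAreaPosUnitX[AreaIdx] * UnitWidth
    CodingAreaPosY = CodingAreaPosUnitY[AreaIdx] * UnitHeight
    # Raster-order start address as given in the derivation above.
    start_address = CodingAreaPosY * PicWidthInCtbsY + CodingAreaPosX
    return (CodingAreaWidth, CodingAreaHeight,
            CodingAreaPosX, CodingAreaPosY, start_address)
```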
The advantage of hierarchical CA partitioning is that, for some partitioning scenarios, this method might require fewer bits for signaling. Besides, it enforces a certain alignment among CAs or groups of CAs where some boundaries are shared, which might be beneficial for some implementations.
Signalling can be constrained to be only possible for RAPs and inherited from there for subsequent pictures.
An additional flag in the bitstream indicates whether the layout is retransmitted. Otherwise, a previous CA layout is used, which can further be selected out of a set of previously transmitted CA layouts by an index transmitted in the bitstream.
Summarizing the above:
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates a plurality of coding area split positions.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates the plurality of coding area split positions as an ordered sequence.
In an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the indication data indicates the plurality of coding area split positions as a plurality of coding area split position values, wherein each of the plurality of coding area split position values depends on a width of the picture or depends on a height of the picture.
According to an embodiment, the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times.
Likewise, according to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates a plurality of coding area split positions, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates the plurality of coding area split positions as an ordered sequence, wherein the indication data may, e.g., comprise said information.
In an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information which indicates the plurality of coding area split positions as a plurality of coding area split position values, wherein each of the plurality of coding area split position values depends on a width of the picture or depends on a height of the picture, wherein the indication data may, e.g., comprise said information.
According to an embodiment, the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times, wherein the indication data may, e.g., comprise said information.
Now, further examples are provided, on how the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times.
In particular, further examples are provided on how the data encoder 110 may, e.g., be configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times.
Moreover, further examples are provided on how the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times, wherein the indication data comprises said information.
In particular, further examples are provided on how the data decoder 170 may, e.g., be configured to decode the encoded picture data using information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times, wherein the indication data may, e.g., comprise said information.
Furthermore, further examples are provided on how the indication data of an encoded video signal may, e.g., indicate how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times.
Moreover, in particular, further examples are provided on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times.
In some embodiments, the data encoder 110 and/or the data decoder 170 may, e.g., be configured
- in a first step, to split the picture in horizontal and vertical direction to obtain a first partitioning of the picture, and
- in a second step, to split the first partitioning of the picture (only) in horizontal direction to obtain a second partitioning of the picture.
In Fig. 27, the picture is split hierarchically, in a first step, in the horizontal and in the vertical direction to obtain a first partitioning of the picture (e.g., a tile splitting) (Fig. 27, step 1), and, in a second step, only in the horizontal direction, to obtain a second partitioning of the picture (e.g., a brick splitting) (Fig. 27, step 2).
In some other embodiments, the data encoder 110 and/or the data decoder 170 may, e.g., be configured
- in a first step, to split the picture in horizontal and vertical direction to obtain a first partitioning of the picture, and
- in a second step, to split the first partitioning of the picture (only) in vertical direction to obtain a second partitioning of the picture.
In Fig. 28, the picture is split hierarchically, in a first step, in the horizontal and in the vertical direction to obtain a first partitioning of the picture (e.g., a tile splitting) (Fig. 28, step 1), and, in a second step, only in the vertical direction, to obtain a second partitioning of the picture (e.g., a brick splitting) (Fig. 28, step 2).
In some further embodiments, the data encoder 110 and/or the data decoder 170 may, e.g., be configured
- in a first step, to split the picture (only) in horizontal direction to obtain a first partitioning of the picture, and
- in a second step, to split the first partitioning of the picture (only) in vertical direction to obtain a second partitioning of the picture.
In Fig. 29, the picture is split hierarchically, in a first step, only in the horizontal direction to obtain a first partitioning of the picture (e.g., a tile splitting) (Fig. 29, step 1), and, in a second step, only in the vertical direction, to obtain a second partitioning of the picture (e.g., a brick splitting) (Fig. 29, step 2).
In some yet further embodiments, the data encoder 110 and/or the data decoder 170 may, e.g., be configured
- in a first step, to split the picture (only) in vertical direction to obtain a first partitioning of the picture, and
- in a second step, to split the first partitioning of the picture (only) in horizontal direction to obtain a second partitioning of the picture.
In Fig. 30, the picture is split hierarchically, in a first step, only in the vertical direction to obtain a first partitioning of the picture (e.g., a tile splitting) (Fig. 30, step 1), and, in a second step, only in the horizontal direction, to obtain a second partitioning of the picture (e.g., a brick splitting) (Fig. 30, step 2).
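The four two-step variants above share one pattern: a first split produces a coarse partitioning of the picture, and a second split subdivides each resulting rectangle in only one direction. The following is a minimal sketch of this pattern (rectangles are represented as hypothetical (x, y, w, h) tuples; this is an illustration, not a normative procedure):

```python
def split_rect(rect, cuts, horizontal):
    """Split one (x, y, w, h) rectangle at the given cut offsets.

    horizontal=True places horizontal boundaries (the rectangle is
    divided into stacked rows, as in a brick-like splitting);
    horizontal=False places vertical boundaries (side-by-side columns).
    """
    x, y, w, h = rect
    edges = [0] + sorted(cuts) + [h if horizontal else w]
    parts = []
    for a, b in zip(edges, edges[1:]):
        if horizontal:
            parts.append((x, y + a, w, b - a))
        else:
            parts.append((x + a, y, b - a, h))
    return parts

# Step 1: split the picture in both directions (a tile-like grid).
pic = (0, 0, 8, 8)
cols = split_rect(pic, [4], horizontal=False)            # two columns
tiles = [t for c in cols for t in split_rect(c, [4], horizontal=True)]
# Step 2: split each first-step rectangle in only one direction
# (a brick-like subdivision, as in Fig. 27, step 2).
bricks = [b for t in tiles for b in split_rect(t, [2], horizontal=True)]
```

Swapping the directions used in the two steps yields the variants of Figs. 28 to 30.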
In embodiments, in the code below for a picture parameter set RBSP syntax, the brick_split_flag[i] and the num_brick_rows_minus1[i] parameters implement an exemplary way on how to split the picture hierarchically one or more times to obtain the plurality of coding areas.
Pictures may, e.g., be partitioned into slices, tiles, bricks, and CTUs.
A picture may, e.g., be divided into one or more tile rows and one or more tile columns. A tile is a sequence of CTUs that covers a rectangular region of a picture.
A tile is divided into one or more bricks, each of which consists of a number of CTU rows within the tile.
A tile that is not partitioned into multiple bricks is also referred to as a brick. However, a brick that is a true subset of a tile is not referred to as a tile.
A slice either contains a number of tiles of a picture or a number of bricks of a tile.
Two modes of slices are supported, namely the raster-scan slice mode and the rectangular slice mode. In the raster-scan slice mode, a slice contains a sequence of tiles in a tile raster scan of a picture. In the rectangular slice mode, a slice contains a number of bricks of a picture that collectively form a rectangular region of the picture. The bricks within a rectangular slice are in the order of brick raster scan of the slice.
Fig. 26 shows an example of a picture partitioned into tiles, bricks, and rectangular slices, where the picture is divided into 4 tiles (2 tile columns and 2 tile rows), 11 bricks (the top-left tile contains 1 brick, the top-right tile contains 5 bricks, the bottom-left tile contains 2 bricks, and the bottom-right tile contains 3 bricks), and 4 rectangular slices.
In the following, brick_split_flag[i] equal to 1 specifies that the i-th tile is divided into two or more bricks. brick_split_flag[i] equal to 0 specifies that the i-th tile is not divided into two or more bricks. When not present, the value of brick_split_flag[i] is inferred to be equal to 0.
Moreover, in the following, num_brick_rows_minus1[i] plus 1 specifies the number of bricks partitioning the i-th tile when uniform_brick_spacing_flag[i] is equal to 0. When present, the value of num_brick_rows_minus1[i] shall be in the range of 1 to RowHeight[i] - 1, inclusive. If brick_split_flag[i] is equal to 0, the value of num_brick_rows_minus1[i] is inferred to be equal to 0. Otherwise, when uniform_brick_spacing_flag[i] is equal to 1, the value of num_brick_rows_minus1[i] is inferred (e.g., as specified below with respect to the CTB raster scanning, the tile scanning, and the brick scanning process).
Therefore, brick_split_flag[i] and num_brick_rows_minus1[i] implement an exemplary way on how to split the picture hierarchically one or more times to obtain the plurality of coding areas. A tile may, e.g., be divided into two or more bricks, each of which may, e.g., consist of a number of CTU rows.
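The inference rules stated above for num_brick_rows_minus1[i] can be summarized as a small selection function. This is only a sketch: the inferred value for the uniform-spacing case, which the text defers to the brick scanning process, is simply passed in as an argument here.

```python
def num_brick_rows_minus1(brick_split_flag, uniform_brick_spacing_flag,
                          signalled_value, inferred_uniform_value):
    """Selection logic for num_brick_rows_minus1[i] as described above.

    brick_split_flag == 0            -> inferred to be 0 (tile not split)
    uniform_brick_spacing_flag == 1  -> inferred via the brick scanning
                                        process (supplied by the caller)
    otherwise                        -> the explicitly signalled value
    """
    if brick_split_flag == 0:
        return 0
    if uniform_brick_spacing_flag == 1:
        return inferred_uniform_value
    return signalled_value
```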
In more detail, the CTB raster scanning, the tile scanning, and the brick scanning process may, e.g., be conducted as follows:
The list colWidth[i] for i ranging from 0 to num_tile_columns_minus1, inclusive, specifying the width of the i-th tile column in units of CTBs, is derived, and when uniform_tile_spacing_flag is equal to 1, the value of num_tile_columns_minus1 is inferred, as follows:
The list RowHeight[ j ] for j ranging from 0 to num_tile_rows_minus1, inclusive, specifying the height of the j-th tile row in units of CTBs, is derived, and when uniform_tile_spacing_flag is equal to 1, the value of num_tile_rows_minus1 is inferred, as follows:
The list tileColBd[i] for i ranging from 0 to num_tile_columns_minus1+ 1, inclusive, specifying the location of the i-th tile column boundary in units of CTBs, is derived as follows:
The list tileRowBd[j] for j ranging from 0 to num_tile_rows_minus1 + 1, inclusive, specifying the location of the j-th tile row boundary in units of CTBs, is derived as follows:
The variable NumBricksInPic, specifying the number of bricks in a picture referring to the PPS, and the lists BrickColBd[ brickIdx ], BrickRowBd[ brickIdx ], BrickWidth[ brickIdx ], and BrickHeight[ brickIdx ] for brickIdx ranging from 0 to NumBricksInPic - 1, inclusive, specifying the locations of the vertical brick boundaries in units of CTBs, the locations of the horizontal brick boundaries in units of CTBs, the widths of the bricks in units of CTBs, and the heights of the bricks in units of CTBs, are derived, and for each i ranging from 0 to NumTilesInPic - 1, inclusive, when uniform_brick_spacing_flag[i] is equal to 1, the value of num_brick_rows_minus1[i] is inferred, as follows:
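Although the exact derivation formulas are not reproduced here, the uniform-spacing case is commonly derived by distributing the picture's CTB columns (or rows) as evenly as possible, with the i-th boundary at (i * picture size in CTBs) / (number of parts), rounded down. The following sketch illustrates that convention (an assumption modeled on HEVC/VVC-style derivations, not a quotation of the deferred text):

```python
def uniform_boundaries(pic_size_in_ctbs, num_parts):
    """Evenly spaced boundaries in CTB units; part sizes differ by
    at most one CTB."""
    return [(i * pic_size_in_ctbs) // num_parts
            for i in range(num_parts + 1)]

def uniform_sizes(pic_size_in_ctbs, num_parts):
    """Widths (or heights) of the parts between consecutive boundaries."""
    bd = uniform_boundaries(pic_size_in_ctbs, num_parts)
    return [b - a for a, b in zip(bd, bd[1:])]

# e.g. 10 CTB columns split into 3 uniform tile columns:
# boundaries [0, 3, 6, 10] give widths [3, 3, 4].
```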
Claims
1. A video encoder (101) for encoding a picture by generating an encoded video signal, comprising:
a data encoder (110) configured for encoding a picture of a video into encoded picture data, wherein the data encoder (110) is moreover configured to generate indication data, and
a signal generator (120) configured for generating the encoded video signal comprising the encoded picture data and the indication data,
wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein the data encoder (110) is configured to encode the picture depending on the plurality of coding areas, and wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises information on the plurality of coding areas,
wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein the data encoder (110) is configured to encode the picture depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
2. A video encoder (101) according to claim 1,
wherein the data encoder (110) is configured to partition the picture into the plurality of coding areas.
3. A video encoder (101) according to claim 1 or 2,
wherein each coding area of the plurality of coding areas extends rectangularly within the picture, and
wherein each coding tree unit of the one or more coding tree units of each of the plurality of coding areas extends rectangularly within the picture.
4. A video encoder (101) according to claim 3,
wherein each of the plurality of coding tree units has a horizontal position within the picture and a vertical position within the picture,
wherein a first coding area of the plurality of coding areas comprises a first coding tree unit having a first vertical position being identical to a second vertical position of a different second coding tree unit of a different second coding area of the plurality of coding areas, and a third coding tree unit of the first coding area has a third vertical position being different from the vertical position of any other coding tree unit of the second coding area, and a fourth coding tree unit of the second coding area has a fourth vertical position being different from the vertical position of any other coding tree unit of the first coding area, or
wherein the first coding area of the plurality of coding areas comprises the first coding tree unit having a first horizontal position being identical to a second horizontal position of the different second coding tree unit of the different second coding area of the plurality of coding areas, and the third coding tree unit of the first coding area has a third horizontal position being different from the horizontal position of any other coding tree unit of the second coding area, and the fourth coding tree unit of the second coding area has a fourth horizontal position being different from the horizontal position of any other coding tree unit of the first coding area.
5. A video encoder (101) according to claim 3 or 4,
wherein each coding area of the plurality of coding areas exhibits a spatial characteristic comprising a position, a width and a height of said coding area, wherein the width and the height of said coding area depend on the rectangular extension of said coding area, and wherein the position of said coding area depends on the location of said coding area within the picture.
6. A video encoder (101) according to claim 5,
wherein a first height of a first one of the plurality of coding areas is different from a second height of a second one of the plurality of coding areas, or
wherein a first width of the first one of the plurality of coding areas is different from a second width of the second one of the plurality of coding areas.
7. A video encoder (101) according to claim 5 or 6,
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises information on the spatial characteristic of each coding area of the plurality of coding areas.
8. A video encoder (101) according to claim 7,
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises the position, the width and the height of each coding area of the plurality of coding areas.
9. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to encode image data of a picture portion of each of the plurality of coding areas independently from encoding the image data of the picture portion of any other coding area of the plurality of coding areas to obtain the encoded picture data.
10. A video encoder (101) according to one of claims 1 to 8,
wherein the data encoder (110) is configured to encode a picture portion by encoding image data of the picture portion within each coding area of the plurality of coding areas to obtain the encoded picture data,
wherein the data encoder (110) is configured to encode the image data of the picture portion of at least one of the plurality of coding areas such that encoding the image data of said at least one of the plurality of coding areas depends on the encoding of the image data of at least another one of the plurality of coding areas.
11. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to determine the coding order for each one of the one or more coding areas which comprise two or more coding tree units.
12. A video encoder (101) according to claim 11,
wherein the data encoder (110) is configured to determine the coding order for each one of the one or more coding areas by selecting a scan order from two or more scan orders for each one of the one or more coding areas.
13. A video encoder (101) according to one of the preceding claims,
wherein the signal generator (120) is configured to generate the encoded video signal, such that the encoded video signal comprises a bitstream, wherein the bitstream comprises the encoded picture data and the indication data.
14. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises information for each coding area on whether an address of a top left coding tree unit is specified.
15. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises information on a number of the plurality of coding areas or the number of the plurality of coding areas minus 1 or the number of the plurality of coding areas minus 2.
16. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas which succeeds another one of the plurality of coding areas whether said one of the plurality of coding areas depends on said another one of the plurality of coding areas.
17. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates whether exactly one slice of a plurality of slices is assigned to exactly one coding area of the plurality of coding areas.
18. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates whether the indication data comprises information on how to scan within each of the one or more coding tree units of the plurality of coding areas.
19. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates how to scan within each of the one or more coding tree units of the plurality of coding areas.
20. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for each of the plurality of coding areas whether the coding area comprises more than one coding tree unit.
21. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a leftmost coding tree unit or whether the coding tree unit scan is started with a rightmost coding tree unit.
22. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a top coding tree unit row of the coding area or whether the coding tree unit scan is started with a bottom coding tree unit row of the coding area.
23. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started in a horizontal direction or whether a coding tree unit scan is started in a vertical direction.
24. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises non-contiguous information which indicates whether at least one of the plurality of coding areas encloses another one of the plurality of coding areas, or whether none of the plurality of coding areas encloses another one of the coding areas.
25. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas a coding area width in coding tree units that specifies a number of coding tree units that are arranged in a horizontal direction within one of the coding areas.
26. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for one of the plurality of coding areas a coding area height in coding tree units that specifies a number of coding tree units that are arranged in a vertical direction within one of the coding areas.
27. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in horizontal direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the horizontal direction.
28. A video encoder (101) according to claim 27,
wherein said coding area comprises a plurality of last coding tree units in the horizontal direction, said last coding tree unit in the horizontal direction being one of said plurality of last coding tree units in the horizontal direction,
wherein, if said last coding tree unit in the horizontal direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the horizontal direction, each of the plurality of last coding tree units in the horizontal direction has same width.
29. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in vertical direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the vertical direction.
30. A video encoder (101) according to claim 29,
wherein said coding area comprises a plurality of last coding tree units in the vertical direction, said last coding tree unit in the vertical direction being one of said plurality of last coding tree units in the vertical direction,
wherein, if said last coding tree unit in the vertical direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the vertical direction, each of the plurality of last coding tree units in the vertical direction has same height.
31. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates for a coding area whether a start address indicating one of the one or more coding tree units of said coding area is indicated, or whether the start address indicating said one of the one or more coding tree units of said coding area is not indicated.
32. A video encoder (101) according to one of claims 1 to 23,
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times.
33. A video encoder (101) according to claim 32,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates a plurality of coding area split positions.
34. A video encoder (101) according to claim 33,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates the plurality of coding area split positions as an ordered sequence.
35. A video encoder (101) according to claim 33 or 34,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates the plurality of coding area split positions as a plurality of coding area split position values, wherein each of the plurality of coding area split position values depends on a width of the picture or depends on a height of the picture.
36. A video encoder (101) according to one of claims 32 to 35,
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times.
37. A video encoder (101) according to one of claims 1 to 23,
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises one or more area column stop flags for a coding area of the one or more coding areas, wherein, if an
area column stop flag of a coding area of the one or more coding areas is set to a stop value, said area column stop flag indicates a width of said coding area, or
wherein the data encoder (110) is configured to generate the indication data such that the information on the plurality of coding areas comprises one or more area line stop flags for a coding area of the one or more coding areas, wherein, if an area line stop flag of a coding area of the one or more coding areas is set to a stop value, said area line stop flag indicates a height of said coding area.
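The stop-flag signalling of claim 37 can be sketched decoder-side as follows. This is a hypothetical helper assuming one flag per CTU column and a stop value of 1; the claims do not fix either choice:

```python
def area_width_from_column_stop_flags(column_stop_flags: list[int],
                                      stop_value: int = 1) -> int:
    # Walk the flags column by column; the first flag carrying the stop
    # value marks the right border of the coding area, so the width in
    # coding tree units is that (one-based) column index.
    for index, flag in enumerate(column_stop_flags):
        if flag == stop_value:
            return index + 1
    raise ValueError("no column stop flag set to the stop value")
```

Area line stop flags would determine the height of the coding area in the same way, row by row.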
38. A video encoder (101) according to one of claims 1 to 23,
wherein the data encoder (110) is configured to generate the indication data so that the indication data indicates whether an explicit signalling mode is active or whether the explicit signalling mode is inactive,
wherein, if the explicit signalling mode is active, and if a coding tree unit of the one or more coding tree units of the coding area is located at a picture boundary of the picture, the data encoder (110) is configured to generate the indication data so that the indication data comprises at least one of an area column stop flag and an area line stop flag for said coding tree unit, and
wherein, if the explicit signalling mode is inactive, and if said coding tree unit of the one or more coding tree units of the coding area is located at said picture boundary of the picture, the data encoder (110) is configured to generate the indication data so that the indication data does not comprise the area column stop flag for said coding tree unit and/or does not comprise the area line stop flag for said coding tree unit.
39. A video encoder (101) according to one of the preceding claims,
wherein the coding order of each coding area of the one or more coding areas which comprises two or more coding tree units depends on a raster scan,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates that the raster scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units.
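The raster-scan coding order of claim 39 can be sketched as a simple enumeration of CTU positions within a coding area (a minimal sketch; positions are given as zero-based (x, y) CTU coordinates):

```python
def raster_scan_order(area_width: int, area_height: int) -> list[tuple[int, int]]:
    # CTUs are visited left to right within a CTU row, and rows are
    # visited top to bottom.
    return [(x, y) for y in range(area_height) for x in range(area_width)]
```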
40. A video encoder (101) according to one of claims 1 to 38,
wherein the coding order of a coding area of the one or more coding areas which comprises two or more coding tree units depends on a scan order that depends on a slope indicating an angle,
wherein, after encoding a first one of the two or more coding tree units of said coding area, the data encoder (110) is configured to determine a second one of the two or more coding tree units within the coding area depending on a position of the first one of the coding tree units of said coding area, depending on the other coding tree units of said coding area which have not been encoded and depending on the slope, and
wherein the data encoder (110) is configured to encode said second one of the two or more coding tree units.
41. A video encoder (101) according to claim 40,
wherein the data encoder (110) is configured to determine a second one of the two or more coding tree units within the coding area such that an arrow, being defined by a starting point and by the slope, the starting point being a position of the first one of the coding tree units within the coding area, points to a position of the second one of the two or more coding tree units.
42. A video encoder (101) according to claim 40 or 41,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates that a diagonal scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units.
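The slope-dependent diagonal scan of claims 40 to 42 can be sketched as follows. This is an assumed realization in which CTUs sharing the same diagonal index are coded together; for a slope of -1 this reproduces the familiar anti-diagonal scan, but the claims do not fix the slope value:

```python
def diagonal_scan_order(area_width: int, area_height: int,
                        slope: int = -1) -> list[tuple[int, int]]:
    # Group the CTUs by the diagonal index y - slope * x; for slope = -1
    # this index is x + y, i.e. the anti-diagonal scan. Within one
    # diagonal, CTUs are taken in increasing row order.
    positions = [(x, y) for y in range(area_height) for x in range(area_width)]
    return sorted(positions, key=lambda p: (p[1] - slope * p[0], p[1]))
```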
43. A video encoder (101) according to one of claims 1 to 36,
wherein the coding order of a coding area which comprises five or more coding tree units depends on a scan order, said scan order depending on a first angle being 0° and depending on a second angle being 135°, and depending on a third angle being 45°,
wherein, after encoding a first coding tree unit of said coding area, the data encoder (110) is configured to determine a second coding tree unit of said coding area, such that a first arrow that has a first starting point at a position of the first coding tree unit within said coding area encloses the first angle being 0° with respect to a predefined direction, and such that the first arrow points to said second coding tree unit within the coding area, and the data encoder (110) is configured to encode said second coding tree unit of said coding area,
wherein, after encoding the second coding tree unit of said coding area, the data encoder (110) is configured to determine a third coding tree unit of said coding area, such that a second arrow that has a second starting point at a position of the second coding tree unit within said coding area encloses the second angle being 135° with respect to said predefined direction, and such that the second arrow points to said third coding tree unit within the coding area, and the data encoder (110) is configured to encode said third coding tree unit of said coding area,
wherein, after encoding the third coding tree unit of said coding area, the data encoder (110) is configured to determine a fourth coding tree unit of said coding area, such that a third arrow that has a third starting point at a position of the third coding tree unit within said coding area encloses the first angle being 0° with respect to said predefined direction, and such that the third arrow points to said fourth coding tree unit within the coding area, and the data encoder (110) is configured to encode said fourth coding tree unit of said coding area.
44. A video encoder (101) according to claim 43,
wherein, after encoding the fourth coding tree unit of said coding area, the data encoder (110) is configured to determine a fifth coding tree unit of said coding area, such that a fourth arrow that has a fourth starting point at a position of the fourth coding tree unit within said coding area encloses the third angle being 45° with respect to said predefined direction, and such that the fourth arrow points to said fifth coding tree unit within the coding area, and the data encoder (110) is configured to encode said fifth coding tree unit of said coding area.
45. A video encoder (101) according to claim 43 or 44,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates that a z-scan has been employed to encode each coding area of the one or more coding areas which comprises five or more coding tree units.
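The angle sequence of claims 43 to 45 (0°, 135°, 0°, 45°) corresponds to the stepping pattern of a z-scan. A minimal sketch via the Morton (bit-interleaving) index over zero-based (x, y) CTU coordinates follows; for example, for a 4x2 area the steps (0,0)→(1,0), (1,0)→(0,1), (0,1)→(1,1) and (1,1)→(2,0) are the 0°, 135°, 0° and 45° arrows respectively:

```python
def z_scan_order(area_width: int, area_height: int) -> list[tuple[int, int]]:
    def morton(x: int, y: int, bits: int = 16) -> int:
        # Interleave the bits of x and y into a single Morton index.
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
        return code
    positions = [(x, y) for y in range(area_height) for x in range(area_width)]
    return sorted(positions, key=lambda p: morton(p[0], p[1]))
```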
46. A video encoder (101) according to one of claims 1 to 36,
wherein the coding order of a coding area of the one or more coding areas which comprises two or more coding tree units depends on a scan order comprising one or more scan directions,
wherein, after encoding a first coding tree unit of said coding area, the data encoder (110) is configured to determine a second coding tree unit of said coding area depending on a position of the first coding tree unit and depending on a first one of the one or more scan directions, and the data encoder (110) is configured to encode said second coding tree unit of said coding area.
47. A video encoder (101) according to claim 46,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates said scan order comprising the one or more scan directions.
48. A video encoder (101) according to claim 46 or 47,
wherein the data encoder (110) is configured to derive a scan direction of the one or more scan directions by evaluating an adjacent neighborhood of a first coding tree unit of said coding area,
wherein, if the first coding tree unit has a coding tree unit neighbor of the two or more coding tree units adjacent to its bottom boundary, or if the first coding tree unit is located in a bottom coding tree unit line of the picture, and has no adjacent neighbor to the right and is not located in a rightmost coding tree unit column, then the scan direction is a right upwards scan,
wherein, if the first coding tree unit has a coding tree unit neighbor of the two or more coding tree units adjacent to its bottom boundary, or if the first coding tree unit is located in a bottom coding tree unit line of the picture, and has an adjacent neighbor to the right or is located in a rightmost column of the picture, then the scan direction is a left upwards scan,
wherein, if a coding tree unit neighbor of the two or more coding tree units right to the first coding tree unit is available, or if the first coding tree unit is located in the rightmost column of the picture, then the scan direction is left-downwards,
wherein, otherwise, the scan direction is right-downwards.
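The four derivation rules of claim 48 can be transcribed directly; a minimal sketch (the direction names are assumptions, the branch logic follows the claim text):

```python
def derive_scan_direction(has_bottom_neighbor: bool, in_bottom_ctu_line: bool,
                          has_right_neighbor: bool, in_rightmost_column: bool) -> str:
    # The adjacent neighborhood of the current CTU selects the next
    # scan direction, tested in the order the claim states the rules.
    if has_bottom_neighbor or in_bottom_ctu_line:
        if not has_right_neighbor and not in_rightmost_column:
            return "right-upwards"
        return "left-upwards"
    if has_right_neighbor or in_rightmost_column:
        return "left-downwards"
    return "right-downwards"
```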
49. A video encoder (101) according to one of claims 46 to 48,
wherein the data encoder (110) is configured to generate the indication data such that the indication data indicates an index which indicates a selected scan direction of the one or more scan directions.
50. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to generate the indication data such that the indication data comprises information on an error resiliency of a coded video sequence.
51. A video encoder (101) according to claim 50,
wherein the information on the error resiliency indicates one of three or more different states on the error resiliency of the coded video sequence.
52. A video encoder (101) according to claim 51,
wherein a first state of the three or more different states indicates that an access unit is not error resilient,
wherein a second state of the three or more different states indicates that a first plurality of access units of a picture parameter set is not error resilient, and
wherein a third state of the three or more different states indicates that a second plurality of access units of a sequence parameter set is not error resilient.
53. A video encoder (101) according to claim 50,
wherein the information on the error resiliency indicates one of four or more different states on the error resiliency of the coded video sequence,
wherein a first state of the four or more different states indicates that the error resiliency is harmed on a picture-level and is harmed on a multiple-picture-level and is harmed on a sequence level,
wherein a second state of the four or more different states indicates that the error resiliency is harmed on the picture-level and is harmed on the multiple-picture-level, but is not harmed on the sequence level,
wherein a third state of the four or more different states indicates that the error resiliency is harmed on the picture-level, but that the error resiliency is not harmed on the multiple-picture-level and is not harmed on the sequence level, and
wherein a fourth state of the four or more different states indicates that the error resiliency is not harmed on the picture-level and is not harmed on the multiple-picture-level and is not harmed on the sequence level.
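The four-state indication of claim 53 can be sketched as an enumeration; the numeric values and names here are assumptions, only the states and the levels they mark as harmed follow the claim text:

```python
from enum import IntEnum

class ErrorResiliency(IntEnum):
    HARMED_PICTURE_MULTI_PICTURE_SEQUENCE = 0  # first state
    HARMED_PICTURE_MULTI_PICTURE = 1           # second state
    HARMED_PICTURE_ONLY = 2                    # third state
    NOT_HARMED = 3                             # fourth state

def harmed_levels(state: ErrorResiliency) -> set:
    # Levels on which error resiliency is harmed for a given state.
    return {ErrorResiliency.HARMED_PICTURE_MULTI_PICTURE_SEQUENCE:
                {"picture", "multiple-picture", "sequence"},
            ErrorResiliency.HARMED_PICTURE_MULTI_PICTURE:
                {"picture", "multiple-picture"},
            ErrorResiliency.HARMED_PICTURE_ONLY: {"picture"},
            ErrorResiliency.NOT_HARMED: set()}[state]
```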
54. A video encoder (101) according to one of the preceding claims,
wherein the data encoder (110) is configured to encode a coding tree unit of the plurality of coding tree units being located within the picture depending on one or more coding tree units of eight neighboring coding tree units of the plurality of coding tree units being located within the picture, wherein the eight neighboring coding tree units are neighbored to said coding tree unit.
55. A video encoder (101) according to claim 54,
wherein the data encoder (110) is configured to encode said coding tree unit of the plurality of coding tree units being located within the picture by shifting a coding tree unit of the eight neighboring coding tree units into another one of the eight neighboring coding tree units.
56. A video encoder (101) according to claim 54,
wherein said eight neighboring coding tree units are a first neighborhood,
wherein the data encoder (110) is configured to encode said coding tree unit of the plurality of coding tree units by shifting a third coding tree unit of another eight neighboring coding tree units of a second neighborhood into a second coding tree unit of the eight neighboring coding tree units of the first neighborhood, if said second coding tree unit of the eight neighboring coding tree units of the first neighborhood is unavailable, said another eight neighboring coding tree units of the second neighborhood being neighbored to said second coding tree unit.
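The substitution of claims 55 and 56 can be sketched as a fallback lookup; this is a hypothetical helper in which neighborhoods are dictionaries keyed by direction and the choice of fallback position is an assumption:

```python
def fill_unavailable_neighbor(first_neighborhood: dict, second_neighborhood: dict,
                              position: str, fallback_position: str) -> dict:
    # If the CTU at `position` in the first (eight-CTU) neighborhood is
    # unavailable, shift a CTU of the second neighborhood -- the neighbors
    # of that unavailable CTU -- into its place.
    if first_neighborhood.get(position) is None:
        first_neighborhood[position] = second_neighborhood.get(fallback_position)
    return first_neighborhood
```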
57. A video decoder (151) for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video, comprising:
an interface (160) configured for receiving the encoded video signal,
a data decoder (170) configured for reconstructing the picture of the video by decoding the encoded picture data using the indication data,
wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein, using the indication data, the data decoder (170) is configured to decode the encoded picture data depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas,
wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein, using the indication data, the data decoder (170) is configured to decode the encoded picture data depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
58. A video decoder (151) according to claim 57,
wherein the video decoder (151) is configured to output the picture of the video on an output device.
59. A video decoder (151) according to claim 57 or 58,
wherein each coding area of the plurality of coding areas extends rectangularly within the picture, and
wherein each coding tree unit of the one or more coding tree units of each of the plurality of coding areas extends rectangularly within the picture.
60. A video decoder (151) according to claim 59,
wherein each of the plurality of coding tree units has a horizontal position within the picture and a vertical position within the picture,
wherein a first coding area of the plurality of coding areas comprises a first coding tree unit having a first vertical position being identical to a second vertical position of a different second coding tree unit of a different second coding area of the plurality of coding areas, and a third coding tree unit of the first coding area has a third vertical position being different from the vertical position of any other coding tree unit of the second coding area, and a fourth coding tree unit of the second coding area has a fourth vertical position being different from the vertical position of any other coding tree unit of the first coding area, or
wherein the first coding area of the plurality of coding areas comprises the first coding tree unit having a first horizontal position being identical to a second horizontal position of the different second coding tree unit of the different second coding area of the plurality of coding areas, and the third coding tree unit of the first coding area has a third horizontal position being different from the horizontal position of any other coding tree unit of the second coding area, and the fourth coding tree unit of the second coding area has a fourth horizontal position being different from the horizontal position of any other coding tree unit of the first coding area.
61. A video decoder (151) according to claim 59 or 60,
wherein each coding area of the plurality of coding areas exhibits a spatial characteristic comprising a position, a width and a height of said coding area, wherein the width and the height of said coding area depend on the rectangular extension of said coding area, and wherein the position of said coding area depends on the location of said coding area within the picture,
wherein the data decoder (170) is configured to decode the encoded picture data depending on the spatial characteristic of the plurality of coding areas.
62. A video decoder (151) according to claim 61,
wherein a first height of a first one of the plurality of coding areas is different from a second height of a second one of the plurality of coding areas, or
wherein a first width of the first one of the plurality of coding areas is different from a second width of the second one of the plurality of coding areas.
63. A video decoder (151) according to claim 61 or 62,
wherein the data decoder (170) is configured to decode the encoded picture data using information within the indication data on the spatial characteristic of the plurality of coding areas.
64. A video decoder (151) according to claim 63,
wherein the data decoder (170) is configured to decode the encoded picture data using information within the indication data on the plurality of coding areas, which comprises the position, the width and the height of each coding area of the plurality of coding areas.
65. A video decoder (151) according to one of claims 57 to 64,
wherein the data decoder (170) is configured to decode the encoded picture data of each of the plurality of coding areas independently from decoding the encoded picture data of any other coding area of the plurality of coding areas.
66. A video decoder (151) according to one of claims 57 to 64,
wherein the data decoder (170) is configured to decode the encoded picture data of at least one of the plurality of coding areas such that decoding the encoded picture data of said at least one of the plurality of coding areas depends on the decoding of the encoded picture data of at least another one of the plurality of coding areas.
67. A video decoder (151) according to one of claims 57 to 66,
wherein the data decoder (170) is configured to decode the encoded picture data using a coding order for each one of the one or more coding areas which comprise two or more coding tree units, said coding order for each one of the one or more coding areas being indicated by the indication data.
68. A video decoder (151) according to claim 67,
wherein the data decoder (170) is configured to decode the encoded picture data using an indication on a scan order from two or more scan orders for each one of the one or more coding areas, wherein the indication data comprises the indication on the scan order for each one of the one or more coding areas.
69. A video decoder (151) according to one of claims 57 to 68,
wherein the interface (160) is configured to receive a bitstream, wherein the bitstream comprises the encoded picture data and the indication data.
70. A video decoder (151) according to one of claims 57 to 69,
wherein the data decoder (170) is configured to decode the encoded picture data using information for each coding area on whether an address of a top left coding tree unit is specified, wherein the indication data comprises said information.
71. A video decoder (151) according to one of claims 57 to 70,
wherein the data decoder (170) is configured to decode the encoded picture data using information on a number of the plurality of coding areas or the number of the plurality of coding areas minus 1 or the number of the plurality of coding areas minus 2, wherein the indication data comprises said information.
72. A video decoder (151) according to one of claims 57 to 71,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas which succeeds another one of the plurality of coding areas whether said one of the plurality of coding areas depends on said another one of the plurality of coding areas, wherein the indication data comprises said information.
73. A video decoder (151) according to one of claims 57 to 72,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates whether exactly one slice of a plurality of slices is assigned to exactly one coding area of the plurality of coding areas, wherein the indication data comprises said information.
74. A video decoder (151) according to one of claims 57 to 73,
wherein the data decoder (170) is configured to decode the encoded picture data using information on whether the indication data comprises information on how to scan within each of the one or more coding tree units of the plurality of coding areas, wherein the indication data comprises said information.
75. A video decoder (151) according to one of claims 57 to 74,
wherein the data decoder (170) is configured to decode the encoded picture data using information on how to scan within each of the one or more coding tree units of the plurality of coding areas, wherein the indication data comprises said information.
76. A video decoder (151) according to one of claims 57 to 75,
wherein the data decoder (170) is configured to decode the encoded picture data using information that indicates for each of the plurality of coding areas whether the coding area comprises more than one coding tree unit, wherein the indication data comprises said information.
77. A video decoder (151) according to one of claims 57 to 76,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a leftmost coding tree unit or whether the coding tree unit scan is started with a rightmost coding tree unit, wherein the indication data comprises said information.
78. A video decoder (151) according to one of claims 57 to 77,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a top coding tree unit row of the coding area or whether the coding tree unit scan is started with a bottom coding tree unit row of the coding area, wherein the indication data comprises said information.
79. A video decoder (151) according to one of claims 57 to 78,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas whether a coding tree unit scan is started in a horizontal direction or whether a coding tree unit scan is started in a vertical direction, wherein the indication data comprises said information.
80. A video decoder (151) according to one of claims 57 to 79,
wherein the data decoder (170) is configured to decode the encoded picture data using non-contiguous information which indicates whether at least one of the plurality of coding areas encloses another one of the plurality of coding areas, or whether none of the plurality of coding areas encloses another one of the plurality of coding areas, wherein the indication data comprises said information.
81. A video decoder (151) according to one of claims 57 to 80,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas a coding area width in coding tree units that specifies a number of coding tree units that are arranged in a horizontal direction within one of the coding areas, wherein the indication data comprises said information.
82. A video decoder (151) according to one of claims 57 to 81,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for one of the plurality of coding areas a coding area height in coding tree units that specifies a number of coding tree units that are arranged in a vertical direction within one of the coding areas, wherein the indication data comprises said information.
83. A video decoder (151) according to one of claims 57 to 82,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for a coding area of the plurality of coding areas whether a last coding tree unit in horizontal direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the horizontal direction, wherein the indication data comprises said information.
84. A video decoder (151) according to claim 83,
wherein said coding area comprises a plurality of last coding tree units in the horizontal direction, said last coding tree unit in the horizontal direction being one of said plurality of last coding tree units in the horizontal direction,
wherein, if said last coding tree unit in the horizontal direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the horizontal direction, each of the plurality of last coding tree units in the horizontal direction has the same width.
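The partial last CTU column described by claims 83 and 84 arises whenever the area width in samples is not a multiple of the CTU size; a minimal sketch of that width computation (parameter names are assumptions):

```python
def last_ctu_width(area_width_in_samples: int, ctu_size: int) -> int:
    # The last CTU column is narrower exactly when the area width is not a
    # multiple of the CTU size; all CTUs of that column then share this width.
    remainder = area_width_in_samples % ctu_size
    return remainder if remainder else ctu_size
```

The vertical case of claims 85 and 86 is analogous with the area height and a last CTU row.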
85. A video decoder (151) according to one of claims 57 to 84,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates for a coding area of the plurality of coding areas whether a last coding tree unit in vertical direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the vertical direction, wherein the indication data comprises said information.
86. A video decoder (151) according to claim 85,
wherein said coding area comprises a plurality of last coding tree units in the vertical direction, said last coding tree unit in the vertical direction being one of said plurality of last coding tree units in the vertical direction,
wherein, if said last coding tree unit in the vertical direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the vertical direction, each of the plurality of last coding tree units in the vertical direction has the same height.
87. A video decoder (151) according to one of claims 57 to 86,
wherein the data decoder (170) is configured to decode the encoded picture data using an indication for a coding area whether a start address indicating one of the one or more coding tree units of said coding area is indicated, or whether the start address indicating said one of the one or more coding tree units of said coding area is not indicated,
wherein, if the start address indicating said one of the one or more coding tree units of said coding area is not indicated, the data decoder (170) is configured to determine the start address indicating said one of the one or more coding tree units of said coding area depending on a coding area width and depending on a coding area height of said coding area and depending on a coding area scan direction of said coding area.
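The implicit start-address derivation of claim 87 can be sketched as follows: when no start address is signalled, the first CTU is the corner of the coding area from which the scan direction proceeds. The direction names and corner mapping are assumptions for illustration:

```python
def derive_start_ctu(area_x: int, area_y: int, area_width: int, area_height: int,
                     scan_direction: str) -> tuple:
    # Map each assumed scan direction to the coding-area corner at which
    # that scan must begin (CTU coordinates, zero-based).
    corners = {
        "right-downwards": (area_x, area_y),                                  # top-left
        "left-downwards": (area_x + area_width - 1, area_y),                  # top-right
        "right-upwards": (area_x, area_y + area_height - 1),                  # bottom-left
        "left-upwards": (area_x + area_width - 1, area_y + area_height - 1),  # bottom-right
    }
    return corners[scan_direction]
```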
88. A video decoder (151) according to one of claims 57 to 79,
wherein the data decoder (170) is configured to decode the encoded picture data using information on how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times, wherein the indication data comprises said information.
89. A video decoder (151) according to claim 88,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates a plurality of coding area split positions, wherein the indication data comprises said information.
90. A video decoder (151) according to claim 89,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates the plurality of coding area split positions as an ordered sequence, wherein the indication data comprises said information.
91. A video decoder (151) according to claim 89 or 90,
wherein the data decoder (170) is configured to decode the encoded picture data using information which indicates the plurality of coding area split positions as a plurality of coding area split position values, wherein each of the plurality of coding area split position values depends on a width of the picture or depends on a height of the picture, wherein the indication data comprises said information.
92. A video decoder (151) according to one of claims 88 to 91,
wherein the data decoder (170) is configured to decode the encoded picture data using information on the plurality of coding areas, said information comprising information on how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times, wherein the indication data comprises said information.
93. A video decoder (151) according to one of claims 57 to 79,
wherein the data decoder (170) is configured to decode the encoded picture data using information on the plurality of coding areas, said information comprising one or more area column stop flags for a coding area of the one or more coding areas, wherein, if an area column stop flag of a coding area of the one or more coding areas is set to a stop value, said area column stop flag indicates a width of said coding area, or
wherein the data decoder (170) is configured to decode the encoded picture data using information on the plurality of coding areas, said information comprising one or more area line stop flags for a coding area of the one or more coding areas, wherein, if an area line stop flag of a coding area of the one or more coding areas is set to a stop value, said area line stop flag indicates a height of said coding area.
94. A video decoder (151) according to one of claims 57 to 79,
wherein the data decoder (170) is configured to decode the encoded picture data using an indication within the indication data that indicates whether an explicit signalling mode is active or whether the explicit signalling mode is inactive,
wherein, if the explicit signalling mode is inactive, and if a coding tree unit of the one or more coding tree units of the coding area is located at a picture boundary of the picture, the data decoder (170) is configured to decode the encoded picture data depending on the picture boundary of the picture.
95. A video decoder (151) according to one of claims 57 to 94,
wherein the coding order of each coding area of the one or more coding areas which comprises two or more coding tree units depends on a raster scan,
wherein the data decoder (170) is configured to decode the encoded picture data depending on the raster scan,
wherein the data decoder (170) is configured to receive the information that indicates that the raster scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises said information.
96. A video decoder (151) according to one of claims 57 to 94,
wherein the coding order of a coding area of the one or more coding areas which comprises two or more coding tree units depends on a scan order that depends on a slope indicating an angle, wherein the indication data comprises said information on the coding area,
wherein, after decoding a first one of the two or more coding tree units of said coding area, the data decoder (170) is configured to determine a second one of the two or more coding tree units within the coding area depending on a position of the first one of the coding tree units of said coding area, depending on the other coding tree units of said coding area which have not been decoded and depending on the slope, and
wherein the data decoder (170) is configured to decode said second one of the two or more coding tree units.
97. A video decoder (151) according to claim 96,
wherein the data decoder (170) is configured to determine a second one of the two or more coding tree units within the coding area, such that an arrow, defined by a starting point and by the slope, the starting point being a position of the first one of the coding tree units within the coding area, points to a position of the second one of the two or more coding tree units.
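For illustration only (not part of the claim language), one possible reading of the slope-based determination of the next CTU in claims 96 and 97 is sketched below; the function name, the (dx, dy) representation of the slope, and the bookkeeping of undecoded CTUs are assumptions:

```python
def next_ctu(current, slope, undecoded):
    """Determine the next CTU to decode: an arrow starting at the position
    `current` with direction `slope` (a (dx, dy) step in CTU units) points
    to the candidate position; the candidate is returned only if it belongs
    to the set of not-yet-decoded CTUs of the coding area."""
    dx, dy = slope
    candidate = (current[0] + dx, current[1] + dy)
    return candidate if candidate in undecoded else None

# A slope of (1, 1) (one CTU right, one CTU down) yields a diagonal walk.
second = next_ctu((0, 0), (1, 1), {(1, 1), (2, 2)})
```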
98. A video decoder (151) according to claim 96 or 97,
wherein the data decoder (170) is configured to receive information which indicates that a diagonal scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises said information.
99. A video decoder (151) according to one of claims 57 to 92,
wherein the coding order of a coding area which comprises five or more coding tree units depends on a scan order, said scan order depending on a first angle being 0º and depending on a second angle being 135º, and depending on a third angle being 45º,
wherein, after decoding a first coding tree unit of said coding area, the data decoder (170) is configured to determine a second coding tree unit of said coding area, such that a first arrow that has a first starting point at a position of the first coding tree unit within said coding area encloses the first angle being 0º with respect to a predefined direction, and such that the first arrow points to said second coding tree unit within the coding area, and the data decoder (170) is configured to decode said second coding tree unit of said coding area,
wherein, after decoding the second coding tree unit of said coding area, the data decoder (170) is configured to determine a third coding tree unit of said coding area, such that a second arrow that has a second starting point at a position of the second coding tree unit within said coding area encloses the second angle being 135º with respect to said predefined direction, and such that the second arrow points to said third coding tree unit within the coding area, and the data decoder (170) is configured to decode said third coding tree unit of said coding area,
wherein, after decoding the third coding tree unit of said coding area, the data decoder (170) is configured to determine a fourth coding tree unit of said coding area, such that a third arrow that has a third starting point at a position of the third coding tree unit within said coding area encloses the first angle being 0º with respect to said predefined direction, and such that the third arrow points to said fourth coding tree unit within the coding area, and the data decoder (170) is configured to decode said fourth coding tree unit of said coding area.
100. A video decoder (151) according to claim 99,
wherein, after decoding the fourth coding tree unit of said coding area, the data decoder (170) is configured to determine a fifth coding tree unit of said coding area, such that a fourth arrow that has a fourth starting point at a position of the fourth coding tree unit within said coding area encloses the third angle being 45º with respect to said predefined direction, and such that the fourth arrow points to said fifth coding tree unit within the coding area, and the data decoder (170) is configured to decode said fifth coding tree unit of said coding area.
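For illustration only (not part of the claim language), the repeated angle sequence 0º, 135º, 0º, 45º of claims 99 and 100 produces a z-scan through 2x2 CTU groups. The sketch below assumes an angle convention in which x grows rightwards and y grows downwards, so that 0º is one CTU to the right, 135º is one left and one down, and 45º is one right and one up; it covers the simple case of a coding area that is two CTU rows high and an even number of CTUs wide:

```python
# Assumed angle-to-step mapping (x rightwards, y downwards).
STEP = {0: (1, 0), 135: (-1, 1), 45: (1, -1)}
ANGLES = [0, 135, 0, 45]  # repeated per 2x2 group

def z_scan_two_rows(width):
    """Z-scan of a (width x 2)-CTU coding area, width even: the repeated
    angle sequence 0, 135, 0, 45 degrees walks the CTUs in 2x2 groups."""
    order = [(0, 0)]
    x, y = 0, 0
    for i in range(2 * width - 1):
        dx, dy = STEP[ANGLES[i % 4]]
        x, y = x + dx, y + dy
        order.append((x, y))
    return order
```

For a 4x2 coding area this yields (0,0), (1,0), (0,1), (1,1), (2,0), (3,0), (2,1), (3,1): the first 2x2 group is traversed in a "z", then the 45º step jumps to the next group.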
101. A video decoder (151) according to claim 99 or 100,
wherein the data decoder (170) is configured to receive information which indicates that a z-scan has been employed to encode each coding area of the one or more coding areas which comprises five or more coding tree units, wherein the indication data comprises said information.
102. A video decoder (151) according to one of claims 57 to 92,
wherein the coding order of a coding area of the one or more coding areas which comprises two or more coding tree units depends on a scan order comprising one or more scan directions,
wherein, after decoding a first coding tree unit of said coding area, the data decoder (170) is configured to determine a second coding tree unit of said coding area depending on a position of the first coding tree unit and depending on a first one of the one or more scan directions, and the data decoder (170) is configured to decode said second coding tree unit of said coding area.
103. A video decoder (151) according to claim 102,
wherein the data decoder (170) is configured to receive information which indicates said scan order comprising the one or more scan directions, wherein the indication data comprises said information.
104. A video decoder (151) according to claim 102 or 103,
wherein the data decoder (170) is configured to derive a scan direction of the one or more scan directions by evaluating an adjacent neighborhood of a first coding tree unit of said coding area,
wherein, if the first coding tree unit has a coding tree unit neighbor of the two or more coding tree units adjacent to its bottom boundary, or if the first coding tree unit is located in a bottom coding tree unit line of the picture, and has no adjacent neighbors to the right and is not located in the rightmost coding tree unit column, then the scan direction is a right upwards scan,
wherein, if the first coding tree unit has a coding tree unit neighbor of the two or more coding tree units adjacent to its bottom boundary, or if the first coding tree unit is located in a bottom coding tree unit line of the picture, and has an adjacent neighbor to the right or is located in a rightmost column of the picture, then the scan direction is a left upwards scan,
wherein, if a coding tree unit neighbor of the two or more coding tree units right to the first coding tree unit is available, or if the first coding tree unit is located in the rightmost column of the picture, then the scan direction is left-downwards,
wherein, otherwise, the scan direction is right-downwards.
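For illustration only (not part of the claim language), the neighborhood-based derivation of the scan direction in claim 104 can be sketched as a cascade of the four conditions, evaluated in the order they appear in the claim; the function name and the boolean-flag interface are assumptions:

```python
def derive_scan_direction(has_bottom_neighbor, in_bottom_line,
                          has_right_neighbor, in_rightmost_column):
    """Derive the scan direction from the adjacent neighborhood of the
    current CTU, checking the claim's four conditions in order."""
    at_bottom = has_bottom_neighbor or in_bottom_line
    at_right = has_right_neighbor or in_rightmost_column
    if at_bottom and not at_right:
        return "right-upwards"
    if at_bottom and at_right:
        return "left-upwards"
    if at_right:
        return "left-downwards"
    return "right-downwards"
```

For example, a CTU with a decoded bottom neighbor and a free right side continues right-upwards, while a CTU with no decoded neighbors below or to the right continues right-downwards.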
105. A video decoder (151) according to one of claims 102 to 104,
wherein the data decoder (170) is configured to receive information which indicates an index which indicates a selected scan direction of the one or more scan directions, wherein the indication data comprises said information.
106. A video decoder (151) according to one of claims 57 to 105,
wherein the data decoder (170) is configured to receive information which indicates an error resiliency of a coded video sequence, and
wherein the data decoder (170) is configured to decode the encoded picture data depending on the information which indicates the error resiliency of the coded video sequence.
107. A video decoder (151) according to claim 106,
wherein the information on the error resiliency indicates one of three or more different states on the error resiliency of the coded video sequence.
108. A video decoder (151) according to claim 107,
wherein a first state of the three or more different states indicates that an access unit is not error resilient,
wherein a second state of the three or more different states indicates that a first plurality of access units of a picture parameter set is not error resilient, and
wherein a third state of the three or more different states indicates that a second plurality of access units of a sequence parameter set is not error resilient.
109. A video decoder (151) according to claim 106,
wherein the information on the error resiliency indicates one of four or more different states on the error resiliency of the coded video sequence,
wherein a first state of the four or more different states indicates that the error resiliency is harmed on a picture-level and is harmed on a multiple-picture-level and is harmed on a sequence level,
wherein a second state of the four or more different states indicates that the error resiliency is harmed on the picture-level and is harmed on the multiple-picture-level, but is not harmed on the sequence level,
wherein a third state of the four or more different states indicates that the error resiliency is harmed on the picture-level, but that the error resiliency is not harmed on the multiple-picture-level and is not harmed on the sequence level, and
wherein a fourth state of the four or more different states indicates that the error resiliency is not harmed on the picture-level and is not harmed on the multiple-picture-level and is not harmed on the sequence level.
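For illustration only (not part of the claim language), the four error-resiliency states of claim 109 can be modelled as an enumeration; the enum name, member names, and numeric values are all assumptions, not taken from the claims:

```python
from enum import IntEnum

class ErrorResiliency(IntEnum):
    """Hypothetical encoding of the four error-resiliency states."""
    HARMED_PIC_MULTIPIC_SEQ = 0  # harmed on picture, multi-picture and sequence level
    HARMED_PIC_MULTIPIC = 1      # harmed on picture and multi-picture level only
    HARMED_PIC = 2               # harmed on picture level only
    NOT_HARMED = 3               # not harmed on any level

def is_sequence_level_resilient(state):
    """Sequence-level resiliency is intact for every state but the first."""
    return state != ErrorResiliency.HARMED_PIC_MULTIPIC_SEQ
```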
110. A video decoder (151) according to one of claims 57 to 109,
wherein the data decoder (170) is configured to decode a coding tree unit of the plurality of coding tree units being located within the picture depending on one or more coding tree units of eight neighboring coding tree units of the plurality of coding tree units being located within the picture, wherein the eight neighboring coding tree units are neighbored to said coding tree unit.
111. A video decoder (151) according to claim 110,
wherein the data decoder (170) is configured to decode said coding tree unit of the plurality of coding tree units being located within the picture by shifting a coding tree unit of the eight neighboring coding tree units into another one of the eight neighboring coding tree units.
112. A video decoder (151) according to claim 110,
wherein said eight neighboring coding tree units are a first neighborhood,
wherein the data decoder (170) is configured to decode a coding tree unit of the plurality of coding tree units being located within the picture by shifting a third coding tree unit of another eight neighboring coding tree units of a second neighborhood into a second coding tree unit of the eight neighboring coding tree units of the first neighborhood, if said second coding tree unit of the eight neighboring coding tree units of the first neighborhood is unavailable, said another eight neighboring coding tree units of the second neighborhood being neighbored to said second coding tree unit.
113. A system comprising:
the video encoder (101) according to one of claims 1 to 56, and
the video decoder (151) according to one of claims 57 to 112,
wherein the video encoder (101) according to one of claims 1 to 56 is configured to generate the encoded video signal, and
wherein the video decoder (151) according to one of claims 57 to 112 is configured to decode the encoded video signal to reconstruct the picture of the video.
114. A method for encoding a picture by generating an encoded video signal, comprising:
encoding a picture of a video into encoded picture data,
generating indication data, and
generating the encoded video signal comprising the encoded picture data and the indication data,
wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein encoding the picture is conducted depending on the plurality of coding areas, and wherein generating the indication data is conducted such that the indication data comprises information on the plurality of coding areas,
wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein encoding the picture is conducted depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein generating the indication data is conducted such that the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
115. A method for decoding an encoded video signal comprising encoded picture data and indication data of a picture of a video to reconstruct the picture of the video, comprising:
receiving the encoded video signal, and
reconstructing the picture of the video by decoding the encoded picture data using the indication data,
wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein, using the indication data, decoding the encoded picture data is conducted depending on the plurality of coding areas, wherein the indication data comprises information on the plurality of coding areas,
wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein, using the indication data, decoding the encoded picture data is conducted depending on the coding order of the one or more coding areas which comprise two or more coding tree units, wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
116. A computer program for implementing the method of claim 114 or 115 when being executed on a computer or signal processor.
117. An encoded video signal encoding a picture, wherein the encoded video signal comprises encoded picture data and indication data,
wherein the picture is partitioned into a plurality of coding areas, wherein each coding area of the plurality of coding areas is located within the picture, wherein each of the plurality of coding areas comprises one or more coding tree units of a plurality of coding tree units being located within the picture, wherein the picture is encoded depending on the plurality of coding areas, and wherein the indication data comprises information on the plurality of coding areas,
wherein one or more coding areas of the plurality of coding areas comprise two or more coding tree units of the plurality of coding tree units, wherein each coding area of the one or more coding areas which comprises two or more coding tree units exhibits a coding order for the two or more coding tree units of said coding area, wherein the picture is encoded depending on the coding order of the one or more coding areas which comprise two or more coding tree units, and wherein the indication data comprises information on the coding order of the one or more coding areas which comprise two or more coding tree units.
118. An encoded video signal according to claim 117,
wherein each coding area of the plurality of coding areas extends rectangularly within the picture, and
wherein each coding tree unit of the one or more coding tree units of each of the plurality of coding areas extends rectangularly within the picture.
119. An encoded video signal according to claim 118,
wherein each of the plurality of coding tree units has a horizontal position within the picture and a vertical position within the picture,
wherein a first coding area of the plurality of coding areas comprises a first coding tree unit having a first vertical position being identical to a second vertical position of a different second coding tree unit of a different second coding area of the plurality of coding areas, and a third coding tree unit of the first coding area has a third vertical position being different from the vertical position of any other coding tree unit of the second coding area, and a fourth coding tree unit of the second coding area has a fourth vertical position being different from the vertical position of any other coding tree unit of the first coding area, or
wherein the first coding area of the plurality of coding areas comprises the first coding tree unit having a first horizontal position being identical to a second horizontal position of the different second coding tree unit of the different second coding area of the plurality of coding areas, and the third coding tree unit of the first coding area has a third horizontal position being different from the horizontal position of any other coding tree unit of the second coding area, and the fourth coding tree unit of the second coding area has a fourth horizontal position being different from the horizontal position of any other coding tree unit of the first coding area.
120. An encoded video signal according to claim 118 or 119,
wherein each coding area of the plurality of coding areas exhibits a spatial characteristic comprising a position, a width and a height of said coding area, wherein the width and the height of said coding area depend on the rectangular extension of said coding area, and wherein the position of said coding area depends on the location of said coding area within the picture.
121. An encoded video signal according to claim 120,
wherein a first height of a first one of the plurality of coding areas is different from a second height of a second one of the plurality of coding areas, or
wherein a first width of the first one of the plurality of coding areas is different from a second width of the second one of the plurality of coding areas.
122. An encoded video signal according to claim 120 or 121,
wherein the indication data comprises information on the spatial characteristic of each coding area of the plurality of coding areas.
123. An encoded video signal according to claim 122,
wherein the indication data comprises the position, the width and the height of each coding area of the plurality of coding areas.
124. An encoded video signal according to one of claims 117 to 123,
wherein the image data of a picture portion of each of the plurality of coding areas is encoded independently from encoding the image data of the picture portion of any other coding area of the plurality of coding areas to obtain the encoded picture data within the encoded video signal.
125. An encoded video signal according to one of claims 117 to 123,
wherein a picture portion is encoded by encoding image data of the picture portion within each coding area of the plurality of coding areas to obtain the encoded picture data,
wherein the image data of the picture portion of at least one of the plurality of coding areas is encoded within the encoded video signal such that encoding the image data of said at least one of the plurality of coding areas depends on the encoding of the image data of at least another one of the plurality of coding areas.
126. An encoded video signal according to one of claims 117 to 125,
wherein the encoded video signal comprises a bitstream, wherein the bitstream comprises the encoded picture data and the indication data.
127. An encoded video signal according to one of claims 117 to 126,
wherein the indication data comprises information for each coding area on whether an address of a top left coding tree unit is specified.
128. An encoded video signal according to one of claims 117 to 127,
wherein the indication data comprises information on a number of the plurality of coding areas or the number of the plurality of coding areas minus 1 or the number of the plurality of coding areas minus 2.
129. An encoded video signal according to one of claims 117 to 128,
wherein the indication data indicates for one of the plurality of coding areas which succeeds another one of the plurality of coding areas whether said one of the plurality of coding areas depends on said another one of the plurality of coding areas.
130. An encoded video signal according to one of claims 117 to 129,
wherein the indication data indicates whether exactly one slice of a plurality of slices is assigned to exactly one coding area of the plurality of coding areas.
131. An encoded video signal according to one of claims 117 to 130,
wherein the indication data indicates whether the indication data comprises information on how to scan within each of the one or more coding tree units of the plurality of coding areas.
132. An encoded video signal according to one of claims 117 to 131,
wherein the indication data indicates how to scan within each of the one or more coding tree units of the plurality of coding areas.
133. An encoded video signal according to one of claims 117 to 132,
wherein the indication data indicates for each of the plurality of coding areas whether the coding area comprises more than one coding tree unit.
134. An encoded video signal according to one of claims 117 to 133,
wherein the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a leftmost coding tree unit or whether the coding tree unit scan is started with a rightmost coding tree unit.
135. An encoded video signal according to one of claims 117 to 134,
wherein the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started with a top coding tree unit row of the coding area or whether the coding tree unit scan is started with a bottom coding tree unit row of the coding area.
136. An encoded video signal according to one of claims 117 to 135,
wherein the indication data indicates for one of the plurality of coding areas whether a coding tree unit scan is started in a horizontal direction or whether a coding tree unit scan is started in a vertical direction.
137. An encoded video signal according to one of claims 117 to 136,
wherein the indication data comprises non-contiguous information which indicates whether at least one of the plurality of coding areas encloses another one of the plurality of coding areas, or whether none of the plurality of coding areas encloses another one of the coding areas.
138. An encoded video signal according to one of claims 117 to 137,
wherein the indication data indicates for one of the plurality of coding areas a coding area width in coding tree units that specifies a number of coding tree units that are arranged in a horizontal direction within one of the coding areas.
139. An encoded video signal according to one of claims 117 to 138,
wherein the indication data indicates for one of the plurality of coding areas a coding area height in coding tree units that specifies a number of coding tree units that are arranged in a vertical direction within one of the coding areas.
140. An encoded video signal according to one of claims 117 to 139,
wherein the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in horizontal direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the horizontal direction.
140. An encoded video signal according to claim 140,
wherein said coding area comprises a plurality of last coding tree units in the horizontal direction, said last coding tree unit in the horizontal direction being one of said plurality of last coding tree units in the horizontal direction,
wherein, if said last coding tree unit in the horizontal direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the horizontal direction, each of the plurality of last coding tree units in the horizontal direction has the same width.
142. An encoded video signal according to one of claims 117 to 141,
wherein the indication data indicates for a coding area of the plurality of coding areas whether a last coding tree unit in vertical direction within said coding area is smaller than another coding tree unit of said coding area that precedes said coding tree unit in the vertical direction.
143. An encoded video signal according to claim 142,
wherein said coding area comprises a plurality of last coding tree units in the vertical direction, said last coding tree unit in the vertical direction being one of said plurality of last coding tree units in the vertical direction,
wherein, if said last coding tree unit in the vertical direction within said coding area is smaller than said another coding tree unit of said coding area that precedes said last coding tree unit in the vertical direction, each of the plurality of last coding tree units in the vertical direction has the same height.
144. An encoded video signal according to one of claims 117 to 143,
wherein the indication data indicates for a coding area whether a start address indicating one of the one or more coding tree units of said coding area is indicated, or whether the start address indicating said one of the one or more coding tree units of said coding area is not indicated.
145. An encoded video signal according to one of claims 117 to 137,
wherein the indication data indicates how to split the picture one or more times to obtain the plurality of coding areas by splitting the picture the one or more times.
146. An encoded video signal according to claim 145,
wherein the indication data indicates a plurality of coding area split positions.
147. An encoded video signal according to claim 146,
wherein the indication data indicates the plurality of coding area split positions as an ordered sequence.
148. An encoded video signal according to claim 146 or 147,
wherein the indication data indicates the plurality of coding area split positions as a plurality of coding area split position values, wherein each of the plurality of coding area split position values depends on a width of the picture or depends on a height of the picture.
149. An encoded video signal according to one of claims 145 to 148,
wherein the indication data indicates how to split the picture hierarchically one or more times to obtain the plurality of coding areas by splitting the picture hierarchically the one or more times.
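For illustration only (not part of the claim language), an ordered sequence of coding-area split positions as in claims 146 to 148 can be turned into area extents as sketched below; the function name and the choice of signalling vertical splits in CTU units relative to the left picture boundary are assumptions:

```python
def areas_from_splits(picture_width_ctu, split_positions):
    """Turn an ordered sequence of vertical coding-area split positions
    (in CTUs, measured from the left picture boundary) into the widths of
    the resulting coding-area columns."""
    edges = [0] + sorted(split_positions) + [picture_width_ctu]
    return [right - left for left, right in zip(edges, edges[1:])]

# Splitting a 10-CTU-wide picture at positions 3 and 7 yields three
# columns of 3, 4 and 3 CTUs.
widths = areas_from_splits(10, [3, 7])
```

The same construction applies to horizontal splits with the picture height; hierarchical splitting as in claim 149 would apply it recursively within each resulting area.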
150. An encoded video signal according to one of claims 117 to 137,
wherein the indication data comprises one or more area column stop flags for a coding area of the one or more coding areas, wherein, if an area column stop flag of a coding area of the one or more coding areas is set to a stop value, said area column stop flag indicates a width of said coding area, or
wherein the indication data comprises one or more area line stop flags for a coding area of the one or more coding areas, wherein, if an area line stop flag of a coding area of the one or more coding areas is set to a stop value, said area line stop flag indicates a height of said coding area.
151. An encoded video signal according to one of claims 117 to 137,
wherein the indication data indicates whether an explicit signalling mode is active or whether the explicit signalling mode is inactive,
wherein, if the explicit signalling mode is active, and if a coding tree unit of the one or more coding tree units of the coding area is located at a picture boundary of the picture, the indication data comprises at least one of an area column stop flag and an area line stop flag for said coding tree unit, and
wherein, if the explicit signalling mode is inactive, and if said coding tree unit of the one or more coding tree units of the coding area is located at said picture boundary of the picture, the indication data does not comprise the area column stop flag for said coding tree unit and/or does not comprise the area line stop flag for said coding tree unit.
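For illustration only (not part of the claim language), the area column stop flags of claims 150 and 151 can be read as follows: one flag is signalled per CTU column of the coding area, and the first flag equal to the stop value marks the area's right edge; when the explicit signalling mode is inactive, no flag is sent at the picture boundary and the width is inferred there. The function name and the choice of 1 as the stop value are assumptions:

```python
def width_from_column_stop_flags(flags):
    """Derive a coding area's width in CTUs from its area column stop
    flags: reading stops at the first flag set to the stop value
    (assumed here to be 1)."""
    for i, flag in enumerate(flags):
        if flag == 1:
            return i + 1
    # No stop flag set: the area ends implicitly at the picture boundary
    # (explicit signalling mode inactive, no flag signalled there).
    return len(flags)

# Flags 0, 0, 1 signal a coding area three CTUs wide.
width = width_from_column_stop_flags([0, 0, 1, 0])
```

Area line stop flags would determine the height of the coding area in the same way, one flag per CTU row.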
152. An encoded video signal according to one of claims 117 to 151,
wherein the coding order of each coding area of the one or more coding areas which comprises two or more coding tree units depends on a raster scan,
wherein the indication data comprises information that indicates that the raster scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units.
153. An encoded video signal according to one of claims 117 to 151,
wherein the indication data comprises information that indicates that a diagonal scan has been employed to encode each coding area of the one or more coding areas which comprise two or more coding tree units.
154. An encoded video signal according to one of claims 117 to 151,
wherein the indication data indicates that a z-scan has been employed to encode each coding area of the one or more coding areas which comprises five or more coding tree units.
155. An encoded video signal according to one of claims 117 to 151,
wherein the indication data indicates a scan order comprising one or more scan directions.
156. An encoded video signal according to claim 155,
wherein the indication data indicates an index which indicates a selected scan direction of the one or more scan directions.
157. An encoded video signal according to one of claims 117 to 156,
wherein the indication data comprises information on an error resiliency of a coded video sequence.
158. An encoded video signal according to claim 157,
wherein the information on the error resiliency indicates one of three or more different states on the error resiliency of the coded video sequence.
159. An encoded video signal according to claim 158,
wherein a first state of the three or more different states indicates that an access unit is not error resilient,
wherein a second state of the three or more different states indicates that a first plurality of access units of a picture parameter set is not error resilient, and
wherein a third state of the three or more different states indicates that a second plurality of access units of a sequence parameter set is not error resilient.
160. An encoded video signal according to claim 157,
wherein the information on the error resiliency indicates one of four or more different states on the error resiliency of the coded video sequence,
wherein a first state of the four or more different states indicates that the error resiliency is harmed on a picture-level and is harmed on a multiple-picture-level and is harmed on a sequence level,
wherein a second state of the four or more different states indicates that the error resiliency is harmed on the picture-level and is harmed on the multiple-picture-level, but is not harmed on the sequence level,
wherein a third state of the four or more different states indicates that the error resiliency is harmed on the picture-level, but that the error resiliency is not harmed on the multiple-picture-level and is not harmed on the sequence level, and
wherein a fourth state of the four or more different states indicates that the error resiliency is not harmed on the picture-level and is not harmed on the multiple-picture-level and is not harmed on the sequence level.
| # | Name | Date |
|---|---|---|
| 1 | 202137001738-IntimationOfGrant12-01-2024.pdf | 2024-01-12 |
| 2 | 202137001738-PatentCertificate12-01-2024.pdf | 2024-01-12 |
| 3 | 202137001738-FORM 3 [14-09-2023(online)].pdf | 2023-09-14 |
| 4 | 202137001738-FORM 3 [27-03-2023(online)].pdf | 2023-03-27 |
| 5 | 202137001738-CLAIMS [10-10-2022(online)].pdf | 2022-10-10 |
| 6 | 202137001738-FER_SER_REPLY [10-10-2022(online)].pdf | 2022-10-10 |
| 7 | 202137001738-OTHERS [10-10-2022(online)].pdf | 2022-10-10 |
| 8 | 202137001738-FORM 4(ii) [08-07-2022(online)].pdf | 2022-07-08 |
| 9 | 202137001738-AMENDED DOCUMENTS [16-03-2022(online)].pdf | 2022-03-16 |
| 10 | 202137001738-FORM 13 [16-03-2022(online)].pdf | 2022-03-16 |
| 11 | 202137001738-POA [16-03-2022(online)].pdf | 2022-03-16 |
| 12 | 202137001738-FER.pdf | 2022-01-10 |
| 13 | 202137001738-Information under section 8(2) [05-01-2022(online)].pdf | 2022-01-05 |
| 14 | 202137001738-Information under section 8(2) [10-11-2021(online)].pdf | 2021-11-10 |
| 15 | 202137001738.pdf | 2021-10-18 |
| 16 | 202137001738-Information under section 8(2) [11-06-2021(online)].pdf | 2021-06-11 |
| 17 | 202137001738-FORM-26 [23-03-2021(online)].pdf | 2021-03-23 |
| 18 | 202137001738-Proof of Right [22-03-2021(online)].pdf | 2021-03-22 |
| 19 | 202137001738-FORM 18 [03-02-2021(online)].pdf | 2021-02-03 |
| 20 | 202137001738-MARKED COPIES OF AMENDEMENTS [21-01-2021(online)].pdf | 2021-01-21 |
| 21 | 202137001738-FORM 13 [21-01-2021(online)].pdf | 2021-01-21 |
| 22 | 202137001738-AMMENDED DOCUMENTS [21-01-2021(online)].pdf | 2021-01-21 |
| 23 | 202137001738-STATEMENT OF UNDERTAKING (FORM 3) [14-01-2021(online)].pdf | 2021-01-14 |
| 24 | 202137001738-FORM 1 [14-01-2021(online)].pdf | 2021-01-14 |
| 25 | 202137001738-FIGURE OF ABSTRACT [14-01-2021(online)].pdf | 2021-01-14 |
| 26 | 202137001738-DRAWINGS [14-01-2021(online)].pdf | 2021-01-14 |
| 27 | 202137001738-DECLARATION OF INVENTORSHIP (FORM 5) [14-01-2021(online)].pdf | 2021-01-14 |
| 28 | 202137001738-COMPLETE SPECIFICATION [14-01-2021(online)].pdf | 2021-01-14 |
| 20 | 202137001738-AMMENDED DOCUMENTS [21-01-2021(online)].pdf | 2021-01-21 |
| 21 | 202137001738-FORM 13 [21-01-2021(online)].pdf | 2021-01-21 |
| 21 | 202137001738-FORM 4(ii) [08-07-2022(online)].pdf | 2022-07-08 |
| 22 | 202137001738-MARKED COPIES OF AMENDEMENTS [21-01-2021(online)].pdf | 2021-01-21 |
| 22 | 202137001738-OTHERS [10-10-2022(online)].pdf | 2022-10-10 |
| 23 | 202137001738-COMPLETE SPECIFICATION [14-01-2021(online)].pdf | 2021-01-14 |
| 23 | 202137001738-FER_SER_REPLY [10-10-2022(online)].pdf | 2022-10-10 |
| 24 | 202137001738-CLAIMS [10-10-2022(online)].pdf | 2022-10-10 |
| 24 | 202137001738-DECLARATION OF INVENTORSHIP (FORM 5) [14-01-2021(online)].pdf | 2021-01-14 |
| 25 | 202137001738-FORM 3 [27-03-2023(online)].pdf | 2023-03-27 |
| 25 | 202137001738-DRAWINGS [14-01-2021(online)].pdf | 2021-01-14 |
| 26 | 202137001738-FORM 3 [14-09-2023(online)].pdf | 2023-09-14 |
| 26 | 202137001738-FIGURE OF ABSTRACT [14-01-2021(online)].pdf | 2021-01-14 |
| 27 | 202137001738-PatentCertificate12-01-2024.pdf | 2024-01-12 |
| 27 | 202137001738-FORM 1 [14-01-2021(online)].pdf | 2021-01-14 |
| 28 | 202137001738-STATEMENT OF UNDERTAKING (FORM 3) [14-01-2021(online)].pdf | 2021-01-14 |
| 28 | 202137001738-IntimationOfGrant12-01-2024.pdf | 2024-01-12 |
| 1 | 202137001738E_22-12-2021.pdf |