
Concept For Picture/Video Data Streams Allowing Efficient Reducibility Or Efficient Random Access

Abstract: A video data stream is rendered reducible in a manner so that the reduction leads to a restriction of pictures of the reduced video data stream to merely a predetermined subarea of the pictures of the original video data stream and in a manner so that transcoding, such as re-quantization, may be avoided and a conformance of the reduced video data stream relative to the codec underlying the original video data stream be maintained. This is achieved by providing the video data stream with information including an indication of the predetermined subarea and replacement indices for redirecting the indices included by the payload portion so as to refer to, and/or replacement parameters for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings.


Patent Information

Application #
Filing Date
15 February 2021
Publication Number
11/2021
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
kolkatapatent@Lsdavar.in
Parent Application
Patent Number
Legal Status
Grant Date
2025-01-31
Renewal Date

Applicants

FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c, 80686 München, Germany

Inventors

1. SKUPIN, Robert
Naugarder Straße 42, 10409 Berlin, Germany
2. SANCHEZ, Yago
Warschauer Strasse 67, 10243 Berlin, Germany
3. SCHIERL, Thomas
Boris-Pasternak-Weg 7b, 13156 Berlin, Germany
4. HELLGE, Cornelius
Erich-Weinert-Straße 5, 10439 Berlin, Germany
5. GRÜNEBERG, Karsten
Adickesstraße 43, 13599 Berlin, Germany
6. WIEGAND, Thomas
Otto-Appel-Straße 52, 14185 Berlin, Germany

Specification

Concept for Picture/Video Data Streams Allowing Efficient Reducibility or Efficient

Random Access

Description

The present application is concerned with video/picture coding, and particularly with a concept allowing for an efficient reduction of such data streams, a concept allowing for an easier handling of such data streams and/or a concept allowing for a more efficient random access into a video data stream.

There are many video codecs allowing for a scalability of the video data stream without transcoding, i.e. without the need for sequentially performing decoding and encoding. Examples of such scalable video data streams are data streams which are scalable in terms of, for example, temporal resolution, spatial resolution or signal-to-noise ratio by simply leaving off some of the enhancement layers of the respective scalable video data stream. However, until now there is no video codec allowing for a computationally non-complex scalability in terms of scene sectioning. In HEVC, there are, or there have been proposed, concepts for restricting an HEVC data stream to a picture subarea, but these are still computationally complex.

Moreover, depending on the application, the picture content to be encoded into a data stream might be in a form which may not be effectively coded within the usually offered rectangular picture areas. For example, panoramic picture content may have been projected onto a two-dimensional plane, forming the picture area, in a manner so that the projection target, i.e. the footprint of the panoramic scene onto the picture area, may be non-rectangular and even non-convex. In that case, a more efficient coding of the picture/video data would be advantageous.

Further, random access points are provided in existing video data streams in a manner causing considerable bitrate peaks. In order to reduce the negative effect resulting from these bitrate peaks one could think of a reduction in the temporal granularity of the occurrence of these random access points. However, this increases the mean time duration for randomly accessing such a video data stream and accordingly it would be advantageous to have a concept at hand which solves this problem in a more efficient way.

Accordingly, it is an object of the present invention to solve the above-outlined problems. In accordance with the present application, this is achieved by the subject matter of the independent claims.

In accordance with a first aspect of the present application, a video data stream is rendered reducible in a manner so that the reduction leads to a restriction of pictures of the reduced video data stream to merely a predetermined subarea of the pictures of the original video data stream and in a manner so that transcoding, such as re-quantization, may be avoided and a conformance of the reduced video data stream relative to the codec underlying the original video data stream be maintained. This is achieved by providing the video data stream with information comprising an indication of the predetermined subarea and replacement indices for redirecting the indices comprised by the payload portion so as to refer to, and/or replacement parameters for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings. The payload portion of the original video data stream has the pictures of the video encoded there into parameterized using the first set of coding parameter settings indexed by indices comprised by the payload portion. Additionally or alternatively, similar measures are feasible with respect to supplemental enhancement information. Thus, it is feasible to reduce the video data stream to the reduced video data stream by performing the redirection and/or adjustment so that the second set of coding parameter settings is indexed by the payload portion's indices and accordingly becomes the effective coding parameter setting set, removing portions of the payload portion referring to an area of the pictures outside the predetermined subarea and changing location indications such as slice address in the payload portion to indicate a location measured from a circumference of the predetermined subarea instead of the circumference of the pictures. 
Alternatively, a data stream which has already been reduced so as to no longer comprise the portions of the payload portion referring to outside the predetermined subarea may be modified by on-the-fly adjustment of the parameters and/or supplemental enhancement information.
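The reduction steps described above — dropping payload portions outside the subarea, redirecting the parameter-set indices, and re-expressing slice addresses relative to the subarea's circumference — can be sketched in a toy model. All names and the slice representation are illustrative assumptions, not actual HEVC syntax:

```python
# Hedged sketch of the reduction process: slices whose position lies outside
# the predetermined subarea are removed, the parameter-set index of the
# remaining slices is redirected to the replacement index, and the slice
# address is re-expressed relative to the subarea's top-left corner.
def reduce_stream(slices, subarea, replacement_index):
    """slices: list of dicts with 'x', 'y' (position in samples), 'ps_index'
    and 'data'.  subarea: (x0, y0, width, height) within the original
    picture.  Returns the slices of the reduced stream."""
    x0, y0, w, h = subarea
    reduced = []
    for s in slices:
        # Remove portions of the payload referring to outside the subarea.
        if not (x0 <= s['x'] < x0 + w and y0 <= s['y'] < y0 + h):
            continue
        reduced.append({
            # Redirect the parameter-set index to the replacement set.
            'ps_index': replacement_index,
            # Location now measured from the subarea's circumference.
            'x': s['x'] - x0,
            'y': s['y'] - y0,
            'data': s['data'],
        })
    return reduced
```

Note that no sample data is touched: the payload of each kept slice is carried over verbatim, which is precisely why re-quantization and other transcoding steps are avoided.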

In accordance with a further aspect of the present application, the transmission of picture content is rendered more efficient in that the picture content does not need to be shaped or ordered in a predetermined manner, such as in such a manner that the typically rectangular picture area supported by the underlying codec is filled out. Rather, a data stream having a picture encoded thereinto is provided to comprise displacing information which indicates, for a set of at least one predetermined subregion of the picture, a displacement within an area of a target picture relative to an undistorted or one-to-one or congruent copying of the set into the area of the target picture. The provision of such displacing information is useful, for instance, for conveying within the picture a projection of a panoramic scene in cases where the projection is non-rectangular. This displacing information is also effective in cases where, owing to data stream reduction, the picture content lost its suitability for being conveyed within the smaller pictures of the reduced video data stream such as, for instance, in case of an interesting panoramic view section to be transmitted within the reduced video data stream crossing the transition borders of the projection or the like.
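The displacing information of this aspect can be illustrated as follows: each predetermined subregion of the coded picture carries a displacement that says where its congruent (unscaled, undistorted) copy lands in the target picture. The data structures below are assumptions of this sketch, not the signalled syntax:

```python
# Illustrative sketch: copy each subregion of the coded picture, merely
# displaced, into the target picture area.
def apply_displacements(picture, regions, target_size):
    """picture: 2-D list of samples (rows of columns).
    regions: list of ((x, y, w, h), (dx, dy)) pairs, i.e. a subregion of
    the coded picture and its displacement in the target picture.
    target_size: (width, height) of the target picture."""
    tw, th = target_size
    target = [[0] * tw for _ in range(th)]  # fill value for uncovered samples
    for (x, y, w, h), (dx, dy) in regions:
        for j in range(h):
            for i in range(w):
                # congruent one-to-one copy, merely displaced
                target[dy + j][dx + i] = picture[y + j][x + i]
    return target
```

A renderer receiving the stream could use such a mapping to reassemble, for example, the faces of a cubic panorama projection that were packed compactly into the coded picture.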

In accordance with a further aspect of the present application, the negative effects of bitrate peaks in a video data stream caused by random access points are reduced by providing the video data stream with two sets of random access points: a first set of one or more pictures is encoded into the video data stream with suspending temporal prediction at least within a first picture subarea so as to form a set of one or more first random access points, and a second set of one or more pictures is encoded into the video data stream with suspending temporal prediction within a second picture subarea different from the first picture subarea so as to form a set of one or more second random access points. In this manner, it is feasible for a decoder seeking to randomly access, or resume decoding of, the video data stream to choose one of the first and second random access points which, in turn, may be distributed temporally and allow for a random access at least with respect to the second picture subarea in case of the second random access points and with respect to the first picture subarea in case of the first random access points.
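A decoder's choice among such subarea-specific random access points can be sketched as a search for the earliest access point whose refreshed subarea covers the region the user wants to watch. The tuple layout is an assumption of this illustration:

```python
# Hedged sketch of random access point selection with subarea-specific RAPs.
def next_random_access(raps, seek_time, wanted_subarea):
    """raps: list of (time, subarea) pairs, subarea = (x0, y0, w, h).
    Returns the earliest RAP at or after seek_time whose refreshed subarea
    covers wanted_subarea, or None if there is none."""
    wx, wy, ww, wh = wanted_subarea
    for t, (x, y, w, h) in sorted(raps):
        if t < seek_time:
            continue
        # the RAP is usable if its subarea fully covers the wanted region
        if (x <= wx and y <= wy and
                x + w >= wx + ww and y + h >= wy + wh):
            return t
    return None
```

Because the first and second random access points refresh different subareas at different times, their bitrate peaks are spread out in time instead of coinciding in full-picture refresh pictures.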

The concepts mentioned above may be advantageously used together. Moreover, advantageous implementations are the subject of the dependent claims. Preferred embodiments of the present application are described below with respect to the figures, among which:

Fig. 1 shows a schematic diagram of a video data stream in accordance with an embodiment of the present application pertaining to a first aspect according to which the video data stream is reducible to a reduced video data stream concerning a subregion of the pictures of the reducible video data stream;

Fig. 2 shows a schematic diagram illustrating the interdependency between payload portion and parameter set portion of the reducible video data stream of Fig. 1 in accordance with an embodiment so as to illustrate the parameterization at which the pictures are encoded into the reducible video data stream;

Fig. 3 shows a schematic diagram for illustrating a possible content of the information with which the video data stream of Fig. 1 is provided in accordance with an embodiment to allow for the reduction;

Fig. 4 shows a schematic diagram showing a network device receiving a reducible video data stream and deriving therefrom a reduced video data stream;

Fig. 5 shows a schematic diagram illustrating the mode of operation in reducing a video data stream in accordance with an embodiment using parameter set redirection;

Fig. 6 shows a schematic diagram showing a video decoder 82 receiving a reduced video data stream to reconstruct therefrom the pictures of the reduced video data stream which, in turn, merely show the subarea of the pictures of the original video data stream;

Fig. 7 shows a schematic diagram of an alternative mode of operation in reducing a reducible video data stream, this time using parameter set adjustment using replacements within the information with which the reducible video data stream is provided;

Fig. 8 shows a syntax example for an information with which a reducible video data stream could be provided;

Fig. 9 shows an alternative syntax example of the information with which a reducible video data stream could be provided;

Fig. 10 shows an even further example for a syntax of the information with which the reducible video data stream could be provided.

Fig. 11 shows a further example of a syntax for the information, here in order to replace SEI messages;

Fig. 12 shows an example for a syntax table which could be used in order to form the information in connection with multilayer video data streams;

Fig. 13 shows a schematic diagram illustrating a relationship between tiles within a subregion of the pictures within the reducible video data stream on the one hand and the corresponding tile within the pictures of the reduced video data stream on the other hand in accordance with an embodiment to illustrate the possibility of spatial rearrangement of these tiles;

Fig. 14 shows an example of a picture obtained by rectilinear projection of a panoramic scene;

Fig. 15 shows an example of a picture carrying picture content corresponding to a cubic projection of a panoramic scene.

Fig. 16 shows a picture efficiently filled using the cubic projection content of Fig. 15 by rearrangement;

Fig. 17 shows a syntax table example for displacing information using which a data stream could be provided in accordance with an embodiment concerning a second aspect of the present application;

Fig. 18 shows a schematic diagram illustrating a construction of a video data stream in accordance with an embodiment pertaining to the second aspect of the present application;

Fig. 19 shows a schematic diagram illustrating a possible content of the displacing information in accordance with an embodiment;

Fig. 20 shows a schematic diagram illustrating an encoder configured to form a data stream comprising displacing information and concurrently being reducible;

Fig. 21 shows a schematic diagram illustrating a decoder configured to receive a data stream comprising displacing information in order to illustrate a possibility how the displacing information may advantageously be used;

Fig. 22 shows a schematic diagram illustrating a video data stream comprising subarea-specific random access pictures in accordance with an embodiment pertaining to a further aspect of the present application;

Fig. 23a-23e show schematic diagrams illustrating possible arrangements of the subareas used in accordance with different alternatives;

Fig. 24 shows a schematic diagram illustrating a video decoder configured to receive a video data stream having interspersed therein subarea-specific random access pictures in accordance with an embodiment;

Fig. 25 shows a schematic diagram illustrating the situation of Fig. 24, but with an alternative mode of operation of the video decoder in which the video decoder, when randomly accessing the video data stream, waits until the subareas of the subarea-specific random access pictures completely cover the picture area of the inbound video data stream before outputting or presenting the video;

Fig. 26 shows a schematic diagram illustrating a network device receiving a video data stream comprising subarea-specific random access pictures, the subareas of which concurrently form a subregion with respect to which the video data stream is reducible;

Fig. 27 shows a schematic diagram illustrating a network device receiving a data stream provided with displacing information and being reducible to illustrate possibilities how network device 231 could provide the reduced video data stream with subregion-specific displacing information;

Fig. 28 shows an example for a disjoint region of interest subregion of a picture which is, exemplarily, a cylindrical panorama; and

Fig. 29 shows a syntax table of a TMCTS SEI message of HEVC.

The description of the present application is concerned with the above-identified aspects of the present application. In order to provide a background relating to a first aspect of the present application, which is concerned with subarea-extraction/reduction of video data streams, an example of an application where such a desire may stem from and the problems in fulfilling this desire are described and their overcoming motivated in the following by exemplarily referring to HEVC.

Spatial subsets, i.e. sets of tiles, can be signaled in HEVC using the Temporal Motion Constraint Tile Sets (TMCTS) SEI Message. The tile sets defined in such a message have the characteristic that "the inter prediction process is constrained such that no sample value outside each identified tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the identified tile set, is used for inter prediction of any sample within the identified tile set". In other words, the samples of a TMCTS can be decoded independently of samples that are not associated with the same TMCTS in the same layer. A TMCTS encompasses one or more rectangular unions of one or more tiles, as illustrated in Fig. 28 using a rectangle 900. In the figure, the region of interest 900 looked at by a user encompasses two disjoint image patches.
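The quoted TMCTS constraint can be paraphrased as a per-motion-vector check: the reference block a vector addresses, including the extra samples reached by fractional-sample interpolation, must lie entirely inside the tile set. The fixed interpolation margin below is a simplifying assumption standing in for HEVC's actual filter taps:

```python
INTERP_MARGIN = 3  # extra integer samples an interpolation filter may reach
                   # beyond the block (assumed value for this sketch)

def mv_respects_tile_set(block, mv, tile_set, frac=True):
    """block: (x, y, w, h) of the predicted block in the current picture.
    mv: (dx, dy) motion vector in integer samples.
    tile_set: (x0, y0, w0, h0) rectangle of the tile set.
    Returns True iff the referenced area, widened by the interpolation
    margin when frac is True, stays inside the tile set."""
    x, y, w, h = block
    dx, dy = mv
    m = INTERP_MARGIN if frac else 0
    x0, y0, w0, h0 = tile_set
    return (x + dx - m >= x0 and y + dy - m >= y0 and
            x + dx + w + m <= x0 + w0 and
            y + dy + h + m <= y0 + h0)
```

An encoder honoring the TMCTS guarantee would discard every candidate motion vector for which such a check fails.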

The precise syntax of the TMCTS SEI message is given in Fig. 29 for reference.

There are numerous applications where it is beneficial to create an independently decodable rectangular spatial subset of a video bitstream, i.e. a region of interest (RoI), without the burden of heavy processing such as video transcoding. These applications comprise but are not limited to:

• Panorama video streaming: only a specific spatial region of a wide angle video, e.g. 360° viewing angle, is displayed to the end user through a head mounted display.

• Aspect ratio adjusted streaming: the aspect ratio of coded video is adjusted live on server side according to the display characteristics on client side.

• Decoding complexity adjustment: low-cost/low-tech devices that are not able to decode a given encoded video bitstream due to level limits could potentially cope with a spatial subset of the video.

A number of problems arise given the so far described state-of-the-art techniques for the above list of exemplary applications.

• There exist no means to make HRD parameters, i.e. buffering/timing information, of a spatial subset of the video bitstream available to the system layer.

• There exists no conformance point in the video in order to trivially convert a spatial subset of a given video bitstream into a conforming video bitstream.

• There exist no means for an encoder to convey the guarantee that the tile set with a given identifier may be trivially converted into a conforming video bitstream.

Given solutions to the listed problems, all of the above example applications could be realized in a standard conformant way. Defining this capability within the video coding layer is expected to be an important conformance point for applications and systems layers.

The HEVC specification already includes processes for the extraction of sub-bitstreams that may reduce the temporal resolution or the number of layers, i.e. reduce the spatial resolution, signal fidelity or number of views, of a coded video bitstream.

The present invention provides solutions for the identified problems, in particular:

1. Means for extraction of a spatial subset, i.e. a video bitstream based on a single TMCTS, from a coded video sequence via the definition of a sub picture extraction process based on TMCTS.

2. Means to convey and identify the correct Parameter Set values and (optionally) SEI information for an extracted sub picture video sequence.

3. Means for an encoder to convey the guarantee of certain sub-region extraction enabling bitstream constraints regarding the video bitstream and the TMCTS.

The embodiment described in the following overcomes the just outlined problem by providing a video data stream with information which is not required for reconstruction of the video's pictures from the payload portion of the video data stream, the information comprising an indication of the predetermined subarea and replacement indices and/or replacement parameters, the significance and function of which is described in more detail below. The following description is not to be restricted to HEVC or a modification of HEVC only. Rather, the embodiment described next could be implemented in any video codec technology so as to provide such video coding technology with an additional conformance point for providing a reduced subarea specific video data stream. Later on, details are presented how the embodiment described next may be specifically implemented to form an extension of HEVC.

Fig. 1 shows a video data stream 10 in accordance with an embodiment of the present application. That is, the video data stream is, in a conformance-maintaining manner, reducible to a reduced video data stream, the pictures of which merely show a predetermined subarea of the pictures 12 of the video 14 encoded into video data stream 10 without the need for transcoding or, to be more precise, time consuming and computationally complex operations such as re-quantization, spatial-to-spectral transformation and the inverse thereof and/or re-performing motion estimation.

The video data stream 10 of Fig. 1 is shown to comprise a parameter set portion 16 indicating coding parameter settings 80 and a payload portion 18 into which the pictures 12 of the video 14 are coded. In Fig. 1, portions 16 and 18 are exemplarily distinguished from one another by using hatching for the payload portion 18 while showing the parameter set portion 16 non-hatched. Moreover, portions 16 and 18 are exemplarily shown to be mutually interleaved within data stream 10 although this is not necessarily the case.

The payload portion 18 has the pictures 12 of video 14 encoded thereinto in a special manner. In particular, Fig. 1 shows an exemplary predetermined subarea 22 with respect to which video data stream 10 is to have the capability of being reducible to a reduced video data stream. The payload portion 18 has pictures 12 encoded thereinto in such a manner that, as far as the predetermined subarea 22 is concerned, any coding dependency is restricted so as to not cross a boundary of subarea 22. That is, a certain picture 12 is coded into payload portion 18 such that, within subarea 22, the coding of the subarea 22 does not depend on the spatial neighborhood of area 22 within this picture. In case of pictures 12 being encoded into payload portion 18 also using temporal prediction, temporal prediction may be restricted within subarea 22 such that no portion within the subarea 22 of a first picture of video 14 is coded in a manner dependent on an area of a reference picture (another picture) of video 14 external to subarea 22. That is, the corresponding encoder generating the video data stream 10 restricts the set of available motion vectors for coding subarea 22 in such a manner that none of them points to portions of reference pictures for which the formation of the motion-compensated prediction signal would necessitate or involve samples outside the subarea 22 of the reference picture. As far as the spatial dependencies are concerned, it is noted that the restriction of same may pertain to sample-wise spatial prediction, spatial prediction of coding parameters and coding dependencies which would, for instance, result from continuing arithmetic coding across the boundary of subarea 22 spatially.
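The encoder-side restriction of available motion vectors just described can be sketched by computing, for a given block inside subarea 22, the motion-vector range that keeps the prediction block inside the subarea of the reference picture. Integer-sample motion is assumed for brevity:

```python
# Hedged sketch: motion-vector search range that keeps the referenced block
# inside the subarea of the reference picture (integer-sample motion only).
def clip_search_range(block, subarea):
    """block: (x, y, w, h) of the block to be predicted.
    subarea: (x0, y0, w0, h0) of the independently decodable subarea.
    Returns (min_dx, max_dx, min_dy, max_dy)."""
    x, y, w, h = block
    x0, y0, w0, h0 = subarea
    return (x0 - x,                  # leftmost allowed horizontal shift
            x0 + w0 - w - x,         # rightmost allowed horizontal shift
            y0 - y,                  # topmost allowed vertical shift
            y0 + h0 - h - y)         # bottommost allowed vertical shift
```

A motion estimator would simply restrict its search window to this range, so that the subarea remains decodable without any reference sample from outside.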

Thus, the payload portion 18 has the pictures 12 encoded thereinto in obedience to the just-outlined restriction of coding dependencies so that they do not reach out to portions external to predetermined subarea 22, and may accordingly be composed of a syntactically ordered sequence 24 of syntax elements including, for example, motion vectors, picture reference indices, partitioning information, coding modes, transform coefficients or residual sample values representing a quantized prediction residual, or one or any combination thereof. Most importantly, however, the payload portion 18 has the pictures 12 of video 14 encoded thereinto in a manner parameterized using a first set 20a of the coding parameter settings 20. The coding parameter settings in set 20a define, for instance, the picture size of pictures 12, such as the vertical height and the horizontal width of pictures 12. In order to illustrate how the picture size "parameterizes" the coding of pictures 12 into payload portion 18, reference is made briefly to Fig. 2. Fig. 2 shows the picture size coding parameter 26 as an example of one of the coding parameter settings of set 20a. Obviously, picture size 26 indicates the size of the picture area which has to be "coded" by payload portion 18, be it merely by signaling that a respective subblock of a certain picture 12 is left uncoded and is accordingly, for instance, to be filled by a predetermined sample value such as zero, which may correspond to black. Accordingly, the picture size 26 influences 28 an amount or size 30 of the syntactical description 24 of the payload portion 18. Further, the picture size 26 influences 28 the location indication 32 within the syntactical description 24 of payload portion 18 in terms of, for instance, the value range of the location indication 32 and the order at which location indication 32 may appear in the syntactical description 24.
For instance, location indication 32 may comprise slice addresses within the payload portion 18. Slices 34 are, as illustrated in Fig. 1, portions of data stream 10 in units of which, for instance, data stream 10 is transmittable to a decoder. Each picture 12 may be coded into data stream 10 in units of such slices 34, with the subdivision into slices 34 following a decoding order at which pictures 12 are coded into data stream 10. Each slice 34 corresponds to, and has thus encoded thereinto, a corresponding area 36 of a picture 12, wherein area 36 is, however, either within or external to subarea 22, i.e. it does not cross the boundary of the subarea. In such a case, each slice 34 may be provided with a slice address indicating the position of the corresponding area 36 within the picture area of pictures 12, i.e. relative to a circumference of pictures 12. To mention a concrete example, the slice address may be measured relative to an upper-left hand corner of pictures 12. Obviously, such a slice address may not exceed the maximum value which slice addresses may assume within a picture of picture size 26.
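The slice address handling described above can be made concrete with CTU-raster addresses: an address counted from the picture's upper-left corner is converted to one counted from the subarea's upper-left corner. The CTU size and the raster-address scheme are simplifying assumptions of this sketch:

```python
# Hedged sketch of rewriting a slice address from picture-relative to
# subarea-relative raster order.
def rewrite_slice_address(addr, pic_w, sub, ctu=64):
    """addr: slice address as a CTU index in picture raster-scan order.
    pic_w: picture width in samples (multiple of ctu assumed).
    sub: (x0, y0, w, h) of the subarea in samples (CTU-aligned assumed).
    Returns the address in raster-scan order of the subarea."""
    ctus_per_row = pic_w // ctu
    # position of the slice's first CTU in the original picture
    x = (addr % ctus_per_row) * ctu
    y = (addr // ctus_per_row) * ctu
    x0, y0, w, h = sub
    assert x0 <= x < x0 + w and y0 <= y < y0 + h, "slice outside subarea"
    # re-count CTUs relative to the subarea's upper-left corner
    return ((y - y0) // ctu) * (w // ctu) + (x - x0) // ctu
```

In a 256-sample-wide picture (four 64-sample CTUs per row), the CTU at position (128, 64) has picture-relative address 6; relative to a subarea occupying the right half of the picture it becomes address 2.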

In a manner similar to picture size 26, the set 20a of coding parameter settings may also define a tile structure 38 of tiles into which picture 12 may be subdivided. Using dash-dotted lines 40, Fig. 1 presents an example of a subdivision of pictures 12 into tiles 42 such that the tiles are arranged in a tile array of columns and rows. In the optional case of pictures 12 being encoded into payload portion 18 using tile subdivision into tiles 42, this may, for instance, mean that 1) spatial interdependencies across tile boundaries are disallowed and, accordingly, not used and that 2) the decoding order at which pictures 12 are coded into data stream 10 traverses pictures 12 in a raster scan tile order, i.e. each tile is traversed completely before visiting the next tile in tile order. Accordingly, the tile structure 38 influences 28 the decoding order 44 at which pictures 12 are encoded into payload portion 18 and accordingly influences the syntactical description 24. In a way similar to picture size 26, the tile structure 38 also influences 28 the location indication 32 within payload portion 18, namely in terms of the order at which different instantiations of the location indication 32 are allowed to occur within the syntactical description 24.
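The raster-scan tile order just described — all CTUs of a tile before any CTU of the next tile — can be enumerated explicitly. A uniform tile grid is assumed here for simplicity:

```python
# Hedged sketch of the CTU decoding order under tile subdivision: tiles in
# raster order, CTUs raster-scanned within each tile.
def ctu_decode_order(tile_cols, tile_rows, tiles_x, tiles_y):
    """tile_cols x tile_rows: CTUs per tile; tiles_x x tiles_y: tile grid.
    Returns the list of (x, y) CTU coordinates in decoding order."""
    order = []
    for ty in range(tiles_y):            # tile rows in raster order
        for tx in range(tiles_x):        # tile columns in raster order
            for j in range(tile_rows):   # CTU rows within the tile
                for i in range(tile_cols):
                    order.append((tx * tile_cols + i, ty * tile_rows + j))
    return order
```

For two side-by-side tiles of one CTU column and two CTU rows each, the order visits the left tile top-to-bottom before the right tile, instead of the picture-wide raster scan.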

The coding parameter settings of set 20a may also comprise buffer timing 46. Buffer timing 46 may, for instance, signal coded picture buffer removal times at which certain portions of data stream 10, such as individual slices 34 or portions of data stream 10 referring to one picture 12, are to be removed from a coded picture buffer of a decoder and these temporal values influence 28, or are related to, the sizes of the corresponding portions within data stream 10 so that the buffer timing 46 also influences 28 the amount/size 30 of payload portion 18.

That is, as the description of Fig. 2 exemplified, the coding of pictures 12 into payload portion 18 is "parameterized" or "described" using the set 20a of coding parameter settings in the sense that any discrepancy between the set 20a of coding parameter settings 20 on the one hand and the payload portion 18 and its syntactical description 24 on the other hand would be identified as being in conflict with the conformance requirements required to be obeyed by any data stream to be identified as conforming.

The first set 20a of coding parameter settings is referred to, or indexed, by indices 48 comprised by the payload portion 18 and being interspersed or comprised by the syntactical description 24. For instance, indices 48 may be contained in slice headers of slices 34.

Although the indexed set 20a of coding parameter settings could, in concert or along with the payload portion 18, be amended in a manner so that portions of payload portion 18 are canceled which do not pertain to subarea 22 and the resulting reduced data stream maintains conformance, this approach is not followed by the embodiment of Fig. 1. Although such correlated modification of both coding parameter settings within indexed set 20a on the one hand and payload portion 18 on the other hand would not require a detour via a complete decoding and encoding, the computational overhead in order to perform this correlated modification would nevertheless require a considerable amount of parsing steps and the like.

Accordingly, the embodiment of Fig. 1 follows another approach according to which the video data stream 10 comprises, i.e. is provided with, an information 50 which is not required for reconstruction of the video's pictures 12 from payload portion 18, the information comprising an indication of the predetermined subarea and replacement indices and/or replacement parameters. For example, information 50 may indicate the predetermined subarea 22 in terms of its location within pictures 12. The information 50 could, for instance, indicate the location of subarea 22 in units of tiles. Thus, information 50 may identify a set of tiles 42 within each picture so as to form subarea 22. The set of tiles 42 within each picture 12 may be fixed among pictures 12, i.e. the tiles forming, within each picture 12, subarea 22 may be co-located to each other and the tile boundaries of these tiles forming subarea 22 may spatially coincide between different pictures 12. It should be mentioned that the set of tiles is not restricted to form a contiguous rectangular tile subarray of pictures 12. An overlap-free and gapless abutment of the tiles within each picture 12 which form subarea 22 may, however, exist, with this gapless and overlap-free abutment or juxtaposition forming a rectangular area. Naturally, however, indication 50 is not restricted to indicating subarea 22 in units of tiles. It should be recalled that the usage of the tile subdivision of pictures 12 is merely optional anyway. Indication 50 may, for instance, indicate subarea 22 in units of samples or by some other means. In an even further embodiment, the location of subarea 22 could even form a default information known to participating network devices and decoders supposed to handle the video data stream 10, with information 50 merely indicating the reducibility with respect to, or the existence of, subarea 22.

As already described above and as illustrated in Fig. 3, information 50 comprises, besides the indication 52 of the predetermined subarea, replacement indices 54 and/or replacement parameters 56. The replacement indices and/or replacement parameters serve to change the indexed set of coding parameter settings, i.e. the set of coding parameter settings indexed by the indices within payload portion 18, such that the indexed set of coding parameter settings fits to the payload portion of a reduced video data stream in which the payload portion 18 has been modified by removing those portions relating to portions of pictures 12 external to subarea 22 on the one hand and by changing the location indications 32 so as to relate to a circumference of subarea 22 rather than the circumference of pictures 12 on the other hand.
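The tile-based subarea description above admits a simple well-formedness check: the tiles identified by information 50 should abut gaplessly and without overlap so as to form a rectangular area. The tile-coordinate representation is an assumption of this sketch:

```python
# Hedged sketch: check whether a set of tiles forms a gapless, overlap-free
# rectangular area, as required of the abutment described above.
def tiles_form_rectangle(tiles):
    """tiles: set of (col, row) tile coordinates.
    True iff the tiles exactly cover the bounding rectangle of the set
    (a set has no duplicates, so overlap-freeness is implicit)."""
    cols = {c for c, r in tiles}
    rows = {r for c, r in tiles}
    expected = {(c, r)
                for c in range(min(cols), max(cols) + 1)
                for r in range(min(rows), max(rows) + 1)}
    return tiles == expected
```

A network device could run such a check on indication 52 before attempting the reduction, rejecting subarea definitions that leave gaps in the target picture area.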

To render the latter circumstance clear, reference is made to Fig. 4, which shows a network device 60 configured to receive and process a video data stream 10 according to Fig. 1 so as to derive therefrom a reduced video data stream 62. The term "reduced" in "reduced video data stream" 62 shall denote two things: first, the fact that the reduced video data stream 62 corresponds to a lower bitrate compared to video data stream 10; and second, that the pictures which the reduced video data stream 62 has encoded thereinto are smaller than pictures 12 of video data stream 10 in that the smaller pictures of reduced video data stream 62 merely show subarea 22 of pictures 12.

In order to fulfill its task as explained in more detail below, network device 60 comprises a reader 64 configured to read information 50 from data stream 10, and a reducer 66 which performs the reduction or extraction process on the basis of information 50 in a manner described in more detail below.

Fig. 5 illustrates the functionality of network device 60 for the exemplary case of using replacement indices 54 in information 50. In particular, as illustrated in Fig. 5, network device 60 uses information 50, for instance, in order to remove 68 from the payload portion 18 of data stream 10 portions 70 which do not relate to subarea 22, i.e. which refer to an area of pictures 12 outside subarea 22. The removal 68 may, for instance, be performed on a slice basis, wherein reducer 66 identifies, on the basis of the location indications, i.e. the slice addresses within the slice headers of slices 34, on the one hand and indication 52 within information 50 on the other hand, those slices 34 within payload portion 18 which do not relate to subarea 22.
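The slice-based removal 68 can be sketched as a filter; the `tile_of` mapping from a slice address to its tile index is a hypothetical helper standing in for the stream's actual tile structure.

```python
def remove_outside_slices(slices, subarea_tiles, tile_of):
    """Step 68: keep only slices whose slice address lies in a tile
    belonging to subarea 22; all other slices 70 are dropped."""
    return [s for s in slices if tile_of(s["address"]) in subarea_tiles]

# Toy setup: four slices, one per tile; tile index = address // 100 (hypothetical).
slices = [{"address": 0}, {"address": 100}, {"address": 200}, {"address": 300}]
kept = remove_outside_slices(slices, subarea_tiles={1, 3}, tile_of=lambda a: a // 100)
```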

In the example of Fig. 5, where information 50 carries replacement indices 54, the parameter set portion 16 of video data stream 10 carries, besides the indexed set 20a of coding parameter settings, a non-indexed set 20b of coding parameter settings which are not referred to, or indexed, by the indices 48 within payload portion 18. In performing the reduction, reducer 66 replaces the indices 48 within data stream 10, with one being illustratively shown in Fig. 5, by the replacement indices 54, the replacement being illustrated in Fig. 5 using curved arrow 72. By the replacement of indices 48 with replacement indices 54, a redirection 72 takes place according to which the indices comprised by the payload portion of the reduced video data stream 62 refer to, or index, the second set 20b of coding parameter settings, so that the first set 20a of coding parameter settings becomes non-indexed. The redirection 72 may accordingly also involve reducer 66 removing 74 the no longer indexed set 20a of coding parameter settings from parameter set portion 16.
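The redirection 72 and removal 74 can be sketched as follows; `pps_id` is used as a stand-in for whichever parameter set index the slice headers carry.

```python
def redirect_and_prune(slice_headers, parameter_sets, replacement_indices):
    """Steps 72 and 74: redirect each header's parameter-set index via the
    replacement indices 54, then drop settings no longer indexed (set 20a)."""
    for header in slice_headers:
        header["pps_id"] = replacement_indices[header["pps_id"]]
    still_used = {h["pps_id"] for h in slice_headers}
    return {i: ps for i, ps in parameter_sets.items() if i in still_used}

headers = [{"pps_id": 0}, {"pps_id": 0}]
psets = {0: {"pic_width": 3840}, 1: {"pic_width": 1280}}  # sets 20a and 20b
remaining = redirect_and_prune(headers, psets, replacement_indices={0: 1})
```

After the call, the headers index set 20b only, and set 20a has been pruned from the parameter set portion.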

Reducer 66 also changes location indications 32 within the payload portion 18 so as to be measured relative to the circumference of the predetermined subarea 22. The change is indicated in Fig. 5 by way of a curved arrow 78, with the change of the exemplarily merely one depicted location indication 32 from data stream 10 to reduced video data stream 62 being schematically indicated by showing location indication 32 in the reduced video data stream 62 in a cross-hatched manner while showing location indication 32 in data stream 10 using no hatching.

Thus, summarizing the description of Fig. 5, network device 60 is able to obtain reduced video data stream 62 in a manner which involves merely a relatively low complexity. The cumbersome task of correctly adapting the set 20b of coding parameter settings to correctly parameterize, or fit to, the amount/size 30, location indication 32 and decoding order 44 of the payload portion 18 of the reduced video data stream 62 may have been performed elsewhere, such as within an encoder 80, which is representatively illustrated by a dashed box in Fig. 1. An alternative would be to change the order between the assessment of information 50 and the reduction by reducer 66, as described further below.

Fig. 6 illustrates a situation where the reduced video data stream 62 is fed to a video decoder 82 in order to illustrate that the reduced video data stream 62 has encoded thereinto a video 84 of smaller pictures 86, i.e. pictures 86 smaller in size than pictures 12 and merely showing subarea 22 thereof. Thus, a reconstruction of video 84 results from video decoder 82 decoding reduced video data stream 62. As explained with respect to Fig. 5, reduced video data stream 62 has a reduced payload portion 18 which has encoded thereinto the smaller pictures 86 in a manner parameterized, or correspondingly described, by the second set 20b of coding parameter settings.

The video encoder 80 may, for instance, encode pictures 12 into video data stream 10 while obeying the coding restrictions explained above with respect to Fig. 1 in connection with subarea 22. Encoder 80 may, for instance, perform this encoding by optimizing an appropriate rate-distortion cost function. As an outcome of this encoding, the payload portion 18 indexes set 20a. Additionally, encoder 80 generates set 20b. To this end, encoder 80 may, for instance, adapt picture size 26 and tile structure 38 from their values in set 20a so as to correspond to the size and occupied tile set of subarea 22. Beyond that, encoder 80 would substantially perform the reduction process as explained above with respect to Fig. 5 itself and compute the buffer timing 46 so as to enable a decoder, such as the video decoder 82, to correctly manage its coded picture buffer using the thus computed buffer timing 46 within the second set 20b of coding parameter settings.

Fig. 7 illustrates an alternative mode of operation of network device 60, namely in case of using replacement parameters 56 within information 50. According to this alternative, as depicted in Fig. 7, the parameter set portion 16 merely comprises the indexed set 20a of coding parameter settings, so that the re-indexing or redirection 72 and the set removal 74 do not need to be performed by reducer 66. Instead, reducer 66 uses replacement parameters 56 obtained from information 50 to adjust 88 the indexed set 20a of coding parameter settings so that it becomes set 20b of coding parameter settings. Even in accordance with this alternative, reducer 66, which performs steps 68, 78 and 88, is free of relatively complex operations in deriving the reduced video data stream 62 out of the original video data stream 10.

In other words, in the case of Fig. 7, the replacement parameters 56 may, for instance, comprise one or more of picture size 26, tile structure 38 and/or buffer timing 46.
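The adjustment 88 by replacement parameters 56 amounts to overwriting a few entries of set 20a while everything else stays valid for the reduced stream; the field names below are illustrative.

```python
def adjust_parameter_set(indexed_set, replacement_params):
    """Step 88: overwrite only the settings carried as replacement
    parameters 56 (e.g. picture size 26, tile structure 38, buffer
    timing 46); the remaining settings of set 20a are kept."""
    adjusted = dict(indexed_set)          # keep set 20a intact
    adjusted.update(replacement_params)   # apply the replacements -> set 20b
    return adjusted

set_20b = adjust_parameter_set(
    {"pic_width": 3840, "pic_height": 2160, "chroma_format": 1},
    {"pic_width": 1280, "pic_height": 720})
```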

It should be noted with respect to Figs. 5 and 7 that there may also be mixtures of both alternatives, with information 50 comprising both replacement indices and replacement parameters. For instance, coding parameter settings which are subject to a change from set 20a to 20b could be distributed onto, or comprised by, different parameter sets such as SPS, PPS, VPS, or the like. Accordingly, different processing according to Fig. 5 or Fig. 7 could be performed for different ones of these parameter sets.

With respect to the task of changing 78 the location indications, it is noted that this task has to be performed relatively often since it is to be performed, for example, for each payload slice of slices 34 within payload portion 18, but the computation of the new replacement values for the location indications 32 is of relatively low complexity. For example, location indications could indicate a location by way of horizontal and vertical coordinates, and change 78 could, for instance, compute a new coordinate of a location indication by subtracting, from the corresponding coordinate of the original location indication 32 in data stream 10, the offset of subarea 22 relative to the upper left corner of pictures 12. Alternatively, location indications 32 may indicate a location using some linear measure following, for instance, the aforementioned decoding order in some appropriate units such as, for instance, units of coding blocks, such as tree root blocks, into which pictures 12 are regularly divided in rows and columns. In such a case, the location indication would be computed anew within step 78, considering the coding order of these coding blocks within subarea 22 only. In this regard, it should also be noted that the just-outlined reduction/extraction process for forming the reduced video data stream 62 out of video data stream 10 would also be suitable for forming the reduced video data stream 62 in such a manner that the smaller pictures 86 of video 84 coded into reduced video data stream 62 show subarea 22 in a spatially stitched manner, so that the same picture content of pictures 86 may be located within pictures 12 at subarea 22 in a different spatial arrangement.
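For the linear-address case, the recomputation in step 78 can be sketched as follows, all quantities being expressed in units of coding blocks.

```python
def new_block_address(old_addr, pic_w, sub_x, sub_y, sub_w):
    """Step 78 for a linear slice address given in coding-block raster
    order: convert to (x, y) coordinates, subtract the subarea offset,
    and re-linearize against the subarea width."""
    x, y = old_addr % pic_w, old_addr // pic_w
    return (y - sub_y) * sub_w + (x - sub_x)

# Picture 8 blocks wide; subarea 22 starts at block (2, 1) and is 4 blocks
# wide: block (3, 2) has full-picture address 19 and subarea address 5.
addr = new_block_address(19, pic_w=8, sub_x=2, sub_y=1, sub_w=4)
```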

With respect to Fig. 6, it is noted that video decoder 82 shown in Fig. 6 may or may not be able to decode video data stream 10 so as to reconstruct therefrom pictures 12 of video 14. The reason for video decoder 82 not being able to decode video data stream 10 could be that the profile level of video decoder 82, for instance, suffices to cope with the size and complexity of the reduced video data stream 62, but is insufficient to decode the original video data stream 10. In principle, however, both data streams 62 and 10 conform to one video codec owing to the above-outlined appropriate adaptation of the indexed set of coding parameter settings by way of re-indexing and/or parameter adjustment.

After having described rather generally embodiments for video stream reduction/extraction with respect to a certain subarea of pictures of the video data stream to be reduced, the above description of the motivation and problems relating to such extraction with respect to HEVC is resumed in the following to provide a specific example for implementing the above described embodiments.

1. Signaling aspects for single layer sub region

1.1. Parameter Sets:

The following parameter set aspects need adjustment when a spatial subset is to be extracted:

• VPS: no normative information for single layer coding

• SPS:

■ Level information

■ Picture dimensions

■ Cropping or conformance window information

■ Buffering and timing information (i.e. HRD information)

■ Potentially further Video Usability Information (VUI) items such as motion_vectors_over_pic_boundaries_flag, min_spatial_segmentation_idc

• PPS:

■ Spatial segmentation information, i.e. tiling information with respect to the number and dimension of tiles in horizontal and vertical direction
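The listed SPS and PPS adjustments can be pictured as a small rewrite pass; the field names are illustrative stand-ins, not HEVC syntax element names.

```python
def rewrite_sps_pps(sps, pps, sub):
    """Rewrite the listed SPS/PPS aspects for the extracted subarea:
    level, picture dimensions, conformance window, HRD timing, tiling.
    Field names are illustrative only."""
    sps, pps = dict(sps), dict(pps)
    sps.update(level=sub["level"], pic_width=sub["width"],
               pic_height=sub["height"], hrd=sub["hrd"])
    sps.pop("conformance_window", None)   # cropping window re-derived, if any
    pps.update(tile_cols=sub["tile_cols"], tile_rows=sub["tile_rows"])
    return sps, pps

new_sps, new_pps = rewrite_sps_pps(
    {"level": 5.1, "pic_width": 3840, "pic_height": 2160,
     "hrd": "full", "conformance_window": (0, 0, 0, 0)},
    {"tile_cols": 4, "tile_rows": 4},
    {"level": 3.1, "width": 1920, "height": 1080,
     "hrd": "reduced", "tile_cols": 2, "tile_rows": 2})
```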

Claims

1. Video data stream representing a video (14) comprising

a parameter set portion (16) indicating coding parameter settings (20);

a payload portion (18) into which pictures (12) of the video are coded in a manner parameterized using a first set (20a) of the coding parameter settings (20), the first set (20a) being indexed by indices (48) comprised by the payload portion,

wherein the video data stream comprises an information (50) comprising

an indication (52) of a predetermined subarea (22) of the pictures (12), and replacement indices (54) for redirecting (72) the indices (48) comprised by the payload portion so as to refer to, and/or replacement parameters (56) for adjusting (88) the first set (20a) of coding parameter settings so as to result in, a second set (20b) of coding parameter settings,

wherein the second set (20b) of coding parameter settings is selected so that a reduced video data stream (62) modified compared to the video data stream by

removing (68) portions (70) of the payload portion (18) referring to an area of the pictures (12) outside the predetermined subarea (22), and

changing (78) location indications (32) in the payload portion (18) so as to indicate a location in a manner measured from a circumference of the predetermined subarea (22) instead of the pictures (12),

has a reduced payload portion having encoded thereinto subarea-specific pictures (86) showing the predetermined subarea (22) of the pictures in a manner parameterized using the second set (20b) of coding parameter settings.

2. Video data stream according to claim 1 wherein the video data stream and the reduced video data stream (62) conform to HEVC.

3. Video data stream according to claim 1 or 2 wherein the parameter set portion is comprised by SPS NAL units and/or VPS NAL units and/or PPS NAL units.

4. Video data stream according to any of claims 1 to 3 wherein the coding parameter settings define one or more of

picture size (26),

tile structure (38),

buffer size and timing (46).

5. Video data stream according to any of claims 1 to 4 wherein the indices (48) are contained in slice headers of slices (34) of the payload portion (18), each slice having encoded thereinto a corresponding area (36) of a picture (12) which does not cross a boundary of the predetermined subarea (22).

6. Video data stream according to any of claims 1 to 5 wherein the information is contained in an SEI message, a VUI or a parameter set extension.

7. Video data stream according to any of claims 1 to 6 wherein the payload portion has the pictures of the video coded thereinto in units of tiles (42) into which the pictures are subdivided and which are arranged in a tile array of columns and rows, wherein the indication (52) indicates the predetermined subarea as a set of the tiles.

8. Video data stream according to claim 7 wherein the payload portion has the pictures of the video coded thereinto in units of tiles (42) such that the payload portion is subdivided in units of slices (34), each slice having encoded thereinto a corresponding area of a picture which does not cross a tile border (40) between the tiles.

9. Video data stream according to any of claims 1 to 8 wherein

a portion of the first set (20a) of coding parameter settings indexed by the indices (48) from which the redirection (72) by the replacement indices is to be performed, and/or which is replaced by the replacement parameters, pertains to one or more of

picture size (26),

tile structure (38),

buffer size and timing (46).

10. Video data stream according to any of claims 1 to 9 wherein the payload portion (18) has the pictures (12) of the video coded thereinto so that within the predetermined subarea (22) the pictures are coded with interruption of temporal prediction across boundaries of the predetermined subarea (22).

11. Encoder for encoding a video into a video data stream, comprising

a parameter setter (80a) configured to determine coding parameter settings (20) and generate a parameter set portion (16) of the video data stream indicating the coding parameter settings;

a coding core (80b) configured to encode pictures of the video into a payload portion of the video data stream in a manner parameterized using a first set (20a) of the coding parameter settings, the first set being indexed by indices (48) comprised by the payload portion,

wherein the encoder is configured to provide the video data stream with an information (50) comprising

an indication of a predetermined subarea (22) of the pictures (12), and replacement indices (54) for redirecting the indices comprised by the payload portion so as to refer to, and/or replacement parameters (56) for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings,

wherein the second set of coding parameter settings is selected so that a reduced video data stream modified compared to the video data stream by

removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications in the payload portion so as to indicate a location in a manner measured from a circumference of the predetermined subarea instead of the pictures,

has a reduced payload portion having encoded thereinto subarea-specific pictures showing the subarea of the pictures in a manner parameterized using the second set of coding parameter settings.

12. Encoder according to claim 11 wherein the video data stream and the reduced video data stream conform to HEVC.

13. Encoder according to claim 11 or 12 wherein the parameter set portion is comprised by SPS NAL units and/or VPS NAL units and/or PPS NAL units.

14. Encoder according to any of claims 11 to 13 wherein the coding parameter settings define one or more of

picture size,

tile structure,

buffer size and timing.

15. Encoder according to any of claims 11 to 14 wherein the encoder is configured to insert the indices into slice headers of slices of the payload portion, and to encode into each slice a corresponding area of a picture which does not cross a boundary of the predetermined subarea.

16. Encoder according to any of claims 11 to 15 wherein the encoder is configured to insert the information into an SEI message, a VUI or a parameter set extension.

17. Encoder according to any of claims 11 to 16 wherein the encoder is configured to encode into the payload portion the pictures of the video coded in units of tiles into which the pictures are subdivided and which are arranged in a tile array of columns and rows, wherein the indication indicates the predetermined subarea as a set of the tiles.

18. Encoder according to claim 17 wherein the encoder is configured to encode into the payload portion the pictures of the video coded in units of tiles such that the payload portion is subdivided in units of slices, each slice having encoded thereinto a corresponding area of a picture which does not cross a tile border between the tiles.

19. Encoder according to any of claims 11 to 18 wherein

a portion of the first set of coding parameter settings indexed by the indices from which the redirection by the replacement indices is to be performed, and/or which is replaced by the replacement parameters, pertains to one or more of

picture size,

tile structure,

buffer timing.

20. Encoder according to any of claims 11 to 19 wherein the encoder is configured to encode into the payload portion the pictures of the video so that within the predetermined subarea (22) the pictures are coded with interruption of temporal prediction across boundaries of the predetermined subarea (22).

21. Network device for processing a video data stream, the video data stream comprising

a parameter set portion indicating coding parameter settings;

a payload portion into which pictures of the video are coded in a manner parameterized using a first set of the coding parameter settings, the first set being indexed by indices comprised by the payload portion,

wherein the network device is configured to

read (64) from the video data stream an information comprising

an indication of a predetermined subarea of the pictures, and

replacement indices for redirecting the indices comprised by the payload portion so as to refer to, and/or replacement parameters for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings,

reduce (66) the video data stream to a reduced video data stream modified by performing the redirection (72) and/or adjustment (88) so that the second set of coding parameter settings is indexed by the payload portion's indices, removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures,

so that the reduced video data stream has a reduced payload portion which has encoded thereinto subarea-specific pictures showing the predetermined subarea of the pictures in a manner parameterized using the second set of coding parameter settings.

22. Network device according to claim 21 wherein the video data stream and the reduced video data stream conform to HEVC.

23. Network device according to claim 21 or 22 wherein the parameter set portion is comprised by SPS NAL units and/or VPS NAL units and/or PPS NAL units.

24. Network device according to any of claims 21 to 23 wherein the coding parameter settings define one or more of

picture size,

tile structure,

buffer size and timing.

25. Network device according to any of claims 21 to 24 wherein the network device is configured to locate the indices in slice headers of slices of the payload portion, each slice having encoded thereinto a corresponding area of a picture which does not cross a boundary of the predetermined subarea.

26. Network device according to any of claims 21 to 25 wherein the network device is configured to read the information from an SEI message, a VUI or a parameter set extension of the video data stream.

27. Network device according to any of claims 21 to 26 wherein the payload portion has the pictures of the video coded thereinto in units of tiles into which the pictures are subdivided and which are arranged in a tile array of columns and rows, wherein the indication indicates the predetermined subarea as a set of the tiles.

28. Network device according to claim 27 wherein the payload portion has the pictures of the video coded thereinto in units of tiles such that the payload portion is subdivided in units of slices, each slice having encoded thereinto a corresponding area of a picture which does not cross a tile border between the tiles.

29. Network device according to any of claims 21 to 28 wherein

a portion of the first set of coding parameter settings indexed by the indices from which the redirection by the replacement indices is to be performed, and/or which is replaced by the replacement parameters, pertains to one or more of

picture size,

tile structure,

buffer size and timing.

30. Network device according to any of claims 21 to 29 wherein the payload portion (18) has the pictures (12) of the video coded thereinto so that within the predetermined subarea (22) the pictures are coded with interruption of temporal prediction across boundaries of the predetermined subarea (22).

31. Network device according to any of claims 21 to 30 wherein the network device is one of

a video decoder capable of decoding the video data stream and the reduced video data stream,

a video decoder a profile level of which suffices for the video decoder to be capable of decoding the reduced video data stream, but the profile level of which is insufficient so that the video decoder is unable to decode the video data stream, and

a network device configured to forward, depending on a control signal, one of the video data stream and the reduced video data stream to a video decoder.

32. Network device according to any of claims 21 to 31, wherein the video data stream comprises a displacing information (206) which indicates for a set of at least one predetermined subregions (214) of the pictures (12) a displacement of the set of at least one predetermined subregions within a target picture area (216) relative to an undisplaced copying of the set of at least one predetermined subregions into the target picture area, and

the network device is configured to modify the displacing information into modified displacing information (206') so that the subarea-specific pictures, copied into the target picture area with predetermined subregions of the subarea-specific pictures displaced according to the modified displacing information, coincide within the target picture area (216) with the predetermined subarea (22) of the pictures (12) copied into the target picture area with the set of at least one predetermined subregion (214) of the pictures (12) displaced according to the displacing information, and, in reducing the video data stream, replace the displacing information (206) with the modified displacing information (206'), or

the modified displacing information (206') is comprised by the video data stream associated with the predetermined subarea (22) of the pictures (12) and the displacing information (206) is comprised by the video data stream associated with the pictures and the network device is configured to, in reducing the video data stream, remove the displacing information (206) and carry over the modified displacing information (206') into the reduced video data stream so as to be associated with the subarea-specific pictures (86).

33. Video data stream representing a video (14) comprising

a payload portion (18) into which pictures (12) of the video are coded,

a supplemental enhancement information message indicating supplemental enhancement information matching the manner at which the pictures of the video are coded into the payload portion (18),

wherein the video data stream comprises an information (50) comprising

an indication (52) of a predetermined subarea (22) of the pictures (12), and a replacement supplemental enhancement information message for replacing the supplemental enhancement information message,

wherein the replacement supplemental enhancement information message is selected so that a reduced video data stream (62) modified compared to the video data stream by removing (68) portions (70) of the payload portion (18) referring to an area of the pictures (12) outside the predetermined subarea (22), and

changing (78) location indications (32) in the payload portion (18) so as to indicate a location in a manner measured from a circumference of the predetermined subarea (22) instead of the pictures (12),

has a reduced payload portion having encoded thereinto subarea-specific pictures (86) showing the predetermined subarea (22) of the pictures in a manner so that the replacement supplemental enhancement information message indicates replacement supplemental enhancement information matching the manner at which the subarea-specific pictures (86) are coded into the reduced payload portion (18).

34. Encoder for encoding a video into a video data stream, comprising

a coding core (80b) configured to encode pictures of the video into a payload portion of the video data stream,

a parameter setter (80a) configured to generate a supplemental enhancement information message indicating supplemental enhancement information matching the manner at which the pictures of the video are coded into the payload portion (18);

wherein the encoder is configured to provide the video data stream with an information (50) comprising

an indication of a predetermined subarea (22) of the pictures (12), and

a replacement supplemental enhancement information message for replacing the supplemental enhancement information message,

wherein the replacement supplemental enhancement information message is selected so that a reduced video data stream modified compared to the video data stream by removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications in the payload portion so as to indicate a location in a manner measured from a circumference of the predetermined subarea instead of the pictures,

has a reduced payload portion having encoded thereinto subarea-specific pictures showing the subarea of the pictures in a manner so that the replacement supplemental

enhancement information message indicates replacement supplemental enhancement information matching the manner at which the subarea-specific pictures (86) are coded into the reduced payload portion (18).

35. Network device for processing a video data stream, the video data stream comprising

a payload portion into which pictures of the video are coded,

a supplemental enhancement information message indicating supplemental enhancement information matching the manner at which the pictures of the video are coded into the payload portion (18),

wherein the network device is configured to

read (64) from the video data stream an information comprising

an indication of a predetermined subarea of the pictures, and

a replacement supplemental enhancement information message for replacing the supplemental enhancement information message,

reduce (66) the video data stream to a reduced video data stream modified by replacing the supplemental enhancement information message with the replacement supplemental enhancement information message;

removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures,

so that the reduced video data stream has a reduced payload portion which has encoded thereinto subarea-specific pictures showing the predetermined subarea of the pictures in a manner so that the replacement supplemental enhancement information message indicates replacement supplemental enhancement information matching the manner at which the subarea-specific pictures (86) are coded into the reduced payload portion (18).

36. Network device for processing a video data stream, configured to

receive a video data stream which comprises a fraction of a payload portion into which pictures of the video are coded, wherein the fraction corresponds to an exclusion of portions of the payload portion referring to an area of the pictures outside a predetermined subarea of the pictures, wherein the pictures of the video are coded into the payload portion,

in a manner parameterized, without exclusion, using coding parameter settings in a parameter set portion of the video data stream, and/or

in a manner matching, without exclusion, supplemental enhancement information indicated by a supplemental enhancement information message of the video data stream,

modify the video data stream by

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures, and

adjust the coding parameter settings in the parameter set portion and/or adjust the supplemental enhancement information message so that the video data stream has the fraction of the payload portion into which subarea-specific pictures showing the predetermined subarea of the pictures are encoded in a manner parameterized using the coding parameter settings as adjusted and/or matching the supplemental enhancement information indicated by the supplemental enhancement information message as adjusted.

37. Network device according to claim 36 configured to receive information on the predetermined subarea (22) along with the video data stream or use the information on the predetermined subarea (22) to restrict downloading the video data stream such that portions (70) of the payload portion (18) referring to an area of the pictures outside the predetermined subarea (22) are not downloaded.

38. Data stream having a picture (204) encoded thereinto, the data stream comprising

a displacing information (206) which indicates for a set of at least one predetermined subregion (214) of the picture (204) a displacement (218) of the set of at least one predetermined subregion within a target picture area (216) relative to an undisplaced copying of the set of at least one predetermined subregion into the target picture area (216).

39. Data stream according to claim 38, wherein the displacing information (206) indicates the displacement (218) in units of one of

picture samples,

tiles into which the picture is subdivided, in units of which the picture is coded without coding inter-dependencies, and of which each of the set of at least one subregion (214) is composed.

40. Data stream according to claim 38 or 39, wherein the set of at least one subregion of the picture is a subset of a gapless and overlap-free spatial partitioning of the picture into an array of subregions.

41. Data stream according to any of claims 38 to 39, wherein the displacing information comprises

a count (220) of the predetermined subregions within the set of at least one predetermined subregions of the picture;

a size parameter (222) defining a size of the target picture area, and

for each predetermined subregion of the set of at least one predetermined subregion of the picture, coordinates (224) of a displacement of the respective predetermined subregion of the picture within the target picture area; or

the displacing information comprises for each predetermined subregion of the set of at least one predetermined subregion of the picture, first position information on a position of the respective predetermined subregion in the target region, second information on a position of the respective predetermined region within the picture, and information on a rotation and/or information on a mirroring when mapping the respective predetermined subregion between the target region and the picture.

42. Data stream according to claim 41, wherein the displacing information further indicates a scaling (226) of the respective subregion within the target picture area.
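Purely as an illustration, and not as part of the claims, the syntax elements named in claims 41 and 42 (count (220), size parameter (222), per-subregion coordinates (224), optional scaling (226)) could be modelled as follows; all field names are hypothetical, since the claims do not fix a concrete syntax:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubregionDisplacement:
    # coordinates (224): position of the subregion within the target picture area
    target_x: int
    target_y: int
    # optional scaling (226) of the subregion within the target picture area,
    # expressed as a rational factor scale_num / scale_den
    scale_num: int = 1
    scale_den: int = 1

@dataclass
class DisplacingInformation:
    # size parameter (222): width and height of the target picture area (216)
    target_width: int
    target_height: int
    # one entry per predetermined subregion; len(displacements)
    # plays the role of the count (220)
    displacements: List[SubregionDisplacement] = field(default_factory=list)
```

A stream writer would serialize these fields in coding order; a reader would recover them before invoking the displacer.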

43. Data stream according to any of claims 38 to 42, wherein the data stream further comprises default filling information (228) which indicates a default filling using which a portion (130) of the target picture area (216) is to be filled which is neither covered by any of the set of at least one predetermined subregion of the picture displaced according to the displacing information nor, if the set of at least one predetermined subregion does not completely cover the picture (204), by any non-displaced portion of the picture.

44. Data stream according to any of claims 38 to 42, wherein the data stream has a sequence of pictures encoded thereinto, wherein the displacing information is valid for the sequence of pictures.

45. Decoder for decoding a data stream having a picture encoded thereinto, the decoder comprising

a decoding core (234) configured to reconstruct the picture from the data stream, and

a displacer (236) configured to synthesize a target picture on the basis of the picture by, according to displacing information contained in the data stream, displacing each of a set of at least one predetermined subregion of the picture within an area of the target picture.
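As a minimal illustrative sketch only (not part of the claims), the displacer (236) of claim 45 can be pictured as copying each signalled subregion from the reconstructed picture to its displaced position in the target picture area, with the remainder held at a default fill value; the function name and the tuple layout of the subregion list are assumptions for this sketch:

```python
import numpy as np

def synthesize_target(picture, subregions, target_size, fill=0):
    """Displace subregions of a decoded picture into a target picture area.

    subregions: list of ((src_y, src_x, height, width), (dst_y, dst_x))
    tuples taken from the displacing information; positions not covered
    by any displaced subregion are set to the default fill value.
    """
    th, tw = target_size
    target = np.full((th, tw), fill, dtype=picture.dtype)
    for (sy, sx, h, w), (dy, dx) in subregions:
        # copy the subregion to its displaced location in the target area
        target[dy:dy + h, dx:dx + w] = picture[sy:sy + h, sx:sx + w]
    return target
```

For example, moving the top-left 2x2 block of a 4x4 picture to the bottom-right of a 4x4 target area leaves the remaining samples at the fill value, corresponding to the default filling of claim 51.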

46. Decoder according to claim 45, wherein the displacer is configured to, if the set of at least one predetermined subregion does not completely cover the picture (204), copy in an undisplaced manner any non-displaced portion of the picture into the target picture area (216).

47. Decoder according to claim 45 or 46, wherein the displacing information indicates the displacement of the respective subregion in units of one of

picture samples,

tiles into which the picture is subdivided, in units of which the picture is coded without coding inter-dependencies, and of which each of the set of at least one subregion (214) is composed.

48. Decoder according to any of claims 45 to 47, wherein the set of at least one subregion of the picture is a subset of a gapless and overlap-free spatial partitioning of the picture into an array of subregions.

49. Decoder according to any of claims 45 to 48, wherein the displacer is configured to

read, as part of the displacing information, from the data stream

a count of the predetermined subregions within the set of at least one predetermined subregions of the picture,

size parameters defining a size of the target picture area,

for each predetermined subregion of the set of at least one predetermined subregions of the picture, coordinates of a displacement of the respective predetermined subregion of the picture within the target picture area, and

displace each of the predetermined subregions of the set of at least one predetermined subregions of the picture relative to an overlay of the picture onto the area of the target picture according to the coordinates of displacement read for the respective predetermined subregion.

50. Decoder according to claim 49, wherein the displacer is configured to synthesize a target picture on the basis of the picture by, according to displacing information contained in the data stream, scaling each of a set of at least one predetermined subregions of the picture, displaced relative to the undisplaced copying of the picture into the area of the target picture according to the displacing information, within the area of the target picture.

51. Decoder according to any of claims 45 to 50, wherein the displacer is configured to fill, according to default filling information contained in the data stream, a portion of the target picture area which is neither covered by any of the set of at least one predetermined subregions of the picture displaced according to the displacing information nor, if the set of at least one predetermined subregion does not completely cover the picture (204), by any non-displaced portion of the picture.

52. Decoder according to any of claims 45 to 51, wherein the decoding core is configured to reconstruct a sequence of pictures from the data stream and the displacer is configured to apply the displacing information onto the pictures of the sequence of pictures.

53. Decoder according to any of claims 45 to 52, further comprising

a renderer (238) configured to apply an injective projection onto the target picture area so as to form an output picture.

54. Network device configured to reduce a data stream (200) having encoded thereinto a first picture, into a reduced data stream (232) having encoded thereinto a subarea-specific picture showing a predetermined subarea of the first picture, wherein the data stream (200) comprises a displacing information (206) which indicates for a set of at least one predetermined subregion of the first picture a displacement (218) of the set of at least one predetermined subregion (214) within a target picture area (216) relative to an undisplaced copying of the set of at least one predetermined subregion into the target picture area, wherein

the network device is configured to modify the displacing information (206) into modified displacing information (206') so that the subarea-specific picture, copied into the target picture area having a set of at least one predetermined subregion of the subarea-specific picture displaced according to the modified displacing information, coincides within the target picture area with the predetermined subarea of the first picture copied into the target picture area with the set of at least one predetermined subregion (214) of the picture (12) displaced according to the displacing information, and, in reducing the data stream, replace the displacing information with the modified displacing information, or

the modified displacing information is comprised by the data stream associated with the predetermined subarea of the first picture and the displacing information is comprised by the data stream associated with the first picture and the network device is configured to, in reducing the data stream, remove the displacing information and carry over the modified displacing information into the reduced data stream so as to be associated with the subarea-specific picture.

55. Video data stream having encoded thereinto a sequence of pictures (302) using temporal prediction such that

a first set of one or more pictures (pictures A) are encoded into the video data stream with suspending temporal prediction at least within a first picture subarea (A) so as to form a set of one or more first random access points, and

a second set of one or more pictures (pictures B) are encoded into the video data stream with suspending temporal prediction within a second picture subarea (B) different from the first picture subarea (A) so as to form a set of one or more second random access points.

56. Video data stream according to claim 55 wherein the first and the second subareas abut each other without overlap.

57. Video data stream according to claim 56 wherein the first and the second subareas form an overlap-free and gapless partitioning of the sequence of pictures (302) which is signaled (319) in the video data stream.

58. Video data stream according to claim 57 wherein the first set of one or more pictures (pictures A) are encoded into the video data stream with suspending temporal prediction over the picture's area completely so as to form a set of one or more picture-wise random access points.

59. Video data stream according to any of claims 55 to 58 wherein

the second set of one or more pictures (pictures B) are encoded into the video data stream with suspending within the second picture subarea (B) a spatial coding dependency on a spatial portion external to the second picture subarea.

60. Video data stream according to any of claims 55 to 59 further comprising

a signalization (318) indicating the second set of one or more pictures (pictures B).

61. Video data stream according to claim 60 wherein the signalization (318) indicates the second set of one or more pictures (pictures B) at least partially via a picture type signalization (320) of the video data stream which distinguishes between picture-wise random access points and subarea-specific random access points.

62. Video data stream according to claim 61 wherein the signalization (318) indicates the second set of one or more pictures (pictures B) by the picture type signalization (320) of the video data stream indicating the second set of one or more pictures (pictures B) to be subarea-specific random access points and further associating the second set of one or more pictures (pictures B) to the second picture subarea (B).

63. Video data stream according to claim 61 or 62 wherein the picture type signalization (320) of the video data stream is contained in NAL unit headers of the video data stream.

64. Video data stream according to any of claims 55 to 63 further comprising a further signalization (321) indicating for the sequence of pictures whether the pictures (302) are encoded into the video data stream in a manner so that temporal prediction of the second picture subarea (B) is restricted to not, or allowed to, reach out to a picture area of the pictures (302) lying outside the second picture subarea.

65. Video data stream according to any of claims 55 to 64 wherein pictures (317) between one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures are encoded into the video data stream with restricting temporal prediction within the second picture subarea (B) in a manner so as to not refer to a picture area (A) of reference pictures external to the second picture subarea (B).

66. Video data stream according to any of claims 55 to 65 wherein pictures (317) between one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures are encoded into the video data stream with restricting temporal prediction within the second picture subarea (B) in a manner so as to not refer to reference pictures upstream, in decoding order, relative to the one (picture B) of the second set of one or more pictures.

67. Video data stream according to any of claims 65 and 66 wherein the pictures (317) between the one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures are encoded into the video data stream with restricting temporal prediction within the picture area (A) outside the second picture subarea (B) in a manner so as to not refer to the second picture subarea (B) of reference pictures which belong to any of the pictures (317) between the one (picture B) of the second set of one or more pictures and the, in terms of a decoding order, following one (picture A) of the first set of one or more pictures and the one (picture B) of the second set of one or more pictures, respectively.

68. Video data stream according to any of claims 65 and 66 wherein

the pictures (317) between the one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures are encoded into the video data stream using temporal prediction within the picture area (A) outside the second picture subarea (B) which, at least partially, refers to the second picture subarea (B) of reference pictures which belong to any of the pictures (317) between the one (picture B) of the second set of one or more pictures and the, in terms of a decoding order, following one (picture A) of the first set of one or more pictures and the one (picture B) of the second set of one or more pictures, respectively, and

pictures (317') following, in terms of a decoding order, the following one (picture A) of the first set of one or more pictures are encoded into the video data stream with restricting temporal prediction within the picture area (A) outside the second picture subarea (B) so as to not refer to the second picture subarea (B) of, in terms of a decoding order, preceding reference pictures which belong to any of the following one (picture A) of the first set of one or more pictures and the pictures (317') following the following one (picture A) of the first set of one or more pictures.

69. Encoder for encoding into a video data stream a sequence of pictures using temporal prediction, the encoder configured to

encode a first set of one or more pictures into the video data stream with suspending temporal prediction at least within a first picture subarea so as to form a set of one or more first random access points, and

encode a second set of one or more pictures into the video data stream with suspending temporal prediction within a second picture subarea different from the first picture subarea so as to form a set of one or more second random access points.

70. Encoder according to claim 69 wherein the first and the second subarea abut each other without overlap.

71. Encoder according to claim 70 wherein the first and the second subareas form an overlap-free and gapless partitioning of the sequence of pictures (302) and the encoder is configured to provide the video data stream with a signalization (319) of the overlap-free and gapless partitioning.

72. Encoder according to claim 69 configured to encode the first set of one or more pictures into the video data stream with suspending temporal prediction over the picture's area completely so as to form a set of picture-wise random access points.

73. Encoder according to any of claims 69 to 72 configured to encode the second set of one or more pictures into the video data stream with suspending within the second picture subarea (B) a spatial coding dependency on a spatial portion external to the second picture subarea.

74. Encoder according to any of claims 69 to 73 further configured to insert a signalization into the video data stream indicating the second set of one or more pictures.

75. Encoder according to any of claims 69 to 74 configured to insert a further signalization (321) into the video data stream indicating for the sequence of pictures whether the pictures (302) are encoded into the video data stream in a manner so that temporal prediction of the second picture subarea (B) is restricted to not, or allowed to, reach out to a picture area of the pictures (302) lying outside the second picture subarea.

76. Encoder according to any of claims 69 to 75 wherein the encoder is configured to

encode pictures between one of the second set of one or more pictures and a, in terms of a decoding order, following one of the first set of one or more pictures into the video data stream with restricting temporal prediction within the second picture subarea in a manner so as to not refer to a picture area of reference pictures external to the second picture subarea.

77. Encoder according to any of claims 69 to 75 wherein the encoder is configured to encode pictures (317) between one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures into the video data stream with restricting temporal prediction within the second picture subarea (B) in a manner so as to not refer to reference pictures upstream, in decoding order, relative to the one (picture B) of the second set of one or more pictures.

78. Encoder according to any of claims 76 and 77 wherein the encoder is configured to encode the pictures (317) between the one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures into the video data stream with restricting temporal prediction within the picture area (A) outside the second picture subarea (B) in a manner so as to not refer to the second picture subarea (B) of reference pictures which belong to any of the pictures (317) between the one (picture B) of the second set of one or more pictures and the, in terms of a decoding order, following one (picture A) of the first set of one or more pictures and the one (picture B) of the second set of one or more pictures, respectively.

79. Encoder according to any of claims 76 and 77 wherein the encoder is configured to

encode the pictures (317) between the one (picture B) of the second set of one or more pictures and a, in terms of a decoding order, following one (picture A) of the first set of one or more pictures into the video data stream using temporal prediction within the picture area (A) outside the second picture subarea (B) which, at least partially, refers to the second picture subarea (B) of reference pictures which belong to any of the pictures (317) between the one (picture B) of the second set of one or more pictures and the, in terms of a decoding order, following one (picture A) of the first set of one or more pictures and the one (picture B) of the second set of one or more pictures, respectively, and

encode pictures following, in terms of a decoding order, the following one (picture A) of the first set of one or more pictures into the video data stream with restricting temporal prediction within the picture area (A) outside the second picture subarea (B) so as to not refer to the second picture subarea (B) of, in terms of a decoding order, preceding reference pictures which belong to any of the following one (picture A) of the first set of one or more pictures and the pictures following the following one (picture A) of the first set of one or more pictures.

80. Decoder for decoding from a video data stream a sequence of pictures using temporal prediction, the decoder supporting random access using

a set of one or more first random access points at a first set of one or more pictures which are encoded into the video data stream with suspending temporal prediction at least within a first picture subarea, and

a set of one or more second random access points at a second set of one or more pictures which are encoded into the video data stream with suspending temporal prediction within a second picture subarea different from the first picture subarea.

81. Decoder according to claim 80 configured to, if a current picture is one of the second set of one or more pictures,

resume decoding and/or outputting of the sequence of pictures starting with the current picture, but preliminarily restrict at least one of the decoding and/or outputting of the sequence of pictures to the second picture subarea.

82. Decoder according to claim 81 configured to stop the preliminary restriction upon encountering one of the set of first random access points.

83. Decoder according to claim 80 configured to, if a current picture is one of the first or second sets of one or more pictures,

resume decoding the sequence of pictures starting with the current picture, but restrict the outputting of the sequence of pictures until encountering, since resuming the decoding, one picture of each of the sets of one or more pictures, wherein, for each set of one or more pictures, each picture of the respective set of one or more pictures is encoded into the video data stream with suspending temporal prediction within a picture subarea specific for the respective set of one or more pictures, the subareas of the sets of one or more pictures being different from each other and together completely covering a picture area of the sequence of pictures.
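As an illustrative sketch only (not part of the claims), the output gating of claim 83 can be pictured as follows: after random access, decoding resumes at the first encountered random access point, but pictures are withheld from output until one random access point of every subarea-specific set has been seen, i.e. until the whole picture area has been refreshed. The event-list representation here is an assumption of the sketch:

```python
def frames_to_output(events, subarea_sets):
    """Return the pictures that may be output after random access.

    events: decoding-order list of (picture_id, rap_set) pairs, where
    rap_set names the subarea-specific random access point set the
    picture belongs to (e.g. 'A' or 'B'), or None for ordinary pictures.
    subarea_sets: the collection of all subarea-specific sets whose
    subareas together cover the picture area.
    """
    seen = set()
    decoding = False
    out = []
    for pic, rap in events:
        if rap is not None:
            decoding = True       # resume decoding at the first random access point
            seen.add(rap)
        if decoding and seen == set(subarea_sets):
            out.append(pic)       # all subareas refreshed: output permitted
    return out
```

With random access points of set B at picture 1 and of set A at picture 3, output starts only at picture 3, once both subareas have been refreshed.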

84. Decoder according to any of claims 80 to 83 wherein the first and the second subarea abut each other without overlap, and the decoder is configured to, if a current random access point is one of the second random access points and a next random access point is one of the first random access points, restrict decoding to the second picture subarea between the current and the next random access points and extend decoding from the second picture subarea to also include the first picture subarea.

85. Decoder according to claim 80 wherein the first and the second subareas form an overlap-free and gapless partitioning of the sequence of pictures (302) and the decoder is configured to derive same from a signalization (319) in the video data stream.

86. Decoder according to any of claims 80 to 83 configured to treat the first set of one or more pictures as a set of one or more picture-wise random access points.

87. Decoder according to any of claims 80 to 86 configured to decode a data stream according to any of claims 55 to 68.

88. Network device configured to receive a video data stream having encoded thereinto a sequence of pictures (302) using temporal prediction according to any of claims 55 to 68, wherein the network device is configured to reduce the data stream to obtain a reduced video data stream having subarea-specific pictures (326) encoded thereinto which show the second picture subarea, by removal of portions of the video data stream having encoded thereinto a picture area of the pictures external to the second picture subarea and replacing an information (318) within the video data stream which indicates the second set of one or more pictures as subarea-specific random access points by picture type information (320) which indicates the second set of one or more pictures as picture-wise random access pictures.

89. Digital storage medium having stored thereon a data stream according to any of claims 1 to 10, 33, 38 to 44 and 55 to 68.

90. Method for encoding a video into a video data stream, comprising

determining coding parameter settings (20) and generating a parameter set portion (16) of the video data stream indicating the coding parameter settings;

encoding pictures of the video into a payload portion of the video data stream in a manner parameterized using a first set (20a) of the coding parameter settings, the first set being indexed by indices (48) comprised by the payload portion,

providing the video data stream with an information (50) comprising

an indication of a predetermined subarea (22) of the pictures (12), and replacement indices (54) for redirecting the indices comprised by the payload portion so as to refer to, and/or replacement parameters (56) for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings,

wherein the second set of coding parameter settings is selected so that a reduced video data stream modified compared to the video data stream by

removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications in the payload portion so as to indicate a location in a manner measured from a circumference of the predetermined subarea instead of the pictures,

has a reduced payload portion having encoded thereinto subarea-specific pictures showing the subarea of the pictures in a manner parameterized using the second set of coding parameter settings.

91. Method for processing a video data stream, the video data stream comprising

a parameter set portion indicating coding parameter settings;

a payload portion into which pictures of the video are coded in a manner parameterized using a first set of the coding parameter settings, the first set being indexed by indices comprised by the payload portion,

wherein the method comprises

reading (64) from the video data stream an information comprising

an indication of a predetermined subarea of the pictures, and

replacement indices for redirecting the indices comprised by the payload portion so as to refer to, and/or replacement parameters for adjusting the first set of coding parameter settings so as to result in, a second set of coding parameter settings,

reducing (66) the video data stream to a reduced video data stream modified by

performing the redirection (72) and/or adjustment (88) so that the second set of coding parameter settings is indexed by the payload portion's indices; removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures,

so that the reduced video data stream has a reduced payload portion which has encoded thereinto subarea-specific pictures showing the predetermined subarea of the pictures in a manner parameterized using the second set of coding parameter settings.
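The reduction of claim 91 can be sketched, purely for illustration and on an abstracted slice representation (the dictionary keys and function name are assumptions of this sketch, not syntax defined by the claims), as dropping payload portions outside the predetermined subarea, redirecting the parameter-set indices (48) via the replacement indices (54), and rebasing the location indications (32) to the subarea's circumference:

```python
def reduce_slice_headers(slices, replacement_indices, subarea_origin):
    """Reduce an abstracted payload portion to a predetermined subarea.

    slices: list of dicts with 'pps_id' (index into the parameter sets),
    'pos' ((x, y) location indication) and 'inside' (True if the slice
    lies within the predetermined subarea).
    replacement_indices: mapping from original to replacement indices.
    subarea_origin: (x, y) of the subarea's top-left corner in the picture.
    """
    ox, oy = subarea_origin
    reduced = []
    for s in slices:
        if not s['inside']:
            continue                                # removing (68) outside portions
        x, y = s['pos']
        reduced.append({
            'pps_id': replacement_indices[s['pps_id']],  # redirection (72)
            'pos': (x - ox, y - oy),                     # changing (78) locations
        })
    return reduced
```

The point of the redirection is that the slice payload itself is left untouched: only the indices it carries are made to resolve to the second set of coding parameter settings, so no transcoding is required.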

92. Method for encoding a video into a video data stream, comprising

encoding pictures of the video into a payload portion of the video data stream,

generating a supplemental enhancement information message indicating supplemental enhancement information matching the manner at which the pictures of the video are coded into the payload portion (18);

providing the video data stream with an information (50) comprising

an indication of a predetermined subarea (22) of the pictures (12), and

a replacement supplemental enhancement information message to replace the supplemental enhancement information message,

wherein the replacement supplemental enhancement information message is selected so that a reduced video data stream modified compared to the video data stream by removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications in the payload portion so as to indicate a location in a manner measured from a circumference of the predetermined subarea instead of the pictures,

has a reduced payload portion having encoded thereinto subarea-specific pictures showing the subarea of the pictures in a manner so that the replacement supplemental enhancement information message indicates replacement supplemental enhancement information matching the manner at which the subarea-specific pictures (86) are coded into the reduced payload portion (18).

93. Method for processing a video data stream, the video data stream comprising

a payload portion into which pictures of the video are coded,

a supplemental enhancement information message indicating supplemental enhancement information matching the manner at which the pictures of the video are coded into the payload portion (18),

wherein the method comprises

reading (64) from the video data stream an information comprising

an indication of a predetermined subarea of the pictures, and

a replacement supplemental enhancement information message to replace the supplemental enhancement information message,

reducing (66) the video data stream to a reduced video data stream modified by replacing the supplemental enhancement information message by the replacement supplemental enhancement information message;

removing (68) portions of the payload portion referring to an area of the pictures outside the predetermined subarea, and

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures,

so that the reduced video data stream has a reduced payload portion which has encoded thereinto subarea-specific pictures showing the predetermined subarea of the pictures in a manner so that the replacement supplemental enhancement information message indicates replacement supplemental enhancement information matching the manner at which the subarea-specific pictures (86) are coded into the reduced payload portion (18).

94. Method for processing a video data stream, comprising

receiving a video data stream which comprises a fraction of a payload portion into which pictures of the video are coded, wherein the fraction corresponds to an exclusion of portions of the payload portion referring to an area of the pictures outside a predetermined subarea of the pictures, wherein the pictures of the video are coded into the payload portion,

in a manner parameterized, without exclusion, using coding parameter settings in a parameter set portion of the video data stream, and/or

in a manner matching, without exclusion, supplemental enhancement information indicated by a supplemental enhancement information message of the video data stream, modifying the video data stream by

changing (78) location indications (32) in the payload portion so as to indicate a location measured from a circumference of the predetermined subarea instead of the pictures, and

adjusting the coding parameter settings in the parameter set portion and/or adjusting the supplemental enhancement information message so that the video data stream has the fraction of the payload portion into which subarea-specific pictures showing the predetermined subarea of the pictures are encoded in a manner parameterized using the coding parameter settings and/or

matching the supplemental enhancement information indicated by the supplemental enhancement information message as adjusted.

95. Method for decoding a data stream having a picture encoded thereinto, the method comprising

reconstructing the picture from the data stream, and

synthesizing a target picture on the basis of the picture by, according to displacing information contained in the data stream, displacing each of a set of at least one predetermined subregion of the picture within an area of the target picture.

96. Method for reducing a data stream (200) having encoded thereinto a first picture, into a reduced data stream (232) having encoded thereinto a subarea-specific picture showing a predetermined subarea of the first picture, wherein the data stream (200) comprises a displacing information (206) which indicates for a set of at least one predetermined subregion of the first picture a displacement (218) of the set of at least one predetermined subregion (214) within a target picture area (216) relative to an undisplaced copying of the set of at least one predetermined subregion into the target picture area, wherein

the method comprises modifying the displacing information (206) into modified displacing information (206') so that the subarea-specific picture, copied into the target picture area having a set of at least one predetermined subregion of the subarea-specific picture displaced according to the modified displacing information, coincides within the target picture area with the predetermined subarea of the first picture copied into the target picture area with the set of at least one predetermined subregion (214) of the picture (12) displaced according to the displacing information, and, in reducing the data stream, replacing the displacing information with the modified displacing information, or

the modified displacing information is comprised by the data stream associated with the predetermined subarea of the first picture and the displacing information is comprised by the data stream associated with the first picture and the method comprises, in reducing the data stream, removing the displacing information and carrying over the modified displacing information into the reduced data stream so as to be associated with the subarea-specific picture.

97. Method for encoding into a video data stream a sequence of pictures using temporal prediction, the method comprising

encoding a first set of one or more pictures into the video data stream with suspending temporal prediction at least within a first picture subarea so as to form a set of one or more first random access points, and

encoding a second set of one or more pictures into the video data stream with suspending temporal prediction within a second picture subarea different from the first picture subarea so as to form a set of one or more second random access points.

98. Method for decoding from a video data stream a sequence of pictures using temporal prediction, the method comprising randomly accessing the video data stream using

a set of one or more first random access points at a first set of one or more pictures which are encoded into the video data stream with suspending temporal prediction at least within a first picture subarea, and

a set of one or more second random access points at a second set of one or more pictures which are encoded into the video data stream with suspending temporal prediction within a second picture subarea different from the first picture subarea.
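The encoder side of claims 97 and 98 amounts to a schedule that, instead of coding full intra pictures, cycles the intra-refreshed (temporal-prediction-suspended) subarea across pictures. A minimal sketch under assumed parameters (the function and its arguments are hypothetical, not from the claims):

```python
def rap_schedule(num_pictures, subareas, period):
    """Assign, every `period` pictures, which subarea suspends temporal
    prediction, cycling through `subareas` so that each subarea receives a
    subarea-specific random access point without any picture being coded
    fully intra. Returns {picture_index: subarea}."""
    schedule = {}
    for n in range(0, num_pictures, period):
        schedule[n] = subareas[(n // period) % len(subareas)]
    return schedule
```

A decoder randomly accessing such a stream (claim 98) can start at whichever of the first or second random access points refreshes the subarea it needs, rather than waiting for a full-picture random access point.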

99. Method comprising

receiving a video data stream having encoded thereinto a sequence of pictures (302) using temporal prediction according to any of claims 55 to 68,

reducing the data stream to obtain a reduced video data stream having subarea-specific pictures (326) encoded thereinto which show the second picture subarea, by removal of portions of the video data stream having encoded thereinto a picture area of the pictures external to the second picture subarea, and replacing information (318) within the video data stream which indicates the second set of one or more pictures as subarea-specific random access points by picture type information (320) which indicates the second set of one or more pictures as picture-wise random access pictures.
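The reduction of claim 99 can be pictured as filtering the stream's units by subarea and re-marking subarea-specific random access points as ordinary (picture-wise) random access pictures, since after reduction the subarea is the whole picture. A sketch under assumed, hypothetical unit fields (real streams would operate on NAL units and their types):

```python
def reduce_stream(units, subarea_id):
    """Keep only units carrying the wanted subarea; where a unit carries a
    hypothetical 'subarea_rap' flag (the information (318) of claim 99),
    replace it by a picture-wise random access indication (the picture type
    information (320)), here modelled as an 'IDR' type."""
    reduced = []
    for u in units:
        if u["subarea"] != subarea_id:
            continue  # drop payload encoding the removed picture area
        u = dict(u)  # do not mutate the caller's units
        if u.pop("subarea_rap", False):
            u["type"] = "IDR"  # now a picture-wise random access picture
        reduced.append(u)
    return reduced
```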

100. Computer program having a program code for performing, when running on a computer, a method according to any of claims 90 to 99.

Documents

Application Documents

# Name Date
1 202138006216-PatentCertificate31-01-2025.pdf 2025-01-31
2 202138006216-IntimationOfGrant31-01-2025.pdf 2025-01-31
3 202138006216-Information under section 8(2) [01-03-2024(online)].pdf 2024-03-01
4 202138006216-FORM 3 [15-01-2024(online)].pdf 2024-01-15
5 202138006216-Information under section 8(2) [18-09-2023(online)].pdf 2023-09-18
6 202138006216-FORM 3 [24-07-2023(online)].pdf 2023-07-24
7 202138006216-Information under section 8(2) [08-07-2023(online)].pdf 2023-07-08
8 202138006216-2. Marked Copy under Rule 14(2) [17-04-2023(online)].pdf 2023-04-17
9 202138006216-Retyped Pages under Rule 14(1) [17-04-2023(online)].pdf 2023-04-17
10 202138006216-ABSTRACT [14-04-2023(online)].pdf 2023-04-14
11 202138006216-CLAIMS [14-04-2023(online)].pdf 2023-04-14
12 202138006216-CORRESPONDENCE [14-04-2023(online)].pdf 2023-04-14
13 202138006216-DRAWING [14-04-2023(online)].pdf 2023-04-14
14 202138006216-ENDORSEMENT BY INVENTORS [14-04-2023(online)].pdf 2023-04-14
15 202138006216-FER_SER_REPLY [14-04-2023(online)].pdf 2023-04-14
16 202138006216-FORM 3 [06-02-2023(online)].pdf 2023-02-06
17 202138006216-Information under section 8(2) [06-02-2023(online)].pdf 2023-02-06
18 202138006216-FER.pdf 2023-01-17
19 202138006216-FORM 3 [07-01-2023(online)].pdf 2023-01-07
20 202138006216-Information under section 8(2) [02-12-2022(online)].pdf 2022-12-02
21 202138006216-Information under section 8(2) [24-11-2022(online)].pdf 2022-11-24
22 202138006216-Information under section 8(2) [28-09-2022(online)].pdf 2022-09-28
23 202138006216-Information under section 8(2) [01-09-2022(online)].pdf 2022-09-01
24 202138006216-FORM 3 [23-07-2022(online)].pdf 2022-07-23
25 202138006216-Information under section 8(2) [19-07-2022(online)].pdf 2022-07-19
26 202138006216-Information under section 8(2) [01-06-2022(online)].pdf 2022-06-01
27 202138006216-Information under section 8(2) [24-05-2022(online)].pdf 2022-05-24
28 202138006216-Information under section 8(2) [12-04-2022(online)].pdf 2022-04-12
29 202138006216-Information under section 8(2) [09-03-2022(online)].pdf 2022-03-09
30 202138006216-Information under section 8(2) [08-02-2022(online)].pdf 2022-02-08
31 202138006216-Information under section 8(2) [28-01-2022(online)].pdf 2022-01-28
32 202138006216-Information under section 8(2) [25-01-2022(online)].pdf 2022-01-25
33 202138006216-Information under section 8(2) [30-12-2021(online)].pdf 2021-12-30
34 202138006216-Information under section 8(2) [20-12-2021(online)].pdf 2021-12-20
35 202138006216-Information under section 8(2) [09-11-2021(online)].pdf 2021-11-09
36 202138006216-Information under section 8(2) [31-08-2021(online)].pdf 2021-08-31
37 202138006216-Information under section 8(2) [10-07-2021(online)].pdf 2021-07-10
38 202138006216-Information under section 8(2) [02-07-2021(online)].pdf 2021-07-02
39 202138006216-Information under section 8(2) [30-06-2021(online)].pdf 2021-06-30
40 202138006216-Information under section 8(2) [26-06-2021(online)].pdf 2021-06-26
41 202138006216-FORM-26 [05-04-2021(online)].pdf 2021-04-05
42 202138006216-Proof of Right [31-03-2021(online)].pdf 2021-03-31
43 202138006216-Proof of Right [26-02-2021(online)].pdf 2021-02-26
44 202138006216-FORM 18 [24-02-2021(online)].pdf 2021-02-24
45 202138006216-COMPLETE SPECIFICATION [15-02-2021(online)].pdf 2021-02-15
46 202138006216-DRAWINGS [15-02-2021(online)].pdf 2021-02-15
47 202138006216-FIGURE OF ABSTRACT [15-02-2021(online)].jpg 2021-02-15
48 202138006216-DECLARATION OF INVENTORSHIP (FORM 5) [15-02-2021(online)].pdf 2021-02-15
49 202138006216-STATEMENT OF UNDERTAKING (FORM 3) [15-02-2021(online)].pdf 2021-02-15
50 202138006216-FORM 1 [15-02-2021(online)].pdf 2021-02-15

Search Strategy

1 Search_Strategy_202138006216E_12-01-2023.pdf

ERegister / Renewals

3rd: 17 Feb 2025 (from 08/02/2019 to 08/02/2020)
4th: 17 Feb 2025 (from 08/02/2020 to 08/02/2021)
5th: 17 Feb 2025 (from 08/02/2021 to 08/02/2022)
6th: 17 Feb 2025 (from 08/02/2022 to 08/02/2023)
7th: 17 Feb 2025 (from 08/02/2023 to 08/02/2024)
8th: 17 Feb 2025 (from 08/02/2024 to 08/02/2025)
9th: 17 Feb 2025 (from 08/02/2025 to 08/02/2026)