Abstract: Methods, apparatus, systems and articles of manufacture to identify a video decoding error are disclosed. An example apparatus includes an atlas generator to generate atlas data for one or more atlases generated from input views of video; a hash generator to: perform a hash operation on the atlas data to generate a hash value; and include the hash value in a message; and a multiplexer to combine the one or more atlases, coded atlas data corresponding to the atlas data, and the message to generate a video bitstream.
Description: RELATED APPLICATION
[0001] This application is a divisional of India Patent Application No. 202247050259, filed on 02 September 2022, entitled “METHODS AND APPARATUS TO IDENTIFY A VIDEO DECODING ERROR”.
[0002] This patent claims the benefit of U.S. Provisional Application No. 63/004,741, which was filed on April 3, 2020. U.S. Provisional Application No. 63/004,741 is hereby incorporated herein by reference in its entirety. Priority to U.S. Provisional Application No. 63/004,741 is hereby claimed.
FIELD OF THE DISCLOSURE
[0003] This disclosure relates generally to video processing, and, more particularly, to methods and apparatus to identify a video decoding error.
BACKGROUND
[0004] In video compression / decompression (codec) systems, compression efficiency and video quality are important performance criteria. For example, visual quality is an important aspect of the user experience in many video applications. Compression efficiency impacts the amount of memory needed to store video files and/or the amount of bandwidth needed to transmit and/or stream video content. A video encoder typically compresses video information so that more information can be sent over a given bandwidth or stored in a given memory space or the like. The compressed signal or data is then decoded by a decoder that decodes or decompresses the signal or data for display to a user. In most examples, higher visual quality with greater compression is desirable.
[0005] Currently, standards are being developed for immersive video coding and point cloud coding including the Video-based Point Cloud Compression (V-PCC) and MPEG Immersive Video Coding (MIV). Such standards seek to establish and improve compression efficiency and reconstruction quality in the context of immersive video and point cloud coding.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 is an example environment for encoding and/or decoding video in conjunction with examples disclosed herein.
[0007] FIG. 2 is a flowchart representative of machine readable instructions which may be executed to implement an example encoding system of FIG. 1.
[0008] FIG. 3 is a flowchart representative of machine readable instructions which may be executed to implement an example decoding system of FIG. 1.
[0009] FIG. 4 is a block diagram of an example processing platform structured to execute the instructions of FIG. 2 to implement the encoding system of FIG. 1.
[0010] FIG. 5 is a block diagram of an example processing platform structured to execute the instructions of FIG. 3 to implement the decoding system of FIG. 1.
[0011] The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. As used in this patent, stating that any part (e.g., a layer, film, area, region, or plate) is in any way on (e.g., positioned on, located on, disposed on, or formed on, etc.) another part, indicates that the referenced part is either in contact with the other part, or that the referenced part is above the other part with one or more intermediate part(s) located therebetween. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other. Stating that any part is in “contact” with another part means that there is no intermediate part between the two parts. Although the figures show layers and regions with clean lines and boundaries, some or all of these lines and/or boundaries may be idealized. In reality, the boundaries and/or lines may be unobservable, blended, and/or irregular.
[0012] Descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
DETAILED DESCRIPTION
[0013] In the context of immersive video coding and point cloud coding, video standards such as the Visual Volumetric Video-based Coding (V3C) and Video-based Point Cloud Compression (V-PCC) and the MPEG Immersive Video Coding (MIV) may be utilized. For such standards, there may be requirements to confirm that a decoder is conforming to such standards. For example, there may be one or more requirements that the decoder is obtaining unaltered and/or uncorrupted bitstreams, and/or that the decoder is correctly decoding obtained bitstreams.
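One way such a conformance requirement can be checked is to carry a hash of the coded data alongside the bitstream and recompute it at the decoder. The following is a minimal sketch, not the standard's normative procedure: it assumes an MD5 checksum (decoded-hash SEI messages in related video standards support several checksum types) and treats the atlas data as an opaque byte string.

```python
import hashlib

def hash_atlas_data(atlas_data: bytes) -> bytes:
    # Hypothetical hash operation over serialized atlas data.
    return hashlib.md5(atlas_data).digest()

def verify_decoded_atlas(decoded_atlas_data: bytes, sei_hash: bytes) -> bool:
    # Recompute the hash over the decoder's reconstructed atlas data
    # and compare it with the hash carried in the SEI message; a
    # mismatch indicates an altered/corrupted bitstream or a decoding
    # error.
    return hash_atlas_data(decoded_atlas_data) == sei_hash
```

A decoder applying this check can flag a decoding error without access to the original input views, since only the transmitted hash value is needed for the comparison.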
Claims: 1. An encoder comprising:
instructions; and
at least one programmable circuit to be programmed based on the instructions to:
generate tile data that maps a tile to one or more patches represented in the tile, the tile data based on at least one syntax element associated with at least one of the one or more patches;
hash the tile data to generate a hash value;
include the hash value in a supplemental enhancement information (SEI) message; and
generate a bitstream based on the tile and the SEI message.
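The encoder-side steps recited in claim 1 can be sketched as follows. This is an illustrative toy model, not an implementation of the V3C/MIV syntax: the tile-data serialization, the `payload_type` label, and the dict-based "bitstream" container are all hypothetical stand-ins, and MD5 is assumed as the hash operation.

```python
import hashlib
import json

def generate_tile_data(tile_id: int, patch_ids: list[int]) -> bytes:
    # Hypothetical serialization of the tile-to-patch mapping; the
    # actual syntax elements are defined by the relevant standard.
    record = {"tile_id": tile_id, "patches": sorted(patch_ids)}
    return json.dumps(record, sort_keys=True).encode()

def make_sei_message(hash_value: bytes) -> dict:
    # Simplified stand-in for an SEI message carrying the hash.
    return {"payload_type": "tile_hash", "hash": hash_value.hex()}

def encode_tile(tile_id: int, patch_ids: list[int], coded_tile: bytes) -> dict:
    # Steps mirroring claim 1: generate tile data, hash it, include
    # the hash in an SEI message, and combine tile + SEI.
    tile_data = generate_tile_data(tile_id, patch_ids)
    hash_value = hashlib.md5(tile_data).digest()
    sei = make_sei_message(hash_value)
    # A real encoder multiplexes coded tiles and SEI units into a
    # binary bitstream; a dict is used here for illustration only.
    return {"tile": coded_tile, "sei": sei}
```

A conforming decoder could then regenerate the tile data from the decoded tile, hash it the same way, and compare against the hash recovered from the SEI message.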