
Methods And Devices For Encoding And Decoding A Multi View Video Sequence Representative Of An Omnidirectional Video

Abstract: The invention relates to a method and device for decoding an encoded data signal representing a multi-view video sequence representative of an omnidirectional video, the multi-view video sequence comprising at least a first view and a second view. Parameters allowing a homographic matrix to be obtained (61), representing the transformation of a plane of the second view into a plane of the first view, are read (60) from the signal. An image of the second view comprises a so-called active zone comprising pixels which, when they are projected via the homographic matrix onto an image of the first view, fall within the image of the first view. An image of the second view is decoded (62) by generating (620) a reference image comprising pixel values determined from previously reconstructed pixels of an image of the first view and from the homographic matrix; for at least one block of the image of the second view, the generated reference image is included in the list of reference images when the block belongs (622) to the active zone. The block is reconstructed (625) from a reference image indicated by an index read (621) from the data signal.


Patent Information

Application #
Filing Date
28 May 2020
Publication Number
35/2020
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
archana@anandandanand.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-06-07
Renewal Date

Applicants

ORANGE
78 rue Olivier de Serres 75015 PARIS

Inventors

1. JUNG, Joël
ORANGE GARDENS - TGI/OLR/IPL/PATENTS - 44 avenue de la République - CS 50010 92326 CHÂTILLON CEDEX
2. RAY, Bappaditya
ORANGE GARDENS - TGI/OLR/IPL/PATENTS - 44 avenue de la République - CS 50010 92326 CHÂTILLON CEDEX

Specification

During a step 60, parameters allowing a homographic matrix H_{k,k-1} to be obtained, representative of the transformation from a plane of the view k to be decoded to a plane of the adjacent view k-1, are read from the signal. According to one variant, the 9 parameters of the 3x3 homographic matrix H_{k,k-1} are read from the signal. According to another variant, the intrinsic and extrinsic parameters of the cameras of the view k-1 and of the view k are read from the signal, i.e. the focal lengths of the cameras and the separation angle θ_sep between the two cameras.

During a step 61, the homographic matrix H_{k,k-1} is obtained. When the parameters of the matrix are read from the signal, the homographic matrix H_{k,k-1} is obtained directly from these parameters. When the parameters read correspond to the camera parameters, the homographic matrix H_{k,k-1} is calculated using equation (3) given above.

Following step 61, the current view k is decoded image by image from the data contained in the data signal. During a step 62, a current image I_t^k at a time instant t of the view k is decoded. For this purpose, during a step 620, a new reference image I_ref is created. The new reference image I_ref is created from the pixels of an image Î_t^{k-1} at the same time instant t of the adjacent view k-1, which has been previously reconstructed. The same mechanism as that described in relation to step 420 of Figure 4 is implemented to create the reference image I_ref.

The current image I_t^k of the view k is then decoded. For this purpose, the image is cut into blocks of pixels and the blocks of pixels of the image are scanned to be decoded and reconstructed. For each block B_k of the current image I_t^k, the following steps are implemented.

During a step 621, the encoded data of the block B_k are read from the signal. In particular, when the block B_k is encoded by prediction relative to a reference image comprised in a list of reference images (inter-image prediction), a reference image index is read.
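Equation (3) itself is not reproduced in this excerpt, so the sketch below is an assumption, not the patented formula: it derives a homography for two cameras that share an optical centre and differ by a rotation of θ_sep about the vertical axis (a divergent rig), H_{k,k-1} = K_{k-1} · R(θ_sep) · K_k⁻¹, with focal-length-only intrinsics. All function names are hypothetical.

```python
import numpy as np

def intrinsics(focal, cx=0.0, cy=0.0):
    """Simple pinhole intrinsic matrix (hypothetical camera model)."""
    return np.array([[focal, 0.0, cx],
                     [0.0, focal, cy],
                     [0.0, 0.0, 1.0]])

def homography_between_views(f_prev, f_cur, theta_sep):
    """Sketch of step 61: derive H_{k,k-1} from camera parameters,
    assuming a pure rotation of theta_sep about the vertical axis
    between the two cameras."""
    c, s = np.cos(theta_sep), np.sin(theta_sep)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return intrinsics(f_prev) @ rot @ np.linalg.inv(intrinsics(f_cur))
```

With θ_sep = 0 and equal focal lengths the matrix reduces to the identity, i.e. the two image planes coincide.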
Conventionally, for an image encoded by inter-image prediction, the list of reference images comprises at least one image previously reconstructed from the same view as the current image to be reconstructed. Other information may also be read from the signal for the current block B_k, such as an encoding mode, a movement vector or disparity information, and prediction residual coefficients. Conventionally, the data read for the block are decoded by an entropy decoder. A residue block is obtained by applying to the decoded coefficients a quantisation opposite to that implemented in encoding and, to the de-quantised decoded coefficients, a transform opposite to that implemented in encoding.
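The residue-block reconstruction just described, inverse quantisation followed by an inverse transform, can be sketched as follows. A uniform quantisation step and an orthonormal 2-D DCT stand in for whatever the actual codec uses, so both choices are assumptions for illustration.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (stand-in transform)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    scale = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    return scale * np.cos(np.pi * (2 * i + 1) * k / (2 * n))

def residue_block(decoded_coeffs, qstep):
    """Sketch: de-quantise the entropy-decoded coefficients, then
    apply the inverse transform to obtain the residue block."""
    dequantised = decoded_coeffs * qstep   # inverse quantisation
    c = dct_matrix(decoded_coeffs.shape[0])
    return c.T @ dequantised @ c           # inverse 2-D DCT
```

Because the stand-in basis is orthonormal, forward-transforming a block and feeding the coefficients back through `residue_block` recovers the block exactly when the quantisation step is 1.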
During a step 622, it is determined whether the block B_k is located in the active area of the current image. In other words, it is determined whether the block B_k comprises active pixels. According to the particular embodiment of the invention described here, the block B_k belongs to the active area if all the pixels of the block B_k are active, i.e. if all the pixels of the block B_k are in the active area.

If the block B_k belongs to the active area, during a step 623, the new reference image I_ref is added to the list of reference images. Otherwise, i.e. if the block B_k does not belong to the active area, the list of reference images for decoding the block B_k is unchanged and only comprises previously reconstructed images of the current view k to be decoded.

During a step 624, the prediction of the block B_k is then calculated conventionally. According to the particular embodiment of the invention described here, the conventional operation of the decoders for predicting a current block is advantageously not modified. When the block B_k is located in the active area, the new reference image has been added to the list of reference images. Thus, the prediction block for the current block B_k is constructed by movement or disparity compensation, from the movement or disparity information determined for the current block and from the reference image indicated by the reference index read from the signal.

During a step 625, the current block B_k is reconstructed. For this purpose, the prediction block constructed during step 624 is added to the residue block obtained during step 621.

During a step 626, it is checked whether all the blocks of the current image have been decoded. If there are still blocks to be decoded, the method goes to the next block of the image to be decoded and returns to step 621. Otherwise, the decoding of the current image ends.
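Steps 622 and 623 amount to a per-block choice of reference list. A minimal sketch, assuming the active area is available as a boolean mask and using the strict all-pixels membership rule of this embodiment (all names hypothetical):

```python
import numpy as np

def block_in_active_area(active_mask, x, y, size):
    """Step 622 (this embodiment): true only if every pixel of the
    block lies in the active area."""
    return bool(active_mask[y:y + size, x:x + size].all())

def reference_list_for_block(view_refs, i_ref, in_active_area):
    """Step 623: append the generated reference image I_ref only for
    blocks of the active area; otherwise the list is unchanged and
    holds only pictures of the current view."""
    refs = list(view_refs)
    if in_active_area:
        refs.append(i_ref)
    return refs
```

Keeping the list construction outside the prediction itself is what lets steps 624 and 625 run unmodified: the decoder's motion/disparity compensation only ever sees an ordinary reference list.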
The reconstructed current image is stored to serve as a reference image for decoding subsequent images or subsequent views.

In the particular embodiment of the invention described above, it is determined that the block B_k to be encoded or decoded belongs to the active area of the current image if all the pixels of the block B_k are active, i.e. if all the pixels of the block B_k are in the active area. In another particular embodiment of the invention, it is determined that the block B_k belongs to the active area if at least one pixel of the block to be encoded or decoded is an active pixel.

According to this other particular embodiment of the invention, the encoding and decoding methods are similar when all the pixels of the block to be encoded or decoded are active. The same applies when all the pixels of the block to be encoded or decoded are non-active. According to this other embodiment, for a block to be encoded or decoded comprising at least one active pixel and at least one non-active pixel, the prediction of such a block is adapted.
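The two membership rules differ only in the quantifier, as the contrast below makes concrete for a block straddling the border (helper names are hypothetical):

```python
import numpy as np

def active_all_pixels(mask, x, y, size):
    # first embodiment: the block is active iff all of its pixels are
    return bool(mask[y:y + size, x:x + size].all())

def active_any_pixel(mask, x, y, size):
    # second embodiment: one active pixel is enough
    return bool(mask[y:y + size, x:x + size].any())
```

A border block is active under the second rule but not the first, which is precisely the case where the second embodiment must adapt the prediction.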
Figure 7 illustrates an example of a block to be encoded or decoded that is crossed by the border 70 between an active area 71 and a non-active area 72 of the image to be encoded or decoded. For this type of block, when the prediction block determined in steps 424 and 624 of Figures 4 and 6 is constructed using the new reference image created in steps 420 and 620, the prediction block comprises, in the active area 71 of the block, pixels obtained by movement compensation relative to the new reference image and, in the non-active area 72 of the block, pixels obtained by movement compensation relative to a previously reconstructed image of the current view comprised in the list of reference images. Thus, for the blocks crossed by the border between the active area and the non-active area:
- a first reference index is encoded in the signal or decoded from the signal, the first reference index corresponding to the index of the reference image used to encode the active area of the block, and
- a second reference index, corresponding to the index of the previously reconstructed reference image of the current view used to encode the non-active area of the block, is encoded in the signal or decoded from the signal.

An example of such a signal is illustrated in Figure 10B. The data signal of Figure 10B comprises parameters PAR allowing the homographic matrix representative of the transformation from a plane of the current view to a plane of a neighbouring view to be obtained.
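For a border block, the final prediction is thus assembled from two motion-compensated predictions, one per reference index. A sketch, assuming the two compensated predictions have already been computed and the active area is given as a boolean mask over the block (all names hypothetical):

```python
import numpy as np

def predict_border_block(pred_idx1, pred_idx2, block_active_mask):
    """Active pixels come from the reference picked by the first index
    (the generated reference image); non-active pixels come from the
    previously reconstructed picture of the current view (second index)."""
    return np.where(block_active_mask, pred_idx1, pred_idx2)
```

Under the variant described further on, the second prediction (and its index) is only needed when the first index actually designates the generated reference image.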
For each image of the current view, encoded data DAT comprise, for at least one block crossed by the border between the active area and the non-active area of the image, two indexes idx1 and idx2 indicating the reference images, from a list of reference images, to be used to reconstruct the block.

Alternatively, the second index idx2 is encoded in the signal for the block crossed by the border between the active area and the non-active area of the image only if the first index idx1 indicates that the reference image to be used for the active area of the block corresponds to the new reference image created in steps 420 or 620. According to this variant, it is not necessary to encode a second index when the reference image used to predict the block is an image previously reconstructed from the current view.

Figure 8 shows the simplified structure of an encoding device COD adapted to implement the encoding method according to any one of the particular embodiments of the invention described above. Such an encoding device comprises a memory MEM and a processing unit UT, equipped for example with a processor PROC and controlled by the computer program PG stored in memory MEM. The computer program PG comprises instructions for implementing the steps
of the encoding method as previously described, when the program is executed by the processor PROC. On initialisation, the code instructions of the computer program PG are for example loaded into a memory of the processing unit (not shown) before being executed by the processor PROC. The processor PROC of the processing unit UT implements in particular the steps of the encoding method described in relation to Figures 4 and 7, according to the instructions of the computer program PG.

According to a particular embodiment of the invention, the encoding device comprises a communication interface COM allowing in particular the encoding device to transmit an encoded data signal representative of an omnidirectional video via a communication network. According to a particular embodiment of the invention, the encoding device described above is comprised in a terminal.

Figure 9 shows the simplified structure of a decoding device DEC adapted to implement the decoding method according to any one of the particular embodiments of the invention described above. Such a decoding device comprises a memory MEM0 and a processing unit UT0, equipped for example with a processor PROC0 and controlled by the computer program PG0 stored in memory MEM0. The computer program PG0 comprises instructions for implementing the steps of the decoding method as described above, when the program is executed by the processor PROC0.

According to a particular embodiment of the invention, the decoding device DEC comprises a communication interface COM0 allowing in particular the decoding device to receive an encoded data signal representative of an omnidirectional video via a communication network. On initialisation, the code instructions of the computer program PG0 are for example loaded into a memory of the processing unit (not shown) before being executed by the processor PROC0.
The processor PROC0 of the processing unit UT0 implements in particular the steps of the decoding method described in relation to Figures 6 and 7, according to the instructions of the computer program PG0. According to a particular embodiment of the invention, the decoding device described above is comprised in a terminal.
We Claim:

1. A method for decoding an encoded data signal representative of a multi-view video sequence representative of an omnidirectional video, the multi-view video sequence comprising at least one first view and one second view, the decoding method comprising the following steps:
- reading (60), from the data signal, parameters allowing a homographic matrix representative of the transformation from a plane of the second view to a plane of the first view to be obtained (61),
- decoding (62) an image of the second view, the image of the second view comprising an area called active area comprising pixels which, when said pixels are projected via the homographic matrix onto an image of the first view, are comprised in the image of the first view, the decoding of the image of the second view comprising:
- generating (620) a reference image comprising pixel values determined from previously reconstructed pixels of an image of the first view and from the homographic matrix, and
- for at least one block of the image of the second view:
- reading (621), from the data signal, an index representative of a reference image comprised in a list of reference images comprising at least one previously reconstructed image of the second view,
- determining (622) whether the block belongs to the active area or not,
- reconstructing (625) said block from said reference image indicated by the index read, the generated reference image being comprised in said list of reference images when said block belongs to the active area, and the generated reference image not being comprised in said list of reference images when said block does not belong to the active area.

2. The decoding method according to claim 1, wherein said parameters are camera parameters associated respectively with a first camera associated with the first view and with a second camera associated with the second view, the method further comprising the calculation of said homographic matrix from said camera parameters.

3. The decoding method according to claim 1, wherein said parameters are the coefficients of the homographic matrix.

4. The decoding method according to any one of claims 1 to 3, wherein, when the border of the active area crosses the block to be reconstructed, the decoding method further comprises:
- reading, from the data signal, another index representative of a reference image comprised in the group of reference images, said group of reference images not comprising
the generated reference image, the pixels of the block to be reconstructed which do not belong to the active area being reconstructed from pixels of the reference image indicated by the other index read.

5. The decoding method according to any one of claims 1 to 4, further comprising:
- reading, from the data signal, parameters allowing another homographic matrix representative of the transformation from a plane of the second view to a plane of a third view to be obtained, at least one pixel of the image of the second view projected into an image of the third view via the other homographic matrix being comprised in the image of the third view,
- the generated reference image further comprising pixel values determined from previously reconstructed pixels of the image of the third view and the other homographic matrix.

6. A method for encoding, in a data signal, a multi-view video sequence representative of an omnidirectional video, the multi-view video sequence comprising at least one first view and one second view, the encoding method comprising the following steps:
- calculating (40) a homographic matrix representative of the transformation from a plane of the second view to a plane of the first view,
- encoding (41), in the data signal, parameters allowing said homographic matrix to be obtained upon decoding,
- encoding (42) an image of the second view, the image of the second view comprising an area called active area comprising pixels which, when said pixels are projected via the homographic matrix onto an image of the first view, are comprised in the image of the first view, the encoding of said image comprising:
- generating (420) a reference image comprising pixel values determined from previously reconstructed pixels of an image of the first view and from the homographic matrix, and
- for at least one block of the image of the second view:
- determining (421) whether the block belongs to the active area or not,
- predicting (424) said block from a reference image comprised in a list of reference images comprising at least one previously reconstructed image of the second view, the generated reference image being comprised in said list of reference images when said block belongs to the active area, and the generated reference image not being comprised in said list of reference images when said block does not belong to the active area,
- encoding (424), in the data signal, an index representative of the reference image used to predict said block.
7. The encoding method according to claim 6, wherein said parameters are camera parameters associated respectively with a first camera associated with the first view and with a second camera associated with the second view.

8. The encoding method according to claim 6, wherein said parameters are the parameters of the homographic matrix.

9. The encoding method according to any one of claims 6 to 8, wherein, when the border of the active area crosses the block to be encoded, the encoding method further comprises:
- encoding, in the data signal, another index representative of a reference image comprised in the group of reference images, said group of reference images not comprising the generated reference image, the pixels of the block to be encoded which do not belong to the active area being predicted from pixels of the reference image indicated by the other index.

10. The encoding method according to any one of claims 6 to 9, further comprising:
- calculating another homographic matrix representative of the transformation from a plane of the second view to a plane of a third view, at least one pixel of the image of the second view projected into an image of the third view via the other homographic matrix being comprised in the image of the third view,
- encoding, in the data signal, parameters allowing said other homographic matrix to be obtained,
- the generated reference image further comprising pixel values determined from previously reconstructed pixels of the image of the third view and the other homographic matrix.

11. A device for decoding an encoded data signal representative of a multi-view video sequence representative of an omnidirectional video, the multi-view video sequence comprising at least one first view and one second view, the decoding device comprising:
- means for reading, from the data signal, parameters allowing a homographic matrix representative of the transformation from a plane of the second view to a plane of the first view to be obtained,
- means for decoding an image of the second view, the image of the second view comprising an area called active area comprising pixels which, when said pixels are projected via the homographic matrix onto an image of the first view, are comprised in the image of the first view, said means for decoding the image of the second view comprising:
- means for generating a reference image comprising pixel values determined from previously reconstructed pixels of an image of the first view and from the homographic matrix, and
- for at least one block of the image of the second view:
- means for reading, from the data signal, an index representative of a reference image comprised in a list of reference images comprising at least one previously reconstructed image of the second view,
- means for determining whether the block belongs to the active area or not,
- means for reconstructing said block from said reference image indicated by the index read, the generated reference image being comprised in said list of reference images when said block belongs to the active area, and the generated reference image not being comprised in said list of reference images when said block does not belong to the active area.

12. A device for encoding, in a data signal, a multi-view video sequence representative of an omnidirectional video, the multi-view video sequence comprising at least one first view and one second view, the encoding device comprising:
- means for calculating a homographic matrix representative of the transformation from a plane of the second view to a plane of the first view,
- means for encoding, in the data signal, parameters allowing said homographic matrix to be obtained,
- means for encoding an image of the second view, the image of the second view comprising an area called active area comprising pixels which, when said pixels are projected via the homographic matrix onto an image of the first view, are comprised in the image of the first view, said means for encoding said image comprising:
- means for generating a reference image comprising pixel values determined from previously reconstructed pixels of an image of the first view and from the homographic matrix, and
- for at least one block of the image of the second view:
- means for determining whether the block belongs to the active area or not,
- means for predicting said block from a reference image comprised in a list of reference images comprising at least one previously reconstructed image of the second view, the generated reference image being comprised in said list of reference images when said block belongs to the active area, and the generated reference image not being comprised in said list of reference images when said block does not belong to the active area,
- means for encoding, in the data signal, an index representative of the reference image used to predict said block.

13. A computer program including instructions for implementing the decoding method according to any one of claims 1 to 5 and/or instructions for implementing the encoding method according to any one of claims 6 to 10, when said program is executed by a processor.

Documents

Application Documents

# Name Date
1 202017022397-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [28-05-2020(online)].pdf 2020-05-28
2 202017022397-STATEMENT OF UNDERTAKING (FORM 3) [28-05-2020(online)].pdf 2020-05-28
3 202017022397-PRIORITY DOCUMENTS [28-05-2020(online)].pdf 2020-05-28
4 202017022397-NOTIFICATION OF INT. APPLN. NO. & FILING DATE (PCT-RO-105) [28-05-2020(online)].pdf 2020-05-28
5 202017022397-FORM 1 [28-05-2020(online)].pdf 2020-05-28
6 202017022397-DRAWINGS [28-05-2020(online)].pdf 2020-05-28
7 202017022397-DECLARATION OF INVENTORSHIP (FORM 5) [28-05-2020(online)].pdf 2020-05-28
8 202017022397-COMPLETE SPECIFICATION [28-05-2020(online)].pdf 2020-05-28
9 202017022397-MARKED COPIES OF AMENDEMENTS [30-05-2020(online)].pdf 2020-05-30
10 202017022397-FORM 13 [30-05-2020(online)].pdf 2020-05-30
11 202017022397-Annexure [30-05-2020(online)].pdf 2020-05-30
12 202017022397-AMMENDED DOCUMENTS [30-05-2020(online)].pdf 2020-05-30
13 202017022397-Information under section 8(2) [02-07-2020(online)].pdf 2020-07-02
14 202017022397-Proof of Right [19-08-2020(online)].pdf 2020-08-19
15 202017022397-FORM-26 [19-08-2020(online)].pdf 2020-08-19
16 202017022397-FORM 3 [19-08-2020(online)].pdf 2020-08-19
17 202017022397-FORM 3 [16-06-2021(online)].pdf 2021-06-16
18 202017022397-FORM 18 [26-08-2021(online)].pdf 2021-08-26
19 202017022397.pdf 2021-10-19
20 202017022397-FORM 3 [15-12-2021(online)].pdf 2021-12-15
21 202017022397-FER.pdf 2022-04-27
22 202017022397-Verified English translation [27-10-2022(online)].pdf 2022-10-27
23 202017022397-OTHERS [27-10-2022(online)].pdf 2022-10-27
24 202017022397-Information under section 8(2) [27-10-2022(online)].pdf 2022-10-27
25 202017022397-FORM 3 [27-10-2022(online)].pdf 2022-10-27
26 202017022397-FER_SER_REPLY [27-10-2022(online)].pdf 2022-10-27
27 202017022397-COMPLETE SPECIFICATION [27-10-2022(online)].pdf 2022-10-27
28 202017022397-CLAIMS [27-10-2022(online)].pdf 2022-10-27
29 202017022397-FORM 3 [18-09-2023(online)].pdf 2023-09-18
30 202017022397-PatentCertificate07-06-2024.pdf 2024-06-07
31 202017022397-IntimationOfGrant07-06-2024.pdf 2024-06-07
32 202017022397-Information under section 8(2) [07-06-2024(online)].pdf 2024-06-07
33 202017022397-FORM 3 [07-06-2024(online)].pdf 2024-06-07

Search Strategy

1 yea2009E_26-04-2022.pdf
2 SearchHistoryE_26-04-2022.pdf

ERegister / Renewals

3rd: 28 Aug 2024

From 26/11/2020 - To 26/11/2021

4th: 28 Aug 2024

From 26/11/2021 - To 26/11/2022

5th: 28 Aug 2024

From 26/11/2022 - To 26/11/2023

6th: 28 Aug 2024

From 26/11/2023 - To 26/11/2024

7th: 21 Nov 2024

From 26/11/2024 - To 26/11/2025

8th: 26 Oct 2025

From 26/11/2025 - To 26/11/2026