
Deep Novel View Synthesis From Unstructured Input

Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
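The encode/warp/blend pipeline the abstract describes can be sketched, purely for illustration, in a few lines of numpy. Every function name, the single-convolution "encoder", the integer-shift "warp" (standing in for reprojection through the proxy 3D geometry), and the running recurrent update (standing in for the recurrent convolutional blending network) are assumptions, not the patented implementation.

```python
import numpy as np

# Illustrative sketch of the claimed pipeline; all names, shapes, and the
# warp model are hypothetical stand-ins, not the patented method.

def encode(images, kernel):
    """'First neural network': one 3x3 convolution per image, standing in
    for a deep convolutional encoder that produces a feature map per view."""
    feats = []
    for img in images:
        h, w = img.shape
        padded = np.pad(img, 1, mode="edge")
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
        feats.append(out)
    return feats

def warp(feats, shifts):
    """Warp each feature map toward the virtual viewpoint. A real system
    would reproject via the proxy 3D geometry; here each view just gets a
    per-view integer pixel shift."""
    return [np.roll(f, s, axis=(0, 1)) for f, s in zip(feats, shifts)]

def blend(warped, alpha=0.5):
    """'Second neural network': a recurrent running update over the warped
    feature maps, standing in for a recurrent convolutional blending net."""
    state = warped[0]
    for f in warped[1:]:
        state = (1.0 - alpha) * state + alpha * f  # recurrent state update
    return state

# Three constant 4x4 "views", an identity kernel, and zero shifts keep the
# toy example easy to check by hand.
images = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
feats = encode(images, identity)
novel = blend(warp(feats, shifts=[(0, 0), (0, 0), (0, 0)]))
print(novel[0, 0])  # 0.5*(0.5*1 + 0.5*2) + 0.5*3 = 2.25
```

With the identity kernel and zero shifts, the output is just the recurrent blend of the constant views, which makes the recurrence easy to verify by hand.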


Patent Information

Application #
Filing Date
15 December 2020
Publication Number
05/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

INTEL CORPORATION
2200 Mission College Boulevard, Santa Clara, California 95054, USA

Inventors

1. Gernot Riegler
Seebauerstr. 43, Munich BA 81735 Germany
2. Vladlen Koltun
3600 Juliette Lane Santa Clara, CA 95054 USA

Specification

Claims:

1. A computing system comprising:
a network controller;
a processor coupled to the network controller, wherein the processor includes one or more substrates and logic coupled to the one or more substrates, and wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to:
encode, via a first neural network, a plurality of input images with feature maps,
warp the feature maps of the encoded plurality of input images based on a virtual viewpoint and a proxy three-dimensional (3D) geometry, and
blend, via a second neural network, the warped feature maps into a single image.
Description:

TECHNICAL FIELD
[0003] Embodiments generally relate to view synthesis. More particularly, embodiments relate to deep novel view synthesis from unstructured input.

BACKGROUND
[0004] Previously, a variety of methods have been proposed to tackle the problem of novel view synthesis from a set of input images. The proposed methods may be categorized by the restrictions on the image viewpoints and the possible deviations from the input viewpoints.

BRIEF DESCRIPTION OF THE DRAWINGS
[0005] The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
[0006] FIG. 1 is an illustration of an example of offline and online process sequences according to an embodiment;
[0007] FIG. 2 is a block diagram of an example of a recurrent mapping and blending network according to an embodiment;
[0008] FIGS. 3-5 are comparative illustrations of examples of traditional results and enhanced results according to embodiments;
[0009] FIGS. 6A and 6B are flowcharts of examples of methods of operating performance-enhanced computing systems according to embodiments;
[0010] FIG. 7 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;
[0001] FIG. 8 is an illustration of an example of a semiconductor package apparatus according to an embodiment;
[0002] FIG. 9 is a block diagram of an example of a processor according to an embodiment; and
[0011] FIG. 10 is a block diagram of an example of a multi-processor based computing system according to an embodiment.

DESCRIPTION OF EMBODIMENTS
[0012] Previous approaches to conducting novel view synthesis may have involved light field, three-dimensional (3D) geometry based and/or mapping based methods. Light field methods do not require information about the scene geometry, but they assume a dense camera grid or restrict the target view to a linear interpolation of the input viewpoints. Light field methods therefore suffer from a restricted input set-up and/or a restricted deviation from the input viewpoints. For example, a typical light field set-up is a number of images arranged on a 2D plane.
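The light field restriction described above can be illustrated with a toy example: on a 2D camera grid, a novel view is limited to a bilinear blend of the four surrounding input images, weighted by the target position inside the grid cell. The function name and the (u, v) parameterization are assumptions made for this sketch.

```python
import numpy as np

# Hypothetical illustration of the light-field restriction: the virtual
# camera can only interpolate between input viewpoints on the 2D grid.

def lightfield_view(corners, u, v):
    """corners: images captured at grid positions (0,0), (1,0), (0,1), (1,1);
    u, v in [0, 1] locate the virtual camera inside that grid cell."""
    c00, c10, c01, c11 = corners
    return ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
            + (1 - u) * v * c01 + u * v * c11)

# Four constant 2x2 "images" make the bilinear weights easy to check.
corners = [np.full((2, 2), x) for x in (0.0, 1.0, 2.0, 3.0)]
view = lightfield_view(corners, u=0.5, v=0.5)
print(view[0, 0])  # center of the cell: 0.25*(0 + 1 + 2 + 3) = 1.5
```

A view outside the grid cell (u or v outside [0, 1]) is exactly the "deviation from the input viewpoints" that such methods cannot produce, which is the limitation the embodiments aim to remove.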

Documents

Application Documents

# Name Date
1 202044054430-FORM 1 [15-12-2020(online)].pdf 2020-12-15
2 202044054430-DRAWINGS [15-12-2020(online)].pdf 2020-12-15
3 202044054430-DECLARATION OF INVENTORSHIP (FORM 5) [15-12-2020(online)].pdf 2020-12-15
4 202044054430-COMPLETE SPECIFICATION [15-12-2020(online)].pdf 2020-12-15
5 202044054430-FORM-26 [15-03-2021(online)].pdf 2021-03-15
6 202044054430-FORM 3 [14-06-2021(online)].pdf 2021-06-14
7 202044054430-FORM 3 [14-12-2021(online)].pdf 2021-12-14
8 202044054430-FORM 18 [22-07-2024(online)].pdf 2024-07-22
9 202044054430-FER.pdf 2025-08-08
10 202044054430-FORM 3 [17-09-2025(online)].pdf 2025-09-17

Search Strategy

1 202044054430_SearchStrategyNew_E_202044054430SearchStrategyMatrixE_27-03-2025.pdf