arXiv Open Access 2021

Light Field Neural Rendering

Mohammed Suhail, Carlos Esteves, Leonid Sigal, Ameesh Makadia

Abstract

Classical light field rendering for novel view synthesis can accurately reproduce view-dependent effects such as reflection, refraction, and translucency, but requires a dense view sampling of the scene. Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects. We introduce a model that combines the strengths and mitigates the limitations of these two directions. By operating on a four-dimensional representation of the light field, our model learns to represent view-dependent effects accurately. By enforcing geometric constraints during training and inference, the scene geometry is implicitly learned from a sparse set of views. Concretely, we introduce a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features along reference views to produce the color of a target ray. Our model outperforms the state-of-the-art on multiple forward-facing and 360° datasets, with larger margins on scenes with severe view-dependent variations.
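The two-stage aggregation described in the abstract can be sketched in a minimal form. This is a hypothetical illustration, not the paper's implementation: single-head dot-product attention stands in for the transformer stages, the feature dimension and sample counts are arbitrary, and `render_ray`, `attention_pool`, and the linear color head are invented names with placeholder weights. Stage one attends over point features sampled along each reference view's epipolar line; stage two attends over the resulting per-view features to produce a color for the target ray.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(query, feats):
    """Single-head attention: pool feature rows of `feats` with `query`.

    query: (d,), feats: (n, d) -> returns a pooled (d,) feature."""
    scores = feats @ query / np.sqrt(len(query))  # (n,) similarity scores
    w = softmax(scores)                           # attention weights
    return w @ feats                              # weighted sum, (d,)

def render_ray(target_ray_feat, epipolar_feats):
    """Hypothetical two-stage aggregation for one target ray.

    Stage 1: per reference view, attend over the features of points
             sampled along that view's epipolar line for the target ray.
    Stage 2: attend over the per-view features from stage 1.
    A dummy linear head then decodes the aggregate into an RGB color."""
    # epipolar_feats: list of (n_points, d) arrays, one per reference view
    per_view = np.stack([attention_pool(target_ray_feat, f)
                         for f in epipolar_feats])      # (n_views, d)
    agg = attention_pool(target_ray_feat, per_view)     # (d,)
    W = np.ones((3, len(agg))) / len(agg)               # placeholder weights
    return 1.0 / (1.0 + np.exp(-(W @ agg)))             # sigmoid -> (3,) RGB

# Toy usage: 4 reference views, 16 epipolar samples each, 8-dim features.
rng = np.random.default_rng(0)
d = 8
ray = rng.normal(size=d)
views = [rng.normal(size=(16, d)) for _ in range(4)]
rgb = render_ray(ray, views)
```

In the actual model both stages are learned transformers, so the attention weights implicitly encode scene geometry (which epipolar samples lie on the surface) and view-dependent appearance (which reference views to trust); the sketch above only mirrors the data flow.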


Authors (4)

Mohammed Suhail
Carlos Esteves
Leonid Sigal
Ameesh Makadia

Citation Format

Suhail, M., Esteves, C., Sigal, L., & Makadia, A. (2021). Light Field Neural Rendering. https://arxiv.org/abs/2112.09687

Journal Information
Publication Year: 2021
Language: en
Source Database: arXiv
Access: Open Access ✓