3D Photography Using Context-Aware Layered Depth Inpainting
Abstract
We propose a method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as the underlying representation, and present a learning-based inpainting model that iteratively synthesizes new local color-and-depth content into the occluded region in a spatially context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts compared with state-of-the-art methods.
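The Layered Depth Image (LDI) mentioned in the abstract stores multiple color-and-depth samples per pixel location, so hallucinated content can sit behind the visible surface and be revealed under motion parallax. The sketch below is a minimal, illustrative LDI container in Python; the class names, layout, and `front` helper are assumptions for exposition, not the authors' implementation, and the explicit per-sample connectivity of the paper is reduced here to a plain neighbor dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class LDISample:
    # One sample in a Layered Depth Image: color, depth, and explicit
    # links to neighboring samples (absent where connectivity is cut,
    # e.g., across a depth discontinuity). Hypothetical layout.
    color: tuple                      # (r, g, b)
    depth: float                      # distance from the camera
    neighbors: dict = field(default_factory=dict)  # e.g. 'left' -> LDISample

class LDI:
    """Minimal sketch of a Layered Depth Image: each (x, y) location holds
    a list of samples ordered front-to-back, so occluded (inpainted)
    content can live behind the visible surface."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.layers = {(x, y): [] for x in range(width) for y in range(height)}

    def add_sample(self, x, y, color, depth):
        # Insert a sample and keep the per-pixel list sorted near-to-far.
        sample = LDISample(color=color, depth=depth)
        samples = self.layers[(x, y)]
        samples.append(sample)
        samples.sort(key=lambda s: s.depth)
        return sample

    def front(self, x, y):
        # The nearest sample is what a novel view close to the original sees.
        samples = self.layers[(x, y)]
        return samples[0] if samples else None

# Usage: a visible red surface at depth 1.0 with a hallucinated green
# background sample at depth 3.0 behind it, at the same pixel.
ldi = LDI(2, 2)
near = ldi.add_sample(0, 0, (255, 0, 0), 1.0)
far = ldi.add_sample(0, 0, (0, 255, 0), 3.0)
```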
Topics & Keywords
Authors (4)
Meng-Li Shih
Shih-Yang Su
J. Kopf
Jia-Bin Huang
Quick Access
- Publication Year
- 2020
- Language
- en
- Total Citations
- 329×
- Database Source
- Semantic Scholar
- DOI
- 10.1109/cvpr42600.2020.00805
- Access
- Open Access ✓