Semantic Scholar Open Access 2018 · 833 citations

Everybody Dance Now

C. Chan Shiry Ginosar Tinghui Zhou Alexei A. Efros

Abstract

This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.
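The abstract describes a pipeline where pose serves as the intermediate representation: poses are extracted from the source video, a learned pose-to-appearance mapping generates the target subject, and frames are predicted two at a time for temporal coherence. A minimal structural sketch of that flow is below; the pose detector and generator are stubs (a real system would use something like OpenPose and a learned GAN generator, which are assumptions here, not the paper's exact components).

```python
import numpy as np

def extract_pose(frame):
    """Stub pose detector: returns 2D joint keypoints for one frame.
    (The real pipeline would use an off-the-shelf pose detector.)"""
    h, w = frame.shape[:2]
    # Placeholder: a fixed two-joint skeleton scaled to the frame size.
    return np.array([[0.5 * w, 0.1 * h], [0.5 * w, 0.4 * h]])

def pose_to_appearance(pose_pair):
    """Stub generator: maps two consecutive pose frames to two output
    frames of the target subject (a learned mapping in the real system)."""
    return [np.zeros((64, 64, 3)) for _ in pose_pair]

def transfer(source_frames):
    """'Do as I do' transfer: extract poses from the source, then
    generate target frames two at a time for temporal coherence."""
    poses = [extract_pose(f) for f in source_frames]
    out = []
    for t in range(0, len(poses) - 1, 2):
        out.extend(pose_to_appearance(poses[t:t + 2]))
    return out

source = [np.zeros((128, 128, 3)) for _ in range(4)]
result = transfer(source)  # one generated frame per source frame
```

This only mirrors the data flow named in the abstract; the separate face-synthesis pipeline and the forensics detector are not sketched.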


Authors (4)

C. Chan
Shiry Ginosar
Tinghui Zhou
Alexei A. Efros

Citation Format

Chan, C., Ginosar, S., Zhou, T., Efros, A.A. (2018). Everybody Dance Now. https://doi.org/10.1109/ICCV.2019.00603

Quick Access

PDF not directly available
View at source: doi.org/10.1109/ICCV.2019.00603
Journal Information

Publication Year: 2018
Language: en
Total Citations: 833×
Source Database: Semantic Scholar
DOI: 10.1109/ICCV.2019.00603
Access: Open Access ✓