Semantic Scholar · Open Access · 2022 · 361 citations

EDGE: Editable Dance Generation From Music

Jo-Han Tseng Rodrigo Castellon C. Liu

Abstract

Dance is an important human art form, but creating new dances can be difficult and time-consuming. In this work, we introduce Editable Dance GEneration (EDGE), a state-of-the-art method for editable dance generation that is capable of creating realistic, physically-plausible dances while remaining faithful to the input music. EDGE uses a transformer-based diffusion model paired with Jukebox, a strong music feature extractor, and confers powerful editing capabilities well-suited to dance, including joint-wise conditioning and in-betweening. We introduce a new metric for physical plausibility, and evaluate the dance quality generated by our method extensively through (1) multiple quantitative metrics on physical plausibility, beat alignment, and diversity benchmarks, and, more importantly, (2) a large-scale user study, demonstrating a significant improvement over previous state-of-the-art methods. Qualitative samples from our model can be found at our website.
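The editing capabilities the abstract mentions (joint-wise conditioning, in-betweening) are commonly implemented in diffusion models by masking: at each reverse-diffusion step, the constrained entries of the sample are overwritten with an appropriately noised copy of the known motion, so the model only generates the unconstrained parts. The sketch below illustrates that masking pattern in the abstract; it is not the authors' implementation — `denoise_step` is a hypothetical placeholder for a trained denoiser, and the noise schedule is simplified.

```python
import numpy as np

def masked_edit_sample(denoise_step, x_known, mask, num_steps=50, seed=0):
    """Sketch of constraint-guided diffusion sampling.

    denoise_step(x, t) -- placeholder for a trained denoiser's reverse step.
    x_known            -- motion array holding the known/constrained values.
    mask               -- boolean array; True marks entries to keep fixed
                          (e.g. selected joints, or start/end frames for
                          in-betweening).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(x_known.shape)  # start from pure noise
    for t in range(num_steps, 0, -1):
        x = denoise_step(x, t)  # model update (placeholder)
        # Re-noise the known motion to the current noise level (toy
        # linear schedule) and overwrite the constrained entries.
        noise_level = t / num_steps
        x_noised = x_known + noise_level * rng.standard_normal(x_known.shape)
        x = np.where(mask, x_noised, x)
    # At t = 0, enforce the constraints exactly.
    return np.where(mask, x_known, x)
```

For in-betweening, `mask` would select the first and last few frames; for joint-wise conditioning, it would select the rows corresponding to the constrained joints.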

Authors (3)

Jo-Han Tseng

Rodrigo Castellon

C. Liu

Citation Format

Tseng, J., Castellon, R., & Liu, C. (2022). EDGE: Editable Dance Generation From Music. https://doi.org/10.1109/CVPR52729.2023.00051

Quick Access

Journal Information
Publication Year
2022
Language
en
Total Citations
361×
Source Database
Semantic Scholar
DOI
10.1109/CVPR52729.2023.00051
Access
Open Access ✓