arXiv Open Access 2023

Robust Dancer: Long-term 3D Dance Synthesis Using Unpaired Data

Bin Feng, Tenglong Ao, Zequn Liu, Wei Ju, Libin Liu, Ming Zhang

Abstract

How to automatically synthesize natural-looking dance movements from a piece of music is an increasingly popular yet challenging task. Most existing data-driven approaches require hard-to-obtain paired training data and fail to generate long motion sequences due to the error accumulation inherent in autoregressive structures. We present a novel 3D dance synthesis system that needs only unpaired data for training and can generate realistic long-term motions. For unpaired-data training, we explore the disentanglement of beat and style and propose a Transformer-based model that does not rely on paired data. For the synthesis of long-term motions, we devise a new long-history attention strategy: it first queries a long-history embedding through an attention computation and then explicitly fuses this embedding into the generation pipeline via a multimodal adaptation gate (MAG). Objective and subjective evaluations show that our results are comparable to those of strong baseline methods, despite not requiring paired training data, and remain robust when inferring on long-term music. To the best of our knowledge, we are the first to achieve unpaired-data training, an ability that effectively alleviates data limitations. Our code is released at https://github.com/BFeng14/RobustDancer
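The abstract describes the long-history attention strategy only at a high level: attend over a long history of past features to obtain an embedding, then fuse that embedding into the generation pipeline through a gate. The sketch below is a hypothetical NumPy illustration of that two-step idea, not the paper's actual model; the function names (`query_long_history`, `mag_fuse`), the single-head dot-product attention, and the sigmoid-gate form are all assumptions, since the abstract gives no equations.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_long_history(query, history, d_k):
    # Step 1 (assumed form): scaled dot-product attention of the
    # current-step feature against the long-history embeddings.
    scores = query @ history.T / np.sqrt(d_k)   # (1, T_hist)
    weights = softmax(scores, axis=-1)          # attention over history steps
    return weights @ history                    # (1, d) long-history embedding

def mag_fuse(current, history_emb, W_g, W_h):
    # Step 2 (assumed form): a multimodal adaptation gate — a learned
    # sigmoid gate that scales how much of the history embedding is
    # injected back into the current feature.
    pre_gate = np.concatenate([current, history_emb], axis=-1) @ W_g
    gate = 1.0 / (1.0 + np.exp(-pre_gate))      # elementwise sigmoid gate
    return current + gate * (history_emb @ W_h)

# Toy shapes: feature dim 8, history of 16 past steps, random weights.
rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal((1, d))
history = rng.standard_normal((16, d))
emb = query_long_history(query, history, d)
fused = mag_fuse(query, emb,
                 rng.standard_normal((2 * d, d)),
                 rng.standard_normal((d, d)))
```

In this sketch the gate lets the model interpolate per-dimension between ignoring the history (gate near 0) and fully adding the transformed history embedding (gate near 1), which matches the abstract's claim that the embedding is "explicitly fused" into the pipeline.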


Authors (6)

Bin Feng
Tenglong Ao
Zequn Liu
Wei Ju
Libin Liu
Ming Zhang

Citation Format

Feng, B., Ao, T., Liu, Z., Ju, W., Liu, L., & Zhang, M. (2023). Robust Dancer: Long-term 3D Dance Synthesis Using Unpaired Data. https://arxiv.org/abs/2303.16856

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓