Semantic Scholar Open Access 2022 168 citations

Music2Dance: DanceNet for Music-Driven Dance Generation

Wenlin Zhuang, Congyi Wang, Jinxiang Chai, Yangang Wang, Ming Shao, Siyu Xia

Abstract

Synthesizing human motions from music (i.e., music to dance) is appealing and has attracted much research interest in recent years. It is challenging because dance requires realistic and complex human motions, but more importantly, the synthesized motions should be consistent with the style, rhythm, and melody of the music. In this article, we propose a novel autoregressive generative model, DanceNet, which takes the style, rhythm, and melody of music as control signals to generate 3D dance motions with high realism and diversity. Due to the high long-term spatio-temporal complexity of dance, we adopt dilated convolutions to enlarge the receptive field, and use gated activation units as well as separable convolutions to enhance the fusion of motion features and control signals. To boost the performance of the proposed model, we capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset. Experiments demonstrate that the proposed method achieves state-of-the-art results.
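The paper's implementation is not shown on this page. As a hedged illustration of the building blocks the abstract names, dilated convolution to enlarge the receptive field and a gated activation unit that fuses motion features with the music control signal, here is a minimal NumPy sketch. All function names, shapes, and the dilation value are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1D convolution with dilation.
    x: (T, C_in) motion-feature sequence; w: (K, C_in, C_out) kernel.
    Left-padding keeps the convolution causal (autoregressive)."""
    K, C_in, C_out = w.shape
    T = x.shape[0]
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros((pad, C_in)), x], axis=0)
    y = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            # taps are `dilation` frames apart: larger dilation => larger receptive field
            y[t] += xp[t + k * dilation] @ w[k]
    return y

def gated_block(x, cond, w_filter, w_gate):
    """WaveNet-style gated activation: tanh(filter branch) * sigmoid(gate branch),
    with the music control signal `cond` (shape (T, C_out)) added to both branches."""
    f = dilated_causal_conv1d(x, w_filter, dilation=2) + cond
    g = dilated_causal_conv1d(x, w_gate, dilation=2) + cond
    return np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))  # elementwise sigmoid gate
```

Stacking such blocks with exponentially growing dilations (1, 2, 4, ...) is the standard way a dilated architecture covers long temporal context with few layers, which matches the abstract's motivation of handling the long-term complexity of dance.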


Authors (6)

Wenlin Zhuang
Congyi Wang
Jinxiang Chai
Yangang Wang
Ming Shao
Siyu Xia

Citation Format

Zhuang, W., Wang, C., Chai, J., Wang, Y., Shao, M., Xia, S. (2022). Music2Dance: DanceNet for Music-Driven Dance Generation. https://doi.org/10.1145/3485664

Quick Access

View at source: doi.org/10.1145/3485664
Journal Information

Publication Year: 2022
Language: en
Total Citations: 168
Source Database: Semantic Scholar
DOI: 10.1145/3485664
Access: Open Access ✓