arXiv Open Access 2020

Temporally Guided Music-to-Body-Movement Generation

Hsuan-Kai Kao, Li Su

Abstract

This paper presents a neural network model that generates a virtual violinist's 3-D skeleton movements from music audio. Improving on the conventional recurrent neural network models used to generate 2-D skeleton data in previous works, the proposed model incorporates an encoder-decoder architecture together with a self-attention mechanism to model the complicated dynamics of body movement sequences. To facilitate optimization of the self-attention model, beat tracking is applied to determine effective sizes and boundaries of the training examples. The decoder is accompanied by a refining network and a bowing attack inference mechanism that emphasize right-hand behavior and bowing attack timing. Both objective and subjective evaluations reveal that the proposed model outperforms state-of-the-art methods. To the best of our knowledge, this work represents the first attempt to generate 3-D violinists' body movements while accounting for key features of musical body movement.
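The abstract's beat-guided segmentation idea can be illustrated with a minimal sketch: tracked beat times are mapped to frame indices, and the frame-level feature sequence is split into training examples whose boundaries fall on beats. The frame rate, beat times, and beats-per-example value below are illustrative assumptions, not values from the paper.

```python
FPS = 30  # assumed frame rate of the skeleton/feature sequence

def beats_to_frames(beat_times, fps=FPS):
    """Convert beat timestamps in seconds to frame indices."""
    return [round(t * fps) for t in beat_times]

def segment_on_beats(num_frames, beat_frames, beats_per_example=4):
    """Split [0, num_frames) into segments bounded by every
    `beats_per_example`-th beat frame, so each training example
    starts and ends on a tracked beat (or the sequence edge)."""
    bounds = beat_frames[::beats_per_example]
    if not bounds or bounds[0] != 0:
        bounds = [0] + bounds  # cover any pre-beat lead-in
    segments = []
    for start, end in zip(bounds, bounds[1:] + [num_frames]):
        if end > start:
            segments.append((start, end))
    return segments

beat_times = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]  # assumed beats
frames = beats_to_frames(beat_times)
print(segment_on_beats(150, frames))  # → [(0, 15), (15, 75), (75, 150)]
```

Segmenting on beats rather than at fixed window sizes keeps each example musically coherent, which is the motivation the abstract gives for using beat tracking.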

Authors (2)

Hsuan-Kai Kao

Li Su

Citation Format

Kao, H.-K., & Su, L. (2020). Temporally Guided Music-to-Body-Movement Generation. https://arxiv.org/abs/2009.08015

Journal Information

Year Published
2020
Language
en
Source Database
arXiv
Access
Open Access ✓