arXiv Open Access 2025

EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation

Fathinah Izzati Xinyue Li Gus Xia

Abstract

We propose Expotion (Facial Expression and Motion Control for Multimodal Music Generation), a generative model leveraging multimodal visual controls - specifically, human facial expressions and upper-body motion - as well as text prompts to produce expressive and temporally accurate music. We adopt parameter-efficient fine-tuning (PEFT) on the pretrained text-to-music generation model, enabling fine-grained adaptation to the multimodal controls using a small dataset. To ensure precise synchronization between video and music, we introduce a temporal smoothing strategy to align multiple modalities. Experiments demonstrate that integrating visual features alongside textual descriptions enhances the overall quality of generated music in terms of musicality, creativity, beat-tempo consistency, temporal alignment with the video, and text adherence, surpassing both proposed baselines and existing state-of-the-art video-to-music generation models. Additionally, we introduce a novel dataset consisting of 7 hours of synchronized video recordings capturing expressive facial and upper-body gestures aligned with corresponding music, providing significant potential for future research in multimodal and interactive music generation.
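The abstract does not spell out the temporal smoothing strategy used to align the visual controls with the music model's frame rate. As a purely illustrative sketch (the function name, window size, and frame rates below are hypothetical, not taken from the paper), one common approach is to smooth per-frame visual features with a moving average and then resample them from the video frame rate to the generator's token rate:

```python
import numpy as np

def smooth_and_resample(video_feats, video_fps, target_rate, window=5):
    """Smooth per-frame visual features with a moving average, then
    linearly resample them to a target (music-model) frame rate.

    video_feats: (T, D) array of per-frame features
                 (e.g., facial-expression or motion embeddings).
    Returns an (n_target, D) array aligned to the target rate.
    """
    T, D = video_feats.shape
    # Moving-average smoothing along the time axis (same-length output).
    kernel = np.ones(window) / window
    smoothed = np.stack(
        [np.convolve(video_feats[:, d], kernel, mode="same") for d in range(D)],
        axis=1,
    )
    # Map each target timestep to a source time and interpolate.
    duration = T / video_fps
    n_target = int(round(duration * target_rate))
    src_t = np.arange(T) / video_fps
    tgt_t = np.arange(n_target) / target_rate
    return np.stack(
        [np.interp(tgt_t, src_t, smoothed[:, d]) for d in range(D)],
        axis=1,
    )
```

For example, three seconds of 30 fps video features resampled to a 50 Hz token rate yields 150 aligned feature vectors. The actual method in the paper may differ in both the smoothing operator and the resampling scheme.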

Authors (3)

Fathinah Izzati
Xinyue Li
Gus Xia

Citation Format

Izzati, F., Li, X., & Xia, G. (2025). EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation. arXiv preprint arXiv:2507.04955. https://arxiv.org/abs/2507.04955

Quick Access

View at Source
Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓