arXiv Open Access 2025

Extending Visual Dynamics for Video-to-Music Generation

Xiaohao Liu Teng Tu Yunshan Ma Tat-Seng Chua

Abstract

Music profoundly enhances video production by improving quality, engagement, and emotional resonance, sparking growing interest in video-to-music generation. Despite recent advances, existing approaches remain limited to specific scenarios or undervalue visual dynamics. To address these limitations, we focus on tackling the complexity of dynamics and resolving the temporal misalignment between video and music representations. To this end, we propose DyViM, a novel framework that enhances dynamics modeling for video-to-music generation. Specifically, we extract frame-wise dynamics features via a simplified motion encoder inherited from optical flow methods, followed by a self-attention module for aggregation within frames. These dynamics features are then incorporated to extend existing music tokens for temporal alignment. Additionally, high-level semantics are conveyed through a cross-attention mechanism, and an annealing tuning strategy enables efficient fine-tuning of well-trained music decoders, facilitating seamless adaptation. Extensive experiments demonstrate DyViM's superiority over state-of-the-art (SOTA) methods.
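The pipeline the abstract outlines can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's implementation: the module names (`DynamicsAggregator`, `extend_music_tokens`), the tensor sizes, and the choice of concatenation for extending music tokens are placeholders standing in for DyViM's actual motion encoder and decoder adaptation.

```python
import torch
import torch.nn as nn

class DynamicsAggregator(nn.Module):
    """Hypothetical stand-in for the paper's self-attention aggregation
    of frame-wise dynamics features (sizes are illustrative)."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, motion_feats):
        # motion_feats: (batch, frames, dim) frame-wise dynamics features,
        # assumed to come from a motion encoder built on optical flow
        out, _ = self.attn(motion_feats, motion_feats, motion_feats)
        return out

def extend_music_tokens(music_tokens, dynamics):
    # One possible reading of "extending music tokens": concatenate a
    # per-step dynamics vector onto each music token embedding, assuming
    # the two sequences are already temporally aligned.
    return torch.cat([music_tokens, dynamics], dim=-1)

# Toy shapes: batch of 2, 8 time steps, 64-dim features
B, T, D = 2, 8, 64
dynamics = DynamicsAggregator(dim=D)(torch.randn(B, T, D))
music = torch.randn(B, T, D)
extended = extend_music_tokens(music, dynamics)
print(extended.shape)  # torch.Size([2, 8, 128])
```

The cross-attention pathway for high-level semantics and the annealing tuning schedule are omitted; this only shows how frame-wise dynamics could be aggregated and fused with music tokens at matching time steps.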

Topics & Keywords

Authors (4)

Xiaohao Liu
Teng Tu
Yunshan Ma
Tat-Seng Chua

Citation Format

Liu, X., Tu, T., Ma, Y., & Chua, T.-S. (2025). Extending Visual Dynamics for Video-to-Music Generation. https://arxiv.org/abs/2504.07594

Journal Information
Publication Year
2025
Language
en
Database Source
arXiv
Access
Open Access ✓