Foley Music: Learning to Generate Music from Videos
Abstract
In this paper, we introduce Foley Music, a system that can synthesize plausible music for a silent video clip of people playing musical instruments. We first identify two key intermediate representations for a successful video-to-music generator: body keypoints from videos and MIDI events from audio recordings. We then formulate music generation from videos as a motion-to-MIDI translation problem. We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements. The MIDI events can then be converted to realistic music using an off-the-shelf music synthesizer tool. We demonstrate the effectiveness of our models on videos containing a variety of music performances. Experimental results show that our model outperforms several existing systems in generating music that is pleasant to listen to. More importantly, the MIDI representations are fully interpretable and transparent, thus enabling us to perform music editing flexibly. We encourage the readers to watch the demo video with audio turned on to experience the results.
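The abstract frames generation as predicting a sequence of MIDI events from body motion. As a rough illustration of what such a target sequence looks like, the sketch below flattens predicted notes (pitch, onset, offset) into ordered note-on / note-off / time-shift events. The function name and tokenization details are assumptions for illustration, not the authors' exact scheme.

```python
def notes_to_midi_events(notes):
    """Convert (pitch, onset, offset) triples into a MIDI-style event list.

    Returns tokens such as ("note_on", 60), ("time_shift", 0.5),
    ("note_off", 60), ordered in time; at equal timestamps, note-offs
    are emitted before note-ons.
    """
    boundary = []
    for pitch, onset, offset in notes:
        boundary.append((onset, 1, "note_on", pitch))
        boundary.append((offset, 0, "note_off", pitch))
    boundary.sort()  # by time, then note_off (0) before note_on (1)

    events, clock = [], 0.0
    for time, _, kind, pitch in boundary:
        if time > clock:
            # advance the running clock with an explicit time-shift token
            events.append(("time_shift", round(time - clock, 3)))
            clock = time
        events.append((kind, pitch))
    return events

# Example: two overlapping notes, C4 (pitch 60) and E4 (pitch 64)
events = notes_to_midi_events([(60, 0.0, 1.0), (64, 0.5, 1.5)])
# → [("note_on", 60), ("time_shift", 0.5), ("note_on", 64),
#    ("time_shift", 0.5), ("note_off", 60), ("time_shift", 0.5),
#    ("note_off", 64)]
```

Such an event sequence is what a synthesizer (e.g. FluidSynth with a soundfont) can render back into an audio waveform, which is how the paper turns predicted MIDI into listenable music.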
Topics & Keywords
Authors (5)
Chuang Gan
Deng Huang
Peihao Chen
J. Tenenbaum
A. Torralba
Quick Access
PDF not directly available
Check the original source →
- Year Published: 2020
- Language: en
- Total Citations: 156×
- Source Database: Semantic Scholar
- DOI: 10.1007/978-3-030-58621-8_44
- Access: Open Access ✓