arXiv Open Access 2026

Audiocards: Structured Metadata Improves Audio Language Models For Sound Design

Sripathi Sridhar Prem Seetharaman Oriol Nieto Mark Cartwright Justin Salamon

Abstract

Sound designers search large sound effects libraries using aspects such as sound class or visual context. However, the metadata needed for such search is often missing or incomplete, and adding it requires significant manual effort. Existing solutions that automate this task, generating metadata (i.e., captioning) and searching with learned embeddings (i.e., text-audio retrieval), are not trained on metadata with the structure and information pertinent to sound design. To this end, we propose audiocards: structured metadata grounded in acoustic attributes and sonic descriptors, generated by exploiting the world knowledge of LLMs. We show that training on audiocards improves downstream text-audio retrieval, descriptive captioning, and metadata generation on professional sound effects libraries. Moreover, audiocards also improve performance on general audio captioning and retrieval over the baseline single-sentence captioning approach. We release a curated dataset of sound effects audiocards to invite further research in audio language modeling for sound design.


Citation Format

Sridhar, S., Seetharaman, P., Nieto, O., Cartwright, M., & Salamon, J. (2026). Audiocards: Structured Metadata Improves Audio Language Models For Sound Design. https://arxiv.org/abs/2602.13835

Publication Information
Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓