Audio-Lyrics Multimodal Fusion for Music Genre Clustering with Dynamic Modality Weighting
Abstract
In music information retrieval, existing genre clustering approaches used for music recommendation, automatic tagging, and content organization typically combine audio and lyrics with static weights, ignoring the fact that different genres depend on these two modalities to varying extents. This paper proposes an audio-lyrics multimodal fusion system with dynamic modality weighting for unsupervised music genre clustering. First, multi-level representations are extracted separately from lyrics and audio. Then, heuristic rules based on indicators such as instrument presence, energy, and feature quality are applied to compute a modality weight for each sample, enabling sample-level adaptive fusion. Ablation studies on a simulated dataset show that the dynamic weighting technique substantially outperforms static-weight fusion and single-modality baselines on clustering quality metrics. Further analysis of weight distributions across clusters reveals that the dynamic weighting scheme flexibly captures genre-specific modality dependence and improves the interpretability of clustering results. To further validate the feature extraction and clustering pipeline, follow-up experiments were also conducted on the real-world Marsyas GTZAN dataset.
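The abstract describes sample-level fusion in which a per-track modality weight is derived from heuristic indicators (instrument presence, energy, feature quality) before the audio and lyric embeddings are combined. The sketch below illustrates one plausible form of that idea; the threshold values, indicator names, and weighted-concatenation fusion are assumptions, not the paper's exact rules.

```python
import numpy as np

def dynamic_audio_weight(instrumentalness, energy, lyric_quality):
    """Heuristic per-sample audio weight in [0, 1].
    Thresholds are illustrative assumptions, not the paper's rules."""
    w_audio = 0.5                    # start from an equal split
    if instrumentalness > 0.7:       # mostly instrumental -> trust audio more
        w_audio += 0.3
    if energy > 0.8:                 # high-energy tracks lean on audio cues
        w_audio += 0.1
    if lyric_quality < 0.3:          # weak lyric features -> downweight lyrics
        w_audio += 0.1
    return float(np.clip(w_audio, 0.0, 1.0))

def fuse(audio_vec, lyric_vec, w_audio):
    """Weighted concatenation of L2-normalized modality embeddings."""
    a = audio_vec / (np.linalg.norm(audio_vec) + 1e-9)
    l = lyric_vec / (np.linalg.norm(lyric_vec) + 1e-9)
    return np.concatenate([w_audio * a, (1.0 - w_audio) * l])

# Example: an instrumental, high-energy track with weak lyric features
w = dynamic_audio_weight(instrumentalness=0.9, energy=0.85, lyric_quality=0.2)
fused = fuse(np.random.rand(8), np.random.rand(8), w)
```

The fused vectors can then be passed to any standard clustering algorithm (e.g. k-means); because the weight is computed per sample rather than per dataset, each track's fused representation reflects how informative its two modalities are.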
Topics & Keywords
Author (1)
Yang Jinyu
Quick Access
- Publication Year
- 2026
- Database Source
- DOAJ
- DOI
- 10.1051/itmconf/20268403025
- Access
- Open Access ✓