Results for "Music"

Showing 20 of ~1,057,576 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

S2 Open Access 2023
Music Received

K. Abromeit

This column features percussion music by women composers. Names were compiled using the Theodore Front database. Additional names were contributed by Ross Karre, Associate Professor of Percussion at the Oberlin Conservatory of Music. The lists of works included are not comprehensive for each composer but are rather either recently published or notable. Special thanks to Jake Balmuth for assisting with this column.

S2 Open Access 2020
Jukebox: A Generative Model for Music

Prafulla Dhariwal, Heewoo Jun, Christine Payne et al.

We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multi-scale VQ-VAE to compress it to discrete codes, and by modeling those codes with autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non-cherry-picked samples at this https URL, along with model weights and code at this https URL.

932 citations · en · Engineering, Computer Science
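The VQ-VAE bottleneck the Jukebox abstract describes can be sketched as a nearest-codebook lookup. This is an illustrative toy, not the authors' implementation; Jukebox stacks several such codebooks at different temporal resolutions (multi-scale).

```python
import numpy as np

def vq_quantize(z, codebook):
    """Map each latent vector in z to its nearest codebook entry.

    z: (T, d) array of encoder latents; codebook: (K, d) array.
    Returns (codes, z_q): discrete token indices (T,) and the
    quantized latents (T, d) that the decoder would see.
    """
    # Squared Euclidean distance between every latent and every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    codes = d2.argmin(axis=1)          # one discrete token per timestep
    return codes, codebook[codes]      # quantized latents
```

The discrete `codes` are what the autoregressive Transformers model; the quantized latents feed the decoder at generation time.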
S2 Open Access 2015
librosa: Audio and Music Signal Analysis in Python

Brian McFee, Colin Raffel, Dawen Liang et al.

This document describes version 0.4.0 of librosa: a Python package for audio and music signal processing. At a high level, librosa provides implementations of a variety of common functions used throughout the field of music information retrieval. In this document, a brief overview of the library's functionality is provided, along with explanations of the design goals, software development practices, and notational conventions.

2905 citations · en · Computer Science
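One of the "common functions" librosa packages up is frame-wise RMS energy (cf. `librosa.feature.rms`). The pure-numpy sketch below mirrors that computation without requiring librosa itself; frame and hop lengths match librosa's defaults, and no edge padding is applied.

```python
import numpy as np

def frame_rms(y, frame_length=2048, hop_length=512):
    """Root-mean-square energy per frame of signal y (no padding)."""
    n_frames = 1 + (len(y) - frame_length) // hop_length
    frames = np.stack([y[i * hop_length : i * hop_length + frame_length]
                       for i in range(n_frames)])
    return np.sqrt((frames ** 2).mean(axis=1))

# A 1-second 440 Hz sine at 22050 Hz, librosa's default sampling rate.
sr = 22050
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)
rms = frame_rms(y)   # each frame's RMS is ~1/sqrt(2) for a unit sine
```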
S2 Open Access 2015
MUSAN: A Music, Speech, and Noise Corpus

David Snyder, Guoguo Chen, Daniel Povey

This report introduces a new corpus of music, speech, and noise. This dataset is suitable for training models for voice activity detection (VAD) and music/speech discrimination. Our corpus is released under a flexible Creative Commons license. The dataset consists of music from several genres, speech from twelve languages, and a wide assortment of technical and non-technical noises. We demonstrate use of this corpus for music/speech discrimination on Broadcast news and VAD for speaker identification.

1599 citations · en · Computer Science
S2 Open Access 2023
MusicLM: Generating Music From Text

A. Agostinelli, Timo I. Denk, Zalán Borsos et al.

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.

643 citations · en · Computer Science, Engineering
S2 Open Access 2023
Simple and Controllable Music Generation

Jade Copet, F. Kreuk, Itai Gat et al.

We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen. Music samples, code, and models are available at https://github.com/facebookresearch/audiocraft.

636 citations · en · Computer Science, Engineering
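The "efficient token interleaving patterns" in the MusicGen abstract refer to flattening several parallel codebook streams into one sequence a single-stage LM can predict. The sketch below shows one such pattern, a delay interleaving in which stream k lags k steps; it is an illustration of the idea, not the paper's exact implementation.

```python
def delay_interleave(streams, pad=-1):
    """Flatten K parallel codebook streams of length T into one sequence
    by delaying stream k by k steps, so step t emits
    [s0[t], s1[t-1], ..., sK-1[t-K+1]]. Positions before a stream has
    started (or after it has ended) are filled with a pad token."""
    K, T = len(streams), len(streams[0])
    out = []
    for t in range(T + K - 1):
        for k in range(K):
            i = t - k
            out.append(streams[k][i] if 0 <= i < T else pad)
    return out
```

The delay lets each codebook condition on the coarser codebooks of the same timestep without a second cascaded model.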
S2 Open Access 2021
AI Choreographer: Music Conditioned 3D Dance Generation with AIST++

Ruilong Li, Sha Yang, David A. Ross et al.

We present AIST++, a new multi-modal dataset of 3D dance motion and music, along with FACT, a Full-Attention Cross-modal Transformer network for generating 3D dance motion conditioned on music. The proposed AIST++ dataset contains 5.2 hours of 3D dance motion in 1408 sequences, covering 10 dance genres with multi-view videos with known camera poses—the largest dataset of this kind to our knowledge. We show that naively applying sequence models such as transformers to this dataset for the task of music conditioned 3D motion generation does not produce satisfactory 3D motion that is well correlated with the input music. We overcome these shortcomings by introducing key changes in its architecture design and supervision: FACT model involves a deep cross-modal transformer block with full-attention that is trained to predict N future motions. We empirically show that these changes are key factors in generating long sequences of realistic dance motion that are well-attuned to the input music. We conduct extensive experiments on AIST++ with user studies, where our method outperforms recent state-of-the-art methods both qualitatively and quantitatively. The code and the dataset can be found at: https://google.github.io/aichoreographer.

683 citations · en · Computer Science
S2 Open Access 2017
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang et al.

Generating music has a few notable differences from generating images and videos. First, music is an art of time, necessitating a temporal model. Second, music is usually composed of multiple instruments/tracks with their own temporal dynamics, but collectively they unfold over time interdependently. Lastly, musical notes are often grouped into chords, arpeggios or melodies in polyphonic music, so imposing a strict chronological ordering of notes is not always natural. In this paper, we propose three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs). The three models, which differ in the underlying assumptions and accordingly the network architectures, are referred to as the jamming model, the composer model and the hybrid model. We trained the proposed models on a dataset of over one hundred thousand bars of rock music and applied them to generate piano-rolls of five tracks: bass, drums, guitar, piano and strings. A few intra-track and inter-track objective metrics are also proposed to evaluate the generative results, in addition to a subjective user study. We show that our models can generate coherent music of four bars right from scratch (i.e. without human inputs). We also extend our models to human-AI cooperative music generation: given a specific track composed by a human, we can generate four additional tracks to accompany it. All code, the dataset and the rendered audio samples are available at https://salu133445.github.io/musegan/.

615 citations · en · Computer Science, Engineering
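The multi-track piano-roll representation the MuseGAN abstract mentions is just a boolean tensor of tracks × timesteps × pitches. A minimal sketch (the 16-steps-per-bar resolution is an assumption for illustration, not necessarily the paper's):

```python
import numpy as np

# 5 tracks (the paper's bass, drums, guitar, piano, strings),
# 4 bars of 16 timesteps each, 128 MIDI pitches.
TRACKS = ["bass", "drums", "guitar", "piano", "strings"]
roll = np.zeros((5, 4 * 16, 128), dtype=bool)

def add_note(roll, track, start, duration, pitch):
    """Turn on `pitch` for `duration` timesteps starting at `start`."""
    roll[TRACKS.index(track), start:start + duration, pitch] = True

add_note(roll, "bass", 0, 16, 36)   # whole-bar C2 on the bass track
add_note(roll, "piano", 0, 4, 60)   # quarter-note middle C on the piano track
```

Because every track shares the same time axis, a generator can model intra-track dynamics per slice while inter-track coherence is judged across the first dimension.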
S2 Open Access 2018
Music Transformer: Generating Music with Long-Term Structure

Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit et al.

Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity for intermediate relative information is quadratic in the sequence length. We propose an algorithm that reduces their intermediate memory requirement to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long compositions (thousands of steps, four times the length modeled in Oore et al., 2018) with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.

562 citations · en · Computer Science, Engineering
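The memory reduction the Music Transformer abstract describes rests on a "skewing" trick: instead of materialising a relative embedding for every query/key pair, multiply the queries once against the table of relative embeddings and realign the result with a pad-and-reshape. A numpy sketch of that realignment, under the paper's causal setting (only entries with j ≤ i are meaningful; the rest are masked anyway):

```python
import numpy as np

def skew(qe):
    """Realign qe = Q @ E.T into relative-attention logits.

    Row i of Q is the query at position i; row k of E embeds relative
    distance k - (L - 1). Prepending a dummy column and reshaping moves
    qe[i, j - i + L - 1] into position (i, j), so the (L, L, d) tensor of
    per-pair relative embeddings is never built: memory stays O(L*d + L*L).
    """
    L = qe.shape[0]
    padded = np.pad(qe, ((0, 0), (1, 0)))    # dummy column -> (L, L+1)
    return padded.reshape(L + 1, L)[1:, :]   # drop first row -> (L, L)
```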
S2 Open Access 2020
Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions

Yu-Siang Huang, Yi-Hsuan Yang

A great number of deep learning based models have been recently proposed for automatic music composition. Among these models, the Transformer stands out as a prominent approach for generating expressive classical piano performance with a coherent structure of up to one minute. The model is powerful in that it learns abstractions of data on its own, without much human-imposed domain knowledge or constraints. In contrast with this general approach, this paper shows that Transformers can do even better for music modeling, when we improve the way a musical score is converted into the data fed to a Transformer model. In particular, we seek to impose a metrical structure in the input data, so that Transformers can be more easily aware of the beat-bar-phrase hierarchical structure in music. The new data representation maintains the flexibility of local tempo changes, and provides handles to control the rhythmic and harmonic structure of music. With this approach, we build a Pop Music Transformer that composes Pop piano music with better rhythmic structure than existing Transformer models.

372 citations · en · Computer Science
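The beat-aware input representation the abstract argues for can be sketched as a serializer that opens every bar with an explicit Bar token and pins each note to a point on the metrical grid with a Position token. Token names below are illustrative, not the paper's exact vocabulary.

```python
def to_beat_tokens(notes, steps_per_bar=16):
    """Serialize (bar, position, pitch, duration) note tuples into a
    beat-aware token stream: Bar tokens mark barlines, Position tokens
    place each note on the metrical grid, making the beat-bar hierarchy
    explicit for a Transformer."""
    tokens = []
    current_bar = -1
    for bar, pos, pitch, dur in sorted(notes):
        while current_bar < bar:          # emit a Bar token per new bar
            tokens.append("Bar")
            current_bar += 1
        tokens += [f"Position_{pos}/{steps_per_bar}",
                   f"Pitch_{pitch}", f"Duration_{dur}"]
    return tokens
```

Compare this with a plain time-shift encoding, where barlines are never explicit and the model must infer the metrical grid on its own.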
S2 Open Access 2020
Effects of music interventions on stress-related outcomes: a systematic review and two meta-analyses

Martina de Witte, A. Spruit, Susan van Hooren et al.

Music interventions are used for stress reduction in a variety of settings because of the positive effects of music listening on both physiological arousal (e.g., heart rate, blood pressure, and hormonal levels) and psychological stress experiences (e.g., restlessness, anxiety, and nervousness). To summarize the growing body of empirical research, two multilevel meta-analyses of 104 RCTs, containing 327 effect sizes and 9,617 participants, were performed to assess the strength of the effects of music interventions on both physiological and psychological stress-related outcomes, and to test the potential moderators of the intervention effects. Results showed that music interventions had an overall significant effect on stress reduction in both physiological (d = .380) and psychological (d = .545) outcomes. Further, moderator analyses showed that the type of outcome assessment moderated the effects of music interventions on stress-related outcomes. Larger effects were found on heart rate (d = .456), compared to blood pressure (d = .343) and hormone levels (d = .349). Implications for stress-reducing music interventions are discussed.

366 citations · en · Medicine, Psychology
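The effect sizes reported in this meta-analysis (d = .380, .545, etc.) are standardized mean differences. For readers unfamiliar with the metric, a minimal computation of Cohen's d with a pooled standard deviation; the heart-rate numbers in the usage line are hypothetical, not from the study.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups:
    (m1 - m2) divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical example: intervention group mean heart rate 80 bpm (sd 10,
# n 50) vs. control 85 bpm (sd 10, n 50) gives d = -0.5, a medium effect.
d = cohens_d(80, 10, 50, 85, 10, 50)
```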
S2 Open Access 2022
MuLan: A Joint Embedding of Music Audio and Natural Language

Qingqing Huang, A. Jansen, Joonseok Lee et al.

Music tagging and content-based retrieval systems have traditionally been constructed using pre-defined ontologies covering a rigid set of music attributes or text queries. This paper presents MuLan: a first attempt at a new generation of acoustic models that link music audio directly to unconstrained natural language music descriptions. MuLan takes the form of a two-tower, joint audio-text embedding model trained using 44 million music recordings (370K hours) and weakly-associated, free-form text annotations. Through its compatibility with a wide range of music genres and text styles (including conventional music tags), the resulting audio-text representation subsumes existing ontologies while graduating to true zero-shot functionalities. We demonstrate the versatility of the MuLan embeddings with a range of experiments including transfer learning, zero-shot music tagging, language understanding in the music domain, and cross-modal retrieval applications.

176 citations · en · Computer Science, Engineering
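The cross-modal retrieval application the MuLan abstract mentions reduces, at inference time, to ranking audio clips by cosine similarity against a text query in the shared embedding space. A toy sketch; the embeddings here are stand-ins for the outputs of the text and audio towers, not real model outputs.

```python
import numpy as np

def retrieve(query_emb, audio_embs):
    """Rank audio clips by cosine similarity to a text-query embedding.

    query_emb: (d,) text-tower output; audio_embs: (N, d) audio-tower
    outputs. Returns clip indices, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    sims = a @ q                  # cosine similarity per clip
    return np.argsort(-sims)      # descending similarity
```

Zero-shot tagging works the same way with the roles swapped: embed each candidate tag as text and rank tags against one audio embedding.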
S2 Open Access 2024
Long-form music generation with latent diffusion

Zach Evans, Julian Parker, CJ Carr et al.

Audio-based generative models for music have seen great strides recently, but so far have not managed to produce full-length music tracks with coherent musical structure from text prompts. We show that by training a generative model on long temporal contexts it is possible to produce long-form music of up to 4m45s. Our model consists of a diffusion-transformer operating on a highly downsampled continuous latent representation (latent rate of 21.5 Hz). It obtains state-of-the-art generations according to metrics on audio quality and prompt alignment, and subjective tests reveal that it produces full-length music with coherent structure.

102 citations · en · Computer Science, Engineering
S2 Open Access 2023
LP-MusicCaps: LLM-Based Pseudo Music Captioning

Seungheon Doh, Keunwoo Choi, Jongpil Lee et al.

Automatic music captioning, which generates natural language descriptions for given music tracks, holds significant potential for enhancing the understanding and organization of large volumes of musical data. Despite its importance, researchers face challenges due to the costly and time-consuming collection process of existing music-language datasets, which are limited in size. To address this data scarcity issue, we propose the use of large language models (LLMs) to artificially generate the description sentences from large-scale tag datasets. This results in approximately 2.2M captions paired with 0.5M audio clips. We term it Large Language Model based Pseudo music caption dataset, shortly, LP-MusicCaps. We conduct a systematic evaluation of the large-scale music captioning dataset with various quantitative evaluation metrics used in the field of natural language processing as well as human evaluation. In addition, we trained a transformer-based music captioning model with the dataset and evaluated it under zero-shot and transfer-learning settings. The results demonstrate that our proposed approach outperforms the supervised baseline model.

124 citations · en · Computer Science, Engineering
S2 Open Access 2023
VampNet: Music Generation via Masked Acoustic Token Modeling

H. F. García, Prem Seetharaman, Rithesh Kumar et al.

We introduce VampNet, a masked acoustic token modeling approach to music synthesis, compression, inpainting, and variation. We use a variable masking schedule during training which allows us to sample coherent music from the model by applying a variety of masking approaches (called prompts) during inference. VampNet is non-autoregressive, leveraging a bidirectional transformer architecture that attends to all tokens in a forward pass. With just 36 sampling passes, VampNet can generate coherent high-fidelity musical waveforms. We show that by prompting VampNet in various ways, we can apply it to tasks like music compression, inpainting, outpainting, continuation, and looping with variation (vamping). Appropriately prompted, VampNet is capable of maintaining style, genre, instrumentation, and other high-level aspects of the music. This flexible prompting capability makes VampNet a powerful music co-creation tool. Code and audio samples are available online.

87 citations · en · Computer Science, Engineering
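The "variable masking schedule" the VampNet abstract describes can be illustrated with a cosine schedule of the kind popularized by masked token modeling (e.g. MaskGIT): nearly everything masked early in the schedule, nothing at the end. This is a generic sketch of the idea; VampNet's exact schedule and prompt types may differ.

```python
import math
import random

def cosine_mask(tokens, t, mask_token="<MASK>", rng=random):
    """Mask a fraction cos(pi/2 * t) of the tokens, t in [0, 1]:
    all tokens masked at t=0, none at t=1. During inference, sweeping t
    from 0 to 1 lets the model fill in the sequence over a few passes."""
    n = len(tokens)
    k = round(math.cos(math.pi / 2 * t) * n)       # how many to mask
    idx = set(rng.sample(range(n), k))             # which positions
    return [mask_token if i in idx else tok for i, tok in enumerate(tokens)]
```

Different "prompts" (periodic masks, masks that spare the beginning for continuation, masks that spare both ends for inpainting) are just different choices of which positions to mask.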

Page 1 of 52,879