S. Hirsch
Results for "Music"
Showing 20 of ~1,057,489 results · from CrossRef, arXiv, DOAJ, Semantic Scholar
M. Frueh
Aniruddh D. Patel
1. Introduction
2. Sound Elements: Pitch and Timbre (2.1 Introduction; 2.2 Musical Sound Systems; 2.3 Linguistic Sound Systems; 2.4 Sound Category Learning as a Key Link; 2.5 Conclusion; Appendixes)
3. Rhythm (3.1 Introduction; 3.2 Rhythm in Music; 3.3 Rhythm in Speech; 3.4 Interlude: Rhythm in Poetry and Song; 3.5 Non-Periodic Aspects of Rhythm as a Key Link; 3.6 Conclusion; Appendixes)
4. Melody (4.1 Introduction; 4.2 Melody in Music: Comparisons to Speech; 4.3 Speech Melody: Links to Music; 4.4 Interlude: Musical and Linguistic Melody in Song; 4.5 Melodic Statistics and Melodic Contour as Key Links; 4.6 Conclusion; Appendix)
5. Syntax (5.1 Introduction; 5.2 The Structural Richness of Musical Syntax; 5.3 Formal Differences and Similarities between Musical and Linguistic Syntax; 5.4 Neural Resources for Syntactic Integration as a Key Link; 5.5 Conclusion)
6. Meaning (6.1 Introduction; 6.2 A Brief Taxonomy of Musical Meaning; 6.3 Linguistic Meaning in Relation to Music; 6.4 Interlude: Linguistic and Musical Meaning in Song; 6.5 The Expression and Appraisal of Emotion as a Key Link; 6.6 Conclusion)
7. Evolution (7.1 Introduction; 7.2 Language and Natural Selection; 7.3 Music and Natural Selection; 7.4 Music and Evolution: Neither Adaptation nor Frill; 7.5 Beat-Based Rhythm Processing as a Key Research Area; 7.6 Conclusion; Appendix)
Afterword · References · List of Sound Examples · List of Credits · Author Index · Subject Index
Sarah Thornton
D. Harvey, F. Lerdahl, Ray Jackendoff
J. Bradt, C. DiLeo, Lucanne Magill et al.
Michaël Defferrard, Kirell Benzi, P. Vandergheynst et al.
We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. However, the community's growing interest in feature and end-to-end learning is constrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks by 16,341 artists across 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length, high-quality audio, pre-computed features, and track- and user-level metadata, tags, and free-form text such as biographies. Here we describe the dataset and how it was created, propose a train/validation/test split and three subsets, discuss some suitable MIR tasks, and evaluate baselines for genre recognition. Code, data, and usage examples are available at this https URL
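As a minimal sketch of how such a dataset might be consumed, the snippet below loads per-track metadata with pandas and selects the proposed split for genre recognition. The file name and the column names (`genre_top`, `split`) are assumptions about the metadata layout, not details confirmed by the abstract.

```python
# Minimal sketch: loading FMA-style track metadata and selecting the
# proposed split. File name and column names are assumptions, not taken
# from the paper itself.
import pandas as pd

tracks = pd.read_csv("tracks.csv", index_col=0)

# Keep only tracks with a top-level genre label for genre recognition.
labeled = tracks.dropna(subset=["genre_top"])

# Use the proposed train/validation/test split if the metadata provides one.
train = labeled[labeled["split"] == "training"]
val = labeled[labeled["split"] == "validation"]
test = labeled[labeled["split"] == "test"]

print(len(train), len(val), len(test))
```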
Li-Chia Yang, Szu-Yu Chou, Yi-Hsuan Yang
Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. In this light, we investigate using CNNs for generating melody (a series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distributions of melodies, making the model a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g., a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e., tracks). We conducted a user study comparing eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody. Results show that MidiNet performs comparably with the MelodyRNN models in being realistic and pleasant to listen to, while MidiNet's melodies are reported to be much more interesting.
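To make the conditional generation idea concrete, here is a minimal PyTorch sketch of a CNN generator that produces one bar of melody as a pitch-by-time piano-roll, conditioned on a chord vector. This is an illustration of the general technique, not MidiNet's actual architecture; all layer sizes and the 13-dimensional chord encoding are assumptions.

```python
# Illustrative sketch (not MidiNet's actual architecture): a CNN generator
# producing one bar as a pitch-by-time piano-roll, conditioned on a chord
# vector. Shapes and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class BarGenerator(nn.Module):
    def __init__(self, z_dim=100, chord_dim=13, pitches=128, steps=16):
        super().__init__()
        self.pitches, self.steps = pitches, steps
        self.fc = nn.Linear(z_dim + chord_dim, 256 * (pitches // 8) * (steps // 8))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # note-on probability per (pitch, step) cell
        )

    def forward(self, z, chord):
        # Concatenate noise with the chord condition, reshape to a small
        # feature map, then upsample to a full bar.
        x = self.fc(torch.cat([z, chord], dim=1))
        x = x.view(-1, 256, self.pitches // 8, self.steps // 8)
        return self.net(x)

gen = BarGenerator()
bar = gen(torch.randn(2, 100), torch.zeros(2, 13))  # -> (2, 1, 128, 16)
```

Generating bar after bar then amounts to feeding the previous bar (or a priming melody) back in as an additional condition.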
S. Koelsch, P. Vuust, Karl J. Friston
We suggest that music perception is an active act of listening, providing an irresistible epistemic offering. When listening to music we constantly generate plausible hypotheses about what could happen next, while actively attending to music resolves the ensuing uncertainty. Within the predictive coding framework, we present a novel formulation of precision filtering and attentional selection, which explains why some lower-level auditory, and even higher-level music-syntactic processes elicited by irregular events are relatively exempt from top-down predictive processes. We review findings providing unique evidence for the attentional selection of salient auditory features. This formulation suggests that 'listening' is a more active process than traditionally conceived in models of perception.
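A toy numerical illustration of precision weighting (my sketch, not the paper's model): the influence of a prediction error on the updated estimate scales with the relative precision (inverse variance) assigned to the input, which is one way to read "attentional selection" within predictive coding.

```python
# Toy illustration of precision-weighted prediction error (not from the
# paper): the estimate moves toward the input in proportion to the relative
# precision (inverse variance) assigned to the sensory channel.
def update(prior_mean, sensory_input, prior_precision, sensory_precision):
    error = sensory_input - prior_mean          # prediction error
    gain = sensory_precision / (prior_precision + sensory_precision)
    return prior_mean + gain * error            # precision-weighted update

# High sensory precision (attended, salient feature): large update.
print(update(0.0, 1.0, prior_precision=1.0, sensory_precision=9.0))  # 0.9
# Low sensory precision (unattended): the same error barely moves the estimate.
print(update(0.0, 1.0, prior_precision=9.0, sensory_precision=1.0))  # 0.1
```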
Martina de Witte, Ana da Silva Pinho, G. Stams et al.
Music therapy is increasingly used as an intervention for stress reduction in both medical and mental healthcare settings. It is characterized by personally tailored music interventions initiated by a trained and qualified music therapist, which distinguishes music therapy from other music interventions, such as 'music medicine', which mainly concerns music-listening interventions offered by healthcare professionals. To summarize the growing body of empirical research on music therapy, a multilevel meta-analysis, comprising 47 studies, 76 effect sizes, and 2,747 participants, was performed to assess the strength of the effects of music therapy on both physiological and psychological stress-related outcomes and to test potential moderators of the intervention effects. Results showed that music therapy had an overall medium-to-large effect on stress-related outcomes (d = .723, [.51–.94]). Larger effects were found for clinical controlled trials (CCT) compared to randomized controlled trials (RCT), for waiting-list controls compared to care as usual (CAU) or other stress-reducing interventions, and for studies conducted in non-Western compared to Western countries. Implications for both music therapy and future research are discussed.
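As a rough illustration of how study-level effect sizes are combined in a meta-analysis, the sketch below implements the textbook random-effects (DerSimonian-Laird) pooling of effect sizes with a 95% confidence interval. It is the classic two-level estimator, not the multilevel model used in this review, and the input numbers are invented.

```python
# Sketch of random-effects pooling of effect sizes (DerSimonian-Laird).
# This is the textbook two-level estimator, not the review's multilevel
# model; it only illustrates how a pooled d and its CI are formed.
import numpy as np

def pool_random_effects(d, var):
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)        # between-study variance
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    d_pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

# Illustrative inputs only (effect sizes and their sampling variances):
print(pool_random_effects([0.9, 0.6, 0.7], [0.04, 0.05, 0.03]))
```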
Sarah Sinnamon
Music performance involves precise motor control coordinated with higher-order planning to convey complex structural information. In addition, music performance usually involves motor tasks that are not learned spontaneously (in contrast with use of the vocal apparatus), the reproduction of pre-established sequences (notated or from memory), and synchronized joint performance with one or more other musicians. Music performance also relies on a rich repertoire of musical knowledge that can be used for purposes of expressive variation and improvisation. As such, the study of music performance provides a way to explore learning, motor control, memory, and interpersonal coordination in the context of a real-world behavior. Music performance skills vary considerably in the population and reflect interactions between genetic predispositions and the effects of intensive practice. At the same time, research suggests that most individuals have the capacity to perform music through singing or learning an instrument, and in this sense music performance taps into a universal human propensity for communication and coordination with conspecifics.
P. Terry, C. Karageorghis, M. Curran et al.
Regular physical activity has multifarious benefits for physical and mental health, and music has been found to exert positive effects on physical activity. Summative literature reviews and conceptual models have hypothesized potential benefits and salient mechanisms associated with music listening in exercise and sport contexts, although no large-scale objective summary of the literature has been conducted. A multilevel meta-analysis of 139 studies was used to quantify the effects of music listening in exercise and sport domains. In total, 598 effect sizes from four categories of potential benefits (i.e., psychological responses, physiological responses, psychophysical responses, and performance outcomes) were calculated based on 3,599 participants. Music was associated with significant beneficial effects on affective valence (g = 0.48, CI [0.39, 0.56]), physical performance (g = 0.31, CI [0.25, 0.36]), perceived exertion (g = 0.22, CI [0.14, 0.30]), and oxygen consumption (g = 0.15, CI [0.02, 0.27]). No significant benefit of music was found for heart rate (g = 0.07, CI [-0.03, 0.16]). Performance effects were moderated by study domain (exercise > sport) and music tempo (fast > slow-to-medium). Overall, results supported the use of music listening across a range of physical activities to promote more positive affective valence, enhance physical performance (i.e., ergogenic effect), reduce perceived exertion, and improve physiological efficiency.
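For reference, the g values reported above are standardized mean differences. A minimal sketch of computing Hedges' g (Cohen's d with the small-sample correction) from group summaries follows; the numbers are illustrative, not drawn from the meta-analysis.

```python
# Sketch of Hedges' g from two group summaries: Cohen's d on the pooled
# standard deviation, then the small-sample correction factor J.
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    # Pooled standard deviation, then Cohen's d.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Correction factor J = 1 - 3 / (4*df - 1) with df = n1 + n2 - 2.
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Illustrative numbers only (not from the meta-analysis):
print(hedges_g(7.2, 6.5, 1.5, 1.4, 30, 30))
```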
L. Ferreri, E. Mas-Herrero, R. Zatorre et al.
Significance: In everyday life humans regularly seek participation in highly complex and pleasurable experiences such as music listening, singing, or playing, that do not seem to have any specific survival advantage. The question addressed here is to what extent dopaminergic transmission plays a direct role in the reward experience (both motivational and hedonic) induced by music. We report that pharmacological manipulation of dopamine modulates musical responses in both positive and negative directions, thus showing that dopamine causally mediates musical reward experience.

Understanding how the brain translates a structured sequence of sounds, such as music, into a pleasant and rewarding experience is a fascinating question that may be crucial for better understanding the processing of abstract rewards in humans. Previous neuroimaging findings point to a role of the dopaminergic system in music-evoked pleasure. However, there is a lack of direct evidence showing that dopamine function is causally related to the pleasure we experience from music. We addressed this problem through a double-blind, within-subject pharmacological design in which we directly manipulated dopaminergic synaptic availability while healthy participants (n = 27) were engaged in music listening. We orally administered to each participant a dopamine precursor (levodopa), a dopamine antagonist (risperidone), and a placebo (lactose) in three different sessions. We demonstrate that levodopa and risperidone led to opposite effects in measures of musical pleasure and motivation: while the dopamine precursor levodopa, compared with placebo, increased the hedonic experience and music-related motivational responses, risperidone led to a reduction of both. This study shows a causal role of dopamine in musical pleasure and indicates that dopaminergic transmission might play roles different from or additive to those postulated in affective processing so far, particularly in abstract cognitive activities.
Gautam Mittal, Jesse Engel, Curtis Hawthorne et al.
Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our method is non-autoregressive and learns to generate sequences of latent embeddings through the reverse process and offers parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
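A minimal sketch (my illustration, not the paper's code) of the continuous-domain machinery being reused here: the closed-form forward noising step of a DDPM applied to latent embeddings, from which a reverse process is trained to denoise. The latent dimensionality is an assumption.

```python
# Sketch of the DDPM forward (noising) process on latent embeddings:
# q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
# Illustrative only -- not the paper's implementation.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

def noise_latents(x0, t):
    """Sample x_t from q(x_t | x_0) in closed form."""
    eps = torch.randn_like(x0)
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps
    return xt, eps  # the model is trained to predict eps from (xt, t)

x0 = torch.randn(8, 512)   # a batch of pre-trained VAE latents (assumed size)
xt, eps = noise_latents(x0, t=500)
```

Because every refinement step denoises the whole latent sequence at once, generation is parallel across sequence positions, in contrast with token-by-token autoregressive decoding.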
P. Bohlman
FOREWORD BY ALAN DUNDES ACKNOWLEDGMENTS INTRODUCTION 1. The Origins of Folk Music, Past and Present 2. Folk Music and Oral Tradition 3. Classification: The Discursive Boundaries of Folk Music 4. The Social Basis of Folk Music: A Sense of Community, A Sense of Place 5. The Folk Musician 6. Folk Music in Non-Western Cultures 7. Folk Music and Canon-Formation: The Creative Dialectic between Text and Context 8. Folk Music in the Modern World Bibliography Index
Emmanouil Benetos, S. Dixon, Z. Duan et al.
The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.
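Two of the listed subtasks, onset detection and pitch estimation, can be sketched with off-the-shelf routines in librosa. This is a drastic simplification of AMT (a single monophonic line, no instrument recognition or score typesetting), and the audio file name is a placeholder.

```python
# Simplified sketch of two AMT subtasks with librosa: onset detection and
# monophonic pitch tracking. Real AMT (multipitch, instruments, rhythm,
# typesetting) is far harder; the audio path is a placeholder.
import librosa

y, sr = librosa.load("example.wav")

# Onset detection: estimated note-start times in seconds.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Frame-wise fundamental frequency via probabilistic YIN (monophonic only).
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)

print(onsets[:5], f0[:5])
```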
Mingliang Zeng, Xu Tan, Rui Wang et al.
Symbolic music understanding, which refers to understanding music from symbolic data (e.g., MIDI format, rather than audio), covers many music applications such as genre classification, emotion classification, and music piece matching. While good music representations are beneficial for these applications, the lack of training data hinders representation learning. Inspired by the success of pre-training models in natural language processing, in this paper we develop MusicBERT, a large-scale pre-trained model for music understanding. To this end, we construct a large-scale symbolic music corpus that contains more than 1 million songs. Since symbolic music contains more structural (e.g., bar, position) and diverse (e.g., tempo, instrument, pitch) information than text, simply adopting the pre-training techniques from NLP brings only marginal gains. Therefore, we design several mechanisms, including an OctupleMIDI encoding and a bar-level masking strategy, to enhance pre-training with symbolic music data. Experiments demonstrate the advantages of MusicBERT on four music understanding tasks: melody completion, accompaniment suggestion, genre classification, and style classification. Ablation studies also verify the effectiveness of the OctupleMIDI encoding and bar-level masking strategy in MusicBERT.
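The OctupleMIDI encoding packs each note into a single eight-field token, which keeps sequences short compared with event-stream encodings. A minimal sketch of such a token is below; the field set (time signature, tempo, bar, position, instrument, pitch, duration, velocity) follows the encoding's published description, but the value ranges shown here are assumptions.

```python
# Sketch of an OctupleMIDI-style note token: one 8-field element per note.
# Field set follows the paper's description; value ranges are assumptions.
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class OctupleToken:
    time_signature: int  # index into a table of time signatures
    tempo: int           # quantized BPM bucket
    bar: int             # bar number within the piece
    position: int        # position of the note within the bar
    instrument: int      # MIDI program number, 0-127
    pitch: int           # MIDI pitch, 0-127
    duration: int        # quantized note length
    velocity: int        # quantized MIDI velocity

note = OctupleToken(time_signature=0, tempo=24, bar=3, position=8,
                    instrument=0, pitch=60, duration=4, velocity=20)
print(astuple(note))  # the whole note is a single sequence element
```

Bar-level masking then masks all tokens whose `bar` field matches a sampled bar, rather than masking isolated fields, so the model cannot trivially copy answers from neighboring tokens of the same note.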
G. Dingle, L. Sharman, Zoe Bauer et al.
Background: This scoping review analyzed research about how music activities may affect participants' health and well-being. Primary outcomes were measures of health (including symptoms and health behaviors) and well-being. Secondary measures included a range of psychosocial processes such as arousal, mood, social connection, physical activation or relaxation, cognitive functions, and identity. Diverse music activities were considered: receptive and intentional music listening; sharing music; instrument playing; group singing; lyrics and rapping; movement and dance; and songwriting, composition, and improvisation.

Methods: Nine databases were searched with terms related to the eight music activities and the psychosocial variables of interest. Sixty-three papers met selection criteria, representing 6,975 participants of all ages, nationalities, and contexts.

Results: Receptive and intentional music listening were found to reduce pain through changes in physiological arousal in some studies but not others. Shared music listening (e.g., concerts or radio programs) enhanced social connections and mood in older adults and in hospital patients. Music listening and carer singing decreased agitation and improved posture, movement, and well-being of people with dementia. Group singing supported cognitive health and well-being of older adults and those with mental health problems, lung disease, stroke, and dementia through its effects on cognitive functions, mood, and social connections. Playing a musical instrument was associated with improved cognitive health and well-being in school students, older adults, and people with mild brain injuries via effects on motor, cognitive and social processes. Dance and movement with music programs were associated with improved health and well-being in people with dementia, women with postnatal depression, and sedentary women with obesity through various cognitive, physical, and social processes. Rapping, songwriting, and composition helped the well-being of marginalized people through effects on social and cultural inclusion and connection, self-esteem and empowerment.

Discussion: Music activities offer a rich and underutilized resource for health and well-being to participants of diverse ages, backgrounds, and settings. The review provides preliminary evidence that particular music activities may be recommended for specific psychosocial purposes and for specific health conditions.
Marion Baranes, Romain Hennequin, Elena V. Epure
Although annotated music descriptor datasets for user queries are increasingly common, few consider the user's intent behind these descriptors, which is essential for effectively meeting their needs. We introduce MusicRecoIntent, a manually annotated corpus of 2,291 Reddit music requests, labeling musical descriptors across seven categories with positive, negative, or referential preference-bearing roles. We then investigate how reliably large language models (LLMs) can extract these music descriptors, finding that they do capture explicit descriptors but struggle with context-dependent ones. This work can further serve as a benchmark for fine-grained modeling of user intent and for gaining insights into improving LLM-based music understanding systems.
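A sketch of what one annotated request record might look like follows. The three preference-bearing roles come from the dataset description; the category names are hypothetical placeholders, since the abstract does not enumerate the seven categories.

```python
# Sketch of an annotation record for a music request. The three roles come
# from the dataset description; the category names are hypothetical
# placeholders (the abstract does not list the seven categories).
from dataclasses import dataclass
from typing import Literal

Role = Literal["positive", "negative", "referential"]

@dataclass
class DescriptorAnnotation:
    span: str        # descriptor text as it appears in the request
    category: str    # one of seven descriptor categories (names assumed)
    role: Role       # preference-bearing role of the descriptor

request = "Looking for something like Boards of Canada but less gloomy"
annotations = [
    DescriptorAnnotation("like Boards of Canada", "artist_similarity", "referential"),
    DescriptorAnnotation("less gloomy", "mood", "negative"),
]
```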
Hsiao-Tzu Hung, Joann Ching, Seungheon Doh et al.
While there are many music datasets with emotion labels in the literature, they cannot be used for research on symbolic-domain music analysis or generation, as they usually contain audio files only. In this paper, we present the EMOPIA (pronounced 'yee-mo-pi-uh') dataset, a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs, with clip-level emotion labels annotated by four dedicated annotators. Since the clips are not restricted to one per song, they can also be used for song-level analysis. We present the methodology for building the dataset, covering song-list curation, clip selection, and the emotion annotation process. Moreover, we prototype use cases on clip-level music emotion classification and emotion-based symbolic music generation by training and evaluating corresponding models on the dataset. The results demonstrate the potential of EMOPIA for future work on emotion-related MIR tasks for piano music.
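Because the dataset can contain several clips per song, evaluation splits should be grouped at the song level so that clips from the same song never straddle train and test. A minimal sketch with scikit-learn follows; the clip-to-song assignment here is dummy data, not the dataset's actual metadata.

```python
# Sketch of a song-level split for clip datasets such as EMOPIA, so that
# clips from the same song never appear on both sides of the split.
# The clip-to-song assignment below is dummy data for illustration.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

clip_ids = np.arange(1087)                       # one entry per clip
song_ids = np.random.randint(0, 387, size=1087)  # song each clip came from

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(clip_ids, groups=song_ids))

# No song appears on both sides of the split.
assert set(song_ids[train_idx]).isdisjoint(song_ids[test_idx])
```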
Page 2 of 52,875