Results for "Music"
Showing 20 of ~1,057,764 results · from DOAJ, Semantic Scholar, CrossRef, arXiv
N. Kraus, B. Chandrasekaran
Jochen Wirtz, A. Mattila
R. Radano, J. Attali, B. Massumi
Frances H. Rauscher, G. Shaw, Catherine N. Ky
F. Lerdahl, Ray Jackendoff
Patrik N. Juslin, J. Sloboda
S. Koelsch, Thomas Fritz, D. Yves v. Cramon et al.
A. Blood, R. Zatorre, P. Bermudez et al.
Eric D. Scheirer, M. Slaney
Robert O. Gjerdingen
V. Menon, D. Levitin
Although the neural underpinnings of music cognition have been widely studied in the last 5 years, relatively little is known about the neuroscience underlying the emotional reactions that music induces in listeners. Many people spend a significant amount of time listening to music, and its emotional power is assumed but not well understood. Here, we use functional and effective connectivity analyses to show for the first time that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing, including the nucleus accumbens (NAc) and the ventral tegmental area (VTA), as well as the hypothalamus and insula, which are thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli. Responses in the NAc and the VTA were strongly correlated, pointing to an association between dopamine release and NAc response to music. Responses in the NAc and the hypothalamus were also strongly correlated across subjects, suggesting a mechanism by which listening to pleasant music evokes physiological reactions. Effective connectivity confirmed these findings and showed significant VTA-mediated interaction of the NAc with the hypothalamus, insula, and orbitofrontal cortex. The enhanced functional and effective connectivity between brain regions mediating reward, autonomic, and cognitive processing provides insight into why listening to music is one of the most rewarding and pleasurable human experiences.
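The functional-connectivity analysis mentioned above reduces, at its core, to correlating regional activity time series and Fisher-transforming the result before group statistics. A minimal sketch in Python, using synthetic BOLD-like signals rather than the authors' data or pipeline (all names and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD-like time series for two regions (illustrative only).
n_timepoints = 240                                  # e.g. 240 fMRI volumes
shared_drive = rng.standard_normal(n_timepoints)    # common "music" signal
nac = 0.8 * shared_drive + 0.6 * rng.standard_normal(n_timepoints)
vta = 0.8 * shared_drive + 0.6 * rng.standard_normal(n_timepoints)

# Functional connectivity: Pearson correlation between regional time series.
r = np.corrcoef(nac, vta)[0, 1]

# Fisher z-transform, the usual step before group-level statistics.
z = np.arctanh(r)

print(f"NAc-VTA functional connectivity: r = {r:.2f}, z = {z:.2f}")
```

Effective connectivity, by contrast, asks a directional question (how activity in one region influences another) and needs a causal model rather than a plain correlation.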
D. Huron
P. Juslin, J. Sloboda
Nicolas Boulanger-Lewandowski, Yoshua Bengio, Pascal Vincent
We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.
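A stripped-down sketch of the setup follows: an RNN summarizes the piano-roll history, and a conditional distribution over the next frame is read off its state. Note the paper conditions a more expressive density estimator (an RBM/NADE variant) on the RNN; this sketch swaps that for independent per-pitch Bernoulli outputs, so it illustrates the framing, not the authors' model:

```python
import torch
import torch.nn as nn

class PianoRollRNN(nn.Module):
    """Next-frame model over binary piano rolls (time x pitches).

    Simplification: the conditional distribution here is a product of
    independent Bernoullis, not the RBM/NADE estimator of the paper.
    """
    def __init__(self, n_pitches=88, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_pitches, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_pitches)  # per-pitch logits

    def forward(self, rolls):
        # rolls: (batch, time, n_pitches) binary piano roll
        h, _ = self.rnn(rolls)
        return self.out(h)  # logits for the next frame at each step

model = PianoRollRNN()
rolls = torch.randint(0, 2, (4, 64, 88)).float()  # toy batch
logits = model(rolls[:, :-1])                     # predict frames 1..T
loss = nn.functional.binary_cross_entropy_with_logits(logits, rolls[:, 1:])
loss.backward()
```

Replacing the independent-Bernoulli output with a conditional RBM or NADE is what lets the full model capture which notes tend to sound *together*, not just when notes occur.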
WANG Changyue (王长跃), YUE Lina (岳丽娜), ZHANG Tingting (张婷婷) et al.
This article summarizes the clinical efficacy and nursing management in treating a patient with constipation due to gastrointestinal heat accumulation using acupoint massage combined with Five-Element music therapy, aiming to provide clinical reference for such treatments. Traditional Chinese Medicine (TCM) syndrome differentiation-guided acupoint massage directly acts on meridians and acupoints, while Five-Element music therapy regulates visceral functions, enhances spleen and stomach transportation, and alleviates anxiety. The combination of these two methods addresses both symptoms and root causes, holistically regulating the body to unblock meridians, balance yin and yang, and promote intestinal peristalsis, effectively relieving constipation. Through syndrome differentiation-based treatment and meticulous nursing care, the patient achieved normal bowel movements by the 7th day and was discharged with full recovery by the 10th day, without recurrence of constipation.
Keon Ju Maverick Lee, Jeff Ens, Sara Adkins et al.
The Musical Instrument Digital Interface (MIDI), introduced in 1983, revolutionized music production by allowing computers and instruments to communicate efficiently. MIDI files encode musical instructions compactly, facilitating convenient music sharing. They benefit Music Information Retrieval (MIR), aiding in research on music understanding, computational musicology, and generative music. The GigaMIDI dataset contains over 1.4 million unique MIDI files, encompassing 1.8 billion MIDI note events and over 5.3 million MIDI tracks. GigaMIDI is currently the largest collection of symbolic music in MIDI format available for research purposes under fair dealing. Distinguishing between non-expressive and expressive MIDI tracks is challenging, as MIDI files do not inherently make this distinction. To address this issue, we introduce a set of innovative heuristics for detecting expressive music performance. These include the Distinctive Note Velocity Ratio (DNVR) heuristic, which analyzes MIDI note velocity; the Distinctive Note Onset Deviation Ratio (DNODR) heuristic, which examines deviations in note onset times; and the Note Onset Median Metric Level (NOMML) heuristic, which evaluates onset positions relative to metric levels. Our evaluation demonstrates that these heuristics effectively differentiate between non-expressive and expressive MIDI tracks. Furthermore, using the NOMML heuristic we curate the largest available expressive MIDI dataset: this subset of GigaMIDI comprises expressively performed instrument tracks covering all General MIDI instruments, totalling 1,655,649 tracks, or 31% of the GigaMIDI dataset.
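The velocity-based heuristic lends itself to a compact illustration. The sketch below captures only the intuition, that non-expressive tracks reuse a handful of velocity values while expressive performances vary them; the note format, function names, and threshold are assumptions, and the paper's exact DNVR definition may differ:

```python
def distinct_velocity_ratio(notes):
    """Ratio of distinct MIDI velocities to total note count.

    `notes` is a list of dicts with a 'velocity' key (0-127); this
    layout is hypothetical, and the paper's DNVR may be defined
    differently.
    """
    if not notes:
        return 0.0
    return len({n["velocity"] for n in notes}) / len(notes)

def looks_expressive(notes, threshold=0.1):
    # Assumed threshold: quantized, non-expressive tracks often use one
    # velocity throughout, giving a ratio near 1/len(notes).
    return distinct_velocity_ratio(notes) >= threshold

flat = [{"velocity": 100} for _ in range(50)]            # constant velocity
played = [{"velocity": 60 + (i * 7) % 40} for i in range(50)]
print(looks_expressive(flat), looks_expressive(played))  # False True
```

The onset-based heuristics (DNODR, NOMML) follow the same pattern, but over note timing rather than velocity: quantized tracks snap onsets to metric grid positions, while human performances deviate from them.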
Aditya Bhattacharjee, Ivan Meresman Higgs, Mark Sandler et al.
Automatic sample identification (ASID), the detection and identification of portions of audio recordings that have been reused in new musical works, is an essential but challenging task in the field of audio query-based retrieval. While a related task, audio fingerprinting, has made significant progress in accurately retrieving musical content under "real world" (noisy, reverberant) conditions, ASID systems struggle to identify samples that have undergone musical modifications. Thus, a system robust to common music production transformations such as time-stretching, pitch-shifting, effects processing, and underlying or overlaying music is an important open challenge. In this work, we propose a lightweight and scalable encoding architecture employing a Graph Neural Network within a contrastive learning framework. Our model uses only 9% of the trainable parameters of the current state-of-the-art system while achieving comparable performance, reaching a mean average precision (mAP) of 44.2%. To enhance retrieval quality, we introduce a two-stage approach consisting of an initial coarse similarity search for candidate selection, followed by a cross-attention classifier that rejects irrelevant matches and refines the ranking of retrieved candidates, an essential capability absent in prior models. In addition, because queries in real-world applications are often short in duration, we benchmark our system on short queries using new fine-grained annotations for the Sample100 dataset, which we publish as part of this work.
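The two-stage retrieval described above, a coarse similarity search followed by a re-ranking classifier, can be sketched in a few lines. The embeddings, the stand-in scoring function, and all shapes below are placeholders, not the paper's GNN encoder or cross-attention model:

```python
import numpy as np

def coarse_search(query_emb, db_embs, k=20):
    """Stage 1: cosine-similarity top-k candidate selection."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q
    top = np.argsort(-sims)[:k]
    return top, sims[top]

def rerank(candidates, score_fn, min_score=0.5):
    """Stage 2: rescore candidates and drop irrelevant matches.

    `score_fn` stands in for the paper's cross-attention classifier.
    """
    rescored = [(c, score_fn(c)) for c in candidates]
    kept = [(c, s) for c, s in rescored if s >= min_score]
    return sorted(kept, key=lambda cs: -cs[1])

rng = np.random.default_rng(1)
db = rng.standard_normal((1000, 128))             # toy database embeddings
query = db[42] + 0.1 * rng.standard_normal(128)   # noisy copy of item 42
cands, _ = coarse_search(query, db)
ranked = rerank(cands, score_fn=lambda c: 1.0 if c == 42 else 0.1)
print(ranked[0][0])  # -> 42
```

The design point is that the cheap first stage keeps the search scalable over large catalogues, while the expensive classifier only ever sees the top-k shortlist.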
Miles Blencowe, Michael Casey, Kimberly Tan
We describe our investigations concerning the sonification of measured data from experiments involving various mesoscopic mechanical oscillator systems cooled down close to their quantum ground states, and music generation from a programmed quantum computer that subjects a single quantum bit ("qubit") to various unitary rotations, composed in order to test for the breakdown of macroscopic realism as expressed by the violation of the Leggett-Garg inequality. "Listening" to data via their resulting sonifications facilitates the discovery of signals that might otherwise be hard to detect in common graphic (i.e., visual) representations, and, given some prior listening training, provides a complementary way to discern when the measured qubit data from the quantum computer music experiment violate macroscopic realism. The resulting soundscapes and music also provide a complementary window into the quantum realm that is accessible to non-experts with open ears.
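The Leggett-Garg test behind that experiment reduces to an inequality on two-time correlators of dichotomic (+1/-1) outcomes: macrorealism implies K3 = C21 + C32 - C31 <= 1. A sketch of evaluating K3 from measurement records, with an invented data layout for illustration:

```python
import numpy as np

def correlator(a, b):
    """Two-time correlator C = <Q(t_a) Q(t_b)> for +/-1-valued outcomes."""
    return float(np.mean(a * b))

def leggett_garg_k3(q1, q2, q3):
    """K3 = C21 + C32 - C31; macrorealism bounds K3 <= 1."""
    return correlator(q2, q1) + correlator(q3, q2) - correlator(q3, q1)

# Invented example records: outcomes at three times over repeated runs.
rng = np.random.default_rng(7)
q1 = rng.choice([-1, 1], size=10_000)
q2 = np.where(rng.random(10_000) < 0.9, q1, -q1)  # correlated with q1
q3 = np.where(rng.random(10_000) < 0.9, q2, -q2)  # correlated with q2
k3 = leggett_garg_k3(q1, q2, q3)
print(f"K3 = {k3:.2f}  (violation if K3 > 1)")
```

Classical data like the above stays below the bound; a qubit under suitable unitary rotations can push K3 up to 1.5, and it is that excursion past 1 that the sonifications let a trained ear pick out.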
I. Peretz, Dominique T Vuvan, Marie-Élaine Lagrois et al.
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or, vice versa, that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility that music training influences language acquisition and literacy. However, neural overlap in processing music and speech does not entail shared neural circuitry. Neural separability between music and speech may occur within overlapping brain regions. In this paper, we review the evidence, outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.
Steven Jan
Applying the theory of memetics to music offers the prospect of reconciling general Darwinian principles with the style and structure of music. The nature of the units of cultural evolution in music—memes or, more specifically, musemes—can potentially shed light on the evolutionary processes and pressures attendant upon early-hominin musicality. That is, primarily conjunct, narrow-tessitura musemes (those conforming to Ratner's “singing style,” and its instrumental assimilations) and primarily disjunct, wide-tessitura musemes (those conforming to Ratner's “brilliant style,” and its vocal assimilations) appear to be the outcome of distinct cultural-evolutionary processes. Moreover, musemes in each category arguably acquire their fecundity (perceptual-cognitive salience, and thus transmissibility) by appealing to different music-underpinning brain and body subsystems. Given music's status as an embodied phenomenon, both singing-style and brilliant-style musemes recruit and evoke image schemata, but those in the former category draw primarily upon vocal images of line, direction and continuity; whereas those in the latter category draw primarily upon rhythmic impetus and energy. These two museme-categories may have been molded by distinct biological-evolutionary processes—the evolution of fine vocal control, and that of rhythmic synchronisation, respectively; and they might—via the process of memetic drive—have themselves acted as separate and distinct selection pressures on biological evolution, in order to optimize the environment for their replication. As a case-study of (primarily) singing-style musemes, this article argues that a passage from the love duet “Mon cœur s'ouvre à ta voix” from Camille Saint-Saëns' opera Samson et Dalila op. 47 (1877) is the cultural-evolutionary antecedent of the Introduction/Chorus/Outro material of ABBA's song “The Winner Takes It All.” Discussion of their melodic and harmonic similarities supports a memetic link between elements of Saint-Saëns' duet and ABBA's song. These relationships of cultural transmission are argued to have been impelled by the fecundity of the shared musemes, which arises from the image-schematic and embodied effects of the implication-realisation structures (in Narmour's sense) that comprise them; and which is underwritten by the coevolution of musemes with vocal- and rhythmic-production mechanisms, and associated perceptual-cognitive schemata.
Page 6 of 52,889