Results for "Music"

Showing 20 of ~1,058,310 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2000
The importance of music to adolescents.

Adrian C. North, D. Hargreaves, S. O’Neill

AIMS The study aims to determine the importance of music to adolescents in England, and investigates why they listen to and perform music. SAMPLE A total of 2465 adolescents (1149 males; 1266 females; 50 participants did not state their sex) between 13 and 14 years of age who were attending Year 9 at one of 22 secondary schools in the North Staffordshire region of England. METHOD A questionnaire asked participants (a) about their degree of involvement with musical activities; (b) to rate the importance of music relative to other activities; and (c) to rate the importance of several factors that might determine why they and other people of their age and sex might listen to/perform pop and classical music. RESULTS Responses indicated that i) over 50% of respondents either played an instrument currently or had played regularly before giving up, and the sample listened to music for an average of 2.45 hours per day; ii) listening to music was preferred to other indoor activities but not to outdoor activities; iii) listening to/playing pop music has different perceived benefits to listening to/playing classical music; iv) responses to suggested reasons for listening to music could be grouped into three factors; and v) responses to suggested reasons for playing music could be grouped into four factors. CONCLUSIONS These results indicate that music is important to adolescents, and that this is because it allows them to (a) portray an 'image' to the outside world and (b) satisfy their emotional needs.

740 citations en Medicine, Psychology
DOAJ Open Access 2026
Using chills-inducing music to augment self-transcendence, emotional breakthrough, and psychological insight during mindfulness and loving kindness meditation

Leonardo Christov-Moore, Felix Schoeller, Mathilda Von Guttenberg et al.

Introduction: Non-pharmacologically induced altered states of consciousness that promote mental health and wellbeing are a growing focus of clinical and basic research. Previous work has revealed the mood-augmenting, belief-altering, and self-transcendent effects of aesthetic-chills-inducing audiovisual stimulation. The current study investigated how a guided loving kindness meditation (LKM) combined with uplifting, chills-inducing music (henceforth: chills-augmented) affected participants' mood, self-transcendence (ST), psychological insight, and emotional breakthrough. Methods: We conducted a randomized, controlled online study (n = 398) using a 2 × 2 design comparing a validated loving kindness meditation (LKM) to a mindfulness-based control (MC), each with chills augmentation (+) and without (−). Results: As hypothesized, LKM, compared to MC, increased connectedness to others, while chills augmentation of either stimulus (LKM+/MC+) enhanced ST, mood, emotional breakthrough, and psychological insight. Mediation analyses confirmed that the occurrence of aesthetic chills during meditation predicted these downstream effects. They also identified trait measures that contributed to distinct outcomes independently of the main effects: absorption predicted feelings of ego-dissolution, connectedness to the world and self, and moral elevation; interoceptive awareness predicted ego-dissolution and connectedness to self; and vividness of internal imagery predicted connectedness to the world and others. Discussion: Chills augmentation appears to be a viable method for enhancing the immersiveness, salience, and downstream positive impact of guided contemplative interventions, without interfering with the intended outcome. This work can further our understanding of and access to non-ordinary experiences that beget salutogenic, prosocial outcomes.

arXiv Open Access 2026
Musical Metamerism with Time–Frequency Scattering

Vincent Lostanlen, Han Han

The concept of metamerism originates from colorimetry, where it describes a sensation of visual similarity between two colored lights despite significant differences in spectral content. Likewise, we propose to call "musical metamerism" the sensation of auditory similarity which is elicited by two music fragments which differ in terms of underlying waveforms. In this technical report, we describe a method to generate musical metamers from any audio recording. Our method is based on joint time–frequency scattering (JTFS) in Kymatio, an open-source software in Python which enables GPU computing and automatic differentiation. The advantage of our method is that it does not require any manual preprocessing, such as transcription, beat tracking, or source separation. We provide a mathematical description of JTFS as well as some excerpts from the Kymatio source code. Lastly, we review the prior work on JTFS and draw connections with closely related algorithms, such as spectrotemporal receptive fields (STRF), modulation power spectra (MPS), and Gabor filterbanks (GBFB).
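The report builds metamers with JTFS in Kymatio; as a much simpler stand-in (not Kymatio's API or the authors' method), the numpy sketch below constructs the most basic kind of audio metamer: two waveforms that differ in the time domain yet share an identical Fourier magnitude spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n

# Original signal: a superposition of two tones.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Build a "metamer": same magnitude spectrum, randomized phases.
X = np.fft.rfft(x)
phase = rng.uniform(0, 2 * np.pi, X.shape)
phase[0] = 0.0    # keep the DC bin real
phase[-1] = 0.0   # keep the Nyquist bin real (n is even)
y = np.fft.irfft(np.abs(X) * np.exp(1j * phase), n)

# The waveforms differ, but their magnitude spectra coincide.
assert not np.allclose(x, y)
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y)))
```

JTFS goes further than this toy: it also matches spectrotemporal modulations, which is why it can preserve perceived rhythm and texture rather than just static spectral content.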

en cs.SD, eess.AS
arXiv Open Access 2026
Multi-Stage Music Source Restoration with BandSplit-RoFormer Separation and HiFi++ GAN

Tobias Morocutti, Emmanouil Karystinaios, Jonathan Greif et al.

Music Source Restoration (MSR) targets recovery of original, unprocessed instrument stems from fully mixed and mastered audio, where production effects and distribution artifacts violate common linear-mixture assumptions. This technical report presents the CP-JKU team's system for the MSR ICASSP Challenge 2025. Our approach decomposes MSR into separation and restoration. First, a single BandSplit-RoFormer separator predicts eight stems plus an auxiliary other stem, and is trained with a three-stage curriculum that progresses from 4-stem warm-start fine-tuning (with LoRA) to 8-stem extension via head expansion. Second, we apply a HiFi++ GAN waveform restorer trained as a generalist and then specialized into eight instrument-specific experts.

en cs.SD, cs.LG
arXiv Open Access 2025
MIDI-LLM: Adapting Large Language Models for Text-to-MIDI Music Generation

Shih-Lun Wu, Yoon Kim, Cheng-Zhi Anna Huang

We present MIDI-LLM, an LLM for generating multitrack MIDI music from free-form text prompts. Our approach expands a text LLM's vocabulary to include MIDI tokens, and uses a two-stage training recipe to endow text-to-MIDI abilities. By preserving the original LLM's parameter structure, we can directly leverage the vLLM library for accelerated inference. Experiments show that MIDI-LLM achieves higher quality, better text control, and faster inference compared to the recent Text2midi model. Live demo at https://midi-llm-demo.vercel.app.
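The vocabulary-expansion step described above can be illustrated with a toy numpy sketch (purely hypothetical names and shapes, not MIDI-LLM's actual code): append MIDI event tokens to a text vocabulary and grow the embedding matrix to match, initializing the new rows near the mean of the existing embeddings.

```python
import numpy as np

def extend_vocab(vocab, embeddings, new_tokens, rng):
    """Append new tokens to a vocabulary and grow the embedding
    matrix accordingly; new rows start near the mean embedding."""
    vocab = dict(vocab)
    for tok in new_tokens:
        vocab[tok] = len(vocab)
    mean = embeddings.mean(axis=0)
    new_rows = mean + 0.01 * rng.standard_normal(
        (len(new_tokens), embeddings.shape[1]))
    return vocab, np.vstack([embeddings, new_rows])

rng = np.random.default_rng(0)
text_vocab = {"the": 0, "music": 1, "plays": 2}
emb = rng.standard_normal((3, 8))
midi_tokens = ["<note_on_60>", "<note_off_60>", "<time_shift_120>"]
vocab, emb = extend_vocab(text_vocab, emb, midi_tokens, rng)
assert vocab["<note_on_60>"] == 3 and emb.shape == (6, 8)
```

Because only the vocabulary and embedding table change while the rest of the parameter structure is preserved, standard inference stacks such as vLLM can serve the adapted model unchanged.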

en cs.SD, cs.CL
arXiv Open Access 2025
Detecting Notational Errors in Digital Music Scores

Léo Géré, Nicolas Audebert, Florent Jacquemard

Music scores are used to precisely store music pieces for transmission and preservation. To represent and manipulate these complex objects, various formats have been tailored for different use cases. While music notation follows specific rules, digital formats usually enforce them leniently. Hence, digital music scores widely vary in quality, due to software and format specificity, conversion issues, and dubious user inputs. Problems range from minor engraving discrepancies to major notation mistakes. Yet, data quality is a major issue when dealing with musical information extraction and retrieval. We present an automated approach to detect notational errors, aiming at precisely localizing defects in scores. We identify two types of errors: i) rhythm/time inconsistencies in the encoding of individual musical elements, and ii) contextual errors, i.e. notation mistakes that break commonly accepted musical rules. We implement the latter using a modular state machine that can be easily extended to include rules representing the usual conventions from common Western music notation. Finally, we apply this error-detection method to the piano score dataset ASAP. We highlight that around 40% of the scores contain at least one notational error, and manually fix several of them to enhance the dataset's quality.
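A minimal sketch of one contextual rule of the kind the paper's state machine checks (an assumed example, not the authors' implementation): verify that the note durations in each measure exactly fill the time signature, using exact rational arithmetic to avoid floating-point drift.

```python
from fractions import Fraction

def check_measures(measures, beats=4, beat_unit=4):
    """Return indices of measures whose durations do not sum to the
    time signature.  Durations are fractions of a whole note, e.g.
    Fraction(1, 4) for a quarter note."""
    expected = Fraction(beats, beat_unit)
    return [i for i, m in enumerate(measures)
            if sum(m, Fraction(0)) != expected]

# Measure 0 is complete (4/4); measure 1 is an eighth note short.
score = [
    [Fraction(1, 4)] * 4,
    [Fraction(1, 4), Fraction(1, 4), Fraction(1, 4), Fraction(1, 8)],
]
print(check_measures(score))  # [1]
```

A real checker must also handle tuplets, voices, grace notes, and pickup measures, which is why a modular, extensible rule machine is the natural design.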

en cs.MM
arXiv Open Access 2025
Balancing physical modeling and musical requirements: Algorithmically simulating the calls of Hyalessa maculaticollis for real-time instrumental control

Staas de Jong

This paper presents an algorithm that simulates the calls of the Hyalessa maculaticollis cicada for musical use. Written in SuperCollider, its input parameters enable real-time control of the insect call phase, loudness, and perceived musical pitch. To this end, the anatomical mechanics of the tymbal muscles, tymbal apodeme, tymbal ribs, tymbal plate, abdominal air sac, tympana, and opercula are physically modeled. This also includes decoherence, following the hypothesis that it, in H. maculaticollis, might explain the change in timbre apparent during the final phase of a call sequence. Overall, the algorithm seems to illustrate three main points regarding the trade-offs encountered when modeling bioacoustics for tonal use: that it may be necessary to prioritize musical requirements over realistic physical modeling at many stages of design and implementation; that the resulting adjustments may revolve around having physical modeling perceptually yield sonic events that are well-pitched, single-attack, single-source, and timbrally expressive; that the pitch-adjusted simulation of resonating bodies may fail musically precisely when it succeeds physically, by inducing the perception of different sound sources for different pitches. Audio examples are included, and the source code is structured and documented so as to support the further development of cicada bioacoustics for musical use.

arXiv Open Access 2025
SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning

Anuradha Chopra, Abhinaba Roy, Dorien Herremans

Detailed captions that accurately reflect the characteristics of a music piece can enrich music databases and drive forward research in music AI. This paper introduces a multi-task music captioning model, SonicVerse, that integrates caption generation with auxiliary music feature detection tasks such as key detection, vocals detection, and more, so as to directly capture both low-level acoustic details as well as high-level musical attributes. The key contribution is a projection-based architecture that transforms audio input into language tokens, while simultaneously detecting music features through dedicated auxiliary heads. The outputs of these heads are also projected into language tokens, to enhance the captioning input. This framework not only produces rich, descriptive captions for short music fragments but also directly enables the generation of detailed time-informed descriptions for longer music pieces, by chaining the outputs using a large language model. To train the model, we extended the MusicBench dataset by annotating it with music features using MIRFLEX, a modular music feature extractor, resulting in paired audio, captions and music feature data. Experimental results show that incorporating features in this way improves the quality and detail of the generated captions.

en cs.SD, cs.AI
arXiv Open Access 2025
Persistent Homology of Music Network with Three Different Distances

Eunwoo Heo, Byeongchan Choi, Myung ock Kim et al.

Persistent homology has been widely used to discover hidden topological structures in data across various applications, including music data. To apply persistent homology, a distance or metric must be defined between points in a point cloud or between nodes in a graph network. These definitions are not unique and depend on the specific objectives of a given problem. In other words, selecting different metric definitions allows for multiple topological inferences. In this work, we focus on applying persistent homology to a music graph with predefined weights. We examine three distinct distance definitions based on edge-wise pathways and demonstrate how these definitions affect persistence barcodes, persistence diagrams, and birth/death edges. We found that, among these three distance definitions, inclusion relations exist in one-dimensional persistent homology, reflected in the persistence barcodes and diagrams. We verified these findings using real music data.
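The dependence of persistence output on the chosen distance can be seen already in dimension zero. The sketch below (an illustration, not the paper's method; one-dimensional homology needs a library such as GUDHI or Ripser) computes the H0 barcode from a distance matrix with a Kruskal-style union-find: components are all born at 0 and die at the merge distances, so changing the distance matrix changes the bars.

```python
import numpy as np

def h0_barcode(dist):
    """Death times of the 0-dimensional persistence bars of a
    Vietoris-Rips filtration, from a symmetric distance matrix."""
    n = len(dist)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((dist[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # two components merge at distance d
            parent[ri] = rj
            deaths.append(float(d))
    return deaths               # n-1 finite bars; one bar never dies

# Two tight pairs {0,1} and {2,3} that only connect at distance 4.
d1 = np.array([[0, 1, 4, 4], [1, 0, 4, 4], [4, 4, 0, 1], [4, 4, 1, 0]])
print(h0_barcode(d1))  # [1.0, 1.0, 4.0]
```

Rescaling or redefining the entries of `dist` (e.g. shortest-path versus direct-edge distances on the music graph) shifts these death times, which is exactly the kind of barcode variation the paper studies across its three distance definitions.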

en cs.SD, cs.CG
DOAJ Open Access 2024
Disney Screencerts: A Video Essay

Sureshkumar Sekar

In this video essay, I establish, discuss, and illustrate different types of Disney screencerts—events where the performance of music on stage is accompanied by the projection of the associated audiovisual on screen. Focusing on the audience experience, I use concepts from intermedial and multimodal studies to illustrate the experiential difference in each of the different forms of Disney screencerts—Film-with-Live-Orchestra Concert; Film-with-Live-Theatre-and-Orchestra Concert; Excerpt/Montage-with-Live-Orchestra Concert; and Shorts-with-Live-Orchestra Concert. In Disney's 100th anniversary year, the event that truly celebrated the company's innovative spirit was not the one literally titled "Disney 100: The Concert" but the one titled "Encanto at the Hollywood Bowl", which is a Film-with-Live-Theatre-and-Orchestra concert—a super-hybrid screencert form in which only Disney films have been presented so far.

arXiv Open Access 2024
A Kalman Filter model for synchronization in musical ensembles

Hugo T. Carvalho, Min S. Li, Massimiliano di Luca et al.

The synchronization of motor responses to rhythmic auditory cues is a fundamental biological phenomenon observed across various species. While the importance of temporal alignment varies across different contexts, achieving precise temporal synchronization is a prominent goal in musical performances. Musicians often incorporate expressive timing variations, which require precise control over timing and synchronization, particularly in ensemble performance. This is crucial because both deliberate expressive nuances and accidental timing deviations can affect the overall timing of a performance. This discussion prompts the question of how musicians adjust their temporal dynamics to achieve synchronization within an ensemble. This paper introduces a novel feedback correction model based on the Kalman Filter, aimed at improving the understanding of interpersonal timing in ensemble music performances. The proposed model performs similarly to other linear correction models in the literature, with the advantage of low computational cost and good performance even in scenarios where the underlying tempo varies.
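To make the filtering idea concrete, here is a deliberately simplified scalar Kalman filter tracking a drifting beat period from noisy onset times (an assumed toy formulation, not the paper's actual model, which handles interpersonal feedback between players):

```python
import numpy as np

def kalman_tempo(onsets, q=1e-4, r=1e-3):
    """Estimate a slowly drifting inter-onset interval from noisy
    onset times.  q: process noise (tempo drift variance),
    r: measurement noise (timing jitter variance)."""
    intervals = np.diff(onsets)      # observed beat periods
    x, p = intervals[0], 1.0         # state estimate and its variance
    estimates = [x]
    for z in intervals[1:]:
        p += q                       # predict: period persists, may drift
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # correct with the observed interval
        p *= 1 - k
        estimates.append(x)
    return np.array(estimates)

# A steady 0.5 s beat with timing jitter: the estimate settles near 0.5.
rng = np.random.default_rng(1)
onsets = np.cumsum(0.5 + rng.normal(0, 0.02, 32))
est = kalman_tempo(onsets)
assert abs(float(est[-1]) - 0.5) < 0.05
```

The gain `k` plays the role of a correction strength: a small `q/r` ratio yields heavy smoothing (stable tempo assumption), while a large ratio lets the estimate chase expressive tempo changes, which is the trade-off an ensemble-timing model must negotiate.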

en eess.AS

Page 12 of 52916