J. Sloboda
Results for "Music"
Showing 20 of ~1,058,314 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
D. Hargreaves, Adrian C. North
Aniruddh D. Patel, E. Gibson, Jennifer Ratner et al.
L. Balkwill, W. Thompson
Fatma Zehra Ağan, Çiğdem Cindoğlu, Neriman Sila Koç et al.
Background: Chronic kidney disease (CKD) is a progressive condition associated with high morbidity and mortality. Haemodialysis (HD) patients experience significant psychological and physiological stress. Non-pharmacological interventions such as music listening sessions can alleviate anxiety and depression without drug-related side effects. Aim: This study was conducted to evaluate the effects of music therapy on psychological well-being and selected biochemical parameters in HD patients. Setting: This study was conducted with 49 HD patients at the Dialysis Unit of Harran University Faculty of Medicine between May and July 2025. Methods: All patients underwent a 4-week music listening programme (12 sessions, each session consisting of 30 min of traditional music). Psychological status was assessed before and after the intervention using the Beck Anxiety and Depression Inventories. Biochemical parameters and dialysis efficiency indicators were also recorded. Data were analysed using the Wilcoxon Signed-Rank Test. Results: Significant decreases were observed in anxiety and depression scores (p < 0.001). Biochemical analyses showed significant changes in sodium (p < 0.001), calcium (p = 0.002), glucose (p = 0.024) and albumin (p < 0.001) levels. No significant changes were observed in dialysis efficiency indicators. Conclusion: Music listening sessions administered during HD sessions improved patients’ psychological state and affected selected biochemical parameters. This is a safe, cost-effective, complementary intervention that may increase comfort and potentially improve physiological outcomes. Contribution: This study highlights the potential of music listening sessions as an adjunct to conventional treatments in HD care.
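The pre/post comparison in the abstract above relies on the Wilcoxon signed-rank test. As a minimal illustration of how the statistic is formed (using made-up scores, not the study's data; in practice one would call `scipy.stats.wilcoxon`), the test reduces each patient to a pre−post difference, ranks the absolute differences, and compares the rank sums of positive and negative changes:

```python
def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank statistic W for paired samples.

    Illustrative sketch only: computes W = min(W+, W-), the smaller
    of the rank sums of positive and negative differences. Zero
    differences are dropped, and tied |differences| receive the
    average of the ranks they span.
    """
    # Paired differences, dropping zero differences as the test requires.
    diffs = [a - b for a, b in zip(pre, post) if a != b]
    # Indices of diffs, ordered by absolute difference.
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        # Find the run of ties on |d| starting at position i.
        j = i
        while j < len(ranked) and abs(diffs[ranked[j]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[ranked[k]] = avg
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

A small W relative to its null distribution (or, equivalently, a small p-value) indicates a systematic shift between the paired measurements, which is how the abstract's "significant decreases in anxiety and depression scores" would be established.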
Yash Bhake, Ankit Anand, Preeti Rao
This paper presents an attempt to study the aesthetics of North Indian Khayal music with reference to the flexibility exercised by artists in performing popular compositions. We study expressive timing and pitch variations of the given lyrical content within and across performances and propose computational representations that can discriminate between different performances of the same song in terms of expression. We present the necessary audio processing and annotation procedures, and discuss our observations and insights from the analysis of a dataset of two songs in two ragas each rendered by ten prominent artists.
Emmanouil Karystinaios, Johannes Hentschel, Markus Neuwirth et al.
Recent years have seen a boom in computational approaches to music analysis, yet each one is typically tailored to a specific analytical domain. In this work, we introduce AnalysisGNN, a novel graph neural network framework that leverages a data-shuffling strategy with a custom weighted multi-task loss and logit fusion between task-specific classifiers to integrate heterogeneously annotated symbolic datasets for comprehensive score analysis. We further integrate a Non-Chord-Tone prediction module, which identifies and excludes passing and non-functional notes from all tasks, thereby improving the consistency of label signals. Experimental evaluations demonstrate that AnalysisGNN achieves performance comparable to traditional static-dataset approaches, while showing increased resilience to domain shifts and annotation inconsistencies across multiple heterogeneous corpora.
Anna Kruspe
Large Language Models (LLMs) reflect the biases in their training data and, by extension, those of the people who created this training data. Detecting, analyzing, and mitigating such biases is becoming a focus of research. One type of bias that has been understudied so far are geocultural biases. Those can be caused by an imbalance in the representation of different geographic regions and cultures in the training data, but also by value judgments contained therein. In this paper, we make a first step towards analyzing musical biases in LLMs, particularly ChatGPT and Mixtral. We conduct two experiments. In the first, we prompt LLMs to provide lists of the "Top 100" musical contributors of various categories and analyze their countries of origin. In the second experiment, we ask the LLMs to numerically rate various aspects of the musical cultures of different countries. Our results indicate a strong preference of the LLMs for Western music cultures in both experiments.
Liam Pond, Ichiro Fujinaga
This study evaluates the baseline capabilities of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini to learn concepts in music theory through in-context learning and chain-of-thought prompting. Using carefully designed prompts (in-context learning) and step-by-step worked examples (chain-of-thought prompting), we explore how LLMs can be taught increasingly complex material and how pedagogical strategies for human learners translate to educating machines. Performance is evaluated using questions from an official Canadian Royal Conservatory of Music (RCM) Level 6 examination, which covers a comprehensive range of topics, including interval and chord identification, key detection, cadence classification, and metrical analysis. Additionally, we evaluate the suitability of various music encoding formats for these tasks (ABC, Humdrum, MEI, MusicXML). All experiments were run both with and without contextual prompts. Results indicate that without context, ChatGPT with MEI performs the best at 52%, while with context, Claude with MEI performs the best at 75%. Future work will further refine prompts and expand to cover more advanced music theory concepts. This research contributes to the broader understanding of teaching LLMs and has applications for educators, students, and developers of AI music tools alike.
Richa Namballa, Giovana Morais, Magdalena Fuentes
Musical source separation (MSS) has recently seen a big breakthrough in separating instruments from a mixture in the context of Western music, but research on non-Western instruments is still limited due to a lack of data. In this demo, we use an existing dataset of Brazilian samba percussion to create artificial mixtures for training a U-Net model to separate the surdo drum, a traditional instrument in samba. Despite limited training data, the model effectively isolates the surdo, given the drum's repetitive patterns and its characteristic low-pitched timbre. These results suggest that MSS systems can be successfully harnessed to work in more culturally-inclusive scenarios without the need to collect extensive amounts of data.
K. Scherer, M. Zentner
Yim-chi Ho, M. Cheung, A. Chan
Victoria McArthur, Susan Everington, Martyn Patel
Introduction: Dementia is a global health priority, with an increasing percentage of overall hospital bed days occupied by people with dementia (PWD). This, combined with the increased demand for and availability of complex scanning, means that all pathways, including diagnostic imaging, need to consider interventions to improve patient experience and outcomes. Objectives: To assess the effectiveness of music-based interventions designed to lower anxiety, improve wellbeing and allow better management and care of PWD in an acute hospital setting. Methods: A systematic search of seven databases was conducted in May 2024, following the PRISMA guidelines. Relevant reviews and articles were also examined for additional sources. Results: Fifteen studies met the eligibility criteria and were included in this review, covering a total of 581 people with dementia. The studies were of varying design, some with very small sample sizes. The quality of the studies varied but was overall moderate to good. However, only three studies were RCTs, and only one of these was blinded to the intervention. Overall, eleven of the included articles reported a reduction in the behavioural and psychological symptoms associated with dementia, with one RCT reporting a significant reduction. Conclusion: While this review supports the effectiveness of music-based interventions in lowering the anxiety of people with dementia in acute care, it also highlights the need for more robust, high-quality trials in a challenging environment. Research should establish which interventions best enhance the care experience of people living with dementia and can be easily incorporated into acute care settings.
V. Nykonenko
The article systematically examines the artistic interpretations of the father archetype in Ukrainian cinema during the rule of Joseph Stalin. Using the films "Ivan" (1932), "The Enchanted Garden" (1935), "Aerograd" (1935), "Partisans in the Steppes of Ukraine" (1942) and "The Third Blow" (1948) as examples, it analyses the political and sociocultural factors that fundamentally shaped how the on-screen father figure was treated in this period. In line with Jung's theory of the archetypes of the collective unconscious, the father figure is considered in its broadest symbolic sense, in which an entire social institution or the apparatus of state power can play the parental role for an individual. For this reason, considerable attention is paid to the relationship between the Soviet authorities and the citizens of the Ukrainian SSR. Filmmakers of the time often represented these ties by drawing on classical models of family relations: representatives of the Bolshevik system were assigned the role of guardians, and the population that of their wards.
Ilaria Manco, Justin Salamon, Oriol Nieto
Audio-text contrastive models have become a powerful approach in music representation learning. Despite their empirical success, however, little is known about the influence of key design choices on the quality of music-text representations learnt through this framework. In this work, we expose these design choices within the constraints of limited data and computation budgets, and establish a more solid understanding of their impact grounded in empirical observations along three axes: the choice of base encoders, the level of curation in training data, and the use of text augmentation. We find that data curation is the single most important factor for music-text contrastive training in resource-constrained scenarios. Motivated by this insight, we introduce two novel techniques, Augmented View Dropout and TextSwap, which increase the diversity and descriptiveness of text inputs seen in training. Through our experiments we demonstrate that these are effective at boosting performance across different pre-training regimes, model architectures, and downstream data distributions, without incurring higher computational costs or requiring additional training data.
Karn N. Watcharasupat, Chih-Wei Wu, Iroro Orife
Cinematic audio source separation (CASS), as a standalone problem of extracting individual stems from their mixture, is a fairly new subtask of audio source separation. A typical setup of CASS is a three-stem problem, with the aim of separating the mixture into the dialogue (DX), music (MX), and effects (FX) stems. Given the creative nature of cinematic sound production, however, several edge cases exist; some sound sources do not fit neatly in any of these three stems, necessitating the use of additional auxiliary stems in production. One very common edge case is the singing voice in film audio, which may belong in either the DX or MX or neither, depending heavily on the cinematic context. In this work, we demonstrate a very straightforward extension of the dedicated-decoder Bandit and query-based single-decoder Banquet models to a four-stem problem, treating non-musical dialogue, instrumental music, singing voice, and effects as separate stems. Interestingly, the query-based Banquet model outperformed the dedicated-decoder Bandit model. We hypothesized that this is due to a better feature alignment at the bottleneck as enforced by the band-agnostic FiLM layer. Dataset and model implementation will be made available at https://github.com/kwatcharasupat/source-separation-landing.
Li-Yang Tseng, Tzu-Ling Lin, Hong-Han Shuai et al.
Nowadays, humans are constantly exposed to music, whether through voluntary streaming services or incidental encounters during commercial breaks. Despite the abundance of music, certain pieces remain more memorable and often gain greater popularity. Inspired by this phenomenon, we focus on measuring and predicting music memorability. To achieve this, we collect a new music piece dataset with reliable memorability labels using a novel interactive experimental procedure. We then train baselines to predict and analyze music memorability, leveraging both interpretable features and audio mel-spectrograms as inputs. To the best of our knowledge, we are the first to explore music memorability using data-driven deep learning-based methods. Through a series of experiments and ablation studies, we demonstrate that while there is room for improvement, predicting music memorability with limited data is possible. Certain intrinsic elements, such as higher valence, arousal, and faster tempo, contribute to memorable music. As prediction techniques continue to evolve, real-life applications like music recommendation systems and music style transfer will undoubtedly benefit from this new area of research.
Lie Lu, Dan Liu, HongJiang Zhang
Adrian C. North, D. Hargreaves, Jon Hargreaves
Stefan Koelsch, Walter A Siebel
Page 13 of 52,916