Results for "Music and books on Music"
Showing 20 of ~889,570 results · from CrossRef, DOAJ, arXiv
Li Zhang
Most work in AI music generation has focused on audio, which has seen limited use in the music production industry due to its rigidity. To maximize flexibility while assuming only textual instructions from producers, we are among the first to tackle symbolic music editing. We circumvent the known challenge of a lack of labeled data by showing that LLMs with zero-shot prompting can effectively edit drum grooves. The recipe for success is a creatively designed format that interfaces LLMs with music, and we facilitate evaluation by providing an evaluation dataset with annotated unit tests that aligns closely with musicians' judgment.
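The abstract does not describe the actual interface format, but the idea of serializing a drum groove so an LLM can read and rewrite it can be sketched. The step-grid encoding below is purely a hypothetical illustration, not the paper's format:

```python
def groove_to_text(groove, steps=16):
    """Render a drum groove as a step grid, one row per instrument.

    `groove` maps an instrument name to the set of 16th-note steps on
    which it hits. This grid layout is a hypothetical example of a text
    format interfacing LLMs with symbolic drum patterns.
    """
    rows = []
    for inst, hits in groove.items():
        grid = "".join("x" if step in hits else "-" for step in range(steps))
        rows.append(f"{inst}: {grid}")
    return "\n".join(rows)

# A basic rock beat: kick on beats 1 and 3, snare on 2 and 4, eighth-note hats.
beat = {
    "kick":  {0, 8},
    "snare": {4, 12},
    "hat":   set(range(0, 16, 2)),
}
print(groove_to_text(beat))
```

Under such an encoding, an instruction like "make the hi-hats busier" reduces to the LLM rewriting one row of the grid, which is what makes unit-test-style evaluation of edits feasible.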
ChungHa Lee, Jin-Hyuk Hong
Music visualization is an important medium that enables synesthetic experiences and creative inspiration. However, previous research has focused mainly on technical and theoretical aspects, overlooking users' everyday interaction with music visualizations. This gap highlights a pressing need for research on how music visualization shapes users' synesthetic creative experiences and where those experiences are heading. We therefore developed musicolors, a web-based, real-time music visualization library. We also conducted a qualitative user study with composers, developers, and listeners to explore how they use musicolors to appreciate music, draw inspiration, and craft music-visual interactions. The results show that musicolors delivers the rich value of music visualization to users through sketching musical ideas, integrating visualizations with other systems or platforms, and synesthetic listening. Based on these findings, we also provide guidelines for future music visualizations to offer a more interactive and creative experience.
Maziar Kanani, Sean O Leary, James McDermott
Non-metric music forms the core of the repertoire in Iranian classical music. Dastgahi music serves as the underlying theoretical system for both Iranian art music and certain folk traditions. At the heart of Iranian classical music lies the radif, a foundational repertoire that organizes melodic material central to performance and pedagogy. In this study, we introduce a digital corpus representing the complete non-metrical radif repertoire, covering all 13 existing components of this repertoire. We provide MIDI files (about 281 minutes in total) and data spreadsheets describing notes, note durations, intervals, and hierarchical structures for 228 pieces of music. We faithfully represent the tonality, including quarter-tones, as well as the non-metric aspect. Furthermore, we provide supporting basic statistics and measures of complexity and similarity over the corpus. Our corpus provides a platform for computational studies of Iranian classical music. Researchers might employ it in studying melodic patterns, investigating improvisational styles, or for other tasks in music information retrieval, music theory, and computational (ethno)musicology.
Qing Wang, Xiaohang Yang, Yilan Dong et al.
Music-to-dance generation aims to synthesize human dance motion conditioned on musical input. Despite recent progress, significant challenges remain due to the semantic gap between music and dance motion, as music offers only abstract cues, such as melody, groove, and emotion, without explicitly specifying the physical movements. Moreover, a single piece of music can produce multiple plausible dance interpretations. This one-to-many mapping demands additional guidance, as music alone provides limited information for generating diverse dance movements. The challenge is further amplified by the scarcity of paired music and dance data, which restricts the model's ability to learn diverse dance patterns. In this paper, we introduce DanceChat, a Large Language Model (LLM)-guided music-to-dance generation approach. We use an LLM as a choreographer that provides textual motion instructions, offering explicit, high-level guidance for dance generation. This approach goes beyond implicit learning from music alone, enabling the model to generate dance that is both more diverse and better aligned with musical styles. Our approach consists of three components: (1) an LLM-based pseudo instruction generation module that produces textual dance guidance based on music style and structure, (2) a multi-modal feature extraction and fusion module that integrates music, rhythm, and textual guidance into a shared representation, and (3) a diffusion-based motion synthesis module together with a multi-modal alignment loss, which ensures that the generated dance is aligned with both musical and textual cues. Extensive experiments on AIST++ and human evaluations show that DanceChat outperforms state-of-the-art methods both qualitatively and quantitatively.
Brandon James Carone, Pablo Ripollés
SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs. The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic/musical features with the assistant's extensive knowledge of the artists and bands. Capitalizing on this combined knowledge, SoundSignature leverages semantic audio and principles from the emerging Internet of Sounds (IoS) ecosystem, integrating MIR with AI to provide users with personalized insights into the acoustic properties of their music, akin to a musical preference personality report. Users can then interact with the chatbot to explore deeper inquiries about the acoustic analyses performed and how they relate to their musical taste. This interactivity transforms the application, acting not only as an informative resource about familiar and/or favorite songs, but also as an educational platform that enables users to deepen their understanding of musical features, music theory, acoustic properties commonly used in signal processing, and the artists behind the music. Beyond general usability, the application also incorporates several well-established open-source musician-specific tools, such as a chord recognition algorithm (CREMA), a source separation algorithm (DEMUCS), and an audio-to-MIDI converter (basic-pitch). These features allow users without coding skills to access advanced, open-source music processing algorithms simply by interacting with the chatbot (e.g., "can you give me the stems of this song?"). In this paper, we highlight the application's innovative features and educational potential, and present findings from a pilot user study that evaluates its efficacy and usability.
Yixiao Zhang
The field of AI-assisted music creation has made significant strides, yet existing systems often struggle to meet the demands of iterative and nuanced music production. These challenges include providing sufficient control over the generated content and allowing for flexible, precise edits. This thesis tackles these issues by introducing a series of advancements that progressively build upon each other, enhancing the controllability and editability of text-to-music generation models. First, we introduce Loop Copilot, a system that tries to address the need for iterative refinement in music creation. Loop Copilot leverages a large language model (LLM) to coordinate multiple specialised AI models, enabling users to generate and refine music interactively through a conversational interface. Central to this system is the Global Attribute Table, which records and maintains key musical attributes throughout the iterative process, ensuring that modifications at any stage preserve the overall coherence of the music. While Loop Copilot excels in orchestrating the music creation process, it does not directly address the need for detailed edits to the generated content. To overcome this limitation, MusicMagus is presented as a further solution for editing AI-generated music. MusicMagus introduces a zero-shot text-to-music editing approach that allows for the modification of specific musical attributes, such as genre, mood, and instrumentation, without the need for retraining. By manipulating the latent space within pre-trained diffusion models, MusicMagus ensures that these edits are stylistically coherent and that non-targeted attributes remain unchanged. This system is particularly effective in maintaining the structural integrity of the music during edits, but it encounters challenges with more complex and real-world audio scenarios. ...
Caroline Caregnato, Pablo da Silva Gusmão
This study approaches melodic dictation in relation to aural analysis by investigating the effect of an aural analysis, performed before taking the dictation, on its results. Ninety-eight music undergraduates participated in the study by performing a melodic dictation task. The participants were divided into a control group and an experimental group. Subjects in the experimental condition were asked to answer a few questions about the melody's structure, motifs, and harmonic tension prior to notating it. The participants in the control group performed significantly better. Moreover, no association was found between the precision of the analysis and performance in the dictation task. The difference in performance may be due to an attention overload, provoked by the dual task performed by the experimental group, which can occur when aural analysis is not a well-practiced strategy. The analytical task could have impacted the memory encoding phase, or it could have interfered with recent memory. Further research is therefore needed to explore the impact of trained versus untrained analytical tasks during melodic dictation.
Liwei Lin, Gus Xia, Junyan Jiang et al.
Recent years have witnessed a rapid growth of large-scale language models in the domain of music audio. Such models enable end-to-end generation of higher-quality music, and some allow conditioned generation using text descriptions. However, the control power of text controls on music is intrinsically limited, as they can only describe music indirectly through meta-data (such as singers and instruments) or high-level representations (such as genre and emotion). We aim to further equip the models with direct and content-based controls on innate music languages such as pitch, chords and drum track. To this end, we contribute Coco-Mulla, a content-based control method for music large language modeling. It uses a parameter-efficient fine-tuning (PEFT) method tailored for Transformer-based audio models. Experiments show that our approach achieves high-quality music generation with low-resource semi-supervised learning, tuning fewer than 4% of the parameters of the original model and training on a small dataset with fewer than 300 songs. Moreover, our approach enables effective content-based controls, and we illustrate the control power via chords and rhythms, two of the most salient features of music audio. Furthermore, we show that by combining content-based controls and text descriptions, our system achieves flexible music variation generation and arrangement. Our source code and demos are available online.
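The abstract's sub-4% parameter budget is characteristic of low-rank adaptation. As a minimal sketch of the general PEFT idea only, assuming a LoRA-style update rather than Coco-Mulla's actual method (the class name and layout below are illustrative):

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer plus a low-rank trainable update.

    Illustrative sketch of parameter-efficient fine-tuning (PEFT);
    not the authors' Coco-Mulla implementation.
    """

    def __init__(self, w_frozen, rank=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = w_frozen.shape
        self.w = w_frozen                             # frozen pretrained weight
        self.a = rng.normal(0.0, 0.01, (rank, d_in))  # trainable down-projection
        self.b = np.zeros((d_out, rank))              # trainable up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # y = Wx + scale * B(Ax); with B = 0 at init, the output
        # matches the frozen model exactly.
        return self.w @ x + self.scale * (self.b @ (self.a @ x))

    def trainable_fraction(self):
        # Share of this layer's parameters that are actually tuned.
        lora = self.a.size + self.b.size
        return lora / (self.w.size + lora)
```

With a 256x256 weight and rank 4, only about 3% of the layer's parameters are trainable, in the same ballpark as the abstract's sub-4% figure.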
Stella Hadjineophytou
This article explores how the language of disability affects music therapists’ perceptions of the people they work with. A review of the literature examines how music therapy discourse and practice have been influenced by models of disability, specifically in the use of person-first and identity-first language. This is summarised by considering the power of language to affect the unconscious perceptions, choices, and actions of music therapists, leading to collusion between music therapists and inherently ableist social structures. The second half of this article presents the author’s introspective journey of consciously changing language, shifting perceptions, and subverting power imbalances in music therapy sessions with Kirsty, a young woman with autism attending sessions for her mental health. The case study incorporates Kirsty’s own written reflections to demonstrate the potential for collaboration and learning as part of this journey. The article concludes that music therapists might seek opportunities to become “unknowing” and “inexpert” in relation to the people they work with, in a bid to create holistic learning spaces that manifest and embody empowering language. The language of this article reflects the author’s preference for identity-first language. Person-first language is used in reference to Kirsty, at her request.
Botao Yu, Peiling Lu, Rui Wang et al.
Symbolic music generation aims to generate music scores automatically. A recent trend is to use Transformer or its variants in music generation, which is, however, suboptimal, because full attention cannot efficiently model the typically long music sequences (e.g., over 10,000 tokens), and existing models have shortcomings in generating musical repetition structures. In this paper, we propose Museformer, a Transformer with a novel fine- and coarse-grained attention for music generation. Specifically, with the fine-grained attention, a token of a specific bar directly attends to all the tokens of the bars that are most relevant to music structures (e.g., the previous 1st, 2nd, 4th and 8th bars, selected via similarity statistics); with the coarse-grained attention, a token only attends to the summarization of the other bars rather than each of their tokens, so as to reduce the computational cost. The advantages are two-fold. First, it can capture both music structure-related correlations via the fine-grained attention and other contextual information via the coarse-grained attention. Second, it is efficient and can model over 3× longer music sequences compared to its full-attention counterpart. Both objective and subjective experimental results demonstrate its ability to generate long music sequences with high quality and better structures.
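The bar-level attention pattern described above can be made concrete with a toy mask. The sketch below is an illustration of the idea only, not the authors' implementation, and it omits the coarse-grained summary tokens: each token attends fine-grained solely to its own bar and to the structure-related bars 1, 2, 4 and 8 bars back, under causal masking:

```python
import numpy as np

def museformer_style_mask(bar_of_token, related_offsets=(1, 2, 4, 8)):
    """Boolean causal attention mask in the spirit of Museformer's
    fine-grained attention (illustrative sketch only).

    bar_of_token[i] is the bar index of token i. Token i may attend to
    token j only if j is not in the future and j's bar is either i's own
    bar or one of the structure-related bars at the given offsets back.
    Coarse-grained per-bar summary tokens are not modeled here.
    """
    bars = np.asarray(bar_of_token)
    n = len(bars)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        visible_bars = {bars[i]} | {bars[i] - off for off in related_offsets}
        for j in range(i + 1):                 # causal: no future tokens
            if bars[j] in visible_bars:
                mask[i, j] = True
    return mask
```

Because each token sees only a fixed handful of bars directly, the number of fine-grained attention pairs grows far more slowly than the quadratic cost of full attention, which is what enables the longer sequences the abstract reports.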
Fabio Wanderley Janhan Sousa
This article aims to define the development of audiovisual electroacoustic music for virtual reality (VR) as an emerging artistic expression. Revisiting concepts and terminologies from several authors (Gibbs, 2007; Leite, 2004; Lima, 2011; Fry, 1920) concerning the most diverse audiovisual manifestations, we consolidated Hill's definition (2010) and extended it to a specific reproduction system, VR, as the most appropriate for our object of study. The preservation of what we call extrinsic space (Henriksen, 2002) in this type of manifestation makes it possible to establish it as a musical parameter. The first studies in audiovisual electroacoustic music for VR are presented as results of doctoral work.
Davindar Singh
-
Michele Guerra
Technique and creativity

Having been called upon to provide a contribution to a publication dedicated to “Techne”, I feel it is fitting to start from the theme of technique, given that for too many years now, we have fruitlessly attempted to understand the inner workings of cinema whilst disregarding the element of technique. And this has posed a significant problem in our field of study, as it would be impossible to gain a true understanding of what cinema is without immersing ourselves in the technical and industrial culture of the 19th century. It was within this culture that a desire was born: to mould the imaginary through the new techniques of reproduction and transfiguration of reality through images. Studying the development of the so-called “pre-cinema” – i.e. the period up to the conventional birth of cinema on 28 December 1895 with the presentation of the Cinématographe Lumière – we discover that the technical history of cinema is not only almost more enthralling than its artistic and cultural history, but also contains all the great theoretical, philosophical and scientific insights that we need to help us understand the social, economic and cultural impact that cinema had on the culture of the 20th century. At the 1900 Paris Exposition, when cinema had already existed in some form for a few years, when the first few short films of narrative fiction also already existed, the cinematograph was placed in the Pavilion of Technical Discoveries, to emphasise the fact that the first wonder, this element of unparalleled novelty and modernity, was still there, in technique, in this marvel of innovation and creativity. I would like to express my idea through the words of Franco Moretti, who claims in one of his most recent works that it is only possible to understand form through the forces that pulsate through it and press on it from beneath, finally allowing the form itself to come to the surface and make itself visible and comprehensible to our senses.
As such, the cinematic form – that which appears on the screen, that which is now so familiar to us, that which each of us has now internalised, that has even somehow become capable of configuring our way of thinking, imagining, dreaming – that form is underpinned by forces that allow it to eventually make its way onto the screen and become artistic and narrative substance. And those forces are the forces of technique, the forces of industry, the economic, political and social forces without which we could never hope to understand cinema. One of the issues that I always make a point of addressing in the first few lessons with my students is that if they think that the history of cinema is made up of films, directors, narrative plots to be understood, perhaps even retold in some way, then they are entirely on the wrong track; if, on the other hand, they understand that it is the story of an institution with economic, political and social drivers within it that can, in some way, allow us to come to the great creators, the great titles, but that without a firm grasp of those drivers, there is no point in even attempting to explore it, then they are on the right track. As I see it, cinema in the twentieth century was a great democratic, interclassist laboratory such as no other art has ever been, and this occurred thanks to the fact that what underpinned it was an industrial reasoning: it had to respond to the capital invested in it, it had to make money, and as such, it had to reach the largest possible number of people, immersing it into a wholly unprecedented relational situation. The aim was to be as inclusive as possible, ultimately giving rise to the idea that cinema could not be autonomous, as other forms of art could be, but that it must instead be able to negotiate all the various forces acting upon it, pushing it in every direction. 
This concept of negotiation is one which has been explored in great detail by one of the greatest film theorists of our modern age, Francesco Casetti. In a 2005 book entitled “Eye of the Century”, which I consider to be a very important work, Casetti actually argues that cinema has proven itself to be the art form most capable of adhering to the complexity and fast pace of the short century, and that it is for this very reason that its golden age (in the broadest sense) can be contained within the span of just a hundred years. The fact that cinema was the true epistemological driving force of 20th-century modernity – a position now usurped by the Internet – is not, in my opinion, something that diminishes the strength of cinema, but rather an element of even greater interest. Casetti posits that cinema was the great negotiator of new cultural needs, of the need to look at art in a different way, of the willingness to adapt to technique and technology: indeed, the form of cinema has always changed according to the techniques and technologies that it has brought to the table or established a dialogue with on a number of occasions. Barry Salt, whose background is in physics, wrote an important book – publishing it at his own expense, as a mark of how difficult it is to work in certain fields – entitled “Film Style and Technology”, in which he calls upon us to stop writing the history of cinema starting from the creators, from the spirit of the time, from the great cultural and historical questions, and instead to start afresh by following the techniques available over the course of its development.
Throughout the history of cinema, the creation of certain films has been the result of a particular set of technical conditions: having a certain type of film, a certain type of camera, only being able to move in a certain way, needing a certain level of lighting, having an entire arsenal of equipment that was very difficult to move and handle; and as the equipment, medium and techniques changed and evolved over the years, so too did the type of cinema that we were able to make. This means framing the history of cinema and film theory in terms of the techniques that were available, and starting from there: of course, whilst Barry Salt’s somewhat provocative suggestion by no means cancels out the entire cultural, artistic and aesthetic discourse in cinema – which remains fundamental – it nonetheless raises an interesting point, as if we fail to consider the methods and techniques of production, we will probably never truly grasp what cinema is. These considerations also help us to understand just how vast the “construction site” of cinema is – the sort of “factory” that lies behind the production of any given film. Erwin Panofsky wrote a single essay on cinema in the 1930s entitled “Style and Medium in the Motion Pictures” – a very intelligent piece, as one would expect from Panofsky – in which at a certain point, he compares the construction site of the cinema to those of Gothic cathedrals, which were also under an immense amount of pressure from different forces, namely religious ones, but also socio-political and economic forces which ultimately shaped – in the case of the Gothic cathedral and its development – an idea of the relationship between the earth and the otherworldly. The same could be said for cinema, because it also involves starting with something very earthly, very grounded, which is then capable of unleashing an idea of imaginary metamorphosis. 
Some scholars, such as Edgar Morin, will say that cinema is increasingly becoming the new supernatural, the world of contemporary gods, as religion gradually gives way to other forms of deification. Panofsky’s image is a very focused one: by making film production into a construction site, which to all intents and purposes it is, he leads us to understand that there are different forces at work, represented by a producer, a scriptwriter, a director, but also a workforce, the simple labourers, as is always the case in large construction sites, calling into question the idea of who the “creator” truly is. So much so that cinema, now more than ever before, is reconsidering the question of authorship, moving towards a “history of cinema without names” in an attempt to combat the “policy of the author” which, in the 1950s, especially in France, identified the director as the de facto author of the film. Today, we are still in that position, with the director still considered the author of the film, but that was not always so: back in the 1910s, in the United States, the author of the film was the scriptwriter, the person who wrote it (as is now the case for TV series, where they have once again taken pride of place as the showrunner, the creator, the true author of the series, and nobody remembers the names of the directors of the individual episodes); or at times, it can be the producer, as was the case for a long time when the Oscar for Best Picture, for example, was accepted by the producer in their capacity as the commissioner, as the “owner” of the work. As such, the theme of authorship is a very controversial one indeed, but one which helps us to understand the great meeting of minds that goes into the production of a film, starting with the technicians, of course, but also including the actors. 
Occasionally, a film is even attributed to the name of a star, almost as if to declare that that film is theirs, in that it is their body and their talent as an actor lending it a signature that provides far more of a draw to audiences than the name of the director does. In light of this, the theme of authorship, which Panofsky raised in the 1930s through the example of the Gothic cathedral, which ultimately does not have a single creator, is one which uses the image of the construction site to also help us to better understand what kind of development a film production can go through and to what extent this affects its critical and historical reception; as such, grouping films together based on their director means doing something that, whilst certainly not incorrect in itself, precludes other avenues of interpretation and analysis which could have favoured or could still favour a different reading of the “cinematographic construction site”.

Design and execution

The great classic Hollywood film industry was a model that, although it no longer exists in the same form today, unquestionably made an indelible mark at a global level on the history not only of cinema, but more broadly, of the culture of the 20th century. The industry involved a very strong vertical system resembling an assembly line, revolving around producers, who had a high level of decision-making autonomy and a great deal of expertise, often inclined towards a certain genre of film and therefore capable of bringing together the exact kinds of skills and visions required to make that particular film. The history of classic American cinema is one that can also be reconstructed around the units that these producers would form.
The “majors”, along with the so-called “minors”, were put together like football teams, with a chairman flanked by figures whom we would nowadays refer to as a sporting director and a managing director, who built the team based on specific ideas, “buying” directors, scriptwriters, scenographers, directors of photography, and even actors and actresses who generally worked almost exclusively for their major – although they could occasionally be “loaned out” to other studios. This system led to a very marked characterisation and allowed for the film to be designed in a highly consistent, recognisable way in an age when genres reigned supreme and there was the idea that in order to keep the audience coming back, it was important to provide certain reassurances about what they would see: anyone going to see a Western knew what sorts of characters and storylines to expect, with the same applying to a musical, a crime film, a comedy, a melodrama, and so on. The star system served to fuel this working method, with these major actors also representing both forces and materials in the hands of an approach to filmmaking which had the ultimate objective of constructing the perfect film, in which everything had to function according to a rule rooted in both the aesthetic and the economic. Gore Vidal wrote that from 1939 onwards, Hollywood did not produce a single “wrong” film: indeed, whilst certainly hyperbolic, this claim confirms that that system produced films that were never wrong, never off-key, but instead always perfectly in tune with what the studios wished to achieve. Whilst this long-entrenched system of yesteryear ultimately imploded due to certain historical phenomena that determined it to be outdated, the way of thinking about production has not changed all that much, with film design remaining tied to a professional approach that is still rooted within it.
The overwhelming majority of productions still start from a system which analyses the market and the possible economic impact of the film, before even starting to tackle the various steps that lead up to the creation of the film itself. Following production systems and the ways in which they have changed, in terms of both the technology and the cultural contexts, also involves taking stock of the still considerable differences that exist between approaches to filmmaking in different countries, or indeed the similarities linking highly disparate economic systems (consider, for example, India’s “Bollywood” or Nigeria’s “Nollywood”: two incredibly strong film industries that we are not generally familiar with as they lack global distribution, although they are built very solidly). In other words, any attempt to study Italian cinema and American cinema – to stay within this double field – with the same yardstick is unthinkable, precisely because the context of their production and design is completely different.

Composition and innovation

Studying the publications on cinema in the United States in the early 1900s – which, from about 1911 to 1923, offers us a revealing insight into the attempts made to garner an in-depth understanding of how this new storytelling machine worked and the development of the first real cultural industry of the modern age – casts light on the centrality of the issues of design and composition. I remain convinced that without reading and understanding that debate, it is very difficult to understand why cinema is as we have come to be familiar with it today. Many educational works investigated the inner workings of cinema, and some, having understood them, suggested that they were capable of teaching others to do so. These publications have almost never been translated into Italian and remain seldom studied even in the US, and yet they are absolutely crucial for understanding how cinema established itself on an industrial and aesthetic level.
There are two key words that crop up time and time again in these books, the first being “action”, one of the first words uttered when a film starts rolling: “lights, camera, action” (in the original Italian, “luci, motore, azione”). This sequence of terms is interesting in that “motore” (“motor”) highlights the presence of a machine that has to be started up, followed by “action”, which expresses that something must happen at that moment in front of that machine, otherwise the film will not exist. As such, “action” – a term to which I have devoted some of my studies – is a fundamental word here in that it represents a sort of moment of birth of the film that is very clear – tangible, even. The other word is “composition”, and this is an even more interesting word with a history that deserves a closer look: the first professor of cinema in history, Victor Oscar Freeburg (I edited the Italian translation of his textbook “The Art of Photoplay Making”, published in 1918), took up his position at Columbia University in 1915 and, in doing so, took on the task of teaching the first ever university course in cinema. Whilst Freeburg was, for his time, a very well-educated and highly-qualified person, having studied at Yale and then obtained his doctorate in theatre at Columbia, cinema was not entirely his field of expertise. He was asked to teach a course entitled “Photoplay Writing”. At the time, a film was known as a “photoplay”, in that it was a photographed play of sorts, and the fact that the central topic of the course was photoplay writing makes it clear that back then, the scriptwriter was considered the main author of the work. From this point of view, it made sense to entrust the teaching of cinema to an expert in theatre, based on the idea that it was useful to first and foremost teach a sort of photographable dramaturgy.
However, upon arriving at Columbia, Freeburg soon realised whilst preparing his course that “photoplay writing” risked misleading the students, as it is not enough to simply write a story in order to make a film; as such, he decided to change the title of his course to “photoplay composition”. This apparently minor alteration, from “writing” to “composition”, in fact marked a decisive conceptual shift in that it highlighted that it was no longer enough to merely write: one had to “compose”. So it was that the author of a film became, according to Freeburg, not the scriptwriter or director, but the “cinema composer” (a term of his own coinage), thus directing and broadening the concept of composition towards music, on the one hand, and architecture, on the other. We are often inclined to think that cinema has inherited expressive modules that come partly from literature, partly from theatre and partly from painting, but in actual fact, what Freeburg helps us to understand is that there are strong elements of music and architecture in a film, emphasising the lofty theme of the project. In his book, he explores at great length the relationship between static and dynamic forms in cinema, a topic that few have ever addressed in that way and that again, does not immediately spring to mind as applicable to a film. I believe that those initial intuitions were the result of a reflection unhindered by all the prejudices and preconceived notions that subsequently began to condition film studies as a discipline, and I feel that they are of great use to us today because they guide us, on the one hand, towards a symphonic idea of filmmaking, and on the other, towards an idea that preserves the fairly clear imprint of architecture.
Space-Time

In cinema as in architecture, the relationship between space and time is a crucial theme: in every textbook, space and time are amongst the first chapters to be studied precisely because in cinema, they undergo a process of metamorphosis – as Edgar Morin would say – which is vital to constructing the intermediate world of film. Indeed, from both a temporal and a spatial point of view, cinema provides a kind of ubiquitous opportunity to overlap different temporalities and spatialities, to move freely from one space to another, but above all, to construct new systems of time. The rules of film editing – especially so-called “invisible editing”, i.e. classical editing that conceals its own presence – are rules built upon specific and precise connections that hold together different spaces – even distant ones – whilst nonetheless giving the impression of unity, of contiguity, of everything that cinema never is in reality, because cinema is constantly fragmented and interrupted, even though we very often perceive it in continuity. As such, from both a spatial and a temporal perspective, there are technical studies that explain the rules of how to edit so as to give the idea of spatial continuity, as well as theoretical studies that explain how cinema has transformed our sense of space and time. To mark the beginning of Parma’s run as Italy’s Capital of Culture, an exhibition was organised entitled “Time Machine. Seeing and Experiencing Time”, curated by Antonio Somaini, with the challenge of demonstrating how cinema, from its earliest experiments to the digital age, has managed to manipulate and transform time, profoundly affecting our way of engaging with it.
The themes of time and space are vital to understanding cinema, including from a philosophical point of view: in two of Gilles Deleuze’s seminal volumes, “The Movement-Image” and “The Time-Image”, the issues of space and time become the two great paradigms not only for explaining cinema, but also – as Deleuze himself says – for explaining a certain 20th-century philosophy. Deleuze succeeds in a truly impressive endeavour, namely linking cinema to philosophical reflection – indeed, making cinema into an instrument of philosophical thought; this heteronomy of filmmaking is then also transferred to its ability to become an instrument that goes beyond its own existence to become a reflection on the century that saw it as a protagonist of sorts. Don Ihde argues that every era has a technical discovery that somehow becomes what he calls an “epistemological engine”: a tool that opens up a system of thought that would never have been possible without that discovery. One of the many examples of this over the centuries is the camera obscura, but we could also name cinema as the defining discovery for 20th-century thought: indeed, cinema is indispensable for understanding the 20th century, just as the Internet is for understanding our way of thinking in the 21st century.

Real-virtual

Nowadays, the film industry is facing the crisis of cinema closures, ultimately caused by ever-spreading media platforms and the power of the economic competition that they are exerting by aggressively entering the field of production and distribution, albeit with a different angle on the age-old desire to garner audiences. Just a few days ago, Martin Scorsese was lamenting the fact that on these platforms, the artistic project is in danger of foundering, as excellent projects are placed in a catalogue alongside a series of products of varying quality, thus confusing the viewer.
A few years ago, during the opening ceremony of the academic year at the University of Southern California, Steven Spielberg and George Lucas expressed the same concept about the future of cinema in a different way. Lucas argued that cinemas would soon have to become incredibly high-tech places where people can have an experience that is impossible to reproduce elsewhere, with a ticket price that takes into account the expanded and increased experiential value on offer thanks to the new technologies used. Spielberg, meanwhile, observed that cinemas will survive if they manage to transform the cinemagoer from a simple viewer into a player, an actor of sorts. The history of cinema has always been marked by continuous adaptation to technological evolutions. I do not believe that cinema will ever end. Jean-Luc Godard, one of the great masters of the Nouvelle Vague, once said in an interview: «I am very sorry not to have witnessed the birth of cinema, but I am sure that I will witness its death». Godard, who was born in 1930, is still alive. Since its origins, cinema has always transformed rather than dying. Raymond Bellour says that cinema is an art that never finishes finishing, a phrase that encapsulates the beauty and the secret of cinema: an art that never quite finishes finishing is an art that is always on the very edge of the precipice but never falls off, although it leans farther and farther over that edge. This is undoubtedly down to cinema’s ability to continually keep up with technique and technology, and in doing so to move – even to a different medium – to relocate, as contemporary theorists say, even finally moving out of cinemas themselves to shift onto platforms and tablets, yet all without ever ceasing to be cinema. That said, we should give everything we’ve got to ensure that cinemas survive.
Shuang Wu, Shijian Lu, Li Cheng
Dance choreography for a piece of music is a challenging task, having to be creative in presenting distinctive stylistic dance elements while taking into account the musical theme and rhythm. It has been tackled by different approaches such as similarity retrieval, sequence-to-sequence modeling and generative adversarial networks, but their generated dance sequences often fall short in motion realism, diversity and music consistency. In this paper, we propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music. We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music. This gives a well-defined and non-divergent training objective that mitigates the limitations of standard GAN training, which is frequently plagued by instability and divergent generator loss. Extensive experiments demonstrate that our MDOT-Net can synthesize realistic and diverse dances which achieve an organic unity with the input music, reflecting the shared intentionality and matching the rhythmic articulation. Sample results are found at https://www.youtube.com/watch?v=dErfBkrlUO8.
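As a rough illustration of the optimal transport idea this abstract invokes (not the paper's actual training objective, which operates on high-dimensional dance and music distributions), the 1-Wasserstein distance between two equal-sized one-dimensional empirical distributions reduces to sorting both samples and averaging the pairwise absolute differences:

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein (earth mover's) distance between two equal-sized
    one-dimensional empirical distributions.

    In 1D the optimal coupling simply matches the i-th smallest sample
    of one set with the i-th smallest of the other, so the distance is
    the mean absolute difference of the sorted samples.
    """
    if len(xs) != len(ys):
        raise ValueError("samples must be the same size")
    xs_sorted, ys_sorted = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs_sorted, ys_sorted)) / len(xs)

# Identical samples have distance 0; shifting one sample by a constant
# shifts the distance by exactly that constant.
print(wasserstein_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

General optimal transport and Gromov-Wasserstein distances, as used in MDOT-Net, require full OT solvers over coupling matrices (available, for instance, in the POT library); this closed-form 1D case is only meant to convey what the distance measures.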
Gunjan Aggarwal, Devi Parikh
Dance and music typically go hand in hand. The complexities in dance, music, and their synchronisation make them fascinating to study from a computational creativity perspective. While several works have looked at generating dance for a given music, automatically generating music for a given dance remains under-explored. This capability could have several creative expression and entertainment applications. We present some early explorations in this direction. We present a search-based offline approach that generates music after processing the entire dance video and an online approach that uses a deep neural network to generate music on-the-fly as the video proceeds. We compare these approaches to a strong heuristic baseline via human studies and present our findings. We have integrated our online approach in a live demo! A video of the demo can be found here: https://sites.google.com/view/dance2music/live-demo.
Adilia Yip
György Csepeli, Gergő Prazsák
The Revolution of Knowledge: The Internet User's Sociology and Social Psychology. The internet, the latest communication architecture, does not yet have a fully transparent effect on humanity as a whole, even as it remodels and modifies cognition, knowledge, and the relationships between people. The network substantially influences the subsystems of society, among them politics, the economy, culture, and the lifeworld. This presentation introduces new theories and new research findings through which we can understand the new phenomenon of a society that has moved (migrated) onto the internet. A new world has come into existence; its metaphysics is being born now.
Subotin-Golubović Tatjana
MS Hilandar 307, a triodion sticharerion from the late 12th century, is one of the oldest Slavonic manuscripts kept at Hilandar. The manuscript has not survived in its entirety: it is missing the first part, which contained stichera of the Lenten cycle; the extant part contains the pentecostarion cycle of stichera. It was written in the Russian recension. The manuscript has probably been kept at the monastery ever since its establishment and could even have been procured by St. Sava at the time of the formation of the monastery library. Its presence at the Serbian monastery confirms that there were no linguistic or practical liturgical obstacles to its use in religious services. Since the Serbian manuscript heritage does not include surviving sticheraria as a type of liturgical book, its content is highly interesting. This paper explores the interrelationship between the sticherarion and corresponding services in the oldest Serbian triodion, copied in the first half of the 13th century and now kept in the National Library of Russia in Saint Petersburg (F. п. I. 68). Two services were selected as examples: the service for the Mid-Pentecost (Midfeast) and the service to the Holy Fathers of the First Ecumenical Council. Even an initial careful comparison revealed different translations of the texts shared by both manuscripts. It was also found that only a part of the stichera in the sticherarion appear in full triodion services, in which stichera make up just one segment of the service as a complex hymnographic ensemble.
Donatella Caramia
Neuroscienze cognitive della musica by Alice Mado Proverbio offers an updated selection of studies on the delicate interweaving between musical practice and brain plasticity. The book provides evidence on how the learning and enjoyment of music can be verified by means of neuroimaging techniques, showing how it determines significant variations in sensory and motor cortical areas as well as in limbic centers, the guardians of emotions. The many topics addressed, from perfect pitch to rehabilitation in Parkinson's and mirror neurons, turn out to be useful in an educational perspective, building a valid framework of reference for any further independent investigation. As a reader and commentator on this text, I hope that readers who, like me, are fascinated with the musical synaptic forest, will be able to follow the story of science without getting lost in dead ends, and fully discover just to what degree we are made of music.