Results for "Dancing"

Showing 20 of ~199,828 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

S2 Open Access 2018
Everybody Dance Now

C. Chan, Shiry Ginosar, Tinghui Zhou et al.

This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.

833 citations en Computer Science
arXiv Open Access 2026
Listen to Rhythm, Choose Movements: Autoregressive Multimodal Dance Generation via Diffusion and Mamba with Decoupled Dance Dataset

Oran Duan, Yinghua Shen, Yingzhu Lv et al.

Advances in generative models and sequence learning have greatly promoted research in dance motion generation, yet current methods still suffer from coarse semantic control and poor coherence in long sequences. In this work, we present Listen to Rhythm, Choose Movements (LRCM), a multimodal-guided diffusion framework supporting both diverse input modalities and autoregressive dance motion generation. We explore a feature decoupling paradigm for dance datasets and generalize it to the Motorica Dance dataset, separating motion capture data, audio rhythm, and professionally annotated global and local text descriptions. Our diffusion architecture integrates an audio-latent Conformer and a text-latent Cross-Conformer, and incorporates a Motion Temporal Mamba Module (MTMM) to enable smooth, long-duration autoregressive synthesis. Experimental results indicate that LRCM delivers strong performance in both functional capability and quantitative metrics, demonstrating notable potential in multimodal input scenarios and extended sequence generation. We will release the full codebase, dataset, and pretrained models publicly upon acceptance.

en cs.GR, cs.CV
DOAJ Open Access 2025
talking-dancing-howling-walking-whispering-spiraling…

Agnès Benoit

“Moving Words in Space” is an artistic and teaching practice based on a dialogue between dance improvisation and language learning. I have been developing this teaching approach with adults for the last twenty-five years, but I have also used it with children to teach English under the name “Jump’n Turn”. In this contribution, I retrace the developmental stages of “Moving Words in Space”, before describing how I use performance scores to introduce children to the English language through “Jump'n Turn”. To illustrate this process, I use “Scramble” (1970), an exercise and performance score originally created by the American dancer and choreographer Simone Forti. The tasks are extremely simple yet have a strong pedagogical potential. I explain how I guide young children through the activities so they can experience language fully through movement while performing a score that was created during the development of postmodern dance in the United States.

Special aspects of education, Drama
DOAJ Open Access 2025
The Ballet "Seven Beauties" Staged by Mykola Trehubov in 1954

Tetiana Mykolaivna Churpita

The aim of the article is to analyse the 1954 production of the ballet "Seven Beauties" staged by M. Trehubov, on the basis of primary sources. Methodology. The research rests on historical (studying events in chronological order within their historical and cultural context), historiographical (analysis of existing scholarly works on the subject), and analytical (examination of the source base, events, facts, etc.) approaches. Scientific novelty. For the first time, a comprehensive analysis of the ballet "Seven Beauties" staged by M. Trehubov in 1954 at the Lviv Theatre of Opera and Ballet has been carried out on the basis of primary sources; it identifies both the most successful choreographic decisions and vivid stage images, as well as certain shortcomings in the performers' technique. Conclusions. On 16 and 17 October 1954, the Lviv Theatre of Opera and Ballet gave the first performances of the ballet "Seven Beauties", set to music by the Azerbaijani composer Kara Karayev and staged by the theatre's chief ballet master M. Trehubov, who also served as the production's director, together with conductor S. Arbit, designer F. Nirod, and ballet master-pedagogue K. Vasina. Reviews of the production noted the choreographer's mastery, especially his capacity for a deep social reading of the work, in which the Azerbaijani people took centre stage, as well as the expressive dynamics of the musical-choreographic composition. Rejecting a pantomime-based approach, the ballet master created profound choreographic characterisations of the personages. Drawing on folk dance, M. Trehubov vividly conveyed the varied images of the seven beauties, expressively revealed the lyrical lines of the main characters, found interesting compositional solutions for the seven craftsmen, masterfully staged the labour dances and lavish divertissement scenes, and portrayed the negative characters without resorting to the grotesque. Despite the well-chosen cast, critics recommended that the ballet master rework the crowd scenes of the second act and refine the technique of individual dancers.
Overall, the new ballet was recognised as a significant creative achievement of the company and was expected to enjoy a long stage life. Yet the production was shown only 16 times on the Lviv stage before being replaced by new artistic projects.

DOAJ Open Access 2025
Dancing With Strangers: Young Legal Scholars and Their Disciplinary Predicament

Timotej Obreza

In a world where academia's mantra increasingly demands interdisciplinary engagement, legal scholarship faces a choice: uphold its traditional boundaries or embrace disciplinary confluence. This paper explores how legal knowledge maintains its identity while adapting to contemporary academic discourse. It does so through the metaphorical address of a young legal scholar, proposing two crucial epistemic perspectives: the “legal phantasm” – a lawyer's distinct cognitive toolkit for constructing and applying law, and the “spirit of interdisciplinarity” – an attitude fostering creative engagement beyond normative boundaries. By distinguishing between knowledge of law and knowledge about law, the paper argues for a nuanced approach to scholarly engagement. Using the metaphor of dancing with disciplinary strangers, it explores how legal scholars might maintain professional rigour while pursuing intellectual innovation. It argues for epistemologically conscious inquiry that recognises both the necessity of boundaries and the value of their careful transgression. The paper calls for methodological awareness rather than mere interdisciplinary hype, suggesting that meaningful scholarship requires understanding not just whether to dance, but how.

Law in general. Comparative and uniform law. Jurisprudence
arXiv Open Access 2025
MEGADance: Mixture-of-Experts Architecture for Genre-Aware 3D Dance Generation

Kaixing Yang, Xulong Tang, Ziqiao Peng et al.

Music-driven 3D dance generation has attracted increasing attention in recent years, with promising applications in choreography, virtual reality, and creative content creation. Previous research has generated promisingly realistic dance movement from audio signals. However, traditional methods underutilize genre conditioning, often treating it as an auxiliary modifier rather than a core semantic driver. This oversight compromises music-motion synchronization and disrupts dance genre continuity, particularly during complex rhythmic transitions, leading to visually unsatisfactory results. To address this challenge, we propose MEGADance, a novel architecture for music-driven 3D dance generation. By decoupling choreographic consistency into dance generality and genre specificity, MEGADance achieves significant dance quality and strong genre controllability. It consists of two stages: (1) a High-Fidelity Dance Quantization Stage (HFDQ), which encodes dance motions into a latent representation via Finite Scalar Quantization (FSQ) and reconstructs them with kinematic-dynamic constraints, and (2) a Genre-Aware Dance Generation Stage (GADG), which maps music into the latent representation through synergistic use of a Mixture-of-Experts (MoE) mechanism with a Mamba-Transformer hybrid backbone. Extensive experiments on the FineDance and AIST++ datasets demonstrate the state-of-the-art performance of MEGADance both qualitatively and quantitatively. Code is available at https://github.com/XulongT/MEGADance.

en cs.SD, cs.MM
arXiv Open Access 2025
DANCER: Dance ANimation via Condition Enhancement and Rendering with diffusion model

Yucheng Xing, Jinxing Yin, Xiaodong Liu

Recently, diffusion models have shown impressive ability in visual generation tasks. Beyond static images, more and more research attention has been drawn to the generation of realistic videos. Video generation not only places higher demands on quality but also brings the challenge of ensuring temporal continuity. Among all video generation tasks, human-involved content, such as human dancing, is even more difficult to generate due to the high degrees of freedom associated with human motion. In this paper, we propose a novel framework, named DANCER (Dance ANimation via Condition Enhancement and Rendering with Diffusion Model), for realistic single-person dance synthesis based on the most recent stable video diffusion model. As video generation is generally guided by a reference image and a video sequence, we introduce two important modules into our framework to fully benefit from these two inputs. More specifically, we design an Appearance Enhancement Module (AEM) to focus on the details of the reference image during generation, and extend the motion guidance through a Pose Rendering Module (PRM) to capture pose conditions from extra domains. To further improve the generation capability of our model, we also collect a large amount of video data from the Internet and construct a novel dataset, TikTok-3K, to enhance model training. The effectiveness of the proposed model has been evaluated through extensive experiments on real-world datasets, where its performance is superior to that of state-of-the-art methods. All data and code will be released upon acceptance.

en cs.CV
arXiv Open Access 2025
Dyads: Artist-Centric, AI-Generated Dance Duets

Zixuan Wang, Luis Zerkowski, Ilya Vidrin et al.

Existing AI-generated dance methods primarily train on motion capture data from solo dance performances, but a critical feature of dance in nearly any genre is the interaction of two or more bodies in space. Moreover, many works at the intersection of AI and dance fail to incorporate the ideas and needs of the artists themselves into their development process, yielding models that produce far more useful insights for the AI community than for the dance community. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers and detailing how the technical methodology can be shaped by ongoing co-creation with the artistic stakeholders who curated the movement data. Our model is a probability-and-attention-based Variational Autoencoder that generates a choreographic partner conditioned on an input dance sequence. We construct a custom loss function to enhance the smoothness and coherence of the generated choreography. Our code is open-source, and we also document strategies for other interdisciplinary research teams to facilitate collaboration and strong communication between artists and technologists.

en cs.LG, cs.CY
arXiv Open Access 2025
Walk Before You Dance: High-fidelity and Editable Dance Synthesis via Generative Masked Motion Prior

Foram N Shah, Parshwa Shah, Muhammad Usama Saleem et al.

Recent advances in dance generation have enabled the automatic synthesis of 3D dance motions. However, existing methods still face significant challenges in simultaneously achieving high realism, precise dance-music synchronization, diverse motion expression, and physical plausibility. To address these limitations, we propose a novel approach that leverages a generative masked text-to-motion model as a distribution prior to learn a probabilistic mapping from diverse guidance signals, including music, genre, and pose, into high-quality dance motion sequences. Our framework also supports semantic motion editing, such as motion inpainting and body part modification. Specifically, we introduce a multi-tower masked motion model that integrates a text-conditioned masked motion backbone with two parallel, modality-specific branches: a music-guidance tower and a pose-guidance tower. The model is trained using synchronized and progressive masked training, which allows effective infusion of the pretrained text-to-motion prior into the dance synthesis process while enabling each guidance branch to optimize independently through its own loss function, mitigating gradient interference. During inference, we introduce classifier-free logits guidance and pose-guided token optimization to strengthen the influence of music, genre, and pose signals. Extensive experiments demonstrate that our method sets a new state of the art in dance generation, significantly advancing both the quality and editability over existing approaches. Project Page available at https://foram-s1.github.io/DanceMosaic/

en cs.GR, cs.AI
arXiv Open Access 2025
Music-Aligned Holistic 3D Dance Generation via Hierarchical Motion Modeling

Xiaojie Li, Ronghui Li, Shukai Fang et al.

Well-coordinated, music-aligned holistic dance enhances emotional expressiveness and audience engagement. However, generating such dances remains challenging due to the scarcity of holistic 3D dance datasets, the difficulty of achieving cross-modal alignment between music and dance, and the complexity of modeling interdependent motion across the body, hands, and face. To address these challenges, we introduce SoulDance, a high-precision music-dance paired dataset captured via professional motion capture systems, featuring meticulously annotated holistic dance movements. Building on this dataset, we propose SoulNet, a framework designed to generate music-aligned, kinematically coordinated holistic dance sequences. SoulNet consists of three principal components: (1) Hierarchical Residual Vector Quantization, which models complex, fine-grained motion dependencies across the body, hands, and face; (2) Music-Aligned Generative Model, which composes these hierarchical motion units into expressive and coordinated holistic dance; (3) Music-Motion Retrieval Module, a pre-trained cross-modal model that functions as a music-dance alignment prior, ensuring temporal synchronization and semantic coherence between generated dance and input music throughout the generation process. Extensive experiments demonstrate that SoulNet significantly surpasses existing approaches in generating high-quality, music-coordinated, and well-aligned holistic 3D dance sequences.

en cs.MM, cs.SD
DOAJ Open Access 2024
I Am Reading the Poems of Kristina Hočevar

Alojzija Zupan Sosič

The article takes into account the defining characteristics of contemporary poetry and in this sense introduces innovative approaches to reading Kristina Hočevar's poetry collection Na zobeh aluminij, na ustnicah kreda (Aluminium on Teeth, Chalk on Lips): a cogmotive and holistic approach that also considers the interpretive focus and lyrical dominant. At the same time, it introduces two ways of selecting poems for interpretation: a very short poem ("give me marbles") without placing it in a wider context, and a longer poem ("only these walls are your walls") within the context of the entire collection, the author's poetics, and lesbian poetry. While the cogmotive approach connects the emotional and rational areas, the holistic approach assumes the impossibility of finding the only meaningful core of the text. When I consider the interpretive focus and the lyrical dominant within the framework of the latter, I am constantly aware that it is more important to reflect the point of interpretation, which is significantly more intense and thus more successful when one tries to focus on the interpretive center and on the dominant poetic qualities separately, rather than both simultaneously. The intertwining of the interpretive focus and the lyrical dominant, i.e. the juxtaposition of the reception and production aspects, enabled the interpretation to be diverse and elastic. If the interpretative center in Kristina Hočevar's two poems is non-verbal, in the shorter one an allusion to children's play and in the longer one the feeling of being outcast and at the same time at home in a safe environment, then the lyrical dominant in the shorter poem ("give me marbles") is the metaphorical equation of eyes and marbles, and in the longer one the action of symbols such as walls, a black sun, crimson, shackles, iron bars, a dancing body.
Both poems prove the quality of the poet’s poetics, recognizable in the dynamism of the images, non-naive and non-pathetic sincerity, the depth of the dialogic view, the ironic and grotesque fluidity of the self and the collective, or the transience of identities, in which lesbian determinacy is important.

History of scholarship and learning. The humanities, Literature (General)
arXiv Open Access 2024
LM2D: Lyrics- and Music-Driven Dance Synthesis

Wenjie Yin, Xuejiao Zhao, Yi Yu et al.

Dance typically involves professional choreography with complex movements that follow a musical rhythm and can also be influenced by lyrical content. Integrating lyrics, in addition to the auditory dimension, enriches the foundational tone and makes motion generation more amenable to semantic meaning. However, existing dance synthesis methods tend to model motion conditioned only on audio signals. In this work, we make two contributions to bridge this gap. First, we propose LM2D, a novel probabilistic architecture that incorporates a multimodal diffusion model with consistency distillation, designed to create dance conditioned on both music and lyrics in a single diffusion generation step. Second, we introduce the first 3D dance-motion dataset that encompasses both music and lyrics, obtained with pose estimation technologies. We evaluate our model against music-only baseline models with objective metrics and human evaluations, including dancers and choreographers. The results demonstrate that LM2D is able to produce realistic and diverse dance matching both lyrics and music. A video summary can be accessed at: https://youtu.be/4XCgvYookvA.

en cs.SD, cs.AI
arXiv Open Access 2024
MIDGET: Music Conditioned 3D Dance Generation

Jinwu Wang, Wei Mao, Miaomiao Liu

In this paper, we introduce a MusIc conditioned 3D Dance GEneraTion model named MIDGET, based on a Dance motion Vector Quantised Variational AutoEncoder (VQ-VAE) model and a Motion Generative Pre-Training (GPT) model, to generate vibrant and high-quality dances that match the music rhythm. To tackle challenges in the field, we introduce three new components: 1) a pre-trained memory codebook based on the Motion VQ-VAE model to store different human pose codes, 2) a Motion GPT model that generates pose codes from music and motion encoders, and 3) a simple framework for music feature extraction. We compare against existing state-of-the-art models and perform ablation experiments on AIST++, the largest publicly available music-dance dataset. Experiments demonstrate that our proposed framework achieves state-of-the-art performance in motion quality and in its alignment with the music.

en cs.SD, cs.CV
arXiv Open Access 2024
Beat-It: Beat-Synchronized Multi-Condition 3D Dance Generation

Zikai Huang, Xuemiao Xu, Cheng Xu et al.

Dance, as an art form, fundamentally hinges on the precise synchronization with musical beats. However, achieving aesthetically pleasing dance sequences from music is challenging, with existing methods often falling short in controllability and beat alignment. To address these shortcomings, this paper introduces Beat-It, a novel framework for beat-specific, key pose-guided dance generation. Unlike prior approaches, Beat-It uniquely integrates explicit beat awareness and key pose guidance, effectively resolving two main issues: the misalignment of generated dance motions with musical beats, and the inability to map key poses to specific beats, critical for practical choreography. Our approach disentangles beat conditions from music using a nearest beat distance representation and employs a hierarchical multi-condition fusion mechanism. This mechanism seamlessly integrates key poses, beats, and music features, mitigating condition conflicts and offering rich, multi-conditioned guidance for dance generation. Additionally, a specially designed beat alignment loss ensures the generated dance movements remain in sync with the designated beats. Extensive experiments confirm Beat-It's superiority over existing state-of-the-art methods in terms of beat alignment and motion controllability.

en cs.GR, cs.SD

Page 5 of 9992