Results for "Dancing"

Showing 20 of ~199,828 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

arXiv Open Access 2026
Learning Quantised Structure-Preserving Motion Representations for Dance Fingerprinting

Arina Kharlamova, Bowei He, Chen Ma et al.

We present DANCEMATCH, an end-to-end framework for motion-based dance retrieval, the task of identifying semantically similar choreographies directly from raw video, defined as DANCE FINGERPRINTING. While existing motion analysis and retrieval methods can compare pose sequences, they rely on continuous embeddings that are difficult to index, interpret, or scale. In contrast, DANCEMATCH constructs compact, discrete motion signatures that capture the spatio-temporal structure of dance while enabling efficient large-scale retrieval. Our system integrates Skeleton Motion Quantisation (SMQ) with Spatio-Temporal Transformers (STT) to encode human poses, extracted via Apple CoMotion, into a structured motion vocabulary. We further design DANCE RETRIEVAL ENGINE (DRE), which performs sub-linear retrieval using a histogram-based index followed by re-ranking for refined matching. To facilitate reproducible research, we release DANCETYPESBENCHMARK, a pose-aligned dataset annotated with quantised motion tokens. Experiments demonstrate robust retrieval across diverse dance styles and strong generalisation to unseen choreographies, establishing a foundation for scalable motion fingerprinting and quantitative choreographic analysis.

en cs.CV, cs.AI
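The DANCEMATCH abstract above describes sub-linear retrieval over discrete motion tokens via a histogram-based index. A minimal sketch of that general idea, assuming clips are already quantised into integer token ids (the clip names, vocabulary size, and cosine scoring are illustrative assumptions, not the paper's implementation):

```python
# Sketch of histogram-based retrieval over quantised motion tokens.
# NOT the DANCEMATCH implementation -- a toy illustration of the idea.
from collections import Counter
from math import sqrt

def histogram(tokens, vocab_size):
    """Normalised token histogram: a fixed-length signature per clip."""
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts.get(t, 0) / total for t in range(vocab_size)]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def retrieve(query_tokens, index, vocab_size, top_k=2):
    """Rank indexed clips by histogram similarity to the query."""
    q = histogram(query_tokens, vocab_size)
    scored = [(cosine(q, h), name) for name, h in index.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

# Hypothetical indexed clips (token ids drawn from a vocabulary of 8).
index = {
    "waltz_01":  histogram([0, 1, 0, 1, 2, 0, 1], 8),
    "hiphop_03": histogram([5, 6, 7, 5, 6, 7, 5], 8),
}
print(retrieve([0, 1, 0, 2, 1, 0], index, 8, top_k=1))
```

Because each clip collapses to a fixed-length histogram, candidate filtering scales with the index rather than with sequence length; a re-ranking stage (as the abstract describes) would then refine the shortlist.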
arXiv Open Access 2026
Skeleton2Stage: Reward-Guided Fine-Tuning for Physically Plausible Dance Generation

Jidong Jia, Youjian Zhang, Huan Fu et al.

Despite advances in dance generation, most methods are trained in the skeletal domain and ignore mesh-level physical constraints. As a result, motions that look plausible as joint trajectories often exhibit body self-penetration and Foot-Ground Contact (FGC) anomalies when visualized with a human body mesh, reducing the aesthetic appeal of generated dances and limiting their real-world applications. We address this skeleton-to-mesh gap by deriving physics-based rewards from the body mesh and applying Reinforcement Learning Fine-Tuning (RLFT) to steer the diffusion model toward physically plausible motion synthesis under mesh visualization. Our reward design combines (i) an imitation reward that measures a motion's general plausibility by its imitability in a physical simulator (penalizing penetration and foot skating), and (ii) a Foot-Ground Deviation (FGD) reward with test-time FGD guidance to better capture the dynamic foot-ground interaction in dance. However, we find that the physics-based rewards tend to push the model to generate freezing motions for fewer physical anomalies and better imitability. To mitigate it, we propose an anti-freezing reward to preserve motion dynamics while maintaining physical plausibility. Experiments on multiple dance datasets consistently demonstrate that our method can significantly improve the physical plausibility of generated motions, yielding more realistic and aesthetically pleasing dances. The project page is available at: https://jjd1123.github.io/Skeleton2Stage/

en cs.CV
arXiv Open Access 2025
DanceEditor: Towards Iterative Editable Music-driven Dance Generation with Open-Vocabulary Descriptions

Hengyuan Zhang, Zhe Li, Xingqun Qi et al.

Generating coherent and diverse human dances from music signals has seen tremendous progress in animating virtual avatars. While existing methods support direct dance synthesis, they fail to recognize that enabling users to edit dance movements is far more practical in real-world choreography scenarios. Moreover, the lack of high-quality dance datasets incorporating iterative editing also limits addressing this challenge. To achieve this goal, we first construct DanceRemix, a large-scale multi-turn editable dance dataset comprising over 25.3M dance frames and 84.5K prompt pairs. In addition, we propose a novel framework for iterative and editable dance generation coherently aligned with given music signals, namely DanceEditor. Considering that dance motion should both follow the musical rhythm and support iterative editing by user descriptions, our framework is built upon a prediction-then-editing paradigm unifying multi-modal conditions. At the initial prediction stage, our framework improves the fidelity of generated results by directly modeling dance movements from tailored, aligned music. Moreover, at the subsequent iterative editing stages, we incorporate text descriptions as conditioning information to produce the edited results through a specifically designed Cross-modality Editing Module (CEM). Specifically, CEM adaptively integrates the initial prediction with music and text prompts as temporal motion cues to guide the synthesized sequences. Thereby, the results remain harmonized with the music while preserving fine-grained semantic alignment with text descriptions. Extensive experiments demonstrate that our method outperforms the state-of-the-art models on our newly collected DanceRemix dataset. Code is available at https://lzvsdy.github.io/DanceEditor/.

en cs.GR, cs.CV
arXiv Open Access 2025
Dance Style Recognition Using Laban Movement Analysis

Muhammad Turab, Philippe Colantoni, Damien Muselet et al.

The growing interest in automated movement analysis has presented new challenges in the recognition of complex human activities, including dance. This study focuses on dance style recognition using features extracted with Laban Movement Analysis (LMA). Previous studies of dance style recognition often focus on cross-frame movement analysis, which limits the ability to capture temporal context and dynamic transitions between movements. This gap highlights the need for a method that adds temporal context to LMA features. To this end, we introduce a novel pipeline which combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. To address the temporal limitation, we propose a sliding-window approach that captures movement evolution across time in the features. These features are then used to train various machine learning methods for classification, and explainable AI methods are applied to evaluate the contribution of each feature to classification performance. Our proposed method achieves a best classification accuracy of 99.18%, which shows that the addition of temporal context significantly improves dance style recognition performance.

en cs.CV, cs.AI
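The sliding-window idea in the abstract above can be sketched in a few lines: per-frame descriptors are aggregated over a window so each output feature carries temporal context. The particular window statistics (mean plus net change) are an assumption for illustration, not the paper's exact LMA feature set:

```python
# Sketch: adding temporal context to per-frame features with a sliding
# window. The mean/net-change statistics are illustrative assumptions.
def sliding_window_features(frames, window=3, stride=1):
    """frames: list of per-frame feature vectors -> list of window features."""
    out = []
    for start in range(0, len(frames) - window + 1, stride):
        chunk = frames[start:start + window]
        dims = len(chunk[0])
        mean = [sum(f[d] for f in chunk) / window for d in range(dims)]
        delta = [chunk[-1][d] - chunk[0][d] for d in range(dims)]  # net change
        out.append(mean + delta)  # concatenate static + dynamic descriptors
    return out

# Four frames of a 2-D toy descriptor (e.g. expansion, weight effort).
feats = sliding_window_features([[0.0, 1.0], [0.2, 1.0], [0.4, 1.2], [0.6, 1.4]])
print(len(feats), feats[0])
```

A classifier trained on these windowed vectors sees movement evolution, not just static poses, which is the temporal context the study credits for its accuracy gain.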
arXiv Open Access 2025
Dress&Dance: Dress up and Dance as You Like It - Technical Preview

Jun-Kun Chen, Aayush Bansal, Minh Phuoc Vo et al.

We present Dress&Dance, a video diffusion framework that generates high-quality, 5-second, 24 FPS virtual try-on videos at 1152x720 resolution of a user wearing desired garments while moving in accordance with a given reference video. Our approach requires a single user image and supports a range of tops, bottoms, and one-piece garments, as well as simultaneous tops and bottoms try-on in a single pass. Key to our framework is CondNet, a novel conditioning network that leverages attention to unify multi-modal inputs (text, images, and videos), thereby enhancing garment registration and motion fidelity. CondNet is trained on heterogeneous training data, combining limited video data and a larger, more readily available image dataset, in a multistage progressive manner. Dress&Dance outperforms existing open-source and commercial solutions and enables a high-quality and flexible try-on experience.

en cs.CV, cs.LG
arXiv Open Access 2025
MatchDance: Mamba-Transformer Architecture with Uniform Tokenization for High-Quality 3D Dance Generation

Kaixing Yang, Xulong Tang, Ziqiao Peng et al.

Music-to-dance generation represents a challenging yet pivotal task at the intersection of choreography, virtual reality, and creative content generation. Despite its significance, existing methods face substantial limitations in achieving choreographic consistency. To address this challenge, we propose MatchDance, a novel framework for music-to-dance generation that constructs a latent representation to enhance choreographic consistency. MatchDance employs a two-stage design: (1) a Kinematic-Dynamic-based Quantization Stage (KDQS), which encodes dance motions into a latent representation by Finite Scalar Quantization (FSQ) with kinematic-dynamic constraints and reconstructs them with high fidelity, and (2) a Hybrid Music-to-Dance Generation Stage (HMDGS), which uses a Mamba-Transformer hybrid architecture to map music into the latent representation, followed by the KDQS decoder to generate 3D dance motions. Additionally, a music-dance retrieval framework and comprehensive metrics are introduced for evaluation. Extensive experiments on the FineDance dataset demonstrate state-of-the-art performance.

en cs.SD, cs.GR
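Finite Scalar Quantization (FSQ), named in the abstract above, replaces a learned VQ codebook with independent per-dimension rounding: each bounded latent coordinate snaps to one of a small fixed set of levels, so the discrete code is just a tuple of integers. A minimal sketch (the level counts and inputs are arbitrary illustrations, not MatchDance's configuration):

```python
# Sketch of Finite Scalar Quantization: round each latent dimension to one
# of `levels[d]` evenly spaced values in [-1, 1]. Toy parameters only.
def fsq_quantize(z, levels):
    """Map each latent z[d], clipped to [-1, 1], to a level index."""
    code = []
    for value, n in zip(z, levels):
        half = (n - 1) / 2
        idx = round(max(-1.0, min(1.0, value)) * half + half)  # nearest level
        code.append(int(idx))
    return code

def fsq_dequantize(code, levels):
    """Invert a level index back to its value in [-1, 1]."""
    return [idx / ((n - 1) / 2) - 1.0 for idx, n in zip(code, levels)]

levels = [5, 5, 3]                 # per-dimension level counts (assumed)
code = fsq_quantize([0.3, -0.8, 0.1], levels)
print(code, fsq_dequantize(code, levels))
```

Because there is no codebook to collapse, FSQ sidesteps the codebook-usage problems of standard VQ while still yielding the discrete latent space that the second (generation) stage maps music into.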
arXiv Open Access 2025
GCDance: Genre-Controlled Music-Driven 3D Full Body Dance Generation

Xinran Liu, Xu Dong, Shenbin Qian et al.

Music-driven dance generation is a challenging task as it requires strict adherence to genre-specific choreography while ensuring physically realistic and precisely synchronized dance sequences with the music's beats and rhythm. Although significant progress has been made in music-conditioned dance generation, most existing methods struggle to convey specific stylistic attributes in generated dance. To bridge this gap, we propose a diffusion-based framework for genre-specific 3D full-body dance generation, conditioned on both music and descriptive text. To effectively incorporate genre information, we develop a text-based control mechanism that maps input prompts, either explicit genre labels or free-form descriptive text, into genre-specific control signals, enabling precise and controllable text-guided generation of genre-consistent dance motions. Furthermore, to enhance the alignment between music and textual conditions, we leverage the features of a music foundation model, facilitating coherent and semantically aligned dance synthesis. Last, to balance the objectives of extracting text-genre information and maintaining high-quality generation results, we propose a novel multi-task optimization strategy. This effectively balances competing factors such as physical realism, spatial accuracy, and text classification, significantly improving the overall quality of the generated sequences. Extensive experimental results obtained on the FineDance and AIST++ datasets demonstrate the superiority of GCDance over the existing state-of-the-art approaches.

en cs.GR, cs.CV
arXiv Open Access 2025
Emotion Recognition in Contemporary Dance Performances Using Laban Movement Analysis

Muhammad Turab, Philippe Colantoni, Damien Muselet et al.

This paper presents a novel framework for emotion recognition in contemporary dance by improving existing Laban Movement Analysis (LMA) feature descriptors and introducing robust, novel descriptors that capture both quantitative and qualitative aspects of the movement. Our approach extracts expressive characteristics from 3D keypoint data of professional dancers performing contemporary dance under various emotional states, and trains multiple classifiers, including Random Forests and Support Vector Machines. Additionally, we provide an in-depth explanation of the features and their impact on model predictions using explainable machine learning methods. Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human--computer interaction, with a best accuracy of 96.85%.

en cs.CV, cs.AI
arXiv Open Access 2025
MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation

Prerit Gupta, Jason Alexander Fotso-Puepi, Zhengyuan Li et al.

We introduce Multimodal DuetDance (MDD), a diverse multimodal benchmark dataset designed for text-controlled and music-conditioned 3D duet dance motion generation. Our dataset comprises 620 minutes of high-quality motion capture data performed by professional dancers, synchronized with music, and detailed with over 10K fine-grained natural language descriptions. The annotations capture a rich movement vocabulary, detailing spatial relationships, body movements, and rhythm, making MDD the first dataset to seamlessly integrate human motions, music, and text for duet dance generation. We introduce two novel tasks supported by our dataset: (1) Text-to-Duet, where, given music and a textual prompt, both the leader's and the follower's dance motions are generated; and (2) Text-to-Dance Accompaniment, where, given music, a textual prompt, and the leader's motion, the follower's motion is generated in a cohesive, text-aligned manner. We include baseline evaluations on both tasks to support future research.

en cs.GR, cs.CV
DOAJ Open Access 2025
Dance Science Research

Réka Asztalos

In many areas of dance research, there is a notable lack of background literature. The two volumes discussed here, Introduction to dance research methodology [Bevezetés a tánccal kapcsolatos kutatások módszertanába] (2020, edited by Lanszki), and Research methods in the dance sciences (2023, edited by Welsh, et al.), aim to address this gap, providing guidance on both theoretical and empirical research in dance studies.

Special aspects of education, Dancing
DOAJ Open Access 2024
Against Discovery

Sarah Elizabeth Lass

This essay investigates how settler subjectivity shapes modes of attention in post-Judson Western contemporary dance, specifically through this dancing culture’s embrace and value of “discovery” as an attentional framework and aim of dancing. Engaging Mark Rifkin’s Settler Common Sense along with existing research into the nature and operation of attention in Western contemporary dance, the writing highlights the ways in which “discovery” is mobilized through assumptions of porousness, availability, and worldmaking in the space of encounter between a dancer moving in this lineage and their surrounds, thereby enacting and extending everyday, commonplace settler modes of feeling and perception that dynamize ongoing indigenous dispossession. The essay concludes with a summary of a “coordination” practice, initiated and refined in the context of an advanced contemporary dance technique course at Smith College in the spring of 2023. Through analysis of two constituent “coordination” scores and informed by conversations with dance students in the course, the writing explores how “coordination” as an attentional framework supports movers’ awareness of both implication and distinction within their surrounds, and honors and upholds both mover and surrounds as always already underway and in the midst.

arXiv Open Access 2023
Explore 3D Dance Generation via Reward Model from Automatically-Ranked Demonstrations

Zilin Wang, Haolin Zhuang, Lu Li et al.

This paper presents an Exploratory 3D Dance generation framework, E3D2, designed to address the exploration capability deficiency in existing music-conditioned 3D dance generation models. Current models often generate monotonous and simplistic dance sequences that misalign with human preferences because they lack exploration capabilities. The E3D2 framework involves a reward model trained from automatically-ranked dance demonstrations, which then guides the reinforcement learning process. This approach encourages the agent to explore and generate high quality and diverse dance movement sequences. The soundness of the reward model is both theoretically and experimentally validated. Empirical experiments demonstrate the effectiveness of E3D2 on the AIST++ dataset. Project Page: https://sites.google.com/view/e3d2.

en cs.HC, cs.AI
arXiv Open Access 2023
TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration

Kehong Gong, Dongze Lian, Heng Chang et al.

We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities. Unlike existing works that generate dance movements using a single modality such as music, our goal is to produce richer dance movements guided by the instructive information provided by the text. However, the lack of paired motion data with both music and text modalities limits the ability to generate dance movements that integrate both. To alleviate this challenge, we propose to utilize a 3D human motion VQ-VAE to project the motions of the two datasets into a latent space consisting of quantized vectors, which effectively mix the motion tokens from the two datasets with different distributions for training. Additionally, we propose a cross-modal transformer to integrate text instructions into motion generation architecture for generating 3D dance movements without degrading the performance of music-conditioned dance generation. To better evaluate the quality of the generated motion, we introduce two novel metrics, namely Motion Prediction Distance (MPD) and Freezing Score (FS), to measure the coherence and freezing percentage of the generated motion. Extensive experiments show that our approach can generate realistic and coherent dance movements conditioned on both text and music while maintaining comparable performance with the two single modalities. Code is available at https://garfield-kh.github.io/TM2D/.

en cs.CV
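The TM2D abstract above hinges on VQ-VAE quantisation: continuous motion features are snapped to their nearest codebook vector, so motions from differently distributed datasets share one discrete token space. A toy sketch of that lookup step (the two-entry codebook and inputs are assumptions; in practice the codebook is learned jointly with the encoder/decoder):

```python
# Sketch of the VQ-VAE quantisation step: nearest-codebook-entry lookup.
# The tiny hand-written codebook is a toy assumption for illustration.
def quantize(vec, codebook):
    """Return the index of the nearest codebook entry (squared L2 distance)."""
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(vec, entry))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # learned in practice
motion_features = [[0.1, -0.1], [0.9, 0.2], [0.2, 0.8]]
tokens = [quantize(v, codebook) for v in motion_features]
print(tokens)
```

Once both the music-paired and text-paired motion datasets are expressed as such token sequences, a single transformer can be trained over the mixed token streams, which is what lets TM2D condition on either modality.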
arXiv Open Access 2023
Chiral edge waves in a dance-based human topological insulator

Matthew Du, Juan B. Pérez-Sánchez, Jorge A. Campos-Gonzalez-Angulo et al.

Topological insulators are insulators in the bulk but feature chiral energy propagation along the boundary. This property is topological in nature and therefore robust to disorder. Originally discovered in electronic materials, topologically protected boundary transport has since been observed in many other physical systems. Thus, it is natural to ask whether this phenomenon finds relevance in a broader context. We choreograph a dance in which a group of humans, arranged on a square grid, behave as a topological insulator. The dance features unidirectional flow of movement through dancers on the lattice edge. This effect persists when people are removed from the dance floor. Our work extends the applicability of wave physics to the performance arts.

en cond-mat.mes-hall
arXiv Open Access 2023
DanceAnyWay: Synthesizing Beat-Guided 3D Dances with Randomized Temporal Contrastive Learning

Aneesh Bhattacharya, Manas Paranjape, Uttaran Bhattacharya et al.

We present DanceAnyWay, a generative learning method to synthesize beat-guided dances of 3D human characters synchronized with music. Our method learns to disentangle the dance movements at the beat frames from the dance movements at all the remaining frames by operating at two hierarchical levels. At the coarser "beat" level, it encodes the rhythm, pitch, and melody information of the input music via dedicated feature representations only at the beat frames. It leverages them to synthesize the beat poses of the target dances using a sequence-to-sequence learning framework. At the finer "repletion" level, our method encodes similar rhythm, pitch, and melody information from all the frames of the input music via dedicated feature representations. It generates the full dance sequences by combining the synthesized beat and repletion poses and enforcing plausibility through an adversarial learning framework. Our training paradigm also enforces fine-grained diversity in the synthesized dances through a randomized temporal contrastive loss, which ensures different segments of the dance sequences have different movements and avoids motion freezing or collapsing to repetitive movements. We evaluate the performance of our approach through extensive experiments on the benchmark AIST++ dataset and observe improvements of about 7%-12% in motion quality metrics and 1.5%-4% in motion diversity metrics over the current baselines, respectively. We also conducted a user study to evaluate the visual quality of our synthesized dances. We note that, on average, the samples generated by our method were about 9-48% more preferred by the participants and had a 4-27% better five-point Likert-scale score over the best available current baseline in terms of motion quality and synchronization. Our source code and project page are available at https://github.com/aneeshbhattacharya/DanceAnyWay.

en cs.SD, cs.GR
DOAJ Open Access 2023
Understanding Circle Time Practices in Montessori Early Childhood Settings

Andrea Koczela, Kateri Carver

Circle time is commonplace in traditional preschools, yet there are few references to the practice in Montessori’s writings or in major Montessori organizations’ and teacher education standards. This article investigates whether circle time is frequent in Montessori 3–6-year-old classrooms using data from a widely distributed Qualtrics survey. The results, from 276 respondents spanning all 50 states, provide insight into the circle time practices of United States-based preschool Montessori teachers, also known in Montessori classrooms as guides. We present novel information regarding circle time duration and frequency, types of circle time activities, Montessori guides’ circle time training and planning, whether children’s circle time attendance is free choice or compulsory, and the nature of circle time in programs associated with Association Montessori Internationale versus American Montessori Society. Results revealed that 92% of survey participants have circle time every day or most days; most participants hold circle time for 20 minutes or less; the most common circle time events were show-and-tell, calendar work, vocabulary lessons, Grace and Courtesy lessons, read aloud discussions, dancing and movement, snack time, general conversation, read aloud (stories), and birthday celebrations. We found that many of the most frequent circle time activities do not align with children’s preferences, teacher preferences, or Early Childhood best practices. Our work invites Montessorians to engage in the work of reconstructing the traditional practice of circle time to better align with Montessori hallmarks of choice, development of the will, and joyfulness.

Education, Theory and practice of education
DOAJ Open Access 2023
Emotional labor mediates the associations between self-consciousness and flow in dancers

Xiaohui Liu, Yu Liao, Jiayi Tan et al.

Emotional labor has been a focal point in occupational well-being literature, but studies have long overlooked an important group of emotional laborers: performers. This research represents a pioneering effort to examine dancers’ adoption of emotional labor strategies, their antecedent of self-consciousness, and the outcome of flow experience. We explored these elements both in the traditional setting of stage dancing and in the novel context of online dance classes without on-site spectators during the COVID-19 pandemic. The results revealed that dancers employed all three common emotional labor strategies: surface acting, deep acting, and expression of naturally felt emotions, with deep acting being the most frequent. In the traditional setting, only the expression of naturally felt emotions mediated the positive effect of private self-consciousness and the negative effect of public self-consciousness on flow experience. In contrast, in the online setting, only private self-consciousness impacted flow through the mediation of deep acting and expression of naturally felt emotions. This exploratory study bridges dramaturgy-originated theories of emotional labor with empirical performing arts research, preliminarily advancing knowledge in the relevant fields of dance education, self-presentation, and flow studies.

Medicine, Science
DOAJ Open Access 2023
Dancing the Nanopore limbo – Nanopore metagenomics from small DNA quantities for bacterial genome reconstruction

Sophie A. Simon, Katharina Schmidt, Lea Griesdorn et al.

Background: While genome-resolved metagenomics has revolutionized our understanding of microbial and genetic diversity in environmental samples, assemblies of short reads often result in incomplete and/or highly fragmented metagenome-assembled genomes (MAGs), hampering in-depth genomics. Although Nanopore sequencing has increasingly been used in microbial metagenomics, as long reads greatly improve the assembly quality of MAGs, the recommended DNA quantity usually exceeds the recoverable amount of DNA from environmental samples. Here, we evaluated lower-than-recommended DNA quantities for Nanopore library preparation by determining sequencing quality, community composition, assembly quality, and recovery of MAGs. Results: We generated 27 Nanopore metagenomes using the commercially available ZYMO mock community and varied the amount of input DNA from 1000 ng (the recommended minimum) down to 1 ng in eight steps. The quality of the generated reads remained stable across all input levels. The read mapping accuracy, which reflects how well the reads match a known reference genome, was consistently high across all libraries. The relative abundance of the species in the metagenomes was stable down to input levels of 50 ng. High-quality MAGs (> 95% completeness, ≤ 5% contamination) could be recovered from metagenomes down to 35 ng of input material. When combined with publicly available Illumina reads for the mock community, Nanopore reads from input quantities as low as 1 ng improved the quality of hybrid assemblies. Conclusion: Our results show that the recommended DNA amount for Nanopore library preparation can be substantially reduced without adverse effects on genome recovery, and still bolster hybrid assemblies when combined with short-read data. We posit that the results presented herein will enable studies to improve genome recovery from low-biomass environments, enhancing microbiome understanding.

Biotechnology, Genetics
DOAJ Open Access 2023
Eight-Week Zumba Training for Women in the New Normal Period

I Gede Dharma Utamayasa, Moh Hanafi, Yandika Fefrian Rosmi et al.

Zumba shares similarities with other aerobic exercises such as dancing and cycling, as it enhances cardiovascular health and facilitates calorie burning. However, what distinguishes Zumba is its emphasis on enjoyment and the incorporation of dance movements from various music genres. This form of aerobic exercise involves sustained moderate to high-intensity activity without excessive fatigue. It strengthens the heart muscle and promotes efficient blood circulation. Furthermore, aerobics can effectively reduce blood pressure in individuals with hypertension. This positive effect is attributed to the improvement of blood vessel function, facilitating better blood flow and alleviating strain on the heart. Regular aerobic exercise also contributes to weight loss, which further aids in lowering blood pressure. Nevertheless, the impact of Zumba on VO2 max ability remains to be explored. In this study, a pre-experimental design was employed, involving one-hour Zumba sessions conducted over eight weeks, comprising approximately 12 tracks prepared by the instructor. The study sample consisted of 30 participants engaged in Zumba classes. Prior to Zumba, the Jackson non-exercise test formula was employed to assess VO2 max fitness. Post-Zumba, the 1-mile jog test formula was utilized to measure VO2 max fitness. The study findings indicate a significant increase in the mean VO2max value after treatment, compared to the lower mean value observed before treatment. Specifically, the mean value of VO2max increased from 38.46 ml/kg/minute before treatment to 47.83 ml/kg/minute after treatment. These results suggest that Zumba exercise enhances aerobic fitness by positively impacting cardiovascular biological mechanisms in young women during the transition to the new normal period.

Sports, Sports medicine
DOAJ Open Access 2022
The vertebrate Embryo Clock: Common players dancing to a different beat

Gil Carraco, Ana P. Martins-Jesus et al.

Vertebrate embryo somitogenesis is the earliest morphological manifestation of the characteristic patterned structure of the adult axial skeleton. Pairs of somites flanking the neural tube are formed periodically during early development, and the molecular mechanisms in temporal control of this early patterning event have been thoroughly studied. The discovery of a molecular Embryo Clock (EC) underlying the periodicity of somite formation shed light on the importance of gene expression dynamics for pattern formation. The EC is now known to be present in all vertebrate organisms studied and this mechanism was also described in limb development and stem cell differentiation. An outstanding question, however, remains unanswered: what sets the different EC paces observed in different organisms and tissues? This review aims to summarize the available knowledge regarding the pace of the EC, its regulation and experimental manipulation and to expose new questions that might help shed light on what is still to unveil.

Biology (General)

Page 7 of 9992