Federated graph learning (FGL) enables collaborative training on graph data across multiple clients. With the rise of large language models (LLMs), the textual attributes of FGL graphs are attracting growing attention. Text-attributed graph federated learning (TAG-FGL) improves FGL by explicitly leveraging LLMs to process and integrate these textual features. However, current TAG-FGL methods face three main challenges. \textbf{(1) Overhead.} Using LLMs to process long texts incurs high token and computation costs; to make TAG-FGL practical, we introduce graph condensation (GC), which compresses multi-hop texts and neighborhoods into a condensed core with fixed LLM surrogates, but this choice also brings new issues. \textbf{(2) Suboptimality.} One-shot condensation is not client-adaptive, leading to suboptimal performance. \textbf{(3) Interpretability.} LLM-based condensation further introduces a black-box bottleneck: summaries lack faithful attribution and clear grounding in specific source spans, making local inspection and auditing difficult. To address these issues, we propose \textbf{DANCE}, a new TAG-FGL paradigm built on GC. To remedy \textbf{suboptimality}, DANCE performs a round-wise, model-in-the-loop condensation refresh using the latest global model. To enhance \textbf{interpretability}, DANCE preserves provenance by storing locally inspectable evidence packs that trace predictions to the selected neighbors and source text spans. Across 8 TAG datasets, DANCE improves accuracy by \textbf{2.33\%} at an \textbf{8\%} condensation ratio while using \textbf{33.42\%} fewer tokens than baselines.
In this paper, I intend to show that the “sense of shame” can be understood as the foundation of total pedagogy in Plato's Laws. By total pedagogy, I mean the meticulous regulation of life in all its details, intended to make laws persuasive rather than simply punitive. To gauge the essence of Plato's political project in this dialogue, considered his last written work, I turn to the criticisms elaborated by Aristotle in Book II of the Politics. Like Aristotle, I see a strong continuity between the theses developed in the Republic and the political project of a virtuous city in the Laws. Making temperance the basic virtue to be promoted, and creating the greatest possible unity of the city through singing and dancing in unison, would be the antidotes to the greatest evil that can befall political life: stasis. At the same time, by radiating a virtuous way of life centered on reason and honoring the soul, the laws would ensure the happiness of the entire city. At what price?
Fast radio bursts (FRBs) are transient signals exhibiting diverse strengths and emission bandwidths. Traditional single-pulse search techniques are widely employed for FRB detection, yet weak, narrow-band bursts often remain undetectable due to low signal-to-noise ratios (SNR) in integrated profiles. We developed DANCE, a detection tool based on cluster analysis of the original spectrum. It is specifically designed to detect and isolate weak, narrow-band FRBs, providing direct visual identification of their emission properties. This method performs density clustering on reconstructed, RFI-cleaned observational data, enabling the extraction of targeted clusters in the time-frequency domain that correspond to the genuine FRB emission range. Our simulations show that DANCE successfully extracts all true signals with SNR > 5 and achieves a detection precision exceeding 93%. Furthermore, through the practical detection of FRB 20201124A, DANCE has demonstrated a significant advantage in finding previously undetectable weak bursts, particularly those with distinct narrow-band features or those occurring close to stronger bursts.
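The density-clustering idea behind this kind of detection can be illustrated with a toy sketch. The snippet below is not the DANCE implementation; it is a minimal, self-contained DBSCAN-style clusterer (all names and parameter values are hypothetical) applied to synthetic time-frequency pixels, showing how a dense burst separates from isolated, RFI-like noise points:

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Toy DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # too sparse to seed a cluster: noise
            continue
        labels[i] = cluster         # core point: start a new cluster
        queue = deque(nbrs)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_pts:
                queue.extend(neighbors(j))  # expand only through core points
        cluster += 1
    return labels

# Synthetic "time-frequency" pixels: a compact burst plus scattered noise.
burst = [(t, f) for t in range(10, 15) for f in range(20, 25)]  # dense 5x5 blob
noise = [(0, 0), (50, 50), (90, 10)]                            # isolated pixels
labels = dbscan(burst + noise, eps=1.5, min_pts=4)
```

With these parameters the whole blob is recovered as one cluster while the three isolated pixels are flagged as noise; in a real pipeline the points would come from thresholded, RFI-cleaned dynamic-spectrum data rather than a hand-built grid.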
Claire Bonial, Julia Bonn, Harish Tayyar Madabushi
In this chapter, we argue for the benefits of understanding multiword expressions from the perspective of usage-based, construction grammar approaches. We begin with a historical overview of how construction grammar was developed to account for idiomatic expressions using the same grammatical machinery as the non-idiomatic structures of language. We provide a comprehensive description of constructions, which are pairings of meaning with form at any size (morpheme, word, phrase), as well as of how constructional approaches treat the acquisition and generalization of constructions. We describe a successful case study leveraging constructional templates for representing multiword expressions in English PropBank. Because constructions can be at any level or unit of form, we then illustrate the benefit of a constructional representation of multi-meaningful morphosyntactic unit constructions in Arapaho, a highly polysynthetic and agglutinating language. We include a second case study leveraging constructional templates for representing these multi-morphemic expressions in Uniform Meaning Representation. Finally, we examine the similarities and differences between a usage-based account of a speaker learning a novel multiword expression, such as "dancing with deer," and that of a large language model. We present experiments showing that both models and speakers can generalize the meaning of a novel multiword expression from a single exposure. However, only speakers can reason over the combination of two such expressions, as this requires comparing the novel forms to a speaker's lifetime of stored constructional exemplars, which are rich with cross-modal details.
Susung Hong, Ira Kemelmacher-Shlizerman, Brian Curless
et al.
We introduce MusicInfuser, an approach that aligns pre-trained text-to-video diffusion models to generate high-quality dance videos synchronized with specified music tracks. Rather than training a multimodal audio-video or audio-motion model from scratch, our method demonstrates how existing video diffusion models can be efficiently adapted to align with musical inputs. We propose a novel layer-wise adaptability criterion based on a guidance-inspired constructive influence function to select adaptable layers, significantly reducing training costs while preserving rich prior knowledge, even with limited, specialized datasets. Experiments show that MusicInfuser effectively bridges the gap between music and video, generating novel and diverse dance movements that respond dynamically to music. Furthermore, our framework generalizes well to unseen music tracks, longer video sequences, and unconventional subjects, outperforming baseline models in consistency and synchronization. All of this is achieved without requiring motion data, with training completed on a single GPU within a day.
Recent pose-to-video models can translate 2D pose sequences into photorealistic, identity-preserving dance videos, so the key challenge is to generate temporally coherent, rhythm-aligned 2D poses from music, especially under complex, high-variance in-the-wild distributions. We address this by reframing music-to-dance generation as a music-token-conditioned multi-channel image synthesis problem: 2D pose sequences are encoded as one-hot images, compressed by a pretrained image VAE, and modeled with a DiT-style backbone, allowing us to inherit architectural and training advances from modern text-to-image models and better capture high-variance 2D pose distributions. On top of this formulation, we introduce (i) a time-shared temporal indexing scheme that explicitly synchronizes music tokens and pose latents over time and (ii) a reference-pose conditioning strategy that preserves subject-specific body proportions and on-screen scale while enabling long-horizon segment-and-stitch generation. Experiments on a large in-the-wild 2D dance corpus and the calibrated AIST++2D benchmark show consistent improvements over representative music-to-dance methods in pose- and video-space metrics and human preference, and ablations validate the contributions of the representation, temporal indexing, and reference conditioning. See supplementary videos at https://hot-dance.github.io
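As a rough illustration of the one-hot pose-image idea (the exact encoding used by the method is not specified here; the function name, tensor layout, and grid size below are assumptions), one could map each joint's integer pixel coordinates to a single hot pixel per frame, with one channel per joint:

```python
import numpy as np

def poses_to_onehot(poses, H, W):
    """Hypothetical encoding: poses is a (T, J, 2) array of integer (x, y)
    keypoints; returns a (T, J, H, W) tensor with one hot pixel per joint
    per frame, suitable for compression by an image VAE."""
    T, J, _ = poses.shape
    img = np.zeros((T, J, H, W), dtype=np.float32)
    t = np.arange(T)[:, None]   # broadcast frame indices over joints
    j = np.arange(J)[None, :]   # broadcast joint indices over frames
    img[t, j, poses[..., 1], poses[..., 0]] = 1.0  # (row=y, col=x)
    return img

# Two frames of a three-joint "skeleton" on a 64x64 grid.
poses = np.array([[[10, 20], [12, 22], [14, 24]],
                  [[11, 21], [13, 23], [15, 25]]])
onehot = poses_to_onehot(poses, H=64, W=64)
```

The payoff of such a representation is that the pose sequence becomes an ordinary multi-channel image stack, so off-the-shelf image VAEs and DiT backbones apply without architectural changes.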
Generating long-term, coherent, and realistic music-conditioned dance sequences remains a challenging task in human motion synthesis. Existing approaches exhibit critical limitations: motion graph methods rely on fixed template libraries, restricting creative generation; diffusion models, while capable of producing novel motions, often lack temporal coherence and musical alignment. To address these challenges, we propose $\textbf{MotionRAG-Diff}$, a hybrid framework that integrates Retrieval-Augmented Generation (RAG) with diffusion-based refinement to enable high-quality, musically coherent dance generation for arbitrary long-term music inputs. Our method introduces three core innovations: (1) A cross-modal contrastive learning architecture that aligns heterogeneous music and dance representations in a shared latent space, establishing unsupervised semantic correspondence without paired data; (2) An optimized motion graph system for efficient retrieval and seamless concatenation of motion segments, ensuring realism and temporal coherence across long sequences; (3) A multi-condition diffusion model that jointly conditions on raw music signals and contrastive features to enhance motion quality and global synchronization. Extensive experiments demonstrate that MotionRAG-Diff achieves state-of-the-art performance in motion quality, diversity, and music-motion synchronization accuracy. This work establishes a new paradigm for music-driven dance generation by synergizing retrieval-based template fidelity with diffusion-based creative enhancement.
Data steganography aims to conceal information within visual content, yet existing spatial- and frequency-domain approaches suffer from trade-offs between security, capacity, and perceptual quality. Recent advances in generative models, particularly diffusion models, offer new avenues for adaptive image synthesis, but integrating precise information embedding into the generative process remains challenging. We introduce Shackled Dancing Diffusion, or SD$^2$, a plug-and-play generative steganography method that combines bit-position locking with diffusion sampling injection to enable controllable information embedding within the generative trajectory. SD$^2$ leverages the expressive power of diffusion models to synthesize diverse carrier images while maintaining full message recovery with $100\%$ accuracy. Our method achieves a favorable balance between randomness and constraint, enhancing robustness against steganalysis without compromising image fidelity. Extensive experiments show that SD$^2$ substantially outperforms prior methods in security, embedding capacity, and stability. This algorithm offers new insights into controllable generation and opens promising directions for secure visual communication.
Modern artistic productions increasingly demand automated choreography generation that adapts to diverse musical styles and individual dancer characteristics. Existing approaches often fail to produce high-quality dance videos that harmonize with both musical rhythm and user-defined choreography styles, limiting their applicability in real-world creative contexts. To address this gap, we introduce ChoreoMuse, a diffusion-based framework that uses SMPL-format parameters and their variants as intermediaries between music and video generation, thereby overcoming the usual constraints imposed by video resolution. Critically, ChoreoMuse supports style-controllable, high-fidelity dance video generation across diverse musical genres and individual dancer characteristics, including the flexibility to handle any reference individual at any resolution. Our method employs a novel music encoder, MotionTune, to capture motion cues from audio, ensuring that the generated choreography closely follows the beat and expressive qualities of the input music. To quantitatively evaluate how well the generated dances match both musical and choreographic styles, we introduce two new metrics that measure alignment with the intended stylistic cues. Extensive experiments confirm that ChoreoMuse achieves state-of-the-art performance across multiple dimensions, including video quality, beat alignment, dance diversity, and style adherence, demonstrating its potential as a robust solution for a wide range of creative applications. Video results can be found on our project page: https://choreomuse.github.io.
Dance-to-music (D2M) generation aims to automatically compose music that is rhythmically and temporally aligned with dance movements. Existing methods typically rely on coarse rhythm embeddings, such as global motion features or binarized joint-based rhythm values, which discard fine-grained motion cues and result in weak rhythmic alignment. Moreover, temporal mismatches introduced by feature downsampling further hinder precise synchronization between dance and music. To address these problems, we propose \textbf{GACA-DiT}, a diffusion transformer-based framework with two novel modules for rhythmically consistent and temporally aligned music generation. First, a \textbf{genre-adaptive rhythm extraction} module combines multi-scale temporal wavelet analysis and spatial phase histograms with adaptive joint weighting to capture fine-grained, genre-specific rhythm patterns. Second, a \textbf{context-aware temporal alignment} module resolves temporal mismatches using learnable context queries to align music latents with relevant dance rhythm features. Extensive experiments on the AIST++ and TikTok datasets demonstrate that GACA-DiT outperforms state-of-the-art methods in both objective metrics and human evaluation. Project page: https://beria-moon.github.io/GACA-DiT/.
This paper explores the intersection of dance, human experience, and artificial intelligence (AI), focusing on how AI can engage in dance-like movements through kinematic co-creation with human performers. The study challenges traditional notions of dance, which are typically centered on human physicality and expressivity, by demonstrating how AI-generated movements can evoke meaningful dance experiences. The project, Dancing Embryo, is a collaboration between a dancer-choreographer and a computational scientist to create an interactive AI capable of generating and transforming dance movements. The AI dancer, designed using motion data from contemporary dancers, participates in real-time performances by synchronizing features of its movements with a human dancer. This work expands the definition of dance to include non-human agents and emphasizes co-creativity between humans and machines. The paper discusses the technological, philosophical, and artistic implications of AI dance, proposing that the experience of dance can be perceived and completed through human interpretation, even when performed by a machine.
The following piece is a monologue unfolding in a fictional setting as a plea to denounce research methodologies that tend to perpetuate violence. In this piece, the Attorney situates the problem within academia and the academic community (thus forging a strong bond with the reader through gossiping, echoing gossiping as a research methodology). At the end of the plea, the Attorney, who is revealed to be the Anthropologist, together with the other characters in the courtroom, the reader, and hence the research community, leads this denunciation and proposes ways to rethink research methodologies so that they better consider ethics, acknowledging one's vulnerability and fear in the field. The plea denounces the systematic training and mentorship of ethnography, calls for collaborative research, acknowledges marginalized forms of knowledge, and refuses a voyeuristic researcher approach to informants. It calls for flexible research methods over a fixation on gathering data, and for thinking about ethics beyond pseudonyms. This piece attempts to make sense of the absurdity of research methodology, specifically analyzing the field of academia, in which violence is generated and normalized under the name of fieldwork as a slice of life.
N.B. Metaphors in the plea are based on true stories
This study analyzes the use of project-oriented learning in a group of second-year dancer and rehearsal-leader students in the fashion dance specialization at the Magyar Táncművészeti Egyetem (Hungarian Dance University) (n = 16). The study highlights how the project fit into the curricular goals and how it contributed to the students' individual and group development. Following both curricular and extracurricular goals and requirements, the students took part in a project whose final product was a performance entitled Soundpainting, built on notations from the Treatise collection of the English experimental composer Cornelius Cardew, as well as on musical and visual improvisations.
Human dance generation (HDG) aims to synthesize realistic videos from images and sequences of driving poses. Despite great success, existing methods are limited to generating videos of a single person with specific backgrounds, while the generalizability for real-world scenarios with multiple persons and complex backgrounds remains unclear. To systematically measure the generalizability of HDG models, we introduce a new task, dataset, and evaluation protocol of compositional human dance generation (cHDG). Evaluating the state-of-the-art methods on cHDG, we empirically find that they fail to generalize to real-world scenarios. To tackle the issue, we propose a novel zero-shot framework, dubbed MultiDance-Zero, that can synthesize videos consistent with arbitrary multiple persons and background while precisely following the driving poses. Specifically, in contrast to straightforward DDIM or null-text inversion, we first present a pose-aware inversion method to obtain the noisy latent code and initialization text embeddings, which can accurately reconstruct the composed reference image. Since directly generating videos from them will lead to severe appearance inconsistency, we propose a compositional augmentation strategy to generate augmented images and utilize them to optimize a set of generalizable text embeddings. In addition, consistency-guided sampling is elaborated to encourage the background and keypoints of the estimated clean image at each reverse step to be close to those of the reference image, further improving the temporal consistency of generated videos. Extensive qualitative and quantitative results demonstrate the effectiveness and superiority of our approach.
This study aims to identify and analyze the figures of comparison found in the poetry collection "Dancing Rain" by Jane Ardaneshwari and their implementation in the teaching of literary appreciation in junior high school. The method used in this study is qualitative, drawing on Tarigan's theory. The results show that the poetry anthology contains 10 figures of comparison in total: simile, metaphor, personification, depersonification, allegory, antithesis, pleonasm, periphrasis, prolepsis, and correctio. In literary appreciation learning at the junior high school level, the wide range of varied language styles is quite difficult to teach, so students' understanding of figures of speech is often inadequate, especially for the examples of style found in poetry. From the data above, it can be concluded that figures of comparison are the most dominant category in the poetry collection Dancing Rain by Jane Ardaneshwari. This is because figures of speech play a very important role in literary appreciation learning in junior high school, notably in poetry, where they bring the content of a poem to life and make it engaging.
Keywords: figures of speech, poetry, literary appreciation learning
Theory and practice of education, Languages and literature of Eastern Asia, Africa, Oceania
The various films of the two successful spy franchises, James Bond and Mission: Impossible, are distinguished not only by their lead actors (heroes and antagonists), but also by their geographical settings. Each new film must explore new destinations, and the audience expects a change of scenery. This change of scenery, far from depending solely on the filming locations, is in fact, quite literally, skilfully orchestrated. This paper aims to show how the music composed for these films contributes to the creation of topoi by promoting exoticism and orientalism, notably through the use of local instruments and musical motifs and through the on-screen representation of traditional musical and artistic practices (singing, dancing, etc.). We analyze the music that accompanies the hero's travels to emblematic “exotic” places, studying the tourist imaginaries that it conveys, helps to reinforce, or reconfigures.
In Zimbabwe, images of the female body in fiction and the media are gradually shifting from being interpreted as merely sexualized objects of male visual consumption to those of resistance to and defiance against sexist objectification, exploitation, and moralist surveillance. Drawing on the intersection between Foucault’s notion of disciplinary power, feminist notions of the female body as a cultural template for punitive patriarchy and the male gaze, and the decolonial insights of African feminism(s), I discuss the potential for representations of female dance and bodily display to stimulate debate on gender (dis)empowerment, agency, and punishment in Novuyo Tshuma’s novella Shadows (2012). Acknowledging the pervasiveness of the globalized male gaze, I develop a flipside notion of the male glare to negotiate the rarely critiqued unconscious African male desire to rebuke the imagined subversive dancing or stripping female body. The novella enables a discussion of the agency of the stereotyped African sex worker – not only as a debased performer, but also as a potentially empowered embodied being. However, by employing a male narrative voice, informed by the dominant male discourses on gender, Tshuma problematically prioritizes the very scopophilic and punitive narrative perspective that she seeks to undermine.
Axions are one of the well-motivated candidates for dark matter, originally proposed to solve the strong CP problem in particle physics. Dark matter Axion search with riNg Cavity Experiment (DANCE) is a new experimental project to broadly search for axion dark matter in the mass range $10^{-17}~\mathrm{eV} < m_a < 10^{-11}~\mathrm{eV}$. We aim to detect the rotational oscillation of linearly polarized light caused by the axion-photon coupling with a bow-tie cavity. The first results of the prototype experiment, DANCE Act-1, are reported from a 24-hour observation. We found no evidence for axions and set a 95% confidence level upper limit on the axion-photon coupling of $g_{a\gamma} \lesssim 8 \times 10^{-4}~\mathrm{GeV^{-1}}$ for $10^{-14}~\mathrm{eV} < m_a < 10^{-13}~\mathrm{eV}$. Although this bound does not exceed the current best limits, this optical cavity experiment is the first demonstration of a polarization-based axion dark matter search without any external magnetic field.
Rosimeide Francisco dos Santos Legnani, Ana Cláudia Merchan Giaxa, Filipe Ferreira da Silva
et al.
ABSTRACT
Breast cancer incidence increases with age, and its treatment usually has side effects such as joint pain, depression, and fatigue. Insufficient physical activity can decrease muscle strength, increase fatigue, and reduce quality of life. Given the above, the purpose of this systematic review was to identify the main intervention strategies based on physical activity for women during breast cancer treatment. This study considered four databases: PubMed, SportDiscus, Lilacs, and SciELO, restricted to studies from the last five years. The descriptors for the physical activity variable were “exercise”, “physical activity”, and “motor activity”; for the breast cancer variable, “breast neoplasm”; and for the participants, “women” and “adults”. Seven studies related to the benefits of physical activity were found, one of which was carried out in Brazil. The variables studied were fatigue, anthropometry, physical activity level (PAL), pain threshold, sleep quality, quality of life (QoL), physical fitness (PF), and cortisol level. The intervention strategies consisted mostly of aerobic exercise, resistance/strength training, hydrotherapy, relaxation, yoga, and belly dancing. Although there is no consensus on which physical activity, intensity, and frequency are best for these patients, in general all patients increased their level of physical activity and quality of life and reduced fatigue; the treatment neither impeded these activities nor had to be interrupted while they were performed.
The `colored Cube Dance' is an extension of Douthett's and Steinbach's Cube Dance graph, related to a monoid of binary relations defined on the set of major, minor, and augmented triads. This contribution explores the automorphism group of this monoid action as a way to transform chord progressions. We show that this automorphism group is of order 7776 and is isomorphic to $({\mathbb{Z}_3}^4 \rtimes D_8) \rtimes (D_6 \times \mathbb{Z}_2)$. The size and complexity of this group make it unwieldy: we therefore provide an interactive tool, via a web interface based on common HTML/Javascript frameworks, for students, musicians, and composers to explore these automorphisms, showing the potential of these technologies for math/music outreach activities.
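The stated order can be sanity-checked by arithmetic, assuming $D_n$ here denotes the dihedral group of order $n$ (so $|D_8| = 8$ and $|D_6| = 6$); both semidirect and direct products multiply the orders of their factors:

```python
# Order of (Z3^4 ⋊ D8) ⋊ (D6 × Z2), taking D_n as the dihedral group
# of ORDER n. |G ⋊ H| = |G| * |H|, and likewise for direct products.
order = (3 ** 4) * 8 * (6 * 2)   # 81 * 8 * 12
print(order)  # 7776
```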