Results for "Motion pictures"

Showing 20 of ~2,223,752 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

arXiv Open Access 2026
Does Motion Intensity Impair Cognition in HCI? The Critical Role of Physical Motion-Visual Target Directional Congruency

Jianshu Wang, Siyu Liu, Chao Zhou et al.

Human-computer interaction (HCI) increasingly occurs in motion-rich environments. The ability to accurately and rapidly respond to directional visual cues is critical in these contexts. How whole-body motion and individual differences affect human perception of and reaction to these directional cues is therefore a key yet underexplored question for HCI. This study used a 6-DOF motion platform to measure task performance on a visual direction judgment task. We analyzed performance by decomposing the complex motion into two distinct components: a task-irrelevant lateral interference component and a task-aligned directional congruency component. Results indicate that increased motion intensity lengthened reaction times. This effect was primarily driven by the lateral interference component, and the detrimental impact was disproportionately amplified for individuals with high motion sickness susceptibility. Conversely, directional congruency, where the motion direction matched the visual cue, improved performance for all participants. These findings suggest that motion's impact on cognition is not monolithic, and that system design for mobile HCI can be informed by strategies that actively shape motion, such as minimizing lateral interference while maximizing directional congruency.

en cs.HC
arXiv Open Access 2026
Fine-grained Motion Retrieval via Joint-Angle Motion Images and Token-Patch Late Interaction

Yao Zhang, Zhuchenyang Liu, Yanlan He et al.

Text-motion retrieval aims to learn a semantically aligned latent space between natural language descriptions and 3D human motion skeleton sequences, enabling bidirectional search across the two modalities. Most existing methods use a dual-encoder framework that compresses motion and text into global embeddings, discarding fine-grained local correspondences, and thus reducing accuracy. Additionally, these global-embedding methods offer limited interpretability of the retrieval results. To overcome these limitations, we propose an interpretable, joint-angle-based motion representation that maps joint-level local features into a structured pseudo-image, compatible with pre-trained Vision Transformers. For text-to-motion retrieval, we employ MaxSim, a token-wise late interaction mechanism, and enhance it with Masked Language Modeling regularization to foster robust, interpretable text-motion alignment. Extensive experiments on HumanML3D and KIT-ML show that our method outperforms state-of-the-art text-motion retrieval approaches while offering interpretable fine-grained correspondences between text and motion. The code is available in the supplementary material.

en cs.CV, cs.IR
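The token-wise late interaction (MaxSim) mentioned in the abstract above has a standard ColBERT-style form: each text-token embedding is matched to its best motion-patch embedding, and the per-token maxima are summed. The sketch below is an illustrative reimplementation of that generic mechanism, not the authors' code; all names are ours.

```python
import numpy as np

def maxsim_score(text_tokens: np.ndarray, motion_patches: np.ndarray) -> float:
    """Generic MaxSim late-interaction score.

    text_tokens:    (T, d) L2-normalized text token embeddings
    motion_patches: (P, d) L2-normalized motion-patch embeddings
    Returns the sum, over text tokens, of each token's best patch match.
    """
    sims = text_tokens @ motion_patches.T   # (T, P) cosine similarities
    return float(sims.max(axis=1).sum())    # best patch per token, summed

def l2norm(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
text = l2norm(rng.normal(size=(5, 16)))     # 5 text tokens
motion = l2norm(rng.normal(size=(12, 16)))  # 12 motion patches
score = maxsim_score(text, motion)
```

Because the per-token maxima are retained, the score can be decomposed token by token, which is the source of the interpretability the abstract claims for late interaction.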
arXiv Open Access 2026
Riemannian Motion Generation: A Unified Framework for Human Motion Representation and Generation via Riemannian Flow Matching

Fangran Miao, Jian Huang, Ting Li

Human motion generation is often learned in Euclidean spaces, although valid motions follow structured non-Euclidean geometry. We present Riemannian Motion Generation (RMG), a unified framework that represents motion on a product manifold and learns dynamics via Riemannian flow matching. RMG factorizes motion into several manifold factors, yielding a scale-free representation with intrinsic normalization, and uses geodesic interpolation, tangent-space supervision, and manifold-preserving ODE integration for training and sampling. On HumanML3D, RMG achieves state-of-the-art FID in the HumanML3D format (0.043) and ranks first on all reported metrics under the MotionStreamer format. On MotionMillion, it also surpasses strong baselines (FID 5.6, R@1 0.86). Ablations show that the compact $\mathscr{T}+\mathscr{R}$ (translation + rotations) representation is the most stable and effective, highlighting geometry-aware modeling as a practical and scalable route to high-fidelity motion generation.

en cs.CV, stat.ML
arXiv Open Access 2025
Constants of motion network revisited

Wenqi Fang, Chao Chen, Yongkui Yang et al.

Discovering constants of motion is meaningful for understanding dynamical systems, but it traditionally demands proficient mathematical skills and keen analytical capabilities. With the prevalence of deep learning, methods employing neural networks, such as the Constant Of Motion nETwork (COMET), are promising for handling this scientific problem. Although the COMET method can produce better predictions of dynamics by exploiting the discovered constants of motion, there is still plenty of room to sharpen it. In this paper, we propose a novel neural network architecture, built using the singular-value-decomposition (SVD) technique, and a two-phase training algorithm to improve the performance of COMET. Extensive experiments show that our approach not only retains the advantages of COMET, such as applicability to non-Hamiltonian systems and indicating the number of constants of motion, but can also be more lightweight and noise-robust than COMET.

en cs.LG, physics.class-ph
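As a generic illustration of the SVD technique the abstract invokes (the paper's actual architecture is not reproduced here), a truncated SVD can replace one dense weight matrix by two smaller factors, which is the usual route to a more lightweight network:

```python
import numpy as np

def low_rank_linear(W: np.ndarray, r: int):
    """Compress a dense weight matrix with a truncated SVD.

    Returns factors (A, B) with W ≈ A @ B at rank r, replacing one
    m x n layer by an m x r and an r x n layer (fewer parameters
    whenever r * (m + n) < m * n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # (m, r): left vectors scaled by singular values
    B = Vt[:r, :]          # (r, n): right vectors
    return A, B

# Exact recovery on a matrix that is already rank 2:
W = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) + np.outer([0.0, 1.0, 0.0], [2.0, 1.0, 0.0])
A, B = low_rank_linear(W, 2)
```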
arXiv Open Access 2025
Motion-Aware Generative Frame Interpolation

Guozhen Zhang, Yuhan Zhu, Yutao Cui et al.

Flow-based frame interpolation methods ensure motion stability through estimated intermediate flow but often introduce severe artifacts in complex motion regions. Recent generative approaches, boosted by large-scale pre-trained video generation models, show promise in handling intricate scenes. However, they frequently produce unstable motion and content inconsistencies due to the absence of explicit motion trajectory constraints. To address these challenges, we propose Motion-aware Generative frame interpolation (MoG), which synergizes intermediate flow guidance with generative capacities to enhance interpolation fidelity. Our key insight is to simultaneously enforce motion smoothness through flow constraints while adaptively correcting flow estimation errors through generative refinement. Specifically, we first introduce a dual guidance injection that propagates condition information using intermediate flow at both the latent and feature levels, aligning the generated motion with flow-derived motion trajectories. Meanwhile, we implement two critical designs, encoder-only guidance injection and selective parameter fine-tuning, which enable dynamic artifact correction in complex motion regions. Extensive experiments on both real-world and animation benchmarks demonstrate that MoG outperforms state-of-the-art methods in terms of video quality and visual fidelity. Our work bridges the gap between flow-based stability and generative flexibility, offering a versatile solution for frame interpolation across diverse scenarios.

en cs.CV
DOAJ Open Access 2024
Theatrical Release Windows: A Playground for “Cultural Exception” Policies?

Mariagrazia Fanchi, Massimo Locatelli

In recent years, cinema culture in Europe has undergone a substantial reorganization of production models and a profound change of public intervention in favor of the film industry. This article aims to reconstruct the different combinations between protectionist and liberalist policies through a comparative analysis of the contemporary European national cinema aids, identifying differences and shared trends and verifying the existence of a "continental" cinema support model. Therefore, the article will analyze public support policies for cinema production, distribution, and exhibition in the EU and in several of its member states (including France, Germany, England, Spain, and Italy) from 2018 to 2022. Focusing on theatrical release windows, this essay will attempt to answer the following main questions: is there a "European" mark in policies in favor of cinema? Can we speak of a "European" model (even outside the European Union) of support for the cinema? What are the elements and actions that define it? What are the sectors of the industry in which it is most fully expressed (production, distribution, exhibition)? And what are the themes and areas in which, on the contrary, national differences (sometimes driven by resurgent nationalisms) are most marked?

Motion pictures
arXiv Open Access 2024
Motion Manifold Flow Primitives for Task-Conditioned Trajectory Generation under Complex Task-Motion Dependencies

Yonghyeon Lee, Byeongho Lee, Seungyeon Kim et al.

Effective movement primitives should be capable of encoding and generating a rich repertoire of trajectories -- typically collected from human demonstrations -- conditioned on task-defining parameters such as vision or language inputs. While recent methods based on the motion manifold hypothesis, which assumes that a set of trajectories lies on a lower-dimensional nonlinear subspace, address challenges such as limited dataset size and the high dimensionality of trajectory data, they often struggle to capture complex task-motion dependencies, i.e., when motion distributions shift drastically with task variations. To address this, we introduce Motion Manifold Flow Primitives (MMFP), a framework that decouples the training of the motion manifold from task-conditioned distributions. Specifically, we employ flow matching models, state-of-the-art conditional deep generative models, to learn task-conditioned distributions in the latent coordinate space of the learned motion manifold. Experiments are conducted on language-guided trajectory generation tasks, where many-to-many text-motion correspondences introduce complex task-motion dependencies, highlighting MMFP's superiority over existing methods.

en cs.RO, cs.AI
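The flow matching models that MMFP employs have a standard generic objective: regress a velocity field onto the straight-line path between a noise sample and a data sample. The sketch below shows that textbook objective under our own illustrative names; it is not the authors' implementation.

```python
import numpy as np

def flow_matching_loss(v_theta, x0, x1, cond, t):
    """One-sample conditional flow-matching objective (generic form).

    x0: noise sample; x1: data sample (e.g. a latent motion coordinate);
    cond: conditioning vector (e.g. a text embedding); t in [0, 1].
    The model v_theta predicts the velocity of the straight path
    x_t = (1 - t) * x0 + t * x1, whose true velocity is x1 - x0.
    """
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0
    pred = v_theta(xt, t, cond)
    return float(np.mean((pred - target) ** 2))

# An oracle that happens to return the true velocity gives zero loss:
x0 = np.zeros(4)
x1 = np.ones(4)
cond = np.zeros(2)
oracle = lambda xt, t, c: np.ones(4)   # equals x1 - x0 here
loss = flow_matching_loss(oracle, x0, x1, cond, 0.3)
```

In MMFP this regression is learned in the latent coordinate space of the motion manifold rather than directly on trajectories, which is what decouples the manifold from the task-conditioned distribution.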
arXiv Open Access 2024
IM-MoCo: Self-supervised MRI Motion Correction using Motion-Guided Implicit Neural Representations

Ziad Al-Haj Hemidi, Christian Weihsbach, Mattias P. Heinrich

Motion artifacts in Magnetic Resonance Imaging (MRI) arise due to relatively long acquisition times and can compromise the clinical utility of acquired images. Traditional motion correction methods often fail to address severe motion, leading to distorted and unreliable results. Deep Learning (DL) alleviated such pitfalls through generalization, at the cost of vanishing structures and hallucinations, making it challenging to apply in the medical field, where hallucinated structures can tremendously impact the diagnostic outcome. In this work, we present an instance-wise motion correction pipeline that leverages motion-guided Implicit Neural Representations (INRs) to mitigate the impact of motion artifacts while retaining anatomical structure. Our method is evaluated using the NYU fastMRI dataset with different degrees of simulated motion severity. For the correction alone, we improve over state-of-the-art image reconstruction methods by $+5\%$ SSIM, $+5$ dB PSNR, and $+14\%$ HaarPSI. Clinical relevance is demonstrated by a subsequent experiment in which our method improves classification outcomes by at least $+1.5$ accuracy percentage points compared to motion-corrupted images.

en eess.IV, cs.CV
arXiv Open Access 2023
On Sudden Cessation in Circular Motion

Milan Batista

This short paper presents a simple analytical model for the abrupt termination of circular motion, as discussed in "The Most Mind-Blowing Aspect of Circular Motion".

en physics.gen-ph
arXiv Open Access 2023
ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model

Mingyuan Zhang, Xinying Guo, Liang Pan et al.

3D human motion generation is crucial for the creative industry. Recent advances rely on generative models with domain knowledge for text-driven motion generation, leading to substantial progress in capturing common motions. However, the performance on more diverse motions remains unsatisfactory. In this work, we propose ReMoDiffuse, a diffusion-model-based motion generation framework that integrates a retrieval mechanism to refine the denoising process. ReMoDiffuse enhances the generalizability and diversity of text-driven motion generation with three key designs: 1) Hybrid Retrieval finds appropriate references from the database in terms of both semantic and kinematic similarities. 2) Semantic-Modulated Transformer selectively absorbs retrieval knowledge, adapting to the difference between retrieved samples and the target motion sequence. 3) Condition Mixture better utilizes the retrieval database during inference, overcoming the scale sensitivity in classifier-free guidance. Extensive experiments demonstrate that ReMoDiffuse outperforms state-of-the-art methods by balancing both text-motion consistency and motion quality, especially for more diverse motion generation.

en cs.CV
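For context on the Condition Mixture design above: standard classifier-free guidance combines an unconditional and a conditional prediction with a guidance weight, and its scale sensitivity comes from that weight. The snippet shows the textbook combination only, not the authors' mixture scheme.

```python
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray, w: float) -> np.ndarray:
    """Standard classifier-free guidance combination.

    w = 0 recovers the unconditional prediction, w = 1 the conditional one;
    larger w extrapolates toward the condition, which is where the scale
    sensitivity the abstract mentions arises.
    """
    return eps_uncond + w * (eps_cond - eps_uncond)

eu = np.array([0.0, 1.0])   # unconditional noise prediction (toy values)
ec = np.array([1.0, 3.0])   # conditional noise prediction (toy values)
guided = cfg_combine(eu, ec, 2.0)   # extrapolated prediction
```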
arXiv Open Access 2023
On oscillating sticky Brownian motion

Wajdi Touhami

Starting with a Brownian motion, we define and study a novel diffusion process by combining stickiness and oscillation properties. The associated stochastic differential equation, resolvent, and semigroup are provided. Also, the trivariate density of the position, local time, and occupation time of this diffusion is obtained explicitly. Furthermore, we give a construction of two Brownian motions with drift and scaling whose difference is an oscillating sticky Brownian motion, up to a multiplicative constant.

en math.PR
arXiv Open Access 2023
OmniMotionGPT: Animal Motion Generation with Limited Data

Zhangsihao Yang, Mingyuan Zhou, Mengyi Shan et al.

Our paper aims to generate diverse and realistic animal motion sequences from textual descriptions, without a large-scale animal text-motion dataset. While the task of text-driven human motion synthesis is already extensively studied and benchmarked, it remains challenging to transfer this success to other skeleton structures with limited data. In this work, we design a model architecture that imitates the Generative Pretraining Transformer (GPT), transferring prior knowledge learned from human data to the animal domain. We jointly train motion autoencoders for both animal and human motions and, at the same time, optimize through the similarity scores among the human motion encoding, animal motion encoding, and text CLIP embedding. Presenting the first solution to this problem, we are able to generate animal motions with high diversity and fidelity, quantitatively and qualitatively outperforming the results of training human motion generation baselines on animal data. Additionally, we introduce AnimalML3D, the first text-animal motion dataset, with 1240 animation sequences spanning 36 different animal identities. We hope this dataset will mitigate the data scarcity problem in text-driven animal motion generation, providing a new playground for the research community.

en cs.CV
arXiv Open Access 2023
Continuous Intermediate Token Learning with Implicit Motion Manifold for Keyframe Based Motion Interpolation

Clinton Ansun Mo, Kun Hu, Chengjiang Long et al.

Deriving sophisticated 3D motions from sparse keyframes is a particularly challenging problem, due to the demands of continuity and exceptional skeletal precision. The action features are often derivable accurately from the full series of keyframes, and thus leveraging the global context with transformers has been a promising data-driven embedding approach. However, existing methods often take as input intermediate frames interpolated from the keyframes with basic interpolation methods for continuity, which results in a trivial local minimum during training. In this paper, we propose a novel framework to formulate latent motion manifolds with keyframe-based constraints, from which the continuous nature of intermediate token representations is considered. In particular, our proposed framework consists of two stages for identifying a latent motion subspace, i.e., a keyframe encoding stage and an intermediate token generation stage, followed by a motion synthesis stage to extrapolate and compose motion data from the manifolds. Through extensive experiments conducted on both the LaFAN1 and CMU Mocap datasets, our proposed method demonstrates both superior interpolation accuracy and high visual similarity to ground-truth motions.

en cs.CV, cs.GR
arXiv Open Access 2021
Disentangling intrinsic motion from neighbourhood effects in heterogeneous collective motion

Arshed Nabeel, Danny Raj M

Most real-world collectives, including active particles, living cells, and grains, are heterogeneous: individuals with differing properties interact. The differences among individuals in their intrinsic properties have emergent effects at the group level. It is often of interest to infer how the intrinsic properties differ among the individuals based on their observed movement patterns. However, the true individual properties may be masked by emergent effects in the collective. We investigate the inference problem in the context of a bidisperse collective with two types of agents, where the goal is to observe the motion of the collective and classify the agents according to their types. Since collective effects such as jamming and clustering affect individual motion, an agent's own movement does not carry sufficient information to perform the classification well: a simple observer algorithm based only on individual velocities cannot accurately estimate the level of heterogeneity of the system and often misclassifies agents. We propose a novel approach to the classification problem in which collective effects on an agent's motion are explicitly accounted for. We use insights about the physics of collective motion to quantify the effect of the neighbourhood on an agent using a neighbourhood parameter. Such an approach can distinguish between agents of the two types even when their observed motion is identical. This approach estimates the level of heterogeneity much more accurately and achieves significant improvements in classification. Our results demonstrate that explicitly accounting for neighbourhood effects is often necessary to correctly infer the intrinsic properties of individuals.

en cs.LG, nlin.AO
arXiv Open Access 2020
Developing Motion Code Embedding for Action Recognition in Videos

Maxat Alibayev, David Paulius, Yu Sun

In this work, we propose a motion embedding strategy known as motion codes, a vectorized representation of motions based on a manipulation's salient mechanical attributes. These motion codes provide a robust motion representation, and they are obtained using a hierarchy of features called the motion taxonomy. We developed and trained a deep neural network model that combines visual and semantic features to identify the motion-taxonomy features in a video and thereby embed or annotate it with motion codes. To demonstrate the potential of motion codes as features for machine learning tasks, we integrated the features extracted by the motion embedding model into a current state-of-the-art action recognition model. The resulting model achieved higher accuracy than the baseline for the verb classification task on egocentric videos from the EPIC-KITCHENS dataset.

en cs.CV, cs.AI
arXiv Open Access 2020
Creep motion of elastic interfaces driven in a disordered landscape

Ezequiel E. Ferrero, Laura Foini, Thierry Giamarchi et al.

The thermally activated creep motion of an elastic interface weakly driven over a disordered landscape is one of the best examples of glassy universal dynamics. Its understanding has evolved over the last 30 years thanks to a fruitful interplay between elegant scaling arguments, sophisticated analytical calculations, efficient optimization algorithms, and creative experiments. In this article, starting from the pioneering arguments, we review the main theoretical and experimental results that lead to the current physical picture of the creep regime. In particular, we discuss recent works unveiling the collective nature of such ultra-slow motion in terms of elementary activated events. We show that these events control the mean velocity of the interface and cluster into "creep avalanches" statistically similar to the deterministic avalanches observed at the depinning critical threshold. The associated spatio-temporal patterns of activated events have recently been observed in experiments with magnetic domain walls. The emergent physical picture is expected to be relevant for a large family of disordered systems presenting thermally activated dynamics.

en cond-mat.dis-nn, cond-mat.stat-mech

Page 42 of 111188