Results for "Motion pictures"

Showing 20 of ~2,223,379 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

DOAJ Open Access 2026
Aesthetic Experience, Film Comedy and Suffering in Sullivan’s Travels: The Prisoners’ Laughter

Scott Robinson

This article examines the way aesthetic experience is represented in contrast to direct experience in Preston Sturges’s film Sullivan’s Travels (1941). The film portrays the efforts of a successful Hollywood film director to acquire the experience of suffering in order to make a film with social significance. The film suggests a contrast between lived experience and aesthetic experience in conveying a political message and social knowledge. Framed in terms of Jacques Rancière’s Kantian aesthetics, I defend the view that aesthetic representations provide a specific link to knowledge and politics that is mediated and indeterminate. Sturges’s film complicates the desire for direct experience by playing on conventional forms, genres and tropes in Hollywood comedy films. I argue that Sullivan’s Travels confronts us with pleasure in the depiction of suffering, articulating the specifically aesthetic connection between experience, knowledge and politics.

Motion pictures, Philosophy (General)
arXiv Open Access 2026
PhyGile: Physics-Prefix Guided Motion Generation for Agile General Humanoid Motion Tracking

Jiacheng Bao, Haoran Yang, Yucheng Xin et al.

Humanoid robots are expected to execute agile and expressive whole-body motions in real-world settings. Existing text-to-motion generation models are predominantly trained on captured human motion datasets, whose priors assume human biomechanics, actuation, mass distribution, and contact strategies. When such motions are directly retargeted to humanoid robots, the resulting trajectories may satisfy geometric constraints (e.g., joint limits and pose continuity) and appear kinematically reasonable. However, they frequently violate the physical feasibility required for real-world execution. To address these issues, we present PhyGile, a unified framework that closes the loop between robot-native motion generation and General Motion Tracking (GMT). PhyGile performs physics-prefix-guided robot-native motion generation at inference time, directly generating robot-native motions in a 262-dimensional skeletal space with physics-guided prefixes, thereby eliminating inference-time retargeting artifacts and reducing generation-execution discrepancies. Before physics-prefix adaptation, we train the GMT controller with a curriculum-based mixture-of-experts scheme, followed by post-training on unlabeled motion data to improve robustness over large-scale robot motions. During physics-prefix adaptation, the GMT controller is further fine-tuned with generated objectives under physics-derived prefixes, enabling agile and stable execution of complex motions on real robots. Extensive offline and real-robot experiments demonstrate that PhyGile expands the frontier of text-driven humanoid control, enabling stable tracking of agile, highly difficult whole-body motions that go well beyond walking and low-dynamic motions typically achieved by prior methods.

en cs.RO, cs.AI
arXiv Open Access 2026
Quantum simulation in the Heisenberg picture via vectorization

Shao-Hen Chiew, Armando Angrisani, Zoë Holmes et al.

We present a general framework for simulating quantum systems in the Heisenberg picture on quantum hardware. Based on the vectorization map, our framework fully exploits the mapping between operators and quantum states, allowing any task defined on Heisenberg operators to be mapped to standard Schrödinger-picture tasks that are naturally accessible via quantum computers and simulators. This yields new or improved protocols for tasks such as operator sampling, the computation of OTOCs/superoperator expectation values and their higher order moments, two-point correlators, and operator stabilizer and entanglement entropies. Our approach is also amenable to implementation, as it inherits the structure and resource requirements of the (forward and time-reversed) Schrödinger-picture quantum simulation problem. We demonstrate this by proposing implementations of our framework for a 2D problem on digital and analog quantum simulators, taking into account device connectivity constraints.

en quant-ph
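The vectorization map at the heart of this framework can be illustrated numerically: with column stacking, vec(BXC) = (Cᵀ ⊗ B) vec(X), so Heisenberg evolution A ↦ U†AU becomes an ordinary linear map acting on the "state" vec(A). A minimal numpy sketch of this identity, not the paper's hardware protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # a two-qubit system

# Random Hermitian H and the unitary U = exp(-iH) it generates
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

# A Heisenberg-picture observable and its evolution A(t) = U† A U
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A_t = U.conj().T @ A @ U

# Column-stacking vectorization maps operators to vectors
vec = lambda X: X.reshape(-1, order="F")

# vec(U† A U) = (Uᵀ ⊗ U†) vec(A): Heisenberg evolution is now a
# linear map on vec(A), i.e. a Schrödinger-style evolution problem
superop = np.kron(U.T, U.conj().T)
print(np.allclose(superop @ vec(A), vec(A_t)))  # True
```

The superoperator inherits the structure of forward and time-reversed Schrödinger evolution (Uᵀ and U†), which is what makes the mapping amenable to quantum simulators.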
arXiv Open Access 2025
OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions

Wendong Bu, Kaihang Pan, Yuze Lin et al.

Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion, motion editing, and AnyContext, exhibiting emerging capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward the next generation of intelligent motion generation. Project Page: https://OmniMoGen.github.io/.

en cs.CV
arXiv Open Access 2024
Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer

Sigal Raab, Inbar Gat, Nathan Sala et al.

Given the remarkable results of motion synthesis with diffusion models, a natural question arises: how can we effectively leverage these models for motion editing? Existing diffusion-based motion editing methods overlook the profound potential of the prior embedded within the weights of pre-trained models, which enables manipulating the latent feature space; hence, they primarily center on handling the motion space. In this work, we explore the attention mechanism of pre-trained motion diffusion models. We uncover the roles and interactions of attention elements in capturing and representing intricate human motion patterns, and carefully integrate these elements to transfer a leader motion to a follower one while maintaining the nuanced characteristics of the follower, resulting in zero-shot motion transfer. Editing features associated with selected motions allows us to confront a challenge observed in prior motion diffusion approaches, which use general directives (e.g., text, music) for editing, ultimately failing to convey subtle nuances effectively. Our work is inspired by how a monkey closely imitates what it sees while maintaining its unique motion patterns; hence we call it Monkey See, Monkey Do, and dub it MoMo. Employing our technique enables accomplishing tasks such as synthesizing out-of-distribution motions, style transfer, and spatial editing. Furthermore, diffusion inversion is seldom employed for motions; as a result, editing efforts focus on generated motions, limiting the editability of real ones. MoMo harnesses motion inversion, extending its application to both real and generated motions. Experimental results show the advantage of our approach over the current art. In particular, unlike methods tailored for specific applications through training, our approach is applied at inference time, requiring no training. Our webpage is at https://monkeyseedocg.github.io.

en cs.CV, cs.AI
DOAJ Open Access 2023
Film as Museum: One-of-a-Kind Objects in Berkun Oya's Bir Başkadır

Olivia Landry

This article explores the 2020 Turkish Netflix series Bir Başkadır (Ethos) written and directed by Berkun Oya about contemporary Turkey through its objects. With objects surge memories, which are both personal and collective. From the charged objects that convey private attachments, traumas, and histories to ordinary household trinkets and finally archival audiovisual material, this series assumes the status of museum in its drive to carefully exhibit the material world on screen. As the Turkish title of the series indicates, these objects are “bir başkadır”: one of a kind. Through themes and practices of lost innocence, counter-archives, and archiveology, I sift through the quotidian objects, miniatures, old photos, souvenirs, and analogue film footage re-presented and re-collected in this series with an eye to their new scope and allure. The past and present rest adjacent to one another in the mise-en-scène of this series. In engagement with the philosophical writings of Walter Benjamin on the collector, the archive, and memory, Andreas Huyssen's concept of the “museal gaze,” Jennifer Culbert's “counter-archival sensibility,” and finally Catherine Russell's practice of “archiveology,” this article examines how the objects that fashion the on-screen world acquire depth and meaning and the film as museum comes to form.

Motion pictures, Philosophy (General)
arXiv Open Access 2023
Learning-based Axial Video Motion Magnification

Kwon Byung-Ki, Oh Hyun-Bin, Kim Jun-Seong et al.

Video motion magnification amplifies invisible small motions to be perceptible, which provides humans with a spatially dense and holistic understanding of small motions in the scene of interest. This is based on the premise that magnifying small motions enhances the legibility of motions. In the real world, however, vibrating objects often possess convoluted systems that have complex natural frequencies, modes, and directions. Existing motion magnification often fails to improve legibility since the intricate motions still retain complex characteristics even after being magnified, which may distract us from analyzing them. In this work, we focus on improving legibility by proposing a new concept, axial motion magnification, which magnifies decomposed motions along the user-specified direction. Axial motion magnification can be applied to various applications where motions of specific axes are critical, by providing simplified and easily readable motion information. To achieve this, we propose a novel Motion Separation Module that makes it possible to disentangle and magnify the motion representation along axes of interest. Furthermore, we build a new synthetic training dataset for the axial motion magnification task. Our proposed method improves the legibility of resulting motions along certain axes by adding a new feature: user controllability. Axial motion magnification is a more generalized concept; thus, our method can be directly adapted to the generic motion magnification and achieves favorable performance against competing methods.

en eess.IV, cs.CV
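The core decomposition idea, separate from the learned modules the paper proposes, can be sketched directly: project a displacement field onto a user-chosen axis and magnify only that component. A toy numpy illustration (the function name and interface here are illustrative, not from the paper):

```python
import numpy as np

def magnify_axial(flow, axis_deg, alpha):
    """flow: (H, W, 2) displacement field; axis_deg: axis direction
    in degrees; alpha: magnification factor for the axial component."""
    a = np.deg2rad(axis_deg)
    u = np.array([np.cos(a), np.sin(a)])    # unit vector of the axis
    axial = flow @ u                         # (H, W) signed projection
    residual = flow - axial[..., None] * u   # motion orthogonal to axis
    return residual + (alpha * axial)[..., None] * u

# A purely diagonal motion: magnifying along x leaves y untouched
flow = np.full((4, 4, 2), 0.5)
out = magnify_axial(flow, axis_deg=0.0, alpha=10.0)
print(out[0, 0])  # x-component magnified to 5.0, y stays 0.5
```

This makes the "user controllability" point concrete: only the component along the chosen axis is amplified, so intricate multi-directional vibrations read as simple one-dimensional traces.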
arXiv Open Access 2023
Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network

Zitang Sun, Yen-Ju Chen, Yung-hao Yang et al.

Visual motion processing is essential for humans to perceive and interact with dynamic environments. Despite extensive research in cognitive neuroscience, image-computable models that can extract informative motion flow from natural scenes in a manner consistent with human visual processing have yet to be established. Meanwhile, recent advancements in computer vision (CV), propelled by deep learning, have led to significant progress in optical flow estimation, a task closely related to motion perception. Here we propose an image-computable model of human motion perception by bridging the gap between biological and CV models. Specifically, we introduce a novel two-stage approach that combines trainable motion energy sensing with a recurrent self-attention network for adaptive motion integration and segregation. This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system, while providing the ability to derive informative motion flow for a wide range of stimuli, including complex natural scenes. In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning. The proposed model can also replicate human responses to a range of stimuli examined in past psychophysical studies. The experimental results on the Sintel benchmark demonstrate that our model predicts human responses better than the ground truth, whereas the state-of-the-art CV models show the opposite. Our study provides a computational architecture consistent with human visual motion processing, although the physiological correspondence may not be exact.

en cs.AI, q-bio.NC
arXiv Open Access 2022
The impact of NFT profile pictures within social network communities

Simone Casale-Brunet, Mirko Zichichi, Lee Hutchinson et al.

This paper presents an analysis of the role of social media, specifically Twitter, in the context of non-fungible tokens, better known as NFTs. This emerging technology, which frames the creation and exchange of digital objects, started years ago with early projects such as "CryptoPunks" and, since early 2021, has received increasing interest from a community of people creating, buying, and selling NFTs, and from media reporting to the general public. In this work it is shown how the landscape of one class of projects, specifically those used as social media profile pictures, has become mainstream with leading projects such as "Bored Ape Yacht Club", "Cool Cats" and "Doodles". This work illustrates how heterogeneous data was collected from the Ethereum blockchain and Twitter and then analysed using algorithms and state-of-the-art graph metrics. The initial results show that, from a social network perspective, the collections of the most popular NFTs can be considered a single community. Thus, while each project has its own value and volume of exchange, on a social level all of them are primarily influenced by the evolution of the values and trades of the "Bored Ape Yacht Club" collection.

DOAJ Open Access 2021
Evaluation of Curved Canal Transportation Using the Neoniti Rotary System with Reciprocal Motion: A Comparative Study

Mohsen Aminsobhani, Arvin Rezaei Avval, Fatemeh Hamidzadeh

The ideal root canal preparation is one in which the original canal morphology is maintained during biomechanical preparation. Preparation of curved canals has always been a challenge for clinicians. Better results have been reported for a single NiTi instrument with reciprocating motion than for the conventional continuous-rotation method in the preparation of curved root canals. Although the Neoniti rotary system is not recommended for use with reciprocal motion, a pilot study suggested that it could be feasible. The present study aimed to investigate whether shaping curved canals using the Neoniti rotary system with reciprocal motion leads to better results in terms of root canal transportation. One hundred acrylic J-shaped canal simulator endoblocks were used in this study. Five preparation sequences were applied: GPS followed by A1#20 (GPS + A1#20), GPS followed by A1#20 and then A1#25 (GPS + A1#20 + A1#25), GPS followed by A1#25 (GPS + A1#25), hand file followed by A1#20 (hand file + A1#20), and GPS followed by A1#20 with reciprocal motion (GPS + A1#20(reciprocal)). Pictures of the blocks were taken once before and once after preparation, in two dimensions. Before-and-after pictures were superimposed in Photoshop software, and measurements were performed in Digimizer. The number of autoreverses and pecking motions was recorded after reviewing the recorded videos. Data were analyzed in SPSS, version 26. A p value of less than 0.05 was considered statistically significant. The group GPS + A1#20 + A1#25 showed more transportation than the others at the apical, middle, and coronal thirds, in both the frontal and lateral views. The other groups were not significantly different. The number of pecking motions and autoreverses was significantly lower when A1#25 was used after GPS and A1#20. When A1#20 was used with reciprocal motion, it showed fewer pecking motions than the same file with continuous rotation, and no autoreverses were observed in that group.
Using Neoniti files with reciprocal motion might result in less instrument fatigue and favorable results with respect to canal anatomy preservation. Using A1#20 before A1#25 will also decrease the stress on the instrument during preparation. However, this may lead to significantly more canal transportation.

arXiv Open Access 2020
Everettian relative states in the Heisenberg picture

Samuel Kuypers, David Deutsch

Everett's relative-state construction in quantum theory has never been satisfactorily expressed in the Heisenberg picture. What one might have expected to be a straightforward process was impeded by conceptual and technical problems that we solve here. The result is a construction which, unlike Everett's one in the Schrödinger picture, makes manifest the locality of Everettian multiplicity, its inherently approximative nature, and its origin in certain kinds of entanglement and locally inaccessible information. Our construction also allows us to give a more precise definition of an Everett 'universe', under which it is fully quantum, not quasi-classical, and we compare the Everettian decomposition of a quantum state with the foliation of a spacetime.

en quant-ph
arXiv Open Access 2019
Human Motion Anticipation with Symbolic Label

Julian Tanke, Andreas Weber, Juergen Gall

Anticipating human motion depends on two factors: the past motion and the person's intention. While the first factor has been extensively utilized to forecast short sequences of human motion, the second one remains elusive. In this work we approximate a person's intention via a symbolic representation, for example fine-grained action labels such as walking or sitting down. Forecasting a symbolic representation is much easier than forecasting the full body pose with its complex inter-dependencies. However, knowing the future actions makes forecasting human motion easier. We exploit this connection by first anticipating symbolic labels and then generating human motion, conditioned on the human motion input sequence as well as on the forecast labels. This allows the model to anticipate motion changes many steps ahead and adapt the poses accordingly. We achieve state-of-the-art results on short-term as well as on long-term human motion forecasting.

en cs.CV, cs.LG
arXiv Open Access 2019
Two-dimensional active motion

Francisco J. Sevilla

The diffusion in two dimensions of non-interacting active particles that follow an arbitrary motility pattern is considered for analysis. Accordingly, the transport equation is generalized to take into account an arbitrary distribution of scattered angles of the swimming direction, which encompasses the pattern of motion of particles that move at constant speed. An exact analytical expression for the marginal probability density of finding a particle on a given position at a given instant, independently of its direction of motion, is provided; and a connection with a generalized diffusion equation is unveiled. Exact analytical expressions for the time dependence of the mean-square displacement and of the kurtosis of the distribution of the particle positions are presented. For this, it is shown that only the first trigonometric moments of the distribution of the scattered direction of motion are needed. The effects of persistence and of circular motion are discussed for different families of distributions of the scattered direction of motion.

en cond-mat.stat-mech
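The role of the first trigonometric moment of the scattering distribution can be checked with a quick Monte Carlo: for uniform scattering in 2D (first moment zero) at constant speed v and tumbling time τ, the mean-square displacement is MSD(t) = 2v²τ[t − τ(1 − e^{−t/τ})]. A simulation sketch (the discrete run-and-tumble setup here is one illustrative motility pattern, not the paper's general formalism):

```python
import numpy as np

rng = np.random.default_rng(1)

# Run-and-tumble particles in 2D: constant speed v; at rate 1/tau a
# fresh direction is drawn uniformly (so the first trigonometric
# moment of the scattering distribution vanishes)
v, tau = 1.0, 1.0
dt, n_steps, n_particles = 0.01, 1000, 20000

theta = rng.uniform(0, 2 * np.pi, n_particles)
pos = np.zeros((n_particles, 2))
msd = np.zeros(n_steps)

for k in range(n_steps):
    pos[:, 0] += v * np.cos(theta) * dt
    pos[:, 1] += v * np.sin(theta) * dt
    tumble = rng.random(n_particles) < dt / tau
    theta[tumble] = rng.uniform(0, 2 * np.pi, tumble.sum())
    msd[k] = np.mean(np.sum(pos**2, axis=1))

# Analytic MSD: ballistic (v^2 t^2) at short times, diffusive at long
t = dt * np.arange(1, n_steps + 1)
msd_exact = 2 * v**2 * tau * (t - tau * (1 - np.exp(-t / tau)))

print("max relative error:", np.max(np.abs(msd - msd_exact) / msd_exact))
```

The crossover from ballistic to diffusive behavior is set entirely by the persistence time τ, which is what a nonzero first moment of the scattering distribution would renormalize.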
arXiv Open Access 2018
Deep Motion Boundary Detection

Xiaoqing Yin, Xiyang Dai, Xinchao Wang et al.

Motion boundary detection is a crucial yet challenging problem. Prior methods focus on analyzing the gradients and distributions of optical flow fields, or use hand-crafted features for motion boundary learning. In this paper, we propose the first dedicated end-to-end deep learning approach for motion boundary detection, which we term MoBoNet. We introduce a refinement network structure which takes source input images, initial forward and backward optical flows as well as corresponding warping errors as inputs and produces high-resolution motion boundaries. Furthermore, we show that the obtained motion boundaries, through a fusion sub-network we design, can in turn guide the optical flows for removing the artifacts. The proposed MoBoNet is generic and works with any optical flows. Our motion boundary detection and the refined optical flow estimation achieve results superior to the state of the art.

en cs.CV
arXiv Open Access 2018
Learning to Segment and Represent Motion Primitives from Driving Data for Motion Planning Applications

Boyang Wang, Jianwei Gong, Ruizeng Zhang et al.

Developing an intelligent vehicle which can perform human-like actions requires the ability to learn basic driving skills from a large amount of naturalistic driving data. The algorithms become more efficient if we can decompose the complex driving tasks into motion primitives which represent the elementary compositions of driving skills. Therefore, the purpose of this paper is to segment unlabeled trajectory data into a library of motion primitives. By applying a probabilistic inference based on an iterative Expectation-Maximization algorithm, our method segments the collected trajectories while learning a set of motion primitives represented by the dynamic movement primitives. The proposed method utilizes the mutual dependencies between the segmentation and representation of motion primitives and the driving-specific based initial segmentation. By utilizing this mutual dependency and the initial condition, this paper presents how we can enhance the performance of both the segmentation and the motion primitive library establishment. We also evaluate the applicability of the primitive representation method to imitation learning and motion planning algorithms. The model is trained and validated by using the driving data collected from the Beijing Institute of Technology intelligent vehicle platform. The results show that the proposed approach can find the proper segmentation and establish the motion primitive library simultaneously.
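The Expectation-Maximization machinery behind this kind of segmentation can be shown in miniature: treat per-frame speeds as draws from a mixture of primitive regimes and separate them with EM. This is a toy 1-D Gaussian mixture, not the paper's dynamic-movement-primitive model, and the regime speeds are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic speeds from two hypothetical driving primitives
speeds = np.concatenate([
    rng.normal(5.0, 0.5, 300),    # "urban cruising" regime
    rng.normal(15.0, 1.0, 300),   # "highway" regime
])

# Two-component 1-D Gaussian mixture fitted by EM
mu = np.array([4.0, 20.0])        # initial means
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: soft responsibility of each component for each sample
    dens = pi * np.exp(-0.5 * ((speeds[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments
    nk = resp.sum(axis=0)
    mu = (resp * speeds[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (speeds[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(speeds)

print(np.sort(mu))  # should recover means near 5 and 15
```

The paper's method iterates an analogous E/M loop, but over trajectory segment boundaries and primitive parameters jointly, which is where the mutual dependency between segmentation and representation enters.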

arXiv Open Access 2017
Knee Motion Generation Method for Transfemoral Prosthesis based on Kinematic Synergy and Inertial Motion

Hiroshi Sano, Takahiro Wada

Previous research has shown that the effective use of inertial motion (i.e., less or no torque input at the knee joint) plays an important role in achieving a smooth gait of transfemoral prostheses in the swing phase. In our previous research, a method for generating a timed knee trajectory close to able-bodied individuals, which leads to sufficient clearance between the foot and the floor and the knee extension, was proposed using the inertial motion. Limb motions are known to correlate with each other during walking. This phenomenon is called kinematic synergy. In the present study, we measure gaits in level walking of able-bodied individuals with a wide range of walking velocities. We show that this kinematic synergy also exists between the motions of the intact limbs and those of the knee as determined by the inertial motion technique. We then propose a new method for generating the motion of the knee joint using its inertial motion close to the able-bodied individuals in mid-swing based on its kinematic synergy, such that the method can adapt to the changes in the motion velocity. The numerical simulation results show that the proposed method achieves prosthetic walking similar to that of able-bodied individuals with a wide range of constant walking velocities and termination of walking from steady-state walking. Further investigations have found that a kinematic synergy also exists at the start of walking. Overall, our method successfully achieves knee motion generation from the initiation of walking through steady-state walking with different velocities until termination of walking.

Page 33 of 111169