Cinema in Uruguay (1960-1974): Resistance, Guerrilla and Third World
Gabriele
The article reviews the dialogue between documentary and animated cinema produced in Uruguay during the 1960s and 1970s and different forms of political resistance. The Uruguayan historical-political situation is contextualized and three films are analysed as examples of the complexities of the moment: Como el Uruguay no hay, by Ugo Ulive (1960), Me gustan los estudiantes, by Mario Handler (1968) and En la selva hay mucho trabajo por hacer, by Walter Tournier (1974). The three short films offer a clear denunciation of Uruguay’s political situation and, in addition, reveal the complexities within Uruguayan society at a moment of democratic collapse. The country’s fraught political scenario during those years led to the coup d'état of 1973 and the consequent exile of Ulive, Handler and Tournier. The three directors pursued the combative mode of filmmaking emerging across Latin American nations, interrogating the intellectual’s role in a colonized space, and eventually became leading figures of the New Latin American Cinema. They came together at the Cinemateca del Tercer Mundo (C3M), founded in Montevideo in 1969, and established relationships with other Latin American filmmakers of the time. They debated the political and artistic situation on the continent, creating networks of exhibition and collaboration and publishing theoretical material on these topics. The C3M thus became a space for debate on key notions such as Third Cinema, Imperfect Cinema and Cinema of Denunciation, promoted from the Global South as a way of confronting the European and Hollywood film industries.
Motion Attribution for Video Generation
Xindi Wu, Despoina Paschalidou, Jun Gao
et al.
Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood. We present Motive (MOTIon attribution for Video gEneration), a motion-centric, gradient-based data attribution framework that scales to modern, large, high-quality video datasets and models. We use this to study which fine-tuning clips improve or degrade temporal dynamics. Motive isolates temporal dynamics from static appearance via motion-weighted loss masks, yielding efficient and scalable motion-specific influence computation. On text-to-video models, Motive identifies clips that strongly affect motion and guides data curation that improves temporal consistency and physical plausibility. With Motive-selected high-influence data, our method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate compared with the pretrained base model. To our knowledge, this is the first framework to attribute motion rather than visual appearance in video generative models and to use it to curate fine-tuning data.
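The motion-masking idea lends itself to a compact illustration. The sketch below is a toy stand-in, not the authors' implementation: frame differencing serves as the motion proxy, and the weighting simply rescales a per-pixel squared error so that static regions contribute little to the loss.

```python
import numpy as np

def motion_weighted_loss(pred, target, eps=1e-6):
    """pred, target: (T, H, W) grayscale clips; returns a scalar loss."""
    # Motion proxy: absolute temporal difference of the target clip.
    diff = np.abs(np.diff(target, axis=0))          # (T-1, H, W)
    motion = np.concatenate([diff[:1], diff], 0)    # pad back to (T, H, W)
    weights = motion / (motion.mean() + eps)        # normalize to mean ~1
    per_pixel = (pred - target) ** 2
    return float((weights * per_pixel).mean())

# A moving bright square contributes to the loss; a static one does not,
# even though both predictions carry the same appearance error.
T, H, W = 4, 8, 8
static = np.zeros((T, H, W)); static[:, 2:4, 2:4] = 1.0
moving = np.zeros((T, H, W))
for t in range(T):
    moving[t, 2:4, 2 + t:4 + t] = 1.0
loss_static = motion_weighted_loss(static + 0.1, static)
loss_moving = motion_weighted_loss(moving + 0.1, moving)
```

With a fully static target the weights vanish, so the loss ignores the uniform appearance error entirely, while the moving clip with the same appearance error incurs a strictly positive loss.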
EMP: Executable Motion Prior for Humanoid Robot Standing Upper-body Motion Imitation
Haocheng Xu, Haodong Zhang, Zhenghan Chen
et al.
To support humanoid robots in performing manipulation tasks, it is essential to study stable standing while accommodating upper-body motions. However, the limited controllable range of a humanoid robot in a standing position affects the stability of the entire body. We therefore introduce a reinforcement learning (RL)-based framework for humanoid robots to imitate human upper-body motions while maintaining overall stability. Our approach begins with a retargeting network that generates a large-scale upper-body motion dataset for training the RL policy, which enables the humanoid robot to track upper-body motion targets; domain randomization provides enhanced robustness. To avoid exceeding the robot's execution capability and to ensure safety and stability, we propose an Executable Motion Prior (EMP) module, which adjusts the input target movements based on the robot's current state. This adjustment improves standing stability while minimizing changes to motion amplitude. We evaluate our framework through simulation and real-world tests, demonstrating its practical applicability.
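A minimal sketch of the adjustment idea, under the assumption (ours, not the paper's) that executability can be approximated by a per-step bound on how far each commanded joint may move from the robot's current state:

```python
import numpy as np

def filter_target(current, target, max_step):
    """Clamp the per-joint change |target - current| to max_step, so
    aggressive reference motions are softened instead of tracked blindly."""
    delta = np.clip(target - current, -max_step, max_step)
    return current + delta

current = np.array([0.0, 0.5, -0.2])     # current joint positions (rad)
target = np.array([1.0, 0.6, -1.5])      # commanded targets; joint 2 is close
adjusted = filter_target(current, target, max_step=0.3)
# Large jumps are limited, small ones pass through unchanged.
```

A real executable-motion prior would condition on the full robot state rather than a fixed bound, but the effect is the same: the reference is pulled into a region the controller can actually track.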
PersonaAnimator: Personalized Motion Transfer from Unconstrained Videos
Ziyun Qian, Runyu Xiao, Shuyuan Tu
et al.
Recent advances in motion generation show remarkable progress. However, several limitations remain: (1) Existing pose-guided character motion transfer methods merely replicate motion without learning its style characteristics, resulting in inexpressive characters. (2) Motion style transfer methods rely heavily on motion capture data, which is difficult to obtain. (3) Generated motions sometimes violate physical laws. To address these challenges, this paper pioneers a new task: Video-to-Video Motion Personalization. We propose a novel framework, PersonaAnimator, which learns personalized motion patterns directly from unconstrained videos. This enables personalized motion transfer. To support this task, we introduce PersonaVid, the first video-based personalized motion dataset. It contains 20 motion content categories and 120 motion style categories. We further propose a Physics-aware Motion Style Regularization mechanism to enforce physical plausibility in the generated motions. Extensive experiments show that PersonaAnimator outperforms state-of-the-art motion transfer methods and sets a new benchmark for the Video-to-Video Motion Personalization task.
Women Who See and Live in Male Universes: From the Director Shepitko to the Protagonist Nadezhda
Mónica Baptista
This article seeks to highlight the work of the Russian director of Ukrainian origin Larisa Shepitko (1938-1979) through the three feature films she completed before her early death in 1979: Wings (1966), You and Me (1971) and The Ascent (1977). Through the analysis of these works, we aim to underline the filmmaker's pivotal and singular role, as a woman, in the treatment of themes such as war, giving them a human rather than an ideological gaze. At the same time, we see that personal and artistic modes of expression can contradict, and go beyond, what gender stereotypes establish. This is evident in the protagonist of Wings (1966), a former pilot and heroine of the Second World War who, in middle age, seeks another path for her life. The analysis is carried out with the help of the authors Edgar Morin, Anaïs Nin, Judith Butler, Simone Weil and Lígia Amândio, who bring us closer not only to Shepitko's themes but also to the feminist question in a broad sense.
Visual arts, Motion pictures
The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion
Changan Chen, Juze Zhang, Shrinidhi K. Lakshmikanth
et al.
Human communication is inherently multimodal, involving a combination of verbal and non-verbal cues such as speech, facial expressions, and body gestures. Modeling these behaviors is essential for understanding human interaction and for creating virtual characters that can communicate naturally in applications like games, films, and virtual reality. However, existing motion generation models are typically limited to specific input modalities -- either speech, text, or motion data -- and cannot fully leverage the diversity of available data. In this paper, we propose a novel framework that unifies verbal and non-verbal language using multimodal language models for human motion understanding and generation. The model flexibly accepts text, speech, motion, or any combination of them as input. Coupled with our novel pre-training strategy, our model not only achieves state-of-the-art performance on co-speech gesture generation but also requires much less data for training. Our model also unlocks an array of novel tasks such as editable gesture generation and emotion prediction from motion. We believe unifying the verbal and non-verbal language of human motion is essential for real-world applications, and language models offer a powerful approach to achieving this goal. Project page: languageofmotion.github.io.
Approximation of skew Brownian motion by snapping-out Brownian motions
Adam Bobrowski, Elżbieta Ratajczyk
We elaborate on the theorem stating that, as the permeability coefficients of snapping-out Brownian motions tend to infinity in such a way that their ratio remains constant, these processes converge to a skew Brownian motion. In particular, we discuss convergence of the related semigroups, cosine families and projections.
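As a concrete illustration (not drawn from the paper), skew Brownian motion with parameter p arises as the diffusive scaling limit of the Harrison-Shepp skew random walk, which steps symmetrically away from the origin but steps upward from 0 with probability p:

```python
import numpy as np

def skew_walk(p, n_steps, rng):
    """Harrison-Shepp skew random walk: simple random walk off 0,
    biased step (up with probability p) when at 0."""
    x, path = 0, [0]
    for _ in range(n_steps):
        if x == 0:
            x += 1 if rng.random() < p else -1
        else:
            x += 1 if rng.random() < 0.5 else -1
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(0)
p = 0.75
paths = [skew_walk(p, 2000, rng) for _ in range(200)]
frac_positive = np.mean([np.mean(path > 0) for path in paths])
```

For skew Brownian motion started at 0, P(X_t > 0) = p for every t > 0, which the long-run occupation fraction of the walk approximates.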
Strata, Narrative, and Space in Ici et ailleurs
Kamil Lipiński
This article examines the pedagogic vision of audiovisual archives in Ici et ailleurs (Here and Elsewhere, 1974/1978) (shot by Sonimage and drawn from the abandoned project Jusqu’à la Victoire [1970]) in terms of the stratification of images and sounds. Drawing on Gilles Deleuze and Michel Foucault, Tom Conley writes that a diagram that depends upon the division between the visible and the enunciable may be comprehended in terms of a map and as a line of forces. Such strata can act as signposts for diverse and multilateral readings of film, as viewers “read” cinematic landscapes and time-images. In Ici et ailleurs, stratigraphic shots juxtapose the Western, static home life of a family in France with the nomadic life of Fedayeen troops in Jordan and Palestine. This article argues that Sonimage’s narrative specificity relies upon an “in-between” method that defines an inter-space between semantic orders.
Motion pictures, Philosophy (General)
Iterative Motion Editing with Natural Language
Purvi Goel, Kuan-Chieh Wang, C. Karen Liu
et al.
Text-to-motion diffusion models can generate realistic animations from text prompts, but do not support fine-grained motion editing controls. In this paper, we present a method for using natural language to iteratively specify local edits to existing character animations, a task that is common in most computer animation workflows. Our key idea is to represent a space of motion edits using a set of kinematic motion editing operators (MEOs) whose effects on the source motion are well-aligned with user expectations. We provide an algorithm that leverages pre-existing language models to translate textual descriptions of motion edits into source code for programs that define and execute sequences of MEOs on a source animation. We execute MEOs by first translating them into keyframe constraints, and then use diffusion-based motion models to generate output motions that respect these constraints. Through a user study and quantitative evaluation, we demonstrate that our system can perform motion edits that respect the animator's editing intent, remain faithful to the original animation (it edits the original animation, but does not dramatically change it), and yield realistic character animation results.
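As an illustration of the operator-to-constraint lowering described above, here is a hypothetical MEO (the class name, fields, and output format are our inventions, not the paper's API) that lifts one joint over a frame range and emits the corresponding keyframe constraints:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TranslateJoint:
    joint: int          # joint index to edit
    frames: range       # frames the edit applies to
    offset: np.ndarray  # xyz displacement to apply

    def to_keyframe_constraints(self, motion):
        """motion: (T, J, 3) array -> list of (frame, joint, target_xyz)
        constraints that a constrained motion model could then satisfy."""
        return [(f, self.joint, motion[f, self.joint] + self.offset)
                for f in self.frames]

motion = np.zeros((10, 2, 3))               # 10 frames, 2 joints, at rest
meo = TranslateJoint(joint=1, frames=range(3, 6),
                     offset=np.array([0.0, 0.2, 0.0]))
constraints = meo.to_keyframe_constraints(motion)
# Three constraints, each asking joint 1 to sit 0.2 higher on frames 3-5.
```

In the paper's pipeline a language model would emit a program composing such operators, and a diffusion model would then generate motion respecting the resulting constraints.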
Reduction of Plane Quartics and Cayley Octads
Raymond van Bommel, Jordan Docking, Vladimir Dokchitser
et al.
We give a conjectural characterisation of the stable reduction of plane quartics over local fields in terms of their Cayley octads. This results in $p$-adic criteria that efficiently give the stable reduction type amongst the 42 possible types, and whether the reduction is hyperelliptic or not. These criteria are in the vein of the machinery of "cluster pictures" for hyperelliptic curves. We also construct explicit families of quartic curves that realise all possible stable types, against which we test these criteria. We give numerical examples that illustrate how to use these criteria in practice.
Motion Style Transfer: Modular Low-Rank Adaptation for Deep Motion Forecasting
Parth Kothari, Danya Li, Yuejiang Liu
et al.
Deep motion forecasting models have achieved great success when trained on massive amounts of data. Yet they often perform poorly when training data is limited. To address this challenge, we propose a transfer learning approach for efficiently adapting pre-trained forecasting models to new domains, such as unseen agent types and scene contexts. Unlike conventional fine-tuning, which updates the whole encoder, our main idea is to reduce the number of tunable parameters to those that precisely account for the target domain's motion style. To this end, we introduce two components that exploit our prior knowledge of motion style shifts: (i) a low-rank motion style adapter that projects and adjusts the style features at a low-dimensional bottleneck; and (ii) a modular adapter strategy that disentangles the features of scene context and motion history to facilitate a fine-grained choice of adaptation layers. Through extensive experimentation, we show that our proposed adapter design, coined MoSA, outperforms prior methods on several forecasting benchmarks.
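The low-rank adapter component follows the familiar W + BA pattern; below is a minimal numerical sketch (dimensions, initialization, and names are illustrative, not the MoSA code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 2

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained projection
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def adapted_forward(x):
    # Base path plus a rank-r residual through the bottleneck; only A and B
    # are tuned on the target domain.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as an exact no-op.
assert np.allclose(adapted_forward(x), W @ x)
```

Only A and B are tuned, so the tunable parameter count is r * (d_in + d_out) = 64 here, versus 256 for the full projection, which is what makes adaptation from limited target data feasible.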
The Invisibility of the Female Body in the Phallic Mountains: "Picnic na Montanha Misteriosa" (1975)
Adriana Falqueto Lemos, Alice da Rocha Perini, Hugo Felipe Quintela
Anchored in reception theory and in discussions of the female body and its dimension in literature, this article examines the filmic narrative of "Picnic na Montanha Misteriosa" (Picnic at Hanging Rock, 1975) by Peter Weir, an adaptation of Joan Lindsay's novel of the same name (1967). The study aims to understand the mythification of the female figure through the effects produced by the director's film, taking as a counterpoint the figure of the audience and its role in the concretization of the work's meaning. To that end, it is important to highlight reflections on gender relations in cinema because, as in other languages, the representation of the feminine almost always alternates between presence and absence: now as an object before a male gaze, now as a faded image when it takes the lead in the creation of meaning. In response, feminist film theory (Rosen, 1973; Mellen, 1974; Haskell, 1987) has long proposed a new perspective on the space obscured by the social construction of gender. Cinema is an important arena in which to establish discussions about gender.
Visual arts, Motion pictures
Researching European Crime Narratives and the Role of Television: An Introduction
Gabriele, Alice Jacquelin, Federico Pagello
A review of mobile robot motion planning methods: from classical motion planning workflows to reinforcement learning-based architectures
Lu Dong, Zichen He, Chunwei Song
et al.
Motion planning is critical to realizing the autonomous operation of mobile robots. As the complexity and randomness of robot application scenarios increase, the planning capability of classical hierarchical motion planners is challenged. With the development of machine learning, deep reinforcement learning (DRL)-based motion planners have gradually become a research hotspot owing to several advantageous features: they are model-free, they do not rely on a prior structured map, and, most importantly, they unify the global planner and the local planner. In this paper, we provide a systematic review of various motion planning methods. First, we summarize representative and state-of-the-art works for each submodule of the classical motion planning architecture and analyze their performance features. Subsequently, we concentrate on RL-based motion planning approaches, including motion planners combined with RL improvements, map-free RL-based motion planners, and multi-robot cooperative planning methods. Finally, we analyze in detail the urgent challenges faced by mainstream RL-based motion planners, review state-of-the-art works addressing these issues, and propose suggestions for future research.
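To make the map-free RL planner idea concrete, the toy below (our sketch, not from the survey) trains a tabular Q-learning agent to reach a goal on a one-dimensional corridor; DRL planners replace the table with a neural network and the corridor with raw sensor observations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, goal = 8, 7
Q = np.zeros((n_states, 2))              # actions: 0 = left, 1 = right

for _ in range(300):                     # training episodes
    s = 0
    for _ in range(50):                  # step cap per episode
        # epsilon-greedy action selection
        a = rng.integers(2) if rng.random() < 0.2 else int(Q[s].argmax())
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s2 == goal else -0.01  # goal reward, small step cost
        # one-step Q-learning update (alpha=0.5, gamma=0.9)
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
        if s == goal:
            break

# The greedy policy should now head right toward the goal from every state.
policy = Q.argmax(axis=1)
```

No map of the corridor is ever given to the agent; the policy emerges purely from reward feedback, which is the defining property the survey attributes to map-free RL planners.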
A Gladyshev theorem for trifractional Brownian motion and $n$-th order fractional Brownian motion
Xiyue Han
We prove limit theorems for the weighted quadratic variation of trifractional Brownian motion and $n$-th order fractional Brownian motion. Furthermore, a sufficient condition for the $L^p$-convergence of the weighted quadratic variation of Gaussian processes is obtained as a byproduct. As an application, we give a statistical estimator for the self-similarity index of trifractional Brownian motion. These theorems extend results of Baxter, Gladyshev, and Norvaiša.
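The estimator can be illustrated with the standard dyadic quadratic-variation recipe: for an H-self-similar Gaussian process with stationary increments, the quadratic variation at sampling resolution n scales like n^(1-2H), so comparing two resolutions recovers H. The sketch below (ours, run on ordinary Brownian motion, where H = 1/2) shows the mechanism; the paper's theorems justify analogous estimators for trifractional Brownian motion.

```python
import numpy as np

def quad_var(path):
    """Sum of squared increments along a sampled path."""
    return float(np.sum(np.diff(path) ** 2))

def estimate_H(path):
    # V_n ~ C * n^(1-2H), so V_fine / V_coarse ~ 2^(1-2H) for dyadic scales.
    v_fine = quad_var(path)          # n increments
    v_coarse = quad_var(path[::2])   # n/2 increments at twice the step
    return 0.5 * (1.0 - np.log2(v_fine / v_coarse))

rng = np.random.default_rng(0)
n = 2**16
# Standard Brownian motion on [0, 1]: increments are N(0, 1/n).
bm = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n)) / np.sqrt(n)])
H_hat = estimate_H(bm)               # should be close to 0.5
```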
Estimating Motion Codes from Demonstration Videos
Maxat Alibayev, David Paulius, Yu Sun
A motion taxonomy can encode manipulations in a binary representation, which we refer to as motion codes. These motion codes innately represent a manipulation action in an embedded space that describes the motion's mechanical features, including contact and trajectory type. The key advantage of using motion codes for embedding is that motions can be more appropriately defined with robot-relevant features, and their distances can be more reasonably measured using these motion features. In this paper, we develop a deep learning pipeline to extract motion codes from demonstration videos in an unsupervised manner so that knowledge from these videos can be properly represented and used for robots. Our evaluations show that motion codes can be extracted from demonstrations of actions in the EPIC-KITCHENS dataset.
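A schematic of the binary encoding (the attribute fields, values, and bit widths below are invented for illustration; the actual taxonomy defines its own attributes):

```python
# Each mechanical attribute of a manipulation gets a fixed bit field, and the
# motion code is the concatenation of the fields.
CONTACT = {"no-contact": "00", "rigid": "01", "soft": "10"}
TRAJECTORY = {"linear": "0", "curved": "1"}
RECURRENCE = {"one-shot": "0", "repetitive": "1"}

def encode(contact, trajectory, recurrence):
    return CONTACT[contact] + TRAJECTORY[trajectory] + RECURRENCE[recurrence]

def hamming(a, b):
    """Distance between two equal-length motion codes."""
    return sum(x != y for x, y in zip(a, b))

cut = encode("rigid", "linear", "repetitive")
stir = encode("rigid", "curved", "repetitive")
pour = encode("no-contact", "curved", "one-shot")
```

Mechanically similar manipulations end up at a small Hamming distance (cut vs. stir differ in one bit, cut vs. pour in three), which is the property that makes the embedding useful for measuring motion similarity.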
Accelerated Motion-Aware MR Imaging via Motion Prediction from K-Space Center
Christoph Jud, Damien Nguyen, Alina Giger
et al.
Motion has been a challenge for magnetic resonance (MR) imaging ever since MR was invented. Especially in volumetric imaging of thoracic and abdominal organs, motion-awareness is essential for reducing motion artifacts in the final image. A recently proposed MR imaging approach copes with motion by observing motion patterns during the acquisition: repetitive scanning of the k-space center region enables extraction of the patient's motion while the remaining part of k-space is acquired. However, because the center is measured with high redundancy, the required scanning time of over 11 min and reconstruction time of 2 h preclude clinical use. We propose an accelerated motion-aware MR imaging method in which the motion is inferred from small k-space center patches after an initial training phase during which the characteristic movements are modeled. Acquisition times are thereby reduced by a factor of almost 2 and reconstruction times by two orders of magnitude. Moreover, we improve the existing motion-aware approach with a systematic temporal shift correction to achieve sharper image reconstruction. We tested our method on 12 volunteers, scanning their lungs and abdomen under free breathing, and achieved equivalent or higher reconstruction quality with motion prediction compared to the slower existing approach.
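The train-then-predict structure can be sketched schematically; here a linear ridge regression (our stand-in, not the paper's model) maps a flattened k-space-center patch to a one-dimensional motion surrogate learned during a training phase:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, patch_dim = 200, 16

# Training phase: pairs of center patches and observed motion states.
motion_map = rng.standard_normal(patch_dim)        # hidden patch->motion map
X = rng.standard_normal((n_train, patch_dim))      # flattened center patches
y = X @ motion_map + 0.01 * rng.standard_normal(n_train)  # motion surrogate

# Fit ridge regression: w = (X'X + lambda*I)^-1 X'y.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(patch_dim), X.T @ y)

# Acquisition phase: motion is predicted from a new center patch alone,
# so the rest of k-space can be filled in motion-consistently.
x_new = rng.standard_normal(patch_dim)
pred = float(x_new @ w)
```

The speed-up in the paper comes from exactly this asymmetry: the expensive modeling happens once up front, and per-patch inference afterwards is cheap.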
Deja Vu: Motion Prediction in Static Images
Silvia L. Pintea, Jan C. van Gemert, Arnold W. M. Smeulders
This paper proposes motion prediction in single still images by learning it from a set of videos. The underlying assumption is that similar motion is characterized by similar appearance. The proposed method learns local motion patterns given a specific appearance and adds the predicted motion in a number of applications. This work (i) introduces a novel method to predict motion from appearance in a single static image; (ii) to that end, extends the Structured Random Forest to regression, derived from first principles; and (iii) shows the value of adding motion predictions to different tasks such as weak frame proposals containing unexpected events, action recognition, and motion saliency. Illustrative results indicate that motion prediction is not only feasible but also provides valuable information for a number of applications.
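The underlying assumption, similar appearance implies similar motion, can be demonstrated with the simplest possible predictor: a nearest-neighbor lookup over stored (appearance, motion) pairs harvested from videos. The paper itself uses a Structured Random Forest; the toy values below are ours.

```python
import numpy as np

# Training data: appearance descriptors paired with observed flow vectors.
train_appearance = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
train_motion = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])

def predict_motion(patch):
    """Predict motion for a still-image patch from its nearest
    appearance neighbor in the training set."""
    dists = np.linalg.norm(train_appearance - patch, axis=1)
    return train_motion[dists.argmin()]

flow = predict_motion(np.array([0.9, 1.2]))   # closest to the second exemplar
```

A forest generalizes this lookup: each leaf stores a motion model for the appearances routed to it, so prediction remains a learned appearance-to-motion mapping.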
Editorial #7
Tiago Baptista, Susana Viegas, Maria do Carmo Piçarra
et al.
Online Design Aid for Evaluating Manure Pit Ventilation Systems to Reduce Entry Risk
Harvey B. Manbeck, Daniel W. Hofstetter, Dennis J. Murphy
et al.
On-farm manure storage pits contain both toxic and asphyxiating gases such as hydrogen sulfide, carbon dioxide, methane and ammonia. Farmers and service personnel occasionally need to enter these pits to conduct repair and maintenance tasks. One intervention to reduce the toxic and asphyxiating gas exposure risk to farm workers when entering manure pits is manure pit ventilation. This article describes an online computational fluid dynamics-based design aid for evaluating the effectiveness of manure pit ventilation systems in reducing the concentrations of toxic and asphyxiating gases in the manure pits. This design aid, developed by a team of agricultural engineering and agricultural safety specialists at Pennsylvania State University, represents the culmination of more than a decade of research and technology development effort. The article includes a summary of the research efforts leading to the online design aid development and describes protocols for using the online design aid, including procedures for data input and for accessing design aid results. Design aid results include gas concentration decay and oxygen replenishment curves inside the manure pit and inside the barns above the manure pits, as well as animated motion pictures of individual gas concentration decay and oxygen replenishment in selected horizontal and vertical cut plots in the manure pits and barns. These results allow the user to assess: (1) how long one needs to ventilate the pits to remove toxic and asphyxiating gases from the pit and barn, (2) from which portions of the barn and pit these gases are most and least readily evacuated, and (3) whether or not animals and personnel need to be removed from portions of the barn above the manure pit being ventilated.
Public aspects of medicine