arXiv Open Access 2026

TriC-Motion: Tri-Domain Causal Modeling Grounded Text-to-Motion Generation

Yiyang Cao, Yunze Deng, Ziyu Lin, Bin Feng, Xinggang Wang, +3 others

Abstract

Text-to-motion generation, a rapidly evolving field in computer vision, aims to produce realistic and text-aligned motion sequences. Current methods primarily focus on spatial-temporal modeling or independent frequency domain analysis, lacking a unified framework for joint optimization across spatial, temporal, and frequency domains. This limitation hinders the model's ability to leverage information from all domains simultaneously, leading to suboptimal generation quality. Additionally, in motion generation frameworks, motion-irrelevant cues caused by noise are often entangled with features that contribute positively to generation, thereby leading to motion distortion. To address these issues, we propose Tri-Domain Causal Text-to-Motion Generation (TriC-Motion), a novel diffusion-based framework integrating spatial-temporal-frequency-domain modeling with causal intervention. TriC-Motion includes three core modeling modules for domain-specific modeling, namely Temporal Motion Encoding, Spatial Topology Modeling, and Hybrid Frequency Analysis. After comprehensive modeling, a Score-guided Tri-domain Fusion module integrates valuable information from the triple domains, simultaneously ensuring temporal consistency, spatial topology, motion trends, and dynamics. Moreover, the Causality-based Counterfactual Motion Disentangler is meticulously designed to expose motion-irrelevant cues to eliminate noise, disentangling the real modeling contributions of each domain for superior generation. Extensive experimental results validate that TriC-Motion achieves superior performance compared to state-of-the-art methods, attaining an outstanding R@1 of 0.612 on the HumanML3D dataset. These results demonstrate its capability to generate high-fidelity, coherent, diverse, and text-aligned motion sequences. Code is available at: https://caoyiyang1105.github.io/TriC-Motion/.

Authors (8)

Yiyang Cao, Yunze Deng, Ziyu Lin, Bin Feng, Xinggang Wang, Wenyu Liu, Dandan Zheng, Jingdong Chen

Citation Format

Cao, Y., Deng, Y., Lin, Z., Feng, B., Wang, X., Liu, W. et al. (2026). TriC-Motion: Tri-Domain Causal Modeling Grounded Text-to-Motion Generation. https://arxiv.org/abs/2602.08462

Journal Information

Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓