Force Sensing Control for Physical Human–Robot Interaction: A Transformer-Based Action Chunking Approach
Abstract
In human–robot collaboration (HRC) scenarios involving direct physical contact, accurately estimating human intention and adjusting robot behavior based on multimodal information are the core factors that restrict the efficiency and precision of current HRC tasks. To enhance the performance of human–robot collaboration under physical contact conditions, we propose a joint network model named ACT_force_cooperative (AFC). This model leverages force sensing information as a representation of human intent, enabling intent prediction during physical interaction, while simultaneously capturing visual information and robot state data, thereby supporting more efficient execution of human–robot collaborative tasks through multimodal information processing. Existing HRC methods often ignore the human operator's collaborative experience in the environment and fail to recognize the unique role of interactive force in expressing human intention. Focusing on this special role of interactive force among the data modalities present in physical interaction environments, the proposed model predicts future human behavioral intentions and adopts a Transformer model to fuse and represent the multimodal information, thus accomplishing HRC tasks more accurately and effectively. Experimental results demonstrate that, through the processing of force sensing information and the fusion of multimodal data, the proposed model reduces motion error by 44.9% and increases the effective collaboration time ratio by 20.2% compared with the baseline Action Chunk Transformer (ACT) model. This not only improves the robot's motion accuracy in collaborative tasks but also enhances the collaborative experience of human operators.
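The abstract describes fusing force, vision, and robot-state inputs with a Transformer that predicts a chunk of future actions. A minimal sketch of that idea is shown below, assuming PyTorch and placeholder dimensions (6-axis force/torque, a 128-dimensional pre-extracted image feature, 7 joint states, a chunk of 8 actions); the class name `AFCSketch` and all sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AFCSketch(nn.Module):
    """Hypothetical sketch: project each modality to a token, fuse the
    tokens with a Transformer encoder, and predict a chunk of K future
    actions (action chunking). Dimensions are assumed, not from the paper."""
    def __init__(self, d_model=64, chunk=8, action_dim=7):
        super().__init__()
        self.force_proj = nn.Linear(6, d_model)     # 6-axis force/torque reading
        self.vision_proj = nn.Linear(128, d_model)  # pre-extracted image feature
        self.state_proj = nn.Linear(7, d_model)     # robot joint state
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, chunk * action_dim)
        self.chunk, self.action_dim = chunk, action_dim

    def forward(self, force, vision, state):
        # one token per modality -> (batch, 3, d_model)
        tokens = torch.stack(
            [self.force_proj(force), self.vision_proj(vision), self.state_proj(state)],
            dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # pool the fused representation
        # predict the whole action chunk at once -> (batch, chunk, action_dim)
        return self.head(fused).view(-1, self.chunk, self.action_dim)

model = AFCSketch()
out = model(torch.randn(2, 6), torch.randn(2, 128), torch.randn(2, 7))
print(out.shape)  # (2, 8, 7): a chunk of 8 seven-dimensional actions per sample
```

In the actual ACT recipe the predicted chunks overlap in time and are blended at execution, which is one plausible source of the reported gains in effective collaboration time.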
Topics & Keywords
Authors (2)
Zhenyu Pan
Weiming Wang
Quick Access
- Publication Year
- 2026
- Source Database
- DOAJ
- DOI
- 10.3390/machines14020249
- Access
- Open Access ✓