arXiv Open Access 2024

Reinforcement Learning Controllers for Soft Robots using Learned Environments

Uljad Berdica, Matthew Jackson, Niccolò Enrico Veronese, Jakob Foerster, Perla Maiolino

Abstract

Soft robotic manipulators offer operational advantages due to their compliant and deformable structures. However, their inherently nonlinear dynamics present substantial challenges. Traditional analytical methods often depend on simplifying assumptions, while learning-based techniques can be computationally demanding and limit the control policies to existing data. This paper introduces a novel approach to soft robotic control, leveraging state-of-the-art policy gradient methods within parallelizable synthetic environments learned from data. We also propose a safety-oriented actuation space exploration protocol via cascaded updates and weighted randomness. Specifically, our recurrent forward dynamics model is learned by generating a training dataset from a physically safe mean-reverting random walk in actuation space to explore the partially observed state space. We demonstrate a reinforcement learning approach towards closed-loop control through state-of-the-art actor-critic methods, which efficiently learn high-performance behaviour over long horizons. This approach removes the need for any knowledge regarding the robot's operation or capabilities and sets the stage for a comprehensive benchmarking tool in soft robotics control.
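The abstract describes generating training data with a physically safe mean-reverting random walk in actuation space. The paper's exact protocol is not reproduced here; the following is a minimal sketch of the general idea, assuming an Ornstein-Uhlenbeck-style update in which each actuation command drifts back toward a safe rest value while Gaussian noise drives exploration. All names, parameters, and limits (`theta`, `sigma`, `a_rest`, the clipping bounds) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mean_reverting_walk(n_steps, n_actuators, theta=0.1, sigma=0.05,
                        a_rest=0.0, seed=0):
    """Hypothetical sketch of a mean-reverting random walk in actuation space.

    Each step pulls the command vector back toward a safe rest actuation
    a_rest and adds small Gaussian noise, so the exploration trajectory
    stays near physically safe commands instead of drifting to extremes.
    """
    rng = np.random.default_rng(seed)
    a = np.full(n_actuators, a_rest, dtype=float)
    trajectory = [a.copy()]
    for _ in range(n_steps - 1):
        # Drift toward the rest actuation, plus weighted random exploration.
        a = a + theta * (a_rest - a) + sigma * rng.normal(size=n_actuators)
        # Assumed hard actuation limits for physical safety.
        a = np.clip(a, -1.0, 1.0)
        trajectory.append(a.copy())
    return np.stack(trajectory)

traj = mean_reverting_walk(n_steps=1000, n_actuators=3)
```

Because the drift term contracts deviations at rate `theta`, the walk fluctuates around `a_rest` with a bounded stationary spread rather than diffusing without limit, which is what makes it attractive for safe data collection on hardware.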


Citation Format

Berdica, U., Jackson, M., Veronese, N.E., Foerster, J., Maiolino, P. (2024). Reinforcement Learning Controllers for Soft Robots using Learned Environments. https://arxiv.org/abs/2410.18519

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓