arXiv Open Access 2025

MORAL: A Multimodal Reinforcement Learning Framework for Decision Making in Autonomous Laboratories

Natalie Tirabassi, Sathish A. P. Kumar, Sumit Jha, Arvind Ramanathan

Abstract

We propose MORAL (a multimodal reinforcement learning framework for decision making in autonomous laboratories) that enhances sequential decision-making in autonomous robotic laboratories through the integration of visual and textual inputs. Using the BridgeData V2 dataset, we generate fine-tuned image captions with a pretrained BLIP-2 vision-language model and combine them with visual features through an early fusion strategy. The fused representations are processed using Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) agents. Experimental results demonstrate that multimodal agents achieve a 20% improvement in task completion rates and significantly outperform visual-only and textual-only baselines after sufficient training. Compared to transformer-based and recurrent multimodal RL models, our approach achieves superior performance in cumulative reward and caption quality metrics (BLEU, METEOR, ROUGE-L). These results highlight the impact of semantically aligned language cues in enhancing agent learning efficiency and generalization. The proposed framework contributes to the advancement of multimodal reinforcement learning and embodied AI systems in dynamic, real-world environments.
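The early fusion strategy described in the abstract — concatenating visual features with caption embeddings before the policy network — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the feature dimensions (512-d visual, 768-d text), layer sizes, and action count are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class EarlyFusionQNetwork(nn.Module):
    """Illustrative DQN head over early-fused multimodal features.

    Assumed inputs: 512-d visual features (e.g. from a CNN encoder) and
    768-d caption embeddings (e.g. pooled BLIP-2 text features).
    """

    def __init__(self, visual_dim=512, text_dim=768, num_actions=7):
        super().__init__()
        # Early fusion: the modalities are concatenated before any
        # decision layers, so the Q-head sees a single joint vector.
        self.q_head = nn.Sequential(
            nn.Linear(visual_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, visual_feat, caption_feat):
        fused = torch.cat([visual_feat, caption_feat], dim=-1)
        return self.q_head(fused)

# Greedy action selection from the fused Q-values,
# using random stand-ins for the encoder outputs.
net = EarlyFusionQNetwork()
v = torch.randn(1, 512)   # stand-in visual features
t = torch.randn(1, 768)   # stand-in caption embedding
action = net(v, t).argmax(dim=-1)
```

The same fused representation could equally feed a PPO actor-critic head; only the output layers differ.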

Citation Format

Tirabassi, N., Kumar, S. A. P., Jha, S., & Ramanathan, A. (2025). MORAL: A Multimodal Reinforcement Learning Framework for Decision Making in Autonomous Laboratories. arXiv. https://arxiv.org/abs/2504.03153

Journal Information
Publication Year: 2025
Language: en
Database Source: arXiv
Access: Open Access ✓