arXiv Open Access 2024

Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld

Moein Khajehnejad, Forough Habibollahi, Aswin Paul, Adeel Razi, Brett J. Kagan

Abstract

How do biological systems and machine learning algorithms compare in the number of samples required to show significant improvements in completing a task? We compared the learning efficiency of in vitro biological neural networks to state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'. Using DishBrain, a system that embodies in vitro neural networks with in silico computation using a high-density multi-electrode array, we contrasted the learning rate and performance of these biological systems against time-matched learning from three state-of-the-art deep RL algorithms (DQN, A2C, and PPO) in the same game environment. This allowed a meaningful comparison between biological neural systems and deep RL. We find that when samples are limited to a real-world time course, even these very simple biological cultures outperformed deep RL algorithms across various game performance characteristics, implying a higher sample efficiency. Ultimately, even when tested across multiple types of information input to assess the impact of higher-dimensional data input, biological neurons showcased faster learning than all deep reinforcement learning agents.


Authors (5)

Moein Khajehnejad
Forough Habibollahi
Aswin Paul
Adeel Razi
Brett J. Kagan

Citation Format

Khajehnejad, M., Habibollahi, F., Paul, A., Razi, A., & Kagan, B. J. (2024). Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld. arXiv:2405.16946. https://arxiv.org/abs/2405.16946

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓