arXiv Open Access 2023

Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets

Peer Nagy, Jan-Peter Calliess, Stefan Zohren

Abstract

We employ deep reinforcement learning (RL) to train an agent to successfully translate a high-frequency trading signal into a trading strategy that places individual limit orders. Based on the ABIDES limit order book simulator, we build a reinforcement learning OpenAI gym environment and utilise it to simulate a realistic trading environment for NASDAQ equities based on historic order book messages. To train a trading agent that learns to maximise its trading return in this environment, we use Deep Duelling Double Q-learning with the APEX (asynchronous prioritised experience replay) architecture. The agent observes the current limit order book state, its recent history, and a short-term directional forecast. To investigate the performance of RL for adaptive trading independently of a concrete forecasting algorithm, we study the performance of our approach using synthetic alpha signals obtained by perturbing forward-looking returns with varying levels of noise. Here, we find that the RL agent learns an effective trading strategy for inventory management and order placement that outperforms a heuristic benchmark trading strategy with access to the same signal.
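The synthetic alpha signals described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function name, the log-return definition, the look-ahead horizon, and the Gaussian noise parameterisation are all assumptions for the sketch. The key idea from the abstract is preserved: take a forward-looking (perfect-foresight) return and perturb it with noise, where the noise level controls how informative the signal is.

```python
import numpy as np

def synthetic_alpha_signal(prices, horizon=10, noise_scale=0.5, seed=0):
    """Noisy directional signal built from forward-looking returns (sketch).

    prices:       1-D array of mid-prices.
    horizon:      look-ahead horizon (in steps) for the 'true' future return.
    noise_scale:  std of the Gaussian noise relative to the std of the
                  forward returns; larger values give a weaker signal.
    """
    rng = np.random.default_rng(seed)
    prices = np.asarray(prices, dtype=float)
    # Forward-looking log return over the horizon (perfect foresight).
    fwd = np.log(prices[horizon:] / prices[:-horizon])
    # Perturb with Gaussian noise scaled to the forward-return volatility.
    noise = rng.normal(0.0, noise_scale * fwd.std(), size=fwd.shape)
    return fwd + noise
```

Sweeping `noise_scale` then yields a family of signals of varying quality, letting the RL agent's execution performance be studied independently of any concrete forecasting model.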


Authors (3)

Peer Nagy

Jan-Peter Calliess

Stefan Zohren

Citation Format

Nagy, P., Calliess, J.-P., & Zohren, S. (2023). Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets. arXiv preprint. https://arxiv.org/abs/2301.08688

Journal Information
Publication Year: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓