Financial trading decision model based on deep reinforcement learning for smart agricultural management
Abstract
This study proposes a deep reinforcement learning (DRL)-based decision-making model for agricultural financial trading, addressing core challenges such as significant data noise, strong time-series dependence, and limited strategy adaptability. We developed a multifactor dynamic denoising framework that integrates the Grubbs test for outlier detection with the median absolute deviation (MAD) method for noise handling. The framework categorizes agricultural financial indicators into six feature types, markedly enhancing robustness against data noise and improving model reliability. Furthermore, a long short-term memory (LSTM)-enhanced DRL architecture with a sliding-window mechanism is employed to capture market timing features, and a transaction-cost-based reward function is constructed, yielding an intelligent trading decision model that combines the LSTM network with deep Q-learning (DQL). Experimental results on Deere & Company and BAYN.DE demonstrate an annualized return of 45.12% and a 35% reduction in maximum drawdown. The Sharpe ratio reaches 1.51, a 62% improvement over the benchmark model. These results validate the robustness of the proposed model under price fluctuations and policy interventions. The model addresses critical bottlenecks in applying DRL to agricultural finance, facilitating the transition of agricultural economic management from empirical judgment to data-driven approaches. Through three key innovations (data denoising, time-series modeling, and domain adaptation), it provides a vital decision-support tool for advancing smart agriculture.
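The denoising stage described above (Grubbs test for outliers, MAD for noise handling) can be sketched as follows. This is a minimal illustration of the two standard statistical techniques named in the abstract, not the authors' implementation; the function names, the significance level `alpha`, and the threshold multiplier `k` are assumptions for the example.

```python
import numpy as np
from scipy import stats

def grubbs_outliers(x, alpha=0.05):
    """Iteratively flag outliers with the two-sided Grubbs test.

    Removes the most extreme point while the test statistic G exceeds
    its Student-t-based critical value; returns a boolean mask.
    """
    x = np.asarray(x, dtype=float)
    mask = np.zeros(x.size, dtype=bool)
    idx = np.arange(x.size)
    while idx.size > 2:
        vals = x[idx]
        n = vals.size
        mean, sd = vals.mean(), vals.std(ddof=1)
        if sd == 0:
            break
        dev = np.abs(vals - mean)
        i_max = int(dev.argmax())
        G = dev[i_max] / sd
        # Critical value at significance alpha (two-sided)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if G <= G_crit:
            break
        mask[idx[i_max]] = True
        idx = np.delete(idx, i_max)
    return mask

def mad_denoise(x, k=3.0):
    """Replace points beyond k robust sigmas (1.4826 * MAD) with the median."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return x.copy()
    out = x.copy()
    out[np.abs(x - med) > k * 1.4826 * mad] = med
    return out

# Toy price series with one spike (hypothetical data)
prices = np.array([10.1, 10.2, 9.9, 10.0, 10.3, 25.0, 10.1, 9.8])
print(grubbs_outliers(prices))  # only the 25.0 spike is flagged
print(mad_denoise(prices))      # the spike is replaced by the median 10.1
```

In a full pipeline, the Grubbs mask would typically be applied per indicator series before MAD smoothing, so that a single extreme tick does not distort the robust scale estimate for the rest of the window.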
Authors (3)
Di Fan
Nazrul Hisyam Ab Razak
Wei Ni Soh
Quick Access
- Year Published
- 2025
- Language
- en
- Database Source
- CrossRef
- DOI
- 10.7717/peerj-cs.3196
- Access
- Open Access ✓