Fuzzy Q-table reinforcement learning for continuous state spaces: A case study on Bitcoin futures trading
Authors
Advisors
Issue Date
Type
Keywords
Citation
Abstract
One of the simplest approaches in Reinforcement Learning (RL) is updating a Q-table with the Bellman operator. While theory suggests that modeling a discrete Q-table with the Bellman operator can converge, practical limitations surface in real-world scenarios. The main challenges are the exponential growth of the Q-table with the number of state dimensions and the inability to apply a Q-table to continuous state spaces. Alternative approaches, such as approximating a parameterized Q-function with neural networks, do not necessarily converge. In response to these challenges, this paper introduces a simple, innovative methodology inspired by the Bellman update. The proposed method uses fuzzy rules to discretize the state space, allowing the Bellman operator to update the fuzzy neural network weights directly, so that the network effectively acts as a Fuzzy Q-table. Instead of approximating the Q-function with a neural network or deep neural network trained by gradient methods, the proposed method builds a Fuzzy Q-table and updates it with the Bellman equation. This design choice addresses the convergence problem and avoids entrapment in local minima, a common difficulty of conventional gradient methods. The efficacy of the proposed approach is demonstrated through its application to trading in the Bitcoin Futures Market, showcasing its ability to handle the market's complexities and uncertainties. Beyond financial markets, the methodology offers a versatile solution for a wide range of reinforcement learning problems, addressing limitations of traditional Q-tables and DQN.
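The following is a minimal Python sketch of the kind of Fuzzy Q-table the abstract describes: triangular fuzzy sets discretize each state dimension, every combination of fuzzy sets defines a rule holding one Q-value per action, and a Bellman-style update is applied directly to the rule weights in proportion to their firing strengths, with no gradient descent. All names (FuzzyQTable, triangular_membership), the two-dimensional trading state, the three-action setup, and the hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np
import itertools

def triangular_membership(x, centers):
    # Membership degrees of scalar x in shouldered triangular fuzzy sets centred at `centers`.
    mu = np.zeros(len(centers))
    for i, c in enumerate(centers):
        left = centers[i - 1] if i > 0 else c
        right = centers[i + 1] if i < len(centers) - 1 else c
        if x <= c:
            mu[i] = 1.0 if left == c else max(0.0, (x - left) / (c - left))
        else:
            mu[i] = 1.0 if right == c else max(0.0, (right - x) / (right - c))
    return mu

class FuzzyQTable:
    # One row of Q-values per fuzzy rule, i.e. per combination of fuzzy sets across dimensions.
    def __init__(self, centers_per_dim, n_actions, alpha=0.1, gamma=0.99):
        self.centers_per_dim = centers_per_dim
        self.alpha, self.gamma = alpha, gamma
        n_rules = int(np.prod([len(c) for c in centers_per_dim]))
        self.q = np.zeros((n_rules, n_actions))  # the Fuzzy Q-table

    def rule_strengths(self, state):
        # Firing strength of every rule: product of per-dimension membership degrees, normalised.
        per_dim = [triangular_membership(x, c) for x, c in zip(state, self.centers_per_dim)]
        w = np.array([np.prod(combo) for combo in itertools.product(*per_dim)])
        return w / (w.sum() + 1e-12)

    def q_values(self, state):
        # Q(s, a) is the firing-strength-weighted sum of the rule Q-values.
        return self.rule_strengths(state) @ self.q

    def update(self, state, action, reward, next_state, done):
        # Bellman update distributed over the rule weights by firing strength.
        w = self.rule_strengths(state)
        target = reward + (0.0 if done else self.gamma * self.q_values(next_state).max())
        td_error = target - self.q_values(state)[action]
        self.q[:, action] += self.alpha * w * td_error

# Hypothetical usage on a two-dimensional state (e.g. normalised return, current position),
# with three actions standing in for short / flat / long.
agent = FuzzyQTable([np.linspace(-1, 1, 5), np.linspace(-1, 1, 3)], n_actions=3)
action = int(agent.q_values([0.2, 0.0]).argmax())
agent.update([0.2, 0.0], action, reward=0.01, next_state=[0.25, 1.0], done=False)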
Table of Contents
Description
Conference Location: Valletta, Malta
Publisher
Journal
Book Title
Series
PubMed ID
ISSN
2576-3547