• Media type: E-Book
  • Title: Reinforcement Learning for Continuous-Time Optimal Execution : Actor-Critic Algorithm and Error Analysis
  • Contributor: Wang, Boyu [Author]; Gao, Xuefeng [Author]; Li, Lingfei [Author]
  • Imprint: [S.l.]: SSRN, 2023
  • Extent: 1 online resource (49 p.)
  • Language: English
  • DOI: 10.2139/ssrn.4378950
  • Identifier:
  • Keywords: reinforcement learning ; optimal execution ; stochastic control ; actor-critic method ; finite-time error analysis ; convergence analysis
  • Origination:
  • Footnote: According to information from SSRN, the original version of the document was created on March 6, 2023
  • Description: We propose an actor-critic reinforcement learning (RL) algorithm for the optimal execution problem. We consider the celebrated Almgren-Chriss model in continuous time and formulate a relaxed stochastic control problem for execution under an entropy-regularized mean-quadratic-variation objective. We obtain in closed form the optimal value function and the optimal feedback policy, which is Gaussian. We then utilize these analytical results to parametrize our value function and control policy for RL. While standard actor-critic RL algorithms alternate between policy evaluation updates and policy gradient updates, we introduce a recalibration step in addition to these two updates, which turns out to be critical for convergence. We develop a finite-time error analysis of our algorithm and show that it converges linearly under suitable conditions on the learning rates. We test our algorithm in three different types of market simulators built on the Almgren-Chriss model, historical order-flow data, and a stochastic model of limit order books. Empirical results demonstrate the advantages of our algorithm over the classical control method and a deep learning-based RL algorithm. An illustrative sketch of such an actor-critic loop is given after this record.
  • Access State: Open Access
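The description above outlines an actor-critic loop with a Gaussian policy, a policy evaluation (critic) update, a policy gradient (actor) update, and an additional recalibration step. The following is a minimal illustrative sketch of such a loop in a toy Almgren-Chriss-style simulator. It is an assumption-based illustration, not the authors' algorithm: the simulator dynamics, the linear-in-inventory policy mean, the quadratic critic features, the parameter values, and the specific recalibration rule are all invented here for concreteness.

```python
import numpy as np

# Illustrative entropy-regularized actor-critic loop for an execution problem.
# All parametrizations and constants below are assumptions for demonstration.

rng = np.random.default_rng(0)

T, N = 1.0, 50                    # horizon and number of time steps
dt = T / N
sigma, eta, lam = 0.3, 0.05, 1.0  # volatility, temporary impact, risk weight
temp = 0.1                        # entropy-regularization temperature


def simulate_episode(theta_mean, theta_std):
    """Roll out one liquidation episode under a Gaussian policy.

    The trading rate is sampled as v ~ N(theta_mean * q, theta_std^2),
    where q is the remaining inventory (a simple linear-in-state mean,
    assumed here for illustration).
    """
    q = 1.0                        # remaining inventory (normalized)
    traj = []
    for _ in range(N):
        v = rng.normal(theta_mean * q, theta_std)     # sampled trading rate
        # per-step cost: temporary impact plus risk penalty on inventory
        cost = eta * v**2 * dt + lam * (sigma * q)**2 * dt
        # entropy bonus of the Gaussian policy (differential entropy)
        cost -= temp * 0.5 * np.log(2 * np.pi * np.e * theta_std**2) * dt
        traj.append((q, v, cost))
        q -= v * dt
    return traj, q


# actor-critic loop: critic update, actor update, recalibration
w = np.zeros(2)                    # critic weights: V(q) ~ w[0] + w[1] * q**2
theta_mean, theta_std = 1.0, 0.5   # actor parameters of the Gaussian policy
alpha_c, alpha_a = 0.05, 0.01      # learning rates (assumed)

for episode in range(200):
    traj, q_left = simulate_episode(theta_mean, theta_std)

    # 1) policy evaluation (critic): regress cost-to-go on quadratic features
    G = 0.0
    for q, v, cost in reversed(traj):
        G += cost
        feats = np.array([1.0, q**2])
        w += alpha_c * (G - feats @ w) * feats

    # 2) policy gradient (actor): score-function update of the Gaussian mean
    G = 0.0
    for q, v, cost in reversed(traj):
        G += cost
        score = (v - theta_mean * q) * q / theta_std**2
        theta_mean -= alpha_a * G * score

    # 3) recalibration (assumed form): tie the policy variance to the
    #    temperature and the critic's curvature between the two updates
    theta_std = np.sqrt(temp / (2.0 * max(eta + w[1], 1e-3)))

print(f"mean coefficient {theta_mean:.3f}, std {theta_std:.3f}, leftover {q_left:.3f}")
```

In the paper, by contrast, the value function and the Gaussian policy are parametrized using the closed-form solution of the entropy-regularized control problem, and the recalibration step has a specific form that underpins the linear-convergence guarantee; the sketch only mirrors the overall structure of alternating critic, actor, and recalibration updates.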