• Media type: Text; Report; E-Book
  • Title: How to gamble with non-stationary X-armed bandits and have no regrets
  • Contributor: Avanesov, Valeriy [Author]
  • Published: Weierstrass Institute for Applied Analysis and Stochastics publication server, 2020
  • Language: English
  • DOI: https://doi.org/10.20347/WIAS.PREPRINT.2686
  • Keywords: 62M10; 62H15; article; Bootstrap; change point detection; nonparametrics; regression; multiscale
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: In the X-armed bandit problem, an agent sequentially interacts with an environment that yields a reward based on the vector input the agent provides. The agent's goal is to maximise the sum of these rewards over a number of time steps. The problem and its variations have been the subject of numerous studies, which propose strategies with sub-linear and sometimes optimal regret. This paper introduces a new variation of the problem: we consider an environment which can abruptly change its behaviour an unknown number of times. To that end we propose a novel strategy and prove that it attains sub-linear cumulative regret. Moreover, the obtained regret bound matches the best known bound for GP-UCB in the stationary case, and approaches the minimax lower bound when the relation between an action and the corresponding reward is highly smooth. The theoretical result is supported by an experimental study.
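  A minimal sketch of the setting described above, assuming a piecewise-stationary environment over X = [0, 1]^d whose reward function changes abruptly at unknown change points. The agent here is a uniform-exploration placeholder, not the paper's strategy (which would use a GP-UCB-style rule); all names, the change points, and the reward functions are illustrative assumptions.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical piecewise-stationary environment: the reward function over
  # X = [0, 1]^d changes abruptly at unknown change points.
  change_points = {300, 700}          # unknown to the agent in the real setting
  d, horizon, noise_sd = 2, 1000, 0.1

  def make_reward_fn():
      """Draw a random smooth reward function f : [0, 1]^d -> R (illustrative)."""
      centre = rng.uniform(size=d)
      return lambda x: np.exp(-4.0 * np.sum((x - centre) ** 2))

  f = make_reward_fn()
  cumulative_regret = 0.0

  for t in range(horizon):
      if t in change_points:          # abrupt change of the environment's behaviour
          f = make_reward_fn()

      # Placeholder agent: uniform exploration. The paper's strategy would instead
      # choose x_t by maximising a GP-UCB-style acquisition over recent data.
      x_t = rng.uniform(size=d)
      reward = f(x_t) + noise_sd * rng.normal()

      # Instantaneous regret: gap to the best action under the *current* reward
      # function, approximated on a random grid for illustration only.
      grid = rng.uniform(size=(1024, d))
      f_star = max(f(x) for x in grid)
      cumulative_regret += f_star - f(x_t)

  print(f"cumulative regret after {horizon} steps: {cumulative_regret:.1f}")
  ```

  The paper's contribution is a strategy whose cumulative regret in this simulation loop would grow sub-linearly in the horizon despite the abrupt changes; the uniform-exploration baseline above incurs linear regret and serves only to fix the interaction protocol.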