• Media type: Electronic Conference Proceeding; E-Article; Text
  • Title: How to Play in Infinite MDPs (Invited Talk)
  • Contributor: Kiefer, Stefan [Author]; Mayr, Richard [Author]; Shirmohammadi, Mahsa [Author]; Totzke, Patrick [Author]; Wojtczak, Dominik [Author]
  • Imprint: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020
  • Language: English
  • DOI: https://doi.org/10.4230/LIPIcs.ICALP.2020.3
  • Keywords: Markov decision processes
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with a finite state space it is known that for a wide range of objectives there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this paper we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. We aim to give an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs. (An illustrative sketch of a memoryless deterministic strategy follows this record.)
  • Access State: Open Access
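
The following is a minimal sketch, not taken from the paper: a hypothetical finite MDP with four states and a reachability objective, solved by value iteration. All state names, actions, and transition probabilities are invented for illustration. The extracted strategy fixes a single action per state and is therefore memoryless and deterministic, as the description says is possible for a wide range of objectives in finite-state MDPs.

Python sketch:

    # A tiny hypothetical MDP with a reachability objective.
    # States: 0 (start), 1 (risky), 2 (goal), 3 (sink); goal and sink are absorbing.
    # mdp[state][action] = list of (probability, successor) pairs.
    mdp = {
        0: {"a": [(0.5, 2), (0.5, 3)],   # gamble: reach the goal with probability 1/2
            "b": [(1.0, 1)]},            # move to the risky state instead
        1: {"a": [(0.9, 2), (0.1, 3)]},  # from state 1, reach the goal with probability 0.9
        2: {"a": [(1.0, 2)]},            # goal (absorbing)
        3: {"a": [(1.0, 3)]},            # sink (absorbing)
    }
    goal = {2}

    # Value iteration for maximal reachability probabilities.
    value = {s: (1.0 if s in goal else 0.0) for s in mdp}
    for _ in range(1000):
        value = {
            s: (1.0 if s in goal else
                max(sum(p * value[t] for p, t in succ)
                    for succ in mdp[s].values()))
            for s in mdp
        }

    # Memoryless deterministic strategy: one fixed action per state,
    # chosen to maximize the expected value of the successor.
    strategy = {
        s: max(mdp[s], key=lambda a: sum(p * value[t] for p, t in mdp[s][a]))
        for s in mdp
    }
    print(value)     # optimal reachability probabilities, e.g. 0.9 from state 0
    print(strategy)  # state 0 chooses "b" (0.9 beats the 0.5 gamble)

In the countably infinite setting treated in the paper, such a finite lookup table need not exist, which is why, as the description notes, optimal or ε-optimal strategies may require (possibly infinite) memory there.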