• Media type: E-Book
  • Title: Ambiguous dynamic treatment regimes : a reinforcement learning approach
  • Contributor: Saghafian, Soroush [VerfasserIn]
  • Imprint: [Cambridge, MA]: Harvard Kennedy School, John F. Kennedy School of Government, 2021
  • Published in: John F. Kennedy School of Government: Faculty research working paper series ; 2021,34
  • Issue: Version: December 8, 2021
  • Extent: 1 online resource (circa 36 pages); illustrations
  • Language: English
  • DOI: 10.2139/ssrn.3980837
  • Keywords: Observational Data ; Dynamic Treatment Regimes ; Unobserved Confounders ; APOMDPs (Ambiguous Partially Observable Markov Decision Processes) ; Reinforcement Learning ; Grey Literature
  • Description: A main research goal in many studies is to use an observational data set to provide a new set of counterfactual guidelines that can yield causal improvements. Dynamic Treatment Regimes (DTRs) are widely studied to formalize this process and enable researchers to find guidelines that are both personalized and dynamic. However, available methods for finding optimal DTRs often rely on assumptions that are violated in real-world applications (e.g., medical decision-making or public policy), especially when (a) the existence of unobserved confounders cannot be ignored, and (b) the unobserved confounders are time-varying (e.g., affected by previous actions). When such assumptions are violated, one often faces ambiguity regarding the underlying causal model that must be assumed to obtain an optimal DTR. This ambiguity is inevitable, since the dynamics of unobserved confounders and their causal impact on the observed part of the data cannot be understood from the observed data. Motivated by a case study of finding superior treatment regimes for patients who underwent transplantation in our partner hospital and faced a medical condition known as New Onset Diabetes After Transplantation (NODAT), we extend DTRs to a new class termed Ambiguous Dynamic Treatment Regimes (ADTRs), in which the causal impact of treatment regimes is evaluated based on a "cloud" of potential causal models. We then connect ADTRs to Ambiguous Partially Observable Markov Decision Processes (APOMDPs) proposed by Saghafian (2018), and consider unobserved confounders as latent variables, but with ambiguous dynamics and causal effects on observed variables. Using this connection, we develop two Reinforcement Learning methods termed Direct Augmented V-Learning (DAV-Learning) and Safe Augmented V-Learning (SAV-Learning), which enable using the observed data to efficiently learn an optimal treatment regime.
We establish theoretical results for these learning methods, including (weak) consistency and asymptotic normality. We further evaluate the performance of these learning methods both in our case study (using clinical data) and in simulation experiments (using synthetic data). We find promising results for our proposed approaches, showing that they perform well even compared to an imaginary oracle who knows both the true causal model (of the data-generating process) and the optimal regime under that model.
  • Access State: Open Access