• Media type: E-Book
  • Title: Markov decision processes : discrete stochastic dynamic programming
  • Contributor: Puterman, Martin L. [Author]
  • Imprint: New York: Wiley, ©1994
  • Published in: Wiley series in probability and mathematical statistics. Applied probability and statistics section
  • Extent: 1 online resource (xvii, 649 pages); illustrations
  • Language: English
  • DOI: 10.1002/9780470316887
  • ISBN: 0470316888; 0470317728; 0471619779; 9780470317723; 9780470316887; 9780471619772
  • Keywords: Markov processes ; Statistical decision ; Dynamic programming ; MATHEMATICS ; Probability & Statistics ; Bayesian Analysis ; Markov decision problems
  • Footnote: Includes bibliographical references (pages 613-642) and index
  • Description: 1. Introduction -- 2. Model Formulation -- 3. Examples -- 4. Finite-Horizon Markov Decision Processes -- 5. Infinite-Horizon Models: Foundations -- 6. Discounted Markov Decision Problems -- 7. The Expected Total-Reward Criterion -- 8. Average Reward and Related Criteria -- 9. The Average Reward Criterion-Multichain and Communicating Models -- 10. Sensitive Discount Optimality -- 11. Continuous-Time Models -- Appendix A. Markov Chains -- Appendix B. Semicontinuous Functions -- Appendix C. Normed Linear Spaces -- Appendix D. Linear Programming.

    Markov Decision Processes will prove invaluable to researchers in operations research, management science, and control theory. Its applied emphasis will serve the needs of researchers in communications and control engineering, economics, statistics, mathematics, computer science, and mathematical ecology. Moreover, its conceptual development from simple to complex models, numerous applications in text and problems, and background coverage of relevant mathematics will make it a highly useful textbook in courses on dynamic programming and stochastic control.

    Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historical references in the book's extensive, up-to-date bibliography; numerous figures illustrate examples, algorithms, results, and computations; a biographical sketch highlights the life and work of A. A. Markov; an afterword discusses partially observed models and other key topics; and appendices examine Markov chains, normed linear spaces, semicontinuous functions, and linear programming.

    Markov Decision Processes focuses primarily on infinite-horizon, discrete-time models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a "theorem-proof" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book to demonstrate many results and algorithms.
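    To illustrate the optimality (Bellman) equation mentioned above, the sketch below runs value iteration on a small two-state, two-action MDP. The transition probabilities, rewards, and discount factor are hypothetical values chosen for illustration (this is not the book's Chapter 3 example); the code simply applies the Bellman optimality operator until the value function converges and then reads off the greedy policy.

    ```python
    import numpy as np

    # Hypothetical two-state, two-action discounted MDP (illustrative only).
    # P[a, s, t] = probability of moving from state s to state t under action a.
    P = np.array([
        [[0.9, 0.1],   # action 0 from states 0 and 1
         [0.4, 0.6]],
        [[0.2, 0.8],   # action 1 from states 0 and 1
         [0.5, 0.5]],
    ])
    # r[s, a] = expected one-step reward in state s under action a.
    r = np.array([
        [5.0, 10.0],
        [-1.0, 2.0],
    ])
    gamma = 0.95  # discount factor

    # Value iteration: repeatedly apply the Bellman optimality operator
    #   (Lv)(s) = max_a [ r(s, a) + gamma * sum_t p(t | s, a) v(t) ]
    # until successive iterates are close in the sup norm.
    v = np.zeros(2)
    for _ in range(10_000):
        q = r + gamma * np.einsum("ast,t->sa", P, v)  # q[s, a]
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < 1e-8:
            v = v_new
            break
        v = v_new

    policy = q.argmax(axis=1)  # greedy policy w.r.t. converged values
    print("optimal values:", v)
    print("greedy (optimal) policy:", policy)
    ```

    For a discounted problem like this one, the Bellman operator is a contraction, so the iterates converge to the unique fixed point and the greedy policy with respect to the converged values is optimal.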

    The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision making is needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision process models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature.