A Version of the Euler Equation in Discounted Markov Decision Processes

Bibliographic Details
Main Authors: H. Cruz-Suárez, G. Zacarías-Espinoza, V. Vázquez-Guevara
Format: Article
Language: English
Published: Wiley, 2012-01-01
Series: Journal of Applied Mathematics
Online Access: http://dx.doi.org/10.1155/2012/103698
Description
Summary: This paper deals with Markov decision processes (MDPs) on Euclidean spaces with an infinite horizon. One approach to studying this class of MDPs is the dynamic programming (DP) technique, in which the optimal value function is characterized through the value iteration functions. The paper provides conditions that guarantee the convergence of the maximizers of the value iteration functions to the optimal policy. Then, using the Euler equation and an envelope formula, the optimal solution of the optimal control problem is obtained. Finally, this theory is applied to a linear-quadratic control problem in order to find its optimal policy.
ISSN: 1110-757X, 1687-0042
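
Illustrative sketch (an assumption-laden example, not taken from the paper): the linear-quadratic application mentioned in the summary can be mimicked in a few lines of Python. The scalar dynamics x_{t+1} = a*x_t + b*u_t, the quadratic stage cost, the discount factor, and all numerical values below are illustrative assumptions. For quadratic value functions v_n(x) = p_n*x^2, the value-iteration step collapses to a scalar Riccati-type recursion, and the minimizing linear policies u = k_n*x converge to the optimal stationary policy, which is the phenomenon the paper studies in general form.

    # Minimal sketch of value iteration for a discounted scalar LQ problem.
    # All model data below (a, b, q, r, beta) are illustrative assumptions.
    #
    # Model: x_{t+1} = a*x_t + b*u_t, stage cost c(x, u) = q*x**2 + r*u**2,
    # discount factor beta in (0, 1).  With v_n(x) = p_n*x**2, the step
    #     v_{n+1}(x) = min_u { q*x**2 + r*u**2 + beta*v_n(a*x + b*u) }
    # yields a linear minimizer u = k_n*x and a scalar recursion for p_n.
    a, b = 0.9, 0.5        # assumed dynamics
    q, r = 1.0, 1.0        # assumed cost weights
    beta = 0.95            # assumed discount factor

    p, k = 0.0, 0.0        # v_0 = 0, initial (trivial) policy
    for n in range(200):
        # Gain of the linear policy minimizing the quadratic in u.
        k = -beta * p * a * b / (r + beta * p * b * b)
        # Coefficient of the next value iteration function v_{n+1}.
        p = q + r * k * k + beta * p * (a + b * k) ** 2

    print("limit value coefficient p ~", round(p, 6))
    print("limit policy gain       k ~", round(k, 6))

Running this shows the gains k_n stabilizing after a few dozen iterations, i.e. the optimizers of the value iteration functions (here, cost minimizers) converge to the optimal stationary policy, in line with the convergence result described in the summary.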