Long Run Stochastic Control Problems with General Discounting

Bibliographic Details
Published in: Applied Mathematics & Optimization, 2024-04, Vol. 89(2), p. 52, Article 52
Main Author: Stettner, Łukasz
Format: Article
Language:English
Subjects:
Description
Summary: Controlled discrete-time Markov processes are studied, first with a long-run general discounting functional. It is shown that optimal strategies for the average-reward-per-unit-time problem are also optimal for the average generally discounted functional. Then a long-run risk-sensitive reward functional with general discounting is considered. When the risk factor is positive, the optimal value of this reward functional is dominated by the reward functional corresponding to long-run risk-sensitive control. In the case of a negative risk factor, an asymptotic result is obtained: the optimal average-reward-per-unit-time control is nearly optimal for the long-run risk-sensitive reward functional with general discounting, provided the risk factor is close to 0. For this purpose, the Appendix establishes upper estimates for large deviations of weighted empirical measures, which are of independent interest.
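The record does not spell out the functionals compared in the abstract. A standard formulation of the three objectives for a controlled Markov process $(x_t)$ with reward $r$, control $\pi$, discount weights $\beta(t)$, and risk factor $\gamma$ — which may differ in detail from the paper's own definitions — is:

```latex
% Average reward per unit time:
J_{\mathrm{av}}(x,\pi) \;=\; \liminf_{n\to\infty}\;\frac{1}{n}\,
  \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{n-1} r(x_t, a_t)\right]

% Average generally discounted functional (weights \beta(t) \ge 0):
J_{\beta}(x,\pi) \;=\; \liminf_{n\to\infty}\;
  \frac{\mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{n-1} \beta(t)\, r(x_t, a_t)\right]}
       {\sum_{t=0}^{n-1} \beta(t)}

% Long-run risk-sensitive reward functional (risk factor \gamma \ne 0):
J_{\gamma}(x,\pi) \;=\; \liminf_{n\to\infty}\;\frac{1}{\gamma n}\,
  \log \mathbb{E}_x^{\pi}\!\left[\exp\!\left(\gamma \sum_{t=0}^{n-1} r(x_t, a_t)\right)\right]
```

Under this reading, the abstract's first result says a strategy optimal for $J_{\mathrm{av}}$ remains optimal for $J_{\beta}$, and the asymptotic result concerns $J_{\gamma}$ combined with the weights $\beta(t)$ as $\gamma \to 0^-$.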
ISSN: 0095-4616, 1432-0606
DOI: 10.1007/s00245-024-10118-5