Risk-Averse Total-Reward Reinforcement Learning

Source: cs.LG updates on arXiv.org

arXiv:2506.21683v1 Announce Type: new
Abstract: Risk-averse total-reward Markov Decision Processes (MDPs) offer a promising framework for modeling and solving undiscounted infinite-horizon objectives. Existing model-based algorithms for risk measures like the entropic risk measure (ERM) and entropic value-at-risk (EVaR) are effective in small problems, but require full access to transition probabilities. We propose a Q-learning algorithm to compute the optimal stationary policy for total-reward ERM and EVaR objectives with strong convergence and performance guarantees. The algorithm and its optimality are made possible by ERM’s dynamic consistency and elicitability. Our numerical results on tabular domains demonstrate quick and reliable convergence of the proposed Q-learning algorithm to the optimal risk-averse value function.
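
The abstract describes but does not reproduce the Q-learning update, so the sketch below is a rough illustration only. It shows a tabular, exponential-utility style Q-learning update of the kind commonly used for entropic-risk objectives: the entropic risk measure is ERM_beta[X] = (1/beta) * log E[exp(beta * X)], with beta < 0 encoding risk aversion, and the update is a stochastic-approximation step toward an ERM-style Bellman fixed point (risk-neutral Q-learning is recovered as beta approaches 0). The toy MDP, parameter values, and the specific update rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Hypothetical toy MDP: a short chain with a risky "jump" action.
# Everything here (environment, parameters, update rule) is an
# illustrative assumption, not taken from the paper.

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
goal = n_states - 1  # absorbing goal state

def step(s, a):
    """Toy transition: action 0 is a safe step, action 1 a risky jump."""
    if s == goal:
        return s, 0.0, True
    if a == 1 and rng.random() < 0.3:   # risky action sometimes fails
        return 0, -1.0, False            # reset to the start with a penalty
    s_next = min(s + 1 + a, goal)
    r = -0.1                             # small per-step cost
    return s_next, r + (1.0 if s_next == goal else 0.0), s_next == goal

# ERM-style tabular Q-learning sketch.
# beta < 0 encodes risk aversion under ERM_beta[X] = (1/beta) log E[exp(beta X)].
beta, alpha, eps = -1.0, 0.1, 0.2
Q = np.zeros((n_states, n_actions))

for episode in range(2000):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        target = r + (0.0 if done else np.max(Q[s_next]))
        delta = target - Q[s, a]
        # Exponential-utility TD update: a stochastic-approximation step
        # toward an entropic-risk Bellman fixed point; the ordinary
        # risk-neutral TD update is recovered in the limit beta -> 0.
        Q[s, a] += alpha * (np.exp(beta * delta) - 1.0) / beta
        s = s_next

print(np.round(Q, 3))
```

With beta well below zero, the learned values penalize the risky jump more heavily than a risk-neutral agent would, which is the qualitative behavior a total-reward ERM objective is meant to capture.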
