A criterion for selecting the appropriate one from the trained models for model‐based offline policy evaluation


Bibliographic Details
Main Authors: Chongchong Li, Yue Wang, Zhi‐Ming Ma, Yuting Liu
Format: Article
Language: English
Published: Wiley 2025-02-01
Series: CAAI Transactions on Intelligence Technology
Online Access: https://doi.org/10.1049/cit2.12376
Description
Summary: Abstract Offline policy evaluation, that is, evaluating and selecting complex policies for decision‐making using only offline datasets, is important in reinforcement learning. At present, model‐based offline policy evaluation (MBOPE) is widely adopted because it is easy to implement and performs well. MBOPE directly approximates the unknown value of a given policy with the Monte Carlo method, given the estimated transition and reward functions of the environment. Usually, multiple models are trained and one of them is then selected for use. However, selecting an appropriate model from those trained remains a challenge. The authors first analyse the upper bound of the difference between the approximated value and the unknown true value. Theoretical results show that this difference is related to the trajectories generated by the given policy on the learnt model and to the prediction error of the transition and reward functions at these generated data points. Based on these theoretical results, a new criterion is proposed to identify which trained model is better suited to evaluating the given policy. Finally, the effectiveness of the proposed criterion is demonstrated on both benchmark and synthetic offline datasets.
ISSN: 2468-2322
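The two ingredients the abstract describes can be sketched in a few lines: a Monte Carlo estimate of a policy's value under a learned model, and a selection score that accumulates the model's prediction error at the state-action pairs visited when the policy is rolled out on that same model. This is a minimal illustration of the general idea, not the paper's method; all function names (`policy`, `model`, `reward`, `error_fn`) and the use of a generic `error_fn` as a stand-in for any prediction-error estimator are assumptions for the sketch.

```python
import numpy as np

def mc_value(policy, model, reward, s0s, horizon=50, gamma=0.99):
    # Monte Carlo estimate of the discounted return of `policy` under a
    # (learned) transition model and reward function, averaged over a
    # batch of start states. All callables here are illustrative.
    returns = []
    for s in s0s:
        ret, disc = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            ret += disc * reward(s, a)
            s = model(s, a)
            disc *= gamma
        returns.append(ret)
    return float(np.mean(returns))

def selection_score(policy, model, error_fn, s0s, horizon=50):
    # Average the model's estimated one-step prediction error at the
    # state-action pairs visited when rolling the policy out on the model
    # itself -- the quantity the abstract says governs the value gap.
    # `error_fn(s, a)` is a hypothetical stand-in for any estimator of the
    # transition/reward prediction error (e.g. held-out validation error);
    # a lower score suggests the model is better suited to this policy.
    total = 0.0
    for s in s0s:
        for _ in range(horizon):
            a = policy(s)
            total += error_fn(s, a)
            s = model(s, a)
    return total / (len(s0s) * horizon)
```

On a toy linear system, a model with a lower selection score also yields a Monte Carlo value estimate closer to the value under the true dynamics, which is the behaviour the criterion is meant to capture.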