Information-Theoretic Generalization Bounds for Batch Reinforcement Learning


Bibliographic Details
Main Author: Xingtu Liu
Format: Article
Language: English
Published: MDPI AG, 2024-11-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/26/11/995
Description
Summary: We analyze the generalization properties of batch reinforcement learning (batch RL) with value function approximation from an information-theoretic perspective. We derive generalization bounds for batch RL using (conditional) mutual information. In addition, we demonstrate how to establish a connection between certain structural assumptions on the value function space and conditional mutual information. As a by-product, we derive a high-probability generalization bound via conditional mutual information, which was previously left open and may be of independent interest.
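For context on the kind of bound the abstract describes (this is the classical background result of the information-theoretic framework, not the paper's own batch-RL statement), the standard input–output mutual-information bound for a loss that is \(\sigma\)-sub-Gaussian takes the form:

```latex
% Background: the classical mutual-information generalization bound
% (Xu-Raginsky style). W is the learned hypothesis (here, a value
% function), S is the training dataset of n samples, and I(W;S) is
% the mutual information between them.
\left| \mathbb{E}\!\left[ \operatorname{gen}(W, S) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}}
```

Conditional-mutual-information (CMI) variants of this bound replace \(I(W;S)\) with a conditional quantity such as \(I(W; S \mid Z)\) for an auxiliary "supersample" \(Z\), which is always finite; the paper's contribution, per the abstract, is to derive bounds of this flavor for batch RL and to obtain a high-probability CMI bound.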
ISSN: 1099-4300