
Hundreds Guide Millions: Adaptive Offline Reinforcement Learning With Expert Guidance

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-11, Vol. PP, p. 1-13
Main Authors: Yang, Qisen, Wang, Shenzhi, Zhang, Qihang, Huang, Gao, Song, Shiji
Format: Article
Language:English
Description
Summary: Offline reinforcement learning (RL) optimizes the policy on a previously collected dataset without any interactions with the environment, yet usually suffers from the distributional shift problem. To mitigate this issue, a typical solution is to impose a policy constraint on a policy improvement objective. However, existing methods generally adopt a "one-size-fits-all" practice, i.e., keeping only a single improvement-constraint balance for all the samples in a mini-batch or even the entire offline dataset. In this work, we argue that different samples should be treated with different policy constraint intensities. Based on this idea, a novel plug-in approach named guided offline RL (GORL) is proposed. GORL employs a guiding network, along with only a few expert demonstrations, to adaptively determine the relative importance of the policy improvement and policy constraint for every sample. We theoretically prove that the guidance provided by our method is rational and near-optimal. Extensive experiments on various environments suggest that GORL can be easily installed on most offline RL algorithms with statistically significant performance improvements.
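
The abstract describes GORL's core mechanism only at a high level: a guiding network, fit from a few expert demonstrations, outputs for every sample the relative weight of the policy improvement term versus the policy constraint term. Below is a minimal sketch of that per-sample weighting idea, assuming a PyTorch implementation and a TD3+BC-style base objective (Q-value improvement plus a behavior-cloning constraint); the class GuidingNetwork, the Softplus-bounded weight, and guided_actor_loss are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class GuidingNetwork(nn.Module):
    """Hypothetical guiding network: maps a (state, action) pair to a
    non-negative per-sample weight balancing constraint against improvement."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the weight >= 0
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def guided_actor_loss(actor, critic, guide, state, action):
    """Per-sample trade-off between policy improvement and policy constraint.

    Assumes a TD3+BC-style base algorithm: critic(s, a) returns Q-values of
    shape (batch,) and the constraint is a squared distance to the dataset
    action. GORL is described as a plug-in, so the base losses would come
    from whichever offline RL algorithm it is installed on.
    """
    pi = actor(state)                                  # candidate actions
    improvement = -critic(state, pi).squeeze(-1)       # maximize Q(s, pi(s))
    constraint = ((pi - action) ** 2).sum(dim=-1)      # stay close to data
    lam = guide(state, action)                         # per-sample balance
    return (improvement + lam * constraint).mean()
```

How the guiding network itself is trained from the expert demonstrations is not specified in the abstract, so the loss above should be read as one plausible instantiation of per-sample improvement-constraint balancing rather than the method itself.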
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2023.3293508