
Model Embedding Model-Based Reinforcement Learning

2020-06-16

Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang

Abstract

Model-based reinforcement learning (MBRL) has shown its advantages in sample efficiency over model-free reinforcement learning (MFRL). Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias. In this paper, we propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of probabilistic reinforcement learning. To balance sample efficiency against model bias, we exploit both real and imaginary data in training. In particular, we embed the model in the policy update and learn the Q and V functions from the real data set. We provide a theoretical analysis of MEMB under a Lipschitz continuity assumption on the model and policy. Finally, we evaluate MEMB on several benchmarks and demonstrate that our algorithm can achieve state-of-the-art performance.
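The data flow the abstract describes — fit a dynamics model on real transitions, learn the value function from real data only, and let policy-gradient signal flow through the learned model — can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's actual architecture: a 1-D environment, a linear dynamics model, quadratic value features, and a deterministic policy a = -k·s.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment (assumed for illustration): s' = s + a + noise,
# reward r = -s^2, so the optimal behavior drives the state to zero.
def env_step(s, a):
    return s + a + 0.01 * rng.standard_normal(), -s**2

# 1) Collect REAL transitions with a random exploration policy.
real, s = [], 1.0
for _ in range(200):
    a = rng.uniform(-0.5, 0.5)
    s2, r = env_step(s, a)
    real.append((s, a, r, s2))
    s = s2 if abs(s2) < 5 else 1.0   # reset if the walk drifts too far

# 2) Fit a simple dynamics model s' ~ w0*s + w1*a by least squares on real data.
SA = np.array([[t[0], t[1]] for t in real])
S2 = np.array([t[3] for t in real])
w, *_ = np.linalg.lstsq(SA, S2, rcond=None)

def model(s, a):
    return w[0] * s + w[1] * a

# 3) Learn V from REAL data only (fitted TD regression on features [1, s^2]).
gamma = 0.9
feat = lambda s: np.array([1.0, s**2])
theta = np.zeros(2)                   # V(s) = theta0 + theta1 * s^2
X = np.array([feat(t[0]) for t in real])
for _ in range(50):
    y = np.array([t[2] + gamma * feat(t[3]) @ theta for t in real])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# 4) Model-embedded policy update: for the policy a = -k*s, ascend
#    J(k) = r(s, a) + gamma * V(model(s, a)); the gradient flows THROUGH
#    the learned model (d s'/d a = w1), so no extra environment samples
#    are spent on the policy step.
k, lr = 0.0, 0.01
for _ in range(200):
    g = 0.0
    for (s, _, _, _) in real[:50]:
        a = -k * s
        s_next = model(s, a)
        # dJ/dk = gamma * dV/ds' * ds'/da * da/dk
        g += gamma * (2 * theta[1] * s_next) * w[1] * (-s)
    k += lr * g / 50                  # gradient ascent on the embedded objective
```

Under these assumptions the model recovers w ≈ (1, 1), the value fit gives theta1 < 0, and the policy gain k approaches 1 (i.e., a ≈ -s, which steers the state to zero in one model step). In the full MEMB algorithm the model, policy, and Q/V functions would all be neural networks and the imaginary rollouts would augment training; this sketch only shows the direction of the gradient flow.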
