Dataset Distillation for Offline Reinforcement Learning

2024-07-29

Jonathan Light, Yuanzhe Liu, Ziniu Hu

Abstract

Offline reinforcement learning typically requires a high-quality dataset on which to train a policy. However, in many situations it is not possible to obtain such a dataset, nor is it easy to train a policy that performs well in the actual environment given only the offline data. We propose using dataset distillation to synthesize a better, smaller dataset, which can then be used to train a better policy model. We show that our method produces a synthetic dataset on which a trained model achieves performance comparable to a model trained on the full dataset or one trained with percentile behavioral cloning. Our project site is available at https://datasetdistillation4rl.github.io, and our implementation at https://github.com/ggflow123/DDRL.
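The abstract does not spell out the distillation objective, so the sketch below should not be read as the paper's method. One common formulation in the dataset-distillation literature is gradient matching: optimize a small synthetic dataset so that the training gradient it induces mimics the gradient of the full dataset. Here is a minimal, self-contained toy version for a linear behavioral-cloning policy; all data, dimensions, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "offline" dataset: states X and behavior-policy actions y.
# (Hypothetical stand-in for a real offline RL dataset.)
N, d = 500, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true

# Small synthetic dataset to be distilled (m << N).
m = 10
S = rng.normal(size=(m, d))
ys = rng.normal(size=m)


def bc_grad(states, actions, w):
    """Gradient of the behavioral-cloning MSE loss for a linear policy w."""
    return 2.0 / len(states) * states.T @ (states @ w - actions)


def train_policy(states, actions):
    """'Train' the linear policy: closed-form least-squares fit."""
    return np.linalg.lstsq(states, actions, rcond=None)[0]


mse_before = np.mean((X @ train_policy(S, ys) - y) ** 2)

# Gradient-matching distillation: update (S, ys) so the BC gradient on the
# synthetic data matches the BC gradient on the full dataset, evaluated at
# randomly sampled policy parameters w.
lr = 0.005
for _ in range(4000):
    w = rng.normal(size=d)
    r = S @ w - ys                                # synthetic residuals
    diff = 2.0 / m * S.T @ r - bc_grad(X, y, w)   # gradient mismatch
    # Analytic gradients of ||g_syn - g_real||^2 w.r.t. S and ys.
    gS = 4.0 / m * (np.outer(r, diff) + np.outer(S @ diff, w))
    gys = -4.0 / m * (S @ diff)
    S -= lr * gS
    ys -= lr * gys

mse_after = np.mean((X @ train_policy(S, ys) - y) ** 2)
print(f"BC error on real data: before distillation {mse_before:.3f}, "
      f"after {mse_after:.3f}")
```

A policy fit to the 10 distilled pairs should track the full-dataset behavior far more closely than one fit to the initial random synthetic data, which is the property the abstract claims at scale for real offline RL benchmarks.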
