Learning State Abstractions for Transfer in Continuous Control
Kavosh Asadi, David Abel, Michael L. Littman
Code:
- github.com/anonicml2019/icml_2019_state_abstraction (official, in paper)
- github.com/david-abel/continuous_state_sa
Abstract
Can simple algorithms with a good representation solve challenging reinforcement learning problems? In this work, we answer this question in the affirmative, taking the "simple learning algorithm" to be tabular Q-learning, the "good representation" to be a learned state abstraction, and the "challenging problems" to be continuous control tasks. Our main contribution is a learning algorithm that abstracts a continuous state space into a discrete one. We transfer this learned representation to unseen problems to enable effective learning. We provide theory showing that learned abstractions maintain a bounded value loss, and we report experiments showing that the abstractions empower tabular Q-learning to learn efficiently in unseen tasks.
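The overall recipe the abstract describes — map each continuous state through an abstraction function into a discrete state, then run ordinary tabular Q-learning over the abstract states — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: here the abstraction `phi` is a hypothetical uniform discretization standing in for the *learned* abstraction, and `ToyChain` is an invented toy continuous-state environment.

```python
import random
from collections import defaultdict

def phi(state, n_bins=10):
    # Hypothetical abstraction: uniform binning of a state in [0, 1].
    # In the paper this mapping is learned, not hand-coded.
    return min(int(state * n_bins), n_bins - 1)

class ToyChain:
    """Toy continuous chain: move left/right in [0, 1]; reward 1 at the right end."""
    def reset(self):
        self.x, self.t = 0.0, 0
        return self.x

    def step(self, action):  # action 0 = left, 1 = right
        self.t += 1
        self.x = min(max(self.x + (0.25 if action == 1 else -0.25), 0.0), 1.0)
        done = self.x >= 1.0 or self.t >= 50
        reward = 1.0 if self.x >= 1.0 else 0.0
        return self.x, reward, done

def q_learning(env, n_actions, episodes=300, alpha=0.1, gamma=0.99, epsilon=0.3):
    """Tabular Q-learning run entirely over abstract (discrete) states."""
    Q = defaultdict(lambda: [0.0] * n_actions)

    def greedy(q_s):  # break ties randomly so early learning still explores
        m = max(q_s)
        return random.choice([i for i, q in enumerate(q_s) if q == m])

    for _ in range(episodes):
        s = phi(env.reset())
        done = False
        while not done:
            a = random.randrange(n_actions) if random.random() < epsilon else greedy(Q[s])
            x_next, r, done = env.step(a)
            s_next = phi(x_next)
            target = r + (0.0 if done else gamma * max(Q[s_next]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s_next
    return Q

Q = q_learning(ToyChain(), n_actions=2)
```

After training, the greedy policy at the start state should prefer moving right, since only the right end yields reward. Because the Q-table is indexed by `phi(state)`, the same table (and the same abstraction) could in principle be reused on a related task, which is the transfer setting the paper studies.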