
Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems

2022-04-27

Rushang Karia, Siddharth Srivastava


Abstract

Reinforcement learning in problems with symbolic state spaces is challenging due to the need for reasoning over long horizons. This paper presents a new approach that utilizes relational abstractions in conjunction with deep learning to learn a generalizable Q-function for such problems. The learned Q-function can be efficiently transferred to related problems that have different object names and object quantities, and thus, entirely different state spaces. We show that the learned generalized Q-function can be utilized for zero-shot transfer to related problems without an explicit, hand-coded curriculum. Empirical evaluations on a range of problems show that our method facilitates efficient zero-shot transfer of learned knowledge to much larger problem instances containing many objects.
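The abstract's central claim is that a Q-function defined over relational abstractions, rather than ground states, transfers across problems whose states differ only in object names and object counts. The following is a minimal illustrative sketch, not the paper's implementation: it canonicalizes a symbolic state (a set of ground atoms) by replacing object names with positional placeholders, so that structurally identical states share one abstract key and hence one Q-value entry. All function and predicate names here are hypothetical.

```python
# Hedged sketch: why relational abstraction enables name-independent transfer.
# States differing only in object naming map to the same canonical form, so a
# tabular (or learned) Q-function keyed on that form generalizes across them.

def abstract_state(atoms):
    """Map a set of ground atoms, e.g. ("on", "a", "b"), to a canonical
    form in which object names are replaced by placeholders o0, o1, ..."""
    mapping = {}  # object name -> placeholder, assigned in encounter order
    canon = []
    for pred, *args in sorted(atoms):
        new_args = []
        for a in args:
            if a not in mapping:
                mapping[a] = f"o{len(mapping)}"
            new_args.append(mapping[a])
        canon.append((pred, *new_args))
    return tuple(canon)

# Two blocks-world states with different object names but identical structure:
s1 = [("on", "a", "b"), ("clear", "a")]
s2 = [("on", "x", "y"), ("clear", "x")]
assert abstract_state(s1) == abstract_state(s2)
```

In the paper's setting the abstraction feeds a deep network rather than a lookup table, which is what allows the learned Q-function to also scale to problems with more objects than were seen during training.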
