
Coarse-to-fine Q-Network with Action Sequence for Data-Efficient Robot Learning

2024-11-19

Younggyo Seo, Pieter Abbeel


Abstract

Predicting a sequence of actions has been crucial in the success of recent behavior cloning algorithms in robotics. Can similar ideas improve reinforcement learning (RL)? We answer affirmatively by observing that incorporating action sequences when predicting ground-truth return-to-go leads to lower validation loss. Motivated by this, we introduce Coarse-to-fine Q-Network with Action Sequence (CQN-AS), a novel value-based RL algorithm that learns a critic network that outputs Q-values over a sequence of actions, i.e., explicitly training the value function to learn the consequence of executing action sequences. Our experiments show that CQN-AS outperforms several baselines on a variety of sparse-reward humanoid control and tabletop manipulation tasks from BiGym and RLBench.
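To make the core idea concrete, here is a minimal sketch (not the authors' code) of a critic that outputs Q-values over an action sequence: one value per (timestep, action dimension, discrete bin), where a coarse-to-fine scheme would refine those bins over successive levels. All class and parameter names, the network sizes, and the use of a single two-layer MLP are illustrative assumptions.

```python
import numpy as np

class ActionSequenceQNetwork:
    """Hypothetical critic head in the spirit of CQN-AS: given an observation,
    it emits Q-values for every (timestep, action dimension, bin) of an
    action sequence, so the value function is trained on the consequence of
    executing the whole sequence rather than a single action."""

    def __init__(self, obs_dim, action_dim, seq_len, num_bins, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.seq_len = seq_len        # length of the predicted action sequence
        self.action_dim = action_dim  # continuous dimensions per action step
        self.num_bins = num_bins      # discrete bins per dimension (refined
                                      # level-by-level in a coarse-to-fine critic)
        self.w1 = rng.standard_normal((obs_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, seq_len * action_dim * num_bins)) * 0.1

    def q_values(self, obs):
        # Two-layer MLP; the flat output is reshaped into the sequence grid.
        h = np.maximum(obs @ self.w1, 0.0)  # ReLU hidden layer
        q = h @ self.w2
        return q.reshape(-1, self.seq_len, self.action_dim, self.num_bins)

    def greedy_action_sequence(self, obs):
        # For each timestep and dimension, pick the bin with the highest Q-value.
        return self.q_values(obs).argmax(axis=-1)

critic = ActionSequenceQNetwork(obs_dim=32, action_dim=4, seq_len=8, num_bins=5)
obs = np.zeros((2, 32))                          # batch of 2 observations
print(critic.q_values(obs).shape)                # (2, 8, 4, 5)
print(critic.greedy_action_sequence(obs).shape)  # (2, 8, 4)
```

The shape of the output is the point: the critic scores entire action sequences jointly, which is what lets the value function learn the effect of executing a sequence, per the abstract.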
