Quantization-Free Autoregressive Action Transformer

2025-03-18 · Code Available

Ziyad Sheebaelhamd, Michael Tschannen, Michael Muehlebach, Claire Vernade


Abstract

Current transformer-based imitation learning approaches introduce discrete action representations and train an autoregressive transformer decoder on the resulting latent code. However, the initial quantization breaks the continuous structure of the action space, thereby limiting the capabilities of the generative model. We instead propose a quantization-free method that leverages Generative Infinite-Vocabulary Transformers (GIVT) as a direct, continuous policy parametrization for autoregressive transformers. This simplifies the imitation learning pipeline while achieving state-of-the-art performance on a variety of popular simulated robotics tasks. We further improve our policy roll-outs by carefully studying sampling algorithms.
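The core idea of a quantization-free head, replacing a softmax over a discrete codebook with a continuous output distribution, can be sketched as a Gaussian-mixture action head. This is a minimal, hypothetical illustration, not the paper's actual GIVT parametrization or training objective; all function and weight names below are assumptions:

```python
import numpy as np

def gmm_head(h, W_pi, W_mu, W_sigma):
    """Map a transformer hidden state to mixture-of-Gaussians parameters
    over a continuous action (no codebook, no quantization)."""
    K = W_pi.shape[1]                          # number of mixture components
    D = W_mu.shape[1] // K                     # action dimensionality
    logits = h @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                             # mixture weights (sum to 1)
    mu = (h @ W_mu).reshape(K, D)              # component means
    sigma = np.exp(h @ W_sigma).reshape(K, D)  # component scales (positive)
    return pi, mu, sigma

def sample_action(pi, mu, sigma, rng):
    """Sample one continuous action from the predicted mixture."""
    k = rng.choice(len(pi), p=pi)
    return mu[k] + sigma[k] * rng.standard_normal(mu.shape[1])

# Toy usage: one hidden state, K=3 components, D=2 action dims.
rng = np.random.default_rng(0)
h = rng.standard_normal(8)
W_pi, W_mu, W_sigma = (rng.standard_normal((8, n)) for n in (3, 6, 6))
pi, mu, sigma = gmm_head(h, W_pi, W_mu, W_sigma)
action = sample_action(pi, mu, sigma, rng)
```

Because the head outputs distribution parameters rather than logits over a finite vocabulary, the action space stays continuous end to end, and sampling strategies (e.g. tempering the mixture weights or scales) can be tuned at roll-out time.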
