
Language Models are Few-Shot Butlers

2021-04-16 · EMNLP 2021 · Code Available

Vincent Micheli, François Fleuret


Abstract

Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting expert demonstrations in such environments is a time-consuming endeavour. We introduce a two-stage procedure to learn from a small set of demonstrations and further improve by interacting with an environment. We show that language models fine-tuned with only 1.2% of the expert demonstrations and a simple reinforcement learning algorithm achieve a 51% absolute improvement in success rate over existing methods in the ALFWorld environment.
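The two-stage procedure described above — first fine-tune on a small set of expert demonstrations, then improve by interacting with the environment — can be sketched in miniature. The paper fine-tunes a pretrained language model in ALFWorld; everything below (the one-step toy environment, the tabular softmax policy, the action names) is a hypothetical stand-in for illustration, not the authors' implementation.

```python
import random
import math

ACTIONS = ["go to desk", "take key", "open door"]

class ToyEnv:
    """One-step stand-in environment: reward 1 if the correct action is taken."""
    def __init__(self, correct_action):
        self.correct = correct_action
    def step(self, action):
        return 1.0 if action == self.correct else 0.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

class Policy:
    """Tabular softmax policy over actions (stand-in for a language model)."""
    def __init__(self, n_actions):
        self.logits = [0.0] * n_actions
    def probs(self):
        return softmax(self.logits)
    def sample(self, rng):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(self.probs()):
            acc += p
            if r <= acc:
                return i
        return len(self.logits) - 1

def behavioural_cloning(policy, demos, lr=0.5, epochs=20):
    """Stage 1: raise the log-likelihood of the demonstrated actions."""
    for _ in range(epochs):
        for a in demos:
            p = policy.probs()
            for i in range(len(policy.logits)):
                # gradient of log pi(a) w.r.t. logit_i is 1[i == a] - p_i
                grad = (1.0 if i == a else 0.0) - p[i]
                policy.logits[i] += lr * grad

def reinforce(policy, env, episodes=200, lr=0.5, seed=0):
    """Stage 2: a simple REINFORCE loop driven by environment reward."""
    rng = random.Random(seed)
    for _ in range(episodes):
        a = policy.sample(rng)
        reward = env.step(ACTIONS[a])
        p = policy.probs()
        for i in range(len(policy.logits)):
            grad = (1.0 if i == a else 0.0) - p[i]
            policy.logits[i] += lr * reward * grad

policy = Policy(len(ACTIONS))
demos = [1, 1, 1]                 # a handful of expert demonstrations: "take key"
behavioural_cloning(policy, demos)
reinforce(policy, ToyEnv("take key"))
print(round(policy.probs()[1], 2))
```

The split mirrors the paper's motivation: the supervised stage needs only a few demonstrations to bias the policy toward sensible actions, and the reinforcement stage then sharpens it using reward signal alone, without further expert data.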
