SOTAVerified

∞-former: Infinite Memory Transformer

2021-09-01 · Code Available

Pedro Henrique Martins, Zita Marinho, André F. T. Martins


Abstract

Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the ∞-former, which extends the vanilla transformer with an unbounded long-term memory. By making use of a continuous-space attention mechanism to attend over the long-term memory, the ∞-former's attention complexity becomes independent of the context length, trading off memory length with precision. In order to control where precision is more important, the ∞-former maintains "sticky memories," enabling it to model arbitrarily long contexts while keeping the computation budget fixed. Experiments on a synthetic sorting task, language modeling, and document-grounded dialogue generation demonstrate the ∞-former's ability to retain information from long sequences.
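The core idea in the abstract — compressing the unbounded memory into a continuous signal and attending over it with a density instead of a discrete softmax — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the basis width, ridge penalty, and unimodal Gaussian attention are choices made here for brevity (the paper's continuous attention is more general).

```python
import numpy as np

# Hypothetical sketch of continuous-space attention over a long-term memory.
# The memory of L token vectors is compressed into a continuous signal x(t),
# t in [0, 1], represented by coefficients B over N radial basis functions.
# Attention then uses a density p(t) rather than L discrete weights, so the
# cost depends on N (fixed), not on the context length L.

rng = np.random.default_rng(0)
L, d, N = 512, 16, 32          # context length, hidden size, basis functions

def rbf(t, centers, width=0.05):
    """Evaluate N Gaussian radial basis functions at positions t."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

centers = np.linspace(0, 1, N)
t_grid = np.linspace(0, 1, L)
X = rng.standard_normal((L, d))            # token representations to memorize

# Fit coefficients B (N x d) by ridge regression so that x(t) ~= rbf(t) @ B.
# This is the compression step: L vectors become N coefficient rows.
Phi = rbf(t_grid, centers)
B = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(N), Phi.T @ X)

def continuous_attention(mu, sigma, n_samples=64):
    """A query induces a Gaussian over [0, 1]; return E_p[x(t)],
    approximated on a fixed-size sample grid (independent of L)."""
    ts = np.linspace(0, 1, n_samples)
    p = np.exp(-((ts - mu) ** 2) / (2 * sigma ** 2))
    p /= p.sum()                            # normalize the density on the grid
    values = rbf(ts, centers) @ B           # reconstruct the signal at ts
    return p @ values                       # expected value under p, shape (d,)

c = continuous_attention(mu=0.7, sigma=0.1)
print(c.shape)  # (16,)
```

Note how the attention step touches only the N-row coefficient matrix `B`: doubling the stored context changes the regression fit but not the attention cost, which is the complexity trade-off the abstract describes.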

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
CMU DoG | ∞-former (Sticky memories) | F1 | 9.01 | — | Unverified
PG-19 | ∞-former (Sticky memories + initialized GPT-2 Small) | Perplexity | 32.48 | — | Unverified

Reproductions