
Transformer-QL: A Step Towards Making Transformer Network Quadratically Large

2021-01-01

Suvadeep Hajra


Abstract

Transformer networks have shown outstanding performance on many natural language processing tasks. However, the context length (the number of previous tokens on which the output states depend) of a Transformer network grows at best linearly with the memory and computational power used. This limitation prevents a Transformer network from having a very long context in a resource-limited application. In this work, we propose a class of Transformer networks, namely Transformer-QL (Quadratically Large), in which the context length can grow at best quadratically with the memory and computational power used. We have empirically evaluated a Transformer-QL model on three long-range language modeling datasets. The results show that Transformer-QL can provide significant improvements over other state-of-the-art networks.
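The abstract's central claim is a scaling relationship: for a fixed memory/compute budget m, a standard Transformer's context length grows at best linearly in m, while Transformer-QL's can grow quadratically. A minimal sketch of what that difference means numerically (this illustrates only the asymptotic claim, not the paper's actual mechanism; the function names and the constant factor c are hypothetical):

```python
# Schematic illustration of the abstract's scaling claim, NOT the
# Transformer-QL architecture itself. With budget m and an arbitrary
# constant factor c (hypothetical), context length scales as:

def standard_context(m, c=1):
    """Standard Transformer: context length is at best O(m) in the budget."""
    return c * m

def ql_context(m, c=1):
    """Transformer-QL (as claimed): context length can reach O(m^2)."""
    return c * m * m

# The gap widens with the budget: a 10x larger budget gives a 10x longer
# context for a standard Transformer, but a 100x longer one under the
# quadratic claim.
for m in (10, 100, 1000):
    print(m, standard_context(m), ql_context(m))
```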
