
LightSeq: A High Performance Inference Library for Transformers

2020-10-23 · NAACL 2021 · Code Available

Xiaohui Wang, Ying Xiong, Yang Wei, Mingxuan Wang, Lei Li


Abstract

Transformer, BERT and their variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving these models is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to streamline the computation of neural layers and to reduce memory footprint. LightSeq can easily import models trained using PyTorch and TensorFlow. Experimental results on machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x compared with FasterTransformer, a concurrent CUDA implementation. The code is available at https://github.com/bytedance/lightseq.
