Attention Temperature Matters in Abstractive Summarization Distillation

2021-06-07 · ACL 2022

Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei


Abstract

Recent progress in abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference with minimal performance loss. Pseudo-labeling based methods are popular in sequence-to-sequence model distillation. In this paper, we find that simply manipulating attention temperatures in Transformers can make pseudo labels easier for student models to learn. Our experiments on three summarization datasets show that our proposed method consistently improves over vanilla pseudo-labeling based methods. We also find that both the pseudo labels and the summaries produced by our students are shorter and more abstractive. Our code is available at https://github.com/Shengqiang-Zhang/plate.
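The abstract's key idea is rescaling the attention temperature when the teacher generates pseudo labels. The sketch below illustrates the general mechanism of temperature-scaled dot-product attention; it is only an assumption-laden illustration, not the paper's exact implementation, and the function name, tensor shapes, and the placement of the temperature factor are hypothetical.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, temperature=1.0):
    """Dot-product attention with an adjustable temperature.

    A temperature > 1 flattens the attention distribution (softer
    attention), while a temperature < 1 sharpens it. How exactly the
    paper applies the temperature during pseudo-label generation may
    differ; this is an illustrative sketch of the mechanism only.
    """
    d_k = q.size(-1)
    # Standard sqrt(d_k) scaling, with an extra temperature divisor.
    scores = q @ k.transpose(-2, -1) / (temperature * d_k ** 0.5)
    weights = F.softmax(scores, dim=-1)
    return weights @ v

# Toy usage: batch of 1, 4 positions, head dimension 8.
q = torch.randn(1, 4, 8)
k = torch.randn(1, 4, 8)
v = torch.randn(1, 4, 8)
out_sharp = scaled_dot_product_attention(q, k, v, temperature=0.5)
out_soft = scaled_dot_product_attention(q, k, v, temperature=2.0)

In a distillation setting, one would generate the teacher's pseudo summaries with a modified temperature and train the student on those outputs; the specific temperature values and where they are applied are described in the paper and repository, not here.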
