
TopicBERT for Energy Efficient Document Classification

2020-10-15 · Findings of the Association for Computational Linguistics

Yatin Chaudhary, Pankaj Gupta, Khushbu Saxena, Vivek Kulkarni, Thomas Runkler, Hinrich Schütze


Abstract

Prior research notes that BERT's computational cost grows quadratically with sequence length, leading to longer training times, higher GPU memory requirements, and greater carbon emissions. While recent work addresses these scalability issues during pre-training, they are also prominent in fine-tuning, especially for long-sequence tasks like document classification. Our work therefore focuses on optimizing the computational cost of fine-tuning for document classification. We achieve this by complementary learning of both topic and language models in a unified framework, named TopicBERT, which significantly reduces the number of self-attention operations, a main performance bottleneck. Consequently, our model achieves a 1.4x (40%) speedup with a 40% reduction in CO₂ emissions while retaining 99.9% of performance across 5 datasets.
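The abstract's recipe can be made concrete with a rough sketch. The code below is an illustrative assumption of the general idea, not the authors' released implementation: a long document is split into short fragments so self-attention runs over L-token windows (cost F·L² instead of (F·L)² for the full document), and the pooled fragment representation is concatenated with topic proportions from a small neural topic model before classification. All names, dimensions, and the dummy encoder are hypothetical.

```python
import torch
import torch.nn as nn

VOCAB = 30522    # BERT-base WordPiece vocabulary size
HIDDEN = 768     # BERT-base hidden size
NUM_TOPICS = 50  # illustrative topic count (assumption)
NUM_CLASSES = 5  # illustrative label count (assumption)

class TopicBertSketch(nn.Module):
    """Hypothetical TopicBERT-style classifier: fragment-level BERT
    encoding fused with document-level topic proportions."""

    def __init__(self, encoder):
        super().__init__()
        # Any encoder mapping (F, L) token ids to (F, HIDDEN) fragment
        # vectors; in the paper's setting this would be a pretrained BERT.
        self.encoder = encoder
        # Stand-in for a neural topic model: bag-of-words -> topic proportions.
        self.topic_net = nn.Sequential(
            nn.Linear(VOCAB, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_TOPICS),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(HIDDEN + NUM_TOPICS, NUM_CLASSES)

    def forward(self, fragments, bow):
        # fragments: (F, L). Splitting a document of F * L tokens into F
        # fragments cuts self-attention cost from (F * L) ** 2 to F * L ** 2,
        # which is where the speedup claimed in the abstract comes from.
        frag_vecs = self.encoder(fragments)    # (F, HIDDEN)
        doc_vec = frag_vecs.mean(dim=0)        # pool fragments into one vector
        topic_vec = self.topic_net(bow)        # (NUM_TOPICS,) topic proportions
        return self.classifier(torch.cat([doc_vec, topic_vec], dim=-1))

class DummyEncoder(nn.Module):
    """Mean-pooled embedding stub standing in for BERT (illustration only)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)

    def forward(self, ids):                    # (F, L) -> (F, HIDDEN)
        return self.emb(ids).mean(dim=1)

model = TopicBertSketch(DummyEncoder())
doc = torch.randint(0, VOCAB, (4, 128))  # one document as 4 fragments of 128 tokens
bow = torch.rand(VOCAB)                  # bag-of-words vector for the full document
logits = model(doc, bow)                 # (NUM_CLASSES,) class scores
```

Joint training would additionally optimize the topic model's own reconstruction objective alongside the classification loss; the sketch shows only the fused forward pass.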
