Compressing Pre-trained Language Models by Matrix Decomposition

2020-12-01 · Asian Chapter of the Association for Computational Linguistics

Matan Ben Noach, Yoav Goldberg

Abstract

Large pre-trained language models reach state-of-the-art results on many different NLP tasks when fine-tuned individually; they also come with significant memory and computational requirements, calling for methods to reduce model sizes (green AI). We propose a two-stage model-compression method to reduce a model's inference-time cost. We first decompose the matrices in the model into smaller matrices and then perform feature distillation on the internal representation to recover from the decomposition. This approach has the benefit of reducing the number of parameters while preserving much of the information within the model. We experimented on the BERT-base model with the GLUE benchmark dataset and show that we can reduce the number of parameters by a factor of 0.4x and increase inference speed by a factor of 1.45x, while incurring only a minimal loss in metric performance.
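
The two stages described in the abstract can be sketched in PyTorch. This is a minimal illustration under assumptions, not the authors' released code: it assumes the decomposition is a truncated SVD of each weight matrix (splitting one d_out x d_in layer into d_in -> rank -> d_out), and that feature distillation is a layer-wise MSE between the compressed model's internal representations and the original model's. `decompose_linear`, `feature_distillation_loss`, and `rank` are illustrative names, not identifiers from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def decompose_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Stage one (assumed): replace one (d_out x d_in) linear layer with two
    smaller layers (d_in -> rank -> d_out), initialized from a truncated SVD
    of its weight. Parameters drop from d_out*d_in to rank*(d_in + d_out)."""
    W = layer.weight.data                      # (d_out, d_in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = torch.diag(S[:rank].sqrt())       # split singular values evenly
    A = sqrt_S @ Vh[:rank, :]                  # (rank, d_in)
    B = U[:, :rank] @ sqrt_S                   # (d_out, rank)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(A)
    second.weight.data.copy_(B)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)        # second(first(x)) ~ layer(x)

def feature_distillation_loss(student_hiddens, teacher_hiddens):
    """Stage two (assumed MSE objective): fit the compressed model's internal
    representations to the original model's, layer by layer."""
    return sum(F.mse_loss(s, t) for s, t in zip(student_hiddens, teacher_hiddens))
```

With `rank` chosen well below min(d_in, d_out), the two factored layers hold fewer parameters than the original matrix, and the distillation stage then recovers accuracy lost to the truncation.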
