
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

2019-11-08 · ACL 2020 · Code Available

Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Tuo Zhao


Abstract

Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning for pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. Our experiments demonstrate that our proposed method achieves the state-of-the-art performance on multiple NLP benchmarks.
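The two ingredients above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation (the released code uses PyTorch and finds the worst-case perturbation with projected gradient ascent); the toy linear "model", the random-sampling approximation of the inner maximization, and all function names here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two categorical distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def model(x, W):
    """Toy classifier: softmax over a linear map (stand-in for a fine-tuned LM head)."""
    return softmax(x @ W)

def smoothness_regularizer(x, W, epsilon=1e-3, n_samples=16, rng=None):
    """Smoothness-inducing term: approximate
        max_{||delta|| <= epsilon} sym_kl(f(x + delta), f(x))
    by random sampling in the epsilon-ball (the paper uses gradient ascent;
    sampling keeps this sketch dependency-free)."""
    rng = rng or np.random.default_rng(0)
    p = model(x, W)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-1.0, 1.0, size=x.shape)
        delta = epsilon * delta / (np.linalg.norm(delta) + 1e-12)
        worst = max(worst, sym_kl(model(x + delta, W), p))
    return worst

def bregman_proximal_penalty(x, W_current, W_previous):
    """Trust-region term: penalize divergence between the current model's
    predictions and those of the previous iterate, discouraging large updates
    that would forget pre-trained knowledge."""
    return sym_kl(model(x, W_current), model(x, W_previous))
```

During fine-tuning, each training step would minimize `task_loss + lambda_s * smoothness_regularizer(...) + mu * bregman_proximal_penalty(...)`, where `lambda_s` and `mu` are tuning hyperparameters.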

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| AX | T5 | Accuracy | 53.1 | — | Unverified |
| MNLI + SNLI + ANLI + FEVER | SMARTRoBERTa-LARGE | % Dev Accuracy | 57.1 | — | Unverified |
| MultiNLI | SMART-BERT | Dev Matched | 85.6 | — | Unverified |
| MultiNLI | SMART+BERT-BASE | Accuracy | 85.6 | — | Unverified |
| MultiNLI | SMARTRoBERTa | Dev Matched | 91.1 | — | Unverified |
| MultiNLI | T5 | Matched | 92 | — | Unverified |
| MultiNLI | MT-DNN-SMARTv0 | Accuracy | 85.7 | — | Unverified |
| MultiNLI | MT-DNN-SMART | Accuracy | 85.7 | — | Unverified |
| QNLI | ALICE | Accuracy | 99.2 | — | Unverified |
| QNLI | MT-DNN-SMART | Accuracy | 99.2 | — | Unverified |
| RTE | SMART | Accuracy | 71.2 | — | Unverified |
| RTE | T5-XXL 11B | Accuracy | 92.5 | — | Unverified |
| RTE | SMARTRoBERTa | Accuracy | 92 | — | Unverified |
| RTE | SMART-BERT | Accuracy | 71.2 | — | Unverified |
| SciTail | MT-DNN-SMART_1%ofTrainingData | Dev Accuracy | 88.6 | — | Unverified |
| SciTail | MT-DNN-SMART_0.1%ofTrainingData | Dev Accuracy | 82.3 | — | Unverified |
| SciTail | MT-DNN-SMARTLARGEv0 | % Dev Accuracy | 96.6 | — | Unverified |
| SciTail | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 96.1 | — | Unverified |
| SciTail | MT-DNN-SMART_10%ofTrainingData | Dev Accuracy | 91.3 | — | Unverified |
| SNLI | MT-DNN-SMART_1%ofTrainingData | Dev Accuracy | 86 | — | Unverified |
| SNLI | MT-DNN-SMART_0.1%ofTrainingData | Dev Accuracy | 82.7 | — | Unverified |
| SNLI | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | — | Unverified |
| SNLI | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | — | Unverified |
| SNLI | MT-DNN-SMART_10%ofTrainingData | Dev Accuracy | 88.7 | — | Unverified |
