SOTAVerified

How to Fine-Tune BERT for Text Classification?

2019-05-14

Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang

Code Available

Abstract

Language model pre-training has proven useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification tasks and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
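The fine-tuning recipe the abstract describes — take a pre-trained encoder, put a classification head on the first ([CLS]) token, and train the whole model end-to-end with a small learning rate — can be sketched as follows. Note the encoder here is a tiny, randomly initialized Transformer standing in for a real BERT checkpoint; all sizes, names, and the synthetic data are illustrative assumptions, not the paper's actual setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dimensions (assumptions for illustration; real BERT_base uses d_model=768, 12 layers)
VOCAB, D_MODEL, N_CLASSES, SEQ_LEN = 100, 32, 2, 16

class ToyBertClassifier(nn.Module):
    """Tiny stand-in for a pre-trained BERT encoder plus a task-specific head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            D_MODEL, nhead=4, dim_feedforward=64, dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, N_CLASSES)  # classification head added for the task

    def forward(self, ids):
        h = self.encoder(self.embed(ids))   # (batch, seq, d_model)
        return self.head(h[:, 0])           # first token plays the role of [CLS]

model = ToyBertClassifier()
# BERT fine-tuning uses a small learning rate (the paper sweeps values around 2e-5);
# the larger 1e-3 here is only so this toy model converges in a few steps.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "labelled corpus": labels are a deterministic function of the input,
# so the model can actually learn the mapping.
ids = torch.randint(0, VOCAB, (64, SEQ_LEN))
labels = ids[:, 0] % N_CLASSES

losses = []
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(ids), labels)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

With a real checkpoint the same loop applies unchanged; only the encoder (a loaded BERT) and the tokenized inputs differ.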

Tasks

Text Classification

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| IMDb | BERT_large+ITPT | Accuracy | 95.79 | | Unverified |
| IMDb | BERT_base+ITPT | Accuracy | 95.63 | | Unverified |
| Yelp Binary classification | BERT_large+ITPT | Error | 1.81 | | Unverified |
| Yelp Binary classification | BERT_base+ITPT | Error | 1.92 | | Unverified |
| Yelp Fine-grained classification | BERT_large+ITPT | Error | 28.62 | | Unverified |
| Yelp Fine-grained classification | BERT_base+ITPT | Error | 29.42 | | Unverified |

Reproductions