Pre-training Pre-trained Models with Auxiliary Labels and Fine-tuning for Text Classification

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

With the development of pre-trained models, the performance of text classification has continuously improved. However, we argue that directly employing features generated by pre-trained models for text classification may fail to fully capture discriminative features. For example, in sentiment classification, both the phrases "very good" and "I would still choose" indicate positive sentiment, yet most approaches based on pre-trained models attend mostly to "very good" and ignore the latter when both appear in the same sentence. To fully capture discriminative features, in this paper we incorporate auxiliary labels to exploit the knowledge in the pre-trained model. Specifically, we further pre-train a pre-trained model with auxiliary labels to effectively extract a discriminative textual semantic representation, and then fine-tune the classifier. Moreover, multiple pre-trained models are combined to further enrich the textual semantic representation. Experiments on seven classification tasks show that the proposed approach outperforms several baselines.
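The abstract does not specify how the auxiliary labels enter training. One common way to realize such a setup, sketched below as an assumption rather than the authors' actual method, is a shared encoder with two heads: a main classification head and an auxiliary-label head, trained jointly with a weighted sum of the two cross-entropy losses. All names, dimensions, and the weighting factor `alpha` here are illustrative, and a toy embedding stands in for the pre-trained encoder.

```python
import torch
import torch.nn as nn

class AuxLabelClassifier(nn.Module):
    """Sketch: shared text encoder with a main head and an auxiliary-label head."""
    def __init__(self, vocab_size=1000, hidden=64, num_classes=2, num_aux=5):
        super().__init__()
        # Toy encoder; in the paper's setting this would be a pre-trained model.
        self.encoder = nn.EmbeddingBag(vocab_size, hidden)
        self.main_head = nn.Linear(hidden, num_classes)  # task labels
        self.aux_head = nn.Linear(hidden, num_aux)       # auxiliary labels

    def forward(self, token_ids):
        h = self.encoder(token_ids)  # shared textual semantic representation
        return self.main_head(h), self.aux_head(h)

def joint_loss(main_logits, aux_logits, y_main, y_aux, alpha=0.5):
    # Weighted sum of the main and auxiliary classification losses.
    ce = nn.functional.cross_entropy
    return ce(main_logits, y_main) + alpha * ce(aux_logits, y_aux)

# Toy batch: 4 "sentences" of 8 token ids each.
x = torch.randint(0, 1000, (4, 8))
model = AuxLabelClassifier()
main_logits, aux_logits = model(x)
loss = joint_loss(main_logits, aux_logits,
                  torch.tensor([0, 1, 0, 1]), torch.tensor([2, 0, 4, 1]))
loss.backward()  # gradients flow through both heads into the shared encoder
```

After this joint pre-training stage, the auxiliary head could be discarded and the main classifier fine-tuned on the target task alone, matching the two-stage recipe the abstract outlines.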
