
SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion

2020-05-01 · LREC 2020

Jiahao Chen, Chenjie Cao, Xiuyan Jiang


Abstract

Pre-trained models have achieved great success in learning unsupervised language representations through self-supervised tasks on large-scale corpora. Recent studies mainly focus on how to fine-tune different downstream tasks from a general pre-trained model. However, some studies show that self-supervised tasks customized for a particular type of downstream task can effectively help the pre-trained model capture more of the corresponding knowledge and semantic information. Hence a new pre-training task called Sentence Insertion (SI) is proposed in this paper for Chinese query-passage pair NLP tasks, including answer span prediction, retrieval question answering, and sentence-level cloze test. The experimental results indicate that the proposed SI task can significantly improve the performance of Chinese pre-trained models. Moreover, a word segmentation method called SentencePiece is utilized to further enhance Chinese BERT performance on tasks with long texts. The complete source code is available at https://github.com/ewrfcas/SiBert_tensorflow.
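The SI objective described above can be pictured as a self-supervised data-construction step. The following is a minimal sketch, assuming the task asks the model to predict the original position of a sentence removed from a passage; the function name, field names, and input layout are illustrative, not taken from the paper:

```python
import random

def make_si_example(passage_sentences, rng=None):
    """Build one hypothetical Sentence Insertion (SI) training example.

    A sentence is removed from the passage, and the label is the index
    where it should be re-inserted. (Assumed formulation; the paper's
    exact sampling scheme and input encoding may differ.)
    """
    rng = rng or random.Random(0)
    idx = rng.randrange(len(passage_sentences))
    removed = passage_sentences[idx]
    # Remaining passage with the chosen sentence taken out.
    context = passage_sentences[:idx] + passage_sentences[idx + 1:]
    # A model input might look like: [CLS] removed [SEP] context [SEP],
    # with the insertion index as the classification target.
    return {"query": removed, "context": context, "label": idx}

passage = ["Sentence A.", "Sentence B.", "Sentence C.", "Sentence D."]
example = make_si_example(passage)
```

Re-inserting `example["query"]` at position `example["label"]` in `example["context"]` reconstructs the original passage, which is what makes the task self-supervised: labels come for free from the text itself.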
