
MVP-BERT: Redesigning Vocabularies for Chinese BERT and Multi-Vocab Pretraining

2020-11-17

Wei Zhu


Abstract

Although the development of pre-trained language models (PLMs) has significantly raised the performance of various Chinese natural language processing (NLP) tasks, the vocabulary of these Chinese PLMs remains the one provided by Google's Chinese BERT (devlin2018bert), which is based on Chinese characters. In addition, masked language model pre-training relies on a single vocabulary, which limits downstream task performance. In this work, we first propose a novel method, seg_tok, to form the vocabulary of Chinese BERT with the help of Chinese word segmentation (CWS) and subword tokenization. We then propose three versions of multi-vocabulary pretraining (MVP) to improve the model's expressiveness. Experiments show that: (a) compared with a character-based vocabulary, seg_tok not only improves the performance of Chinese PLMs on sentence-level tasks but also improves efficiency; (b) MVP improves PLMs' downstream performance; in particular, it improves seg_tok's performance on sequence labeling tasks.
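The abstract describes seg_tok as forming a vocabulary from word-segmented Chinese text plus subword tokenization. A minimal sketch of that idea, under stated assumptions: the corpus arrives already word-segmented (a CWS tool such as jieba could produce this), frequent words are kept whole, and rare words fall back to WordPiece-style `##`-prefixed subwords. The function names, frequency threshold, and greedy longest-match tokenizer below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of the seg_tok idea: Chinese word segmentation (CWS) output
# feeds a subword vocabulary builder. Illustrative only, not the paper's code.
from collections import Counter

def build_vocab(segmented_corpus, min_freq=2):
    """Keep frequent whole words; split rare words into character-level
    subwords with a WordPiece-style '##' continuation marker (an assumption)."""
    counts = Counter(w for sent in segmented_corpus for w in sent)
    vocab = set()
    for word, freq in counts.items():
        if freq >= min_freq:
            vocab.add(word)                      # frequent word kept intact
        else:
            vocab.add(word[0])                   # rare word -> subwords
            vocab.update("##" + ch for ch in word[1:])
    return vocab

def tokenize(word, vocab):
    """Greedy longest-match-first (WordPiece-style) tokenization."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                tokens.append(piece)
                break
            end -= 1
        if end == start:                         # no match: unknown character
            tokens.append("[UNK]")
            end = start + 1
        start = end
    return tokens

# Toy corpus, already word-segmented.
corpus = [["自然", "语言", "处理"], ["自然", "语言", "模型"], ["处理"]]
vocab = build_vocab(corpus)
```

With this toy corpus, a frequent word like 自然 stays a single token, while the rare word 模型 is split into 模 and ##型, which is the general trade-off a word-plus-subword vocabulary makes against a purely character-based one.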
