
An Empirical Exploration of Local Ordering Pre-training for Structured Prediction

2020-11-01 · Findings of the Association for Computational Linguistics · Code Available

Zhisong Zhang, Xiang Kong, Lori Levin, Eduard Hovy


Abstract

Recently, pre-training contextualized encoders with language model (LM) objectives has been shown to be an effective semi-supervised method for structured prediction. In this work, we empirically explore an alternative pre-training method for contextualized encoders. Instead of predicting words as in LMs, we "mask out" and predict word order information, using a local ordering strategy and word-selecting objectives. With evaluations on three typical structured prediction tasks (dependency parsing, POS tagging, and NER) over four languages (English, Finnish, Czech, and Italian), we show that our method is consistently beneficial. We further conduct detailed error analysis, including one that examines a specific type of parsing error where the head is misidentified. The results show that pre-trained contextual encoders can bring improvements in a structured way, suggesting that they may be able to capture higher-order patterns and feature combinations from unlabeled data.
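The abstract does not spell out the pre-training objective in detail. As a rough sketch of what "masking out" word order with a local ordering strategy might look like, assuming the objective shuffles tokens within small non-overlapping windows and trains the encoder to recover each token's original position (the function name, window size, and target construction here are hypothetical, not the paper's exact method):

```python
import random

def local_shuffle(tokens, window=3, seed=0):
    """Shuffle tokens within non-overlapping local windows, 'masking out'
    their original order. Returns the shuffled tokens together with each
    token's gold original position, which a word-selecting objective could
    then be trained to predict. This is an illustrative sketch; the paper's
    exact local ordering strategy and objectives may differ."""
    rng = random.Random(seed)
    shuffled, targets = [], []
    indexed = list(enumerate(tokens))
    for start in range(0, len(tokens), window):
        chunk = indexed[start:start + window]
        rng.shuffle(chunk)  # permute order only inside this local window
        for orig_idx, tok in chunk:
            shuffled.append(tok)
            targets.append(orig_idx)  # gold position to recover
    return shuffled, targets
```

Because shuffling is confined to local windows, every token stays within `window` positions of where it started, so the task probes local ordering patterns rather than long-range reconstruction.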
