Semi-supervised Domain Adaptation for Dependency Parsing

2019-07-01 · ACL 2019 · Code Available

Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, Luo Si

Abstract

Over the past decades, due to the lack of sufficient labeled data, most studies on cross-domain parsing have focused on unsupervised domain adaptation, assuming there is no target-domain training data. However, unsupervised approaches have made limited progress so far due to the intrinsic difficulty of both domain adaptation and parsing. This paper tackles the semi-supervised domain adaptation problem for Chinese dependency parsing, based on two newly annotated large-scale domain-aware datasets. We propose a simple domain embedding approach to merge the source- and target-domain training data, which is shown to be more effective than both direct corpus concatenation and multi-task learning. To utilize unlabeled target-domain data, we employ recent contextualized word representations and show that a simple fine-tuning procedure can further boost cross-domain parsing accuracy by a large margin.
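The domain embedding approach can be pictured as follows: every training sentence from the merged corpus carries a domain ID (source or target), and that ID's learned embedding is concatenated to each word representation before the shared encoder, so a single parser is trained on both treebanks while still being told which domain each sentence comes from. Below is a minimal PyTorch sketch of this idea, not the authors' implementation; the class name, dimensions, and the BiLSTM encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DomainAwareEncoder(nn.Module):
    """Sketch: concatenate a sentence-level domain embedding to every
    word embedding before a shared BiLSTM encoder (assumed setup)."""

    def __init__(self, vocab_size, num_domains=2,
                 word_dim=100, domain_dim=8, hidden_dim=400):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # One learned vector per domain, e.g. 0 = source, 1 = target.
        self.domain_emb = nn.Embedding(num_domains, domain_dim)
        self.encoder = nn.LSTM(word_dim + domain_dim, hidden_dim,
                               batch_first=True, bidirectional=True)

    def forward(self, word_ids, domain_ids):
        # word_ids: (batch, seq_len); domain_ids: (batch,)
        words = self.word_emb(word_ids)
        # Broadcast the sentence-level domain vector over all positions.
        domain = self.domain_emb(domain_ids).unsqueeze(1)
        domain = domain.expand(-1, words.size(1), -1)
        return self.encoder(torch.cat([words, domain], dim=-1))[0]

# Usage on a merged batch: the same parameters serve both domains.
enc = DomainAwareEncoder(vocab_size=10000)
batch = torch.randint(0, 10000, (4, 20))
reps = enc(batch, torch.ones(4, dtype=torch.long))  # a target-domain batch
```

Compared with plain corpus concatenation, the only extra machinery is the small domain embedding table, which is what lets the shared parameters specialize per domain.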
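For the unlabeled target-domain data, the "simple fine-tuning procedure" amounts to continuing to train the pretrained contextualized representation model on raw target-domain text before feeding its hidden states to the parser. A hedged sketch of such a loop, assuming a generic next-token language model whose forward pass returns vocabulary logits; `pretrained_lm`, `target_loader`, and the padding convention are placeholders, not the paper's actual setup:

```python
import torch

def finetune_on_target_domain(pretrained_lm, target_loader,
                              epochs=1, lr=1e-4):
    """Continue language-model training on raw target-domain sentences
    (illustrative; the actual procedure fine-tunes a pretrained
    contextualized encoder's LM objective)."""
    optim = torch.optim.Adam(pretrained_lm.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss(ignore_index=0)  # assume pad id 0
    pretrained_lm.train()
    for _ in range(epochs):
        for token_ids in target_loader:                # (batch, seq_len)
            # Standard next-token objective on unlabeled text.
            logits = pretrained_lm(token_ids[:, :-1])  # (b, L-1, vocab)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           token_ids[:, 1:].reshape(-1))
            optim.zero_grad()
            loss.backward()
            optim.step()
    return pretrained_lm
```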
