Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data

2018-10-01 · EMNLP 2018

Wenhui Wang, Baobao Chang, Mairgup Mansur

Abstract

Pre-trained word embeddings and language models have been shown to be useful in many tasks. However, neither can directly capture word connections within a sentence, which are important for dependency parsing, whose goal is to establish dependency relations between words. In this paper, we propose to implicitly capture word connections from unlabeled data with a word ordering model that uses a self-attention mechanism. Experiments show that these implicit word connections do improve our parsing model. Furthermore, by combining with a pre-trained language model, our model achieves state-of-the-art performance on the English PTB dataset: 96.35% UAS and 95.25% LAS.
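The abstract's core idea is that self-attention weights act as soft, pairwise word connections. A minimal NumPy sketch of scaled dot-product self-attention illustrates this; the projection matrices `Wq`, `Wk`, `Wv` and all dimensions here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sentence.

    X: (n_words, d) word representations.
    Returns the attended output and the (n_words, n_words) attention
    matrix, whose row i can be read as soft connections from word i
    to every other word.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise compatibility scores
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 5, 8                                   # toy sentence length and dimension
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

In the paper's setting, such a mechanism would be trained inside a word ordering model on unlabeled text, so the learned attention matrix reflects word-to-word associations rather than hand-labeled dependencies.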