
Dependency Parsing with LSTMs: An Empirical Evaluation

2016-04-22

Adhiguna Kuncoro, Yuichiro Sawai, Kevin Duh, Yuji Matsumoto

Abstract

We propose a transition-based dependency parser using Recurrent Neural Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward neural network parser of Chen and Manning (2014) and enables modelling of entire sequences of shift/reduce transition decisions. On the Google Web Treebank, our LSTM parser is competitive with the best feedforward parser on overall accuracy and notably achieves more than 3% improvement for long-range dependencies, which have proved difficult for previous transition-based parsers due to error propagation and limited context information. Our findings additionally suggest that dropout regularisation on the embedding layer is crucial to improve the LSTM's generalisation.
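The shift/reduce transitions the abstract refers to come from the arc-standard transition system used by parsers in this family. A minimal sketch of that system, with illustrative names not taken from the paper's implementation, might look like this (the parser's job, whether feedforward or LSTM, is to predict each transition in the sequence):

```python
# Minimal sketch of the arc-standard shift/reduce transition system
# underlying transition-based parsers such as Chen and Manning (2014).
# Function and transition names here are illustrative assumptions,
# not taken from the paper's code.

def parse(words, transitions):
    """Apply a sequence of transitions to a sentence.

    words       : list of token indices (0 .. n-1)
    transitions : iterable of "SHIFT", "LEFT", or "RIGHT"
    Returns the set of (head, dependent) arcs built.
    """
    stack, buffer, arcs = [], list(words), set()
    for t in transitions:
        if t == "SHIFT":
            # move the next buffer token onto the stack
            stack.append(buffer.pop(0))
        elif t == "LEFT":
            # second-top of stack becomes a dependent of the top
            dep = stack.pop(-2)
            arcs.add((stack[-1], dep))
        elif t == "RIGHT":
            # top of stack becomes a dependent of the second-top
            dep = stack.pop()
            arcs.add((stack[-1], dep))
    return arcs

# "She eats fish": token 1 ("eats") heads tokens 0 and 2.
arcs = parse([0, 1, 2], ["SHIFT", "SHIFT", "LEFT", "SHIFT", "RIGHT"])
# arcs == {(1, 0), (1, 2)}
```

A feedforward parser classifies each transition from a fixed window of stack/buffer features, whereas the LSTM parser described here conditions each decision on the full history of previous transitions, which is what helps with long-range dependencies.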
