
A Simple and Effective Dependency Parser for Telugu

2020-07-01 · ACL 2020

Sneha Nallani, Manish Shrivastava, Dipti Sharma

Abstract

We present a simple and effective dependency parser for Telugu, a morphologically rich, free word order language. We propose to replace the rich linguistic feature templates used in previous approaches with a minimal feature function built from contextual vector representations. We train a BERT model on Telugu Wikipedia data and use vector representations from this model to train the parser. Each sentence token is associated with a vector representing that token in the context of its sentence, and the feature vectors are constructed by concatenating two token representations from the stack and one from the buffer. We pass the feature representations through a feedforward network and train with a greedy transition-based approach. The resulting parser has a very simple architecture with minimal feature engineering and achieves state-of-the-art results for Telugu.
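The minimal feature function described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the vector size, hidden-layer width, transition set, and all function names are assumptions; a real implementation would use 768-dimensional BERT vectors and labeled transitions.

```python
import numpy as np

# Hypothetical sketch of the paper's minimal feature function: the parser
# state contributes only three contextual token vectors -- the top two
# stack tokens and the front buffer token -- concatenated and scored by
# a small feedforward network. All dimensions here are illustrative.

DIM = 8            # contextual vector size (768 for actual BERT)
N_TRANSITIONS = 3  # e.g. SHIFT, LEFT-ARC, RIGHT-ARC (unlabeled sketch)

rng = np.random.default_rng(0)

def feature_vector(stack, buffer, embeddings):
    """Concatenate contextual vectors for stack[-1], stack[-2], buffer[0].

    Missing positions (short stack / empty buffer) are zero-padded.
    """
    def vec(idx):
        return embeddings[idx] if idx is not None else np.zeros(DIM)

    s1 = stack[-1] if len(stack) >= 1 else None
    s2 = stack[-2] if len(stack) >= 2 else None
    b1 = buffer[0] if buffer else None
    return np.concatenate([vec(s1), vec(s2), vec(b1)])  # shape (3*DIM,)

# One hidden layer scoring the transitions; a greedy parser takes the argmax.
W1 = rng.normal(size=(3 * DIM, 16))
W2 = rng.normal(size=(16, N_TRANSITIONS))

def score_transitions(feat):
    hidden = np.maximum(feat @ W1, 0.0)  # ReLU
    return hidden @ W2

# Toy 4-token sentence with random stand-ins for contextual embeddings.
embeddings = rng.normal(size=(4, DIM))
stack, buffer = [0, 1], [2, 3]
feat = feature_vector(stack, buffer, embeddings)
best = int(np.argmax(score_transitions(feat)))
print(feat.shape, best)
```

The point of the sketch is how small the feature function is: three contextual vectors replace the dozens of hand-crafted feature templates (POS tags, morphological attributes, chunk labels) that earlier Telugu parsers relied on.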
