
A Morphology-Based Representation Model for LSTM-Based Dependency Parsing of Agglutinative Languages

2018-10-01 · CoNLL 2018 · Code Available

Şaziye Betül Özateş, Arzucan Özgür, Tunga Güngör, Balkız Öztürk


Abstract

We propose two word representation models for agglutinative languages that better capture the similarities between words that play similar roles in sentences. Our models highlight the morphological features of words and embed morphological information into their dense representations. We tested our models on the LSTM-based dependency parser with character-based word embeddings proposed by Ballesteros et al. (2015). We participated in the CoNLL 2018 Shared Task on multilingual parsing from raw text to Universal Dependencies as the BOUN team. We show that our morphology-based embedding models improve parsing performance for most of the agglutinative languages.
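To make the idea concrete, here is a minimal sketch of a morphology-based word representation. All names, dimensions, and the composition scheme (stem embedding concatenated with the sum of suffix embeddings) are illustrative assumptions, not the authors' exact models:

```python
# Hypothetical sketch of a morphology-aware word representation.
# The vocabulary, dimensions, and stem-plus-suffix composition below
# are assumptions for illustration, not the paper's actual models.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embedding tables for stems and morphological suffixes.
stem_emb = {s: rng.normal(size=DIM) for s in ["ev", "kitap"]}
suffix_emb = {m: rng.normal(size=DIM) for m in ["-ler", "-lar", "-de", "-da"]}

def word_vector(stem, suffixes):
    """Compose a word vector as [stem embedding ; sum of suffix embeddings].

    Words that share morphological suffixes (e.g. the same case marker)
    end up with similar morphology halves, even if their stems differ.
    """
    if suffixes:
        morph = np.sum([suffix_emb[s] for s in suffixes], axis=0)
    else:
        morph = np.zeros(DIM)
    return np.concatenate([stem_emb[stem], morph])

# Turkish "evlerde" ("in the houses") = ev + -ler + -de
v = word_vector("ev", ["-ler", "-de"])
print(v.shape)  # (16,)
```

In the paper the morphological information is fed into an LSTM-based parser; the fixed-size composition above merely shows how morphemes, rather than whole word forms, can drive the representation.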
