One model, two languages: training bilingual parsers with harmonized treebanks

2015-07-30 · ACL 2016

David Vilares, Carlos Gómez-Rodríguez, Miguel A. Alonso


Abstract

We introduce an approach to train lexicalized parsers using bilingual corpora obtained by merging harmonized treebanks of different languages, producing parsers that can analyze sentences in either of the learned languages, or even sentences that mix both. We test the approach on the Universal Dependency Treebanks, training with MaltParser and MaltOptimizer. The results show that these bilingual parsers are more than competitive, as most combinations not only preserve accuracy, but some even achieve significant improvements over the corresponding monolingual parsers. Preliminary experiments also show the approach to be promising on texts with code-switching and when more languages are added.
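The core step the abstract describes, merging two harmonized treebanks into a single training corpus, can be sketched as follows. This is an illustrative sketch, not the authors' exact pipeline: it assumes both treebanks share the same harmonized annotation scheme and a CoNLL-style format in which sentences are separated by blank lines; the function names and the toy sentences are invented for the example.

```python
def read_sentences(conll_text: str) -> list[str]:
    """Split a CoNLL-formatted string into sentence blocks."""
    blocks = [b.strip() for b in conll_text.strip().split("\n\n")]
    return [b for b in blocks if b]


def merge_treebanks(treebank_a: str, treebank_b: str) -> str:
    """Concatenate the sentences of two harmonized treebanks.

    The merged corpus can then be passed to a single parser trainer
    (e.g. MaltParser), yielding one model for both languages.
    """
    sentences = read_sentences(treebank_a) + read_sentences(treebank_b)
    return "\n\n".join(sentences) + "\n"


# Toy two-token sentences in a simplified CoNLL-X-like layout
# (ID, form, POS, head, dependency label); invented for illustration.
english = "1\tJohn\tNOUN\t2\tnsubj\n2\tsleeps\tVERB\t0\troot"
spanish = "1\tJuan\tNOUN\t2\tnsubj\n2\tduerme\tVERB\t0\troot"

merged = merge_treebanks(english, spanish)
print(len(read_sentences(merged)))  # → 2
```

Because the annotations are harmonized, the trainer sees one consistent label inventory, so a single lexicalized model can learn from (and later parse) sentences in either language, or a mix of both.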
