
Constraining word alignments with posterior regularization for label transfer

2022-07-01 · NAACL (ACL) 2022 · Code Available

Thomas Gueudre, Kevin Jose


Abstract

Unsupervised word alignments offer a lightweight and interpretable method to transfer labels from high- to low-resource languages, as long as semantically related words carry the same label across languages. This assumption often fails in industrial NLP pipelines, however, where multilingual annotation guidelines are complex and deviate from semantic consistency for various reasons (annotation difficulty, conflicting ontologies, upcoming feature launches, etc.). We address this difficulty by constraining the alignment models to remain consistent with both source and target annotation guidelines, leveraging posterior regularization and labeled examples. We illustrate the overall approach using IBM Model 2 (fast_align) as a base model, and report results on both internal and external annotated datasets. We measure consistent accuracy improvements on the MultiATIS++ dataset over AWESoME, a popular transformer-based alignment model, in the label projection task (+2.7% at word-level and +15% at sentence-level), and show how even a small amount of target-language annotations helps substantially.
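The label projection step described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes token-level labels on the source sentence and source-to-target alignment links in fast_align's `i-j` output convention, and copies each source label across its link. The function name and example data are hypothetical.

```python
# Minimal sketch of label projection via word alignments (hypothetical,
# not the paper's implementation): copy each source token's label onto
# the target tokens it aligns to; unaligned target tokens get a default.

def project_labels(src_labels, alignment, tgt_len, default="O"):
    """Project source token labels onto target tokens.

    src_labels: list of labels, one per source token.
    alignment:  iterable of (src_idx, tgt_idx) links, e.g. parsed from
                fast_align's "i-j" output format.
    tgt_len:    number of target tokens.
    default:    label assigned to unaligned target tokens.
    """
    tgt_labels = [default] * tgt_len
    for i, j in alignment:
        tgt_labels[j] = src_labels[i]
    return tgt_labels


# Hypothetical example: projecting a slot label from English to a
# 4-token target sentence, with links "0-0 2-1 4-3" parsed into pairs.
src_labels = ["O", "O", "O", "O", "B-city"]
links = [(0, 0), (2, 1), (4, 3)]
print(project_labels(src_labels, links, tgt_len=4))
# -> ['O', 'O', 'O', 'B-city']
```

The paper's contribution is to regularize the alignment model itself (via posterior regularization) so that the links fed into a step like this stay consistent with both annotation guidelines, rather than to change the projection rule.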
