
Morphology-rich Alphasyllabary Embeddings

2020-05-01 · LREC 2020

Amanuel Mersha, Stephen Wu


Abstract

Word embeddings have been successfully trained in many languages. However, both intrinsic and extrinsic metrics are variable across languages, especially for languages that depart significantly from English in morphology and orthography. This study focuses on building a word embedding model suitable for the Semitic language of Amharic (Ethiopia), which is both morphologically rich and written as an alphasyllabary (abugida) rather than an alphabet. We compare embeddings from tailored neural models, simple pre-processing steps, off-the-shelf baselines, and parallel tasks on a better-resourced Semitic language -- Arabic. Experiments evaluate our model's performance on word analogy tasks, illustrating the divergent objectives of morphological vs. semantic analogies.
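The abstract alludes to pre-processing steps that expose the consonant/vowel structure of the Ethiopic script to an embedding model. As a minimal sketch (hypothetical, not the paper's actual method; all names here are illustrative), the snippet below decomposes each fidel into its base consonant and a vowel-order tag, exploiting the fact that most consonant families in the Unicode Ethiopic block are laid out in runs of eight vowel orders:

```python
# Hypothetical alphasyllabary-aware preprocessing sketch; the paper's
# exact model is not described on this page. Each Ethiopic fidel encodes
# a consonant plus a vowel order, and the Unicode block arranges most
# consonant families in runs of 8, so the split can be done arithmetically.
ETHIOPIC_START, ETHIOPIC_END = 0x1200, 0x135A  # Ethiopic syllables only
VOWEL_ORDERS = ["ä", "u", "i", "a", "e", "ə", "o", "wa"]  # 1st-8th orders

def decompose_fidel(ch: str) -> list[str]:
    """Split one fidel into its base (1st-order) consonant and a vowel tag."""
    cp = ord(ch)
    if not ETHIOPIC_START <= cp <= ETHIOPIC_END:
        return [ch]  # pass non-Ethiopic characters through unchanged
    family, order = divmod(cp - ETHIOPIC_START, 8)
    base = chr(ETHIOPIC_START + family * 8)  # 1st-order form of the family
    return [base, VOWEL_ORDERS[order]]

def to_cv_sequence(word: str) -> list[str]:
    """Flatten a word into a consonant/vowel symbol sequence that a
    character- or subword-level embedding model can consume."""
    symbols = []
    for ch in word:
        symbols.extend(decompose_fidel(ch))
    return symbols

print(to_cv_sequence("ሰላም"))  # ['ሰ', 'ä', 'ለ', 'a', 'መ', 'ə']
```

Note the caveats: labiovelar variants and families with fewer than eight forms break the uniform-run assumption, so a production transliterator would need an explicit table. The analogy evaluation the abstract mentions is conventionally done over such embeddings with the vector-offset (3CosAdd) method, scored separately on morphological and semantic analogy sets.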
