
ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model

2024-04-03

Osvaldo Luamba Quinjica, David Ifeoluwa Adelani

Abstract

In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages. However, this progress has largely bypassed very low-resource languages, leaving a notable gap in the multilingual landscape. This paper addresses that gap by introducing four PLMs tailored to Angolan languages, fine-tuned with a Multilingual Adaptive Fine-tuning (MAFT) approach. We examine the role of informed embedding initialization and synthetic data in improving the downstream performance of MAFT models. Our models outperform the SOTA AfroXLMR-base (developed through MAFT) and OFA (an effective embedding initialization) baselines by 12.3 and 3.8 points, respectively.
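
The MAFT step described in the abstract amounts to continued masked-language-model pre-training of an existing multilingual PLM on target-language text. Below is a minimal sketch of that idea using the HuggingFace transformers library; the starting checkpoint, corpus file name, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of Multilingual Adaptive Fine-tuning (MAFT):
# continue masked-language-model (MLM) pre-training of a multilingual
# PLM on monolingual text in the target (here, Angolan) languages,
# which may include synthetic data.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed starting checkpoint; the paper builds on AfroXLMR-base-style models.
model_name = "Davlan/afro-xlmr-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Assumed: a plain-text corpus of Angolan-language sentences.
dataset = load_dataset("text", data_files={"train": "angolan_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="angofa-maft",       # illustrative output path
    per_device_train_batch_size=8,  # illustrative hyperparameters
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

In the paper's full pipeline, this continued pre-training would be combined with an informed (OFA-style) embedding initialization before training, rather than starting from the checkpoint's default embeddings.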
