SOTAVerified

Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization

2023-03-25

Bashar Al-Rfooh, Gheith Abandah, Rami Al-Rfou


Abstract

Most previous work on learning diacritization of the Arabic language has relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to predict and insert missing diacritics in Arabic text, a complex task that requires understanding both the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art results on the diacritization task with a minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the benefit of the research community.
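A token-free model like ByT5 operates directly on UTF-8 bytes, which suits diacritization well: Arabic diacritics are combining marks that add extra bytes to a word, so the model simply learns to insert new byte tokens into the sequence. The sketch below illustrates ByT5's byte-to-id mapping (each byte value is offset by 3 to reserve ids for pad, eos, and unk); the example words are our own illustration, not taken from the paper:

```python
def byt5_encode(text: str) -> list[int]:
    """Map each UTF-8 byte b of the text to ByT5 token id b + 3.

    Ids 0, 1, 2 are reserved by ByT5 for the pad, eos, and unk
    special tokens, so raw byte values are shifted up by 3.
    """
    return [b + 3 for b in text.encode("utf-8")]


def byt5_decode(ids: list[int]) -> str:
    """Invert the mapping: subtract the 3-id offset and decode UTF-8."""
    return bytes(i - 3 for i in ids).decode("utf-8")


bare = "كتب"          # k-t-b without diacritics: 3 letters x 2 bytes each
diacritized = "كَتَبَ"  # same word with a fatha after each letter

print(len(byt5_encode(bare)))         # 6 tokens
print(len(byt5_encode(diacritized)))  # 12 tokens: each fatha adds 2 bytes
print(byt5_decode(byt5_encode(diacritized)) == diacritized)  # True
```

Because the output sequence is strictly the input plus inserted diacritic bytes, no Arabic-specific tokenizer or feature engineering is needed, which matches the paper's claim of training with no feature engineering.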
