The birth of Romanian BERT

2020-09-18 · Findings of the Association for Computational Linguistics

Stefan Daniel Dumitrescu, Andrei-Marius Avram, Sampo Pyysalo


Abstract

Large-scale pretrained language models have become ubiquitous in Natural Language Processing. However, most of these models are available either for high-resource languages, in particular English, or as multilingual models that compromise performance on individual languages for coverage. This paper introduces Romanian BERT, the first purely Romanian transformer-based language model, pretrained on a large text corpus. We discuss corpus composition and cleaning, the model training process, and an extensive evaluation of the model on various Romanian datasets. We open-source not only the model itself, but also a repository that explains how to obtain the corpus, how to fine-tune and use the model in production (with practical examples), and how to fully replicate the evaluation process.
