Pre-training and Evaluating Transformer-based Language Models for Icelandic

2022-06-01 · LREC 2022

Jón Friðrik Daðason, Hrafn Loftsson


Abstract

In this paper, we evaluate several Transformer-based language models for Icelandic on four downstream tasks: Part-of-Speech Tagging, Named Entity Recognition, Dependency Parsing, and Automatic Text Summarization. We pre-train four types of monolingual ELECTRA and ConvBERT models and compare our results to those of a previously trained monolingual RoBERTa model and the multilingual mBERT model. We find that the Transformer models outperform previous state-of-the-art models, often by a large margin. Furthermore, our results indicate that pre-training larger language models yields a significant reduction in error rates compared to smaller models. Finally, our results show that the monolingual models for Icelandic outperform a comparably sized multilingual model.
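As a rough illustration of the evaluation setup the abstract describes (this is not the authors' released code), the sketch below loads a pre-trained encoder with a token-classification head and runs it on an Icelandic sentence, as one would for PoS tagging or NER. The checkpoint name, tagset size, and example sentence are placeholders, not artifacts from the paper.

```python
# Minimal sketch, assuming a hypothetical Icelandic checkpoint; the
# authors' actual models, tagsets, and fine-tuning code may differ.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "path/to/icelandic-electra"  # hypothetical checkpoint name
NUM_TAGS = 16                             # placeholder tagset size

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_TAGS
)
model.eval()

sentence = "Hrafn les bók."  # Icelandic: "Hrafn reads a book."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, NUM_TAGS)

# Pick the highest-scoring tag per subword token and print the pairs.
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[tag_id.item()])
```

In practice the head would first be fine-tuned on labeled Icelandic data; an untuned head as loaded here produces random tags, so the snippet only shows the inference plumbing.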
