
Evaluating Contextualized Language Models for Hungarian

2021-02-22

Judit Ács, Dániel Lévai, Dávid Márk Nemeskey, András Kornai


Abstract

We present an extended comparison of contextualized language models for Hungarian. We compare huBERT, a Hungarian model, against four multilingual models, including multilingual BERT. We evaluate these models on three tasks: morphological probing, POS tagging, and NER. We find that huBERT works better than the other models, often by a large margin, particularly near the global optimum (typically at the middle layers). We also find that huBERT tends to generate fewer subwords per word, and that using the last subword for token-level tasks is generally a better choice than using the first one.
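The subword-fertility comparison mentioned in the abstract can be sketched with Hugging Face tokenizers. This is a minimal illustration, not the paper's code: the Hub model identifiers and the example sentence are assumptions, and huBERT is assumed to be available as SZTAKI-HLT/hubert-base-cc.

```python
# Minimal sketch: compare subword fertility (subwords per word) of huBERT
# and multilingual BERT on a Hungarian sentence. Model ids are assumed.
from transformers import AutoTokenizer

MODELS = {
    "huBERT": "SZTAKI-HLT/hubert-base-cc",        # assumed Hub id for huBERT
    "mBERT": "bert-base-multilingual-cased",
}

sentence = "A képviselők elfogadták a költségvetési törvényjavaslatot."
words = sentence.split()

for name, model_id in MODELS.items():
    tok = AutoTokenizer.from_pretrained(model_id)
    # Tokenize each word separately to count how many subwords it yields.
    pieces_per_word = [tok.tokenize(w) for w in words]
    fertility = sum(len(p) for p in pieces_per_word) / len(words)
    print(f"{name}: {fertility:.2f} subwords/word")
    # For token-level tasks (POS, NER), one subword vector is kept per word;
    # per the abstract, taking the last subword (pieces[-1]) generally works
    # better than taking the first one.
```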
