
RoBERTurk: Adjusting RoBERTa for Turkish

2024-01-07

Nuri Tas


Abstract

We pretrain RoBERTa on Turkish corpora using a BPE tokenizer. Our model outperforms the BERTurk family of models on the BOUN dataset for the POS task, underperforms on the IMST dataset for the same task, and achieves competitive scores on the Turkish split of the XTREME dataset for the NER task, all while being pretrained on less data than its competitors. We release our pretrained model and tokenizer.
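Since the pretrained model and tokenizer are released, they can presumably be loaded for masked-LM inference with Hugging Face transformers. Below is a minimal sketch; the model identifier `nuritas/roberturk-base` is a hypothetical placeholder, not one confirmed by the paper.

```python
# Minimal sketch: loading a RoBERTa-style Turkish checkpoint and running
# masked-token prediction. The model ID is hypothetical; substitute the
# identifier of the released RoBERTurk checkpoint.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

MODEL_ID = "nuritas/roberturk-base"  # hypothetical placeholder ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)  # the released BPE tokenizer
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Example Turkish sentence ("The capital of Turkey is <mask>.")
print(fill(f"Türkiye'nin başkenti {tokenizer.mask_token}'dır."))
```

Evaluating on the POS and NER tasks mentioned in the abstract would then amount to fine-tuning this checkpoint with a token-classification head on BOUN, IMST, and the Turkish split of XTREME.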
