XLM-R

Papers

Showing 26–50 of 221 papers

| Title | Status | Hype |
| --- | --- | --- |
| COVID-19 Named Entity Recognition for Vietnamese | Code | 1 |
| ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic | Code | 1 |
| DUMB: A Benchmark for Smart Evaluation of Dutch Models | Code | 1 |
| BERTweet: A pre-trained language model for English Tweets | Code | 1 |
| Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation | Code | 1 |
| GREEK-BERT: The Greeks visiting Sesame Street | Code | 1 |
| ESCOXLM-R: Multilingual Taxonomy-driven Pre-training for the Job Market Domain | Code | 1 |
| FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models | Code | 1 |
| Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary | Code | 1 |
| Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages | Code | 1 |
| IndoNLI: A Natural Language Inference Dataset for Indonesian | Code | 1 |
| Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference | Code | 1 |
| Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots | Code | 1 |
| Improving Bilingual Lexicon Induction with Cross-Encoder Reranking | Code | 1 |
| GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource Languages Injected with Multilingual Graph Knowledge | Code | 1 |
| Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning | Code | 1 |
| AmaSQuAD: A Benchmark for Amharic Extractive Question Answering | — | 0 |
| Do Not Fire the Linguist: Grammatical Profiles Help Language Models Detect Semantic Change | — | 0 |
| BERTifying Sinhala — A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification | — | 0 |
| Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems | — | 0 |
| A Primer on Pretrained Multilingual Language Models | — | 0 |
| DN at SemEval-2023 Task 12: Low-Resource Language Text Classification via Multilingual Pretrained Language Model Fine-tuning | — | 0 |
| BabyLMs for isiXhosa: Data-Efficient Language Modelling in a Low-Resource Context | — | 0 |
| Massively Multilingual Lexical Specialization of Multilingual Transformers | — | 0 |
Page 2 of 9

No leaderboard results yet.