SOTAVerified

XLM-R

Papers

Showing 176-200 of 221 papers

Title | Status | Hype
A Cross-language Pre-trained Model with Enhanced Semantic Connection for MT Quality Estimation | - | 0
Emotion Stimulus Detection in German News Headlines | - | 0
TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic- to Individual-Language Finetuning | - | 0
A Primer on Pretrained Multilingual Language Models | - | 0
Automatic Sexism Detection with Multilingual Transformer Models | - | 0
How to Adapt Your Pretrained Multilingual Model to 1600 Languages | - | 0
Diagnosing Transformers in Task-Oriented Semantic Parsing | - | 0
XeroAlign: Zero-Shot Cross-lingual Transformer Alignment | Code | 0
Larger-Scale Transformers for Multilingual Masked Language Modeling | - | 0
Multilingual and Zero-Shot is Closing in on Monolingual Web Register Classification | - | 0
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages | Code | 0
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training | Code | 0
Bilingual alignment transfers to multilingual alignment for unsupervised parallel text mining | Code | 0
MCL@IITK at SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation using Augmented Data, Signals, and Transformers | - | 0
Challenges in Annotating and Parsing Spoken, Code-switched, Frisian-Dutch Data | Code | 0
Benchmarking Pre-trained Language Models for Multilingual NER: TraSpaS at the BSNLP2021 Shared Task | Code | 0
Priberam Labs at the 3rd Shared Task on SlavNER | - | 0
LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation | - | 0
Automatic Difficulty Classification of Arabic Sentences | - | 0
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages | - | 0
NLP-CUET@DravidianLangTech-EACL2021: Offensive Language Detection from Multilingual Code-Mixed Text using Transformers | Code | 0
Bootstrapping Multilingual AMR with Contextual Word Alignments | - | 0
LOME: Large Ontology Multilingual Extraction | - | 0
Distilling Large Language Models into Tiny and Effective Students using pQRNN | - | 0
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers | Code | 0

No leaderboard results yet.