SOTAVerified

XLM-R

Papers

Showing 151–200 of 221 papers

Title ([Code] marks papers with code available; all listed papers have a hype count of 0)

Multi-stage Distillation Framework for Cross-Lingual Semantic Similarity Matching
Prix-LM: Pretraining for Multilingual Knowledge Base Construction [Code]
Combining static and contextualised multilingual embeddings
Gradient Sparsification For Masked Fine-Tuning of Transformers
NICT Kyoto Submission for the WMT'21 Quality Estimation Task: Multimetric Multilingual Pretraining for Critical Error Detection
Saliency-based Multi-View Mixed Language Training for Zero-shot Cross-lingual Classification
TEET! Tunisian Dataset for Toxic Speech Detection
Boosting Transformers for Job Expression Extraction and Classification in a Low-Resource Setting
On the Universality of Deep Contextual Language Models
FBERT: A Neural Transformer for Identifying Offensive Content
Nearest Neighbour Few-Shot Learning for Cross-lingual Classification [Code]
Siamese Networks for Inference in Malayalam Language Texts
Classification of Code-Mixed Text Using Capsule Networks
Cross-Lingual Text Classification of Transliterated Hindi and Malayalam [Code]
Contributions of Transformer Attention Heads in Multi- and Cross-lingual Tasks
LIORI at SemEval-2021 Task 2: Span Prediction and Binary Classification approaches to Word-in-Context Disambiguation
COSY: COunterfactual SYntax for Cross-Lingual Understanding [Code]
Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary
ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic [Code]
IBM MNLP IE at CASE 2021 Task 1: Multigranular and Multilingual Event Detection on Protest News
GlossReader at SemEval-2021 Task 2: Reading Definitions Improves Contextualized Word Embeddings
RobertNLP at the IWPT 2021 Shared Task: Simple Enhanced UD Parsing for 17 Languages
SkoltechNLP at SemEval-2021 Task 2: Generating Cross-Lingual Training Data for the Word-in-Context Task
Team "DaDeFrNi" at CASE 2021 Task 1: Document and Sentence Classification for Protest Event Detection
A Cross-language Pre-trained Model with Enhanced Semantic Connection for MT Quality Estimation
Emotion Stimulus Detection in German News Headlines
TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic- to Individual-Language Finetuning
A Primer on Pretrained Multilingual Language Models
Automatic Sexism Detection with Multilingual Transformer Models
How to Adapt Your Pretrained Multilingual Model to 1600 Languages
Diagnosing Transformers in Task-Oriented Semantic Parsing
XeroAlign: Zero-Shot Cross-lingual Transformer Alignment [Code]
Larger-Scale Transformers for Multilingual Masked Language Modeling
Multilingual and Zero-Shot is Closing in on Monolingual Web Register Classification
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages [Code]
Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training [Code]
Bilingual alignment transfers to multilingual alignment for unsupervised parallel text mining [Code]
MCL@IITK at SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation using Augmented Data, Signals, and Transformers
Challenges in Annotating and Parsing Spoken, Code-switched, Frisian-Dutch Data [Code]
Benchmarking Pre-trained Language Models for Multilingual NER: TraSpaS at the BSNLP2021 Shared Task [Code]
Priberam Labs at the 3rd Shared Task on SlavNER
LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation
Automatic Difficulty Classification of Arabic Sentences
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages
NLP-CUET@DravidianLangTech-EACL2021: Offensive Language Detection from Multilingual Code-Mixed Text using Transformers [Code]
Bootstrapping Multilingual AMR with Contextual Word Alignments
LOME: Large Ontology Multilingual Extraction
Distilling Large Language Models into Tiny and Effective Students using pQRNN
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers [Code]

No leaderboard results yet.