
Pretrained Multilingual Language Models

Papers


Title | Status | Hype
Improving Word Translation via Two-Stage Contrastive Learning | Code | 1
How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models | Code | 1
Improving Bilingual Lexicon Induction with Cross-Encoder Reranking | Code | 1
Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages | Code | 1
A Primer on Pretrained Multilingual Language Models | - | 0
Are Pretrained Multilingual Models Equally Fair Across Languages? | - | 0
Empirical study of pretrained multilingual language models for zero-shot cross-lingual knowledge transfer in generation | - | 0
Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders | - | 0
Improving Non-autoregressive Translation Quality with Pretrained Language Model, Embedding Distillation and Upsampling Strategy for CTC | - | 0
Multi-Source Cross-Lingual Constituency Parsing | - | 0
OpenNER 1.0: Standardized Open-Access Named Entity Recognition Datasets in 50+ Languages | - | 0
Out of Thin Air: Is Zero-Shot Cross-Lingual Keyword Detection Better Than Unsupervised? | - | 0
Rumour Detection via Zero-shot Cross-lingual Transfer Learning | - | 0
Team ÚFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models | - | 0
The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding | - | 0
Transliteration: A Simple Technique For Improving Multilingual Language Modeling | - | 0
Investigating Math Word Problems using Pretrained Multilingual Language Models | - | 0
Exploring the Maze of Multilingual Modeling | - | 0
Discovering Low-rank Subspaces for Language-agnostic Multilingual Representations | Code | 0
Investigating Math Word Problems using Pretrained Multilingual Language Models | Code | 0
Specializing Multilingual Language Models: An Empirical Study | Code | 0
Language Agnostic Multilingual Information Retrieval with Contrastive Learning | Code | 0
Robustification of Multilingual Language Models to Real-world Noise in Crosslingual Zero-shot Settings with Robust Contrastive Pretraining | Code | 0
To Adapt or to Fine-tune: A Case Study on Abstractive Summarization | Code | 0
Are Pretrained Multilingual Models Equally Fair Across Languages? | Code | 0

Leaderboard

No leaderboard results yet.