SOTAVerified

Cross-Lingual Transfer

Cross-lingual transfer refers to transfer learning that leverages the data and models available for a resource-rich language (e.g., English) to solve tasks in another, typically lower-resource, language.
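The core idea can be sketched in a few lines. The toy example below is purely illustrative: the hand-built word vectors and the tiny perceptron stand in for a real aligned multilingual embedding space (e.g., from mBERT or XLM-R) and a real classifier. A sentiment model is trained on English sentences only, then evaluated zero-shot on Spanish, which works because translation pairs share representations.

```python
# Toy zero-shot cross-lingual transfer. All vectors and words are
# hypothetical; a real system would use a pretrained multilingual encoder.

# "Aligned multilingual embedding space": translation pairs share a vector.
EMB = {
    "good": (1.0, 0.2),   "bueno": (1.0, 0.2),
    "great": (0.9, 0.4),  "genial": (0.9, 0.4),
    "bad": (-1.0, 0.1),   "malo": (-1.0, 0.1),
    "awful": (-0.8, 0.3), "horrible": (-0.8, 0.3),
    "movie": (0.0, 1.0),  "pelicula": (0.0, 1.0),
}

def embed(sentence):
    """Sentence vector = average of known word vectors."""
    vecs = [EMB[w] for w in sentence.split() if w in EMB]
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(2))

def train_perceptron(data, epochs=20, lr=0.1):
    """Train on (sentence, label) pairs; labels are +1 / -1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for sent, y in data:
            x = embed(sent)
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w = [w[0] + lr * y * x[0], w[1] + lr * y * x[1]]
                b += lr * y
    return w, b

def predict(model, sent):
    w, b = model
    x = embed(sent)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Train on English labels only...
english_train = [("good movie", 1), ("great movie", 1),
                 ("bad movie", -1), ("awful movie", -1)]
model = train_perceptron(english_train)

# ...then evaluate zero-shot on Spanish, with no Spanish training data.
print(predict(model, "pelicula genial"))    # → 1  (positive)
print(predict(model, "pelicula horrible"))  # → -1 (negative)
```

Because the English and Spanish words map to the same points in the shared space, the decision boundary learned from English data applies to Spanish inputs unchanged; this is the mechanism that zero-shot transfer with multilingual encoders exploits at scale.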

Papers

Showing 626–650 of 782 papers

| Title | Status | Hype |
| --- | --- | --- |
| Exploring Benefits of Transfer Learning in Neural Machine Translation | | 0 |
| Exploring Cross-Lingual Transfer Learning with Unsupervised Machine Translation | | 0 |
| Exploring Cross-Lingual Transfer of Morphological Knowledge In Sequence-to-Sequence Models | | 0 |
| Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification | | 0 |
| Exploring Pretraining via Active Forgetting for Improving Cross Lingual Transfer for Decoder Language Models | | 0 |
| Exploring the Impact of Data Quantity on ASR in Extremely Low-resource Languages | | 0 |
| Exposing the limits of Zero-shot Cross-lingual Hate Speech Detection | | 0 |
| Extending Multilingual BERT to Low-Resource Languages | | 0 |
| Feature Aggregation in Zero-Shot Cross-Lingual Transfer Using Multilingual BERT | | 0 |
| Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages | | 0 |
| Fine-Tuning BERT with Character-Level Noise for Zero-Shot Transfer to Dialects and Closely-Related Languages | | 0 |
| Florenz: Scaling Laws for Systematic Generalization in Vision-Language Models | | 0 |
| Fortification of Neural Morphological Segmentation Models for Polysynthetic Minimal-Resource Languages | | 0 |
| FrameNet on the Way to Babel: Creating a Bilingual FrameNet Using Wiktionary as Interlingual Connection | | 0 |
| FreeTransfer-X: Safe and Label-Free Cross-Lingual Transfer from Off-the-Shelf Models | | 0 |
| From Monolingual to Multilingual FAQ Assistant using Multilingual Co-training | | 0 |
| From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers | | 0 |
| Frustratingly Easy Cross-Lingual Transfer for Transition-Based Dependency Parsing | | 0 |
| Frustratingly Simple Regularization to Improve Zero-shot Cross-lingual Robustness | | 0 |
| GCDH@LT-EDI-EACL2021: XLM-RoBERTa for Hope Speech Detection in English, Malayalam, and Tamil | | 0 |
| Generalization Measures for Zero-Shot Cross-Lingual Transfer | | 0 |
| GlossReader at SemEval-2021 Task 2: Reading Definitions Improves Contextualized Word Embeddings | | 0 |
| HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training | | 0 |
| How do languages influence each other? Studying cross-lingual data sharing during LM fine-tuning | | 0 |
| How Do Multilingual Encoders Learn Cross-lingual Representation? | | 0 |
Page 26 of 32

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot) | Accuracy | 94.4 | | Unverified |
| 2 | mT0-13B | Accuracy | 84.45 | | Unverified |
| 3 | RoBERTa Large (translate test) | Accuracy | 76.05 | | Unverified |
| 4 | BLOOMZ | Accuracy | 75.5 | | Unverified |
| 5 | MAD-X Base | Accuracy | 60.94 | | Unverified |
| 6 | mGPT | Accuracy | 55.5 | | Unverified |