SOTAVerified

Cross-Lingual Natural Language Inference

Using data and models from a language with ample resources (e.g., English) to solve a natural language inference task in another, typically lower-resource, language.
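The zero-shot transfer recipe shared by many entries below can be sketched with a toy, fully invented example: word vectors aligned across languages stand in for a multilingual encoder, a similarity threshold is fit on English-labeled premise–hypothesis pairs only, and the resulting classifier is applied unchanged to Spanish input. Everything here (the vocabulary, the vectors, the threshold rule, and the reduction to two labels instead of the usual entailment/neutral/contradiction) is an illustrative assumption, not any listed paper's method.

```python
import math

# Toy stand-in for aligned multilingual embeddings: translation pairs
# (e.g. "dog"/"perro") share one vector by construction -- the property
# that real multilingual encoders (mBERT, XLM-R, LASER) only approximate.
VEC = {
    "dog": (1.0, 0.0, 0.0), "perro": (1.0, 0.0, 0.0),
    "cat": (0.8, 0.2, 0.0), "gato": (0.8, 0.2, 0.0),
    "animal": (0.9, 0.1, 0.0),
    "sleeps": (0.0, 1.0, 0.0), "duerme": (0.0, 1.0, 0.0),
    "runs": (0.0, 0.0, 1.0), "corre": (0.0, 0.0, 1.0),
    "a": (0.1, 0.1, 0.1), "an": (0.1, 0.1, 0.1),
    "un": (0.1, 0.1, 0.1), "el": (0.1, 0.1, 0.1),
}

def embed(sentence):
    """Crude sentence embedding: mean of the known word vectors."""
    vecs = [VEC[w] for w in sentence.lower().split() if w in VEC]
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(3))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(premise, hypothesis, threshold):
    sim = cosine(embed(premise), embed(hypothesis))
    return "entailment" if sim >= threshold else "contradiction"

# "Training" touches English labels only: put the decision threshold
# halfway between the mean premise-hypothesis similarity of each class.
english = [
    ("a dog sleeps", "an animal sleeps", "entailment"),
    ("a cat runs", "an animal runs", "entailment"),
    ("a dog sleeps", "a dog runs", "contradiction"),
    ("a cat sleeps", "a cat runs", "contradiction"),
]
ent = [cosine(embed(p), embed(h)) for p, h, y in english if y == "entailment"]
con = [cosine(embed(p), embed(h)) for p, h, y in english if y == "contradiction"]
threshold = (sum(ent) / len(ent) + sum(con) / len(con)) / 2

# Zero-shot transfer: the classifier runs on Spanish unchanged,
# having never seen a Spanish label.
print(classify("un gato duerme", "el animal duerme", threshold))  # entailment
print(classify("un perro duerme", "un perro corre", threshold))   # contradiction
```

Real systems on this list replace the toy dictionary with a jointly pretrained multilingual encoder and the threshold with a fine-tuned classification head, but the zero-shot structure — English labels in, other-language predictions out — is the same.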

Papers

Showing 31 of 31 papers

Title | Status | Hype
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
mGPT: Few-Shot Learners Go Multilingual | Code | 2
Subword Mapping and Anchoring across Languages | Code | 1
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond | Code | 1
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data | Code | 1
ByT5: Towards a token-free future with pre-trained byte-to-byte models | Code | 1
Better Fine-Tuning by Reducing Representational Collapse | Code | 1
Do Multilingual Language Models Think Better in English? | Code | 1
Leveraging Entailment Judgements in Cross-Lingual Summarisation | Code | 0
Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding | Code | 0
Enhancing Cross-lingual Natural Language Inference by Prompt-learning from Cross-lingual Templates | Code | 0
Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer | Code | 0
Language Embeddings for Typology and Cross-lingual Transfer Learning | Code | 0
Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters | Code | 0
PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining | Code | 0
Rethinking embedding coupling in pre-trained language models | Code | 0
SILT: Efficient transformer training for inter-lingual inference | Code | 0
XNLI: Evaluating Cross-lingual Sentence Representations | Code | 0
Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems | — | 0
On Learning Universal Representations Across Languages | — | 0
A Deep Transfer Learning Method for Cross-Lingual Natural Language Inference | — | 0
PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining | — | 0
Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks | — | 0
Data Augmentation with Adversarial Training for Cross-Lingual NLI | — | 0
Robust Unsupervised Cross-Lingual Word Embedding using Domain Flow Interpolation | — | 0
XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering | — | 0
Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer | — | 0
Cross-Lingual Transfer with MAML on Trees | — | 0
Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings | — | 0
Meta-Learning with MAML on Trees | — | 0
Cross-Document Cross-Lingual NLI via RST-Enhanced Graph Fusion and Interpretability Prediction | — | 0

No leaderboard results yet.