SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice with the SNLI task by following the corresponding d2l.ai chapter.
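As a quick illustration, the three-way classification above can be run with any NLI-finetuned checkpoint. The sketch below uses the Hugging Face `transformers` library with the `roberta-large-mnli` checkpoint; the checkpoint choice is an assumption for illustration, not something this page prescribes.

```python
# Minimal NLI sketch: score one premise-hypothesis pair with an
# MNLI-finetuned model. The checkpoint choice is illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer joins the pair with the model's separator tokens.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per class

# Map the argmax class index back to its label name.
label = model.config.id2label[logits.argmax(dim=-1).item()]
print(label)
```

For this SNLI example the expected prediction is the entailment class; swapping in any other MNLI-style checkpoint only changes the `id2label` mapping, not the overall pattern.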

Papers

Showing 851–900 of 1,961 papers (page 18 of 40)

| Title | Status | Hype |
| --- | --- | --- |
| Diff-Explainer: Differentiable Convex Optimization for Explainable Multi-hop Inference | | 0 |
| Scalar Adjective Identification and Multilingual Ranking | | 0 |
| Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | | 0 |
| Switching Contexts: Transportability Measures for NLP | Code | 0 |
| Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark | Code | 1 |
| Entailment as Few-Shot Learner | Code | 1 |
| PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1 |
| Finding Fuzziness in Neural Network Models of Language Processing | Code | 0 |
| SimCSE: Simple Contrastive Learning of Sentence Embeddings | Code | 2 |
| Can NLI Models Verify QA Systems' Predictions? | Code | 1 |
| Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning | Code | 1 |
| Q^2: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering | Code | 1 |
| Supervising Model Attention with Human Explanations for Robust Natural Language Inference | Code | 0 |
| How to Train BERT with an Academic Budget | Code | 1 |
| Does Putting a Linguist in the Loop Improve NLU Data Collection? | | 0 |
| "I'm Not Mad": Commonsense Implications of Negation and Contradiction | | 0 |
| Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation | Code | 1 |
| Unsupervised Learning of Explainable Parse Trees for Improved Generalisation | Code | 0 |
| NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model Performance | Code | 0 |
| Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections | Code | 1 |
| Incorporating External Knowledge to Enhance Tabular Reasoning | Code | 1 |
| Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks | Code | 0 |
| BreakingBERT@IITK at SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables | Code | 0 |
| Improving Pretrained Models for Zero-shot Multi-label Text Classification through Reinforced Label Hierarchy Reasoning | Code | 0 |
| Cross-Lingual Transfer with MAML on Trees | | 0 |
| How Fast can BERT Learn Simple Natural Language Inference? | | 0 |
| SICK-NL: A Dataset for Dutch Natural Language Inference | Code | 0 |
| A Simple Three-Step Approach for the Automatic Detection of Exaggerated Statements in Health Science News | | 0 |
| Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference | | 0 |
| You Can Do Better! If You Elaborate the Reason When Making Prediction | | 0 |
| Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2 | | 0 |
| Unsupervised Contextual Paraphrase Generation using Lexical Control and Reinforcement Learning | | 0 |
| TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing | | 0 |
| SILT: Efficient transformer training for inter-lingual inference | Code | 0 |
| Robustly Optimized and Distilled Training for Natural Language Understanding | | 0 |
| Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence | Code | 1 |
| Meta-Learning with MAML on Trees | | 0 |
| Overcoming Poor Word Embeddings with Word Definitions | | 0 |
| PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains | Code | 1 |
| Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters | Code | 1 |
| Have Attention Heads in BERT Learned Constituency Grammar? | | 0 |
| Capturing Label Distribution: A Case Study in NLI | | 0 |
| Language Models for Lexical Inference in Context | Code | 0 |
| Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention | Code | 1 |
| A Note on Argumentative Topology: Circularity and Syllogisms as Unsolved Problems | | 0 |
| LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction | Code | 1 |
| Muppet: Massive Multi-task Representations with Pre-Finetuning | Code | 0 |
| Exploring Transitivity in Neural NLI Models through Veridicality | Code | 0 |
| Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | — | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | — | Unverified |
| 3 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | — | Unverified |
| 4 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | — | Unverified |
| 5 | RoBERTa-large + Self-Explaining | % Test Accuracy | 92.3 | — | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | — | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | — | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | — | Unverified |
| 9 | MT-DNN-SMART-LARGE-v0 | % Test Accuracy | 91.7 | — | Unverified |
| 10 | MT-DNN-SMART (100% of Training Data) | Dev Accuracy | 91.6 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96.0 | — | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | — | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | — | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | — | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | — | Unverified |
| 6 | MUPPET RoBERTa Large | Accuracy | 92.8 | — | Unverified |
| 7 | DeBERTaV3-large | Accuracy | 92.7 | — | Unverified |
| 8 | T5-XXL 11B | Accuracy | 92.5 | — | Unverified |
| 9 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | — | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | — | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | — | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92.0 | — | Unverified |
| 4 | T5 | Matched | 92.0 | — | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | — | Unverified |
| 6 | T5-3B | Matched | 91.4 | — | Unverified |
| 7 | ALBERT | Matched | 91.3 | — | Unverified |
| 8 | DeBERTa (large) | Matched | 91.1 | — | Unverified |
| 9 | Adv-RoBERTa ensemble | Matched | 91.1 | — | Unverified |
| 10 | SMART-RoBERTa | Dev Matched | 91.1 | — | Unverified |
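The "Claimed" figures in the leaderboards above are all plain label accuracy: the fraction of premise–hypothesis pairs whose predicted NLI label matches the gold label. A minimal sketch (the label lists below are toy data, not real model output):

```python
def nli_accuracy(gold, pred):
    """Fraction of examples where the predicted NLI label matches the gold label."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must have the same length")
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Toy illustration with hypothetical predictions:
gold = ["entailment", "contradiction", "neutral", "entailment"]
pred = ["entailment", "contradiction", "entailment", "entailment"]
print(nli_accuracy(gold, pred))  # → 0.75
```

The only variation across the tables is the split being scored (test vs. dev, matched vs. mismatched), not the metric itself.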