SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
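At its core, NLI is three-way classification over (premise, hypothesis) pairs. The sketch below is a minimal illustration of that decision, assuming hypothetical per-label scores in place of a real trained model:

```python
# Minimal sketch of three-way NLI classification.
# A real system would produce one logit per label from the sentence pair;
# here the logits are assumed values for illustration only.

LABELS = ["entailment", "contradiction", "neutral"]

def classify(logits):
    """Map per-label logits to the highest-scoring NLI label."""
    best = max(range(len(LABELS)), key=lambda i: logits[i])
    return LABELS[best]

# Hypothetical logits for the soccer example above:
# "A soccer game with multiple males playing." ->
# "Some men are playing a sport."
logits = [4.1, -2.3, 0.7]  # assumed scores, not real model output
print(classify(logits))  # entailment
```

A real model replaces the hand-written logits with scores computed from the sentence pair; the argmax over the three labels stays the same.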

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
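For hands-on work, SNLI is distributed as JSON Lines, one premise-hypothesis pair per line. A minimal loader might look like the sketch below; the field names follow the standard SNLI release, and the inline sample (including a made-up record for the "-" case) stands in for a real file:

```python
import json

def load_snli(lines):
    """Parse SNLI-style JSON Lines, skipping pairs without a gold label.

    Each record carries 'sentence1' (premise), 'sentence2' (hypothesis),
    and 'gold_label'; a label of '-' marks annotator disagreement and is
    conventionally excluded from training and evaluation.
    """
    examples = []
    for line in lines:
        record = json.loads(line)
        if record["gold_label"] == "-":
            continue  # no majority label among annotators
        examples.append(
            (record["sentence1"], record["sentence2"], record["gold_label"])
        )
    return examples

# Inline sample in place of a file such as snli_1.0_dev.jsonl:
sample = [
    '{"sentence1": "A soccer game with multiple males playing.", '
    '"sentence2": "Some men are playing a sport.", "gold_label": "entailment"}',
    '{"sentence1": "An older and younger man smiling.", '
    '"sentence2": "Two men are smiling and laughing at the cats playing '
    'on the floor.", "gold_label": "neutral"}',
    # made-up record illustrating the '-' (no consensus) case:
    '{"sentence1": "Two kids are playing.", '
    '"sentence2": "Children are outside.", "gold_label": "-"}',
]
print(load_snli(sample))
```

With a real file, replace `sample` with `open("snli_1.0_dev.jsonl")`.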

Further reading:

Papers

Showing 976–1000 of 1961 papers

| Title | Status | Hype |
| --- | --- | --- |
| Semeval-2012 Task 8: Cross-lingual Textual Entailment for Content Synchronization | | 0 |
| SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge | | 0 |
| Semeval-2013 Task 8: Cross-lingual Textual Entailment for Content Synchronization | | 0 |
| SemEval-2014 Task 10: Multilingual Semantic Textual Similarity | | 0 |
| SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment | | 0 |
| SemEval-2015 Task 17: Taxonomy Extraction Evaluation (TExEval) | | 0 |
| SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability | | 0 |
| SemEval-2016 Task 13: Taxonomy Extraction Evaluation (TExEval-2) | | 0 |
| SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation | | 0 |
| SemEval-2016 Task 6: Detecting Stance in Tweets | | 0 |
| SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation | | 0 |
| SemEval-2018 Task 9: Hypernym Discovery | | 0 |
| SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment | | 0 |
| SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data | | 0 |
| SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials | | 0 |
| Semi-Automatic Construction of a Textual Entailment Dataset: Selecting Candidates with Vector Space Models | | 0 |
| Semi-Markov Phrase-Based Monolingual Alignment | | 0 |
| Semi-Supervised Clustering for Short Answer Scoring | | 0 |
| SenseBERT: Driving Some Sense into BERT | | 0 |
| Sentence Embedding Evaluation Using Pyramid Annotation | | 0 |
| Sentence Modeling via Multiple Word Embeddings and Multi-level Comparison for Semantic Textual Similarity | | 0 |
| Sentence Pair Embeddings Based Evaluation Metric for Abstractive and Extractive Summarization | | 0 |
| Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions. | | 0 |
| SERC: Syntactic and Semantic Sequence based Event Relation Classification | | 0 |
| Service-oriented Text-to-SQL Parsing | | 0 |
Page 40 of 79

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 4 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 9 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |
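The "% Test Accuracy" and Matched/Mismatched figures above are plain label accuracy, so verifying a claimed score reduces to comparing model predictions against the gold labels of the relevant split. A minimal sketch, using illustrative labels rather than real model output:

```python
def accuracy(predictions, gold):
    """Fraction of examples where the predicted label equals the gold label."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Illustrative labels only, not real model output:
gold = ["entailment", "neutral", "contradiction", "entailment"]
pred = ["entailment", "neutral", "neutral", "entailment"]
print(f"{100 * accuracy(pred, gold):.1f}")  # 75.0
```

For MultiNLI, the same computation is run twice: once on the matched (in-domain) development/test set and once on the mismatched (out-of-domain) one.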