SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
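To make the three-way labeling concrete, here is a toy lexical-overlap heuristic (purely illustrative and an assumption of this page, not any system from the listings below: the thresholds are arbitrary, and real NLI systems use trained models — word overlap alone routinely mislabels pairs like those in the table):

```python
import re

# Toy "baseline" that guesses an NLI label from word overlap between
# premise and hypothesis. Purely illustrative: the thresholds are
# arbitrary, and overlap cannot capture real entailment or contradiction.

LABELS = ("entailment", "neutral", "contradiction")

def overlap_label(premise: str, hypothesis: str) -> str:
    p = set(re.findall(r"\w+", premise.lower()))
    h = set(re.findall(r"\w+", hypothesis.lower()))
    # Fraction of hypothesis words that also appear in the premise.
    score = len(p & h) / max(len(h), 1)
    if score > 0.6:
        return "entailment"
    if score > 0.2:
        return "neutral"
    return "contradiction"

print(overlap_label("Some men are playing a sport.",
                    "Some men are playing a sport."))  # entailment (full overlap)
print(overlap_label("Cats purr loudly.",
                    "Dogs bark outside."))             # contradiction (no overlap)
```

Heuristics like this were among the earliest NLI baselines; the neural approaches in the leaderboards below replaced them precisely because overlap says nothing about meaning.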

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
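For hands-on work with SNLI, it helps to know its on-disk layout: the corpus is distributed as JSON Lines, one example per line, with fields including `sentence1` (premise), `sentence2` (hypothesis), and `gold_label`; examples with no annotator consensus carry the label `"-"` and are conventionally discarded. A minimal loading sketch (the two inline sample lines are made-up illustrations of that layout, not real corpus entries):

```python
import json

# Two made-up lines mimicking SNLI's JSONL layout. Real SNLI records
# carry extra fields (parses, annotator labels) that we ignore here.
sample_jsonl = """\
{"gold_label": "entailment", "sentence1": "A soccer game with multiple males playing.", "sentence2": "Some men are playing a sport."}
{"gold_label": "-", "sentence1": "An older and younger man smiling.", "sentence2": "Two men are smiling."}
"""

def load_snli(lines):
    """Yield (premise, hypothesis, label) triples, skipping unlabeled rows."""
    for line in lines:
        ex = json.loads(line)
        if ex["gold_label"] == "-":
            continue  # no annotator consensus; conventionally discarded
        yield ex["sentence1"], ex["sentence2"], ex["gold_label"]

examples = list(load_snli(sample_jsonl.splitlines()))
print(examples)  # one triple; the "-" row is skipped
```

The same reader works unchanged on the released `snli_1.0_train.jsonl`-style files by passing an open file handle instead of the inline sample.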


Papers

Showing 1851–1875 of 1961 papers

- SOFTCARDINALITY: Learning to Identify Directional Cross-Lingual Entailment from Cardinalities and SMT
- SOFTCARDINALITY: Hierarchical Text Overlap for Student Response Analysis
- SOFTCARDINALITY-CORE: Improving Text Overlap with Distributional Measures for Semantic Textual Similarity
- ALTN: Word Alignment Features for Cross-lingual Textual Entailment
- LIPN-CORE: Semantic Text Similarity using n-grams, WordNet, Syntactic Analysis, ESA and Information Retrieval based Features
- LIMSIILES: Basic English Substitution for Student Answer Assessment at SemEval 2013
- PolyUCOMP-CORE\_TYPED: Computing Semantic Textual Similarity using Overlapped Senses
- UKP-BIU: Similarity and Entailment Metrics for Student Response Analysis
- UMCC\_DLSI: Textual Similarity based on Lexical-Semantic features
- Umelb: Cross-lingual Textual Entailment with Word Alignment and String Similarity Features
- iKernels-Core: Tree Kernel Learning for Textual Similarity
- Using the text to evaluate short answers for reading comprehension exercises
- UTTime: Temporal Relation Classification using Deep Syntactic Features
- Semeval-2013 Task 8: Cross-lingual Textual Entailment for Content Synchronization
- SXUCFN-Core: STS Models Integrating FrameNet Parsing Information
- SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge
- EHU-ALM: Similarity-Feature Based Approach for Student Response Analysis
- \*SEM 2013 shared task: Semantic Textual Similarity
- ETS: Domain Adaptation and Stacking for Short Answer Scoring
- CFILT-CORE: Semantic Textual Similarity using Universal Networking Language
- Large-Scale Paraphrasing for Natural Language Understanding
- Multi-Metric Optimization Using Ensemble Tuning
- Global Inference for Bridging Anaphora Resolution
- The Life and Death of Discourse Entities: Identifying Singleton Mentions
- A Search Task Dataset for German Textual Entailment

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 4 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 9 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |