SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice with the SNLI task by following this d2l.ai chapter.
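To make the task interface concrete, the sketch below encodes two of the example pairs above as (premise, hypothesis, label) triples and labels them with a deliberately naive word-overlap heuristic, the kind of lexical baseline that predates the learned models on these leaderboards. The thresholds are illustrative assumptions, not values from any published system.

```python
# A deliberately naive lexical-overlap baseline for NLI. The 0.7 / 0.2
# thresholds are illustrative assumptions only; real systems on these
# leaderboards use learned models.

def overlap_baseline(premise: str, hypothesis: str) -> str:
    """Guess entailment/neutral/contradiction from shared vocabulary."""
    p = set(premise.lower().rstrip(".").split())
    h = set(hypothesis.lower().rstrip(".").split())
    coverage = len(p & h) / len(h)  # fraction of hypothesis words seen in premise
    if coverage > 0.7:
        return "entailment"     # hypothesis mostly restates the premise
    if coverage < 0.2:
        return "contradiction"  # almost no shared vocabulary (a crude proxy)
    return "neutral"

examples = [
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport.", "entailment"),
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping.", "contradiction"),
]
for premise, hypothesis, gold in examples:
    pred = overlap_baseline(premise, hypothesis)
    print(f"gold={gold}  predicted={pred}")
```

Under these thresholds both examples come out as `neutral`, which is exactly why lexical overlap is a weak baseline: both entailment ("males playing" in "a soccer game" entails "playing a sport") and contradiction ("inspects" vs. "is sleeping") hinge on meaning that word sets cannot capture.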


Papers

Showing 1851–1900 of 1961 papers

- CLaC-CORE: Exhaustive Feature Combination for Measuring Textual Similarity
- CFILT-CORE: Semantic Textual Similarity using Universal Networking Language
- UKP-BIU: Similarity and Entailment Metrics for Student Response Analysis
- UMCC\_DLSI: Textual Similarity based on Lexical-Semantic features
- Umelb: Cross-lingual Textual Entailment with Word Alignment and String Similarity Features
- ALTN: Word Alignment Features for Cross-lingual Textual Entailment
- Celi: EDITS and Generic Text Pair Classification
- ECNUCS: Recognizing Cross-lingual Textual Entailment Using Multiple Text Similarity and Text Difference Measures
- Global Inference for Bridging Anaphora Resolution
- SOFTCARDINALITY: Learning to Identify Directional Cross-Lingual Entailment from Cardinalities and SMT
- SOFTCARDINALITY: Hierarchical Text Overlap for Student Response Analysis
- SOFTCARDINALITY-CORE: Improving Text Overlap with Distributional Measures for Semantic Textual Similarity
- Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
- Using the text to evaluate short answers for reading comprehension exercises
- UTTime: Temporal Relation Classification using Deep Syntactic Features
- ETS: Domain Adaptation and Stacking for Short Answer Scoring
- Multi-Metric Optimization Using Ensemble Tuning
- iKernels-Core: Tree Kernel Learning for Textual Similarity
- SemEval-2013 Task 8: Cross-lingual Textual Entailment for Content Synchronization
- SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge
- BUAP: N-gram based Feature Evaluation for the Cross-Lingual Textual Entailment Task
- \*SEM 2013 shared task: Semantic Textual Similarity
- LIPN-CORE: Semantic Text Similarity using n-grams, WordNet, Syntactic Analysis, ESA and Information Retrieval based Features
- LIMSIILES: Basic English Substitution for Student Answer Assessment at SemEval 2013
- Logic Programs vs. First-Order Formulas in Textual Inference
- Toward Fine-grained Annotation of Modality in Text
- A Search Task Dataset for German Textual Entailment
- UCCA: A Semantics-based Grammatical Annotation Scheme
- Semantic Annotation of Textual Entailment
- Modeling Semantic Relations Expressed by Prepositions
- Squibs: What Is a Paraphrase?
- Good, Great, Excellent: Global Inference of Semantic Intensities
- Light Textual Inference for Semantic Parsing
- Hunting for Entailing Pairs in the Penn Discourse Treebank
- Where's the meeting that was cancelled? Existential implications of transitive verbs
- A Latent Discriminative Model for Compositional Entailment Relation Recognition using Natural Logic
- Thai Sentence Paraphrasing from the Lexical Resource
- UAlacant: Using Online Machine Translation for Cross-Lingual Textual Entailment
- DeepPurple: Estimating Sentence Semantic Similarity using N-gram Regression Models and Web Snippets
- Learning Verb Inference Rules from Linguistically-Motivated Evidence
- String Re-writing Kernel
- Using Discourse Information for Paraphrase Extraction
- Entailment-based Text Exploration with Application to the Health-care Domain
- Regular polysemy: A distributional model
- Improving Implicit Discourse Relation Recognition Through Feature Set Optimization
- sranjans: Semantic Textual Similarity using Maximal Weighted Bipartite Graph Matching
- SRIUBC: Simple Similarity Features for Semantic Textual Similarity
- UMCC\_DLSI: Multidimensional Lexical-Semantic Textual Similarity
- UKP: Computing Semantic Textual Similarity by Combining Multiple Content Similarity Measures
- Stanford: Probabilistic Edit Distance Metrics for STS

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 4 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 9 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |