SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets used for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
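
As a quick, hands-on illustration of the task, here is a minimal sketch of running a publicly available NLI model on the entailment example from the table above. The `roberta-large-mnli` checkpoint and the Hugging Face `transformers` library are assumptions for illustration; they are not tied to any entry in the leaderboards below.

```python
# Minimal sketch: classify a premise/hypothesis pair with an MNLI-finetuned
# checkpoint (roberta-large-mnli is an assumed, publicly available model).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumption; any NLI-finetuned model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models score the premise and hypothesis as a single paired input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label order comes from the model config; for roberta-large-mnli it is
# 0 = contradiction, 1 = neutral, 2 = entailment.
probs = logits.softmax(dim=-1).squeeze()
print(model.config.id2label[int(probs.argmax())])  # expected: ENTAILMENT
```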

Papers

Showing 151–175 of 1961 papers

| Title | Status | Hype |
| --- | --- | --- |
| Building Efficient Universal Classifiers with Natural Language Inference | Code | 1 |
| Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation | Code | 1 |
| Cross-Thought for Sentence Encoder Pre-training | Code | 1 |
| data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1 |
| CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias | Code | 1 |
| ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization | Code | 1 |
| Deep Learning Based Text Classification: A Comprehensive Review | Code | 1 |
| Defeasible Visual Entailment: Benchmark, Evaluator, and Reward-Driven Optimization | Code | 1 |
| A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | Code | 1 |
| Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference | Code | 1 |
| DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1 |
| Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning | Code | 1 |
| Addressing Inquiries about History: An Efficient and Practical Framework for Evaluating Open-domain Chatbot Consistency | Code | 1 |
| Can Explanations Be Useful for Calibrating Black Box Models? | Code | 1 |
| Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1 |
| Do Multilingual Language Models Think Better in English? | Code | 1 |
| Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT | Code | 1 |
| Empowering Language Understanding with Counterfactual Reasoning | Code | 1 |
| Enhancing adversarial robustness in Natural Language Inference using explanations | Code | 1 |
| Enhancing Clinical BERT Embedding using a Biomedical Knowledge Base | Code | 1 |
| BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance | Code | 1 |
| Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters | Code | 1 |
| A Decomposable Attention Model for Natural Language Inference | Code | 1 |
| e-SNLI-VE: Corrected Visual-Textual Entailment with Natural Language Explanations | Code | 1 |
| Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 4 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 9 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |
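
The figures above are claimed results whose Verified column is still empty. As a rough sketch of how a "% Test Accuracy" entry could be reproduced for comparison against a claim, the snippet below scores an MNLI-finetuned checkpoint on a small slice of the SNLI test split. The `roberta-large-mnli` checkpoint, the `datasets`/`transformers` libraries, and the 100-example slice are assumptions for illustration, not a verification protocol described on this page.

```python
# Sketch: estimate NLI test accuracy for comparison against a claimed figure.
# Model choice, dataset loader, and slice size are assumptions for illustration.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed checkpoint, not a leaderboard entry
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

# SNLI gold labels: 0 = entailment, 1 = neutral, 2 = contradiction (-1 = unlabeled).
# roberta-large-mnli outputs: 0 = contradiction, 1 = neutral, 2 = entailment.
MODEL_TO_SNLI = {0: 2, 1: 1, 2: 0}

test = load_dataset("snli", split="test[:100]")   # small slice for speed
test = test.filter(lambda ex: ex["label"] != -1)  # drop unlabeled pairs

correct = 0
for ex in test:
    inputs = tokenizer(ex["premise"], ex["hypothesis"],
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = int(model(**inputs).logits.argmax(dim=-1))
    correct += MODEL_TO_SNLI[pred] == ex["label"]

print(f"Accuracy on the slice: {100 * correct / len(test):.1f}%")
```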