
Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
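
The labels above can be reproduced with an off-the-shelf NLI classifier. Below is a minimal sketch, assuming the Hugging Face `transformers` library and the publicly released `roberta-large-mnli` checkpoint; any sequence-pair model fine-tuned on an NLI dataset can be swapped in.

```python
# Score premise/hypothesis pairs with a pretrained NLI model.
# Assumes: pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

pairs = [
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping."),  # expected: CONTRADICTION
    ("An older and younger man smiling.",
     "Two men are smiling and laughing at the cats playing on the floor."),  # expected: NEUTRAL
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport."),  # expected: ENTAILMENT
]

for premise, hypothesis in pairs:
    # NLI models consume the premise and hypothesis as a sentence pair.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[logits.argmax(dim=-1).item()]
    print(f"{label}: {hypothesis}")
```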

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice with the SNLI task by following the natural language inference chapter of d2l.ai.
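
SNLI is also straightforward to load directly if you want to experiment beyond that chapter. A minimal sketch, assuming the Hugging Face `datasets` library and its public `snli` dataset (integer labels 0 = entailment, 1 = neutral, 2 = contradiction; -1 marks examples without a gold label):

```python
# Load SNLI and inspect one labeled example.
# Assumes: pip install datasets
from datasets import load_dataset

snli = load_dataset("snli")
# Drop examples where annotators did not agree on a gold label (-1).
train = snli["train"].filter(lambda ex: ex["label"] != -1)

label_names = ["entailment", "neutral", "contradiction"]
example = train[0]
print(example["premise"])
print(example["hypothesis"])
print(label_names[example["label"]])
```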

Papers

Showing 301–350 of 1,961 papers

| Title | Status | Hype |
| --- | --- | --- |
| LanSER: Language-Model Supported Speech Emotion Recognition | — | 0 |
| A deep Natural Language Inference predictor without language-specific training data | — | 0 |
| Exploiting Language Models as a Source of Knowledge for Cognitive Agents | — | 0 |
| BatchPrompt: Accomplish more with less | Code | 0 |
| Link Prediction for Wikipedia Articles as a Natural Language Inference Task | Code | 0 |
| CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias | Code | 1 |
| Lightweight Adaptation of Neural Language Models via Subspace Embedding | Code | 0 |
| Leveraging Codebook Knowledge with NLI and ChatGPT for Zero-Shot Political Relation Classification | Code | 0 |
| Towards Controllable Natural Language Inference through Lexical Inference Types | — | 0 |
| Improving Domain-Specific Retrieval by NLI Fine-Tuning | — | 0 |
| Do Multilingual Language Models Think Better in English? | Code | 1 |
| An Overview Of Temporal Commonsense Reasoning and Acquisition | — | 0 |
| Improving Natural Language Inference in Arabic using Transformer Models and Linguistically Informed Pre-Training | Code | 0 |
| ARC-NLP at PAN 2023: Transition-Focused Natural Language Inference for Writing Style Detection | — | 0 |
| Selective Generation for Controllable Language Models | Code | 1 |
| Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding | Code | 0 |
| Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates | — | 0 |
| Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference | Code | 0 |
| NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic | — | 0 |
| LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias | Code | 0 |
| SpaceNLI: Evaluating the Consistency of Predicting Inferences in Space | Code | 0 |
| Evaluating Paraphrastic Robustness in Textual Entailment Models | — | 0 |
| Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension | Code | 1 |
| Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models | Code | 0 |
| No Strong Feelings One Way or Another: Re-operationalizing Neutrality in Natural Language Inference | — | 0 |
| Pushing the Limits of ChatGPT on NLP Tasks | — | 0 |
| Neural models for Factual Inconsistency Classification with Explanations | Code | 0 |
| FLamE: Few-shot Learning from Natural Language Explanations | — | 0 |
| NOWJ at COLIEE 2023 -- Multi-Task and Ensemble Approaches in Legal Information Processing | — | 0 |
| Analysis of the Fed's communication by using textual entailment model of Zero-Shot classification | — | 0 |
| PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts | Code | 0 |
| Can current NLI systems handle German word order? Investigating language model performance on a new German challenge set of minimal pairs | Code | 0 |
| LogiQA 2.0—An Improved Dataset for Logical Reasoning in Natural Language Understanding | Code | 0 |
| From Key Points to Key Point Hierarchy: Structured and Expressive Opinion Summarization | Code | 0 |
| CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models | Code | 0 |
| Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data | Code | 0 |
| A Study of Situational Reasoning for Traffic Understanding | Code | 1 |
| bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark | Code | 0 |
| Stubborn Lexical Bias in Data and Models | — | 0 |
| THiFLY Research at SemEval-2023 Task 7: A Multi-granularity System for CTR-based Textual Entailment and Evidence Retrieval | Code | 0 |
| AMR4NLI: Interpretable and robust NLI measures from semantic graphs | Code | 0 |
| Assessing Word Importance Using Models Trained for Semantic Tasks | Code | 0 |
| Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback | — | 0 |
| What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models? | — | 0 |
| A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets | Code | 1 |
| LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | Code | 1 |
| Targeted Data Generation: Finding and Fixing Model Weaknesses | — | 0 |
| KNSE: A Knowledge-aware Natural Language Inference Framework for Dialogue Symptom Status Recognition | — | 0 |
| AlignScore: Evaluating Factual Consistency with a Unified Alignment Function | Code | 4 |
| Characterizing and Measuring Linguistic Dataset Drift | Code | 0 |

Benchmark Results

SNLI

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | — | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | — | Unverified |
| 3 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | — | Unverified |
| 4 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | — | Unverified |
| 5 | RoBERTa-large + Self-Explaining | % Test Accuracy | 92.3 | — | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | — | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | — | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | — | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | — | Unverified |
| 10 | MT-DNN-SMART (100% of training data) | Dev Accuracy | 91.6 | — | Unverified |

RTE (SuperGLUE)

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | — | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | — | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | — | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | — | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | — | Unverified |
| 6 | MUPPET RoBERTa Large | Accuracy | 92.8 | — | Unverified |
| 7 | DeBERTaV3-large | Accuracy | 92.7 | — | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | — | Unverified |
| 9 | T5-XXL 11B | Accuracy | 92.5 | — | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | — | Unverified |

MultiNLI

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched Accuracy | 92.6 | — | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched Accuracy | 92.6 | — | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched Accuracy | 92 | — | Unverified |
| 4 | T5 | Matched Accuracy | 92 | — | Unverified |
| 5 | T5-11B | Mismatched Accuracy | 91.7 | — | Unverified |
| 6 | T5-3B | Matched Accuracy | 91.4 | — | Unverified |
| 7 | ALBERT | Matched Accuracy | 91.3 | — | Unverified |
| 8 | Adv-RoBERTa ensemble | Matched Accuracy | 91.1 | — | Unverified |
| 9 | DeBERTa (large) | Matched Accuracy | 91.1 | — | Unverified |
| 10 | SMART-RoBERTa | Dev Matched Accuracy | 91.1 | — | Unverified |