SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
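
Programmatically, each example is just a (premise, hypothesis, label) triple. A minimal sketch of that representation, using the three rows above (the class and variable names are illustrative, not from any library):

```python
# Each NLI example is a (premise, hypothesis, label) triple; the label is one
# of three fixed classes. Names here are illustrative only.
from dataclasses import dataclass

LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # must be one of LABELS

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label}")

examples = [
    NLIExample("A man inspects the uniform of a figure in some East Asian country.",
               "The man is sleeping.", "contradiction"),
    NLIExample("An older and younger man smiling.",
               "Two men are smiling and laughing at the cats playing on the floor.",
               "neutral"),
    NLIExample("A soccer game with multiple males playing.",
               "Some men are playing a sport.", "entailment"),
]
```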

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets used for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following the d2l.ai chapter on natural language inference.
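
To make the contrast with learned models concrete, the early statistical style can be caricatured by a word-overlap heuristic: score the fraction of hypothesis tokens that also appear in the premise and threshold it. This is a toy sketch, not any published system; the 0.7 threshold and the two-way (entailment vs. neutral) output are arbitrary choices for illustration:

```python
# Toy lexical-overlap baseline: predict "entailment" when most hypothesis
# tokens also occur in the premise, otherwise "neutral". The threshold is
# arbitrary and the heuristic cannot detect contradiction at all.
import re

def overlap_baseline(premise: str, hypothesis: str) -> str:
    p = set(re.findall(r"[a-z]+", premise.lower()))
    h = set(re.findall(r"[a-z]+", hypothesis.lower()))
    if not h:
        return "neutral"
    overlap = len(p & h) / len(h)
    return "entailment" if overlap >= 0.7 else "neutral"
```

On the table above, such a heuristic already fails: "The man is sleeping." shares half its tokens with its premise yet the true label is contradiction, which is one reason lexical overlap is better known today as a dataset artifact than as a viable approach.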

Further reading:

Papers

Showing 601–650 of 1961 papers

| Title | Status | Hype |
| --- | --- | --- |
| HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text | Code | 0 |
| From Text to Context: An Entailment Approach for News Stakeholder Classification | Code | 0 |
| Generating Persona Consistent Dialogues by Exploiting Natural Language Inference | Code | 0 |
| Idiom Paraphrases: Seventh Heaven vs Cloud Nine | Code | 0 |
| Embracing Ambiguity: Shifting the Training Target of NLI Models | Code | 0 |
| Hypothesis Engineering for Zero-Shot Hate Speech Detection | Code | 0 |
| Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models | Code | 0 |
| Drop Dropout on Single-Epoch Language Model Pretraining | Code | 0 |
| Forget NLI, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to Luxembourgish | Code | 0 |
| Flexible Natural Language-Based Image Data Downlink Prioritization for Nanosatellites | Code | 0 |
| Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets | Code | 0 |
| Improving Natural Language Inference in Arabic using Transformer Models and Linguistically Informed Pre-Training | Code | 0 |
| Frame- and Entity-Based Knowledge for Common-Sense Argumentative Reasoning | Code | 0 |
| End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions | Code | 0 |
| Fine-Grained Natural Language Inference Based Faithfulness Evaluation for Diverse Summarisation Tasks | Code | 0 |
| Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples | Code | 0 |
| BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples | Code | 0 |
| Do Prompt-Based Models Really Understand the Meaning of their Prompts? | Code | 0 |
| Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference | Code | 0 |
| FlauBERT: Unsupervised Language Model Pre-training for French | Code | 0 |
| From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment | Code | 0 |
| Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer | Code | 0 |
| Don't Fight Hallucinations, Use Them: Estimating Image Realism using NLI over Atomic Facts | Code | 0 |
| Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement | Code | 0 |
| Enhancing Generalization in Natural Language Inference by Syntax | Code | 0 |
| Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes | Code | 0 |
| Do Neural Language Representations Learn Physical Commonsense? | Code | 0 |
| Enhancing Sentence Embedding with Generalized Pooling | Code | 0 |
| Can current NLI systems handle German word order? Investigating language model performance on a new German challenge set of minimal pairs | Code | 0 |
| In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search | Code | 0 |
| An Imitation Learning Approach to Unsupervised Parsing | Code | 0 |
| Finding Fuzziness in Neural Network Models of Language Processing | Code | 0 |
| Figurative Language in Recognizing Textual Entailment | Code | 0 |
| Investigating the Robustness of Modelling Decisions for Few-Shot Cross-Topic Stance Detection: A Preregistered Study | Code | 0 |
| Fill the GAP: Exploiting BERT for Pronoun Resolution | Code | 0 |
| Do Language Models Understand Morality? Towards a Robust Detection of Moral Content | Code | 0 |
| Can Large Language Models Capture Dissenting Human Voices? | Code | 0 |
| Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0 |
| Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language | Code | 0 |
| Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup | Code | 0 |
| Fine-grained Entailment: Resources for Greek NLI and Precise Entailment | Code | 0 |
| FastTrees: Parallel Latent Tree-Induction for Faster Sequence Encoding | Code | 0 |
| Bilateral Multi-Perspective Matching for Natural Language Sentences | Code | 0 |
| Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings | Code | 0 |
| Does Chinese BERT Encode Word Structure? | Code | 0 |
| FarFetched: Entity-centric Reasoning and Claim Validation for the Greek Language based on Textually Represented Environments | Code | 0 |
| bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark | Code | 0 |
| Fake News Detection as Natural Language Inference | Code | 0 |
| Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization | Code | 0 |
| Doctor XAvIer: Explainable Diagnosis on Physician-Patient Dialogues and XAI Evaluation | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 4 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 9 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |