SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following the d2l.ai chapter on natural language inference.
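To make the three-way label scheme concrete, here is a toy word-overlap heuristic in the spirit of the early symbolic baselines. The function `naive_nli` and its negation list are inventions for this sketch, not any published system; modern fine-tuned Transformers learn far richer representations and do not work this way.

```python
# Toy symbolic NLI baseline: classify a premise/hypothesis pair using
# word overlap and a crude negation check. Purely illustrative.

NEGATIONS = {"not", "no", "never", "n't"}  # assumed minimal negation cues

def tokenize(text):
    """Lowercase and strip trailing punctuation from whitespace tokens."""
    return [w.strip(".,!?").lower() for w in text.split()]

def naive_nli(premise, hypothesis):
    p, h = set(tokenize(premise)), set(tokenize(hypothesis))
    # Contradiction when exactly one side negates the shared content.
    if (NEGATIONS & p) != (NEGATIONS & h):
        return "contradiction"
    # Entailment when every hypothesis word already appears in the premise.
    if h <= p:
        return "entailment"
    # Otherwise the hypothesis adds unsupported information.
    return "neutral"

print(naive_nli("A man is sleeping on a couch.", "A man is sleeping."))      # entailment
print(naive_nli("A man is sleeping on a couch.", "A man is not sleeping."))  # contradiction
print(naive_nli("A man is sleeping on a couch.", "Two dogs are barking."))   # neutral
```

A heuristic this shallow fails quickly (e.g. it cannot link "males playing" to "men are playing a sport" in the example above), which is precisely the gap that learned models on SNLI and MultiNLI aim to close.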

Further reading:

Papers


- Marked Attribute Bias in Natural Language Inference
- Marking: Visual Grading with Highlighting Errors and Annotating Missing Bits
- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding
- Disambiguation of Verbal Shifters
- DIBERT: Dependency Injected Bidirectional Encoder Representations from Transformers
- An Understanding-Oriented Robust Machine Reading Comprehension Model
- Switching Contexts: Transportability Measures for NLP
- Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling
- Adversarial Self-Attention for Language Understanding
- Unsupervised Improvement of Factual Knowledge in Language Models
- XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
- Developmental Negation Processing in Transformer Language Models
- Reordering Examples Helps during Priming-based Few-Shot Learning
- Representing Meaning with a Combination of Logical and Distributional Models
- Symmetric Regularization based BERT for Pair-wise Semantic Reasoning
- Unsupervised Learning of Explainable Parse Trees for Improved Generalisation
- A Novel Cartography-Based Curriculum Learning Method Applied on RoNLI: The First Romanian Natural Language Inference Corpus
- A Comparative Study of Pre-training and Self-training
- Leveraging Codebook Knowledge with NLI and ChatGPT for Zero-Shot Political Relation Classification
- MedNLI Is Not Immune: Natural Language Inference Artifacts in the Clinical Domain
- Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference
- Training Complex Models with Multi-Task Weak Supervision
- Detecting Statements in Text: A Domain-Agnostic Few-Shot Solution
- Rethinking the Event Coding Pipeline with Prompt Entailment
- Detecting Entailment in Code-Mixed Hindi-English Conversations
- TabPert: An Effective Platform for Tabular Perturbation
- Revisiting neural relation classification in clinical notes with external information
- DELTA: A DEep learning based Language Technology plAtform
- DeFactoNLP: Fact Verification using Entity Recognition, TFIDF Vector Comparison and Decomposable Attention
- MINIMAL: Mining Models for Data Free Universal Adversarial Triggers
- Deep Neural Representations for Multiword Expressions Detection
- Unsupervised Natural Language Inference Using PHL Triplet Generation
- Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets
- BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples
- Robust Cross-lingual Hypernymy Detection using Dependency Context
- Robust Document Retrieval and Individual Evidence Modeling for Fact Extraction and Verification
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
- Deep Natural Language Feature Learning for Interpretable Prediction
- Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning
- Deep Learning for Entity Matching: A Design Space Exploration
- Deep Generative Model for Joint Alignment and Word Representation
- Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation
- When data permutations are pathological: the case of neural natural language inference
- A Multiple Choices Reading Comprehension Corpus for Vietnamese Language Education
- Modelling Instance-Level Annotator Reliability for Natural Language Labelling Tasks
- Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming
- Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages
- RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding
- Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 4 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large + Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET RoBERTa Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3-large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 9 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 10 | SMART-RoBERTa | Dev Matched | 91.1 | | Unverified |