SOTAVerified

Natural Language Inference

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
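
An off-the-shelf NLI classifier can label pairs like the ones above directly. The sketch below is illustrative rather than this page's evaluation code: it assumes PyTorch and the Hugging Face transformers library, and uses the publicly available roberta-large-mnli checkpoint (any NLI-finetuned classifier would work the same way).

```python
# Minimal sketch: score the premise/hypothesis pairs from the example table
# with an MNLI-trained classifier. Assumes `torch` and `transformers` are installed.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # any NLI-finetuned checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

pairs = [
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping."),
    ("An older and younger man smiling.",
     "Two men are smiling and laughing at the cats playing on the floor."),
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport."),
]

for premise, hypothesis in pairs:
    # NLI models take the premise and hypothesis as a single sentence pair
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    print(f"{hypothesis!r} -> {label}")
```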

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets used for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
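
As a starting point for experiments, SNLI can be loaded with the Hugging Face datasets library. This is a minimal sketch assuming that library is installed; it simply prints the label set and one training example.

```python
# Load SNLI (one of the benchmarks mentioned above) and inspect an example.
from datasets import load_dataset

snli = load_dataset("snli")                   # splits: train / validation / test
print(snli["train"].features["label"].names)  # label id -> name mapping
example = snli["train"][0]
print(example["premise"], "|", example["hypothesis"], "|", example["label"])
```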


Papers

Showing 1-50 of 1,961 papers

| Title | Status | Hype |
| --- | --- | --- |
| RWKV: Reinventing RNNs for the Transformer Era | Code | 6 |
| LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5 |
| Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation | Code | 4 |
| TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models | Code | 4 |
| AlignScore: Evaluating Factual Consistency with a Unified Alignment Function | Code | 4 |
| N-Grammer: Augmenting Transformers with latent n-grams | Code | 4 |
| Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective | Code | 4 |
| ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3 |
| BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3 |
| Pre-Training with Whole Word Masking for Chinese BERT | Code | 3 |
| Finetuned Language Models Are Zero-Shot Learners | Code | 3 |
| ERNIE: Enhanced Representation through Knowledge Integration | Code | 3 |
| ST-MoE: Designing Stable and Transferable Sparse Expert Models | Code | 3 |
| Language Models are Few-Shot Learners | Code | 3 |
| LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions | Code | 2 |
| Order Constraints in Optimal Transport | Code | 2 |
| AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model | Code | 2 |
| ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models | Code | 2 |
| BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | Code | 2 |
| PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain | Code | 2 |
| Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach | Code | 2 |
| Scientific QA System with Verifiable Answers | Code | 2 |
| PaLM: Scaling Language Modeling with Pathways | Code | 2 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Code | 2 |
| ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers | Code | 2 |
| SimCSE: Simple Contrastive Learning of Sentence Embeddings | Code | 2 |
| I-BERT: Integer-only BERT Quantization | Code | 2 |
| Generative Pretrained Structured Transformers: Unsupervised Syntactic Language Models at Scale | Code | 2 |
| mGPT: Few-Shot Learners Go Multilingual | Code | 2 |
| Hungry Hungry Hippos: Towards Language Modeling with State Space Models | Code | 2 |
| Ask Me Anything: A simple strategy for prompting language models | Code | 2 |
| DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing | Code | 2 |
| DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2 |
| The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | Code | 2 |
| Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1 |
| An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models | Code | 1 |
| CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters | Code | 1 |
| Can NLI Provide Proper Indirect Supervision for Low-resource Biomedical Relation Extraction? | Code | 1 |
| Can NLI Models Verify QA Systems' Predictions? | Code | 1 |
| CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark | Code | 1 |
| Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1 |
| Can Explanations Be Useful for Calibrating Black Box Models? | Code | 1 |
| Are self-explanations from Large Language Models faithful? | Code | 1 |
| Calibration of Pre-trained Transformers | Code | 1 |
| Analyzing Multi-Task Learning for Abstractive Text Summarization | Code | 1 |
| A Comparative Study of Pretrained Language Models for Long Clinical Text | Code | 1 |
| CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias | Code | 1 |
| Building Efficient Universal Classifiers with Natural Language Inference | Code | 1 |
| Can NLI Models Verify QA Systems' Predictions? | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | % Test Accuracy | 94.7 | | Unverified |
| 2 | UnitedSynT5 (335M) | % Test Accuracy | 93.5 | | Unverified |
| 3 | Neural Tree Indexers for Text Understanding | % Test Accuracy | 93.1 | | Unverified |
| 4 | EFL (Entailment as Few-shot Learner) + RoBERTa-large | % Test Accuracy | 93.1 | | Unverified |
| 5 | RoBERTa-large+Self-Explaining | % Test Accuracy | 92.3 | | Unverified |
| 6 | RoBERTa-large + self-explaining layer | % Test Accuracy | 92.3 | | Unverified |
| 7 | CA-MTL | % Test Accuracy | 92.1 | | Unverified |
| 8 | SemBERT | % Test Accuracy | 91.9 | | Unverified |
| 9 | MT-DNN-SMARTLARGEv0 | % Test Accuracy | 91.7 | | Unverified |
| 10 | MT-DNN-SMART_100%ofTrainingData | Dev Accuracy | 91.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Vega v2 6B (KD-based prompt transfer) | Accuracy | 96 | | Unverified |
| 2 | PaLM 540B (fine-tuned) | Accuracy | 95.7 | | Unverified |
| 3 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 94.1 | | Unverified |
| 4 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 93.5 | | Unverified |
| 5 | DeBERTa-1.5B | Accuracy | 93.2 | | Unverified |
| 6 | MUPPET Roberta Large | Accuracy | 92.8 | | Unverified |
| 7 | DeBERTaV3large | Accuracy | 92.7 | | Unverified |
| 8 | T5-XXL 11B (fine-tuned) | Accuracy | 92.5 | | Unverified |
| 9 | T5-XXL 11B | Accuracy | 92.5 | | Unverified |
| 10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 92.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UnitedSynT5 (3B) | Matched | 92.6 | | Unverified |
| 2 | Turing NLR v5 XXL 5.4B (fine-tuned) | Matched | 92.6 | | Unverified |
| 3 | T5-XXL 11B (fine-tuned) | Matched | 92 | | Unverified |
| 4 | T5 | Matched | 92 | | Unverified |
| 5 | T5-11B | Mismatched | 91.7 | | Unverified |
| 6 | T5-3B | Matched | 91.4 | | Unverified |
| 7 | ALBERT | Matched | 91.3 | | Unverified |
| 8 | Adv-RoBERTa ensemble | Matched | 91.1 | | Unverified |
| 9 | DeBERTa (large) | Matched | 91.1 | | Unverified |
| 10 | SMARTRoBERTa | Dev Matched | 91.1 | | Unverified |