SOTAVerified

Explanation Generation

Papers

Showing 26–50 of 235 papers

| Title | Status | Hype |
|---|---|---|
| Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1 |
| HealthFC: Verifying Health Claims with Evidence-Based Medical Fact-Checking | Code | 1 |
| REX: Reasoning-aware and Grounded Explanation | Code | 1 |
| TE2Rules: Explaining Tree Ensembles using Rules | Code | 1 |
| Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations? | Code | 1 |
| A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1 |
| Retrieval augmentation of large language models for lay language generation | Code | 1 |
| Explainable Automated Fact-Checking for Public Health Claims | Code | 1 |
| Explainable Legal Case Matching via Inverse Optimal Transport-based Rationale Extraction | Code | 1 |
| Explaining Patterns in Data with Language Models via Interpretable Autoprompting | Code | 1 |
| CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1 |
| CodeExp: Explanatory Code Document Generation | Code | 1 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |
| Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? | Code | 1 |
| Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes | | 0 |
| Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images | | 0 |
| Are Training Resources Insufficient? Predict First Then Explain! | | 0 |
| Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models | | 0 |
| Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives | | 0 |
| E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning | | 0 |
| Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) | | 0 |
| Best of Both Worlds: A Hybrid Approach for Multi-Hop Explanation with Declarative Facts | | 0 |
| EGCR: Explanation Generation for Conversational Recommendation | | 0 |
| Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards | | 0 |
| Balancing Explicability and Explanation in Human-Aware Planning | | 0 |
Page 2 of 10

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |