SOTA Verified

Explanation Generation

Papers

Showing 51–75 of 235 papers

Title | Status | Hype
Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards | | 0
Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems | | 0
Effects of Explanation Specificity on Passengers in Autonomous Driving | | 0
Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) | | 0
EGCR: Explanation Generation for Conversational Recommendation | | 0
E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning | | 0
Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models | | 0
Balancing Explicability and Explanation in Human-Aware Planning | | 0
Enhancing Emotion Prediction in News Headlines: Insights from ChatGPT and Seq2Seq Models for Free-Text Generation | | 0
Enriching Visual with Verbal Explanations for Relational Concepts -- Combining LIME with Aleph | | 0
Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes | | 0
Automatic Claim Review for Climate Science via Explanation Generation | | 0
Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering | | 0
Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap | | 0
Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization | | 0
Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology | | 0
Diagnostics-Guided Explanation Generation | | 0
Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions | | 0
A Framework of Explanation Generation toward Reliable Autonomous Robots | | 0
Creating an Explainable Intrusion Detection System Using Self Organizing Maps | | 0
CounterNet: End-to-End Training of Prediction Aware Counterfactual Explanations | | 0
Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs | | 0
Counterfactual Explanations for Predictive Business Process Monitoring | | 0
A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers | | 0
Page 3 of 10

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | VLIS (Lynx) | Accuracy | 80 | | Unverified
2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified
3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified
4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified
5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified
6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified
7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PJ-X | B4 | 87.4 | | Unverified
2 | FM | B4 | 78.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified
2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified
2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified
2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified