SOTAVerified

Explanation Generation

Papers

Showing 51–100 of 235 papers

| Title | Status | Hype |
| --- | --- | --- |
| Graph-Guided Textual Explanation Generation Framework | | 0 |
| Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems | | 0 |
| Automatic Claim Review for Climate Science via Explanation Generation | | 0 |
| Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) | | 0 |
| EGCR: Explanation Generation for Conversational Recommendation | | 0 |
| E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning | | 0 |
| Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models | | 0 |
| Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap | | 0 |
| Enhancing Emotion Prediction in News Headlines: Insights from ChatGPT and Seq2Seq Models for Free-Text Generation | | 0 |
| Enriching Visual with Verbal Explanations for Relational Concepts -- Combining LIME with Aleph | | 0 |
| Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes | | 0 |
| Generating Skyline Explanations for Graph Neural Networks | | 0 |
| Harnessing LLMs Explanations to Boost Surrogate Models in Tabular Data Classification | | 0 |
| Hierarchical Aspect-guided Explanation Generation for Explainable Recommendation | | 0 |
| Implementing Evidential Reasoning in Expert Systems | | 0 |
| Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach | | 0 |
| A Framework of Explanation Generation toward Reliable Autonomous Robots | | 0 |
| Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence | | 0 |
| Creating an Explainable Intrusion Detection System Using Self Organizing Maps | | 0 |
| Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs | | 0 |
| Generating Commonsense Explanation by Extracting Bridge Concepts from Reasoning Paths | | 0 |
| Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing | | 0 |
| Counterfactual Explanations for Predictive Business Process Monitoring | | 0 |
| A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers | | 0 |
| A Framework for Rationale Extraction for Deep QA models | | 0 |
| A Deep Generative XAI Framework for Natural Language Inference Explanations Generation | | 0 |
| Explaining latent representations of generative models with large multimodal models | | 0 |
| Coherency Improved Explainable Recommendation via Large Language Model | | 0 |
| A Three-step Method for Multi-Hop Inference Explanation Regeneration | | 0 |
| Generally-Occurring Model Change for Robust Counterfactual Explanations | | 0 |
| Counterfactual Explanation Generation with s(CASP) | | 0 |
| Explaining with Attribute-based and Relational Near Misses: An Interpretable Approach to Distinguishing Facial Expressions of Pain and Disgust | | 0 |
| Explanation as a Defense of Recommendation | | 0 |
| Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization | | 0 |
| Explanation Generation for a Math Word Problem Solver | | 0 |
| Explanation Generation for Multi-Modal Multi-Agent Path Finding with Optimal Resource Utilization using Answer Set Programming | | 0 |
| CounterNet: End-to-End Training of Prediction Aware Counterfactual Explanations | | 0 |
| Explanations for CommonsenseQA: New Dataset and Models | | 0 |
| Explanations from Large Language Models Make Small Reasoners Better | | 0 |
| Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology | | 0 |
| Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions | | 0 |
| Diagnostics-Guided Explanation Generation | | 0 |
| Generate Natural Language Explanations for Recommendation | | 0 |
| FASTER-CE: Fast, Sparse, Transparent, and Robust Counterfactual Explanations | | 0 |
| Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering | | 0 |
| Fine-tuning BERT with Focus Words for Explanation Regeneration | | 0 |
| Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention | | 0 |
| Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards | | 0 |
| Explaining Competitive-Level Programming Solutions using LLMs | | 0 |
Page 2 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |