SOTAVerified

Explanation Generation

Papers

Showing 151–200 of 235 papers

| Title | Status | Hype |
|---|---|---|
| Motif-guided Time Series Counterfactual Explanations |  | 0 |
| Best of Both Worlds: A Hybrid Approach for Multi-Hop Explanation with Declarative Facts |  | 0 |
| Unlocking the 'Why' of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience |  | 0 |
| Multimodal Fake News Video Explanation: Dataset, Analysis and Evaluation |  | 0 |
| Multimodal RAG-driven Anomaly Detection and Classification in Laser Powder Bed Fusion using Large Language Models |  | 0 |
| Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation |  | 0 |
| Balancing Explicability and Explanation in Human-Aware Planning |  | 0 |
| Unsupervised Explanation Generation for Machine Reading Comprehension |  | 0 |
| Not all users are the same: Providing personalized explanations for sequential decision making problems |  | 0 |
| Automatic Claim Review for Climate Science via Explanation Generation |  | 0 |
| On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios |  | 0 |
| Online Explanation Generation for Human-Robot Teaming |  | 0 |
| On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis |  | 0 |
| Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming |  | 0 |
| Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios |  | 0 |
| Parameterized Explanations for Investor / Company Matching |  | 0 |
| Plan Explanations as Model Reconciliation -- An Empirical Study |  | 0 |
| Unsupervised Explanation Generation via Correct Instantiations |  | 0 |
| Progressive Explanation Generation for Human-robot Teaming |  | 0 |
| Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions |  | 0 |
| Quantifying Relational Exploration in Cultural Heritage Knowledge Graphs with LLMs: A Neuro-Symbolic Approach |  | 0 |
| Formal Semantic Geometry over Transformer-based Variational AutoEncoder |  | 0 |
| Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations |  | 0 |
| A Deep Generative XAI Framework for Natural Language Inference Explanations Generation |  | 0 |
| Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection? |  | 0 |
| RecExplainer: Aligning Large Language Models for Explaining Recommendation Models |  | 0 |
| RecMind: Large Language Model Powered Agent For Recommendation |  | 0 |
| Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation |  | 0 |
| Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs |  | 0 |
| A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers |  | 0 |
| Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations |  | 0 |
| A Three-step Method for Multi-Hop Inference Explanation Regeneration |  | 0 |
| Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap |  | 0 |
| Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset |  | 0 |
| RX-ADS: Interpretable Anomaly Detection using Adversarial ML for Electric Vehicle CAN data |  | 0 |
| Assertion Enhanced Few-Shot Learning: Instructive Technique for Large Language Models to Generate Educational Explanations |  | 0 |
| YNU-oxz at SemEval-2020 Task 4: Commonsense Validation Using BERT with Bidirectional GRU |  | 0 |
| Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms |  | 0 |
| SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains |  | 0 |
| Artwork Explanation in Large-scale Vision Language Models |  | 0 |
| What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components |  | 0 |
| SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection |  | 0 |
| Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning |  | 0 |
| What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks |  | 0 |
| When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations |  | 0 |
| Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation |  | 0 |
| Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems |  | 0 |
| Effects of Explanation Specificity on Passengers in Autonomous Driving |  | 0 |
| Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) |  | 0 |
| EGCR: Explanation Generation for Conversational Recommendation |  | 0 |
Page 4 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLIS (Lynx) | Accuracy | 80 |  | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 |  | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 |  | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 |  | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 |  | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 |  | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PJ-X | B4 | 87.4 |  | Unverified |
| 2 | FM | B4 | 78.8 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 85.7 |  | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 |  | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 89.5 |  | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 |  | Unverified |