SOTAVerified

Explanation Generation

Papers

Showing 151–200 of 235 papers

| Title | Status | Hype |
|---|---|---|
| Towards Generating Robust, Fair, and Emotion-Aware Explanations for Recommender Systems | | 0 |
| Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) | | 0 |
| Towards Reasoning-Aware Explainable VQA | | 0 |
| Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness | | 0 |
| Unlocking the `Why' of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience | | 0 |
| Unsupervised Explanation Generation for Machine Reading Comprehension | | 0 |
| Unsupervised Explanation Generation via Correct Instantiations | | 0 |
| Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms | | 0 |
| What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components | | 0 |
| What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks | | 0 |
| When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | | 0 |
| Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering | | 0 |
| `Why didn't you allocate this task to them?' Negotiation-Aware Explicable Task Allocation and Contrastive Explanation Generation | | 0 |
| Why should I not follow you? Reasons For and Reasons Against in Responsible Recommender Systems | | 0 |
| Why the Agent Made that Decision: Explaining Deep Reinforcement Learning with Vision Masks | | 0 |
| XAI Benchmark for Visual Explanation | | 0 |
| YNU-oxz at SemEval-2020 Task 4: Commonsense Validation Using BERT with Bidirectional GRU | | 0 |
| Graph-Guided Textual Explanation Generation Framework | | 0 |
| Harnessing LLMs Explanations to Boost Surrogate Models in Tabular Data Classification | | 0 |
| Hierarchical Aspect-guided Explanation Generation for Explainable Recommendation | | 0 |
| Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations | | 0 |
| Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations | | 0 |
| How to Do Human Evaluation: Best Practices for User Studies in NLP | | 0 |
| ICCV23 Visual-Dialog Emotion Explanation Challenge: SEU_309 Team Technical Report | | 0 |
| Implementing Evidential Reasoning in Expert Systems | | 0 |
| Improving Personalized Explanation Generation through Visualization | | 0 |
| In Search for a SAT-friendly Binarized Neural Network Architecture | | 0 |
| INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations | | 0 |
| Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation | | 0 |
| InterPrompt: Interpretable Prompting for Interrelated Interpersonal Risk Factors in Reddit Posts | | 0 |
| Is Explanation the Cure? Misinformation Mitigation in the Short Term and Long Term | | 0 |
| Global Human-guided Counterfactual Explanations for Molecular Properties via Reinforcement Learning | Code | 0 |
| GNN2R: Weakly-Supervised Rationale-Providing Question Answering over Knowledge Graphs | Code | 0 |
| Preference Distillation for Personalized Generative Recommendation | Code | 0 |
| XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach | Code | 0 |
| Generating High-Quality Explanations for Navigation in Partially-Revealed Environments | Code | 0 |
| Explainable Debugger for Black-box Machine Learning Models | Code | 0 |
| Advisable Learning for Self-Driving Vehicles by Internalizing Observation-to-Action Rules | Code | 0 |
| Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes | Code | 0 |
| Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations | Code | 0 |
| RealFactBench: A Benchmark for Evaluating Large Language Models in Real-World Fact-Checking | Code | 0 |
| Using Stratified Sampling to Improve LIME Image Explanations | Code | 0 |
| RecExplainer: Aligning Large Language Models for Explaining Recommendation Models | Code | 0 |
| IndMask: Inductive Explanation for Multivariate Time Series Black-Box Models | Code | 0 |
| A Framework for Learning Ante-hoc Explainable Models via Concepts | Code | 0 |
| Analogy Generation by Prompting Large Language Models: A Case Study of InstructGPT | Code | 0 |
| Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation | Code | 0 |
| Explainable Agency by Revealing Suboptimality in Child-Robot Learning Scenarios | Code | 0 |
| Evaluating Evidence Attribution in Generated Fact Checking Explanations | Code | 0 |
Page 4 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLIS (Lynx) | Accuracy | 80 | — | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | — | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | — | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | — | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | — | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | — | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PJ-X | B4 | 87.4 | — | Unverified |
| 2 | FM | B4 | 78.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 85.7 | — | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | — | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 89.5 | — | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | — | Unverified |