SOTAVerified

Explanation Generation

Papers

Showing 51–100 of 235 papers

Title | Status | Hype
Tox-BART: Leveraging Toxicity Attributes for Explanation Generation of Implicit Hate Speech | Code | 0
Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms | — | 0
On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios | — | 0
Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models | Code | 0
Generating Robust Counterfactual Witnesses for Graph Neural Networks | — | 0
Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations? | Code | 1
On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis | — | 0
Using Stratified Sampling to Improve LIME Image Explanations | Code | 0
RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict | Code | 0
Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap | — | 0
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks | — | 0
Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond | Code | 1
T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers | Code | 0
SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection | — | 0
Artwork Explanation in Large-scale Vision Language Models | — | 0
MACRec: a Multi-Agent Collaboration Framework for Recommendation | Code | 2
Unlocking the `Why' of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience | — | 0
Explainability for Machine Learning Models: From Data Adaptability to User Perception | — | 0
Explaining Veracity Predictions with Evidence Summarization: A Multi-Task Model Approach | Code | 0
LLMs for Coding and Robotics Education | — | 0
Sentiment-enhanced Graph-based Sarcasm Explanation in Dialogue | Code | 0
Explaining latent representations of generative models with large multimodal models | — | 0
Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems | — | 0
Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs | — | 0
Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment | Code | 0
Assertion Enhanced Few-Shot Learning: Instructive Technique for Large Language Models to Generate Educational Explanations | — | 0
GNN2R: Weakly-Supervised Rationale-Providing Question Answering over Knowledge Graphs | Code | 0
InterPrompt: Interpretable Prompting for Interrelated Interpersonal Risk Factors in Reddit Posts | — | 0
From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation | — | 0
RecExplainer: Aligning Large Language Models for Explaining Recommendation Models | Code | 0
XplainLLM: A Knowledge-Augmented Dataset for Reliable Grounded Explanations in LLMs | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
Is Explanation the Cure? Misinformation Mitigation in the Short Term and Long Term | — | 0
Counterfactual Explanation Generation with s(CASP) | — | 0
Explaining Interactions Between Text Spans | Code | 0
VLIS: Unimodal Language Models Guide Multimodal Language Generation | Code | 1
EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification | Code | 1
XAI Benchmark for Visual Explanation | — | 0
LLM4Vis: Explainable Visualization Recommendation using ChatGPT | Code | 1
Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence | — | 0
Towards LLM-guided Causal Explainability for Black-box Text Classifiers | — | 0
Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models | Code | 0
Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation | Code | 1
Reward Engineering for Generating Semi-structured Explanation | Code | 0
HealthFC: Verifying Health Claims with Evidence-Based Medical Fact-Checking | Code | 1
Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation | — | 0
A Survey on Interpretable Cross-modal Reasoning | Code | 1
Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations | — | 0
RecMind: Large Language Model Powered Agent For Recommendation | — | 0
Explaining with Attribute-based and Relational Near Misses: An Interpretable Approach to Distinguishing Facial Expressions of Pain and Disgust | — | 0
Page 2 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | VLIS (Lynx) | Accuracy | 80 | — | Unverified
2 | VLIS (LLaVA) | Accuracy | 73 | — | Unverified
3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | — | Unverified
4 | Predicted Caption -> GPT3 | Human (%) | 33 | — | Unverified
5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | — | Unverified
6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | — | Unverified
7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PJ-X | B4 | 87.4 | — | Unverified
2 | FM | B4 | 78.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 85.7 | — | Unverified
2 | OFA-X-MT | Human Explanation Rating | 80.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X-MT | Human Explanation Rating | 77.3 | — | Unverified
2 | OFA-X | Human Explanation Rating | 68.9 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 89.5 | — | Unverified
2 | OFA-X-MT | Human Explanation Rating | 87.8 | — | Unverified