SOTAVerified

Explanation Generation

Papers

Showing 1–50 of 235 papers

| Title | Status | Hype |
| --- | --- | --- |
| Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector | Code | 2 |
| MACRec: a Multi-Agent Collaboration Framework for Recommendation | Code | 2 |
| Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations? | Code | 1 |
| Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond | Code | 1 |
| XplainLLM: A Knowledge-Augmented Dataset for Reliable Grounded Explanations in LLMs | Code | 1 |
| A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1 |
| VLIS: Unimodal Language Models Guide Multimodal Language Generation | Code | 1 |
| EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification | Code | 1 |
| LLM4Vis: Explainable Visualization Recommendation using ChatGPT | Code | 1 |
| HealthFC: Verifying Health Claims with Evidence-Based Medical Fact-Checking | Code | 1 |
| Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation | Code | 1 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |
| LLMRec: Benchmarking Large Language Models on Recommendation Task | Code | 1 |
| Multi-source Semantic Graph-based Multimodal Sarcasm Explanation Generation | Code | 1 |
| Towards Explainable Conversational Recommender Systems | Code | 1 |
| Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models | Code | 1 |
| Explaining black box text modules in natural language with language models | Code | 1 |
| Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1 |
| Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1 |
| CodeExp: Explanatory Code Document Generation | Code | 1 |
| OCTET: Object-aware Counterfactual Explanations | Code | 1 |
| Retrieval augmentation of large language models for lay language generation | Code | 1 |
| Explaining Patterns in Data with Language Models via Interpretable Autoprompting | Code | 1 |
| Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations | Code | 1 |
| Explainable Legal Case Matching via Inverse Optimal Transport-based Rationale Extraction | Code | 1 |
| TE2Rules: Explaining Tree Ensembles using Rules | Code | 1 |
| End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models | Code | 1 |
| Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks | Code | 1 |
| CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations | Code | 1 |
| REX: Reasoning-aware and Grounded Explanation | Code | 1 |
| AR-BERT: Aspect-relation enhanced Aspect-level Sentiment Classification with Multi-modal Explanations | Code | 1 |
| Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model | Code | 1 |
| Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks | Code | 1 |
| Faithfully Explainable Recommendation via Neural Logic Reasoning | Code | 1 |
| Explain and Predict, and then Predict Again | Code | 1 |
| Towards Interpretable Natural Language Understanding with Explanations as Latent Variables | Code | 1 |
| Explainable Automated Fact-Checking for Public Health Claims | Code | 1 |
| Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? | Code | 1 |
| QED: A Framework and Dataset for Explanations in Question Answering | Code | 1 |
| Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations | | 0 |
| The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems | | 0 |
| RealFactBench: A Benchmark for Evaluating Large Language Models in Real-World Fact-Checking | Code | 0 |
| LiTEx: A Linguistic Taxonomy of Explanations for Understanding Within-Label Variation in Natural Language Inference | Code | 0 |
| Does Rationale Quality Matter? Enhancing Mental Disorder Detection via Selective Reasoning Distillation | Code | 0 |
| Multimodal RAG-driven Anomaly Detection and Classification in Laser Powder Bed Fusion using Large Language Models | | 0 |
| SNAPE-PM: Building and Utilizing Dynamic Partner Models for Adaptive Explanation Generation | Code | 0 |
| Towards Budget-Friendly Model-Agnostic Explanation Generation for Large Language Models | | 0 |
| Generating Skyline Explanations for Graph Neural Networks | | 0 |
| Harnessing LLMs Explanations to Boost Surrogate Models in Tabular Data Classification | | 0 |
| ChartQA-X: Generating Explanations for Charts | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |