SOTAVerified

Explanation Generation

Papers

Showing 1–25 of 235 papers

| Title | Status | Hype |
|---|---|---|
| Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector | Code | 2 |
| MACRec: a Multi-Agent Collaboration Framework for Recommendation | Code | 2 |
| Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations? | Code | 1 |
| Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond | Code | 1 |
| XplainLLM: A Knowledge-Augmented Dataset for Reliable Grounded Explanations in LLMs | Code | 1 |
| A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1 |
| EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification | Code | 1 |
| VLIS: Unimodal Language Models Guide Multimodal Language Generation | Code | 1 |
| LLM4Vis: Explainable Visualization Recommendation using ChatGPT | Code | 1 |
| HealthFC: Verifying Health Claims with Evidence-Based Medical Fact-Checking | Code | 1 |
| Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation | Code | 1 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |
| LLMRec: Benchmarking Large Language Models on Recommendation Task | Code | 1 |
| Multi-source Semantic Graph-based Multimodal Sarcasm Explanation Generation | Code | 1 |
| Towards Explainable Conversational Recommender Systems | Code | 1 |
| Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models | Code | 1 |
| Explaining black box text modules in natural language with language models | Code | 1 |
| Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1 |
| Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1 |
| CodeExp: Explanatory Code Document Generation | Code | 1 |
| OCTET: Object-aware Counterfactual Explanations | Code | 1 |
| Retrieval augmentation of large language models for lay language generation | Code | 1 |
| Explaining Patterns in Data with Language Models via Interpretable Autoprompting | Code | 1 |
| Sim2Word: Explaining Similarity with Representative Attribute Words via Counterfactual Explanations | Code | 1 |
| Explainable Legal Case Matching via Inverse Optimal Transport-based Rationale Extraction | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |