SOTAVerified

Explanation Generation

Papers

Showing 101–125 of 235 papers

Title | Status | Hype
LLMRec: Benchmarking Large Language Models on Recommendation Task | Code | 1
Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes | Code | 0
Sustainable transparency in Recommender Systems: Bayesian Ranking of Images for Explainability | Code | 0
Explaining Competitive-Level Programming Solutions using LLMs | — | 0
Effects of Explanation Specificity on Passengers in Autonomous Driving | — | 0
Multi-source Semantic Graph-based Multimodal Sarcasm Explanation Generation | Code | 1
Towards Explainable Conversational Recommender Systems | Code | 1
Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture | Code | 0
Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models | Code | 1
Explaining black box text modules in natural language with language models | Code | 1
Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering | — | 0
Explainable Recommender with Geometric Information Bottleneck | — | 0
Textual Explanations for Automated Commentary Driving | Code | 0
Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) | — | 0
Empowering CAM-Based Methods with Capability to Generate Fine-Grained and High-Faithfulness Explanations | Code | 0
Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images | — | 0
Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios | — | 0
Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering | — | 0
Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1
Explanation Regeneration via Information Bottleneck | Code | 0
Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations | Code | 1
CodeExp: Explanatory Code Document Generation | Code | 1
OCTET: Object-aware Counterfactual Explanations | Code | 1
Unsupervised Explanation Generation via Correct Instantiations | — | 0
Towards Reasoning-Aware Explainable VQA | — | 0
Page 5 of 10

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | VLIS (Lynx) | Accuracy | 80 | — | Unverified
2 | VLIS (LLaVA) | Accuracy | 73 | — | Unverified
3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | — | Unverified
4 | Predicted Caption -> GPT3 | Human (%) | 33 | — | Unverified
5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | — | Unverified
6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | — | Unverified
7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PJ-X | B4 | 87.4 | — | Unverified
2 | FM | B4 | 78.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 85.7 | — | Unverified
2 | OFA-X-MT | Human Explanation Rating | 80.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X-MT | Human Explanation Rating | 77.3 | — | Unverified
2 | OFA-X | Human Explanation Rating | 68.9 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | OFA-X | Human Explanation Rating | 89.5 | — | Unverified
2 | OFA-X-MT | Human Explanation Rating | 87.8 | — | Unverified