SOTAVerified

Explanation Generation

Papers

Showing 176–200 of 235 papers

| Title | Status | Hype |
| --- | --- | --- |
| RecExplainer: Aligning Large Language Models for Explaining Recommendation Models | | 0 |
| RecMind: Large Language Model Powered Agent For Recommendation | | 0 |
| Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation | | 0 |
| Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs | | 0 |
| A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers | | 0 |
| Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations | | 0 |
| A Three-step Method for Multi-Hop Inference Explanation Regeneration | | 0 |
| Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap | | 0 |
| Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset | | 0 |
| RX-ADS: Interpretable Anomaly Detection using Adversarial ML for Electric Vehicle CAN data | | 0 |
| Assertion Enhanced Few-Shot Learning: Instructive Technique for Large Language Models to Generate Educational Explanations | | 0 |
| YNU-oxz at SemEval-2020 Task 4: Commonsense Validation Using BERT with Bidirectional GRU | | 0 |
| Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms | | 0 |
| SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains | | 0 |
| Artwork Explanation in Large-scale Vision Language Models | | 0 |
| What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components | | 0 |
| SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection | | 0 |
| Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning | | 0 |
| What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks | | 0 |
| When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | | 0 |
| Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation | | 0 |
| Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems | | 0 |
| Effects of Explanation Specificity on Passengers in Autonomous Driving | | 0 |
| Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) | | 0 |
| EGCR: Explanation Generation for Conversational Recommendation | | 0 |
Page 8 of 10

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |