SOTAVerified

Explanation Generation

Papers

Showing 51–100 of 235 papers

| Title | Status | Hype |
| --- | --- | --- |
| TathyaNyaya and FactLegalLlama: Advancing Factual Judgment Prediction and Explanation in the Indian Legal Context | | 0 |
| Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset | | 0 |
| Explainable Synthetic Image Detection through Diffusion Timestep Ensembling | | 0 |
| EXCLAIM: An Explainable Cross-Modal Agentic System for Misinformation Detection with Hierarchical Retrieval | | 0 |
| MemeIntel: Explainable Detection of Propagandistic and Hateful Memes | | 0 |
| Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection? | | 0 |
| Coherency Improved Explainable Recommendation via Large Language Model | | 0 |
| Accelerating Anchors via Specialization and Feature Transformation | | 0 |
| Target-Augmented Shared Fusion-based Multimodal Sarcasm Explanation Generation | Code | 0 |
| Self-Rationalization in the Wild: A Large Scale Out-of-Distribution Evaluation on NLI-related tasks | Code | 0 |
| Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models | | 0 |
| Multimodal Fake News Video Explanation: Dataset, Analysis and Evaluation | | 0 |
| Quantifying Relational Exploration in Cultural Heritage Knowledge Graphs with LLMs: A Neuro-Symbolic Approach | | 0 |
| Graph-Guided Textual Explanation Generation Framework | | 0 |
| Explainable CTR Prediction via LLM Reasoning | | 0 |
| SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains | | 0 |
| Why the Agent Made that Decision: Explaining Deep Reinforcement Learning with Vision Masks | | 0 |
| When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations | | 0 |
| IndMask: Inductive Explanation for Multivariate Time Series Black-Box Models | Code | 0 |
| Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs | | 0 |
| ForgeryGPT: Multimodal Large Language Model For Explainable Image Forgery Detection and Localization | | 0 |
| Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation | Code | 0 |
| Multimodal Coherent Explanation Generation of Robot Failures | Code | 0 |
| FMDLlama: Financial Misinformation Detection based on Large Language Models | Code | 0 |
| Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations | Code | 0 |
| Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem | Code | 0 |
| Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization | | 0 |
| LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection | | 0 |
| A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers | | 0 |
| LLMExplainer: Large Language Model based Bayesian Inference for Graph Explanation Generation | | 0 |
| Aligning Explanations for Recommendation with Rating and Feature via Maximizing Mutual Information | Code | 0 |
| Generally-Occurring Model Change for Robust Counterfactual Explanations | | 0 |
| XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach | Code | 0 |
| Enhancing Emotion Prediction in News Headlines: Insights from ChatGPT and Seq2Seq Models for Free-Text Generation | | 0 |
| ICCV23 Visual-Dialog Emotion Explanation Challenge: SEU_309 Team Technical Report | | 0 |
| Preference Distillation for Personalized Generative Recommendation | Code | 0 |
| Global Human-guided Counterfactual Explanations for Molecular Properties via Reinforcement Learning | Code | 0 |
| Evaluating Evidence Attribution in Generated Fact Checking Explanations | Code | 0 |
| Tox-BART: Leveraging Toxicity Attributes for Explanation Generation of Implicit Hate Speech | Code | 0 |
| Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms | | 0 |
| On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios | | 0 |
| Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models | Code | 0 |
| Generating Robust Counterfactual Witnesses for Graph Neural Networks | | 0 |
| On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis | | 0 |
| Using Stratified Sampling to Improve LIME Image Explanations | Code | 0 |
| RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict | Code | 0 |
| Advancing Explainable Autonomous Vehicle Systems: A Comprehensive Review and Research Roadmap | | 0 |
| What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks | | 0 |
| T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers | Code | 0 |
| SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection | | 0 |
Page 2 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |