SOTAVerified

Explanation Generation

Papers

Showing 126–150 of 235 papers

| Title | Status | Hype |
| --- | --- | --- |
| On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis | | 0 |
| Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming | | 0 |
| Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios | | 0 |
| Parameterized Explanations for Investor / Company Matching | | 0 |
| Plan Explanations as Model Reconciliation -- An Empirical Study | | 0 |
| Progressive Explanation Generation for Human-robot Teaming | | 0 |
| Quantifying Relational Exploration in Cultural Heritage Knowledge Graphs with LLMs: A Neuro-Symbolic Approach | | 0 |
| Formal Semantic Geometry over Transformer-based Variational AutoEncoder | | 0 |
| Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations | | 0 |
| Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection? | | 0 |
| RecExplainer: Aligning Large Language Models for Explaining Recommendation Models | | 0 |
| RecMind: Large Language Model Powered Agent For Recommendation | | 0 |
| Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation | | 0 |
| Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset | | 0 |
| RX-ADS: Interpretable Anomaly Detection using Adversarial ML for Electric Vehicle CAN data | | 0 |
| SeQwen at the Financial Misinformation Detection Challenge Task: Sequential Learning for Claim Verification and Explanation Generation in Financial Domains | | 0 |
| SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection | | 0 |
| TathyaNyaya and FactLegalLlama: Advancing Factual Judgment Prediction and Explanation in the Indian Legal Context | | 0 |
| Team SVMrank: Leveraging Feature-rich Support Vector Machines for Ranking Explanations to Elementary Science Questions | | 0 |
| Textual Explanations and Critiques in Recommendation Systems | | 0 |
| The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems | | 0 |
| Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering | | 0 |
| Towards Budget-Friendly Model-Agnostic Explanation Generation for Large Language Models | | 0 |
| Towards Generating Robust, Fair, and Emotion-Aware Explanations for Recommender Systems | | 0 |
| Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) | | 0 |
Page 6 of 10

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VLIS (Lynx) | Accuracy | 80 | | Unverified |
| 2 | VLIS (LLaVA) | Accuracy | 73 | | Unverified |
| 3 | Ground-truth Caption -> GPT3 (Oracle) | Human (%) | 68 | | Unverified |
| 4 | Predicted Caption -> GPT3 | Human (%) | 33 | | Unverified |
| 5 | BLIP2 FlanT5-XXL (Fine-tuned) | Human (%) | 27 | | Unverified |
| 6 | BLIP2 FlanT5-XL (Fine-tuned) | Human (%) | 15 | | Unverified |
| 7 | BLIP2 FlanT5-XXL (Zero-shot) | Human (%) | 0 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PJ-X | B4 | 87.4 | | Unverified |
| 2 | FM | B4 | 78.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 85.7 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 80.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X-MT | Human Explanation Rating | 77.3 | | Unverified |
| 2 | OFA-X | Human Explanation Rating | 68.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | OFA-X | Human Explanation Rating | 89.5 | | Unverified |
| 2 | OFA-X-MT | Human Explanation Rating | 87.8 | | Unverified |