SOTAVerified

Explainable artificial intelligence

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information on which these actions are based. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
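One widely used family of XAI methods, referenced by several papers below (SHAP, "Rational Shapley Values", Shapley values for multiagent RL), is Shapley-value feature attribution: each feature's contribution to a prediction is its average marginal contribution over all subsets of the other features. Below is a minimal sketch of exact Shapley attribution on a hypothetical toy model; the model, feature names, and baseline are illustrative assumptions, not taken from any listed paper, and real SHAP implementations approximate this sum rather than enumerating all subsets.

```python
from itertools import combinations
from math import factorial

FEATURES = ["a", "b", "c"]
INSTANCE = {"a": 1, "b": 1, "c": 1}   # the prediction we want to explain
BASELINE = {"a": 0, "b": 0, "c": 0}   # values representing an "absent" feature

def model(x):
    # Hypothetical toy model with an interaction between b and c.
    return 3.0 * x["a"] + 2.0 * x["b"] * x["c"]

def value(subset):
    """Model output when features in `subset` take the instance's values
    and all other features are held at the baseline."""
    x = {f: (INSTANCE[f] if f in subset else BASELINE[f]) for f in FEATURES}
    return model(x)

def shapley(feature):
    """Exact Shapley value: weighted average marginal contribution of
    `feature` over every subset of the remaining features."""
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(subset) | {feature}) - value(set(subset)))
    return total

attributions = {f: shapley(f) for f in FEATURES}
print(attributions)
```

By construction the attributions sum to `model(INSTANCE) - model(BASELINE)` (the efficiency property), and the interaction term's credit is split evenly between `b` and `c`.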

Papers

Showing 751–800 of 971 papers

Title | Status | Hype
ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI | Code | 0
Automated Quality Control of Vacuum Insulated Glazing by Convolutional Neural Network Image Classification | — | 0
Explaining deep learning models for spoofing and deepfake detection with SHapley Additive exPlanations | Code | 1
Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values | Code | 1
Classification of Viral Pneumonia X-ray Images with the Aucmedi Framework | — | 0
Consistent Explanations by Contrastive Learning | Code | 1
Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions | — | 0
Focus! Rating XAI Methods and Finding Biases | Code | 1
Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence | — | 0
A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction | — | 0
Multihop: Leveraging Complex Models to Learn Accurate Simple Models | — | 0
When Stability meets Sufficiency: Informative Explanations that do not Overwhelm | — | 0
Adherence and Constancy in LIME-RS Explanations for Recommendation | — | 0
Knowledge-based XAI through CBR: There is more to explanations than models can tell | — | 0
Longitudinal Distance: Towards Accountable Instance Attribution | — | 0
Improvement of a Prediction Model for Heart Failure Survival through Explainable Artificial Intelligence | — | 0
Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey | — | 0
CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations | Code | 0
Challenges for cognitive decoding using deep learning methods | — | 0
Logic Explained Networks | Code | 1
Interpretable Summaries of Black Box Incident Triaging with Subgroup Discovery | Code | 0
Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems | Code | 1
On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report) | — | 0
Towards explainable artificial intelligence (XAI) for early anticipation of traffic accidents | Code | 0
MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence | — | 0
Resisting Out-of-Distribution Data Problem in Perturbation of XAI | — | 0
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis | — | 0
GLIME: A new graphical methodology for interpretable model-agnostic explanations | — | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | — | 0
Explainable Debugger for Black-box Machine Learning Models | Code | 0
Vehicle Fuel Optimization Under Real-World Driving Conditions: An Explainable Artificial Intelligence Approach | — | 0
Explainable AI: current status and future directions | — | 0
Levels of explainable artificial intelligence for human-aligned conversational explanations | — | 0
Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification | Code | 0
Does Dataset Complexity Matters for Model Explainers? | Code | 0
A Review of Explainable Artificial Intelligence in Manufacturing | — | 0
Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction | Code | 1
Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated | — | 0
Human-in-the-loop model explanation via verbatim boundary identification in generated neighborhoods | Code | 0
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy | Code | 1
How Well do Feature Visualizations Support Causal Understanding of CNN Activations? | Code | 0
A Turing Test for Transparency | — | 0
Rational Shapley Values | Code | 0
Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions | — | 0
Exploring deterministic frequency deviations with explainable AI | Code | 0
Counterfactual Explanations as Interventions in Latent Space | Code | 0
Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI | — | 0
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Entropy-based Logic Explanations of Neural Networks | Code | 1
Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features | — | 0
Page 16 of 20

No leaderboard results yet.