SOTAVerified

Explainable artificial intelligence

XAI refers to methods and techniques in the application of artificial intelligence (AI) that make the results of the solution understandable by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be one implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
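One common family of XAI methods explains a black-box prediction by perturbing each input feature and measuring how much the output changes (several papers listed below study exactly this perturbation setting). The sketch below is a minimal, hypothetical illustration of that idea: `black_box` is a stand-in scoring function, and the feature names (`income`, `credit_history`, `debt`) are invented for the example.

```python
import math

def black_box(features):
    # Hypothetical opaque model: a weighted sum squashed to (0, 1).
    # In practice this would be any trained classifier whose internals
    # the user cannot inspect.
    score = (0.8 * features["income"]
             + 0.5 * features["credit_history"]
             - 0.3 * features["debt"])
    return 1 / (1 + math.exp(-score))

def explain(model, features, baseline=0.0):
    """Attribute a prediction to each feature by replacing that feature
    with a baseline value and measuring how much the output changes."""
    original = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline  # knock out one feature at a time
        attributions[name] = original - model(perturbed)
    return attributions

applicant = {"income": 1.0, "credit_history": 0.5, "debt": 2.0}
print(explain(black_box, applicant))
```

A positive attribution means the feature pushed the score up; a negative one means it pulled the score down. This is only a sketch of the perturbation idea, not the specific algorithm of any paper listed here; methods such as LIME or Shapley-value approaches refine it with local surrogate models or averaging over feature coalitions.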

Papers

Showing 776–800 of 971 papers

Title | Status | Hype
Resisting Out-of-Distribution Data Problem in Perturbation of XAI | - | 0
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis | - | 0
GLIME: A new graphical methodology for interpretable model-agnostic explanations | - | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | - | 0
Explainable Debugger for Black-box Machine Learning Models | Code | 0
Vehicle Fuel Optimization Under Real-World Driving Conditions: An Explainable Artificial Intelligence Approach | - | 0
Explainable AI: current status and future directions | - | 0
Levels of explainable artificial intelligence for human-aligned conversational explanations | - | 0
Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification | Code | 0
Does Dataset Complexity Matters for Model Explainers? | Code | 0
A Review of Explainable Artificial Intelligence in Manufacturing | - | 0
Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction | Code | 1
Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated | - | 0
Human-in-the-loop model explanation via verbatim boundary identification in generated neighborhoods | Code | 0
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy | Code | 1
How Well do Feature Visualizations Support Causal Understanding of CNN Activations? | Code | 0
A Turing Test for Transparency | - | 0
Rational Shapley Values | Code | 0
Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions | - | 0
Exploring deterministic frequency deviations with explainable AI | Code | 0
Counterfactual Explanations as Interventions in Latent Space | Code | 0
Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI | - | 0
An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images | Code | 0
Entropy-based Logic Explanations of Neural Networks | Code | 1
Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features | - | 0
Page 32 of 39

No leaderboard results yet.