SOTAVerified

Explainable artificial intelligence

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be one implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement applies: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to reveal the information on which those actions are based. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
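To make the idea of explaining a model's output concrete, here is a minimal sketch of additive feature attribution, the principle behind methods such as SHAP that appear in the paper list below. It uses the special case of a linear model, for which Shapley values reduce exactly to w_i * (x_i - E[x_i]); all function names and numbers are illustrative, not taken from any listed paper.

```python
# Minimal sketch: additive feature attribution for a linear model.
# For f(x) = w.x + b, the Shapley value of feature i is exactly
# w_i * (x_i - mean_i), so the attributions sum to f(x) - f(mean).
# All names and values here are illustrative assumptions.

def linear_attributions(weights, x, background_mean):
    """Per-feature contribution relative to a background baseline."""
    return [w * (xi - mi) for w, xi, mi in zip(weights, x, background_mean)]

weights = [0.5, -2.0, 1.0]
bias = 0.1
x = [1.0, 0.0, 3.0]          # instance being explained
mean = [0.0, 1.0, 2.0]       # background (expected) feature values

attr = linear_attributions(weights, x, mean)
f_x = sum(w * xi for w, xi in zip(weights, x)) + bias
f_mean = sum(w * mi for w, mi in zip(weights, mean)) + bias

# Sanity check: the attributions explain the gap between f(x) and f(mean).
assert abs(sum(attr) - (f_x - f_mean)) < 1e-9
```

For non-linear models the same additivity property holds for Shapley values, but computing them requires sampling or model-specific approximations rather than this closed form.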

Papers

Showing 501-550 of 971 papers

Title | Status | Hype
A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME | - | 0
Calibrated Explanations: with Uncertainty Information and Counterfactuals | Code | 1
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces | - | 0
The Dark Side of Explanations: Poisoning Recommender Systems with Counterfactual Examples | - | 0
Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability | - | 0
Disagreement amongst counterfactual explanations: How transparency can be deceptive | - | 0
SketchXAI: A First Look at Explainability for Human Sketches | - | 0
An XAI framework for robust and transparent data-driven wind turbine power curve models | Code | 1
Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study | - | 0
Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task | - | 0
Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery | Code | 0
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? | - | 0
Characterizing the contribution of dependent features in XAI methods | Code | 0
A Brief Review of Explainable Artificial Intelligence in Healthcare | - | 0
Why is plausibility surprisingly problematic as an XAI criterion? | - | 0
Regulatory Changes in Power Systems Explored with Explainable Artificial Intelligence | - | 0
Model-agnostic explainable artificial intelligence for object detection in image data | Code | 0
Distrust in (X)AI -- Measurement Artifact or Distinct Construct? | - | 0
A New Deep Learning and XAI-Based Algorithm for Features Selection in Genomics | - | 0
Explainable Artificial Intelligence Architecture for Melanoma Diagnosis Using Indicator Localization and Self-Supervised Learning | - | 0
Shapley-based Explainable AI for Clustering Applications in Fault Diagnosis and Prognosis | Code | 0
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models | Code | 1
Rough Randomness and its Application | - | 0
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma | - | 0
cito: An R package for training neural networks using torch | Code | 0
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models | Code | 0
Contextual Trust | - | 0
Challenges facing the explainability of age prediction models: case study for two modalities | Code | 0
Explainable AI for Time Series via Virtual Inspection Layers | - | 0
Analysis and Evaluation of Explainable Artificial Intelligence on Suicide Risk Assessment | - | 0
Towards Trust of Explainable AI in Thyroid Nodule Diagnosis | Code | 1
A Survey on Explainable Artificial Intelligence for Cybersecurity | - | 0
Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations | Code | 1
Rule-based Out-Of-Distribution Detection | Code | 0
Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations | - | 0
iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams | - | 0
Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science | Code | 0
Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review | - | 0
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support | - | 0
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations | - | 0
Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality | - | 0
Explainable artificial intelligence toward usable and trustworthy computer-aided early diagnosis of multiple sclerosis from Optical Coherence Tomography | - | 0
A novel approach to generate datasets with XAI ground truth to evaluate image models | Code | 0
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals | Code | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | - | 0
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal | - | 0
Efficient XAI Techniques: A Taxonomic Survey | - | 0
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication | - | 0
LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI | - | 0
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | - | 0
Page 11 of 20
