SOTAVerified

Explainable artificial intelligence

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
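One common family of model-agnostic XAI techniques is permutation feature importance: shuffle one feature at a time and measure the resulting drop in model accuracy, which indicates how strongly the model relies on that feature. The sketch below is illustrative only and not tied to any specific paper listed here; the choice of dataset, model, and scoring metric are assumptions for the example.

```python
# Minimal sketch of permutation feature importance (a model-agnostic
# XAI technique). Dataset and model are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)  # accuracy with intact features

importances = {}
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffle column j to break its link with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    # Importance = how much accuracy drops when feature j is scrambled.
    importances[j] = baseline - model.score(X_perm, y)

for j, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"feature {j}: importance {imp:.3f}")
```

On the iris data, the petal measurements typically dominate; scikit-learn also offers `sklearn.inspection.permutation_importance` for a ready-made version of the same idea.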

Papers

Showing 551–600 of 971 papers

| Title | Status | Hype |
|-------|--------|------|
| Biomarker Investigation using Multiple Brain Measures from MRI through XAI in Alzheimer's Disease Classification | | 0 |
| A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME | | 0 |
| Metric Tools for Sensitivity Analysis with Applications to Neural Networks | | 0 |
| Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces | | 0 |
| The Dark Side of Explanations: Poisoning Recommender Systems with Counterfactual Examples | | 0 |
| Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability | | 0 |
| Disagreement amongst counterfactual explanations: How transparency can be deceptive | | 0 |
| SketchXAI: A First Look at Explainability for Human Sketches | | 0 |
| Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study | | 0 |
| Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery | Code | 0 |
| Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task | | 0 |
| A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? | | 0 |
| Characterizing the contribution of dependent features in XAI methods | Code | 0 |
| A Brief Review of Explainable Artificial Intelligence in Healthcare | | 0 |
| Model-agnostic explainable artificial intelligence for object detection in image data | Code | 0 |
| Why is plausibility surprisingly problematic as an XAI criterion? | | 0 |
| Regulatory Changes in Power Systems Explored with Explainable Artificial Intelligence | | 0 |
| A New Deep Learning and XAI-Based Algorithm for Features Selection in Genomics | | 0 |
| Distrust in (X)AI -- Measurement Artifact or Distinct Construct? | | 0 |
| Explainable Artificial Intelligence Architecture for Melanoma Diagnosis Using Indicator Localization and Self-Supervised Learning | | 0 |
| Shapley-based Explainable AI for Clustering Applications in Fault Diagnosis and Prognosis | Code | 0 |
| Rough Randomness and its Application | | 0 |
| Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma | | 0 |
| cito: An R package for training neural networks using torch | Code | 0 |
| Contextual Trust | | 0 |
| EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models | Code | 0 |
| Challenges facing the explainability of age prediction models: case study for two modalities | Code | 0 |
| Explainable AI for Time Series via Virtual Inspection Layers | | 0 |
| Analysis and Evaluation of Explainable Artificial Intelligence on Suicide Risk Assessment | | 0 |
| A Survey on Explainable Artificial Intelligence for Cybersecurity | | 0 |
| Rule-based Out-Of-Distribution Detection | Code | 0 |
| iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams | | 0 |
| Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations | | 0 |
| Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science | Code | 0 |
| Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review | | 0 |
| Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support | | 0 |
| Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations | | 0 |
| Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality | | 0 |
| Explainable artificial intelligence toward usable and trustworthy computer-aided early diagnosis of multiple sclerosis from Optical Coherence Tomography | | 0 |
| A novel approach to generate datasets with XAI ground truth to evaluate image models | Code | 0 |
| Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals | Code | 0 |
| Explainable Label-flipping Attacks on Human Emotion Assessment System | | 0 |
| Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication | | 0 |
| Efficient XAI Techniques: A Taxonomic Survey | | 0 |
| Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal | | 0 |
| LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI | | 0 |
| AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | | 0 |
| Example-Based Explainable AI and its Application for Remote Sensing Image Classification | | 0 |
| VR-LENS: Super Learning-based Cybersickness Detection and Explainable AI-Guided Deployment in Virtual Reality | | 0 |
| Approximating the Shapley Value without Marginal Contributions | | 0 |
Page 12 of 20
