SOTAVerified

Explainable artificial intelligence

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even the system's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even where no legal right or regulatory requirement exists; for example, it can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
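The idea of attributing a model's decision to its inputs can be sketched with a fully transparent model. The snippet below is a minimal, illustrative example only: a linear model whose per-feature contributions (weight times input value) form an exact additive explanation of its score. The feature names, weights, and input values are hypothetical assumptions, not taken from any paper listed here.

```python
# Minimal sketch of feature attribution for a transparent (linear) model.
# For a linear model, contribution_i = w_i * x_i is an exact decomposition
# of the score, so the "explanation" is faithful by construction.

def explain_linear(weights, bias, x, feature_names):
    """Return the model score and per-feature contributions w_i * x_i."""
    contribs = {name: w * v for name, w, v in zip(feature_names, weights, x)}
    score = bias + sum(contribs.values())
    return score, contribs

# Hypothetical learned weights and one input example (illustrative values).
weights = [0.8, -0.5, 0.3]
bias = 0.1
x = [2.0, 1.0, 4.0]
names = ["age", "dose", "duration"]

score, contribs = explain_linear(weights, bias, x, names)

# Rank features by how strongly they pushed the score up or down.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score = {score:.2f}")
```

Black-box models need approximate techniques (attribution maps, surrogate models, counterfactuals, as surveyed in several papers below), but the goal is the same: a human-readable account of which inputs drove the decision.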

Papers

Showing 526–550 of 971 papers

Title | Status | Hype
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models | Code | 0
Contextual Trust | - | 0
Challenges facing the explainability of age prediction models: case study for two modalities | Code | 0
Explainable AI for Time Series via Virtual Inspection Layers | - | 0
Analysis and Evaluation of Explainable Artificial Intelligence on Suicide Risk Assessment | - | 0
Towards Trust of Explainable AI in Thyroid Nodule Diagnosis | Code | 1
A Survey on Explainable Artificial Intelligence for Cybersecurity | - | 0
Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations | Code | 1
Rule-based Out-Of-Distribution Detection | Code | 0
Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations | - | 0
iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams | - | 0
Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science | Code | 0
Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review | - | 0
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support | - | 0
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations | - | 0
Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality | - | 0
Explainable artificial intelligence toward usable and trustworthy computer-aided early diagnosis of multiple sclerosis from Optical Coherence Tomography | - | 0
A novel approach to generate datasets with XAI ground truth to evaluate image models | Code | 0
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals | Code | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | - | 0
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal | - | 0
Efficient XAI Techniques: A Taxonomic Survey | - | 0
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication | - | 0
LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI | - | 0
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | - | 0
Page 22 of 39

No leaderboard results yet.