
Explainable artificial intelligence

Explainable artificial intelligence (XAI) refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.
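As a concrete illustration of the idea above, a minimal post-hoc explanation sketch follows, assuming scikit-learn is installed. Permutation importance treats a trained model as a black box and measures how much shuffling each input feature degrades its score, giving a human-readable ranking of which features the model actually relies on. The dataset and model choices here are illustrative, not drawn from any of the listed papers.

```python
# Minimal XAI sketch: model-agnostic permutation importance (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature several times and record the mean drop in accuracy;
# a larger drop means the model depends more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# Rank features from most to least influential.
ranking = result.importances_mean.argsort()[::-1]
top_features = ranking[:3]
```

Because the method only queries the model through predictions, the same code works unchanged for any fitted estimator, which is what makes it a popular baseline explanation technique.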

Papers

Showing 541–550 of 971 papers

Title | Status | Hype
Using Explainable AI to Cross-Validate Socio-economic Disparities Among Covid-19 Patient Mortality | - | 0
Explainable artificial intelligence toward usable and trustworthy computer-aided early diagnosis of multiple sclerosis from Optical Coherence Tomography | - | 0
A novel approach to generate datasets with XAI ground truth to evaluate image models | Code | 0
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals | Code | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | - | 0
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal | - | 0
Efficient XAI Techniques: A Taxonomic Survey | - | 0
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication | - | 0
LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI | - | 0
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | - | 0
Page 55 of 98

No leaderboard results yet.