SOTAVerified

Explainable artificial intelligence

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
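One common family of XAI techniques attributes a black-box model's behavior to its input features. The sketch below, a minimal pure-Python implementation of permutation feature importance, illustrates the idea: shuffle one feature at a time and measure how much the model's error grows. The toy `model` and all function names here are illustrative, not from any particular library.

```python
import random

# A toy "black box": in practice this could be any opaque predictor.
# Its internal weights are assumed hidden from the person seeking an explanation.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average rise in MSE after shuffling column j."""
    rng = random.Random(seed)
    base = mse(y, [model(x) for x in X])
    importances = []
    for j in range(len(X[0])):
        scores = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the link between feature j and the target
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            scores.append(mse(y, [model(x) for x in X_perm]) - base)
        importances.append(sum(scores) / n_repeats)
    return importances

# Synthetic data; labels come from the model itself, so the baseline error is 0.
X = [[float(i), float(i % 5), float(i % 3)] for i in range(40)]
y = [model(x) for x in X]
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 should dominate; feature 2 (zero weight) should score ~0
```

The resulting scores are one simple human-readable explanation of the kind the paragraph above describes: they confirm which inputs the model actually relies on, regardless of how opaque its internals are.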

Papers

Showing 511–520 of 971 papers

Title | Status | Hype
----- | ------ | ----
Principles of Explanation in Human-AI Systems | | 0
Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing | | 0
Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review | | 0
Probabilities of Causation for Continuous and Vector Variables | | 0
Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019) | | 0
QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network | | 0
Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science | | 0
Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated | | 0
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces | | 0
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression | | 0
Page 52 of 98
