SOTAVerified

Explainable artificial intelligence

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
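To make the "black box" contrast concrete, the sketch below applies one common model-agnostic XAI technique, permutation importance, to a stand-in opaque classifier. The `black_box` function, the toy data, and all names here are illustrative assumptions, not from any paper listed on this page; the idea is only that an explanation method can rank feature influence without looking inside the model.

```python
import random

# Hypothetical "black box" scorer: predicts 1 when a weighted sum of
# features crosses a threshold. Stands in for any opaque model.
def black_box(x):
    return 1 if 0.9 * x[0] + 0.1 * x[1] > 0.5 else 0

# Toy dataset: rows of (feature_0, feature_1); labels come from the
# model itself, so baseline accuracy is 1.0.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [black_box(x) for x in data]

def accuracy(model, rows, ys):
    return sum(model(x) == y for x, y in zip(rows, ys)) / len(ys)

def permutation_importance(model, rows, ys, feature):
    """Drop in accuracy when one feature's values are shuffled across rows."""
    base = accuracy(model, rows, ys)
    shuffled = [row[feature] for row in rows]
    random.shuffle(shuffled)
    perturbed = [
        tuple(shuffled[i] if j == feature else v for j, v in enumerate(row))
        for i, row in enumerate(rows)
    ]
    return base - accuracy(model, perturbed, ys)

for f in range(2):
    print(f"feature {f}: importance {permutation_importance(black_box, data, labels, f):.3f}")
```

Because the hidden weight on feature 0 is much larger, shuffling it degrades accuracy far more than shuffling feature 1, which is exactly the kind of post-hoc insight XAI aims to surface.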

Papers

Showing 501–525 of 971 papers

Title | Status | Hype
Optimizing Binary Decision Diagrams with MaxSAT for classification |  | 0
Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF): A Data-Morphology-based Counterfactual Generation Method for Trustworthy Artificial Intelligence |  | 0
Peeking Inside the Schufa Blackbox: Explaining the German Housing Scoring System |  | 0
Polynomial Threshold Functions of Bounded Tree-Width: Some Explainability and Complexity Aspects |  | 0
Popularity, face and voice: Predicting and interpreting livestreamers' retail performance using machine learning techniques |  | 0
Post-hoc explanation of black-box classifiers using confident itemsets |  | 0
Precision of Individual Shapley Value Explanations |  | 0
Predicting and explaining nonlinear material response using deep Physically Guided Neural Networks with Internal Variables |  | 0
Explainable artificial intelligence model for identifying Market Value in Professional Soccer Players |  | 0
Prediction of Diblock Copolymer Morphology via Machine Learning |  | 0
Principles of Explanation in Human-AI Systems |  | 0
Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing |  | 0
Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review |  | 0
Probabilities of Causation for Continuous and Vector Variables |  | 0
Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019) |  | 0
QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network |  | 0
Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science |  | 0
Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated |  | 0
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces |  | 0
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression |  | 0
Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence |  | 0
Refutation of Shapley Values for XAI -- Additional Evidence |  | 0
Regulatory Changes in Power Systems Explored with Explainable Artificial Intelligence |  | 0
Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task |  | 0
Reinforcing Clinical Decision Support through Multi-Agent Systems and Ethical AI Governance |  | 0
Page 21 of 39
