SOTAVerified

Explainable artificial intelligence

Explainable artificial intelligence (XAI) refers to methods and techniques in the application of artificial intelligence (AI) that make the results of a system understandable to humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI may be one implementation of the social right to explanation, but it is relevant even where no legal or regulatory requirement exists: for example, it can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.
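To make the idea concrete, here is a minimal sketch of one simple XAI technique, occlusion-based feature attribution: each input feature is replaced by a baseline value and the resulting change in the model's output is reported as that feature's contribution. The model and feature names below are hypothetical stand-ins, not from any paper listed on this page; in practice the same wrapper would be applied to a trained classifier.

```python
def black_box_model(features):
    """Hypothetical loan-scoring model treated as a black box.
    Features: income, credit_years, debt_ratio (all pre-scaled)."""
    income, credit_years, debt_ratio = features
    return 0.5 * income + 0.3 * credit_years - 0.8 * debt_ratio

def occlusion_attributions(model, instance, baseline):
    """Attribute a prediction by occluding one feature at a time:
    replace it with its baseline value and record how much the
    output changes relative to the original prediction."""
    original = model(instance)
    attributions = []
    for i in range(len(instance)):
        occluded = list(instance)
        occluded[i] = baseline[i]  # remove this feature's information
        attributions.append(original - model(occluded))
    return attributions

instance = [0.9, 0.6, 0.2]
baseline = [0.0, 0.0, 0.0]
attrs = occlusion_attributions(black_box_model, instance, baseline)
print(attrs)
```

For a linear model with a zero baseline, each attribution recovers the corresponding linear term exactly, which is a useful sanity check; for genuinely non-linear black boxes the attributions are only a local approximation, and the choice of baseline matters.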

Papers

Showing 51-100 of 971 papers

Title | Status | Hype
XAI for transparent wind turbine power curve models | Code | 1
Towards Trust of Explainable AI in Thyroid Nodule Diagnosis | Code | 1
Unlocking the black box of CNNs: Visualising the decision-making process with PRISM | Code | 1
Using Slisemap to interpret physical data | Code | 1
Calibrated Explanations for Regression | Code | 1
Automatic Extraction of Linguistic Description from Fuzzy Rule Base | Code | 1
A Fresh Look at Sanity Checks for Saliency Maps | Code | 1
XAutoML: A Visual Analytics Tool for Understanding and Validating Automated Machine Learning | Code | 1
TIMING: Temporality-Aware Integrated Gradients for Time Series Explanation | Code | 1
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks | Code | 1
A Wearable Device Dataset for Mental Health Assessment Using Laser Doppler Flowmetry and Fluorescence Spectroscopy Sensors | Code | 1
A Song of (Dis)agreement: Evaluating the Evaluation of Explainable Artificial Intelligence in Natural Language Processing | Code | 1
Calibrated Explanations: with Uncertainty Information and Counterfactuals | Code | 1
Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes | Code | 1
Axiomatic Attribution for Deep Networks | Code | 1
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence | Code | 1
Causality-Aware Local Interpretable Model-Agnostic Explanations | Code | 1
Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values | Code | 1
Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty | Code | 1
Consistent Explanations by Contrastive Learning | Code | 1
Driving Behavior Explanation with Multi-level Fusion | Code | 1
Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models | Code | 1
Entropy-based Logic Explanations of Neural Networks | Code | 1
Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism | Code | 1
Extracting human interpretable structure-property relationships in chemistry using XAI and large language models | Code | 1
Explainable Deep Learning Methods in Medical Image Classification: A Survey | Code | 1
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations | Code | 1
Explaining Black-Box Models through Counterfactuals | Code | 1
Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations | Code | 1
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation | Code | 1
TE2Rules: Explaining Tree Ensembles using Rules | Code | 1
Gaussian Process Regression With Interpretable Sample-Wise Feature Weights | Code | 1
Proposed Guidelines for the Responsible Use of Explainable Machine Learning | Code | 1
In-Context Explainers: Harnessing LLMs for Explaining Black Box Models | Code | 1
AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark | Code | 1
Landscape of R packages for eXplainable Artificial Intelligence | Code | 1
Logic Explained Networks | Code | 1
Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2 | Code | 1
MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment | Code | 1
Model-contrastive explanations through symbolic reasoning | Code | 1
An Explainable AI Framework for Artificial Intelligence of Medical Things | | 0
An Experimentation Platform for Explainable Coalition Situational Understanding | | 0
Adversarial Attack for Explanation Robustness of Rationalization Models | | 0
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) | | 0
Abstraction, Validation, and Generalization for Explainable Artificial Intelligence | | 0
A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications | | 0
A New Deep Learning and XAI-Based Algorithm for Features Selection in Genomics | | 0
Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions | | 0
An Artificial Intelligence-based model for cell killing prediction: development, validation and explainability analysis of the ANAKIN model | | 0
Adherence and Constancy in LIME-RS Explanations for Recommendation | | 0
Page 2 of 20
