SOTAVerified

Explainable Artificial Intelligence (XAI)

Papers

Showing 51–100 of 1041 papers

Title | Status | Hype
TIMING: Temporality-Aware Integrated Gradients for Time Series Explanation | Code | 1
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods | Code | 1
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI | Code | 1
Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations | Code | 1
Trainable Noise Model as an XAI evaluation method: application on Sobol for remote sensing image segmentation | Code | 1
An Ensemble Framework for Explainable Geospatial Machine Learning Models | Code | 1
A Fresh Look at Sanity Checks for Saliency Maps | Code | 1
Unlocking the black box of CNNs: Visualising the decision-making process with PRISM | Code | 1
Deletion and Insertion Tests in Regression Models | Code | 1
Why Should I Choose You? AutoXAI: A Framework for Selecting and Tuning eXplainable AI Solutions | Code | 1
XAI for Transformers: Better Explanations through Conservative Propagation | Code | 1
XAutoML: A Visual Analytics Tool for Understanding and Validating Automated Machine Learning | Code | 1
Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism | Code | 1
From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent | Code | 1
Collision Probability Distribution Estimation via Temporal Difference Learning | Code | 1
ContrXT: Generating Contrastive Explanations from any Text Classifier | Code | 1
Causality-Aware Local Interpretable Model-Agnostic Explanations | Code | 1
Calibrated Explanations: with Uncertainty Information and Counterfactuals | Code | 1
Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values | Code | 1
Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty | Code | 1
Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes | Code | 1
ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing | Code | 1
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark | Code | 1
Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI | Code | 1
Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation | Code | 1
Explainable AI Components for Narrative Map Extraction | Code | 1
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence | Code | 1
Explainable Deep Learning Methods in Medical Image Classification: A Survey | Code | 1
BALANCE: Bayesian Linear Attribution for Root Cause Localization | Code | 1
Extracting human interpretable structure-property relationships in chemistry using XAI and large language models | Code | 1
In-Context Explainers: Harnessing LLMs for Explaining Black Box Models | Code | 1
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation | Code | 1
BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations | Code | 1
A Wearable Device Dataset for Mental Health Assessment Using Laser Doppler Flowmetry and Fluorescence Spectroscopy Sensors | Code | 1
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks | Code | 1
Insights Into the Inner Workings of Transformer Models for Protein Function Prediction | Code | 1
Automatic Extraction of Linguistic Description from Fuzzy Rule Base | Code | 1
Landscape of R packages for eXplainable Artificial Intelligence | Code | 1
Learning Support and Trivial Prototypes for Interpretable Image Classification | Code | 1
Local Universal Explainer (LUX) -- a rule-based explainer with factual, counterfactual and visual explanations | Code | 1
MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment | Code | 1
Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition | Code | 1
Calibrated Explanations for Regression | Code | 1
NeuroXAI: Adaptive, robust, explainable surrogate framework for determination of channel importance in EEG application | Code | 1
Deep-BIAS: Detecting Structural Bias using Explainable AI | Code | 1
Explaining Predictive Uncertainty with Information Theoretic Shapley Values | Code | 1
An Explainable AI Framework for Artificial Intelligence of Medical Things | | 0
An Experimental Study of Quantitative Evaluations on Saliency Methods | | 0
A general approach to compute the relevance of middle-level input features | | 0
Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems | | 0
Page 2 of 21

No leaderboard results yet.