SOTAVerified

Feature Importance

Papers

Showing 1–50 of 890 papers

Title | Status | Hype
Attention is not Explanation | Code | 3
Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End | Code | 2
An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records | Code | 2
Inseq: An Interpretability Toolkit for Sequence Generation Models | Code | 2
Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models | Code | 2
OpenFE: Automated Feature Generation with Expert-level Performance | Code | 2
Going Beyond H&E and Oncology: How Do Histopathology Foundation Models Perform for Multi-stain IHC and Immunology? | Code | 1
Explaining Time Series Predictions with Dynamic Masks | Code | 1
GraphXAIN: Narratives to Explain Graph Neural Networks | Code | 1
Efficient nonparametric statistical inference on population feature importance using Shapley values | Code | 1
Explainable Multilayer Graph Neural Network for Cancer Gene Prediction | Code | 1
FairDomain: Achieving Fairness in Cross-Domain Medical Image Segmentation and Classification | Code | 1
Feature Importance Ranking for Deep Learning | Code | 1
Group-level Brain Decoding with Deep Learning | Code | 1
Development of Interpretable Machine Learning Models to Detect Arrhythmia based on ECG Data | Code | 1
Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements? | Code | 1
Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset | Code | 1
Calibrated Explanations for Regression | Code | 1
DBA: Distributed Backdoor Attacks against Federated Learning | Code | 1
Detach-ROCKET: Sequential feature selection for time series classification with random convolutional kernels | Code | 1
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark | Code | 1
E2E-FS: An End-to-End Feature Selection Method for Neural Networks | Code | 1
Explainability and Adversarial Robustness for RNNs | Code | 1
GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games | Code | 1
Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation | Code | 1
Facial Expression Recognition in the Wild via Deep Attentive Center Loss | Code | 1
Feature Importance-aware Transferable Adversarial Attacks | Code | 1
Feature Importance Explanations for Temporal Black-Box Models | Code | 1
FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction | Code | 1
fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms | Code | 1
Compressing Features for Learning with Noisy Labels | Code | 1
Benchmarking Deep Learning Interpretability in Time Series Predictions | Code | 1
A Unified Approach to Interpreting Model Predictions | Code | 1
CAFE-AD: Cross-Scenario Adaptive Feature Enhancement for Trajectory Planning in Autonomous Driving | Code | 1
CAFO: Feature-Centric Explanation on Time Series Classification | Code | 1
agtboost: Adaptive and Automatic Gradient Tree Boosting Computations | Code | 1
Calibrated Explanations: with Uncertainty Information and Counterfactuals | Code | 1
Cards Against AI: Predicting Humor in a Fill-in-the-blank Party Game | Code | 1
ControlBurn: Feature Selection by Sparse Forests | Code | 1
Counterfactual Shapley Additive Explanations | Code | 1
Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud | Code | 1
Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes | Code | 1
Discretized Integrated Gradients for Explaining Language Models | Code | 1
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees | Code | 1
Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models | Code | 1
All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation | Code | 1
A Matlab Toolbox for Feature Importance Ranking | Code | 1
Explainable Global Wildfire Prediction Models using Graph Neural Networks | Code | 1
Activation Modulation and Recalibration Scheme for Weakly Supervised Semantic Segmentation | Code | 1
CAFE: Towards Compact, Adaptive, and Fast Embedding for Large-scale Recommendation Models | Code | 1
Page 1 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Garson Variable Importance | Pearson Correlation | 0.76 | — | Unverified
2 | VarImpVIANN | Pearson Correlation | 0.76 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VarImpVIANN | Pearson Correlation | 0.6 | — | Unverified
2 | Garson Variable Importance | Pearson Correlation | 0.22 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VarImpVIANN | Pearson Correlation | 0.86 | — | Unverified
2 | Garson Variable Importance | Pearson Correlation | 0.64 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VarImpVIANN | Pearson Correlation | 0.83 | — | Unverified
2 | Garson Variable Importance | Pearson Correlation | 0.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VarImpVIANN | Pearson Correlation | 0.9 | — | Unverified
2 | Garson Variable Importance | Pearson Correlation | 0.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Garson Variable Importance | Pearson Correlation | 0.74 | — | Unverified
2 | VarImpVIANN | Pearson Correlation | 0.41 | — | Unverified
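The tables above score Garson Variable Importance and VarImpVIANN by the Pearson correlation between each method's estimated feature importances and a set of reference importances. As a rough illustration only (not the site's verification pipeline), the sketch below implements Garson's classic algorithm for a single-hidden-layer network and scores it with Pearson correlation; the weight matrices and the reference importance vector are made-up placeholders, not values from any listed benchmark.

```python
import numpy as np

def garson_importance(w_in, w_out):
    """Garson's algorithm for a single-hidden-layer network.

    w_in:  (n_inputs, n_hidden) input-to-hidden weight matrix
    w_out: (n_hidden,) hidden-to-output weights
    Returns per-input importances normalized to sum to 1.
    """
    # Absolute contribution of each input routed through each hidden unit.
    c = np.abs(w_in) * np.abs(w_out)            # shape (n_inputs, n_hidden)
    # Each input's share of every hidden unit's total incoming contribution.
    r = c / c.sum(axis=0, keepdims=True)
    # Sum the shares over hidden units and normalize.
    imp = r.sum(axis=1)
    return imp / imp.sum()

rng = np.random.default_rng(0)
w_in = rng.normal(size=(5, 8))    # placeholder "trained" weights
w_out = rng.normal(size=8)

claimed = garson_importance(w_in, w_out)

# Placeholder reference importances standing in for the benchmark's
# ground truth; a real verification would load these from the dataset.
reference = rng.uniform(size=5)
pearson = np.corrcoef(claimed, reference)[0, 1]
print(claimed.round(3), round(pearson, 3))
```

A verification run would then compare this Pearson score against the claimed value in the table before flipping Status from Unverified to Verified.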