SOTAVerified

Feature Importance

Papers

Showing 1-50 of 890 papers

| Title | Status | Hype |
|---|---|---|
| Attention is not Explanation | Code | 3 |
| Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models | Code | 2 |
| An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records | Code | 2 |
| Inseq: An Interpretability Toolkit for Sequence Generation Models | Code | 2 |
| OpenFE: Automated Feature Generation with Expert-level Performance | Code | 2 |
| Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End | Code | 2 |
| CAFE-AD: Cross-Scenario Adaptive Feature Enhancement for Trajectory Planning in Autonomous Driving | Code | 1 |
| Underwater Image Restoration via Polymorphic Large Kernel CNNs | Code | 1 |
| WinTSR: A Windowed Temporal Saliency Rescaling Method for Interpreting Time Series Deep Learning Models | Code | 1 |
| GraphXAIN: Narratives to Explain Graph Neural Networks | Code | 1 |
| High-Fidelity Document Stain Removal via A Large-Scale Real-World Dataset and A Memory-Augmented Transformer | Code | 1 |
| Going Beyond H&E and Oncology: How Do Histopathology Foundation Models Perform for Multi-stain IHC and Immunology? | Code | 1 |
| Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations | Code | 1 |
| All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation | Code | 1 |
| MusicLIME: Explainable Multimodal Music Understanding | Code | 1 |
| Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models | Code | 1 |
| FairDomain: Achieving Fairness in Cross-Domain Medical Image Segmentation and Classification | Code | 1 |
| CAFO: Feature-Centric Explanation on Time Series Classification | Code | 1 |
| Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models | Code | 1 |
| Explainable Global Wildfire Prediction Models using Graph Neural Networks | Code | 1 |
| Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes | Code | 1 |
| CAFE: Towards Compact, Adaptive, and Fast Embedding for Large-scale Recommendation Models | Code | 1 |
| Sweetwater: An interpretable and adaptive autoencoder for efficient tissue deconvolution | Code | 1 |
| Transformer-based nowcasting of radar composites from satellite images for severe weather | Code | 1 |
| Local Universal Explainer (LUX) -- a rule-based explainer with factual, counterfactual and visual explanations | Code | 1 |
| Physics Inspired Hybrid Attention for SAR Target Recognition | Code | 1 |
| Detach-ROCKET: Sequential feature selection for time series classification with random convolutional kernels | Code | 1 |
| MvFS: Multi-view Feature Selection for Recommender System | Code | 1 |
| Calibrated Explanations for Regression | Code | 1 |
| VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks | Code | 1 |
| Integrating Random Forests and Generalized Linear Models for Improved Accuracy and Interpretability | Code | 1 |
| Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization | Code | 1 |
| Unbiased Gradient Boosting Decision Tree with Unbiased Feature Importance | Code | 1 |
| Calibrated Explanations: with Uncertainty Information and Counterfactuals | Code | 1 |
| Interpretable machine learning for time-to-event prediction in medicine and healthcare | Code | 1 |
| SurvLIMEpy: A Python package implementing SurvLIME | Code | 1 |
| Explainable Multilayer Graph Neural Network for Cancer Gene Prediction | Code | 1 |
| fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms | Code | 1 |
| Cards Against AI: Predicting Humor in a Fill-in-the-blank Party Game | Code | 1 |
| Neural Eigenfunctions Are Structured Representation Learners | Code | 1 |
| Positive-Unlabeled Learning using Random Forests via Recursive Greedy Risk Minimization | Code | 1 |
| Concept Activation Regions: A Generalized Framework For Concept-Based Explanations | Code | 1 |
| TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations | Code | 1 |
| Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs | Code | 1 |
| Compressing Features for Learning with Noisy Labels | Code | 1 |
| STREAMLINE: A Simple, Transparent, End-To-End Automated Machine Learning Pipeline Facilitating Data Analysis and Algorithm Comparison | Code | 1 |
| Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark | Code | 1 |
| Robust Semantic Communications with Masked VQ-VAE Enabled Codebook | Code | 1 |
| Group-level Brain Decoding with Deep Learning | Code | 1 |
| Development of Interpretable Machine Learning Models to Detect Arrhythmia based on ECG Data | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Garson Variable Importance | Pearson Correlation | 0.76 | | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.6 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.22 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.86 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.83 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.9 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Garson Variable Importance | Pearson Correlation | 0.74 | | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.41 | | Unverified |