SOTAVerified

Feature Importance

Papers

Showing 301–350 of 890 papers

| Title | Status | Hype |
|-------|--------|------|
| Towards consistency of rule-based explainer and black box model -- fusion of rule induction and XAI-based feature importance | Code | 0 |
| Towards Personalised Patient Risk Prediction Using Temporal Hospital Data Trajectories | — | 0 |
| XAI-Guided Enhancement of Vegetation Indices for Crop Mapping | — | 0 |
| Explainability of Sub-Field Level Crop Yield Prediction using Remote Sensing | — | 0 |
| BrainMetDetect: Predicting Primary Tumor from Brain Metastasis MRI Data Using Radiomic Features and Machine Learning Algorithms | Code | 0 |
| DocXplain: A Novel Model-Agnostic Explainability Method for Document Image Classification | — | 0 |
| ShapG: new feature importance method based on the Shapley value | Code | 0 |
| Explainability of Machine Learning Models under Missing Data | Code | 0 |
| AI Data Readiness Inspector (AIDRIN) for Quantitative Assessment of Data Readiness for AI | — | 0 |
| Predicting the duration of traffic incidents for Sydney greater metropolitan area using machine learning methods | Code | 0 |
| The Impact of Feature Representation on the Accuracy of Photonic Neural Networks | Code | 0 |
| Graph-Augmented LLMs for Personalized Health Insights: A Case Study in Sleep Analysis | — | 0 |
| Fault Detection for agents on power grid topology optimization: A Comprehensive analysis | — | 0 |
| Privacy Implications of Explainable AI in Data-Driven Systems | — | 0 |
| Multi-level Phenotypic Models of Cardiovascular Disease and Obstructive Sleep Apnea Comorbidities: A Longitudinal Wisconsin Sleep Cohort Study | — | 0 |
| Machine Learning Based Prediction of Proton Conductivity in Metal-Organic Frameworks | — | 0 |
| Multi-LLM QA with Embodied Exploration | — | 0 |
| FeatNavigator: Automatic Feature Augmentation on Tabular Data | — | 0 |
| Deep reinforcement learning with positional context for intraday trading | — | 0 |
| Learned Feature Importance Scores for Automated Feature Engineering | — | 0 |
| MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning | — | 0 |
| Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models | — | 0 |
| Enhancing Counterfactual Image Generation Using Mahalanobis Distance with Distribution Preferences in Feature Space | — | 0 |
| Unified Explanations in Machine Learning Models: A Perturbation Approach | Code | 0 |
| Explainable Data-driven Modeling of Adsorption Energy in Heterogeneous Catalysis | Code | 0 |
| I Bet You Did Not Mean That: Testing Semantic Importance via Betting | Code | 0 |
| MCDFN: Supply Chain Demand Forecasting via an Explainable Multi-Channel Data Fusion Network Model | — | 0 |
| Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers | Code | 0 |
| From SHAP Scores to Feature Importance Scores | — | 0 |
| Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis | — | 0 |
| Analyze Additive and Interaction Effects via Collaborative Trees | — | 0 |
| Mitigating Text Toxicity with Counterfactual Generation | — | 0 |
| Feature Importance and Explainability in Quantum Machine Learning | Code | 0 |
| Clustering of Disease Trajectories with Explainable Machine Learning: A Case Study on Postoperative Delirium Phenotypes | — | 0 |
| Estimate the building height at a 10-meter resolution based on Sentinel data | — | 0 |
| Feature importance to explain multimodal prediction models. A clinical use case | Code | 0 |
| DTization: A New Method for Supervised Feature Scaling | — | 0 |
| Accurate and fast anomaly detection in industrial processes and IoT environments | — | 0 |
| Optimizing Universal Lesion Segmentation: State Space Model-Guided Hierarchical Networks with Feature Importance Adjustment | — | 0 |
| SIDEs: Separating Idealization from Deceptive Explanations in xAI | — | 0 |
| Fiper: a Visual-based Explanation Combining Rules and Feature Importance | — | 0 |
| MISLEAD: Manipulating Importance of Selected features for Learning Epsilon in Evasion Attack Deception | — | 0 |
| Capturing Momentum: Tennis Match Analysis Using Machine Learning and Time Series Theory | — | 0 |
| A Guide to Feature Importance Methods for Scientific Inference | Code | 0 |
| Explainable AI for Fair Sepsis Mortality Predictive Model | — | 0 |
| Using a Local Surrogate Model to Interpret Temporal Shifts in Global Annual Data | — | 0 |
| Explainable Machine Learning System for Predicting Chronic Kidney Disease in High-Risk Cardiovascular Patients | — | 0 |
| CAGE: Causality-Aware Shapley Value for Global Explanations | — | 0 |
| Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models | — | 0 |
| Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data | Code | 0 |
Page 7 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Garson Variable Importance | Pearson Correlation | 0.76 | — | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.76 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VarImpVIANN | Pearson Correlation | 0.6 | — | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.22 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VarImpVIANN | Pearson Correlation | 0.86 | — | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.64 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VarImpVIANN | Pearson Correlation | 0.83 | — | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.6 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | VarImpVIANN | Pearson Correlation | 0.9 | — | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.73 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Garson Variable Importance | Pearson Correlation | 0.74 | — | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.41 | — | Unverified |