SOTAVerified

Feature Importance

Papers

Showing 651–700 of 890 papers

| Title | Status | Hype |
|---|---|---|
| Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs | | 0 |
| Investigating cybersecurity incidents using large language models in latest-generation wireless networks | | 0 |
| Investigating the importance of social vulnerability in opioid-related mortality across the United States | | 0 |
| iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams | | 0 |
| Is Shapley Explanation for a model unique? | | 0 |
| Efficient and Interpretable Traffic Destination Prediction using Explainable Boosting Machines | Code | 0 |
| User Intent Prediction in Information-seeking Conversations | Code | 0 |
| Interpretable Multi Labeled Bengali Toxic Comments Classification using Deep Learning | Code | 0 |
| ECG Feature Importance Rankings: Cardiologists vs. Algorithms | Code | 0 |
| Efficient Novelty Detection Methods for Early Warning of Potential Fatal Diseases | Code | 0 |
| EFI: A Toolbox for Feature Importance Fusion and Interpretation in Python | Code | 0 |
| Altruist: Argumentative Explanations through Local Interpretations of Predictive Models | Code | 0 |
| Elastic Net based Feature Ranking and Selection | Code | 0 |
| This part looks alike this: identifying important parts of explained instances and prototypes | Code | 0 |
| Verifying Machine Unlearning with Explainable AI | Code | 0 |
| Interpretation of machine learning predictions for patient outcomes in electronic health records | Code | 0 |
| End-to-end Feature Selection Approach for Learning Skinny Trees | Code | 0 |
| A Comparative Study on Machine Learning-based Approaches for Improving Traffic Accident Severity Prediction | Code | 0 |
| Interpretation of Neural Networks is Fragile | Code | 0 |
| Captum: A unified and generic model interpretability library for PyTorch | Code | 0 |
| Enhancing interpretability of rule-based classifiers through feature graphs | Code | 0 |
| Auto-Gait: Automatic Ataxia Risk Assessment with Computer Vision on Gait Task Videos | Code | 0 |
| Interpreting artificial neural networks to detect genome-wide association signals for complex traits | Code | 0 |
| Optimizing model-agnostic Random Subspace ensembles | Code | 0 |
| Interpreting County Level COVID-19 Infection and Feature Sensitivity using Deep Learning Time Series Models | Code | 0 |
| Ultra-marginal Feature Importance: Learning from Data with Causal Guarantees | Code | 0 |
| EPIC: Explanation of Pretrained Image Classification Networks via Prototype | Code | 0 |
| Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy | Code | 0 |
| Interpreting Neural Networks With Nearest Neighbors | Code | 0 |
| eSports Pro-Players Behavior During the Game Events: Statistical Analysis of Data Obtained Using the Smart Chair | Code | 0 |
| Algorithm-Agnostic Explainability for Unsupervised Clustering | Code | 0 |
| AFS-BM: Enhancing Model Performance through Adaptive Feature Selection with Binary Masking | Code | 0 |
| Sequential Attention for Feature Selection | Code | 0 |
| Dual feature-based and example-based explanation methods | Code | 0 |
| Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach | Code | 0 |
| A Benchmark for Interpretability Methods in Deep Neural Networks | Code | 0 |
| SHAP-based Explanations are Sensitive to Feature Representation | Code | 0 |
| Can local explanation techniques explain linear additive models? | Code | 0 |
| Evaluating Model Explanations without Ground Truth | Code | 0 |
| Is Gender "In-the-Wild" Inference Really a Solved Problem? | Code | 0 |
| Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set | Code | 0 |
| Iterative Feature Exclusion Ranking for Deep Tabular Learning | Code | 0 |
| DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems | Code | 0 |
| Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis | Code | 0 |
| Calculating and Visualizing Counterfactual Feature Importance Values | Code | 0 |
| Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record | Code | 0 |
| ShapG: new feature importance method based on the Shapley value | Code | 0 |
| Exclusion and Inclusion -- A model agnostic approach to feature importance in DNNs | Code | 0 |
| Enhancing Interpretability and Generalizability in Extended Isolation Forests | Code | 0 |
| Distributed and parallel time series feature extraction for industrial big data applications | Code | 0 |
Page 14 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Garson Variable Importance | Pearson Correlation | 0.76 | | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.6 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.22 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.86 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.64 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.83 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VarImpVIANN | Pearson Correlation | 0.9 | | Unverified |
| 2 | Garson Variable Importance | Pearson Correlation | 0.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Garson Variable Importance | Pearson Correlation | 0.74 | | Unverified |
| 2 | VarImpVIANN | Pearson Correlation | 0.41 | | Unverified |
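The tables above score Garson Variable Importance and VarImpVIANN by Pearson correlation against a reference importance ranking. For context, here is a minimal sketch of Garson's classic variable-importance algorithm for a single-hidden-layer network; the function name, weight shapes, and normalization are illustrative assumptions, not the benchmark's exact implementation:

```python
import numpy as np

def garson_importance(w_ih: np.ndarray, w_ho: np.ndarray) -> np.ndarray:
    """Garson's variable importance (illustrative sketch).

    w_ih: (n_hidden, n_inputs) input-to-hidden weight matrix
    w_ho: (n_hidden,) hidden-to-output weight vector
    Returns one importance score per input, summing to 1.
    """
    w_ih = np.abs(w_ih)
    w_ho = np.abs(w_ho)
    # Each input's share of every hidden unit's incoming weight,
    # scaled by that unit's absolute weight to the output.
    contrib = (w_ih / w_ih.sum(axis=1, keepdims=True)) * w_ho[:, None]
    imp = contrib.sum(axis=0)
    return imp / imp.sum()

# Toy example: input 0 feeds the hidden unit with the larger
# output weight, so it receives the larger importance share.
imp = garson_importance(np.array([[1.0, 0.0], [0.0, 1.0]]),
                        np.array([2.0, 1.0]))
```

A verification pipeline like the one above would then compare such scores to a reference ranking with a Pearson correlation, e.g. `np.corrcoef(imp, reference)[0, 1]`.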