| Interpreting a Recurrent Neural Network's Predictions of ICU Mortality Risk | May 23, 2019 | Feature Importance, ICU Mortality | Unverified | 0 | 0 |
| Interpreting Black-boxes Using Primitive Parameterized Functions | Sep 29, 2021 | Feature Importance, Form | Unverified | 0 | 0 |
| Interpreting Deep Forest through Feature Contribution and MDI Feature Importance | May 1, 2023 | Explainable Models, Feature Importance | Unverified | 0 | 0 |
| Interpreting Inflammation Prediction Model via Tag-based Cohort Explanation | Oct 17, 2024 | Decision Making, Feature Importance | Unverified | 0 | 0 |
| Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs | Feb 17, 2021 | Feature Importance, Model | Unverified | 0 | 0 |
| Investigating cybersecurity incidents using large language models in latest-generation wireless networks | Apr 14, 2025 | Binary Classification, Data Poisoning | Unverified | 0 | 0 |
| Investigating the importance of social vulnerability in opioid-related mortality across the United States | Dec 3, 2024 | Feature Importance | Unverified | 0 | 0 |
| iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams | Mar 2, 2023 | Explainable Artificial Intelligence (XAI) | Unverified | 0 | 0 |
| Is Shapley Explanation for a model unique? | Nov 23, 2021 | Feature Importance, Model | Unverified | 0 | 0 |
| Iterative missing value imputation based on feature importance | Nov 14, 2023 | Feature Importance, Imputation | Unverified | 0 | 0 |