SOTAVerified

Error Understanding

Discover what causes a model's prediction errors.

Papers

Showing 9 of 9 papers

| Title | Status | Hype |
|---|---|---|
| Less is More: Fewer Interpretable Region via Submodular Subset Selection | Code | 2 |
| Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure | Code | 1 |
| Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks | Code | 1 |
| xTower: A Multilingual LLM for Explaining and Correcting Translation Errors | — | 0 |
| Variable importance measure for spatial machine learning models with application to air pollution exposure prediction | — | 0 |
| Error Detection in Egocentric Procedural Task Videos | — | 0 |
| iSEA: An Interactive Pipeline for Semantic Error Analysis of NLP Models | Code | 0 |
| Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks | Code | 0 |
| Understanding Humans' Strategies in Maze Solving | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMDL-Attribution (ICLR version) | Average highest confidence (ResNet-101) | 0.45 | — | Unverified |
| 2 | Grad-CAM++ | Average highest confidence (ResNet-101) | 0.26 | — | Unverified |
| 3 | Score-CAM | Average highest confidence (ResNet-101) | 0.25 | — | Unverified |
| 4 | HSIC-Attribution | Average highest confidence (ResNet-101) | 0.25 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Grad-CAM++ | Average highest confidence | 0.26 | — | Unverified |
| 2 | Score-CAM | Average highest confidence | 0.25 | — | Unverified |
| 3 | HSIC-Attribution | Average highest confidence | 0.25 | — | Unverified |