
Classifier calibration

Confidence calibration – the problem of producing probability estimates that reflect the true likelihood of correctness – is important for classification models in many applications. Two common calibration metrics are the Expected Calibration Error (ECE) and the Maximum Calibration Error (MCE).
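
Both metrics bin predictions by confidence and compare each bin's accuracy to its average confidence: ECE is the sample-weighted average of those gaps, while MCE is the largest single gap. Below is a minimal NumPy sketch, assuming equal-width bins and the top predicted probability as the confidence score; the function name ece_mce and its defaults are illustrative, not taken from any paper listed here.

    import numpy as np

    def ece_mce(confidences, predictions, labels, n_bins=15):
        """Expected (ECE) and Maximum (MCE) Calibration Error
        over equal-width confidence bins. Illustrative sketch."""
        confidences = np.asarray(confidences, dtype=float)
        correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)

        n = len(confidences)
        ece, mce = 0.0, 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if not in_bin.any():
                continue
            acc = correct[in_bin].mean()       # accuracy inside the bin
            conf = confidences[in_bin].mean()  # mean confidence inside the bin
            gap = abs(acc - conf)
            ece += (in_bin.sum() / n) * gap    # weight the gap by bin size
            mce = max(mce, gap)                # track the worst bin
        return ece, mce

    # Toy usage: predicted class, its confidence, and the true label per sample.
    conf = [0.9, 0.8, 0.7, 0.95, 0.6]
    pred = [1, 0, 1, 1, 0]
    true = [1, 0, 0, 1, 1]
    print(ece_mce(conf, pred, true, n_bins=10))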

Papers

Showing 11–20 of 29 papers

Title | Status | Hype
Improved User Identification through Calibrated Monte-Carlo Dropout | Code | 0
Long-tailed Medical Diagnosis with Relation-aware Representation Learning and Iterative Classifier Calibration | Code | 0
Accuracy-Preserving Calibration via Statistical Modeling on Probability Simplex | Code | 0
Enhancing Generalized Few-Shot Semantic Segmentation via Effective Knowledge Transfer | Code | 0
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding | Code | 0
Hidden Heterogeneity: When to Choose Similarity-Based Calibration | Code | 0
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data | Code | 0
Classifier Calibration: with application to threat scores in cybersecurity | Code | 0
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability | – | 0
Better Classifier Calibration for Small Data Sets | – | 0

No leaderboard results yet.