SOTAVerified

Classifier calibration

Confidence calibration – the problem of predicting probability estimates that reflect the true likelihood of correctness – is important for classification models in many applications. Two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
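
The page does not define these metrics formally. As a reference, below is a minimal sketch of how ECE and MCE are typically computed with equal-width confidence bins. The function name ece_mce, the NumPy dependency, and the default of 15 bins are illustrative choices, not something this page prescribes; binning conventions vary across papers.

    import numpy as np

    def ece_mce(confidences, correct, n_bins=15):
        """Expected (ECE) and Maximum (MCE) Calibration Error.

        ECE = sum_b (|B_b| / n) * |acc(B_b) - conf(B_b)|
        MCE = max_b             |acc(B_b) - conf(B_b)|

        confidences: predicted probability of the predicted class, in [0, 1]
        correct:     1 if the prediction was right, 0 otherwise
        """
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        edges[0] = -np.inf  # make the first bin include confidence == 0
        ece, mce = 0.0, 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if not in_bin.any():
                continue  # empty bins contribute nothing
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the bin's share of samples
            mce = max(mce, gap)         # worst-case per-bin gap
        return ece, mce

For example, ece_mce([0.9, 0.8, 0.6, 0.55], [1, 1, 0, 1]) bins the four predictions by confidence and compares each bin's accuracy against its mean confidence; a perfectly calibrated model has ECE = MCE = 0.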

Papers

Showing 11–20 of 29 papers

Title | Status | Hype
Accuracy-Preserving Calibration via Statistical Modeling on Probability Simplex | Code | 0
Long-tailed Medical Diagnosis with Relation-aware Representation Learning and Iterative Classifier Calibration | Code | 0
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data | Code | 0
Packed-Ensembles for Efficient Uncertainty Estimation | Code | 0
Improved User Identification through Calibrated Monte-Carlo Dropout | Code | 0
Classifier Calibration: with application to threat scores in cybersecurity | Code | 0
Enhancing Generalized Few-Shot Semantic Segmentation via Effective Knowledge Transfer | Code | 0
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding | Code | 0
Class-wise and reduced calibration methods | Code | 0
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability | | 0

Leaderboard

No leaderboard results yet.