
Classifier calibration

Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. Two common calibration metrics are the Expected Calibration Error (ECE), the bin-weighted average gap between a model's confidence and its empirical accuracy, and the Maximum Calibration Error (MCE), the largest such gap over all bins.
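Both metrics are defined with respect to a binning of predictions by confidence. As a rough illustration of how they are computed, here is a minimal NumPy sketch assuming equal-width bins and a bin count of 15 (a common but arbitrary choice); the function name and interface are hypothetical and not taken from any paper listed below.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=15):
    """Estimate ECE and MCE with equal-width confidence bins.

    confidences: top-class probabilities, shape (N,)
    correct:     1.0 where the prediction was right, else 0.0, shape (N,)
    n_bins:      binning granularity (15 is common but arbitrary)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        # Gap between mean confidence and empirical accuracy in this bin.
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap  # weight by the fraction of samples in the bin
        mce = max(mce, gap)         # MCE is the worst-case bin gap
    return ece, mce

# Example: a slightly overconfident classifier.
ece, mce = calibration_errors(
    confidences=[0.95, 0.9, 0.8, 0.7, 0.6],
    correct=[1, 1, 0, 1, 0],
)
```

Note that both numbers depend on the binning scheme; equal-mass (quantile) bins are a common alternative to the equal-width bins assumed here.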

Papers

Showing 11–20 of 29 papers

Title | Status | Hype
FedSA: A Unified Representation Learning via Semantic Anchors for Prototype-based Federated Learning | | 0
Enhancing Generalized Few-Shot Semantic Segmentation via Effective Knowledge Transfer | Code | 0
Improved User Identification through Calibrated Monte-Carlo Dropout | Code | 0
Accuracy-Preserving Calibration via Statistical Modeling on Probability Simplex | Code | 0
Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action | | 0
Classifier Calibration with ROC-Regularized Isotonic Regression | | 0
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding | Code | 0
Packed-Ensembles for Efficient Uncertainty Estimation | Code | 0
Class-wise and reduced calibration methods | Code | 0
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability | | 0
Page 2 of 3

No leaderboard results yet.