Classifier calibration

Confidence calibration is the problem of producing probability estimates that match the true likelihood of correctness: among all predictions made with confidence p, roughly a fraction p should be correct. Well-calibrated confidences matter for classification models in many applications, especially when downstream decisions depend on the predicted probabilities. Two common calibration metrics are the Expected Calibration Error (ECE) and the Maximum Calibration Error (MCE): both partition predictions into confidence bins and compare each bin's average confidence with its empirical accuracy, with ECE taking the population-weighted average of the gaps and MCE the largest single gap.
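For concreteness, here is a minimal sketch of how ECE and MCE are commonly estimated with equal-width confidence binning. The function name calibration_errors, the default of 15 bins, and the NumPy-based implementation are illustrative assumptions, not anything specified on this page.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=15):
    """Estimate ECE and MCE with equal-width confidence binning.

    confidences : predicted max-class probabilities in [0, 1]
    correct     : boolean array, True where the top-1 prediction was right
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Bin membership is (lo, hi]. Max-class probabilities are >= 1/K,
        # so excluding exactly-zero confidences from the first bin is harmless.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()  # mean confidence in the bin
        accuracy = correct[in_bin].mean()      # empirical accuracy in the bin
        gap = abs(avg_conf - accuracy)
        ece += in_bin.mean() * gap             # ECE: population-weighted gap
        mce = max(mce, gap)                    # MCE: worst-case bin gap
    return ece, mce

# Toy usage: five predictions with their confidences and correctness.
conf = np.array([0.95, 0.90, 0.80, 0.75, 0.60])
hit = np.array([True, True, True, False, False])
ece, mce = calibration_errors(conf, hit, n_bins=10)
print(f"ECE = {ece:.3f}, MCE = {mce:.3f}")
```

A perfectly calibrated model drives both quantities to zero; ECE summarizes average miscalibration, while MCE flags the single worst bin.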

Papers

Showing 11–20 of 29 papers

Title | Status | Hype
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability | | 0
Better Classifier Calibration for Small Data Sets | | 0
Binary Classifier Calibration: Bayesian Non-Parametric Approach | | 0
Binary Classifier Calibration: Non-parametric approach | | 0
Binary Classifier Calibration using an Ensemble of Near Isotonic Regression Models | | 0
Classifier Calibration: A survey on how to assess and improve predicted class probabilities | | 0
Classifier Calibration with ROC-Regularized Isotonic Regression | | 0
Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action | | 0
FedSA: A Unified Representation Learning via Semantic Anchors for Prototype-based Federated Learning | | 0
Improved User Identification through Calibrated Monte-Carlo Dropout | Code | 0

Leaderboard

No leaderboard results yet.