
The Certainty Ratio C_ρ: a novel metric for assessing the reliability of classifier predictions

2024-11-04

Jesus S. Aguilar-Ruiz


Abstract

Evaluating the performance of classifiers is critical in machine learning, particularly in high-stakes applications where the reliability of predictions can significantly impact decision-making. Traditional performance measures, such as accuracy and F-score, often fail to account for the uncertainty inherent in classifier predictions, leading to potentially misleading assessments. This paper introduces the Certainty Ratio (C_ρ), a novel metric designed to quantify the contribution of confident (certain) versus uncertain predictions to any classification performance measure. By integrating the Probabilistic Confusion Matrix (CM^⋆) and decomposing predictions into certainty and uncertainty components, C_ρ provides a more comprehensive evaluation of classifier reliability. Experimental results across 21 datasets and multiple classifiers, including Decision Trees, Naive-Bayes, 3-Nearest Neighbors, and Random Forests, demonstrate that C_ρ reveals critical insights that conventional metrics often overlook. These findings emphasize the importance of incorporating probabilistic information into classifier evaluation, offering a robust tool for researchers and practitioners seeking to improve model trustworthiness in complex environments.
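The abstract does not reproduce the formal definitions of CM^⋆ and C_ρ, but the idea can be sketched briefly. Below is a minimal Python sketch under one plausible reading: the probabilistic confusion matrix accumulates each prediction's full probability vector instead of a hard 0/1 count, the probability mass of each prediction is split into a certain part (the mass on its top-ranked class) and an uncertain remainder, and the ratio reports the certain fraction of the total mass. The function names and the exact split rule are illustrative assumptions, not the paper's definitions.

# Illustrative sketch only; see the paper for the actual definitions.
import numpy as np

def probabilistic_confusion_matrix(y_true, y_proba, n_classes):
    # Row = true class, column = predicted class. Unlike a standard
    # confusion matrix, each prediction contributes its whole predicted
    # probability vector rather than a single count of 1.
    cm = np.zeros((n_classes, n_classes))
    for label, proba in zip(y_true, np.asarray(y_proba, dtype=float)):
        cm[label] += proba
    return cm

def certainty_ratio(y_proba):
    # Certain mass: probability assigned to each prediction's top-ranked
    # class. Uncertain mass: the remainder. Returns the certain fraction
    # of the total mass, a value in [1/n_classes, 1].
    y_proba = np.asarray(y_proba, dtype=float)
    certain = y_proba.max(axis=1).sum()
    total = y_proba.sum()  # equals the number of predictions if rows sum to 1
    return certain / total

# Toy example: two confident predictions and one maximally uncertain one.
y_true = [0, 1, 0]
y_proba = [[0.9, 0.1],
           [0.2, 0.8],
           [0.5, 0.5]]
print(probabilistic_confusion_matrix(y_true, y_proba, 2))
# [[1.4 0.6]
#  [0.2 0.8]]
print(certainty_ratio(y_proba))  # (0.9 + 0.8 + 0.5) / 3 = 0.7333...

On this reading, a C_ρ near 1 means the classifier's headline score rests almost entirely on confident probability mass, while a value near the uniform baseline of 1/n_classes signals that the reported accuracy or F-score is propped up by near-random predictions.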
