SOTAVerified

On Orderings of Probability Vectors and Unsupervised Performance Estimation

2023-06-16

Muhammad Maaz, Rui Qiao, Yiheng Zhou, Renxian Zhang

Abstract

Unsupervised performance estimation, i.e., evaluating how well models perform on unlabeled data, is a difficult task. Recently, Garg et al. [2022] proposed a method that performs much better than previous methods. Their method relies on a score function, satisfying certain properties, that maps the probability vectors output by the classifier to the reals, but which score function is best remains an open problem. We explore this problem by first showing that their method fundamentally relies only on the ordering induced by this score function. Thus, under monotone transformations of score functions, their method yields the same estimate. Next, we show that in the binary classification setting, nearly all common score functions (the L^∞ norm; the L^2 norm; negative entropy; and the L^2, L^1, and Jensen-Shannon distances to the uniform vector) induce the same ordering over probability vectors. However, this does not hold in higher-dimensional settings. We conduct numerous experiments on well-known NLP data sets and rigorously explore the performance of different score functions. We conclude that the L^∞ norm is the most appropriate.
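The abstract's central claim for the binary case can be checked numerically: for a binary probability vector (p, 1 - p), the common score functions listed above are all monotone in each other, so they rank probability vectors identically. The sketch below, which assumes nothing from the paper's actual code (all function names here are illustrative), compares the sign of pairwise score differences across a grid of binary vectors.

```python
import math

def sign(x, tol=1e-12):
    """Sign of x, treating values within tol of zero as a tie."""
    if abs(x) < tol:
        return 0
    return 1 if x > 0 else -1

# Score functions on a binary probability vector (p, 1 - p).
# These are common choices named in the abstract; the exact set used
# by Garg et al. [2022] is defined abstractly in the paper.
score_fns = {
    "Linf norm":     lambda p: max(p, 1 - p),
    "L2 norm":       lambda p: math.hypot(p, 1 - p),
    "neg entropy":   lambda p: sum(q * math.log(q) for q in (p, 1 - p) if q > 0),
    "L1 to uniform": lambda p: abs(p - 0.5) + abs((1 - p) - 0.5),
}

# Grid of binary probability vectors, parameterized by p in (0, 1).
grid = [i / 50 for i in range(1, 50)]

fns = list(score_fns.values())
base = fns[0]
# Two score functions induce the same ordering iff every pairwise
# comparison has the same sign under both.
agree = all(
    sign(base(a) - base(b)) == sign(f(a) - f(b))
    for f in fns[1:]
    for a in grid
    for b in grid
)
print(agree)  # → True: all four induce the same ordering in the binary case
```

In higher dimensions this equivalence breaks down, which is why the paper's experiments compare the score functions empirically rather than relying on the binary-case argument.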
