SOTAVerified

Relabeling Minimal Training Subset to Flip a Prediction

2023-05-22

Jinghan Yang, Linjie Xu, Lequan Yu


Abstract

When facing an unsatisfactory prediction from a machine learning model, users may want to investigate the underlying reasons and explore the potential for reversing the outcome. We ask: to flip the prediction on a test point x_t, how do we identify the smallest training subset S_t that we need to relabel? We propose an efficient algorithm to identify and relabel such a subset via an extended influence function for binary classification models with convex loss. We find that relabeling fewer than 2% of the training points can always flip a prediction. This mechanism serves multiple purposes: (1) providing an approach to challenge a model prediction by altering training points; (2) evaluating model robustness via the cardinality of the subset (i.e., |S_t|); we show that |S_t| is closely related to the noise ratio in the training set, and that |S_t| is correlated with but complementary to predicted probabilities; and (3) revealing training points that lead to group attribution bias. To the best of our knowledge, we are the first to investigate identifying and relabeling the minimal training subset required to flip a given prediction.
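The idea in the abstract can be illustrated with a minimal sketch, not the authors' implementation: take L2-regularized logistic regression as a concrete convex binary classifier. Flipping a label y_i to -y_i changes the average loss gradient by exactly y_i x_i / n, so a one-step Newton (influence-function) estimate of the effect on the test margin w·x_t is -x_t^T H^{-1} y_i x_i / n, where H is the regularized loss Hessian. Ranking training points by this score and greedily relabeling until the retrained prediction flips gives an approximate S_t. All function names, the synthetic data, and the greedy-with-retraining loop are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lam=1e-2, iters=500, lr=0.5):
    # L2-regularized logistic regression via gradient descent; labels in {-1,+1}.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        grad = -(X.T @ (y * sigmoid(-margins))) / n + lam * w
        w -= lr * grad
    return w

def minimal_relabel_subset(X, y, x_t, lam=1e-2):
    # Greedily relabel the training points whose influence-function score
    # most strongly pushes the test margin w·x_t toward zero, retraining
    # on each candidate subset until the predicted sign flips.
    n, d = X.shape
    w = train_logreg(X, y, lam)
    orig_sign = np.sign(x_t @ w)
    p = sigmoid(X @ w)
    H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)  # regularized Hessian
    # Flipping y_i -> -y_i shifts the average loss gradient by y_i x_i / n,
    # so the one-step Newton estimate of the margin change at x_t is
    # -x_t^T H^{-1} y_i x_i / n.
    v = np.linalg.solve(H, x_t)
    delta = -(X @ v) * y / n
    order = np.argsort(orig_sign * delta)  # most flip-inducing points first
    for k in range(1, n + 1):
        y_new = y.copy()
        y_new[order[:k]] *= -1
        if np.sign(x_t @ train_logreg(X, y_new, lam)) != orig_sign:
            return order[:k], y_new
    return np.arange(n), -y  # flipping every label always negates w

# Toy demo: 200 noisy-linearly-labeled points, test point near the boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
y[y == 0] = 1.0
x_t = np.array([0.3, 0.1])
S, y_relab = minimal_relabel_subset(X, y, x_t)
print(f"relabeling {len(S)} of {len(y)} points flips the prediction")
```

The greedy loop always terminates: by the symmetry of the logistic loss, training on fully negated labels yields exactly -w, so the margin's sign flips at k = n at the latest. For a test point near the decision boundary, only a handful of relabelings is typically needed, matching the paper's observation that |S_t| acts as a robustness measure.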
