Evaluating Machine Unlearning via Epistemic Uncertainty
Alexander Becker, Thomas Liebig
Code: github.com/royalbeff/evaluating_machine_unlearning_via_epistemic_uncertainty
Abstract
There has been growing interest in Machine Unlearning recently, primarily due to legal requirements such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act. Consequently, multiple approaches have been proposed to remove the influence of specific target data points from a trained model. However, when evaluating the success of unlearning, current approaches either use adversarial attacks or compare their results to the optimal solution, which usually requires retraining from scratch. We argue that both approaches are insufficient in practice. In this work, we present an evaluation metric for Machine Unlearning algorithms based on epistemic uncertainty. To the best of our knowledge, this is the first definition of a general evaluation metric for Machine Unlearning.
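As a rough illustration of the idea (not necessarily the paper's exact formulation), epistemic uncertainty about a set of data points can be approximated via the empirical Fisher information the model carries about them: the less information the unlearned model retains about the target points, the more effective the unlearning. A minimal NumPy sketch for a logistic model follows; `fisher_trace` and `efficacy` are hypothetical names chosen here for illustration.

```python
import numpy as np

def fisher_trace(w, X, y):
    """Trace of the empirical (diagonal) Fisher information of a
    logistic model with weights w on data (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    grads = (y - p)[:, None] * X          # per-sample grad of log-likelihood
    # mean over samples of the squared gradient norm
    return np.mean(np.sum(grads ** 2, axis=1))

def efficacy(w, X_target, y_target):
    """Illustrative unlearning score: low Fisher information about the
    target data (high epistemic uncertainty) means high efficacy."""
    tr = fisher_trace(w, X_target, y_target)
    return np.inf if tr == 0 else 1.0 / tr
```

Under this sketch, one would compare `efficacy` on the target points before and after unlearning, expecting it to increase once the model has forgotten them.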