LMEraser: Large Model Unlearning through Adaptive Prompt Tuning

2024-04-17

Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

Abstract

To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for large models, called LMEraser. Existing unlearning methods struggle with entangled training data and complex model architectures, incurring extremely high computational costs for large models. LMEraser takes a divide-and-conquer strategy with a prompt tuning architecture to isolate data influence. The training dataset is partitioned into public and private datasets. Public data are used to train the backbone of the model. Private data are adaptively clustered based on their diversity, and each cluster is used to optimize a prompt separately. This adaptive prompt tuning mechanism reduces unlearning costs while maintaining model performance. Experiments demonstrate that LMEraser achieves a 100-fold reduction in unlearning costs compared with prior work, without compromising accuracy. Our code is available at: https://github.com/lmeraser/lmeraser.
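The abstract's divide-and-conquer idea can be illustrated with a minimal sketch. This is not the authors' implementation: the k-means routine, the per-cluster prompt vectors, and the `unlearn` helper below are all illustrative assumptions. The point it demonstrates is the isolation property: because each private cluster owns its own prompt and the backbone is trained only on public data, deleting a private point only requires re-optimizing the one prompt belonging to that point's cluster.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Toy stand-in for the paper's adaptive clustering of private data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Toy private dataset: two well-separated groups of 4-d feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(10, 4)) for c in (0.0, 5.0)])

labels = kmeans(X, k=2)
# One independently tunable prompt per cluster (backbone is frozen, trained
# on public data only, so it never sees private points).
prompts = {j: X[labels == j].mean(axis=0) for j in set(labels)}

def unlearn(point_idx):
    """Deleting one private point re-tunes only its cluster's prompt."""
    j = labels[point_idx]
    keep = (labels == j) & (np.arange(len(X)) != point_idx)
    # Stand-in for re-optimizing the prompt on the remaining cluster members.
    prompts[j] = X[keep].mean(axis=0)
    return j  # the only cluster whose prompt changed

affected = unlearn(3)
```

Only the prompt of cluster `affected` is recomputed; the backbone and every other cluster's prompt are untouched, which is the source of the reported unlearning-cost reduction.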
