
Knowledge Swapping via Learning and Unlearning

2025-02-12

Mingyu Xing, Lechao Cheng, Shenggeng Tang, Yaxiong Wang, Zhun Zhong, Meng Wang


Abstract

We introduce Knowledge Swapping, a novel task designed to selectively regulate the knowledge of a pretrained model by simultaneously forgetting user-specified information, retaining essential knowledge, and acquiring new knowledge. By analyzing the feature hierarchy, we find that incremental learning typically progresses from low-level representations to higher-level semantics, whereas forgetting tends to occur in the opposite direction, starting from high-level semantics and moving down to low-level features. Building on this observation, we propose to benchmark the knowledge swapping task with a Learning Before Forgetting strategy. Comprehensive experiments on image classification, object detection, and semantic segmentation validate the effectiveness of the proposed strategy. The source code is available at https://github.com/xingmingyu123456/KnowledgeSwapping.
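The abstract's core idea is an ordering constraint: acquire the new knowledge first, then perform forgetting, so that the forgetting stage (which erodes high-level semantics first) does not disrupt the low-level features the newly learned classes depend on. The following is a minimal, hypothetical sketch of that two-stage ordering; the `learn`, `forget`, and `knowledge_swap` names and the dict-of-prototypes model abstraction are illustrative assumptions, not the paper's actual fine-tuning procedure.

```python
# Hypothetical sketch of the "Learning Before Forgetting" ordering.
# The model is abstracted as a dict mapping class name -> prototype;
# the paper's real method fine-tunes a pretrained network instead.

def learn(model, new_knowledge):
    """Stage 1: acquire new knowledge first (low-level -> high-level)."""
    model.update(new_knowledge)

def forget(model, forget_classes):
    """Stage 2: erase user-specified knowledge (high-level -> low-level)."""
    for cls in forget_classes:
        model.pop(cls, None)  # retained classes are left untouched

def knowledge_swap(model, new_knowledge, forget_classes):
    # Ordering matters: learning runs before forgetting, so the
    # forgetting stage cannot disturb features the new classes rely on.
    learn(model, new_knowledge)
    forget(model, forget_classes)
    return model

model = {"cat": [0.1], "dog": [0.2], "car": [0.3]}
knowledge_swap(model, new_knowledge={"bird": [0.4]}, forget_classes=["car"])
# model now contains "bird", no longer contains "car", and retains "cat"/"dog"
```

The sketch only captures the control flow; in the paper both stages are realized as gradient-based optimization on the respective learning, retention, and forgetting objectives.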
