Dissecting Language Models: Machine Unlearning via Selective Pruning

2024-03-02

Nicholas Pochinkov, Nandi Schoots

Abstract

Understanding and shaping the behaviour of Large Language Models (LLMs) is increasingly important as applications become more powerful and more widely adopted. This paper introduces a machine unlearning method designed specifically for LLMs: a selective pruning method that removes neurons based on their importance to a targeted capability relative to overall network performance. This approach is a compute- and data-efficient way to identify and remove the neurons that enable specific behaviours. Our findings reveal that both feed-forward and attention neurons in LLMs are specialized; that is, for specific tasks, certain neurons are more crucial than others. Code for all experiments is available at https://github.com/nickypro/selective-pruning.
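The selection criterion described in the abstract, scoring each neuron by its importance to a targeted capability relative to overall performance, might be sketched as follows. This is an illustrative assumption on my part: the function name, the activation-magnitude importance score, and the fixed pruning fraction are placeholders, and the paper defines its own importance metrics.

```python
import numpy as np

def selective_prune_mask(forget_acts, retain_acts, frac=0.02, eps=1e-8):
    """Return a boolean mask marking neurons to prune.

    Hypothetical sketch: score each neuron by its mean absolute
    activation on the targeted ("forget") data relative to its mean
    absolute activation on general ("retain") data, then mark the
    top `frac` fraction of neurons by that ratio for removal.
    """
    forget_imp = np.abs(forget_acts).mean(axis=0)  # per-neuron importance on target task
    retain_imp = np.abs(retain_acts).mean(axis=0)  # per-neuron importance overall
    ratio = forget_imp / (retain_imp + eps)        # high ratio => task-specialized neuron
    n_prune = max(1, int(frac * ratio.size))
    threshold = np.partition(ratio, -n_prune)[-n_prune]
    return ratio >= threshold

# Toy usage: 1000 samples, 64 neurons; neuron 0 fires mainly on the forget set.
rng = np.random.default_rng(0)
forget = rng.normal(size=(1000, 64))
forget[:, 0] *= 10.0
retain = rng.normal(size=(1000, 64))
mask = selective_prune_mask(forget, retain, frac=1 / 64)
print(mask[0], int(mask.sum()))  # neuron 0 is selected; exactly 1 neuron pruned
```

Pruning would then amount to zeroing the masked rows of the corresponding weight matrices; the relative score is what makes the method selective, since a neuron that is active everywhere scores near 1 and is left intact.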
