SOTAVerified

Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned Language Models

2022-11-22

Mark Rofin, Nikita Balagansky, Daniil Gavrilov


Abstract

The simplest way to obtain a continuous interpolation between two points in high-dimensional space is to draw a line between them. While previous works focused on the general connectivity between model parameters, we explored linear interpolation between the parameters of pre-trained models after fine-tuning. Surprisingly, we found that linear interpolation incurs no performance drop at intermediate points for fine-tuned models. For controllable text generation, such interpolation can be seen as moving a model towards or against a desired text attribute (e.g., positive sentiment), which provides grounds for further methods of controllable text generation without inference-speed overhead.
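The interpolation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes model parameters are stored as a dict mapping layer names to flat lists of weights, and the function name and toy values are invented for the example.

```python
def interpolate_params(theta_pre, theta_ft, alpha):
    """Linearly interpolate between two parameter sets:
    theta(alpha) = (1 - alpha) * theta_pre + alpha * theta_ft.

    alpha=0 recovers the pre-trained model, alpha=1 the fine-tuned one;
    alpha > 1 extrapolates further toward the fine-tuned attribute,
    alpha < 0 moves against it.
    Assumes both dicts share keys and per-key list lengths (illustrative).
    """
    return {
        name: [(1 - alpha) * p + alpha * f
               for p, f in zip(theta_pre[name], theta_ft[name])]
        for name in theta_pre
    }

# Toy "model" with two named parameter tensors (hypothetical values).
theta_pre = {"w": [0.0, 0.0], "b": [1.0]}
theta_ft = {"w": [2.0, 4.0], "b": [3.0]}

mid = interpolate_params(theta_pre, theta_ft, 0.5)
# mid["w"] → [1.0, 2.0]; mid["b"] → [2.0]
```

In practice the same elementwise formula would be applied to every tensor in a framework's state dict; no extra parameters are added, so inference cost is unchanged regardless of the chosen alpha.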
