SOTAVerified

Y-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning

2021-11-16 · ACL ARR November 2021

Anonymous


Abstract

With the success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Although some parameter-efficient tuning paradigms have been proposed to address this problem, they still require substantial resources to compute and store gradients during training. In this paper, we propose Y-Tuning, an efficient yet effective paradigm for adapting frozen large-scale PTMs to specific downstream tasks. Y-Tuning learns dense representations for the labels Y defined in a given task and aligns them to fixed feature representations. Since neither the input-text features nor the model parameters are tuned, Y-Tuning is both parameter-efficient and training-efficient. Although Y-Tuning does not yet match fine-tuning in performance, it offers a substantial saving in computational cost and has the potential for further improvement.
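The core idea in the abstract — a frozen encoder producing fixed features, with only dense label representations trained to align with them — can be illustrated with a minimal toy sketch. This is not the authors' implementation; the feature vectors, synthetic labels, and plain dot-product alignment below are all assumptions made for illustration.

```python
import numpy as np

# Hedged sketch of the Y-Tuning idea: the PTM is frozen, so input
# features are treated as fixed vectors, and only the dense label
# representations Y are updated during training.
rng = np.random.default_rng(0)
num_labels, dim, n = 3, 8, 30

# Stand-in for features from a frozen encoder (hypothetical data).
features = rng.normal(size=(n, dim))
# Synthetic labels realizable by a linear scorer, so the label
# representations can actually fit them in this toy setup.
true_Y = rng.normal(size=(num_labels, dim))
labels = (features @ true_Y.T).argmax(axis=1)

# Trainable label representations (the only parameters updated).
Y = rng.normal(scale=0.1, size=(num_labels, dim))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(500):
    logits = features @ Y.T  # feature/label alignment scores
    # Cross-entropy gradient w.r.t. Y only; features never change,
    # so no gradient ever flows through the frozen encoder.
    grad_Y = (softmax(logits) - np.eye(num_labels)[labels]).T @ features / n
    Y -= lr * grad_Y

accuracy = (logits.argmax(axis=1) == labels).mean()
```

The paper's actual model presumably uses a richer alignment module over full PTM feature representations; this toy version keeps only the defining property that backpropagation touches the label representations Y and never the frozen encoder.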
