AdapterHub: A Framework for Adapting Transformers
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych
Code
- github.com/Adapter-Hub/adapter-transformers (official; PyTorch; ★ 2,804)
- github.com/Adapter-Hub/Hub (official; ★ 69)
- github.com/adapter-hub/adapters (PyTorch; ★ 2,804)
- github.com/adapter-hub/playground (★ 14)
- github.com/adapter-hub/adapter-transformers-legacy (JAX; ★ 4)
- github.com/dair-iitd/zgul (JAX; ★ 3)
- github.com/mklimasz/language-arithmetic (JAX; ★ 1)
- github.com/parovicm/badx (JAX; ★ 1)
- github.com/ffaisal93/adapt_lang_phylogeny (JAX; ★ 0)
Abstract
The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of millions or billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters -- small learnt bottleneck layers inserted within each layer of a pre-trained model -- ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible using minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in low-resource scenarios. AdapterHub includes all recent adapter architectures and can be found at https://AdapterHub.ml.
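To make the abstract's central idea concrete, below is a minimal PyTorch sketch of a bottleneck adapter in the style of Houlsby et al. (2019): a down-projection, a non-linearity, and an up-projection with a residual connection, inserted after a transformer sub-layer. The class name, the ReLU activation, and the bottleneck size of 64 are illustrative assumptions, not the paper's fixed configuration; AdapterHub supports several adapter architectures and placements.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter: project down, apply a
    non-linearity, project back up, and add a residual connection.
    Only these few parameters are trained; the surrounding
    pre-trained transformer weights stay frozen."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down_proj = nn.Linear(hidden_size, bottleneck_size)
        self.up_proj = nn.Linear(bottleneck_size, hidden_size)
        self.activation = nn.ReLU()  # activation varies across adapter architectures

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the adapter close to identity at init.
        return hidden_states + self.up_proj(
            self.activation(self.down_proj(hidden_states))
        )
```

The "stitching-in" workflow the abstract describes can be sketched as follows, using the load-and-activate calls from the framework's documented quickstart. Import paths and hub identifiers have changed across library versions (the adapter-transformers fork was later succeeded by the standalone adapters package), so the adapter specifier below is illustrative; consult https://AdapterHub.ml for current usage.

```python
from adapters import AutoAdapterModel  # successor package to adapter-transformers

# Load a pre-trained model with support for adapters and flexible heads.
model = AutoAdapterModel.from_pretrained("roberta-base")

# Download a pre-trained task adapter from the Hub and activate it;
# the identifier "sentiment/sst-2@ukp" follows the original quickstart
# and is shown here as an illustrative example.
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)
```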