Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
Code
- github.com/huggingface/transformers (PyTorch) ★ 158,292
- github.com/google-research/text-to-text-transfer-transformer (referenced in paper, TensorFlow) ★ 6,496
- github.com/amazon-science/chronos-forecasting (PyTorch) ★ 4,966
- github.com/allenai/dolma ★ 1,460
- github.com/thudm/swissarmytransformer (PyTorch) ★ 1,116
- github.com/google/seqio (TensorFlow) ★ 594
- github.com/facebookresearch/atlas (PyTorch) ★ 555
- github.com/conceptofmind/lamda-rlhf-pytorch (PyTorch) ★ 470
- github.com/conceptofmind/LaMDA-pytorch (PyTorch) ★ 470
- github.com/thu-keg/omnievent (PyTorch) ★ 405
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
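As a concrete illustration of the text-to-text format described in the abstract, the sketch below (not from the paper itself; a minimal example assuming the publicly released "t5-small" checkpoint and the Hugging Face transformers implementation listed above) casts translation and summarization as plain text generation by prepending a task prefix to the input.

```python
# Minimal sketch of the text-to-text format using the Hugging Face
# `transformers` T5 implementation (assumes the public "t5-small" checkpoint).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as text in, text out: a task prefix tells the
# model which problem to solve, and the answer comes back as a string.
for prompt in [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has "
    "emerged as a powerful technique in natural language processing.",
]:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every task shares this single string-to-string interface, classification, question answering, and regression-style scoring are handled the same way, with only the task prefix and target text changing.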