SOTAVerified

The Pile: An 800GB Dataset of Diverse Text for Language Modeling

2020-12-31 · Code Available

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy


Abstract

Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets -- both existing and newly constructed -- many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. We make publicly available the code used in its construction.

Benchmark Results

| Dataset  | Model                            | Metric        | Claimed | Verified | Status     |
|----------|----------------------------------|---------------|---------|----------|------------|
| The Pile | GPT-3 Davinci 175B (pre-trained) | Bits per byte | 0.72    |          | Unverified |
| The Pile | GPT-3 Curie 6.7B (pre-trained)   | Bits per byte | 0.80    |          | Unverified |
| The Pile | GPT-3 Babbage 1.3B (pre-trained) | Bits per byte | 0.87    |          | Unverified |
| The Pile | GPT-3 Ada 350M (pre-trained)     | Bits per byte | 0.96    |          | Unverified |
| The Pile | GPT-2 XL 1.5B (pre-trained)      | Bits per byte | 1.05    |          | Unverified |
| The Pile | GPT-2 Large 774M (pre-trained)   | Bits per byte | 1.08    |          | Unverified |
| The Pile | GPT-2 Medium 355M (pre-trained)  | Bits per byte | 1.09    |          | Unverified |
| The Pile | GPT-2 Small 124M (pre-trained)   | Bits per byte | 1.23    |          | Unverified |
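The results above are reported in bits per byte (BPB), a tokenizer-agnostic way to compare language models that use different vocabularies: total cross-entropy loss is normalized by the number of UTF-8 bytes in the evaluation text rather than by token count. A minimal sketch of the conversion, assuming the model's loss is available as a total in nats (the specific numbers in the example are hypothetical, not taken from the paper):

```python
import math

def bits_per_byte(total_loss_nats: float, total_bytes: int) -> float:
    """Convert total cross-entropy (in nats) over a corpus into bits per byte.

    Dividing by ln(2) converts nats to bits; dividing by the byte count
    normalizes away tokenizer differences between models.
    """
    return total_loss_nats / (total_bytes * math.log(2))

# Hypothetical example: 1.0e6 nats of total loss over 2.0e6 bytes of text.
print(round(bits_per_byte(1.0e6, 2.0e6), 4))
```

Lower is better: a smaller BPB means the model assigns higher probability to the evaluation text per byte, which is why the larger GPT-3 models score closer to 0.72 while GPT-2 Small sits at 1.23.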