Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou
Code
- github.com/microsoft/guidance (★ 21,361)
- github.com/guidance-ai/guidance (★ 21,360)
- github.com/thudm/chatglm2-6b (PyTorch, ★ 15,640)
- github.com/srush/minichain (PyTorch, ★ 1,233)
- github.com/lupantech/chameleon-llm (★ 1,140)
- github.com/lastmile-ai/aiconfig (★ 1,082)
- github.com/rlqja1107/torch-LLM4SGG (PyTorch, ★ 116)
- github.com/scofield7419/thor-isa (PyTorch, ★ 109)
- github.com/imnearth/coat (★ 100)
- github.com/coldmist-lu/erroranalysis_prompt (★ 92)
Abstract
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
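The method described in the abstract is purely a prompting technique: a few exemplars, each pairing a question with a worked rationale ending in the answer, are concatenated before the test question. A minimal sketch of that prompt construction is below, using the paper's well-known tennis-ball exemplar; the helper name `build_cot_prompt` and the dictionary layout are illustrative assumptions, not the authors' code.

```python
# Sketch of chain-of-thought prompt construction. Each exemplar pairs a
# question with an intermediate reasoning chain before the final answer,
# so the model is encouraged to emit its own reasoning chain for the
# test question. Exemplar text is from the paper; the helper name
# build_cot_prompt is hypothetical.

EXEMPLARS = [
    {
        "question": ("Roger has 5 tennis balls. He buys 2 more cans of "
                     "tennis balls. Each can has 3 tennis balls. How many "
                     "tennis balls does he have now?"),
        "chain_of_thought": ("Roger started with 5 balls. 2 cans of 3 "
                             "tennis balls each is 6 tennis balls. "
                             "5 + 6 = 11."),
        "answer": "11",
    },
]


def build_cot_prompt(exemplars, test_question):
    """Concatenate few-shot exemplars (with rationales) and the test question."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['chain_of_thought']} The answer is {ex['answer']}."
        )
    # The prompt ends with an open answer slot for the model to complete.
    parts.append(f"Q: {test_question}\nA:")
    return "\n\n".join(parts)


prompt = build_cot_prompt(
    EXEMPLARS,
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?",
)
print(prompt)
```

The resulting string is sent to the language model as-is; standard few-shot prompting differs only in that the exemplar answers omit the intermediate reasoning chain.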
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CommonsenseQA | Chain of thought | Accuracy (%) | 28.6 | — | Unverified |