Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Code
- github.com/kojima-takeshi188/zero_shot_cot (official, in paper; PyTorch, ★ 443)
- github.com/skytliang/multi-agents-debate (★ 539)
- github.com/zongqianwu/st-cot (PyTorch, ★ 13)
- github.com/nicolay-r/reasoning-for-sentiment-analysis-framework (PyTorch, ★ 4)
Abstract
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners when given task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance on arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' few-shot learning ability, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvement with another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, broad multi-task cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the strongest minimal zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
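The method described in the abstract is a two-stage prompting pipeline: a first prompt with the trigger phrase "Let's think step by step" elicits a rationale, and a second prompt feeds that rationale back with an answer-format cue to extract the final answer. The sketch below illustrates this flow; the `complete` function and the numeric answer-parsing regex are placeholders of our choosing, not APIs from the paper or its code release.

```python
# Minimal sketch of the two-stage Zero-shot-CoT pipeline.
# `complete` is a hypothetical stand-in for any text-completion LLM call
# (e.g. text-davinci-002); wire it to a real model before use.
import re


def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("connect this to your LLM of choice")


def zero_shot_cot(question: str) -> str:
    # Stage 1: reasoning extraction. Append the trigger phrase and let
    # the model produce a step-by-step rationale.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    rationale = complete(reasoning_prompt)

    # Stage 2: answer extraction. Feed the rationale back together with
    # an answer-format cue, then parse out the final number.
    answer_prompt = (
        f"{reasoning_prompt} {rationale}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    answer_text = complete(answer_prompt)

    # Keep the first number that appears; fall back to the raw text.
    match = re.search(r"-?\d+(?:\.\d+)?", answer_text)
    return match.group(0) if match else answer_text.strip()
```

Because both stages reuse the same fixed template, the pipeline needs no per-task prompt engineering, which is what makes it a zero-shot baseline.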
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| GSM8K | text-davinci-002 (175B), zero-shot | Accuracy (%) | 10.4 | — | Unverified |
| GSM8K | Finetuned GPT-3 (175B) + verifier | Accuracy (%) | 55 | — | Unverified |
| GSM8K | text-davinci-002 (175B), Zero-Plus-Few-Shot-CoT (8 samples) | Accuracy (%) | 51.5 | — | Unverified |
| GSM8K | text-davinci-002 (175B), Few-Shot-CoT (2-shot) | Accuracy (%) | 41.3 | — | Unverified |
| GSM8K | text-davinci-002 (175B), Zero-Shot-CoT | Accuracy (%) | 40.7 | — | Unverified |
| GSM8K | PaLM (540B), few-shot | Accuracy (%) | 17.9 | — | Unverified |
| GSM8K | PaLM (540B), Few-Shot-CoT | Accuracy (%) | 58.1 | — | Unverified |
| MultiArith | text-davinci-002 (175B), Zero-Shot-CoT | Accuracy (%) | 78.7 | — | Unverified |
| MultiArith | text-davinci-002 (175B), zero-shot | Accuracy (%) | 17.7 | — | Unverified |
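Filling in the "Verified" column amounts to running the pipeline over a benchmark and scoring exact-match accuracy against gold answers. The sketch below shows one way to do that, reusing `zero_shot_cot` from above; the `(question, answer)` pair format is an assumption for illustration, not the format of any particular dataset release.

```python
# Sketch of a verification loop for the claimed accuracies, assuming a
# benchmark loaded as a list of (question, gold_answer) string pairs.
def accuracy(dataset: list[tuple[str, str]]) -> float:
    correct = 0
    for question, gold in dataset:
        pred = zero_shot_cot(question)
        # Compare numerically where possible, so "78" matches "78.0";
        # otherwise fall back to a stripped string comparison.
        try:
            correct += float(pred) == float(gold)
        except ValueError:
            correct += pred.strip() == gold.strip()
    return 100.0 * correct / len(dataset)
```

Note that parsing choices (which number to keep, how to normalize) can move reported accuracy by a few points, which is one reason independent verification of the claimed figures is worthwhile.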