
Scaling Laws for Economic Productivity: Experimental Evidence in LLM-Assisted Translation

2024-09-04

Ali Merali


Abstract

This paper derives "scaling laws"--empirical relationships between the training compute of Large Language Models (LLMs) and their performance--for economic outcomes. In a preregistered online experiment, 300 professional translators completed 1,800 tasks using one of 13 LLMs (or a control). A tenfold increase in model compute improved task completion speed by 12.3%, grades by 0.18 standard deviations, and earnings per minute by 16.1%. Gains were four times larger for lower-skilled workers. These findings suggest continued model scaling could boost U.S. productivity by at least 6.9% over the next decade.
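The headline relationship can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's estimation code: it assumes the reported effect (a 12.3% speed gain per tenfold increase in training compute) extrapolates log-linearly, which is the usual functional form for scaling laws.

```python
import math

# Reported effect from the experiment: +12.3% task completion speed
# per 10x increase in model training compute (assumed log-linear).
SPEED_GAIN_PER_DECADE = 12.3  # percentage points per 10x compute

def predicted_speed_gain(compute_ratio: float) -> float:
    """Predicted % speed improvement for a model trained with
    `compute_ratio` times the baseline compute (hypothetical sketch)."""
    if compute_ratio <= 0:
        raise ValueError("compute_ratio must be positive")
    return SPEED_GAIN_PER_DECADE * math.log10(compute_ratio)

# A 10x compute increase maps to the reported 12.3% gain;
# a 100x increase would extrapolate to roughly double that.
print(predicted_speed_gain(10))   # 12.3
print(predicted_speed_gain(100))  # 24.6
```

Under this assumed functional form, the same log-linear mapping would apply to the grade and earnings effects, each with its own slope.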
