metabench -- A Sparse Benchmark to Measure General Ability in Large Language Models

2024-07-04 · Code Available

Alex Kipnis, Konstantinos Voudouris, Luca M. Schulze Buschoff, Eric Schulz


Abstract

Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the Open LLM Leaderboard aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from n > 5000 LLMs to identify the most informative items of six benchmarks, ARC, GSM8K, HellaSwag, MMLU, TruthfulQA and WinoGrande (with d = 28,632 items in total). From them we distill a sparse benchmark, metabench, that has less than 3% of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original individual benchmark score with, on average, 1.5% root mean square error (RMSE), (2) reconstruct the original total score with 0.8% RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is r = 0.93.
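The headline numbers are standard quantities. As a rough illustration only (not the authors' code, and using purely synthetic, hypothetical data in place of real LLM scores), the reconstruction error and the factor-score correlation could be computed along these lines in Python:

    import numpy as np
    from scipy.stats import spearmanr

    def rmse(y_true, y_pred):
        # Root mean square error between original and reconstructed scores,
        # reported (as in the abstract) in percentage points.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    # Hypothetical stand-ins: one entry per LLM.
    rng = np.random.default_rng(0)
    original_total = rng.uniform(20, 90, size=5000)                        # full-benchmark total score (%)
    reconstructed_total = original_total + rng.normal(0, 0.8, size=5000)   # estimate from the sparse item set
    common_factor = (original_total - original_total.mean()) / original_total.std()  # latent ability estimate

    print(f"total-score RMSE: {rmse(original_total, reconstructed_total):.2f} points")
    rho, _ = spearmanr(common_factor, original_total)
    print(f"Spearman r(factor, total): {rho:.2f}")

In the paper itself, the reconstructions come from ability estimators fitted to the selected items' response patterns rather than from the total score directly as in this toy example; the sketch only shows how the reported RMSE and Spearman r are defined.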
