Enhancing LLM Evaluations: The Garbling Trick

2024-11-03

William F. Bradley

Abstract

As large language models (LLMs) become increasingly powerful, traditional evaluation metrics tend to saturate, making it challenging to distinguish between models. We propose a general method to transform existing LLM evaluations into a series of progressively more difficult tasks. These enhanced evaluations emphasize reasoning capabilities and can reveal relative performance differences that are not apparent in the original assessments. To demonstrate the effectiveness of our approach, we create a new multiple-choice test corpus, extend it into a family of evaluations, and assess a collection of LLMs. Our results offer insights into the comparative abilities of these models, particularly highlighting the differences between base LLMs and more recent "reasoning" models.
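The abstract does not spell out the transformation itself, but the title suggests corrupting the text of evaluation items at increasing intensities. Below is a minimal sketch of one plausible reading, assuming "garbling" means replacing each character at random with a tunable probability p; the function names, replacement rule, and probability levels here are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import random
import string

def garble(text: str, p: float, rng: random.Random) -> str:
    """Replace each alphanumeric character with a random lowercase
    letter with probability p, leaving whitespace and punctuation intact.
    (Assumed corruption rule for illustration only.)"""
    out = []
    for ch in text:
        if ch.isalnum() and rng.random() < p:
            out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(ch)
    return "".join(out)

def garbled_family(item: str, levels=(0.0, 0.1, 0.2, 0.4)) -> dict:
    """Expand one evaluation item into a family of progressively
    harder variants, one per garbling probability."""
    rng = random.Random(0)  # fixed seed so variants are reproducible
    return {p: garble(item, p, rng) for p in levels}

if __name__ == "__main__":
    question = "Which planet is known as the Red Planet?"
    for p, variant in garbled_family(question).items():
        print(f"p={p:.1f}: {variant}")
```

Sweeping p upward yields the kind of difficulty ladder the abstract describes: at p = 0 the item is unchanged, while higher rates force a model to reconstruct meaning from corrupted text, which is where reasoning-oriented models would be expected to pull away from base LLMs.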
