
BenTo: Benchmark Task Reduction with In-Context Transferability

2024-10-17 · Code Available

Hongyu Zhao, Ming Li, Lichao Sun, Tianyi Zhou


Abstract

Evaluating large language models (LLMs) is costly: it requires generating and examining LLM outputs on a large-scale benchmark of diverse tasks. This paper investigates how to efficiently reduce the tasks used to benchmark LLMs without affecting the evaluation quality. Our study reveals that task transferability and relevance provide critical information for identifying the most representative subset of tasks by optimizing a facility location function. We propose a practically efficient metric for estimating the transferability between two tasks via in-context learning (ICL). By analyzing the pairwise transferability, we can reduce the tasks in a modern LLM benchmark (e.g., MMLU or FLAN) to 5% while inducing only a <4% difference from the evaluation on the original benchmark. Compared to prior works, our method is training-free, gradient-free, and highly efficient, requiring only ICL.
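The facility-location step mentioned in the abstract can be sketched as a standard greedy submodular maximization: given a pairwise transferability matrix, repeatedly add the task whose inclusion most improves how well every task is "covered" by the selected subset. This is an illustrative sketch, not the paper's implementation; the function name `select_tasks` and the assumption of nonnegative transferability scores are mine.

```python
import numpy as np

def select_tasks(transfer: np.ndarray, k: int) -> list:
    """Greedy facility-location selection of k representative tasks.

    transfer[i, j] is assumed to score how well task j covers task i
    (e.g., an ICL-based transferability estimate, assumed nonnegative).
    """
    n = transfer.shape[0]
    selected = []
    coverage = np.zeros(n)  # best coverage each task gets so far
    for _ in range(k):
        # Marginal gain of adding candidate j:
        # sum_i max(coverage[i], transfer[i, j]) - sum_i coverage[i]
        gains = np.maximum(transfer, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf  # don't pick a task twice
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, transfer[:, best])
    return selected
```

Because the facility-location objective is monotone submodular, this greedy procedure carries the classical (1 - 1/e) approximation guarantee, which is one reason it is a natural fit for selecting a small representative task subset.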
