
∀uto∃∨∧L: Autonomous Evaluation of LLMs for Truth Maintenance and Reasoning Tasks

2024-10-11

Rushang Karia, Daniel Bramblett, Daksh Dobhal, Siddharth Srivastava

Abstract

This paper presents ∀uto∃∨∧L, the first benchmarking paradigm that offers several key advantages necessary for scaling objective evaluation of LLMs without human labeling: (a) the ability to evaluate LLMs of increasing sophistication by auto-generating tasks at different levels of difficulty; (b) auto-generation of ground truth that eliminates dependence on expensive and time-consuming human annotation; and (c) the use of automatically generated, randomized datasets that mitigate the ability of successive LLMs to overfit to the static datasets used in many contemporary benchmarks. Empirical analysis shows that an LLM's performance on ∀uto∃∨∧L is highly indicative of its performance on a diverse array of other benchmarks focusing on translation and reasoning tasks, making it a valuable autonomous evaluation paradigm in settings where hand-curated datasets can be hard to obtain and/or update.
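
As a rough illustration of the idea only (not the paper's actual implementation), the Python sketch below shows how such a paradigm might auto-generate tasks and ground truth: random propositional formulas serve as tasks, formula depth acts as a difficulty knob, and an exhaustive truth-table equivalence check grades an LLM's round-trip translation without human labels. The names `random_formula`, `equivalent`, `evaluate_llm`, and the `llm_round_trip` callable are hypothetical assumptions introduced here for illustration.

```python
import itertools
import random

# Hypothetical sketch of an auto-generated, self-grading benchmark.
# Tasks and ground truth are produced programmatically, so no human
# annotation is needed and datasets can be freshly randomized each run.

OPS = ["and", "or"]

def random_formula(variables, depth):
    """Auto-generate a random propositional formula (task); depth controls difficulty."""
    if depth == 0:
        var = random.choice(variables)
        return var if random.random() < 0.5 else f"not {var}"
    left = random_formula(variables, depth - 1)
    right = random_formula(variables, depth - 1)
    return f"({left} {random.choice(OPS)} {right})"

def equivalent(f1, f2, variables):
    """Auto-generated ground truth: exhaustive truth-table equivalence check."""
    for values in itertools.product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if eval(f1, {}, env) != eval(f2, {}, env):
            return False
    return True

def evaluate_llm(llm_round_trip, n_tasks=100, depth=3):
    """Score the fraction of formulas the model preserves under a
    formal -> natural language -> formal round trip.
    `llm_round_trip` is a hypothetical callable wrapping the model under test."""
    variables = ["p", "q", "r"]
    correct = 0
    for _ in range(n_tasks):
        original = random_formula(variables, depth)            # auto-generated task
        recovered = llm_round_trip(original)                    # model's output
        correct += equivalent(original, recovered, variables)   # automatic grading
    return correct / n_tasks
```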
