The Comparative Trap: Pairwise Comparisons Amplifies Biased Preferences of LLM Evaluators

2024-06-18

Hawon Jeong, ChaeHun Park, Jimin Hong, Hojoon Lee, Jaegul Choo

Abstract

As large language models (LLMs) are increasingly used as evaluators for natural language generation tasks, ensuring unbiased assessments is essential. However, LLM evaluators often display biased preferences, such as favoring verbosity and authoritative tones. Our empirical analysis reveals that these biases are exacerbated in pairwise evaluation, where LLMs directly compare two outputs and easily prioritize superficial attributes. In contrast, pointwise evaluation, which assesses outputs independently, is less susceptible to such biases because each output is judged in isolation. To address the limitations of pairwise evaluation, we introduce a novel evaluation method, PRePair, which integrates pointwise reasoning within a pairwise framework. PRePair effectively alleviates biased preferences, improving performance on the adversarial benchmark (LLMBar) while outperforming pointwise evaluation on the standard benchmark (MT-Bench).
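The abstract's core idea — critique each output in isolation first, then compare — can be sketched as follows. This is a minimal illustration of the general pointwise-then-pairwise pattern, not the paper's exact prompts or implementation; the `call_llm` helper and all prompt wording are assumptions.

```python
# Hedged sketch of a PRePair-style judge: pointwise reasoning first, then a
# single pairwise decision. Prompt texts and the `call_llm` stand-in are
# illustrative assumptions, not the authors' implementation.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError("plug in a real LLM client here")

def prepair_judge(instruction: str, output_a: str, output_b: str,
                  llm=call_llm) -> str:
    # Step 1 (pointwise): critique each output in isolation, so superficial
    # attributes of the rival candidate cannot anchor the reasoning.
    critique_a = llm(f"Instruction: {instruction}\nOutput: {output_a}\n"
                     "Assess this output on its own merits.")
    critique_b = llm(f"Instruction: {instruction}\nOutput: {output_b}\n"
                     "Assess this output on its own merits.")
    # Step 2 (pairwise): decide between the two, conditioning on both
    # independent critiques rather than on the raw outputs alone.
    verdict = llm(f"Instruction: {instruction}\n"
                  f"Output A: {output_a}\nCritique of A: {critique_a}\n"
                  f"Output B: {output_b}\nCritique of B: {critique_b}\n"
                  "Which output better follows the instruction? Answer A or B.")
    return verdict.strip()
```

Injecting `llm` as a parameter keeps the judging logic testable with a stubbed model.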
