
SCE: Scalable Consistency Ensembles Make Blackbox Large Language Model Generation More Reliable

2025-03-13

Jiaxin Zhang, Zhuohang Li, Wendi Cui, Kamalika Das, Bradley Malin, Sricharan Kumar


Abstract

Large language models (LLMs) have demonstrated remarkable performance, yet their diverse strengths and weaknesses prevent any single LLM from achieving dominance across all tasks. Ensembling multiple LLMs is a promising approach to generating reliable responses, but conventional ensembling frameworks suffer from high computational overhead. This work introduces Scalable Consistency Ensemble (SCE), an efficient framework for ensembling LLMs by prompting consistent outputs. The SCE framework systematically evaluates and integrates outputs to produce a cohesive result through two core components: SCE-CHECK, a mechanism that gauges the consistency between response pairs via semantic equivalence; and SCE-FUSION, which adeptly merges the highest-ranked consistent responses from SCE-CHECK to optimize collective strengths and mitigate potential weaknesses. To improve scalability across multiple inference queries, we further propose "You Only Prompt Once" (YOPO), a novel technique that reduces the inference complexity of pairwise comparison from quadratic to constant time. We perform extensive empirical evaluations on diverse benchmark datasets to demonstrate SCE's effectiveness. Notably, SCE outperforms conventional baselines with enhanced performance and a significant reduction in computational overhead.
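To make the two components concrete, below is a minimal Python sketch of the pipeline the abstract describes. The `llm` callable, the prompt wording, the yes/no verdict parsing, and the `top_k` cutoff are all illustrative assumptions rather than the paper's actual prompts or interface; the point it demonstrates is that YOPO packs every pairwise comparison into a single prompt, so the number of inference calls stays constant instead of growing quadratically with the number of responses.

```python
from itertools import combinations

def sce_check_yopo(llm, responses):
    """SCE-CHECK via YOPO: score pairwise semantic consistency with a single
    inference call, instead of one call per pair (quadratic -> constant)."""
    pairs = list(combinations(range(len(responses)), 2))
    listing = "\n".join(
        f"{k}. A: {responses[i]} | B: {responses[j]}"
        for k, (i, j) in enumerate(pairs)
    )
    prompt = (
        "For each numbered pair of answers below, say whether A and B are "
        "semantically equivalent. Output one line per pair, formatted as "
        "'<number>: yes' or '<number>: no'.\n\n" + listing
    )
    reply = llm(prompt)  # one prompt covers every pair: "You Only Prompt Once"
    scores = [0.0] * len(responses)  # consistency votes per response
    for line in reply.splitlines():
        num, _, verdict = line.partition(":")
        if num.strip().isdigit() and int(num) < len(pairs):
            if verdict.strip().lower().startswith("yes"):
                i, j = pairs[int(num)]
                scores[i] += 1.0
                scores[j] += 1.0
    return scores

def sce_fusion(llm, responses, scores, top_k=3):
    """SCE-FUSION: merge the highest-ranked consistent responses into one answer."""
    ranked = sorted(zip(scores, responses), reverse=True)[:top_k]
    candidates = "\n".join(f"- {r}" for _, r in ranked)
    prompt = (
        "Fuse the following candidate answers into a single reliable response, "
        "preserving the points on which they agree:\n" + candidates
    )
    return llm(prompt)

# Usage: collect one response per ensembled model, then check and fuse.
# responses = [model(query) for model in ensemble]   # hypothetical ensemble
# final = sce_fusion(judge_llm, responses, sce_check_yopo(judge_llm, responses))
```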
