CLUE: Concept-Level Uncertainty Estimation for Large Language Models

2024-09-04

Yu-Hsiang Wang, Andrew Bai, Che-Ping Tsai, Cho-Jui Hsieh


Abstract

Large Language Models (LLMs) have demonstrated remarkable proficiency in various natural language generation (NLG) tasks. Previous studies suggest that the LLM generation process involves uncertainty. However, existing approaches to uncertainty estimation focus mainly on sequence-level uncertainty, overlooking the individual pieces of information within a sequence, and thus cannot assess the uncertainty of each component separately. In response, we propose a novel framework for Concept-Level Uncertainty Estimation (CLUE) for LLMs. We leverage LLMs to convert output sequences into concept-level representations, breaking each sequence down into individual concepts and measuring the uncertainty of each concept separately. Our experiments demonstrate that CLUE provides more interpretable uncertainty estimates than sentence-level uncertainty and can serve as a useful tool for tasks such as hallucination detection and story generation.
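The abstract describes the pipeline only at a high level. A minimal sketch of how such a framework could be wired up is below; it is not the authors' implementation. The `sample_response`, `extract_concepts`, and `entails` callables are hypothetical placeholders (a stochastic LLM generation call, an LLM-prompted concept decomposition, and an NLI-style entailment scorer), and the negative-log-support score is one plausible choice of per-concept uncertainty, not necessarily the paper's exact formulation.

```python
import math
from typing import Callable, Dict, List


def concept_level_uncertainty(
    prompt: str,
    sample_response: Callable[[str], str],         # hypothetical: one stochastic LLM generation
    extract_concepts: Callable[[str], List[str]],  # hypothetical: LLM-based concept decomposition
    entails: Callable[[str, str], float],          # hypothetical: P(premise entails hypothesis), e.g. via NLI
    n_samples: int = 10,
) -> Dict[str, float]:
    """Sketch of concept-level uncertainty: decompose one response into
    concepts, then score each concept against extra sampled responses."""
    # Generate a primary response and break it into individual concepts.
    main_response = sample_response(prompt)
    concepts = extract_concepts(main_response)

    # Draw additional responses to serve as evidence for each concept.
    samples = [sample_response(prompt) for _ in range(n_samples)]

    scores: Dict[str, float] = {}
    for concept in concepts:
        # Average entailment probability of the concept across samples:
        # low support across samples means high uncertainty for that concept.
        support = sum(entails(s, concept) for s in samples) / n_samples
        scores[concept] = -math.log(max(support, 1e-12))
    return scores
```

Under this reading, a sequence-level score would assign one number to the whole response, whereas the returned dictionary exposes which specific concepts the model is uncertain about, which is what makes per-concept use cases such as hallucination detection possible.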
