
Cognitive Mirage: A Review of Hallucinations in Large Language Models

2023-09-13

Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, Weiqiang Jia


Abstract

As large language models (LLMs) continue to develop in the field of AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we summarize recent compelling insights into hallucinations in LLMs. We present a novel taxonomy of hallucinations across various text generation tasks, along with theoretical insights, detection methods, and improvement approaches. Based on this, we propose future research directions. Our contributions are threefold: (1) we provide a detailed and complete taxonomy of the hallucinations that appear in text generation tasks; (2) we provide theoretical analyses of hallucinations in LLMs together with existing detection and improvement methods; and (3) we propose several research directions that can be pursued in the future. As hallucinations garner significant attention from the community, we will continue to update this review with relevant research progress.
