| Logical Consistency of Large Language Models in Fact-checking | Dec 20, 2024 | Fact Checking, Hallucination | —Unverified | 0 | 0 |
| Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | Jul 16, 2023 | Code Generation, Hallucination | —Unverified | 0 | 0 |
| Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | Feb 26, 2024 | Decision Making, Hallucination | —Unverified | 0 | 0 |
| Look Within, Why LLMs Hallucinate: A Causal Perspective | Jul 14, 2024 | Hallucination, Reading Comprehension | —Unverified | 0 | 0 |
| Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | Feb 18, 2025 | Automatic Speech Recognition (ASR), Hallucination | —Unverified | 0 | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | Aug 16, 2024 | Hallucination, TruthfulQA | —Unverified | 0 | 0 |
| Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training | Apr 17, 2025 | Caption Generation, Hallucination | —Unverified | 0 | 0 |
| LR-to-HR Face Hallucination with an Adversarial Progressive Attribute-Induced Network | Sep 29, 2021 | Attribute, Face Hallucination | —Unverified | 0 | 0 |
| Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost | Jun 3, 2024 | Hallucination, Language Modeling | —Unverified | 0 | 0 |
| Lynx: An Open Source Hallucination Evaluation Model | Jul 11, 2024 | Hallucination, Hallucination Evaluation | —Unverified | 0 | 0 |