| Title | Date | Topics |
| --- | --- | --- |
| Lifelong Neural Topic Learning in Contextualized Autoregressive Topic Models of Language via Informative Transfers | Sep 29, 2019 | Data Augmentation, Hallucination |
| Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | Oct 8, 2024 | Dialogue Generation, Hallucination |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | May 25, 2025 | Hallucination, Knowledge Editing |
| LLM Agents for Education: Advances and Applications | Mar 14, 2025 | Fairness, Hallucination |
| LLM-Align: Utilizing Large Language Models for Entity Alignment in Knowledge Graphs | Dec 6, 2024 | Entity Alignment, Entity Embeddings |
| INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | Nov 17, 2024 | Anomaly Detection, Hallucination |
| LLM Hallucination Reasoning with Zero-shot Knowledge Test | Nov 14, 2024 | Hallucination |
| LLM-Powered Agents for Navigating Venice's Historical Cadastre | May 22, 2025 | Hallucination, Natural Language Queries |
| LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG | Nov 7, 2024 | Hallucination, RAG |
| LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | Sep 19, 2024 | Autonomous Driving, Hallucination |
| LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought | May 9, 2024 | Hallucination, Math |
| LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | Mar 16, 2025 | Data Augmentation, Hallucination |
| LLMs Prompted for Graphs: Hallucinations and Generative Capabilities | Aug 30, 2024 | Diversity, Hallucination |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination |
| LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | Jan 3, 2025 | Hallucination, Zero-Shot Classification |
| LLMs Will Always Hallucinate, and We Need to Live With This | Sep 9, 2024 | Fact Checking, Hallucination |
| LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | Jul 1, 2024 | Hallucination, Uncertainty Quantification |
| LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models | Oct 2, 2024 | Hallucination |
| Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | Apr 30, 2025 | Hallucination, Hallucination Evaluation |
| Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs | May 22, 2025 | Hallucination |
| Logical Consistency of Large Language Models in Fact-checking | Dec 20, 2024 | Fact Checking, Hallucination |
| Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | Jul 16, 2023 | Code Generation, Hallucination |
| Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | Feb 26, 2024 | Decision Making, Hallucination |
| Look Within, Why LLMs Hallucinate: A Causal Perspective | Jul 14, 2024 | Hallucination, Reading Comprehension |
| Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | Feb 18, 2025 | Automatic Speech Recognition (ASR) |