| Lifelong Neural Topic Learning in Contextualized Autoregressive Topic Models of Language via Informative Transfers | Sep 29, 2019 | Data Augmentation, Hallucination | —Unverified | 0 | 0 |
| Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | Oct 8, 2024 | Dialogue Generation, Hallucination | —Unverified | 0 | 0 |
| LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models | May 25, 2025 | Hallucination, knowledge editing | —Unverified | 0 | 0 |
| LLM Agents for Education: Advances and Applications | Mar 14, 2025 | Fairness, Hallucination | —Unverified | 0 | 0 |
| LLM-Align: Utilizing Large Language Models for Entity Alignment in Knowledge Graphs | Dec 6, 2024 | Entity Alignment, Entity Embeddings | —Unverified | 0 | 0 |
| INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | Nov 17, 2024 | Anomaly Detection, Hallucination | —Unverified | 0 | 0 |
| LLM Hallucination Reasoning with Zero-shot Knowledge Test | Nov 14, 2024 | Hallucination | —Unverified | 0 | 0 |
| LLM-Powered Agents for Navigating Venice's Historical Cadastre | May 22, 2025 | Hallucination, Natural Language Queries | —Unverified | 0 | 0 |
| LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG | Nov 7, 2024 | Hallucination, RAG | —Unverified | 0 | 0 |
| LLMs Can Check Their Own Results to Mitigate Hallucinations in Traffic Understanding Tasks | Sep 19, 2024 | Autonomous Driving, Hallucination | —Unverified | 0 | 0 |
| LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought | May 9, 2024 | Hallucination, Math | —Unverified | 0 | 0 |
| LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation | Mar 16, 2025 | Data Augmentation, Hallucination | —Unverified | 0 | 0 |
| LLMs Prompted for Graphs: Hallucinations and Generative Capabilities | Aug 30, 2024 | Diversity, Hallucination | —Unverified | 0 | 0 |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination | —Unverified | 0 | 0 |
| LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | Jan 3, 2025 | Hallucination, zero-shot-classification | —Unverified | 0 | 0 |
| LLMs Will Always Hallucinate, and We Need to Live With This | Sep 9, 2024 | Fact Checking, Hallucination | —Unverified | 0 | 0 |
| LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation | Jul 1, 2024 | Hallucination, Uncertainty Quantification | —Unverified | 0 | 0 |
| LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models | Oct 2, 2024 | Hallucination | —Unverified | 0 | 0 |
| Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | Apr 30, 2025 | Hallucination, Hallucination Evaluation | —Unverified | 0 | 0 |
| Locate-then-Merge: Neuron-Level Parameter Fusion for Mitigating Catastrophic Forgetting in Multimodal LLMs | May 22, 2025 | Hallucination | —Unverified | 0 | 0 |
| Logical Consistency of Large Language Models in Fact-checking | Dec 20, 2024 | Fact Checking, Hallucination | —Unverified | 0 | 0 |
| Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models | Jul 16, 2023 | Code Generation, Hallucination | —Unverified | 0 | 0 |
| Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models | Feb 26, 2024 | Decision Making, Hallucination | —Unverified | 0 | 0 |
| Look Within, Why LLMs Hallucinate: A Causal Perspective | Jul 14, 2024 | Hallucination, Reading Comprehension | —Unverified | 0 | 0 |
| Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models | Feb 18, 2025 | Automatic Speech Recognition, Automatic Speech Recognition (ASR) | —Unverified | 0 | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | Aug 16, 2024 | Hallucination, TruthfulQA | —Unverified | 0 | 0 |
| Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training | Apr 17, 2025 | Caption Generation, Hallucination | —Unverified | 0 | 0 |
| LR-to-HR Face Hallucination with an Adversarial Progressive Attribute-Induced Network | Sep 29, 2021 | Attribute, Face Hallucination | —Unverified | 0 | 0 |
| Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost | Jun 3, 2024 | Hallucination, Language Modeling | —Unverified | 0 | 0 |
| Lynx: An Open Source Hallucination Evaluation Model | Jul 11, 2024 | Hallucination, Hallucination Evaluation | —Unverified | 0 | 0 |
| M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced Video-grounded Dialogue Generation | Feb 19, 2024 | counterfactual, Dialogue Generation | —Unverified | 0 | 0 |
| Machine learning techniques for the Schizophrenia diagnosis: A comprehensive review and future research directions | Jan 16, 2023 | EEG, Electroencephalogram (EEG) | —Unverified | 0 | 0 |
| Machine Mirages: Defining the Undefined | Jun 3, 2025 | Causal Inference, Hallucination | —Unverified | 0 | 0 |
| MAC-Tuning: LLM Multi-Compositional Problem Reasoning with Enhanced Knowledge Boundary Awareness | Apr 30, 2025 | Hallucination | —Unverified | 0 | 0 |
| Magic Mushroom: A Customizable Benchmark for Fine-grained Analysis of Retrieval Noise Erosion in RAG Systems | Jun 4, 2025 | Denoising, Hallucination | —Unverified | 0 | 0 |
| Magnifier Prompt: Tackling Multimodal Hallucination via Extremely Simple Instructions | Oct 15, 2024 | Hallucination | —Unverified | 0 | 0 |
| MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection | Mar 1, 2024 | Data Augmentation, Hallucination | —Unverified | 0 | 0 |
| Manipulating Attributes of Natural Scenes via Hallucination | Aug 22, 2018 | Hallucination, Style Transfer | —Unverified | 0 | 0 |
| MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation | Aug 19, 2024 | Diversity, Explainable Recommendation | —Unverified | 0 | 0 |
| Map&Make: Schema Guided Text to Table Generation | May 29, 2025 | Hallucination, Information Retrieval | —Unverified | 0 | 0 |
| MARCO: Multi-Agent Real-time Chat Orchestration | Oct 29, 2024 | Hallucination, Language Modeling | —Unverified | 0 | 0 |
| MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations | Mar 20, 2025 | Hallucination, Video Understanding | —Unverified | 0 | 0 |
| MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection | May 29, 2024 | Abstract Meaning Representation, Hallucination | —Unverified | 0 | 0 |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | Mar 7, 2025 | Attribute, Hallucination | —Unverified | 0 | 0 |
| Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | Nov 26, 2024 | Hallucination, Logical Reasoning | —Unverified | 0 | 0 |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | Jan 24, 2025 | Caption Generation, Dataset Generation | —Unverified | 0 | 0 |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning | —Unverified | 0 | 0 |
| Measuring Faithfulness and Abstention: An Automated Pipeline for Evaluating LLM-Generated 3-ply Case-Based Legal Arguments | May 31, 2025 | Hallucination | —Unverified | 0 | 0 |
| Measuring text summarization factuality using atomic facts entailment metrics in the context of retrieval augmented generation | Aug 27, 2024 | Hallucination, Retrieval-augmented Generation | —Unverified | 0 | 0 |
| Measuring the Inconsistency of Large Language Models in Preferential Ranking | Oct 11, 2024 | Diagnostic, Hallucination | —Unverified | 0 | 0 |