| Title | Date | Tags |
| --- | --- | --- |
| MARCO: Multi-Agent Real-time Chat Orchestration | Oct 29, 2024 | Hallucination, Language Modeling |
| MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations | Mar 20, 2025 | Hallucination, Video Understanding |
| MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection | May 29, 2024 | Abstract Meaning Representation, Hallucination |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | Mar 7, 2025 | Attribute, Hallucination |
| Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | Nov 26, 2024 | Hallucination, Logical Reasoning |
| Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | Jan 24, 2025 | Caption Generation, Dataset Generation |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning |
| Measuring Faithfulness and Abstention: An Automated Pipeline for Evaluating LLM-Generated 3-ply Case-Based Legal Arguments | May 31, 2025 | Hallucination |
| Measuring text summarization factuality using atomic facts entailment metrics in the context of retrieval augmented generation | Aug 27, 2024 | Hallucination, Retrieval-augmented Generation |
| Measuring the Inconsistency of Large Language Models in Preferential Ranking | Oct 11, 2024 | Diagnostic, Hallucination |