SOTAVerified

Hallucination Papers

Showing 1691–1700 of 1816 papers

Title | Status | Hype
MARCO: Multi-Agent Real-time Chat Orchestration | | 0
MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations | | 0
MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection | | 0
Maximum Hallucination Standards for Domain-Specific Large Language Models | | 0
Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | | 0
Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | | 0
Measuring and Reducing LLM Hallucination without Gold-Standard Answers | | 0
Measuring Faithfulness and Abstention: An Automated Pipeline for Evaluating LLM-Generated 3-ply Case-Based Legal Arguments | | 0
Measuring text summarization factuality using atomic facts entailment metrics in the context of retrieval augmented generation | | 0
Measuring the Inconsistency of Large Language Models in Preferential Ranking | | 0
Page 170 of 182

No leaderboard results yet.