SOTAVerified

Hallucination Papers

Showing 1531–1540 of 1816 papers

| Title | Status | Hype |
| --- | --- | --- |
| How Language Model Hallucinations Can Snowball | Code | 1 |
| Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method | Code | 1 |
| Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources | Code | 1 |
| Scene Graph as Pivoting: Inference-time Image-free Unsupervised Multimodal Machine Translation with Visual Scene Hallucination | Code | 1 |
| HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models | Code | 2 |
| HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation | Code | 2 |
| RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought | | 0 |
| Appraising the Potential Uses and Harms of LLMs for Medical Systematic Reviews | Code | 0 |
| Evaluating Object Hallucination in Large Vision-Language Models | Code | 2 |
| Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation | Code | 1 |
Page 154 of 182
