SOTAVerified

Hallucination Papers

Showing 391–400 of 1816 papers

Title | Status | Hype
Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations | Code | 1
Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Code | 1
Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources | Code | 1
A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs | Code | 1
BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models | Code | 1
Face Hallucination via Split-Attention in Split-Attention Network | Code | 1
FAIR GPT: A virtual consultant for research data management in ChatGPT | Code | 1
Prevent the Language Model from being Overconfident in Neural Machine Translation | Code | 1
EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Code | 1
Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation | Code | 1
