SOTAVerified

Hallucination Papers

Showing 1021–1030 of 1816 papers

Title | Status | Hype
----- | ------ | ----
FactBench: A Dynamic Benchmark for In-the-Wild Language Model Factuality Evaluation | | 0
Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification | | 0
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs | | 0
FACTOID: FACtual enTailment fOr hallucInation Detection | | 0
FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs | | 0
Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales | | 0
Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities | | 0
Feature Hallucination for Self-supervised Action Recognition | | 0
Less for More: Enhanced Feedback-aligned Mixed LLMs for Molecule Caption Generation and Fine-Grained NLI Evaluation | | 0
Fewer Truncations Improve Language Modeling | | 0
Page 103 of 182
