| Title | Date | Tags | Status | Count |
| --- | --- | --- | --- | --- |
| FactBench: A Dynamic Benchmark for In-the-Wild Language Model Factuality Evaluation | Oct 29, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification | Mar 7, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs | Oct 3, 2024 | Hallucination | Unverified | 0 |
| FACTOID: FACtual enTailment fOr hallucInation Detection | Mar 28, 2024 | Avg, Hallucination | Unverified | 0 |
| FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs | Mar 21, 2025 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales | Apr 17, 2024 | Hallucination | Unverified | 0 |
| Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities | Apr 25, 2024 | DeepFake Detection, Face Swapping | Unverified | 0 |
| Feature Hallucination for Self-supervised Action Recognition | Jun 25, 2025 | Action Recognition, Hallucination | Unverified | 0 |
| Less for More: Enhanced Feedback-aligned Mixed LLMs for Molecule Caption Generation and Fine-Grained NLI Evaluation | May 22, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| Fewer Truncations Improve Language Modeling | Apr 16, 2024 | Combinatorial Optimization, Hallucination | Unverified | 0 |