| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Chain-of-Verification Reduces Hallucination in Large Language Models | Sep 20, 2023 | Hallucination, Text Generation | Code Available |
| HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Feb 25, 2024 | Benchmarking, Chatbot | Code Available |
| Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework | Sep 24, 2024 | Benchmarking, Counterfactual | Code Available |
| On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization | Mar 9, 2024 | Hallucination, Text Summarization | Code Available |
| How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation? | Aug 1, 2021 | Domain Adaptation, Hallucination | Code Available |
| How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Feb 18, 2025 | Articles, Hallucination | Code Available |
| Evolutionary thoughts: integration of large language models and evolutionary algorithms | May 9, 2025 | Evolutionary Algorithms, Hallucination | Code Available |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Nov 15, 2023 | Ethics, Fairness | Code Available |
| Im2Avatar: Colorful 3D Reconstruction from a Single Image | Apr 17, 2018 | 3D Reconstruction, Hallucination | Code Available |
| Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Nov 15, 2023 | Hallucination, Retrieval | Code Available |