| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| A Survey of Hallucination in Large Foundation Models | Sep 12, 2023 | Hallucination, Survey | Code Available |
| Label Hallucination for Few-Shot Classification | Dec 6, 2021 | Classification, Few-Shot Learning | Code Available |
| AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Nov 11, 2024 | Decision Making, Hallucination | Code Available |
| CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools | Jul 28, 2023 | Hallucination | Code Available |
| Accuracy and Political Bias of News Source Credibility Ratings by Large Language Models | Apr 1, 2023 | Fact Checking, Hallucination | Code Available |
| DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Jun 9, 2024 | Common Sense Reasoning, Denoising | Code Available |
| Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy | Feb 25, 2024 | Hallucination, Sentence | Code Available |
| Detecting Hallucinated Content in Conditional Neural Sequence Generation | Nov 5, 2020 | Abstractive Text Summarization, Hallucination | Code Available |
| Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Apr 22, 2024 | Attribute, Hallucination | Code Available |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Aug 11, 2023 | 16k, Hallucination | Code Available |