| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Evaluating the Quality of Hallucination Benchmarks for Large Vision-Language Models | Jun 24, 2024 | Hallucination | Code Available |
| Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond | Jun 16, 2023 | Benchmarking, Evidence Selection | Code Available |
| Evaluation and Analysis of Hallucination in Large Vision-Language Models | Aug 29, 2023 | Hallucination, Hallucination Evaluation | Code Available |
| AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation | Oct 4, 2023 | Hallucination, Text Generation | Code Available |
| EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Sep 25, 2024 | Hallucination, Instruction Following | Code Available |
| Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers | Oct 16, 2023 | 16k, Hallucination | Code Available |
| Entity-level Factual Consistency of Abstractive Text Summarization | Feb 18, 2021 | Abstractive Text Summarization, Hallucination | Code Available |
| Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback | Apr 22, 2024 | Attribute, Hallucination | Code Available |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Aug 11, 2023 | 16k, Hallucination | Code Available |
| Entity-Based Knowledge Conflicts in Question Answering | Sep 10, 2021 | Hallucination, Out-of-Distribution Generalization | Code Available |