| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | Feb 18, 2024 | Hallucination, Visual Question Answering | Unverified | 0 |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | Feb 16, 2024 | Hallucination, Information Retrieval | Unverified | 0 |
| Towards Uncovering How Large Language Model Works: An Explainability Perspective | Feb 16, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Comparing Hallucination Detection Metrics for Multilingual Generation | Feb 16, 2024 | Hallucination, Natural Language Inference | Unverified | 0 |
| Using Hallucinations to Bypass GPT4's Filter | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | Feb 16, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination | Unverified | 0 |
| Visually Dehallucinative Instruction Generation: Know What You Don't Know | Feb 15, 2024 | Hallucination, Language Modeling | Code Available | 0 |