| Title | Date | Topics | Code | |
|---|---|---|---|---|
| Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | Oct 9, 2023 | Hallucination, Object | Unverified | 0 |
| The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations | Oct 8, 2023 | Hallucination | Unverified | 0 |
| Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning | Oct 7, 2023 | Hallucination, In-Context Learning | Unverified | 0 |
| AutoHall: Automated Hallucination Dataset Generation for Large Language Models | Sep 30, 2023 | Dataset Generation, Fact Checking | Unverified | 0 |
| Self-Specialization: Uncovering Latent Expertise within Large Language Models | Sep 29, 2023 | Hallucination, Instruction Following | Unverified | 0 |
| Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving | Sep 28, 2023 | Hallucination, Question Answering | Unverified | 0 |
| Hallucination Reduction in Long Input Text Summarization | Sep 28, 2023 | Decoder, Hallucination | Code Available | 0 |
| Augmenting LLMs with Knowledge: A survey on hallucination prevention | Sep 28, 2023 | Hallucination, Language Modeling | Unverified | 0 |
| Aligning Large Multimodal Models with Factually Augmented RLHF | Sep 25, 2023 | Hallucination, Image Captioning | Unverified | 0 |
| Chain-of-Verification Reduces Hallucination in Large Language Models | Sep 20, 2023 | Hallucination, Text Generation | Code Available | 0 |