| Localizing and Mitigating Errors in Long-form Question Answering | Jul 16, 2024 | Form, Hallucination | Code Available | 0 |
| What's Wrong? Refining Meeting Summaries with LLM Feedback | Jul 16, 2024 | Hallucination, Informativeness | Code Available | 0 |
| Addressing Image Hallucination in Text-to-Image Generation through Factual Image Retrieval | Jul 15, 2024 | Common Sense Reasoning, Hallucination | Unverified | 0 |
| GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation Framework | Jul 15, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| Look Within, Why LLMs Hallucinate: A Causal Perspective | Jul 14, 2024 | Hallucination, Reading Comprehension | Unverified | 0 |
| On Mitigating Code LLM Hallucinations with API Documentation | Jul 13, 2024 | Hallucination, valid | Unverified | 0 |
| Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues | Jul 13, 2024 | Diversity, Hallucination | Unverified | 0 |
| The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs | Jul 12, 2024 | Hallucination | Unverified | 0 |
| Mitigating Entity-Level Hallucination in Large Language Models | Jul 12, 2024 | Hallucination, Information Retrieval | Code Available | 0 |
| DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection | Jul 12, 2024 | fr-en, Hallucination | Unverified | 0 |