| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| On Mitigating Code LLM Hallucinations with API Documentation | Jul 13, 2024 | Hallucination | Unverified | 0 |
| DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection | Jul 12, 2024 | fr-en, Hallucination | Unverified | 0 |
| Mitigating Entity-Level Hallucination in Large Language Models | Jul 12, 2024 | Hallucination, Information Retrieval | Code Available | 0 |
| The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs | Jul 12, 2024 | Hallucination | Unverified | 0 |
| On the Universal Truthfulness Hyperplane Inside LLMs | Jul 11, 2024 | Diversity, Domain Generalization | Code Available | 0 |
| Lynx: An Open Source Hallucination Evaluation Model | Jul 11, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models | Jul 10, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Learning with Instance-Dependent Noisy Labels by Anchor Hallucination and Hard Sample Label Correction | Jul 10, 2024 | Hallucination | Unverified | 0 |
| Fuse, Reason and Verify: Geometry Problem Solving with Parsed Clauses from Diagram | Jul 10, 2024 | Decoder, Geometry Problem Solving | Unverified | 0 |
| Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps | Jul 9, 2024 | Articles, Hallucination | Code Available | 2 |