| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Oct 8, 2024 | Hallucination, Image Restoration | Code Available | 2 |
| Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models | Oct 8, 2024 | Hallucination, Overall - Test | Unverified | 0 |
| Listening to Patients: A Framework of Detecting and Mitigating Patient Misreport for Medical Dialogue Generation | Oct 8, 2024 | Dialogue Generation, Hallucination | Unverified | 0 |
| FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning | Oct 8, 2024 | GSM8K, Hallucination | Unverified | 0 |
| Differential Transformer | Oct 7, 2024 | Hallucination, In-Context Learning | Code Available | 2 |
| Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Oct 7, 2024 | Causal Inference, Counterfactual | Code Available | 2 |
| AI-Enhanced Ethical Hacking: A Linux-Focused Experiment | Oct 7, 2024 | Hallucination | Unverified | 0 |
| TLDR: Token-Level Detective Reward Model for Large Vision Language Models | Oct 7, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support | Oct 6, 2024 | Hallucination | Unverified | 0 |
| DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination | Oct 6, 2024 | Attribute, Decoder | Unverified | 0 |