| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Mitigating Object Hallucination via Concentric Causal Attention | Oct 21, 2024 | Hallucination, Object | Code Available |
| Reducing Hallucinations in Vision-Language Models via Latent Space Steering | Oct 21, 2024 | Hallucination | Code Available |
| MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation | Oct 15, 2024 | Hallucination, Language Modeling | Code Available |
| VideoAgent: Self-Improving Video Generation | Oct 14, 2024 | Hallucination, Video Generation | Code Available |
| ReFIR: Grounding Large Restoration Models with Retrieval Augmentation | Oct 8, 2024 | Hallucination, Image Restoration | Code Available |
| Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | Oct 7, 2024 | Causal Inference, Counterfactual | Code Available |
| Differential Transformer | Oct 7, 2024 | Hallucination, In-Context Learning | Code Available |
| Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Oct 4, 2024 | Decoder, Hallucination | Code Available |
| FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" | Sep 30, 2024 | Counterfactual, Hallucination | Code Available |
| SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Aug 11, 2024 | Hallucination, Image Super-Resolution | Code Available |