| Title | Date | Tags | Code | Stars |
| --- | --- | --- | --- | --- |
| Understanding Multimodal Hallucination with Parameter-Free Representation Alignment | Sep 2, 2024 | Hallucination, Object | Code Available | 0 |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination | Unverified | 0 |
| Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination | May 28, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | May 28, 2024 | Hallucination, MME | Unverified | 0 |
| Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | May 27, 2024 | Hallucination, Object Hallucination | Code Available | 0 |
| ALOHa: A New Measure for Hallucination in Captioning Models | Apr 3, 2024 | Hallucination, Object | Unverified | 0 |
| Effectiveness Assessment of Recent Large Vision-Language Models | Mar 7, 2024 | Anomaly Detection, Attribute | Unverified | 0 |
| GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | Feb 26, 2024 | Causal Language Modeling, Generalized Referring Expression Segmentation | Unverified | 0 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | Feb 13, 2024 | Hallucination, Object Hallucination | Unverified | 0 |