| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination | Unverified | 0 |
| Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models | Jun 12, 2024 | Audio Captioning, Hallucination | Code Available | 2 |
| Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination | May 28, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models | May 28, 2024 | Hallucination, MME | Unverified | 0 |
| RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness | May 27, 2024 | Hallucination, Image Captioning | Code Available | 11 |
| Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks | May 27, 2024 | Hallucination, Object Hallucination | Code Available | 0 |
| ALOHa: A New Measure for Hallucination in Captioning Models | Apr 3, 2024 | Hallucination, Object | Unverified | 0 |
| Effectiveness Assessment of Recent Large Vision-Language Models | Mar 7, 2024 | Anomaly Detection, Attribute | Unverified | 0 |
| HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding | Mar 1, 2024 | Hallucination, Object | Code Available | 2 |