| GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | Feb 26, 2024 | Causal Language Modeling, Generalized Referring Expression Segmentation | Unverified | 0 |
| Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Feb 23, 2024 | Hallucination, Object | Code Available | 1 |
| Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Feb 18, 2024 | Hallucination, Object | Code Available | 1 |
| EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Feb 15, 2024 | Hallucination, Object Hallucination | Code Available | 1 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | Feb 13, 2024 | Hallucination, Object Hallucination | Unverified | 0 |
| Instruction Makes a Difference | Feb 1, 2024 | Hallucination, Instruction Following | Code Available | 0 |
| MoE-LLaVA: Mixture of Experts for Large Vision-Language Models | Jan 29, 2024 | Hallucination, Mixture-of-Experts | Code Available | 7 |
| Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency | Dec 8, 2023 | Decoder, Hallucination | Code Available | 1 |
| Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites | Dec 4, 2023 | Hallucination, Hallucination Evaluation | Code Available | 1 |
| Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding | Nov 28, 2023 | Hallucination, Object | Code Available | 2 |