| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models | Jan 6, 2025 | Adversarial Attack, Hallucination | Unverified | 0 |
| EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in Instructional Multimodal Models | Jan 6, 2025 | Hallucination, Visual Grounding | Unverified | 0 |
| Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Jan 6, 2025 | Hallucination, Multimodal Reasoning | Code Available | 0 |
| Foundations of GenIR | Jan 6, 2025 | Hallucination, Retrieval-augmented Generation | Unverified | 0 |
| CHAIR -- Classifier of Hallucination as Improver | Jan 5, 2025 | Hallucination, MMLU | Code Available | 0 |
| A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Jan 4, 2025 | Fairness, Hallucination | Code Available | 4 |
| CarbonChat: Large Language Model-Based Corporate Carbon Emission Analysis and Climate Knowledge Q&A System | Jan 3, 2025 | Chunking, Hallucination | Unverified | 0 |
| Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding | Jan 3, 2025 | Hallucination, Language Modeling | Code Available | 1 |
| LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries | Jan 3, 2025 | Hallucination, Zero-shot Classification | Unverified | 0 |
| Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection | Jan 2, 2025 | Hallucination, Sentence | Unverified | 0 |