| Title | Date | Tags | Code | Links |
|---|---|---|---|---|
| Misinforming LLMs: vulnerabilities, challenges and opportunities | Aug 2, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models | Aug 2, 2024 | Hallucination | Unverified | 0 |
| Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs | Aug 2, 2024 | Attribute, Hallucination | Code Available | 1 |
| RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework | Aug 2, 2024 | Benchmarking, Dataset Generation | Code Available | 3 |
| Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation | Aug 1, 2024 | Hallucination, Image Comprehension | Unverified | 0 |
| Mitigating Multilingual Hallucination in Large Vision-Language Models | Aug 1, 2024 | Hallucination | Code Available | 1 |
| DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model | Aug 1, 2024 | Articles, Hallucination | Code Available | 2 |
| Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Jul 31, 2024 | Hallucination, Image Comprehension | Code Available | 1 |
| Cost-Effective Hallucination Detection for LLMs | Jul 31, 2024 | Decision Making, Fact Checking | Unverified | 0 |
| Prompting Medical Large Vision-Language Models to Diagnose Pathologies by Visual Question Answering | Jul 31, 2024 | Diagnostic, Hallucination | Unverified | 0 |