| Title | Date | Tasks | Code | Stars |
|---|---|---|---|---|
| Harmonic LLMs are Trustworthy | Apr 30, 2024 | Hallucination, TruthfulQA | Unverified | 0 |
| Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation | Apr 30, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| A robust and scalable framework for hallucination detection in virtual tissue staining and digital pathology | Apr 29, 2024 | Hallucination, Image Generation | Unverified | 0 |
| Hallucination of Multimodal Large Language Models: A Survey | Apr 29, 2024 | Hallucination, Survey | Code Available | 4 |
| MMAC-Copilot: Multi-modal Agent Collaboration Operating Copilot | Apr 28, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| SERPENT-VLM: Self-Refining Radiology Report Generation Using Vision Language Models | Apr 27, 2024 | Causal Language Modeling, Hallucination | Unverified | 0 |
| Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities | Apr 25, 2024 | DeepFake Detection, Face Swapping | Unverified | 0 |
| Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations? | Apr 24, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Retrieval Head Mechanistically Explains Long-Context Factuality | Apr 24, 2024 | Continual Pretraining, Hallucination | Code Available | 3 |
| KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering | Apr 24, 2024 | Hallucination, Question Answering | Unverified | 0 |