| Title | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| DEE: Dual-stage Explainable Evaluation Method for Text Generation | Mar 18, 2024 | Diagnostic, Hallucination | Unverified | 0 |
| Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs | Mar 17, 2024 | Hallucination, Knowledge Graphs | Code Available | 0 |
| PhD: A ChatGPT-Prompted Visual Hallucination Evaluation Dataset | Mar 17, 2024 | Attribute, Common Sense Reasoning | Code Available | 1 |
| Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection | Mar 15, 2024 | Hallucination, Language Modelling | Unverified | 0 |
| DiffMAC: Diffusion Manifold Hallucination Correction for High Generalization Blind Face Restoration | Mar 15, 2024 | Attribute, Blind Face Restoration | Unverified | 0 |
| Mitigating Dialogue Hallucination for Large Vision Language Models via Adversarial Instruction Tuning | Mar 15, 2024 | Hallucination, Instruction Following | Unverified | 0 |
| Circuit Transformer: A Transformer That Preserves Logical Equivalence | Mar 14, 2024 | Hallucination | Code Available | 1 |
| XReal: Realistic Anatomy and Pathology-Aware X-ray Generation via Controllable Diffusion Model | Mar 14, 2024 | Anatomy, Hallucination | Code Available | 1 |
| The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? | Mar 14, 2024 | Hallucination, Image Classification | Code Available | 1 |
| Detecting Hallucination and Coverage Errors in Retrieval Augmented Generation for Controversial Topics | Mar 13, 2024 | Hallucination, Retrieval | Unverified | 0 |
| AIGCs Confuse AI Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models | Mar 13, 2024 | Hallucination | Code Available | 0 |
| Investigating the performance of Retrieval-Augmented Generation and fine-tuning for the development of AI-driven knowledge-based systems | Mar 12, 2024 | Domain Adaptation, Hallucination | Code Available | 0 |
| Put Myself in Your Shoes: Lifting the Egocentric Perspective from Exocentric Videos | Mar 11, 2024 | Hallucination, Translation | Unverified | 0 |
| TRAWL: External Knowledge-Enhanced Recommendation with LLM Assistance | Mar 11, 2024 | Contrastive Learning, Denoising | Unverified | 0 |
| Guiding Clinical Reasoning with Large Language Models via Knowledge Seeds | Mar 11, 2024 | Hallucination | Unverified | 0 |
| Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models | Mar 11, 2024 | Hallucination | Code Available | 2 |
| On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization | Mar 9, 2024 | Hallucination, Text Summarization | Code Available | 0 |
| Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 |
| Can Large Language Models Play Games? A Case Study of A Self-Play Approach | Mar 8, 2024 | Decision Making, Hallucination | Unverified | 0 |
| ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models | Mar 8, 2024 | Attribute, Hallucination | Code Available | 0 |
| ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues | Mar 8, 2024 | Hallucination, Question Answering | Unverified | 0 |
| Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation | Mar 8, 2024 | Articles, Hallucination | Unverified | 0 |
| RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Mar 8, 2024 | Code Generation, Hallucination | Code Available | 3 |
| HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild | Mar 7, 2024 | Hallucination, Question Answering | Code Available | 0 |
| Federated Recommendation via Hybrid Retrieval Augmented Generation | Mar 7, 2024 | Hallucination, Privacy Preserving | Code Available | 1 |