| Title | Date | Tasks | Code Status | Stars |
| --- | --- | --- | --- | --- |
| M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced Video-grounded Dialogue Generation | Feb 19, 2024 | Counterfactual, Dialogue Generation | Unverified | 0 |
| Enabling Weak LLMs to Judge Response Reliability via Meta Ranking | Feb 19, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| Reformatted Alignment | Feb 19, 2024 | GSM8K, Hallucination | Code Available | 2 |
| Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning | Feb 18, 2024 | Hallucination, Visual Question Answering | Unverified | 0 |
| EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Feb 18, 2024 | Event Extraction, Hallucination | Code Available | 3 |
| Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Feb 18, 2024 | Hallucination, Instruction Following | Code Available | 2 |
| Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Feb 18, 2024 | Hallucination, Object | Code Available | 1 |
| LLMs in the Heart of Differential Testing: A Case Study on a Medical Rule Engine | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Using Hallucinations to Bypass GPT4's Filter | Feb 16, 2024 | Hallucination | Unverified | 0 |
| Comparing Hallucination Detection Metrics for Multilingual Generation | Feb 16, 2024 | Hallucination, Natural Language Inference | Unverified | 0 |
| LLMDFA: Analyzing Dataflow in Code with Large Language Models | Feb 16, 2024 | Hallucination | Code Available | 3 |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | Feb 16, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Towards Uncovering How Large Language Model Works: An Explainability Perspective | Feb 16, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | Feb 16, 2024 | Hallucination, Information Retrieval | Unverified | 0 |
| EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Feb 15, 2024 | Hallucination, Object Hallucination | Code Available | 1 |
| Uncertainty Quantification for In-Context Learning of Large Language Models | Feb 15, 2024 | Hallucination, In-Context Learning | Code Available | 1 |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination | Unverified | 0 |
| Visually Dehallucinative Instruction Generation: Know What You Don't Know | Feb 15, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Into the Unknown: Self-Learning Large Language Models | Feb 14, 2024 | Hallucination, Self-Learning | Code Available | 1 |
| Large Language Model with Graph Convolution for Recommendation | Feb 14, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | Feb 14, 2024 | Hallucination, TruthfulQA | Unverified | 0 |
| InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Feb 13, 2024 | Hallucination | Code Available | 2 |
| Visually Dehallucinative Instruction Generation | Feb 13, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | Feb 13, 2024 | Hallucination, Object Hallucination | Unverified | 0 |