| Title | Date | Tasks | Code | Stars |
|---|---|---|---|---|
| Large Language Model with Graph Convolution for Recommendation | Feb 14, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop | Feb 14, 2024 | Hallucination, TruthfulQA | Unverified | 0 |
| InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment | Feb 13, 2024 | Hallucination | Code Available | 2 |
| Visually Dehallucinative Instruction Generation | Feb 13, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance | Feb 13, 2024 | Hallucination, Object Hallucination | Unverified | 0 |
| A Systematic Review of Data-to-Text NLG | Feb 13, 2024 | Data-to-Text Generation, Hallucination | Unverified | 0 |
| Careless Whisper: Speech-to-Text Hallucination Harms | Feb 12, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Feb 12, 2024 | Hallucination, Object Localization | Code Available | 4 |
| PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Feb 12, 2024 | Answer Generation, Hallucination | Code Available | 3 |
| G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering | Feb 12, 2024 | Common Sense Reasoning, Graph Classification | Code Available | 4 |