| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting | Nov 22, 2023 | Hallucination, Language Modeling | Unverified |
| KNVQA: A Benchmark for evaluation knowledge-based VQA | Nov 21, 2023 | Hallucination, Object Hallucination | Unverified |
| Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications | Nov 21, 2023 | Chatbot, Hallucination | Unverified |
| Control in Hybrid Chatbots | Nov 20, 2023 | Chatbot, Hallucination | Unverified |
| GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration | Nov 20, 2023 | Hallucination, Language Modeling | Unverified |
| Chain of Visual Perception: Harnessing Multimodal Large Language Models for Zero-shot Camouflaged Object Detection | Nov 19, 2023 | Counterfactual, Hallucination | Code Available |
| Journey of Hallucination-minimized Generative AI Solutions for Financial Decision Makers | Nov 18, 2023 | Answer Generation, Decision Making | Unverified |
| Crafting In-context Examples according to LMs' Parametric Knowledge | Nov 16, 2023 | Hallucination, In-Context Learning | Code Available |
| Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? | Nov 16, 2023 | Hallucination, Sentence | Code Available |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Nov 15, 2023 | Ethics, Fairness | Code Available |