| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models | Aug 25, 2024 | Decision Making, Hallucination | Unverified | 0 |
| ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models | Aug 25, 2024 | Hallucination | Code Available | 1 |
| Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning | Aug 23, 2024 | Hallucination, Prompt Engineering | Unverified | 0 |
| Internal and External Knowledge Interactive Refinement Framework for Knowledge-Intensive Question Answering | Aug 23, 2024 | Hallucination, Question Answering | Unverified | 0 |
| SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection | Aug 22, 2024 | Hallucination, Language Modeling | Code Available | 1 |
| MedDiT: A Knowledge-Controlled Diffusion Transformer Framework for Dynamic Medical Image Generation in Virtual Simulated Patient | Aug 22, 2024 | Diagnostic, Hallucination | Unverified | 0 |
| Improving Factuality in Large Language Models via Decoding-Time Hallucinatory and Truthful Comparators | Aug 22, 2024 | Hallucination, Mixture-of-Experts | Code Available | 0 |
| RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data | Aug 22, 2024 | Hallucination | Code Available | 0 |
| GRATR: Zero-Shot Evidence Graph Retrieval-Augmented Trustworthiness Reasoning | Aug 22, 2024 | Decision Making, Hallucination | Code Available | 0 |
| RAG-Optimized Tibetan Tourism LLMs: Enhancing Accuracy and Personalization | Aug 21, 2024 | Hallucination, RAG | Unverified | 0 |
| Towards Analyzing and Mitigating Sycophancy in Large Vision-Language Models | Aug 21, 2024 | Hallucination, Prompt Engineering | Unverified | 0 |
| Enhanced Document Retrieval with Topic Embeddings | Aug 19, 2024 | Hallucination, RAG | Unverified | 0 |
| MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation | Aug 19, 2024 | Diversity, Explainable Recommendation | Unverified | 0 |
| CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs | Aug 19, 2024 | Hallucination, Zero-Shot Classification | Unverified | 0 |
| Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models | Aug 18, 2024 | Attribute, Hallucination | Code Available | 1 |
| Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making | Aug 17, 2024 | Decision Making, Hallucination | Unverified | 0 |
| Lower Layer Matters: Alleviating Hallucination via Multi-Layer Fusion Contrastive Decoding with Truthfulness Refocused | Aug 16, 2024 | Hallucination, TruthfulQA | Unverified | 0 |
| Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions | Aug 16, 2024 | Descriptive, Hallucination | Unverified | 0 |
| Graph Retrieval-Augmented Generation: A Survey | Aug 15, 2024 | Hallucination, RAG | Code Available | 3 |
| Plan with Code: Comparing Approaches for Robust NL to DSL Generation | Aug 15, 2024 | Code Generation, Hallucination | Unverified | 0 |
| CodeMirage: Hallucinations in Code Generated by Large Language Models | Aug 14, 2024 | Code Generation, Hallucination | Unverified | 0 |
| Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | Aug 14, 2024 | Hallucination | Unverified | 0 |
| Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection | Aug 12, 2024 | Common Sense Reasoning, Hallucination | Unverified | 0 |
| SSL: A Self-similarity Loss for Improving Generative Image Super-resolution | Aug 11, 2024 | Hallucination, Image Super-Resolution | Code Available | 2 |
| Reference-free Hallucination Detection for Large Vision-Language Models | Aug 11, 2024 | Hallucination, Question Answering | Unverified | 0 |