| Title | Date | Tasks | Code | Stars |
| --- | --- | --- | --- | --- |
| DHCP: Detecting Hallucinations by Cross-modal Attention Pattern in Large Vision-Language Models | Nov 27, 2024 | Attribute, Hallucination | Unverified | 0 |
| Can LLMs be Good Graph Judge for Knowledge Graph Construction? | Nov 26, 2024 | Denoising, Graph Construction | Code Available | 1 |
| Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach | Nov 26, 2024 | Hallucination | Unverified | 0 |
| Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | Nov 26, 2024 | Hallucination, Logical Reasoning | Unverified | 0 |
| VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models | Nov 26, 2024 | Hallucination | Unverified | 0 |
| A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs | Nov 26, 2024 | Hallucination | Unverified | 0 |
| AI2T: Building Trustable AI Tutors by Interactively Teaching a Self-Aware Learning Agent | Nov 26, 2024 | Hallucination | Unverified | 0 |
| VidHal: Benchmarking Temporal Hallucinations in Vision LLMs | Nov 25, 2024 | Benchmarking, Hallucination | Code Available | 1 |
| AtomR: Atomic Operator-Empowered Large Language Models for Heterogeneous Knowledge Reasoning | Nov 25, 2024 | Hallucination, Question Answering | Code Available | 1 |
| Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models | Nov 25, 2024 | Hallucination | Unverified | 0 |
| O1 Replication Journey -- Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson? | Nov 25, 2024 | Hallucination, Knowledge Distillation | Code Available | 7 |
| VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding | Nov 24, 2024 | Hallucination, Language Modeling | Code Available | 1 |
| Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens | Nov 23, 2024 | Hallucination | Code Available | 2 |
| Ontology-Constrained Generation of Domain-Specific Clinical Summaries | Nov 23, 2024 | Hallucination, Text Summarization | Code Available | 0 |
| ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models | Nov 22, 2024 | Hallucination, Object | Unverified | 0 |
| Detecting Hallucinations in Virtual Histology with Neural Precursors | Nov 22, 2024 | Hallucination, Virtual Staining | Unverified | 0 |
| Leveraging LLMs for Legacy Code Modernization: Challenges and Opportunities for LLM-Generated Documentation | Nov 22, 2024 | Hallucination | Unverified | 0 |
| Sycophancy in Large Language Models: Causes and Mitigations | Nov 22, 2024 | Hallucination | Unverified | 0 |
| CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | Nov 19, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Can Open-source LLMs Enhance Data Synthesis for Toxic Detection?: An Experimental Study | Nov 18, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation | Nov 18, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Mitigating Knowledge Conflicts in Language Model-Driven Question Answering | Nov 18, 2024 | Document Summarization, Hallucination | Unverified | 0 |
| Enabling Explainable Recommendation in E-commerce with LLM-powered Product Knowledge Graph | Nov 17, 2024 | Explainable Recommendation, Hallucination | Unverified | 0 |
| INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | Nov 17, 2024 | Anomaly Detection, Hallucination | Unverified | 0 |
| Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Nov 17, 2024 | Hallucination, In-Context Learning | Code Available | 0 |