| Detecting Hallucinations in Virtual Histology with Neural Precursors | Nov 22, 2024 | Hallucination, Virtual Staining | Unverified | 0 |
| Leveraging LLMs for Legacy Code Modernization: Challenges and Opportunities for LLM-Generated Documentation | Nov 22, 2024 | Hallucination | Unverified | 0 |
| Sycophancy in Large Language Models: Causes and Mitigations | Nov 22, 2024 | Hallucination | Unverified | 0 |
| CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | Nov 19, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Can Open-source LLMs Enhance Data Synthesis for Toxic Detection?: An Experimental Study | Nov 18, 2024 | Data Augmentation, Hallucination | Unverified | 0 |
| VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation | Nov 18, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Mitigating Knowledge Conflicts in Language Model-Driven Question Answering | Nov 18, 2024 | Document Summarization, Hallucination | Unverified | 0 |
| Enabling Explainable Recommendation in E-commerce with LLM-powered Product Knowledge Graph | Nov 17, 2024 | Explainable Recommendation, Hallucination | Unverified | 0 |
| INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection | Nov 17, 2024 | Anomaly Detection, Hallucination | Unverified | 0 |
| Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering | Nov 17, 2024 | Hallucination, In-Context Learning | Code Available | 0 |