| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| LLMDFA: Analyzing Dataflow in Code with Large Language Models | Feb 16, 2024 | Hallucination | Code Available | 3 |
| Measuring and Reducing LLM Hallucination without Gold-Standard Answers | Feb 16, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | Feb 16, 2024 | Hallucination, Retrieval | Unverified | 0 |
| Towards Uncovering How Large Language Model Works: An Explainability Perspective | Feb 16, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Trading off Consistency and Dimensionality of Convex Surrogates for the Mode | Feb 16, 2024 | Hallucination, Information Retrieval | Unverified | 0 |
| EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Feb 15, 2024 | Hallucination, Object Hallucination | Code Available | 1 |
| Uncertainty Quantification for In-Context Learning of Large Language Models | Feb 15, 2024 | Hallucination, In-Context Learning | Code Available | 1 |
| Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States | Feb 15, 2024 | Hallucination | Unverified | 0 |
| Visually Dehallucinative Instruction Generation: Know What You Don't Know | Feb 15, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Into the Unknown: Self-Learning Large Language Models | Feb 14, 2024 | Hallucination, Self-Learning | Code Available | 1 |