| Title | Date | Topics | Code | # |
| --- | --- | --- | --- | --- |
| Crafting In-context Examples according to LMs' Parametric Knowledge | Nov 16, 2023 | Hallucination, In-Context Learning | Code Available | 0 |
| Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization | Nov 15, 2023 | Abstractive Text Summarization, Hallucination | Code Available | 1 |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Nov 15, 2023 | Ethics, Fairness | Code Available | 0 |
| Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification | Nov 15, 2023 | Hallucination, Retrieval | Code Available | 0 |
| Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models | Nov 15, 2023 | Decision Making, Hallucination | Unverified | 0 |
| Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models | Nov 15, 2023 | Hallucination, Retrieval | Unverified | 0 |
| Insights into Classifying and Mitigating LLMs' Hallucinations | Nov 14, 2023 | Hallucination, Machine Translation | Unverified | 0 |
| Predicting Text Preference Via Structured Comparative Reasoning | Nov 14, 2023 | Hallucination, Retrieval | Unverified | 0 |
| Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | Nov 13, 2023 | Hallucination, MM-Vet | Code Available | 1 |
| AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation | Nov 13, 2023 | Attribute, Hallucination | Code Available | 1 |