| On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation | Oct 16, 2024 | Hallucination, Natural Language Inference | Unverified | 0 |
| What Do LLMs Need to Understand Graphs: A Survey of Parametric Representation of Graphs | Oct 16, 2024 | Drug Discovery, Graph Generation | Unverified | 0 |
| Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | Oct 16, 2024 | Contrastive Learning, Graph Construction | Unverified | 0 |
| A Claim Decomposition Benchmark for Long-form Answer Verification | Oct 16, 2024 | Form, Hallucination | Code Available | 0 |
| RosePO: Aligning LLM-based Recommenders with Human Values | Oct 16, 2024 | Hallucination, Recommendation Systems | Unverified | 0 |
| When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems | Oct 16, 2024 | Hallucination, Math | Unverified | 0 |
| Controlled Automatic Task-Specific Synthetic Data Generation for Hallucination Detection | Oct 16, 2024 | Hallucination, In-Context Learning | Unverified | 0 |
| AGENTiGraph: An Interactive Knowledge Graph Platform for LLM-based Chatbots Utilizing Private Data | Oct 15, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| On the Capacity of Citation Generation by Large Language Models | Oct 15, 2024 | Attribute, Hallucination | Unverified | 0 |
| ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability | Oct 15, 2024 | Hallucination, RAG | Unverified | 0 |