| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Distinguishing Ignorance from Error in LLM Hallucinations | Oct 29, 2024 | Hallucination, Question Answering | Code Available |
| Can Knowledge Editing Really Correct Hallucinations? | Oct 21, 2024 | Hallucination, Knowledge Editing | Code Available |
| Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning | Oct 18, 2024 | Hallucination, Knowledge Base Question Answering | Code Available |
| Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding | Oct 17, 2024 | Hallucination, Object Hallucination | Code Available |
| FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs | Oct 17, 2024 | Diversity, Hallucination | Code Available |
| Search Engines in an AI Era: The False Promise of Factual and Verifiable Source-Cited Responses | Oct 15, 2024 | Hallucination, Language Modeling | Code Available |
| VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding | Oct 11, 2024 | Hallucination, Moment Retrieval | Code Available |
| Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | Oct 10, 2024 | Hallucination, Logical Reasoning | Code Available |
| OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting | Oct 10, 2024 | Entity Linking, Few-Shot Learning | Code Available |
| IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | Oct 9, 2024 | ARC, Code Generation | Code Available |
| CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation | Oct 3, 2024 | Abstractive Text Summarization, Hallucination | Code Available |
| FactAlign: Long-form Factuality Alignment of Large Language Models | Oct 2, 2024 | Form, Hallucination | Code Available |
| EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Sep 25, 2024 | Hallucination, Instruction Following | Code Available |
| XTRUST: On the Multilingual Trustworthiness of Large Language Models | Sep 24, 2024 | Ethics, Fairness | Code Available |
| FAIR GPT: A virtual consultant for research data management in ChatGPT | Sep 20, 2024 | Fairness, Hallucination | Code Available |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Sep 19, 2024 | Hallucination, Hallucination Evaluation | Code Available |
| Trustworthiness in Retrieval-Augmented Generation Systems: A Survey | Sep 16, 2024 | Fairness, Hallucination | Code Available |
| Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning | Aug 30, 2024 | Hallucination | Code Available |
| Towards Empathetic Conversational Recommender Systems | Aug 30, 2024 | Hallucination, Recommendation Systems | Code Available |
| ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models | Aug 25, 2024 | Hallucination | Code Available |
| SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection | Aug 22, 2024 | Hallucination, Language Modeling | Code Available |
| Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models | Aug 18, 2024 | Attribute, Hallucination | Code Available |
| Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs | Aug 2, 2024 | Attribute, Hallucination | Code Available |
| Mitigating Multilingual Hallucination in Large Vision-Language Models | Aug 1, 2024 | Hallucination | Code Available |
| Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Jul 31, 2024 | Hallucination, Image Comprehension | Code Available |