| Title | Date | Tags | Code | # |
|---|---|---|---|---|
| VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models | Jun 24, 2024 | Hallucination, Video Understanding | Unverified | 0 |
| Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models | Jun 24, 2024 | Common Sense Reasoning, Hallucination | Code Available | 1 |
| Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models | Jun 24, 2024 | Hallucination, Image Generation | Code Available | 0 |
| Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs | Jun 22, 2024 | Hallucination, Uncertainty Quantification | Code Available | 2 |
| Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework | Jun 20, 2024 | Hallucination, Question Answering | Code Available | 2 |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination | Unverified | 0 |
| From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment | Jun 20, 2024 | Descriptive, Hallucination | Unverified | 0 |
| HIGHT: Hierarchical Graph Tokenization for Molecule-Language Alignment | Jun 20, 2024 | Graph Neural Network, Hallucination | Unverified | 0 |
| Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination | Jun 20, 2024 | Hallucination | Unverified | 0 |
| Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases | Jun 19, 2024 | 8k, Hallucination | Code Available | 2 |
| Knowledge Graph-Enhanced Large Language Models via Path Selection | Jun 19, 2024 | Hallucination, Knowledge Graphs | Code Available | 1 |
| StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation | Jun 19, 2024 | Hallucination, Retrieval | Code Available | 0 |
| Detecting Errors through Ensembling Prompts (DEEP): An End-to-End LLM Framework for Detecting Factual Errors | Jun 18, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding | Jun 18, 2024 | Hallucination | Code Available | 1 |
| RichRAG: Crafting Rich Responses for Multi-faceted Queries in Retrieval-Augmented Generation | Jun 18, 2024 | Hallucination, RAG | Unverified | 0 |
| What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models | Jun 18, 2024 | Decoder, Hallucination | Unverified | 0 |
| On-Policy Fine-grained Knowledge Feedback for Hallucination Mitigation | Jun 18, 2024 | Hallucination, Response Generation | Code Available | 0 |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination | Unverified | 0 |
| Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models | Jun 18, 2024 | Hallucination | Unverified | 0 |
| InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States | Jun 17, 2024 | Benchmarking, Contrastive Learning | Unverified | 0 |
| Self-training Large Language Models through Knowledge Detection | Jun 17, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector | Jun 17, 2024 | 2k, Hallucination | Code Available | 1 |
| Mitigating Large Language Model Hallucination with Faithful Finetuning | Jun 17, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs | Jun 17, 2024 | Counterfactual, Hallucination | Code Available | 0 |
| Hallucination Mitigation Prompts Long-term Video Understanding | Jun 17, 2024 | Answer Generation, Hallucination | Code Available | 0 |