| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | Jun 30, 2024 | Hallucination, Image Comprehension | Unverified |
| A Study on Effect of Reference Knowledge Choice in Generating Technical Content Relevant to SAPPhIRE Model Using Large Language Model | Jun 29, 2024 | Hallucination, Language Modeling | Unverified |
| BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science | Jun 29, 2024 | AI Agent, Claim Verification | Code Available |
| PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models | Jun 29, 2024 | Hallucination, Sentence | Unverified |
| Applying RLAIF for Code Generation with API-usage in Lightweight LLMs | Jun 28, 2024 | Code Generation, Hallucination | Unverified |
| Handling Ontology Gaps in Semantic Parsing | Jun 27, 2024 | Hallucination, Question Answering | Code Available |
| From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data | Jun 27, 2024 | Hallucination, Information Retrieval | Code Available |
| Mitigating Hallucination in Fictional Character Role-Play | Jun 25, 2024 | Hallucination, World Knowledge | Code Available |
| VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models | Jun 24, 2024 | Hallucination, Video Understanding | Unverified |
| Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models | Jun 24, 2024 | Hallucination, Image Generation | Code Available |
| Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination | Jun 20, 2024 | Hallucination | Unverified |
| HIGHT: Hierarchical Graph Tokenization for Molecule-Language Alignment | Jun 20, 2024 | Graph Neural Network, Hallucination | Unverified |
| Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? | Jun 20, 2024 | Caption Generation, Hallucination | Unverified |
| From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment | Jun 20, 2024 | Descriptive, Hallucination | Unverified |
| StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation | Jun 19, 2024 | Hallucination, Retrieval | Code Available |
| What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models | Jun 18, 2024 | Decoder, Hallucination | Unverified |
| Detecting Errors through Ensembling Prompts (DEEP): An End-to-End LLM Framework for Detecting Factual Errors | Jun 18, 2024 | Hallucination, Language Modeling | Code Available |
| RichRAG: Crafting Rich Responses for Multi-faceted Queries in Retrieval-Augmented Generation | Jun 18, 2024 | Hallucination, RAG | Unverified |
| On-Policy Fine-grained Knowledge Feedback for Hallucination Mitigation | Jun 18, 2024 | Hallucination, Response Generation | Code Available |
| Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models | Jun 18, 2024 | Hallucination | Unverified |
| Do More Details Always Introduce More Hallucinations in LVLM-based Image Captioning? | Jun 18, 2024 | Attribute, Hallucination | Unverified |
| Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs | Jun 17, 2024 | Counterfactual, Hallucination | Code Available |
| CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation | Jun 17, 2024 | Diagnostic, Hallucination | Unverified |
| Mitigating Large Language Model Hallucination with Faithful Finetuning | Jun 17, 2024 | Hallucination, Language Modeling | Unverified |
| InternalInspector I^2: Robust Confidence Estimation in LLMs through Internal States | Jun 17, 2024 | Benchmarking, Contrastive Learning | Unverified |