| Title | Date | Tags | Code | |
| --- | --- | --- | --- | --- |
| VL-RewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models | Jan 1, 2025 | Hallucination | Unverified | 0 |
| A review of faithfulness metrics for hallucination assessment in Large Language Models | Dec 31, 2024 | Benchmarking, Hallucination | Unverified | 0 |
| Distilling Desired Comments for Enhanced Code Review with Large Language Models | Dec 29, 2024 | Dataset Distillation, Hallucination | Unverified | 0 |
| HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models | Dec 29, 2024 | Hallucination, Object | Code Available | 0 |
| Is Your Text-to-Image Model Robust to Caption Noise? | Dec 27, 2024 | Descriptive, Hallucination | Unverified | 0 |
| An End-to-End Depth-Based Pipeline for Selfie Image Rectification | Dec 26, 2024 | Depth Estimation, Hallucination | Unverified | 0 |
| MedHallBench: A New Benchmark for Assessing Hallucination in Medical Large Language Models | Dec 25, 2024 | Hallucination, reinforcement-learning | Unverified | 0 |
| Improving Factuality with Explicit Working Memory | Dec 24, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| From Hallucinations to Facts: Enhancing Language Models with Curated Knowledge Graphs | Dec 24, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Multimodal Preference Data Synthetic Alignment with Reward Model | Dec 23, 2024 | 2k, Caption Generation | Code Available | 0 |
| CiteBART: Learning to Generate Citations for Local Citation Recommendation | Dec 23, 2024 | Citation Prediction, Citation Recommendation | Code Available | 0 |
| AlzheimerRAG: Multimodal Retrieval Augmented Generation for PubMed articles | Dec 21, 2024 | Articles, Decision Making | Unverified | 0 |
| Logical Consistency of Large Language Models in Fact-checking | Dec 20, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage | Dec 20, 2024 | Attribute, Benchmarking | Unverified | 0 |
| Token Preference Optimization with Self-Calibrated Visual-Anchored Rewards for Hallucination Mitigation | Dec 19, 2024 | Hallucination | Unverified | 0 |
| Dehallucinating Parallel Context Extension for Retrieval-Augmented Generation | Dec 19, 2024 | Hallucination, RAG | Unverified | 0 |
| Think&Cite: Improving Attributed Text Generation with Self-Guided Tree Search and Progress Reward Modeling | Dec 19, 2024 | Hallucination, Text Generation | Unverified | 0 |
| Query pipeline optimization for cancer patient question answering systems | Dec 19, 2024 | Hallucination, Passage Retrieval | Unverified | 0 |
| A Comparative Study of DSPy Teleprompter Algorithms for Aligning Large Language Models Evaluation Metrics to Human Evaluation | Dec 19, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence | Dec 18, 2024 | Hallucination, Multimodal Reasoning | Unverified | 0 |
| Are LLMs Good Literature Review Writers? Evaluating the Literature Review Writing Ability of Large Language Models | Dec 18, 2024 | Hallucination | Unverified | 0 |
| When to Speak, When to Abstain: Contrastive Decoding with Abstention | Dec 17, 2024 | Hallucination, Question Answering | Unverified | 0 |
| A MapReduce Approach to Effectively Utilize Long Context Information in Retrieval Augmented Language Models | Dec 17, 2024 | Hallucination, RAG | Unverified | 0 |
| What External Knowledge is Preferred by LLMs? Characterizing and Exploring Chain of Evidence in Imperfect Context | Dec 17, 2024 | Hallucination, Misinformation | Unverified | 0 |
| ReXTrust: A Model for Fine-Grained Hallucination Detection in AI-Generated Radiology Reports | Dec 17, 2024 | Hallucination | Unverified | 0 |