| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| Quantifying the Capabilities of LLMs across Scale and Precision | May 6, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Score-based Generative Priors Guided Model-driven Network for MRI Reconstruction | May 5, 2024 | Denoising, Hallucination | Unverified | 0 |
| R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models | May 4, 2024 | Graph Attention, Hallucination | Unverified | 0 |
| Attribution in Scientific Literature: New Benchmark and Methods | May 3, 2024 | Author Attribution, Hallucination | Unverified | 0 |
| FLAME: Factuality-Aware Alignment for Large Language Models | May 2, 2024 | Hallucination, Instruction Following | Unverified | 0 |
| Can a Hallucinating Model help in Reducing Human "Hallucination"? | May 1, 2024 | Hallucination, Logical Fallacies | Unverified | 0 |
| Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling | May 1, 2024 | Hallucination, Topic Classification | Code Available | 0 |
| What Makes for Good Image Captions? | May 1, 2024 | Hallucination, Image Captioning | Unverified | 0 |
| CodeHalu: Investigating Code Hallucinations in LLMs via Execution-based Verification | Apr 30, 2024 | Code Generation, Hallucination | Code Available | 1 |
| RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing | Apr 30, 2024 | Computational Efficiency, Hallucination | Code Available | 3 |