| FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback | Apr 7, 2024 | Attribute, Hallucination | Unverified | 0 |
| HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models | Apr 7, 2024 | Hallucination, Representation Learning | Unverified | 0 |
| SLPL SHROOM at SemEval2024 Task 06: A comprehensive study on models ability to detect hallucination | Apr 7, 2024 | Hallucination, Machine Translation | Code Available | 0 |
| On the Limitations of Large Language Models (LLMs): False Attribution | Apr 6, 2024 | Author Attribution, Hallucination | Unverified | 0 |
| PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics | Apr 6, 2024 | Benchmarking, Hallucination | Code Available | 0 |
| FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping | Apr 5, 2024 | Attribute, Hallucination | Unverified | 0 |
| Mitigating LLM Hallucinations via Conformal Abstention | Apr 4, 2024 | Conformal Prediction, Generative Question Answering | Unverified | 0 |
| SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection | Apr 4, 2024 | Hallucination, In-Context Learning | Code Available | 0 |
| Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations | Apr 4, 2024 | Hallucination, Human Detection | Code Available | 0 |
| A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation | Apr 4, 2024 | Counterfactual Reasoning | Unverified | 0 |