| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Distilling Reasoning Ability from Large Language Models with Adaptive Thinking | Apr 14, 2024 | Hallucination | Unverified |
| Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models | Apr 14, 2024 | Hallucination | Unverified |
| Reducing hallucination in structured outputs via Retrieval-Augmented Generation | Apr 12, 2024 | Hallucination, RAG | Unverified |
| Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Apr 11, 2024 | Descriptive, Hallucination | Code Available |
| An Audit on the Perspectives and Challenges of Hallucinations in NLP | Apr 11, 2024 | Hallucination, Survey | Unverified |
| BRAVE: Broadening the visual encoding of vision-language models | Apr 10, 2024 | Hallucination, Language Modelling | Unverified |
| MetaCheckGPT -- A Multi-task Hallucination Detector Using LLM Uncertainty and Meta-models | Apr 10, 2024 | Hallucination | Unverified |
| Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports | Apr 9, 2024 | Form, Hallucination | Code Available |
| SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection | Apr 9, 2024 | Hallucination | Code Available |
| Automating Research Synthesis with Domain-Specific Large Language Model Fine-Tuning | Apr 8, 2024 | Hallucination, Language Modeling | Unverified |
| Hyperbolic Learning with Synthetic Captions for Open-World Detection | Apr 7, 2024 | Hallucination, Novel Concepts | Unverified |
| HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models | Apr 7, 2024 | Hallucination, Representation Learning | Unverified |
| FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback | Apr 7, 2024 | Attribute, Hallucination | Unverified |
| SLPL SHROOM at SemEval2024 Task 06: A comprehensive study on models ability to detect hallucination | Apr 7, 2024 | Hallucination, Machine Translation | Code Available |
| PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics | Apr 6, 2024 | Benchmarking, Hallucination | Code Available |
| On the Limitations of Large Language Models (LLMs): False Attribution | Apr 6, 2024 | Author Attribution, Hallucination | Unverified |
| FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping | Apr 5, 2024 | Attribute, Hallucination | Unverified |
| Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations | Apr 4, 2024 | Hallucination, Human Detection | Code Available |
| A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation | Apr 4, 2024 | Counterfactual Reasoning | Unverified |
| SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection | Apr 4, 2024 | Hallucination, In-Context Learning | Code Available |
| Mitigating LLM Hallucinations via Conformal Abstention | Apr 4, 2024 | Conformal Prediction, Generative Question Answering | Unverified |
| Scalable Model Editing via Customized Expert Networks | Apr 3, 2024 | Hallucination, Model | Code Available |
| ALOHa: A New Measure for Hallucination in Captioning Models | Apr 3, 2024 | Hallucination, Object | Unverified |
| Hallucination Diversity-Aware Active Learning for Text Summarization | Apr 2, 2024 | Active Learning, Diversity | Unverified |
| Extracting Norms from Contracts Via ChatGPT: Opportunities and Challenges | Apr 2, 2024 | Hallucination | Unverified |