| Unified Hallucination Detection for Multimodal Large Language Models | Feb 5, 2024 | Hallucination | Code Available | 1 |
| Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation | Feb 4, 2024 | Hallucination, Math | Unverified | 0 |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination | Unverified | 0 |
| LLM-Enhanced Data Management | Feb 4, 2024 | Hallucination, Management | Code Available | 4 |
| A Closer Look at the Limitations of Instruction Tuning | Feb 3, 2024 | Hallucination | Unverified | 0 |
| A Survey on Large Language Model Hallucination via a Creativity Perspective | Feb 2, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | Feb 2, 2024 | Answer Generation, Hallucination | Unverified | 0 |
| Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models | Feb 2, 2024 | Hallucination | Code Available | 1 |
| PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Feb 2, 2024 | Action Generation, Decision Making | Code Available | 3 |
| Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation | Feb 1, 2024 | Hallucination, Misinformation | Unverified | 0 |