| Title | Date | Topics | Code |
|---|---|---|---|
| Enhancing Retrieval Processes for Language Generation with Augmented Queries | Feb 6, 2024 | Hallucination, Language Modeling | Unverified |
| Improving Assessment of Tutoring Practices using Retrieval-Augmented Generation | Feb 4, 2024 | Hallucination, Math | Unverified |
| Aligner: Efficient Alignment by Learning to Correct | Feb 4, 2024 | Hallucination | Unverified |
| A Closer Look at the Limitations of Instruction Tuning | Feb 3, 2024 | Hallucination | Unverified |
| CorpusLM: Towards a Unified Language Model on Corpus for Knowledge-Intensive Tasks | Feb 2, 2024 | Answer Generation, Hallucination | Unverified |
| A Survey on Large Language Model Hallucination via a Creativity Perspective | Feb 2, 2024 | Hallucination, Language Modeling | Unverified |
| Efficient Non-Parametric Uncertainty Quantification for Black-Box Large Language Models and Decision Planning | Feb 1, 2024 | AI Agent, Decision Making | Unverified |
| Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation | Feb 1, 2024 | Hallucination, Misinformation | Unverified |
| Instruction Makes a Difference | Feb 1, 2024 | Hallucination, Instruction Following | Code Available |
| Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing | Feb 1, 2024 | Hallucination, Logical Reasoning | Unverified |