| Towards Trustable Language Models: Investigating Information Quality of Large Language Models | Jan 23, 2024 | Hallucination | Unverified | 0 |
| How well can a large language model explain business processes as perceived by users? | Jan 23, 2024 | Hallucination, Language Modeling | Code Available | 1 |
| Hallucination is Inevitable: An Innate Limitation of Large Language Models | Jan 22, 2024 | Hallucination, Learning Theory | Unverified | 0 |
| Knowledge Verification to Nip Hallucination in the Bud | Jan 19, 2024 | Hallucination, World Knowledge | Code Available | 1 |
| On the Audio Hallucinations in Large Audio-Video Language Models | Jan 18, 2024 | Hallucination, Sentence | Unverified | 0 |
| Temporal Insight Enhancement: Mitigating Temporal Hallucination in Multimodal Large Language Models | Jan 18, 2024 | Hallucination | Unverified | 0 |
| From Chat to Publication Management: Organizing your related work using BibSonomy & LLMs | Jan 17, 2024 | Hallucination, Management | Unverified | 0 |
| Hallucination Detection and Hallucination Mitigation: An Investigation | Jan 16, 2024 | Hallucination | Unverified | 0 |
| Large Language Models are Null-Shot Learners | Jan 16, 2024 | Arithmetic Reasoning, Benchmarking | Unverified | 0 |
| The Pitfalls of Defining Hallucination | Jan 15, 2024 | Hallucination, NLG Evaluation | Unverified | 0 |