| Title | Date | Tags | Code | Citations |
| --- | --- | --- | --- | --- |
| A Survey of Hallucination in Large Foundation Models | Sep 12, 2023 | Hallucination, Survey | Code Available | 1 |
| Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis | Sep 11, 2023 | Hallucination, Instruction Following | Unverified | 0 |
| DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping | Sep 11, 2023 | Hallucination, Instruction Following | Code Available | 0 |
| Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese | Sep 8, 2023 | Domain Adaptation, Hallucination | Code Available | 4 |
| Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from Knowledge Graphs | Sep 6, 2023 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Zero-Resource Hallucination Prevention for Large Language Models | Sep 6, 2023 | Hallucination | Code Available | 0 |
| Parameter Efficient Audio Captioning With Faithful Guidance Using Audio-text Shared Latent Representation | Sep 6, 2023 | Audio Captioning, Data Augmentation | Unverified | 0 |
| CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning | Sep 5, 2023 | Hallucination | Unverified | 0 |
| Benchmarking Large Language Models in Retrieval-Augmented Generation | Sep 4, 2023 | Benchmarking, Counterfactual | Code Available | 2 |
| Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | Sep 3, 2023 | Hallucination, World Knowledge | Code Available | 3 |