| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Safety challenges of AI in medicine in the era of large language models | Sep 11, 2024 | Hallucination | Unverified | 0 |
| MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications | Sep 11, 2024 | Ethics, Hallucination | Unverified | 0 |
| Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding | Sep 10, 2024 | Hallucination, Image Captioning | Unverified | 0 |
| LLMs Will Always Hallucinate, and We Need to Live With This | Sep 9, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| Detecting Buggy Contracts via Smart Testing | Sep 6, 2024 | Hallucination | Unverified | 0 |
| Generating Faithful and Salient Text from Multimodal Data | Sep 6, 2024 | Hallucination, Knowledge Graphs | Code Available | 0 |
| Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering | Sep 6, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| Vietnamese Legal Information Retrieval in Question-Answering System | Sep 5, 2024 | Hallucination, Information Retrieval | Unverified | 0 |
| Hallucination Detection in LLMs: Fast and Memory-Efficient Fine-Tuned Models | Sep 4, 2024 | GPU, Hallucination | Code Available | 0 |
| CLUE: Concept-Level Uncertainty Estimation for Large Language Models | Sep 4, 2024 | Hallucination, Sentence | Unverified | 0 |