| Title | Date | Tasks | Code |
| --- | --- | --- | --- |
| Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models From Edge to Giant | Sep 17, 2024 | Hallucination, Instruction Following | Code Available |
| Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection | Sep 16, 2024 | Hallucination | Unverified |
| HALO: Hallucination Analysis and Learning Optimization to Empower LLMs with Retrieval-Augmented Context for Guided Clinical Decision Making | Sep 16, 2024 | Answer Generation, Decision Making | Code Available |
| SFR-RAG: Towards Contextually Faithful LLMs | Sep 16, 2024 | Counterfactual, Hallucination | Unverified |
| Confidence Estimation for LLM-Based Dialogue State Tracking | Sep 15, 2024 | Dialogue State Tracking, Hallucination | Code Available |
| Explore the Hallucination on Low-level Perception for MLLMs | Sep 15, 2024 | Hallucination, Question Answering | Unverified |
| ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models | Sep 14, 2024 | Attribute, Hallucination | Unverified |
| Winning Solution For Meta KDD Cup '24 | Sep 13, 2024 | Hallucination, Knowledge Graphs | Unverified |
| MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications | Sep 11, 2024 | Ethics, Hallucination | Unverified |
| Safety challenges of AI in medicine in the era of large language models | Sep 11, 2024 | Hallucination | Unverified |