| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| Self-training Large Language Models through Knowledge Detection | Jun 17, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector | Jun 17, 2024 | Hallucination | Code Available | 1 |
| Mitigating Large Language Model Hallucination with Faithful Finetuning | Jun 17, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs | Jun 17, 2024 | Counterfactual, Hallucination | Code Available | 0 |
| Hallucination Mitigation Prompts Long-term Video Understanding | Jun 17, 2024 | Answer Generation, Hallucination | Code Available | 0 |
| CoMT: Chain-of-Medical-Thought Reduces Hallucination in Medical Report Generation | Jun 17, 2024 | Diagnostic, Hallucination | Unverified | 0 |
| MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts | Jun 17, 2024 | Hallucination, Mixture-of-Experts | Code Available | 1 |
| Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models | Jun 17, 2024 | Benchmarking | Code Available | 2 |
| mDPO: Conditional Preference Optimization for Multimodal Large Language Models | Jun 17, 2024 | Hallucination, Language Modeling | Code Available | 2 |
| Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals | Jun 16, 2024 | Hallucination | Unverified | 0 |