| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| ToW: Thoughts of Words Improve Reasoning in Large Language Models | Oct 21, 2024 | Data Augmentation, Hallucination | Code Available | 0 |
| Mitigating Object Hallucination via Concentric Causal Attention | Oct 21, 2024 | Hallucination, Object | Code Available | 2 |
| Can Knowledge Editing Really Correct Hallucinations? | Oct 21, 2024 | Hallucination, Knowledge Editing | Code Available | 1 |
| Reducing Hallucinations in Vision-Language Models via Latent Space Steering | Oct 21, 2024 | Hallucination | Code Available | 2 |
| NetSafe: Exploring the Topological Safety of Multi-agent Networks | Oct 21, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Learning to Generate and Evaluate Fact-checking Explanations with Transformers | Oct 21, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| A Survey of Hallucination in Large Visual Language Models | Oct 20, 2024 | Hallucination, Hallucination Evaluation | Unverified | 0 |
| Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training | Oct 20, 2024 | Hallucination, Language Modeling | Unverified | 0 |
| Explaining Graph Neural Networks with Large Language Models: A Counterfactual Perspective for Molecular Property Prediction | Oct 19, 2024 | Counterfactual, Counterfactual Explanation | Code Available | 0 |
| Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models | Oct 19, 2024 | Hallucination, Language Modeling | Unverified | 0 |