| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| ProveRAG: Provenance-Driven Vulnerability Analysis with Automated Retrieval-Augmented LLMs | Oct 22, 2024 | Chunking, Hallucination | Code Available | 0 |
| Privacy-hardened and hallucination-resistant synthetic data generation with logic-solvers | Oct 22, 2024 | Generative Adversarial Network, Hallucination | Unverified | 0 |
| Navigating Noisy Feedback: Enhancing Reinforcement Learning with Error-Prone Language Models | Oct 22, 2024 | Hallucination, Language Modeling | Code Available | 0 |
| Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination | Oct 22, 2024 | Hallucination | Unverified | 0 |
| Mitigating Hallucinations of Large Language Models in Medical Information Extraction via Contrastive Decoding | Oct 21, 2024 | Hallucination | Unverified | 0 |
| Towards a Reliable Offline Personal AI Assistant for Long Duration Spaceflight | Oct 21, 2024 | Hallucination, Knowledge Graphs | Unverified | 0 |
| NetSafe: Exploring the Topological Safety of Multi-agent Networks | Oct 21, 2024 | Hallucination, Misinformation | Unverified | 0 |
| Large language models enabled multiagent ensemble method for efficient EHR data labeling | Oct 21, 2024 | Hallucination | Unverified | 0 |
| Learning to Generate and Evaluate Fact-checking Explanations with Transformers | Oct 21, 2024 | Fact Checking, Hallucination | Unverified | 0 |
| ToW: Thoughts of Words Improve Reasoning in Large Language Models | Oct 21, 2024 | Data Augmentation, Hallucination | Code Available | 0 |