| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | Mar 7, 2025 | Attribute, Hallucination | Unverified | 0 |
| Evaluating open-source Large Language Models for automated fact-checking | Mar 7, 2025 | Fact Checking, Misinformation | Unverified | 0 |
| SafeArena: Evaluating the Safety of Autonomous Web Agents | Mar 6, 2025 | Misinformation, Safety Alignment | Unverified | 0 |
| On Fact and Frequency: LLM Responses to Misinformation Expressed with Uncertainty | Mar 6, 2025 | Misinformation | Unverified | 0 |
| Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents | Mar 5, 2025 | Attribute, In-Context Learning | Unverified | 0 |
| When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits | Mar 5, 2025 | Domain Generalization, Fact Checking | Code Available | 0 |
| Social hierarchy shapes foraging decisions | Mar 4, 2025 | Misinformation | Unverified | 0 |
| Limited Effectiveness of LLM-based Data Augmentation for COVID-19 Misinformation Stance Detection | Mar 4, 2025 | Data Augmentation, Misinformation | Unverified | 0 |
| Persuasion at Play: Understanding Misinformation Dynamics in Demographic-Aware Human-LLM Interactions | Mar 3, 2025 | Misinformation | Unverified | 0 |
| Mind the (Belief) Gap: Group Identity in the World of LLMs | Mar 3, 2025 | Misinformation, Navigate | Code Available | 0 |