| Battling Misinformation: An Empirical Study on Adversarial Factuality in Open-Source Large Language Models | Mar 12, 2025 | Misinformation | Unverified | 0 |
| How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation | Mar 12, 2025 | Counterfactual, Misconceptions | Code Available | 0 |
| Certainly Bot Or Not? Trustworthy Social Bot Detection via Robust Multi-Modal Neural Processes | Mar 11, 2025 | Misinformation | Unverified | 0 |
| A Graph-based Verification Framework for Fact-Checking | Mar 10, 2025 | Fact Checking, Graph Construction | Unverified | 0 |
| TH-Bench: Evaluating Evading Attacks via Humanizing AI Text on Machine-Generated Text Detectors | Mar 10, 2025 | Misinformation | Unverified | 0 |
| Simulating Influence Dynamics with LLM Agents | Mar 10, 2025 | Misinformation | Unverified | 0 |
| Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for nuanced biases | Mar 8, 2025 | Bias Detection, Counterfactual | Unverified | 0 |
| Evaluating open-source Large Language Models for automated fact-checking | Mar 7, 2025 | Fact Checking, Misinformation | Unverified | 0 |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | Mar 7, 2025 | Attribute, Hallucination | Unverified | 0 |
| SafeArena: Evaluating the Safety of Autonomous Web Agents | Mar 6, 2025 | Misinformation, Safety Alignment | Unverified | 0 |