| How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Feb 18, 2025 | Articles, Hallucination | Code Available | 0 |
| SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities | Feb 17, 2025 | Large Language Model, Misinformation | Unverified | 0 |
| Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation | Feb 17, 2025 | Decision Making, Language Modeling | Unverified | 0 |
| G-Safeguard: A Topology-Guided Security Lens and Treatment on LLM-based Multi-agent Systems | Feb 16, 2025 | Decision Making, Language Modeling | Code Available | 0 |
| LLM-Enhanced Multiple Instance Learning for Joint Rumor and Stance Detection with Social Context Information | Feb 13, 2025 | Binary Classification, Misinformation | Unverified | 0 |
| Mind What You Ask For: Emotional and Rational Faces of Persuasion by Large Language Models | Feb 13, 2025 | Misinformation | Unverified | 0 |
| Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking | Feb 13, 2025 | Decision Making, Fact Checking | Unverified | 0 |
| Towards Automated Fact-Checking of Real-World Claims: Exploring Task Formulation and Assessment with LLMs | Feb 13, 2025 | Claim Verification, Fact Checking | Unverified | 0 |
| Large Language Models and Provenance Metadata for Determining the Relevance of Images and Videos in News Stories | Feb 13, 2025 | Articles, Language Modeling | Unverified | 0 |
| E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | Feb 12, 2025 | Instruction Following, Language Modeling | Unverified | 0 |