| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| Persona-Assigned Large Language Models Exhibit Human-Like Motivated Reasoning | Jun 24, 2025 | Misinformation | Code Available | 0 |
| Social Media Can Reduce Misinformation When Public Scrutiny is High | Jun 19, 2025 | Misinformation | Unverified | 0 |
| Veracity: An Open-Source AI Fact-Checking System | Jun 18, 2025 | Fact Checking, Misinformation | Unverified | 0 |
| PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning | Jun 18, 2025 | LLM-generated Text Detection, Misinformation | Unverified | 0 |
| RealFactBench: A Benchmark for Evaluating Large Language Models in Real-World Fact-Checking | Jun 14, 2025 | Explanation Generation, Fact Checking | Code Available | 0 |
| Step-by-Step Reasoning Attack: Revealing 'Erased' Knowledge in Large Language Models | Jun 14, 2025 | Misinformation | Unverified | 0 |
| Dataset of News Articles with Provenance Metadata for Media Relevance Assessment | Jun 11, 2025 | Articles, Misinformation | Unverified | 0 |
| In Crowd Veritas: Leveraging Human Intelligence To Fight Misinformation | Jun 10, 2025 | Fact Checking, Misinformation | Unverified | 0 |
| Empirical Evaluation of the Safety and Alignment of ChatGPT and Gemini: A Comparative Analysis of Vulnerabilities via Jailbreak Experiments | Jun 10, 2025 | Information Retrieval, Misinformation | Unverified | 0 |
| Can LLMs Ground when they (Don't) Know: A Study on Direct and Loaded Political Questions | Jun 10, 2025 | Misinformation | Unverified | 0 |