| RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models | Apr 18, 2024 | Fact Checking, Language Modeling | Unverified | 0 |
| Misinformation Resilient Search Rankings with Webgraph-based Interventions | Apr 13, 2024 | Fairness, Misinformation | Code Available | 0 |
| Mitigating Cascading Effects in Large Adversarial Graph Environments | Apr 12, 2024 | Counterfactual, Data Augmentation | Unverified | 0 |
| Rumour Evaluation with Very Large Language Models | Apr 11, 2024 | Misinformation, Prompt Engineering | Code Available | 0 |
| Introducing L2M3, A Multilingual Medical Large Language Model to Advance Health Equity in Low-Resource Regions | Apr 11, 2024 | Diagnostic, Language Modeling | Unverified | 0 |
| Auditing Health-Related Recommendations in Social Media: A Case Study of Abortion on YouTube | Apr 11, 2024 | Misinformation | Unverified | 0 |
| Pitfalls of Conversational LLMs on News Debiasing | Apr 9, 2024 | Misinformation | Unverified | 0 |
| Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research | Apr 8, 2024 | Logical Fallacies, Misinformation | Unverified | 0 |
| NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps | Apr 2, 2024 | Hate Speech Detection, Misinformation | Code Available | 0 |
| A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection | Apr 2, 2024 | Misinformation | Code Available | 0 |