| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths? | Nov 8, 2024 | Articles, Fact Checking | Unverified |
| A Guide to Misinformation Detection Data and Evaluation | Nov 7, 2024 | All, Misinformation | Unverified |
| Harmful YouTube Video Detection: A Taxonomy of Online Harm and MLLMs as Alternative Annotators | Nov 6, 2024 | Binary Classification, Misinformation | Unverified |
| TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation | Nov 5, 2024 | Image to Video Generation, Misinformation | Unverified |
| Revisiting Game-Theoretic Control in Socio-Technical Networks: Emerging Design Frameworks and Contemporary Applications | Nov 4, 2024 | Management, Misinformation | Unverified |
| AMREx: AMR for Explainable Fact Verification | Nov 2, 2024 | Abstract Meaning Representation, Claim Verification | Unverified |
| E2E-AFG: An End-to-End Model with Adaptive Filtering for Retrieval-Augmented Generation | Nov 1, 2024 | Misinformation, Retrieval | Unverified |
| A graph-based approach to extracting narrative signals from public discourse | Nov 1, 2024 | Abstract Meaning Representation, Information Retrieval | Code Available |
| Exploring the Knowledge Mismatch Hypothesis: Hallucination Propensity in Small Models Fine-tuned on Data from Larger Models | Oct 31, 2024 | Hallucination, Misinformation | Unverified |
| Retrieval-Augmented Generation with Estimation of Source Reliability | Oct 30, 2024 | Misinformation, RAG | Unverified |
| Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting | Oct 29, 2024 | Language Modeling | Code Available |
| Can Users Detect Biases or Factual Errors in Generated Responses in Conversational Information-Seeking? | Oct 28, 2024 | Diversity, Misinformation | Code Available |
| Systematically Analyzing Prompt Injection Vulnerabilities in Diverse LLM Architectures | Oct 28, 2024 | Misinformation | Unverified |
| Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models | Oct 28, 2024 | Articles, Misinformation | Unverified |
| SubjECTive-QA: Measuring Subjectivity in Earnings Call Transcripts' QA Through Six-Dimensional Feature Analysis | Oct 28, 2024 | Fact Checking, Misinformation | Code Available |
| LLM Robustness Against Misinformation in Biomedical Question Answering | Oct 27, 2024 | Misinformation, Question Answering | Code Available |
| Malinowski in the Age of AI: Can large language models create a text game based on an anthropological classic? | Oct 27, 2024 | Misinformation, Text-Based Games | Unverified |
| LLM-Consensus: Multi-Agent Debate for Visual Misinformation Detection | Oct 26, 2024 | Decision Making, Misinformation | Unverified |
| A Systematic Review of Machine Learning Approaches for Detecting Deceptive Activities on Social Media: Methods, Challenges, and Biases | Oct 26, 2024 | Misinformation, Selection Bias | Unverified |
| Can We Trust AI Agents? A Case Study of an LLM-Based Multi-Agent System for Ethical AI | Oct 25, 2024 | Bias Detection, Ethics | Unverified |
| Detection of Human and Machine-Authored Fake News in Urdu | Oct 25, 2024 | Binary Classification, Fake News Detection | Code Available |
| A Debate-Driven Experiment on LLM Hallucinations and Accuracy | Oct 25, 2024 | Fact Checking, Hallucination | Unverified |
| The Stepwise Deception: Simulating the Evolution from True News to Fake News with LLM Agents | Oct 24, 2024 | Large Language Model, Misinformation | Unverified |
| Monolingual and Multilingual Misinformation Detection for Low-Resource Languages: A Comprehensive Survey | Oct 24, 2024 | Misinformation | Unverified |
| Watermarking Large Language Models and the Generated Content: Opportunities and Challenges | Oct 24, 2024 | Code Generation, Misinformation | Unverified |