SOTAVerified

Misinformation

Papers

Showing 251–300 of 1282 papers

| Title | Status | Hype |
|---|---|---|
| Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking | | 0 |
| CRAVE: A Conflicting Reasoning Approach for Explainable Claim Verification Using LLMs | Code | 0 |
| Adaptation Method for Misinformation Identification | | 0 |
| MLEP: Multi-granularity Local Entropy Patterns for Universal AI-generated Image Detection | | 0 |
| Amplify Initiative: Building A Localized Data Platform for Globalized AI | | 0 |
| Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild | | 0 |
| ViClaim: A Multilingual Multilabel Dataset for Automatic Claim Detection in Videos | | 0 |
| SLURG: Investigating the Feasibility of Generating Synthetic Online Fallacious Discourse | | 0 |
| Watermarking Needs Input Repetition Masking | | 0 |
| Large Language Model-Informed Feature Discovery Improves Prediction and Interpretation of Credibility Perceptions of Visual Content | | 0 |
| Why am I seeing this? Towards recognizing social media recommender systems with missing recommendations | | 0 |
| RealHarm: A Collection of Real-World Language Model Application Failures | Code | 0 |
| Working with Large Language Models to Enhance Messaging Effectiveness for Vaccine Confidence | | 0 |
| HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs | Code | 0 |
| MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations | Code | 0 |
| Generative AI in Collaborative Academic Report Writing: Advantages, Disadvantages, and Ethical Considerations | | 0 |
| FMNV: A Dataset of Media-Published News Videos for Fake News Detection | Code | 0 |
| Latent Multimodal Reconstruction for Misinformation Detection | Code | 0 |
| Query Smarter, Trust Better? Exploring Search Behaviours for Verifying News Accuracy | | 0 |
| Surveying Professional Writers on AI: Limitations, Expectations, and Fears | Code | 0 |
| A Survey of Social Cybersecurity: Techniques for Attack Detection, Evaluations, Challenges, and Future Prospects | | 0 |
| Comparative Analysis of Deepfake Detection Models: New Approaches and Perspectives | | 0 |
| A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content | | 0 |
| One Pic is All it Takes: Poisoning Visual Document Retrieval Augmented Generation with a Single Image | | 0 |
| Is Less Really More? Fake News Detection with Limited Information | Code | 0 |
| When Persuasion Overrides Truth in Multi-Agent LLM Debates: Introducing a Confidence-Weighted Persuasion Override Rate (CW-POR) | | 0 |
| BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models | | 0 |
| A Multi-Agent Framework with Automated Decision Rule Optimization for Cross-Domain Misinformation Detection | | 0 |
| Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via Two-stage Filtering | | 0 |
| A Framework for Cryptographic Verifiability of End-to-End AI Pipelines | | 0 |
| Susceptibility of Large Language Models to User-Driven Factors in Medical Queries | | 0 |
| Detection of Somali-written Fake News and Toxic Messages on the Social Media Using Transformer-based Language Models | | 0 |
| Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models | | 0 |
| Deceptive Humor: A Synthetic Multilingual Benchmark Dataset for Bridging Fabricated Claims with Humorous Content | | 0 |
| ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models | | 0 |
| Entity-aware Cross-lingual Claim Detection for Automated Fact-checking | Code | 0 |
| Iffy-Or-Not: Extending the Web to Support the Critical Evaluation of Fallacious Texts | | 0 |
| Reliable and Efficient Amortized Model-based Evaluation | | 0 |
| Team NYCU at Defactify4: Robust Detection and Source Identification of AI-Generated Images Using CNN and CLIP-Based Models | Code | 0 |
| VaxGuard: A Multi-Generator, Multi-Type, and Multi-Role Dataset for Detecting LLM-Generated Vaccine Misinformation | | 0 |
| Battling Misinformation: An Empirical Study on Adversarial Factuality in Open-Source Large Language Models | | 0 |
| How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation | Code | 0 |
| Certainly Bot Or Not? Trustworthy Social Bot Detection via Robust Multi-Modal Neural Processes | | 0 |
| A Graph-based Verification Framework for Fact-Checking | | 0 |
| TH-Bench: Evaluating Evading Attacks via Humanizing AI Text on Machine-Generated Text Detectors | | 0 |
| Simulating Influence Dynamics with LLM Agents | | 0 |
| Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for nuanced biases | | 0 |
| Evaluating open-source Large Language Models for automated fact-checking | | 0 |
| Maximum Hallucination Standards for Domain-Specific Large Language Models | | 0 |
| SafeArena: Evaluating the Safety of Autonomous Web Agents | | 0 |
Page 6 of 26

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TOKOFOU | Average F1 | 89.7 | | Unverified |