SOTAVerified

Misinformation

Papers

Showing 451–500 of 1282 papers

| Title | Status | Hype |
| --- | --- | --- |
| Implications for Governance in Public Perceptions of Societal-scale AI Risks | | 0 |
| An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics | | 0 |
| ThatiAR: Subjectivity Detection in Arabic News Sentences | | 0 |
| Interpretable Multimodal Out-of-context Detection with Soft Logic Regularization | | 0 |
| Chaos with Keywords: Exposing Large Language Models Sycophantic Hallucination to Misleading Keywords and Evaluating Defense Strategies | | 0 |
| Evaluating the Efficacy of Large Language Models in Detecting Fake News: A Comparative Analysis | | 0 |
| Censorship in Democracy | | 0 |
| Missci: Reconstructing Fallacies in Misrepresented Science | Code | 0 |
| Cluster-Aware Similarity Diffusion for Instance Retrieval | | 0 |
| Early Detection of Misinformation for Infodemic Management: A Domain Adaptation Approach | | 0 |
| Enhancing Text Authenticity: A Novel Hybrid Approach for AI-Generated Text Detection | | 0 |
| SPOT: Text Source Prediction from Originality Score Thresholding | | 0 |
| Unlearning Climate Misinformation in Large Language Models | | 0 |
| The global landscape of academic guidelines for generative AI and Large Language Models | | 0 |
| Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space | Code | 1 |
| The Influencer Next Door: How Misinformation Creators Use GenAI | | 0 |
| Consumer lying in online reviews: recent evidence | | 0 |
| LG AI Research & KAIST at EHRSQL 2024: Self-Training Large Language Models with Pseudo-Labeled Unanswerable Questions for a Reliable Text-to-SQL System on EHRs | | 0 |
| SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks | | 0 |
| Tailoring Vaccine Messaging with Common-Ground Opinions | Code | 0 |
| Detecting Fallacies in Climate Misinformation: A Technocognitive Approach to Identifying Misleading Argumentation | | 0 |
| Discursive objection strategies in online comments: Developing a classification schema and validating its training | | 0 |
| ViWikiFC: Fact-Checking for Vietnamese Wikipedia-Based Textual Knowledge Source | | 0 |
| LingML: Linguistic-Informed Machine Learning for Enhanced Fake News Detection | | 0 |
| Quantifying the Capabilities of LLMs across Scale and Precision | | 0 |
| Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines | | 0 |
| Detecting Edited Knowledge in Language Models | | 0 |
| Can a Hallucinating Model help in Reducing Human "Hallucination"? | | 0 |
| Large Language Model Agent for Fake News Detection | | 0 |
| FactCheck Editor: Multilingual Text Editor with End-to-End fact-checking | | 0 |
| Credible, Unreliable or Leaked?: Evidence Verification for Enhanced Automated Fact-checking | Code | 0 |
| Exposing Text-Image Inconsistency Using Diffusion Models | Code | 1 |
| Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities | | 0 |
| Inside the echo chamber: Linguistic underpinnings of misinformation on Twitter | Code | 0 |
| Augmented CARDS: A machine learning approach to identifying triggers of climate change misinformation on Twitter | | 0 |
| Classifying Human-Generated and AI-Generated Election Claims in Social Media | | 0 |
| BotDGT: Dynamicity-aware Social Bot Detection with Dynamic Graph Transformers | Code | 1 |
| Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs | Code | 2 |
| The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking | | 0 |
| Autoformalizing Natural Language to First-Order Logic: A Case Study in Logical Fallacy Detection | | 0 |
| RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models | | 0 |
| Misinformation Resilient Search Rankings with Webgraph-based Interventions | Code | 0 |
| Mitigating Cascading Effects in Large Adversarial Graph Environments | | 0 |
| Rumour Evaluation with Very Large Language Models | Code | 0 |
| Introducing L2M3, A Multilingual Medical Large Language Model to Advance Health Equity in Low-Resource Regions | | 0 |
| Auditing health-related recommendations in social media: A Case Study of Abortion on YouTube | | 0 |
| Pitfalls of Conversational LLMs on News Debiasing | | 0 |
| Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research | | 0 |
| NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps | Code | 0 |
| A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection | Code | 0 |
Page 10 of 26

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TOKOFOU | Average F1 | 89.7 | | Unverified |