SOTAVerified

Misinformation

Papers

Showing 331–340 of 1282 papers

Title | Status | Hype
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | Code | 0
SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities | - | 0
Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation | - | 0
G-Safeguard: A Topology-Guided Security Lens and Treatment on LLM-based Multi-agent Systems | Code | 0
LLM-Enhanced Multiple Instance Learning for Joint Rumor and Stance Detection with Social Context Information | - | 0
Mind What You Ask For: Emotional and Rational Faces of Persuasion by Large Language Models | - | 0
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking | - | 0
Towards Automated Fact-Checking of Real-World Claims: Exploring Task Formulation and Assessment with LLMs | - | 0
Large Language Models and Provenance Metadata for Determining the Relevance of Images and Videos in News Stories | - | 0
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | - | 0
Page 34 of 129

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TOKOFOU | Average F1 | 89.7 | - | Unverified