SOTAVerified

Fact Checking

Papers

Showing 151–200 of 669 papers

| Title | Status | Hype |
|-------|--------|------|
| LookupForensics: A Large-Scale Multi-Task Dataset for Multi-Phase Image-Based Fact Verification | — | 0 |
| Multimodal Misinformation Detection using Large Vision-Language Models | — | 0 |
| Similarity over Factuality: Are we making progress on multimodal out-of-context misinformation detection? | Code | 0 |
| MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking | Code | 0 |
| Semantic Operators: A Declarative Model for Rich, AI-based Data Processing | Code | 5 |
| Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities | Code | 1 |
| African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda | — | 0 |
| Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches | — | 0 |
| ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild | Code | 2 |
| Generative Large Language Models in Automated Fact-Checking: A Survey | — | 0 |
| POLygraph: Polish Fake News Dataset | — | 0 |
| Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation | — | 0 |
| Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time | Code | 1 |
| How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models | — | 0 |
| Molecular Facts: Desiderata for Decontextualization in LLM Fact Verification | Code | 0 |
| "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models | — | 0 |
| FactFinders at CheckThat! 2024: Refining Check-worthy Statement Detection with LLMs through Data Pruning | Code | 0 |
| Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models | — | 0 |
| An Enhanced Fake News Detection System With Fuzzy Deep Learning | Code | 1 |
| Evaluating Evidence Attribution in Generated Fact Checking Explanations | Code | 0 |
| MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1 |
| SparseCL: Sparse Contrastive Learning for Contradiction Retrieval | — | 0 |
| Bag of Lies: Robustness in Continuous Pre-training BERT | — | 0 |
| Document-level Claim Extraction and Decontextualisation for Fact-Checking | Code | 1 |
| Missci: Reconstructing Fallacies in Misrepresented Science | Code | 0 |
| Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework | — | 0 |
| RATT: A Thought Structure for Coherent and Correct LLM Reasoning | Code | 1 |
| FactGenius: Combining Zero-Shot Prompting and Fuzzy Relation Mining to Improve Fact Verification with Knowledge Graphs | Code | 0 |
| Are Large Vision Language Models up to the Challenge of Chart Comprehension and Reasoning? An Extensive Investigation into the Capabilities and Limitations of LVLMs | — | 0 |
| ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread | — | 0 |
| The Impact and Opportunities of Generative AI in Fact-Checking | — | 0 |
| Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction | — | 0 |
| Automatic News Generation and Fact-Checking System Based on Language Processing | — | 0 |
| SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks | — | 0 |
| Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models | Code | 0 |
| ViWikiFC: Fact-Checking for Vietnamese Wikipedia-Based Textual Knowledge Source | — | 0 |
| OpenFactCheck: Building, Benchmarking Customized Fact-Checking Systems and Evaluating the Factuality of Claims and LLMs | Code | 2 |
| New contexts, old heuristics: How young people in India and the US trust online content in the age of generative AI | — | 0 |
| FactCheck Editor: Multilingual Text Editor with End-to-End fact-checking | — | 0 |
| Credible, Unreliable or Leaked?: Evidence Verification for Enhanced Automated Fact-checking | Code | 0 |
| EkoHate: Abusive Language and Hate Speech Detection for Code-switched Political Discussions on Nigerian Twitter | Code | 0 |
| ReproHum #0087-01: Human Evaluation Reproduction Report for Generating Fact Checking Explanations | — | 0 |
| Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM | Code | 0 |
| KGValidator: A Framework for Automatic Validation of Knowledge Graph Construction | — | 0 |
| RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models | — | 0 |
| Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines? | — | 0 |
| MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents | Code | 7 |
| Reliability Estimation of News Media Sources: Birds of a Feather Flock Together | Code | 0 |
| Collaboratively adding context to social media posts reduces the sharing of false news | — | 0 |
| KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking | Code | 2 |
Page 4 of 14

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | monoT5-3B | nDCG@10 | 0.78 | — | Unverified |
| 2 | SGPT-BE-5.8B | nDCG@10 | 0.75 | — | Unverified |
| 3 | BM25+CE | nDCG@10 | 0.69 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.68 | — | Unverified |
| 5 | ColBERT | nDCG@10 | 0.67 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SGPT-BE-5.8B | nDCG@10 | 0.31 | — | Unverified |
| 2 | monoT5-3B | nDCG@10 | 0.28 | — | Unverified |
| 3 | BM25+CE | nDCG@10 | 0.25 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.16 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | monoT5-3B | nDCG@10 | 0.85 | — | Unverified |
| 2 | BM25+CE | nDCG@10 | 0.82 | — | Unverified |
| 3 | SGPT-BE-5.8B | nDCG@10 | 0.78 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.73 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HerO | Question Only score | 0.48 | — | Unverified |
| 2 | CTU AIC | Question Only score | 0.46 | — | Unverified |
| 3 | InFact | Question Only score | 0.45 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Abc | 0..5sec | 2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MA-CIN | Precision | 0.26 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | FDHN | Accuracy (Test) | 0.7 | — | Unverified |