SOTAVerified

Fact Checking

Papers

Showing 151–175 of 669 papers

| Title | Status | Hype |
| --- | --- | --- |
| LookupForensics: A Large-Scale Multi-Task Dataset for Multi-Phase Image-Based Fact Verification | | 0 |
| Multimodal Misinformation Detection using Large Vision-Language Models | | 0 |
| MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking | Code | 0 |
| Similarity over Factuality: Are we making progress on multimodal out-of-context misinformation detection? | Code | 0 |
| Semantic Operators: A Declarative Model for Rich, AI-based Data Processing | Code | 5 |
| Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities | Code | 1 |
| African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda | | 0 |
| Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches | | 0 |
| ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild | Code | 2 |
| Generative Large Language Models in Automated Fact-Checking: A Survey | | 0 |
| POLygraph: Polish Fake News Dataset | | 0 |
| Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation | | 0 |
| Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time | Code | 1 |
| How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models | | 0 |
| Molecular Facts: Desiderata for Decontextualization in LLM Fact Verification | Code | 0 |
| "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models | | 0 |
| FactFinders at CheckThat! 2024: Refining Check-worthy Statement Detection with LLMs through Data Pruning | Code | 0 |
| Catching Chameleons: Detecting Evolving Disinformation Generated using Large Language Models | | 0 |
| An Enhanced Fake News Detection System With Fuzzy Deep Learning | Code | 1 |
| Evaluating Evidence Attribution in Generated Fact Checking Explanations | Code | 0 |
| MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1 |
| SparseCL: Sparse Contrastive Learning for Contradiction Retrieval | | 0 |
| Bag of Lies: Robustness in Continuous Pre-training BERT | | 0 |
| Document-level Claim Extraction and Decontextualisation for Fact-Checking | Code | 1 |
| Missci: Reconstructing Fallacies in Misrepresented Science | Code | 0 |
Page 7 of 27

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | monoT5-3B | nDCG@10 | 0.78 | — | Unverified |
| 2 | SGPT-BE-5.8B | nDCG@10 | 0.75 | — | Unverified |
| 3 | BM25+CE | nDCG@10 | 0.69 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.68 | — | Unverified |
| 5 | ColBERT | nDCG@10 | 0.67 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SGPT-BE-5.8B | nDCG@10 | 0.31 | — | Unverified |
| 2 | monoT5-3B | nDCG@10 | 0.28 | — | Unverified |
| 3 | BM25+CE | nDCG@10 | 0.25 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.16 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | monoT5-3B | nDCG@10 | 0.85 | — | Unverified |
| 2 | BM25+CE | nDCG@10 | 0.82 | — | Unverified |
| 3 | SGPT-BE-5.8B | nDCG@10 | 0.78 | — | Unverified |
| 4 | SGPT-CE-6.1B | nDCG@10 | 0.73 | — | Unverified |
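The tables above report nDCG@10, a standard retrieval metric that rewards placing relevant documents near the top of a ranking: each result's relevance is discounted logarithmically by its rank position, and the sum is normalized by the score of an ideal ordering. A minimal sketch (the relevance labels in the example are illustrative, not from any dataset above):

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results.

    Position i (0-based) is discounted by log2(i + 2), so rank 1
    gets full credit and later ranks progressively less.
    """
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    """nDCG@k: DCG of the system ranking divided by the ideal DCG."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    if ideal_dcg == 0:
        return 0.0  # no relevant documents at all
    return dcg_at_k(ranked_relevances, k) / ideal_dcg

# Binary relevance labels for a ranked list of 10 retrieved documents.
print(round(ndcg_at_k([1, 0, 1, 1, 0, 0, 0, 1, 0, 0]), 3))  # → 0.877
```

A perfect ranking scores 1.0, so the 0.73–0.85 range in the table above means these retrievers place most relevant evidence in the top 10 but not always in the best order.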
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HerO | Question Only score | 0.48 | — | Unverified |
| 2 | CTU AIC | Question Only score | 0.46 | — | Unverified |
| 3 | InFact | Question Only score | 0.45 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Abc | 0..5sec | 2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MA-CIN | Precision | 0.26 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FDHN | Accuracy (Test) | 0.7 | — | Unverified |