SOTAVerified

Fact Checking

Papers

Showing 501–550 of 669 papers

Title | Status | Hype
Decomposition Dilemmas: Does Claim Decomposition Boost or Burden Fact-Checking Performance? | | 0
Deep Ensemble Learning for News Stance Detection | | 0
Detecting Deception in Political Debates Using Acoustic and Textual Features | | 0
Detecting False Claims in Low-Resource Regions: A Case Study of Caribbean Islands | | 0
DialFact: A Benchmark for Fact-Checking in Dialogue | | 0
Did They Really Tweet That? Querying Fact-Checking Sites and Politwoops to Determine Tweet Misattribution | | 0
Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation | | 0
Do LLMs Understand Ambiguity in Text? A Case Study in Open-world Question Answering | | 0
DOMLIN at SemEval-2019 Task 8: Automated Fact Checking exploiting Ratings in Community Question Answering Forums | | 0
Do We Need Language-Specific Fact-Checking Models? The Case of Chinese | | 0
DUTH at SemEval-2019 Task 8: Part-Of-Speech Features for Question Classification | | 0
Surprising Efficacy of Fine-Tuned Transformers for Fact-Checking over Larger Language Models | | 0
Entanglement: Balancing Punishment and Compensation, Repeated Dilemma Game-Theoretic Analysis of Maximum Compensation Problem for Bypass and Least Cost Paths in Fact-Checking, Case of Fake News with Weak Wallace's Law | | 0
Entity-based Claim Representation Improves Fact-Checking of Medical Content in Tweets | | 0
Fact-Checking Generative AI: Ontology-Driven Biological Graphs for Disease-Gene Link Verification | | 0
Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking | | 0
Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting | | 0
Evaluating Large Language Model Capability in Vietnamese Fact-Checking Data Generation | | 0
Evaluating open-source Large Language Models for automated fact-checking | | 0
Evaluating the Performance of Large Language Models in Scientific Claim Detection and Classification | | 0
Evidence-based Interpretable Open-domain Fact-checking with Large Language Models | | 0
ExFake: Towards an Explainable Fake News Detection Based on Content and Social Context Information | | 0
eXplainable Bayesian Multi-Perspective Generative Retrieval | | 0
Explainable Fact-checking through Question Answering | | 0
Explainable Fact Checking with Probabilistic Answer Set Programming | | 0
Explainable Tsetlin Machine framework for fake news detection with credibility score assessment | | 0
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents | | 0
Exploring Multidimensional Checkworthiness: Designing AI-assisted Claim Prioritization for Human Fact-checkers | | 0
Extract and Aggregate: A Novel Domain-Independent Approach to Factual Data Verification | | 0
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News | | 0
ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread | | 0
FaBULOUS: Fact-checking Based on Understanding of Language Over Unstructured and Structured information | | 0
FactCheck Editor: Multilingual Text Editor with End-to-End fact-checking | | 0
Fact-checking AI-generated news reports: Can LLMs catch their own lies? | | 0
Fact-Checking at Scale with DimensionRank | | 0
Fact-checking based fake news detection: a review | | 0
Fact-Checking, Fake News, Propaganda, and Media Bias: Truth Seeking in the Post-Truth Era | | 0
Fact-Checking of AI-Generated Reports | | 0
Fact Checking or Psycholinguistics: How to Distinguish Fake and True Claims? | | 0
Fact Checking: Task definition and dataset construction | | 0
Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification | | 0
Fact Checking via Path Embedding and Aggregation | | 0
Fact-checking with Generative AI: A Systematic Cross-Topic Examination of LLMs Capacity to Detect Veracity of Political Information | | 0
FactCorp: A Corpus of Dutch Fact-checks and its Multiple Usages | | 0
FacTeR-Check: Semi-automated fact-checking through Semantic Similarity and Natural Language Inference | | 0
FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs | | 0
FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering | | 0
FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking | | 0
Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths? | | 0
Factorization of Fact-Checks for Low Resource Indian Languages | | 0
Page 11 of 14

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | monoT5-3B | nDCG@10 | 0.78 | | Unverified
2 | SGPT-BE-5.8B | nDCG@10 | 0.75 | | Unverified
3 | BM25+CE | nDCG@10 | 0.69 | | Unverified
4 | SGPT-CE-6.1B | nDCG@10 | 0.68 | | Unverified
5 | ColBERT | nDCG@10 | 0.67 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SGPT-BE-5.8B | nDCG@10 | 0.31 | | Unverified
2 | monoT5-3B | nDCG@10 | 0.28 | | Unverified
3 | BM25+CE | nDCG@10 | 0.25 | | Unverified
4 | SGPT-CE-6.1B | nDCG@10 | 0.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | monoT5-3B | nDCG@10 | 0.85 | | Unverified
2 | BM25+CE | nDCG@10 | 0.82 | | Unverified
3 | SGPT-BE-5.8B | nDCG@10 | 0.78 | | Unverified
4 | SGPT-CE-6.1B | nDCG@10 | 0.73 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HerO | Question Only score | 0.48 | | Unverified
2 | CTU AIC | Question Only score | 0.46 | | Unverified
3 | InFact | Question Only score | 0.45 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Abc | 0..5sec | 2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MA-CIN | Precision | 0.26 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FDHN | Accuracy (Test) | 0.7 | | Unverified
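Most of the retrieval leaderboards above report nDCG@10. As a minimal sketch of how that metric is computed (standard definition, not code from any of the listed systems): DCG sums each result's relevance discounted by the log of its rank, and nDCG normalizes that by the DCG of the ideal ordering.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """nDCG@k: DCG of the system's ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(relevances, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

# A ranking that puts the only relevant document at rank 1 scores 1.0;
# pushing it down the list lowers the score.
print(ndcg_at_k([1, 0, 0]))  # 1.0
print(ndcg_at_k([0, 0, 1]) < 1.0)  # True
```

A table entry such as "monoT5-3B, nDCG@10, 0.78" is this value averaged over all queries in the benchmark's test set.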