Fake News Detection
Fake News Detection is a natural language processing task that involves classifying news articles or other text as real or fake. The goal is to develop algorithms that automatically identify and flag fake news articles, helping to combat misinformation and promote the dissemination of accurate information.
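A common entry point for this task is a linear classifier over bag-of-words features. The sketch below is purely illustrative (it assumes scikit-learn is available, and the articles and labels are toy placeholders, not a real dataset); production systems on the leaderboards below use far richer models.

```python
# Minimal illustrative fake-news classifier: TF-IDF features + logistic
# regression. The training articles and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "Shocking miracle cure doctors don't want you to know about",
    "Government releases official quarterly economic report",
    "You won't believe this one weird trick to erase all debt",
]
labels = ["real", "fake", "real", "fake"]  # toy ground-truth labels

# Pipeline: word and bigram TF-IDF vectors fed into a logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(articles, labels)

# Predict a label ("real" or "fake") for an unseen headline.
print(clf.predict(["Official peer-reviewed report released"]))
```

Real systems replace the toy corpus with a labeled dataset such as LIAR or FNC-1 and typically swap the linear model for a fine-tuned transformer.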
Papers
Datasets: FNC-1, RAWFC, Grover-Mega, LIAR, Hostility Detection Dataset in Hindi, COVID-19 Fake News Dataset, MediaEval2016, PolitiFact, Social media, Weibo NER
Benchmark Results
FNC-1

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Sepúlveda-Torres R., Vicente M., Saquete E., Lloret E., Palomar M. (2021) | Weighted Accuracy | 90.73 | — | Unverified |
| 2 | Zainab A. Jawad, Ahmed J. Obaid (CNN and DNN with SCM, 2022) | Weighted Accuracy | 84.6 | — | Unverified |
| 3 | Bhatt et al. | Weighted Accuracy | 83.08 | — | Unverified |
| 4 | Bi-LSTM (max-pooling, attention) | Weighted Accuracy | 82.23 | — | Unverified |
| 5 | 3rd place at FNC-1 - Team UCL Machine Reading (Riedel et al., 2017) | Weighted Accuracy | 81.72 | — | Unverified |
| 6 | Neural method from Mohtarami et al. + TF-IDF (Mohtarami et al., 2018) | Weighted Accuracy | 81.23 | — | Unverified |
| 7 | Neural method from Mohtarami et al. (Mohtarami et al., 2018) | Weighted Accuracy | 78.97 | — | Unverified |
| 8 | Baseline based on skip-thought embeddings (Bhatt et al., 2017) | Weighted Accuracy | 76.18 | — | Unverified |
| 9 | Baseline based on word2vec + hand-crafted features (Bhatt et al., 2017) | Weighted Accuracy | 72.78 | — | Unverified |
| 10 | Neural baseline based on bi-directional LSTMs (Bhatt et al., 2017) | Weighted Accuracy | 63.11 | — | Unverified |
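The "Weighted Accuracy" reported above follows the FNC-1 stance-detection scoring scheme, which (as described in the official challenge evaluation) awards 0.25 credit for correctly separating "unrelated" from related headline–body pairs, plus 0.75 for predicting the exact related stance (agree, disagree, or discuss). A hedged sketch of that metric, with hypothetical example labels:

```python
# Sketch of FNC-1 weighted scoring: 0.25 for the related/unrelated split,
# an additional 0.75 for the exact stance on related pairs. The gold and
# predicted labels below are made-up examples, not challenge data.
RELATED = {"agree", "disagree", "discuss"}

def fnc1_weighted_accuracy(gold, pred):
    score, max_score = 0.0, 0.0
    for g, p in zip(gold, pred):
        # Best achievable credit for this item.
        max_score += 0.25 + (0.75 if g in RELATED else 0.0)
        # Credit for getting the related/unrelated split right.
        if (g in RELATED) == (p in RELATED):
            score += 0.25
        # Extra credit for the exact stance on related pairs.
        if g in RELATED and g == p:
            score += 0.75
    return score / max_score

gold = ["agree", "unrelated", "discuss", "disagree"]
pred = ["agree", "unrelated", "disagree", "disagree"]
print(fnc1_weighted_accuracy(gold, pred))  # one stance error among related pairs
```

Because unrelated pairs dominate the FNC-1 test set and earn only 0.25 each, this weighting rewards systems that get the harder stance distinctions right rather than just the related/unrelated split.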
RAWFC

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Persuasive Writing Strategy | F1 | 55.8 | — | Unverified |
| 2 | HiSS | F1 | 53.9 | — | Unverified |
| 3 | CofCED | F1 | 51.1 | — | Unverified |
| 4 | ReAct | F1 | 49.8 | — | Unverified |
| 5 | Standard prompting with articles | F1 | 47.9 | — | Unverified |
| 6 | CoT | F1 | 44.4 | — | Unverified |
Grover-Mega

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Text-Transformers + five-fold, five-model cross-validation + pseudo-label algorithm | Unpaired Accuracy | 98.5 | — | Unverified |
| 2 | Grover-Mega | Unpaired Accuracy | 92 | — | Unverified |
| 3 | Grover-Large | Unpaired Accuracy | 80.8 | — | Unverified |
| 4 | BERT-Large | Unpaired Accuracy | 73.1 | — | Unverified |
| 5 | GPT2 (355M) | Unpaired Accuracy | 70.1 | — | Unverified |
LIAR

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hybrid CNNs (Text + All) | Test Accuracy | 0.27 | — | Unverified |
| 2 | CNNs | Test Accuracy | 0.27 | — | Unverified |
| 3 | Hybrid CNNs (Text + Speaker) | Test Accuracy | 0.25 | — | Unverified |
| 4 | Bi-LSTMs | Test Accuracy | 0.23 | — | Unverified |
Hostility Detection Dataset in Hindi

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Auxiliary IndicBert | F1 | 0.77 | — | Unverified |
| 2 | Auxiliary IndicBert | F1 | 0.57 | — | Unverified |
COVID-19 Fake News Dataset

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Ensemble Model + Heuristic Post-Processing | F1 | 0.99 | — | Unverified |
MediaEval2016

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SEMI-FND | Accuracy | 85.8 | — | Unverified |
PolitiFact

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Convolutional Tsetlin Machine | 1:1 Accuracy | 91.21 | — | Unverified |
Social media

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TextRNN | Accuracy | 92.4 | — | Unverified |
Weibo NER

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SEMI-FND | Accuracy | 86.83 | — | Unverified |