Sentiment Analysis
Sentiment analysis is the task of classifying the polarity of a given text. For instance, a tweet can be classified as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment.
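As a minimal, self-contained sketch of "text plus labels in, sentiment classifier out", the toy multinomial Naive Bayes model below learns word frequencies per class from a handful of labeled examples (the training texts and labels here are illustrative, not from any benchmark):

```python
from collections import Counter, defaultdict
import math

def train_nb(texts, labels):
    """Train a bag-of-words multinomial Naive Bayes classifier."""
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter(labels)
    for text, label in zip(texts, labels):
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + smoothed log likelihoods."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # add-one smoothing
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

texts = ["great movie loved it", "terrible plot awful acting",
         "loved the acting", "awful and boring"]
labels = ["positive", "negative", "positive", "negative"]
model = train_nb(texts, labels)
print(predict_nb(model, "loved this great film"))  # → positive
```

Real systems replace this with learned representations, but the train/predict split and the reliance on labeled data are the same.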
Sentiment analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
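To make the lexicon-based family concrete, here is a minimal sketch: score a text by summing word polarities from a hand-built toy lexicon, with one-word negation handling. The lexicon entries and negation rule are illustrative assumptions; production systems use curated resources such as VADER or SentiWordNet:

```python
# Toy polarity lexicon; values and words are illustrative only.
LEXICON = {"good": 1.0, "great": 2.0, "love": 2.0,
           "bad": -1.0, "terrible": -2.0, "boring": -1.5}
NEGATORS = {"not", "never", "no"}

def lexicon_score(text):
    """Sum word polarities; a negator flips the sign of the next polar word."""
    score, flip = 0.0, 1.0
    for word in text.lower().split():
        if word in NEGATORS:
            flip = -1.0
        elif word in LEXICON:
            score += flip * LEXICON[word]
            flip = 1.0  # negation is consumed by the next polar word
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(lexicon_score("not a good movie"))  # → negative
```

No training data is needed, which is the main appeal of lexicon methods; the trade-off is brittle coverage and crude handling of context.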
More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated using metrics like precision, recall, and F1. Sentiment analysis systems are benchmarked on datasets such as SST (included in the GLUE suite) and the IMDb movie review corpus.
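The evaluation metrics mentioned above can be computed directly from paired gold and predicted labels. A minimal sketch for a single target class (the example labels are made up for illustration):

```python
def prf1(gold, pred, positive="positive"):
    """Precision, recall, and F1 for one class from paired label lists."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = ["positive", "positive", "negative", "negative"]
pred = ["positive", "negative", "positive", "negative"]
print(prf1(gold, pred))  # P, R, and F1 are each 0.5 here
```

For multi-class settings like SST-5, these per-class scores are typically macro- or micro-averaged; leaderboards such as those below often report plain accuracy instead.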
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Word+ES (Scratch) | Attack Success Rate | 100 | — | Unverified |
| 2 | MT-DNN-SMART | Accuracy | 97.5 | — | Unverified |
| 3 | T5-11B | Accuracy | 97.5 | — | Unverified |
| 4 | MUPPET Roberta Large | Accuracy | 97.4 | — | Unverified |
| 5 | T5-3B | Accuracy | 97.4 | — | Unverified |
| 6 | ALBERT | Accuracy | 97.1 | — | Unverified |
| 7 | StructBERT + RoBERTa ensemble | Accuracy | 97.1 | — | Unverified |
| 8 | XLNet (single model) | Accuracy | 97 | — | Unverified |
| 9 | SMART-RoBERTa | Dev Accuracy | 96.9 | — | Unverified |
| 10 | ELECTRA | Accuracy | 96.9 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | — | Unverified |
| 2 | RoBERTa-large | Accuracy | 96.54 | — | Unverified |
| 3 | XLNet | Accuracy | 96.21 | — | Unverified |
| 4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | — | Unverified |
| 5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | — | Unverified |
| 6 | GraphStar | Accuracy | 96 | — | Unverified |
| 7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | — | Unverified |
| 8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | — | Unverified |
| 9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | — | Unverified |
| 10 | BERT large finetune UDA | Accuracy | 95.8 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | — | Unverified |
| 2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | — | Unverified |
| 3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | — | Unverified |
| 4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | — | Unverified |
| 5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | — | Unverified |
| 6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | — | Unverified |
| 7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | — | Unverified |
| 8 | BERT Large | Accuracy | 55.5 | — | Unverified |
| 9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | — | Unverified |
| 10 | BCN+ELMo | Accuracy | 54.7 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Char-level CNN | Error | 4.88 | — | Unverified |
| 2 | SVDCNN | Error | 4.74 | — | Unverified |
| 3 | LEAM | Error | 4.69 | — | Unverified |
| 4 | fastText, h=10, bigram | Error | 4.3 | — | Unverified |
| 5 | SWEM-hier | Error | 4.19 | — | Unverified |
| 6 | SRNN | Error | 3.96 | — | Unverified |
| 7 | M-ACNN | Error | 3.89 | — | Unverified |
| 8 | DNC+CUW | Error | 3.6 | — | Unverified |
| 9 | CCCapsNet | Error | 3.52 | — | Unverified |
| 10 | Block-sparse LSTM | Error | 3.27 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Millions of Emoji | Training Time | 1,500 | — | Unverified |
| 2 | VLAWE | Accuracy | 93.3 | — | Unverified |
| 3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | — | Unverified |
| 4 | AnglE-LLaMA-7B | Accuracy | 91.09 | — | Unverified |
| 5 | byte mLSTM7 | Accuracy | 86.8 | — | Unverified |
| 6 | MEAN | Accuracy | 84.5 | — | Unverified |
| 7 | RNN-Capsule | Accuracy | 83.8 | — | Unverified |
| 8 | Capsule-B | Accuracy | 82.3 | — | Unverified |
| 9 | SuBiLSTM-Tied | Accuracy | 81.6 | — | Unverified |
| 10 | USE_T+CNN | Accuracy | 81.59 | — | Unverified |