Aspect-Based Sentiment Analysis (ABSA)
Aspect-Based Sentiment Analysis (ABSA) is a Natural Language Processing task that identifies specific aspects or components of a product or service and extracts the sentiment expressed toward each of them. ABSA typically involves a multi-step process: first, the aspects or features of the product or service discussed in the text are identified; next, a sentiment polarity (positive, negative, or neutral) is assigned to each aspect based on the context of the sentence or document; finally, the per-aspect results are aggregated to produce an overall sentiment for each aspect.
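The three steps above can be sketched with a toy rule-based pipeline. The aspect lexicon, opinion-word lists, and context window below are illustrative assumptions for the sketch, not part of any published ABSA system:

```python
# Toy ABSA pipeline: (1) aspect extraction, (2) per-aspect polarity,
# (3) aggregation across sentences. Lexicons are assumed for illustration.
from collections import defaultdict

ASPECT_TERMS = {"battery", "screen", "service", "food"}   # assumed aspect lexicon
POSITIVE = {"great", "good", "excellent", "amazing"}
NEGATIVE = {"bad", "poor", "terrible", "short"}

def extract_aspects(tokens):
    """Step 1: find aspect terms mentioned in the sentence."""
    return [t for t in tokens if t in ASPECT_TERMS]

def classify(tokens, aspect, window=3):
    """Step 2: assign polarity from opinion words near the aspect."""
    i = tokens.index(aspect)
    context = tokens[max(0, i - window): i + window + 1]
    score = sum(t in POSITIVE for t in context) - sum(t in NEGATIVE for t in context)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def absa(sentences):
    """Step 3: aggregate per-aspect polarities across all sentences."""
    results = defaultdict(list)
    for s in sentences:
        tokens = s.lower().split()
        for aspect in extract_aspects(tokens):
            results[aspect].append(classify(tokens, aspect))
    return dict(results)

print(absa(["The food was great", "Service was terrible"]))
```

Real systems replace each stage with a learned model (e.g. a sequence tagger for step 1 and a classifier over contextual embeddings for step 2), but the decomposition is the same.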
Recent work proposes more challenging ABSA tasks that predict sentiment triplets or quadruplets (Chen et al., 2022); the most influential of these are ASTE (Peng et al., 2020; Zhai et al., 2022), TASD (Wan et al., 2020), ASQP (Zhang et al., 2021a), and ACOS, which emphasizes implicit aspects and opinions (Cai et al., 2020a).
(Source: MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction)
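The F1 scores reported in the tables below are typically exact-match micro-F1 over predicted tuples: a predicted triplet or quadruplet counts as correct only if every element matches a gold tuple exactly. A minimal sketch of that metric (the example tuples are hypothetical):

```python
# Exact-match F1 over sentiment tuples, as commonly used for ASTE/ASQP-style
# evaluation: a prediction is a true positive only on a full tuple match.
def tuple_f1(pred, gold):
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)                       # exact tuple matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical (aspect, opinion, polarity) triplets:
gold = {("battery life", "short", "negative"), ("screen", "bright", "positive")}
pred = {("battery life", "short", "negative"), ("screen", "dim", "negative")}
print(round(tuple_f1(pred, gold), 2))  # tp=1, P=R=0.5 -> 0.5
```

Because the match is exact, getting the aspect and opinion spans right but the polarity wrong still counts as an error, which is part of why absolute F1 on these tasks is low.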
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ABSA-DeBERTa | Mean Acc (Restaurant + Laptop) | 86.11 | — | Unverified |
| 2 | MT-ISA | Mean Acc (Restaurant + Laptop) | 89.21 | — | Unverified |
| 3 | RVISA | Mean Acc (Restaurant + Laptop) | 89.1 | — | Unverified |
| 4 | LSA+DeBERTa-V3-Large | Mean Acc (Restaurant + Laptop) | 88.27 | — | Unverified |
| 5 | MaskedABSA | Mean Acc (Restaurant + Laptop) | 86.95 | — | Unverified |
| 6 | LCF-ATEPC | Mean Acc (Restaurant + Laptop) | 86.24 | — | Unverified |
| 7 | BERT-IL Finetuned | Restaurant (Acc) | 86.2 | — | Unverified |
| 8 | DPL-BERT | Mean Acc (Restaurant + Laptop) | 85.75 | — | Unverified |
| 9 | RoBERTa+MLP | Mean Acc (Restaurant + Laptop) | 85.58 | — | Unverified |
| 10 | KaGRMN-DSG | Mean Acc (Restaurant + Laptop) | 84.61 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MvP (multi-task) | F1 (L14) | 65.3 | — | Unverified |
| 2 | ASTE-Transformer | F1 (L14) | 64.9 | — | Unverified |
| 3 | Seq2Path | F1 (L14) | 64.82 | — | Unverified |
| 4 | MvP | F1 (L14) | 63.33 | — | Unverified |
| 5 | UIE | F1 (L14) | 62.94 | — | Unverified |
| 6 | AugABSA | F1 (L14) | 62.66 | — | Unverified |
| 7 | LEGO-ABSA (multi-task) | F1 (L14) | 62.2 | — | Unverified |
| 8 | DLO | F1 (L14) | 61.46 | — | Unverified |
| 9 | Paraphrase | F1 (L14) | 61.13 | — | Unverified |
| 10 | Span-ASTE | F1 (L14) | 59.38 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MvP (multi-task) | F1 (R15) | 52.21 | — | Unverified |
| 2 | MvP | F1 (R15) | 51.04 | — | Unverified |
| 3 | AugABSA | F1 (R15) | 50.01 | — | Unverified |
| 4 | DLO | F1 (R15) | 48.18 | — | Unverified |
| 5 | Paraphrase | F1 (R15) | 46.93 | — | Unverified |
| 6 | LEGO-ABSA (multi-task) | F1 (R15) | 46.1 | — | Unverified |
| 7 | GAS | F1 (R15) | 45.98 | — | Unverified |
| 8 | Gemma-3-27B (50-shot, self-consistency learning) | F1 (R15) | 41.74 | — | Unverified |
| 9 | Gemma-3-27B (10-shot, self-consistency learning) | F1 (R15) | 39.95 | — | Unverified |
| 10 | TAS-BERT | F1 (R15) | 34.78 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MvP (multi-task) | F1 (R15) | 64.74 | — | Unverified |
| 2 | MvP | F1 (R15) | 64.53 | — | Unverified |
| 3 | Paraphrase | F1 (R15) | 63.06 | — | Unverified |
| 4 | DLO | F1 (R15) | 62.95 | — | Unverified |
| 5 | LEGO-ABSA (multi-task) | F1 (R15) | 62.3 | — | Unverified |
| 6 | Gemma-3-27B (50-shot, self-consistency learning) | F1 (R15) | 62.12 | — | Unverified |
| 7 | GAS | F1 (R15) | 60.63 | — | Unverified |
| 8 | TAS-BERT | F1 (R15) | 57.51 | — | Unverified |
| 9 | Gemma-3-27B (10-shot, self-consistency learning) | F1 (R15) | 54.37 | — | Unverified |
| 10 | ChatGPT (gpt-3.5-turbo, few-shot) | F1 (R16) | 46.51 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MvP | F1 (Laptop) | 43.92 | — | Unverified |
| 2 | MvP (multi-task) | F1 (Laptop) | 43.84 | — | Unverified |
| 3 | DLO | F1 (Laptop) | 43.64 | — | Unverified |
| 4 | Paraphrase | F1 (Laptop) | 43.51 | — | Unverified |
| 5 | UnifiedABSA (multi-task) | F1 (Laptop) | 42.58 | — | Unverified |
| 6 | ChatGPT (gpt-3.5-turbo, few-shot) | F1 (Restaurant) | 37.71 | — | Unverified |
| 7 | Extract-Classify | F1 (Laptop) | 36.42 | — | Unverified |
| 8 | TAS-BERT | F1 (Laptop) | 27.31 | — | Unverified |
| 9 | ChatGPT (gpt-3.5-turbo, zero-shot) | F1 (Restaurant) | 27.11 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | InstructABSA | F1 | 79.34 | — | Unverified |
| 2 | FS-ABSA | F1 | 71.16 | — | Unverified |
| 3 | SPAN | F1 | 68.06 | — | Unverified |
| 4 | RACL-BERT | F1 | 63.4 | — | Unverified |
| 5 | BERT-E2E-ABSA | F1 | 61.12 | — | Unverified |
| 6 | DOER | F1 | 60.35 | — | Unverified |
| 7 | IMN | F1 | 58.37 | — | Unverified |
| 8 | E2E-TBSA | F1 | 57.9 | — | Unverified |
| 9 | Double-propagation | F1 | 27.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | YORO | Acc | 86.08 | — | Unverified |
| 2 | RGAT+ | Acc | 84.52 | — | Unverified |
| 3 | TGCN + BERT | Acc | 83.68 | — | Unverified |
| 4 | CapsNet-BERT | Acc | 83.39 | — | Unverified |
| 5 | CapsNet-BERT-DR | Acc | 82.97 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BERT-pair-QA-B | Aspect | 87.9 | — | Unverified |
| 2 | BERT-pair-QA-M | Aspect | 86.4 | — | Unverified |
| 3 | Liu et al. | Aspect | 78.5 | — | Unverified |
| 4 | Sentic LSTM + TA + SA | Aspect | 78.18 | — | Unverified |
| 5 | LSTM-LOC | Aspect | 69.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa-pair-large | F1 (%) | 80.9 | — | Unverified |
| 2 | RoBERTa-pair-large | F1 (%) | 80 | — | Unverified |
| 3 | BERT-single-large | F1 (%) | 78.8 | — | Unverified |
| 4 | BERT-PT | F1 (%) | 78.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | InstructABSA | Laptop (F1) | 92.3 | — | Unverified |
| 2 | BERT-PT | Laptop (F1) | 84.26 | — | Unverified |
| 3 | SyMux | Laptop (F1) | 78.99 | — | Unverified |
| 4 | RNCRF | Laptop (F1) | 78.42 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | gpt-3.5 finetuned | F1 | 83.76 | — | Unverified |
| 2 | FS-ABSA | F1 | 71.16 | — | Unverified |
| 3 | RACL-BERT | F1 | 63.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MaskedABSA | Restaurant (Acc) | 91.53 | — | Unverified |
| 2 | HAABSA++ | Restaurant (Acc) | 81.7 | — | Unverified |
| 3 | HAABSA | Restaurant (Acc) | 80.6 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BERT-pair-QA-B | Accuracy (3-way) | 89.9 | — | Unverified |
| 2 | ATLX | Accuracy (3-way) | 82.62 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HGCN | Acc | 78.64 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HGCN | Acc | 84.09 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HGCN | Acc | 82.66 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HGCN | Acc | 89.84 | — | Unverified |