SOTAVerified

Sentiment Analysis

Sentiment analysis is the task of classifying the polarity of a given text. For instance, a tweet can be classified as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment.
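
Training such a classifier can be sketched end-to-end with a tiny multinomial Naive Bayes model in pure Python. This is a minimal illustration only; the labelled texts below are invented, not drawn from any real dataset:

```python
import math
from collections import Counter, defaultdict

# Toy labelled texts (hypothetical examples for illustration).
TRAIN = [
    ("i love this movie it is great", "positive"),
    ("what a great wonderful day", "positive"),
    ("this is terrible i hate it", "negative"),
    ("awful boring waste of time", "negative"),
    ("the package arrived on tuesday", "neutral"),
    ("the meeting is at noon", "neutral"),
]

def train_nb(data):
    """Fit a multinomial Naive Bayes text classifier."""
    label_counts = Counter(label for _, label in data)
    word_counts = defaultdict(Counter)  # label -> word frequencies
    vocab = set()
    for text, label in data:
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def predict(text, label_counts, word_counts, vocab):
    """Return the most probable label under the model, with add-one smoothing."""
    total_docs = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label, doc_count in label_counts.items():
        logp = math.log(doc_count / total_docs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            if word in vocab:  # skip words never seen in training
                logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train_nb(TRAIN)
print(predict("i love this wonderful day", *model))   # -> positive
print(predict("this is awful i hate it", *model))     # -> negative
```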

Sentiment analysis techniques fall into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Active subfields include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
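
The lexicon-based family can be illustrated with a toy scorer. The word scores and the single negation rule below are invented for illustration; real systems rely on curated resources such as VADER or SentiWordNet, with thousands of scored entries and richer handling of negation and intensifiers:

```python
# Tiny hand-crafted polarity lexicon (illustrative values only).
LEXICON = {"great": 1.0, "love": 1.0, "wonderful": 0.8, "good": 0.5,
           "bad": -0.5, "terrible": -1.0, "hate": -1.0, "awful": -0.8}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text, threshold=0.1):
    """Sum lexicon scores, flipping polarity of the next scored word after a negator."""
    score, flip = 0.0, 1.0
    for word in text.lower().split():
        if word in NEGATORS:
            flip = -1.0
        elif word in LEXICON:
            score += flip * LEXICON[word]
            flip = 1.0
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(lexicon_sentiment("I love it, it is great"))   # -> positive
print(lexicon_sentiment("this was not good"))        # -> negative
```

The negation rule is what distinguishes even a toy scorer from plain keyword counting: "not good" scores negative despite containing a positive word.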

More recently, pre-trained language models such as RoBERTa and T5 have been fine-tuned into high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1. Benchmark datasets such as SST, GLUE, and the IMDb movie reviews corpus are used to compare sentiment analysis systems.
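
Precision, recall, and F1 for one class follow directly from the counts of true positives, false positives, and false negatives. A minimal sketch, using hypothetical gold labels and predictions:

```python
def precision_recall_f1(y_true, y_pred, target):
    """One-vs-rest precision, recall, and F1 for a single target class."""
    tp = sum(t == target and p == target for t, p in zip(y_true, y_pred))
    fp = sum(t != target and p == target for t, p in zip(y_true, y_pred))
    fn = sum(t == target and p != target for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold labels and model predictions.
y_true = ["pos", "pos", "neg", "neg", "pos", "neg"]
y_pred = ["pos", "neg", "neg", "pos", "pos", "neg"]
print(precision_recall_f1(y_true, y_pred, "pos"))  # precision, recall, F1 all 2/3
```

Macro-averaging these per-class scores over all labels gives the macro-F1 commonly reported on multi-class sentiment benchmarks.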

Papers

Showing 1–50 of 5630 papers

Title | Status | Hype
Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models | Code | 8
Enhancing Financial Sentiment Analysis via Retrieval Augmented Large Language Models | Code | 6
h2oGPT: Democratizing Large Language Models | Code | 6
Sample Design Engineering: An Empirical Study of What Makes Good Downstream Fine-Tuning Samples for LLMs | Code | 5
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5
Cross-Domain Aspect Extraction using Transformers Augmented with Knowledge Graphs | Code | 4
Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective | Code | 4
Sentiment Reasoning for Healthcare | Code | 3
emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation | Code | 3
PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis | Code | 3
Finetuned Language Models Are Zero-Shot Learners | Code | 3
Ludwig: a type-based declarative deep learning toolbox | Code | 3
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3
Pre-Training with Whole Word Masking for Chinese BERT | Code | 3
ERNIE: Enhanced Representation through Knowledge Integration | Code | 3
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
Universal Language Model Fine-tuning for Text Classification | Code | 3
CAPO: Cost-Aware Prompt Optimization | Code | 2
Fietje: An open, efficient LLM for Dutch | Code | 2
DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis | Code | 2
CNMBERT: A Model for Converting Hanyu Pinyin Abbreviations to Chinese Characters | Code | 2
Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis | Code | 2
Towards Robust Multimodal Sentiment Analysis with Incomplete Data | Code | 2
Recent Trends of Multimodal Affective Computing: A Survey from NLP Perspective | Code | 2
Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition | Code | 2
Quantformer: from attention to profit with a quantitative transformer trading strategy | Code | 2
VNLP: Turkish NLP Package | Code | 2
EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis | Code | 2
Atom: Low-bit Quantization for Efficient and Accurate LLM Serving | Code | 2
AnglE-optimized Text Embeddings | Code | 2
UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | Code | 2
MARLIN: Masked Autoencoder for facial video Representation LearnINg | Code | 2
TweetNLP: Cutting-Edge Natural Language Processing for Social Media | Code | 2
M-SENA: An Integrated Platform for Multimodal Sentiment Analysis | Code | 2
Closed-form Continuous-time Neural Models | Code | 2
The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models | Code | 2
DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis | Code | 2
Beyond Accuracy: Behavioral Testing of NLP models with CheckList | Code | 2
A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction | Code | 2
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models | Code | 2
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks | Code | 2
A Personalized Conversational Benchmark: Towards Simulating Personalized Conversations | Code | 1
A Python Tool for Reconstructing Full News Text from GDELT | Code | 1
FanChuan: A Multilingual and Graph-Structured Benchmark For Parody Detection and Analysis | Code | 1
MSE-Adapter: A Lightweight Plugin Endowing LLMs with the Capability to Perform Multimodal Sentiment Analysis and Emotion Recognition | Code | 1
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis | Code | 1
Market-Derived Financial Sentiment Analysis: Context-Aware Language Models for Crypto Forecasting | Code | 1
FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024 | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | – | Unverified
2 | T5-11B | Accuracy | 97.5 | – | Unverified
3 | MT-DNN-SMART | Accuracy | 97.5 | – | Unverified
4 | T5-3B | Accuracy | 97.4 | – | Unverified
5 | MUPPET Roberta Large | Accuracy | 97.4 | – | Unverified
6 | StructBERTRoBERTa ensemble | Accuracy | 97.1 | – | Unverified
7 | ALBERT | Accuracy | 97.1 | – | Unverified
8 | XLNet (single model) | Accuracy | 97 | – | Unverified
9 | SMARTRoBERTa | Dev Accuracy | 96.9 | – | Unverified
10 | ELECTRA | Accuracy | 96.9 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | – | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | – | Unverified
3 | XLNet | Accuracy | 96.21 | – | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | – | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | – | Unverified
6 | GraphStar | Accuracy | 96 | – | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | – | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | – | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | – | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | – | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | – | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | – | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | – | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | – | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | – | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | – | Unverified
8 | BERT Large | Accuracy | 55.5 | – | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | – | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | – | Unverified
2 | SVDCNN | Error | 4.74 | – | Unverified
3 | LEAM | Error | 4.69 | – | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | – | Unverified
5 | SWEM-hier | Error | 4.19 | – | Unverified
6 | SRNN | Error | 3.96 | – | Unverified
7 | M-ACNN | Error | 3.89 | – | Unverified
8 | DNC+CUW | Error | 3.6 | – | Unverified
9 | CCCapsNet | Error | 3.52 | – | Unverified
10 | Block-sparse LSTM | Error | 3.27 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | – | Unverified
2 | VLAWE | Accuracy | 93.3 | – | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | – | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | – | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | – | Unverified
6 | MEAN | Accuracy | 84.5 | – | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | – | Unverified
8 | Capsule-B | Accuracy | 82.3 | – | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | – | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | – | Unverified