SOTAVerified

SST-2

Papers

Showing 1-50 of 66 papers

Title | Status | Hype
LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement | Code | 2
Revisiting Character-level Adversarial Attacks for Language Models | Code | 1
Text Classification via Large Language Models | Code | 1
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN | Code | 1
Adaptive Deep Neural Network Inference Optimization with EENet | Code | 1
ScaleFL: Resource-Adaptive Federated Learning With Heterogeneous Clients | Code | 1
A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis | Code | 1
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | Code | 1
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution | | 0
DAWSON: Data Augmentation using Weak Supervision On Natural Language | | 0
Defending Deep Neural Networks against Backdoor Attacks via Module Switching | | 0
Detecting Adversarial Text Attacks via SHapley Additive exPlanations | | 0
Distilling BERT for low complexity network training | | 0
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text | | 0
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English | | 0
Enhancing Task-Specific Distillation in Small Data Regimes through Language Generation | | 0
Exploring Variability in Fine-Tuned Models for Text Classification with DistilBERT | | 0
Few-shot Multimodal Multitask Multilingual Learning | | 0
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples | | 0
Robustness of Large Language Models Against Adversarial Attacks | | 0
An End-to-End Homomorphically Encrypted Neural Network | | 0
Are Sample-Efficient NLP Models More Robust? | | 0
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE | | 0
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing | | 0
BERMo: What can BERT learn from ELMo? | | 0
Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning | | 0
Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks | | 0
LoRA-Mixer: Coordinate Modular LoRA Experts Through Serial Attention Routing | | 0
Margin-Based Regularization and Selective Sampling in Deep Neural Networks | | 0
Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition | | 0
Noisy Text Data: Achilles' Heel of BERT | | 0
Objective-Based Hierarchical Clustering of Deep Embedding Vectors | | 0
On the Importance of Local Information in Transformer Based Models | | 0
PL-FGSA: A Prompt Learning Framework for Fine-Grained Sentiment Analysis Based on MindSpore | | 0
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | | 0
Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | | 0
Sentiment Analysis through LLM Negotiations | | 0
Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models | | 0
TangoBERT: Reducing Inference Cost by using Cascaded Architecture | | 0
Textual Data Augmentation for Efficient Active Learning on Tiny Datasets | | 0
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers | | 0
Two-in-One: A Model Hijacking Attack Against Text Generation Models | | 0
Uncertainty Sentence Sampling by Virtual Adversarial Perturbation | | 0
Improving Natural Language Understanding by Reverse Mapping Bytepair Encoding | | 0
Gradient-Based Word Substitution for Obstinate Adversarial Examples Generation in Language Models | | 0
LMO-DP: Optimizing the Randomization Mechanism for Differentially Private Fine-Tuning (Large) Language Models | | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge | Code | 0
Page 1 of 2

No leaderboard results yet.