
Adversarial Text

Adversarial Text refers to a specially crafted text sequence designed to influence the prediction of a language model. Adversarial text attacks are typically carried out against Large Language Models (LLMs). Research into different adversarial approaches helps us build effective defense mechanisms that detect malicious text input and makes language models more robust.
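
As a minimal illustration of the general idea (not a method from any paper listed below), the following sketch runs a greedy character-deletion attack against a toy keyword-based sentiment classifier. Both the classifier and the attack are hypothetical examples written for this page; real attacks target learned models such as BERT or LLMs rather than keyword rules.

```python
# Minimal sketch of an adversarial text attack: greedily delete characters
# until the label assigned by a toy keyword-based classifier flips.
# Both the classifier and the attack are illustrative only.

POSITIVE_WORDS = {"great", "excellent", "love", "wonderful"}
NEGATIVE_WORDS = {"terrible", "awful", "hate", "boring"}


def score(text: str) -> int:
    """Positive-minus-negative keyword count; the toy model's 'logit'."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE_WORDS for t in tokens)
    neg = sum(t in NEGATIVE_WORDS for t in tokens)
    return pos - neg


def classify(text: str) -> str:
    return "positive" if score(text) > 0 else "negative"


def attack(text: str, max_edits: int = 10) -> str:
    """Greedy character-deletion attack: at each step remove the single
    character whose deletion lowers the score the most, stopping as soon
    as the predicted label differs from the original one."""
    original = classify(text)
    current = text
    for _ in range(max_edits):
        if classify(current) != original:
            break  # label has flipped; we are done
        candidates = [current[:i] + current[i + 1:] for i in range(len(current))]
        best = min(candidates, key=score)
        if score(best) >= score(current):
            break  # no single deletion makes further progress
        current = best
    return current


if __name__ == "__main__":
    sample = "I love this movie, it was great"
    adversarial = attack(sample)
    print(classify(sample))       # positive
    print(adversarial)            # sample with a few characters removed
    print(classify(adversarial))  # negative, despite the near-identical text
```

The same greedy search structure reappears in many published attacks; the main differences are the perturbation space (character swaps, synonym substitutions, paraphrases) and the scoring signal (model logits, gradients, or query-only feedback).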

Papers

Showing 21–30 of 114 papers

Title | Status | Hype
OpenFact at CheckThat! 2024: Combining Multiple Attack Methods for Effective Adversarial Text Generation | - | 0
Adversarial Text Rewriting for Text-aware Recommender Systems | Code | 1
Autonomous LLM-Enhanced Adversarial Attack for Text-to-Motion | - | 0
Enhancing Adversarial Text Attacks on BERT Models with Projected Gradient Descent | - | 0
Dissecting Adversarial Robustness of Multimodal LM Agents | Code | 2
Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? | - | 0
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation | - | 0
White-box Multimodal Jailbreaks Against Large Vision-Language Models | Code | 1
R.A.C.E.: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model | Code | 0
Revisiting the Adversarial Robustness of Vision Language Models: a Multimodal Perspective | Code | 0
Page 3 of 12

No leaderboard results yet.