
Adversarial Text

Adversarial text is a specially crafted text sequence designed to manipulate the predictions of a language model. Such attacks are most commonly carried out against Large Language Models (LLMs). Understanding the different adversarial approaches helps in building effective defense mechanisms that detect malicious text input, and in training more robust language models.
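To make the attack setting concrete, below is a minimal sketch of a greedy word-substitution attack against an off-the-shelf sentiment classifier. The `transformers` sentiment-analysis pipeline is a real API, but the tiny `SYNONYMS` table and the `greedy_attack` helper are illustrative assumptions; practical attacks such as TextFooler instead use embedding-based candidate sets and semantic-similarity constraints.

```python
# Minimal sketch of a greedy word-substitution adversarial text attack,
# assuming an off-the-shelf HuggingFace sentiment classifier.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default binary sentiment model

# Toy synonym table (hypothetical; stands in for an embedding-based candidate set).
SYNONYMS = {
    "great": ["decent", "passable"],
    "love": ["tolerate", "accept"],
    "brilliant": ["adequate", "serviceable"],
}

def true_label_score(text, label):
    """Return the classifier's confidence in `label` for `text`."""
    result = classifier(text)[0]
    return result["score"] if result["label"] == label else 1.0 - result["score"]

def greedy_attack(text):
    """Greedily swap words for synonyms to reduce confidence in the original label."""
    orig_label = classifier(text)[0]["label"]
    words = text.split()
    for i, word in enumerate(words):
        best_score = true_label_score(" ".join(words), orig_label)
        for candidate in SYNONYMS.get(word.lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            score = true_label_score(" ".join(trial), orig_label)
            if score < best_score:  # keep the most damaging substitution
                best_score, words = score, trial
    adversarial = " ".join(words)
    return adversarial, classifier(adversarial)[0]

print(greedy_attack("I love this brilliant movie, it is great"))
```

A successful run drives the classifier's confidence in the original label down, sometimes flipping it outright, while the edited sentence stays readable to a human; that gap between human and model perception is what defense research on this page aims to close.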

Papers

Showing 51–60 of 114 papers

Title | Code available | Hype
R.A.C.E.: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model | Yes | 0
Revisiting the Adversarial Robustness of Vision Language Models: a Multimodal Perspective | Yes | 0
Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods | No | 0
Goal-guided Generative Prompt Injection Attack on Large Language Models | No | 0
A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models | Yes | 0
Adversarial Text Purification: A Large Language Model Approach for Defense | No | 0
Arabic Synonym BERT-based Adversarial Examples for Text Classification | Yes | 0
Adversarial Text to Continuous Image Generation | No | 0
BERT Lost Patience Won't Be Robust to Adversarial Slowdown | Yes | 0
Towards a Robust Detection of Language Model Generated Text: Is ChatGPT that Easy to Detect? | No | 0

Leaderboard

No leaderboard results yet.