
Adversarial Text

Adversarial Text refers to a specialised text sequence designed specifically to influence the prediction of a language model. Adversarial text attacks are generally carried out against Large Language Models (LLMs). Research into different adversarial approaches can help build effective defense mechanisms that detect malicious text input and support more robust language models.
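The core loop behind many black-box attacks of the kind listed below is simple: query the victim model, perturb the input slightly, and keep the perturbations that most reduce the model's confidence. The following is a minimal sketch of a greedy character-substitution attack; `score_fn`, the `POSITIVE` lexicon, and `greedy_char_attack` are hypothetical stand-ins for illustration, not the method of any specific paper.

```python
import random

# Hypothetical positive-sentiment lexicon standing in for a trained model.
POSITIVE = {"absolutely", "wonderful", "great", "good"}

def score_fn(text: str) -> float:
    """Hypothetical victim model: confidence rises with each intact
    lexicon word. A real attack would query an actual classifier or
    LLM at this point."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) / max(len(words), 1)

def greedy_char_attack(text: str, budget: int = 5, seed: int = 0) -> str:
    """Greedily apply single-character substitutions that most reduce
    the victim model's confidence, up to `budget` edits."""
    rng = random.Random(seed)
    adv = list(text)
    for _ in range(budget):
        base = score_fn("".join(adv))
        best = None  # (index, replacement char, confidence drop)
        for i, ch in enumerate(adv):
            if not ch.isalpha():
                continue
            c = rng.choice("abcdefghijklmnopqrstuvwxyz")
            candidate = adv[:i] + [c] + adv[i + 1:]
            drop = base - score_fn("".join(candidate))
            if best is None or drop > best[2]:
                best = (i, c, drop)
        if best is None or best[2] <= 0:  # no swap lowers confidence further
            break
        adv[best[0]] = best[1]
    return "".join(adv)

if __name__ == "__main__":
    original = "this movie was absolutely wonderful"
    adversarial = greedy_char_attack(original)
    print(original, score_fn(original))
    print(adversarial, score_fn(adversarial))
```

Real attacks replace the toy scoring function with repeated queries to the target model and use stronger perturbations, such as synonym substitution or visually similar characters, to keep the adversarial text readable to humans.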

Papers

Showing 111–114 of 114 papers

| Title | Status | Hype |
| --- | --- | --- |
| BinarySelect to Improve Accessibility of Black-Box Attack Research | Code | 0 |
| Vision-fused Attack: Advancing Aggressive and Stealthy Adversarial Text against Neural Machine Translation | Code | 0 |
| Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers | Code | 0 |
| BERT Lost Patience Won't Be Robust to Adversarial Slowdown | Code | 0 |
Page 12 of 12

Leaderboard

No leaderboard results yet.