
Adversarial Text

Adversarial Text refers to a specialised text sequence designed specifically to influence the prediction of a language model. Such attacks are typically carried out against Large Language Models (LLMs). Research into different adversarial approaches helps us build effective defense mechanisms for detecting malicious text input and, ultimately, more robust language models.
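To make the idea concrete, here is a minimal, self-contained sketch of a character-level adversarial text attack. The "model" is a hypothetical toy keyword classifier invented purely for illustration (real attacks target learned models such as LLMs); the point is only that a tiny, barely visible perturbation can flip a prediction.

```python
# Minimal sketch of a character-level adversarial text attack.
# The toy keyword "classifier" below is a stand-in for a real model,
# used only to show how a small perturbation can change a prediction.

NEGATIVE_WORDS = {"bad", "awful", "terrible"}

def toy_classifier(text: str) -> str:
    """Label text 'negative' if it contains a known negative keyword."""
    tokens = text.lower().split()
    return "negative" if any(t in NEGATIVE_WORDS for t in tokens) else "positive"

def attack(text: str) -> str:
    """Swap two adjacent characters inside each flagged word.

    The perturbed word ("awful" -> "afwul") is still readable to a
    human, but no longer matches the classifier's keyword list.
    """
    tokens = text.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in NEGATIVE_WORDS and len(tok) > 2:
            tokens[i] = tok[0] + tok[2] + tok[1] + tok[3:]
    return " ".join(tokens)

original = "the movie was awful"
adversarial = attack(original)
print(toy_classifier(original))     # -> negative
print(toy_classifier(adversarial))  # -> positive
```

Real attacks replace the hand-written swap rule with a search (e.g. gradient-guided or query-based) over perturbations that preserve meaning for humans while maximally shifting the model's output.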

Papers

Showing 111–114 of 114 papers

Title (Hype)
- "That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks (0)
- From Unsupervised Machine Translation To Adversarial Text Generation (0)
- Generating Natural Language Adversarial Examples on a Large Scale with Generative Models (0)
- Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? (0)

No leaderboard results yet.