SOTAVerified

Adversarial Text

Adversarial Text refers to a specialised text sequence designed specifically to influence the prediction of a language model. Adversarial Text attacks are generally carried out against Large Language Models (LLMs). Research into these adversarial approaches helps in building effective defense mechanisms that detect malicious text input, and in building more robust language models.
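As a minimal illustration of the idea, the sketch below perturbs input text at the character level so that a toy keyword-based classifier (a hypothetical stand-in for a language model; real attacks use gradient- or query-based search against neural models) no longer flags it, while the text stays readable to a human. All names here are illustrative, not from any specific paper on this page.

```python
def toy_toxicity_classifier(text: str) -> bool:
    # Hypothetical stand-in for a language model: flags text that
    # contains any blocklisted keyword.
    blocklist = {"attack", "exploit"}
    return any(word in blocklist for word in text.lower().split())

def perturb(word: str) -> str:
    # Character-level adversarial perturbation: swap two adjacent
    # middle characters so the token no longer matches exactly
    # ("attack" -> "attcak") but remains legible to a human reader.
    if len(word) < 4:
        return word
    i = len(word) // 2
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def adversarial_rewrite(text: str) -> str:
    # Apply the perturbation to every token of the input.
    return " ".join(perturb(w) for w in text.split())

original = "launch the attack now"
adversarial = adversarial_rewrite(original)
print(toy_toxicity_classifier(original))     # True: detected
print(toy_toxicity_classifier(adversarial))  # False: evaded
```

Detection work such as the SHAP-based papers listed below aims to catch exactly this kind of evasion, e.g. by spotting tokens whose attributions look anomalous.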

Papers

Showing 91–100 of 114 papers

Title | Hype
Towards Imperceptible Document Manipulations against Neural Ranking Models | 0
What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images | 0
Universal Adversarial Perturbation for Text Classification | 0
Graded Suspiciousness of Adversarial Texts to Human | 0
Adversarial Text Generation with Dynamic Contextual Perturbation | 0
What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations | 0
Target-driven Attack for Large Language Models | 0
Adversarial Text Generation via Sequence Contrast Discrimination | 0
Detecting Adversarial Text Attacks via SHapley Additive exPlanations | 0
Detecting Word-Level Adversarial Text Attacks via SHapley Additive exPlanations | 0
Page 10 of 12

No leaderboard results yet.