
Adversarial Text

Adversarial Text refers to a specially crafted text sequence designed to influence the prediction of a language model. Adversarial text attacks are typically carried out against Large Language Models (LLMs). Understanding different adversarial approaches helps researchers build effective defense mechanisms that detect malicious text input and train more robust language models.
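The idea above can be illustrated with a minimal sketch. The "model" here is a hypothetical toy keyword-based sentiment scorer (not a real LLM), and the perturbation is a simple character substitution; both are assumptions made for illustration only, but they show the core mechanic: a tiny, human-readable edit that flips the model's prediction.

```python
# Minimal sketch of a character-level adversarial text perturbation.
# The "model" is a hypothetical toy keyword matcher, not a real LLM;
# it only illustrates how a small edit can change a prediction.

NEGATIVE_WORDS = {"bad", "awful", "terrible", "boring"}

def toy_sentiment(text: str) -> str:
    """Label text 'negative' if it contains a known negative keyword."""
    words = text.lower().split()
    return "negative" if any(w in NEGATIVE_WORDS for w in words) else "positive"

def perturb(text: str, target: str = "bad", swap: str = "b4d") -> str:
    """Replace a keyword with a visually similar variant the model misses."""
    return text.replace(target, swap)

original = "the movie was bad"
adversarial = perturb(original)

print(toy_sentiment(original))     # negative
print(toy_sentiment(adversarial))  # positive: the keyword match is broken
```

A human still reads "b4d" as "bad", but the brittle lexical match fails; real attacks apply the same principle with gradient- or search-based edits against neural models.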

Papers

Showing 31–40 of 114 papers

- From Unsupervised Machine Translation To Adversarial Text Generation (Hype: 0)
- Adversarial Text Normalization (Hype: 0)
- Adversarial Text Generation Without Reinforcement Learning (Hype: 0)
- PBI-Attack: Prior-Guided Bimodal Interactive Black-Box Jailbreak Attack for Toxicity Maximization (Hype: 0)
- Fooling OCR Systems with Adversarial Text Images (Hype: 0)
- Generating Natural Language Adversarial Examples on a Large Scale with Generative Models (Hype: 0)
- Autonomous LLM-Enhanced Adversarial Attack for Text-to-Motion (Hype: 0)
- A survey on text generation using generative adversarial networks (Hype: 0)
- Adversarial Text Generation with Dynamic Contextual Perturbation (Hype: 0)
- Adversarial Text Generation via Sequence Contrast Discrimination (Hype: 0)
Page 4 of 12
