
Adversarial Text

Adversarial text is a specially crafted text sequence designed to influence the prediction of a language model. Adversarial text attacks are typically carried out against large language models (LLMs). Research into different adversarial approaches helps build effective defense mechanisms that detect malicious text input, and ultimately more robust language models.
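Many adversarial text attacks share a common loop: query the target model, perturb the input (character swaps, synonym substitutions, paraphrases), and keep perturbations that push the prediction toward the attacker's goal while preserving the text's meaning. The sketch below illustrates this loop with a greedy synonym-substitution attack against a toy sentiment scorer; the lexicon-based classifier, the synonym table, and the names (`predict_positive`, `greedy_attack`, `SYNONYMS`) are hypothetical stand-ins for illustration, not any specific paper's method.

```python
import math

# Hypothetical lexicon-based "victim model": returns P(positive) for tokens.
# A real black-box attack would query an actual classifier or LLM here.
WEIGHTS = {"great": 0.9, "good": 0.7, "bad": -0.8, "boring": -0.7}

def predict_positive(tokens):
    score = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid over a bag-of-words score

# Hypothetical synonym table constraining the attacker to fluent substitutions.
SYNONYMS = {"great": ["fine", "decent"], "good": ["okay", "passable"]}

def greedy_attack(sentence, flip_below=0.5):
    """Greedy synonym substitution: at each position, keep the candidate
    that most lowers P(positive); stop once the prediction flips."""
    tokens = sentence.lower().split()
    for i, tok in enumerate(tokens):
        best = predict_positive(tokens)
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            if predict_positive(trial) < best:
                best, tokens = predict_positive(trial), trial
        if best < flip_below:
            break
    return " ".join(tokens), predict_positive(tokens)

adv, p = greedy_attack("a great and good movie")
print(adv, round(p, 3))  # e.g. "a fine and okay movie" with a lower positive score
```

Published black-box attacks follow the same greedy query loop but score candidates against the real victim model and filter substitutions by semantic similarity, so the adversarial text stays natural to a human reader while flipping the model's output.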

Papers

Showing 11–20 of 114 papers

Title | Status | Hype
PBI-Attack: Prior-Guided Bimodal Interactive Black-Box Jailbreak Attack for Toxicity Maximization | No code | 0
TSCheater: Generating High-Quality Tibetan Adversarial Texts via Visual Similarity | Code | 0
SceneTAP: Scene-Coherent Typographic Adversarial Planner against Vision-Language Models in Real-World Environments | No code | 0
NMT-Obfuscator Attack: Ignore a sentence in translation with only one word | Code | 0
IAE: Irony-based Adversarial Examples for Sentiment Analysis Systems | No code | 0
Target-driven Attack for Large Language Models | No code | 0
AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models | Code | 1
Graded Suspiciousness of Adversarial Texts to Human | No code | 0
Adversarial Decoding: Generating Readable Documents for Adversarial Objectives | Code | 1
Vision-fused Attack: Advancing Aggressive and Stealthy Adversarial Text against Neural Machine Translation | Code | 0
