SOTAVerified

Red Teaming

Papers

Showing 201–225 of 251 papers

Title | Status | Hype
A Red Teaming Framework for Securing AI in Maritime Autonomous Systems | | 0
Seamless: Multilingual Expressive and Streaming Speech Translation | Code | 6
DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions | | 0
InfoPattern: Unveiling Information Propagation Patterns in Social Media | Code | 0
JAB: Joint Adversarial Prompting and Belief Augmentation | | 0
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | | 0
Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Code | 0
Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework | | 0
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment | Code | 1
Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | | 0
AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications | | 0
MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | | 0
Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming | | 0
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | | 0
Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases | Code | 1
Attack Prompt Generation for Red Teaming and Defending Large Language Models | Code | 1
Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | | 0
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? | Code | 1
Large Language Model Unlearning | Code | 1
ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Code | 0
Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Code | 1
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Code | 2
Can Language Models be Instructed to Protect Personal Information? | | 0
Low-Resource Languages Jailbreak GPT-4 | | 0
No Offense Taken: Eliciting Offensiveness from Language Models | Code | 0
Page 9 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | | Unverified