SOTAVerified

Red Teaming

Papers

Showing 151–175 of 251 papers

| Title | Status | Hype |
| --- | --- | --- |
| Effective Red-Teaming of Policy-Adherent Agents | | 0 |
| ELAB: Extensive LLM Alignment Benchmark in Persian Language | | 0 |
| Embodied Red Teaming for Auditing Robotic Foundation Models | | 0 |
| EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | | 0 |
| Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity | | 0 |
| Exploring Straightforward Conversational Red-Teaming | | 0 |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | | 0 |
| Fast Proxies for LLM Robustness Evaluation | | 0 |
| Finding Safety Neurons in Large Language Models | | 0 |
| FLIRT: Feedback Loop In-context Red Teaming | | 0 |
| Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols | | 0 |
| GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization | | 0 |
| Gradient-Based Language Model Red Teaming | | 0 |
| h4rm3l: A Language for Composable Jailbreak Attack Synthesis | | 0 |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | | 0 |
| Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | | 0 |
| HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | | 0 |
| In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models | | 0 |
| Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | | 0 |
| Investigating Bias Representations in Llama 2 Chat via Activation Steering | | 0 |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | | 0 |
| Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | | 0 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | | 0 |
| Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act | | 0 |
Page 7 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SUDO | Attack Success Rate | 41 | | Unverified |