| SeqAR: Jailbreak LLMs with Sequential Auto-Generated Characters | Jul 2, 2024 | Red Teaming, Safety Alignment | Code | Code Available | 0 | 5 |
| Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | Jan 19, 2024 | Model Editing, Red Teaming | Code | Code Available | 0 | 5 |
| Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Nov 15, 2023 | Red Teaming | Code | Code Available | 0 | 5 |
| Steering Without Side Effects: Improving Post-Deployment Control of Language Models | Jun 21, 2024 | Red Teaming, TruthfulQA | Code | Code Available | 0 | 5 |
| Distract Large Language Models for Automatic Jailbreak Attack | Mar 13, 2024 | Red Teaming | Code | Code Available | 0 | 5 |
| SAGE: A Generic Framework for LLM Safety Evaluation | Apr 28, 2025 | Red Teaming, Safety Alignment | Code | Code Available | 0 | 5 |
| The Structural Safety Generalization Problem | Apr 13, 2025 | Red Teaming | Code | Code Available | 0 | 5 |
| TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis | May 30, 2025 | Diversity, Language Modeling | Code | Code Available | 0 | 5 |
| We Should Identify and Mitigate Third-Party Safety Risks in MCP-Powered Agent Systems | Jun 16, 2025 | Position, Red Teaming | Code | Code Available | 0 | 5 |
| What Is Wrong with My Model? Identifying Systematic Problems with Semantic Data Slicing | Sep 14, 2024 | Red Teaming | Code | Code Available | 0 | 5 |
| Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents | Sep 5, 2022 | Red Teaming, Reinforcement Learning (RL) | Code | Code Available | 0 | 5 |
| Investigating Bias Representations in Llama 2 Chat via Activation Steering | Feb 1, 2024 | Decision Making, Red Teaming | — | Unverified | 0 | 0 |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | Mar 27, 2024 | Red Teaming | — | Unverified | 0 | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | Nov 16, 2023 | Red Teaming | — | Unverified | 0 | 0 |
| DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions | Dec 7, 2023 | Code Generation, Red Teaming | — | Unverified | 0 | 0 |
| Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | Nov 15, 2023 | Adversarial Attack, Red Teaming | — | Unverified | 0 | 0 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | May 30, 2024 | Red Teaming | — | Unverified | 0 | 0 |
| Jailbreaking Large Language Models with Symbolic Mathematics | Sep 17, 2024 | Red Teaming | — | Unverified | 0 | 0 |
| Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | Jan 9, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models | May 19, 2025 | Benchmarking, Red Teaming | — | Unverified | 0 | 0 |
| CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge | Apr 10, 2024 | Red Teaming | — | Unverified | 0 | 0 |
| JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | Mar 12, 2025 | Red Teaming, Safety Alignment | — | Unverified | 0 | 0 |
| KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to Jailbreak LLMs | Feb 5, 2025 | Diversity, Prompt Engineering | — | Unverified | 0 | 0 |
| Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | Mar 6, 2025 | Benchmarking, Language Modeling | — | Unverified | 0 | 0 |
| LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | May 16, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models | Aug 16, 2022 | Red Teaming | — | Unverified | 0 | 0 |
| CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring | May 29, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| Conversational Complexity for Assessing Risk in Large Language Models | Sep 2, 2024 | Red Teaming | — | Unverified | 0 | 0 |
| Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | Oct 17, 2023 | In-Context Learning, Red Teaming | — | Unverified | 0 | 0 |
| Lessons From Red Teaming 100 Generative AI Products | Jan 13, 2025 | Benchmarking, Red Teaming | — | Unverified | 0 | 0 |
| Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations | Jun 25, 2024 | Red Teaming, Reinforcement Learning (RL) | — | Unverified | 0 | 0 |
| LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded" | Oct 22, 2024 | Deep Reinforcement Learning, Red Teaming | — | Unverified | 0 | 0 |
| Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming | Jan 31, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| LLM-Safety Evaluations Lack Robustness | Mar 4, 2025 | Red Teaming, Response Generation | — | Unverified | 0 | 0 |
| LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs | Nov 13, 2024 | Prompt Engineering, Red Teaming | — | Unverified | 0 | 0 |
| The Human Factor in AI Red Teaming: Perspectives from Social and Collaborative Computing | Jul 10, 2024 | Fairness, Red Teaming | — | Unverified | 0 | 0 |
| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | Oct 31, 2023 | GPU, Red Teaming | — | Unverified | 0 | 0 |
| Low-Resource Languages Jailbreak GPT-4 | Oct 3, 2023 | Red Teaming | — | Unverified | 0 | 0 |
| MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming | Mar 8, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization | Mar 14, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | Nov 13, 2023 | Instruction Following, Red Teaming | — | Unverified | 0 | 0 |
| Computational Red Teaming in a Sudoku Solving Context: Neural Network Based Skill Representation and Acquisition | Feb 27, 2018 | Red Teaming | — | Unverified | 0 | 0 |
| MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models | Mar 19, 2025 | Adversarial Robustness, Autonomous Driving | — | Unverified | 0 | 0 |
| Model Card and Evaluations for Claude Models | Jul 11, 2023 | Arithmetic Reasoning, Bug Fixing | — | Unverified | 0 | 0 |
| CELL your Model: Contrastive Explanations for Large Language Models | Jun 17, 2024 | Red Teaming, Text Generation | — | Unverified | 0 | 0 |
| Multi-lingual Multi-turn Automated Red Teaming for LLMs | Apr 4, 2025 | Red Teaming | — | Unverified | 0 | 0 |
| The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm | Jun 26, 2024 | Cross-Lingual Transfer, Red Teaming | — | Unverified | 0 | 0 |
| Can Large Language Models Change User Preference Adversarially? | Jan 5, 2023 | Red Teaming | — | Unverified | 0 | 0 |
| Can Large Language Models Automatically Jailbreak GPT-4V? | Jul 23, 2024 | Face Recognition, In-Context Learning | — | Unverified | 0 | 0 |
| Offensive Security for AI Systems: Concepts, Practices, and Applications | May 9, 2025 | Red Teaming | — | Unverified | 0 | 0 |