| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search | Apr 21, 2025 | Diversity, Evolutionary Algorithms | Code Available |
| sudo rm -rf agentic_security | Mar 26, 2025 | Adversarial Attack, AI and Safety | Code Available |
| Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training | Mar 24, 2025 | Diversity, Large Language Model | Code Available |
| UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning | Feb 28, 2025 | Large Language Model, Red Teaming | Code Available |
| Understanding and Enhancing the Transferability of Jailbreaking Attacks | Feb 5, 2025 | Intent Recognition, Red Teaming | Code Available |
| Siren: A Learning-Based Multi-Turn Attack Framework for Simulating Real-World Human Jailbreak Behaviors | Jan 24, 2025 | Red Teaming | Code Available |
| Gandalf the Red: Adaptive Security for LLMs | Jan 14, 2025 | Blocking, Language Modeling | Code Available |
| PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Dec 7, 2024 | Red Teaming, Safety Alignment | Code Available |
| GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Nov 21, 2024 | Bayesian Optimization, Red Teaming | Code Available |
| Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents | Oct 11, 2024 | Chatbot, Red Teaming | Code Available |
| RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking | Sep 26, 2024 | Red Teaming | Code Available |
| Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction | Sep 25, 2024 | Diversity, Red Teaming | Code Available |
| Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Aug 20, 2024 | AI and Safety, Diversity | Code Available |
| SEAS: Self-Evolving Adversarial Safety Optimization for Large Language Models | Aug 5, 2024 | Red Teaming | Code Available |
| Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs | Jul 22, 2024 | Model Editing, Red Teaming | Code Available |
| Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) | Jul 20, 2024 | Red Teaming | Code Available |
| CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | Jun 25, 2024 | Language Modeling | Code Available |
| Jailbreaking as a Reward Misspecification Problem | Jun 20, 2024 | Red Teaming | Code Available |
| Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Jun 17, 2024 | Language Modeling | Code Available |
| "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak | Jun 17, 2024 | Red Teaming | Code Available |
| MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models | Jun 11, 2024 | Red Teaming | Code Available |
| Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits | Jun 3, 2024 | Red Teaming | Code Available |
| DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints | May 29, 2024 | Diversity, Language Modeling | Code Available |
| Learning diverse attacks on large language models for robust red-teaming and safety tuning | May 28, 2024 | Diversity, Language Modeling | Code Available |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | May 24, 2024 | Diversity, Language Modeling | Code Available |