| Title | Date | Tasks | Code | | |
|---|---|---|---|---|---|
| garak: A Framework for Security Probing Large Language Models | Jun 16, 2024 | Red Teaming | Code Available | 9 | 5 |
| PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI System | Oct 1, 2024 | Red Teaming | Code Available | 7 | 5 |
| Seamless: Multilingual Expressive and Streaming Speech Translation | Dec 8, 2023 | Automatic Speech Translation, Machine Translation | Code Available | 6 | 5 |
| HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal | Feb 6, 2024 | Red Teaming | Code Available | 4 | 5 |
| Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned | Aug 23, 2022 | Language Modelling, Red Teaming | Code Available | 3 | 5 |
| AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | Oct 3, 2024 | Red Teaming | Code Available | 3 | 5 |
| AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases | Jul 17, 2024 | Autonomous Driving, Backdoor Attack | Code Available | 3 | 5 |
| Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | May 31, 2024 | Red Teaming | Code Available | 2 | 5 |
| Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt | Jun 6, 2024 | Language Modelling, Large Language Model | Code Available | 2 | 5 |
| GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts | Sep 19, 2023 | Red Teaming | Code Available | 2 | 5 |
| Curiosity-driven Red-teaming for Large Language Models | Feb 29, 2024 | Red Teaming, Reinforcement Learning (RL) | Code Available | 2 | 5 |
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Aug 12, 2023 | Ethics, Red Teaming | Code Available | 2 | 5 |
| LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | Aug 27, 2024 | Language Modelling | Code Available | 2 | 5 |
| Against The Achilles' Heel: A Survey on Red Teaming for Generative Models | Mar 31, 2024 | Red Teaming, Survey | Code Available | 2 | 5 |
| Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Jan 29, 2025 | Red Teaming, Safety Alignment | Code Available | 2 | 5 |
| AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | Apr 21, 2024 | MMLU, Red Teaming | Code Available | 2 | 5 |
| Tamper-Resistant Safeguards for Open-Weight LLMs | Aug 1, 2024 | Red Teaming, TAR | Code Available | 2 | 5 |
| Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Oct 5, 2023 | Red Teaming, Safety Alignment | Code Available | 2 | 5 |
| WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models | Jun 26, 2024 | Chatbot, Red Teaming | Code Available | 2 | 5 |
| ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Apr 6, 2024 | Adversarial Robustness, Dialogue Safety Prediction | Code Available | 2 | 5 |
| Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Feb 13, 2024 | Language Modelling, Large Language Model | Code Available | 2 | 5 |
| Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models | Jul 17, 2024 | Benchmarking, Red Teaming | Code Available | 2 | 5 |
| Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts | Sep 12, 2023 | Red Teaming, Text-to-Image Generation | Code Available | 1 | 5 |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | May 24, 2024 | Diversity, Language Modelling | Code Available | 1 | 5 |
| GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Nov 21, 2024 | Bayesian Optimization, Red Teaming | Code Available | 1 | 5 |
| Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction | Sep 25, 2024 | Diversity, Red Teaming | Code Available | 1 | 5 |
| Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo | Apr 26, 2024 | Language Modelling, Prompt Engineering | Code Available | 1 | 5 |
| Query-Efficient Black-Box Red Teaming via Bayesian Optimization | May 27, 2023 | Bayesian Optimization, Language Modelling | Code Available | 1 | 5 |
| "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak | Jun 17, 2024 | Red Teaming | Code Available | 1 | 5 |
| Jailbreaking as a Reward Misspecification Problem | Jun 20, 2024 | Red Teaming | Code Available | 1 | 5 |
| MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models | Jun 11, 2024 | Red Teaming | Code Available | 1 | 5 |
| OET: Optimization-based prompt injection Evaluation Toolkit | May 1, 2025 | Adversarial Robustness, Natural Language Understanding | Code Available | 1 | 5 |
| Defending Against Unforeseen Failure Modes with Latent Adversarial Training | Mar 8, 2024 | Image Classification | Code Available | 1 | 5 |
| Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Jun 17, 2024 | Language Modelling | Code Available | 1 | 5 |
| Learning diverse attacks on large language models for robust red-teaming and safety tuning | May 28, 2024 | Diversity, Language Modelling | Code Available | 1 | 5 |
| Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) | Jul 20, 2024 | Red Teaming | Code Available | 1 | 5 |
| CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | Jun 25, 2024 | Language Modelling | Code Available | 1 | 5 |
| Control Risk for Potential Misuse of Artificial Intelligence in Science | Dec 11, 2023 | Red Teaming | Code Available | 1 | 5 |
| Jailbroken: How Does LLM Safety Training Fail? | Jul 5, 2023 | Red Teaming | Code Available | 1 | 5 |
| Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases | Oct 22, 2023 | Language Modelling | Code Available | 1 | 5 |
| Aloe: A Family of Fine-tuned Open Healthcare LLMs | May 3, 2024 | Prompt Engineering, Red Teaming | Code Available | 1 | 5 |
| Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment | Nov 15, 2023 | Red Teaming, Safety Alignment | Code Available | 1 | 5 |
| DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints | May 29, 2024 | Diversity, Language Modelling | Code Available | 1 | 5 |
| Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Aug 20, 2024 | AI and Safety, Diversity | Code Available | 1 | 5 |
| Explore, Establish, Exploit: Red Teaming Language Models from Scratch | Jun 15, 2023 | Red Teaming | Code Available | 1 | 5 |
| MTSA: Multi-turn Safety Alignment for LLMs through Multi-round Red-teaming | May 22, 2025 | Red Teaming, Safety Alignment | Code Available | 1 | 5 |
| Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation | Feb 14, 2024 | Image Generation, Red Teaming | Code Available | 1 | 5 |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Oct 19, 2023 | In-Context Learning, Red Teaming | Code Available | 1 | 5 |
| AI Control: Improving Safety Despite Intentional Subversion | Dec 12, 2023 | Red Teaming | Code Available | 1 | 5 |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Oct 10, 2023 | Red Teaming | Code Available | 1 | 5 |