| Title | Date | Topics | Code |
|---|---|---|---|
| Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Aug 20, 2024 | AI and Safety, Diversity | Code Available |
| Gandalf the Red: Adaptive Security for LLMs | Jan 14, 2025 | Blocking, Language Modeling | Code Available |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Oct 10, 2023 | Red Teaming | Code Available |
| Causality Analysis for Evaluating the Security of Large Language Models | Dec 13, 2023 | Red Teaming | Code Available |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Oct 19, 2023 | In-Context Learning, Red Teaming | Code Available |
| AI Control: Improving Safety Despite Intentional Subversion | Dec 12, 2023 | Red Teaming | Code Available |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | May 24, 2024 | Diversity, Language Modeling | Code Available |
| Control Risk for Potential Misuse of Artificial Intelligence in Science | Dec 11, 2023 | Red Teaming | Code Available |
| DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints | May 29, 2024 | Diversity, Language Modeling | Code Available |
| Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Jun 17, 2024 | Language Modeling | Code Available |