| Title | Date | Topics | Code | Citations |
| --- | --- | --- | --- | --- |
| RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts | Jan 29, 2025 | Chatbot, Red Teaming | Available | 0 |
| Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Jan 29, 2025 | Red Teaming, Safety Alignment | Available | 2 |
| Siren: A Learning-Based Multi-Turn Attack Framework for Simulating Real-World Human Jailbreak Behaviors | Jan 24, 2025 | Red Teaming | Available | 1 |
| Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models | Jan 14, 2025 | Red Teaming | Unverified | 0 |
| Gandalf the Red: Adaptive Security for LLMs | Jan 14, 2025 | Blocking, Language Modeling | Available | 1 |
| Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful Behaviors with Proximity Constraints | Jan 14, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| Lessons From Red Teaming 100 Generative AI Products | Jan 13, 2025 | Benchmarking, Red Teaming | Unverified | 0 |
| Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | Jan 9, 2025 | Red Teaming | Unverified | 0 |
| Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models | Jan 3, 2025 | Red Teaming | Unverified | 0 |
| Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning | Dec 24, 2024 | Diversity, Large Language Model | Unverified | 0 |