| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| Derail Yourself: Multi-turn LLM Jailbreak Attack through Self-discovered Clues | Oct 14, 2024 | LLM Jailbreak, Safety Alignment | Code Available | 2 |
| PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks | May 20, 2025 | LLM Jailbreak, Safety Alignment | Code Available | 2 |
| JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks | Apr 3, 2024 | LLM Jailbreak | Code Available | 2 |
| JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models | Jun 26, 2024 | LLM Jailbreak, Survey | Code Available | 2 |
| Cognitive Overload Attack: Prompt Injection for Long Context | Oct 15, 2024 | In-Context Learning, LLM Jailbreak | Code Available | 1 |
| Automatic Prompt Optimization with "Gradient Descent" and Beam Search | May 4, 2023 | LLM Jailbreak | Code Available | 1 |
| CySecBench: Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking Large Language Models | Jan 2, 2025 | Benchmarking, Computer Security | Code Available | 1 |
| Efficient Indirect LLM Jailbreak via Multimodal-LLM Jailbreak | May 30, 2024 | Language Modeling | Unverified | 0 |
| DiffusionAttacker: Diffusion-Driven Prompt Manipulation for LLM Jailbreak | Dec 23, 2024 | Denoising, Diversity | Unverified | 0 |
| Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack | Apr 2, 2024 | LLM Jailbreak | Unverified | 0 |