| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Derail Yourself: Multi-turn LLM Jailbreak Attack through Self-discovered Clues | Oct 14, 2024 | LLM Jailbreak, Safety Alignment | Code Available | 2 |
| JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models | Jun 26, 2024 | LLM Jailbreak, Survey | Code Available | 2 |
| PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks | May 20, 2025 | LLM Jailbreak, Safety Alignment | Code Available | 2 |
| JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks | Apr 3, 2024 | LLM Jailbreak | Code Available | 2 |
| CySecBench: Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking Large Language Models | Jan 2, 2025 | Benchmarking, Computer Security | Code Available | 1 |
| Automatic Prompt Optimization with "Gradient Descent" and Beam Search | May 4, 2023 | LLM Jailbreak | Code Available | 1 |
| Cognitive Overload Attack: Prompt Injection for Long Context | Oct 15, 2024 | In-Context Learning, LLM Jailbreak | Code Available | 1 |
| WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response | May 22, 2024 | LLM Jailbreak, Safety Alignment | Unverified | 0 |
| DiffusionAttacker: Diffusion-Driven Prompt Manipulation for LLM Jailbreak | Dec 23, 2024 | Denoising, Diversity | Unverified | 0 |
| Efficient Indirect LLM Jailbreak via Multimodal-LLM Jailbreak | May 30, 2024 | Language Modeling | Unverified | 0 |
| Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks | Oct 5, 2024 | LLM Jailbreak | Unverified | 0 |
| Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack | Apr 2, 2024 | LLM Jailbreak | Unverified | 0 |
| Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Carrier Articles | Aug 20, 2024 | Articles, Language Modeling | Unverified | 0 |
| HSF: Defending against Jailbreak Attacks with Hidden State Filtering | Aug 31, 2024 | LLM Jailbreak | Unverified | 0 |
| LLM Jailbreak Oracle | Jun 17, 2025 | LLM Jailbreak | Unverified | 0 |
| POEX: Understanding and Mitigating Policy Executable Jailbreak Attacks against Embodied AI | Dec 21, 2024 | LLM Jailbreak, Red Teaming | Unverified | 0 |
| SecurityLingua: Efficient Defense of LLM Jailbreak Attacks via Security-Aware Prompt Compression | Jun 15, 2025 | LLM Jailbreak, Safety Alignment | Unverified | 0 |
| Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models | Aug 16, 2023 | LLM Jailbreak | Unverified | 0 |
| SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner | Jun 8, 2024 | Adversarial Attack, LLM Jailbreak | Unverified | 0 |
| Graph of Attacks with Pruning: Optimizing Stealthy Jailbreak Prompt Generation for Enhanced LLM Content Moderation | Jan 28, 2025 | LLM Jailbreak | Code Available | 0 |
| CAVGAN: Unifying Jailbreak and Defense of LLMs via Generative Adversarial Attacks on their Internal Representations | Jul 8, 2025 | Generative Adversarial Network, Large Language Model | Code Available | 0 |
| Efficient LLM Jailbreak via Adaptive Dense-to-sparse Constrained Optimization | May 15, 2024 | LLM Jailbreak | Code Available | 0 |
| SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis | Oct 21, 2024 | LLM Jailbreak, Red Teaming | Code Available | 0 |
| SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage | Dec 19, 2024 | Language Modeling | Code Available | 0 |