| Title | Date | Topics |
|---|---|---|
| h4rm3l: A language for Composable Jailbreak Attack Synthesis | Aug 9, 2024 | Benchmarking, Program Synthesis |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | May 20, 2025 | Image Generation, Red Teaming |
| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | Oct 31, 2023 | GPU, Red Teaming |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | Nov 13, 2023 | Instruction Following, Red Teaming |
| Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols | Sep 12, 2024 | Decision Making, Red Teaming |
| HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | Mar 13, 2024 | Language Modelling, Large Language Model |
| LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | May 16, 2025 | Red Teaming |
| FLIRT: Feedback Loop In-context Red Teaming | Aug 8, 2023 | In-Context Learning, Red Teaming |
| A Multi-Disciplinary Review of Knowledge Acquisition Methods: From Human to Autonomous Eliciting Agents | Feb 27, 2018 | General Classification, Red Teaming |
| Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | Oct 21, 2024 | Red Teaming |
| Lessons From Red Teaming 100 Generative AI Products | Jan 13, 2025 | Benchmarking, Red Teaming |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | Mar 27, 2024 | Red Teaming |
| JAB: Joint Adversarial Prompting and Belief Augmentation | Nov 16, 2023 | Red Teaming |
| Finding Safety Neurons in Large Language Models | Jun 20, 2024 | Misinformation, Red Teaming |
| A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI | Apr 23, 2024 | Prompt Engineering, Red Teaming |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | May 30, 2024 | Red Teaming |
| Fast Proxies for LLM Robustness Evaluation | Feb 14, 2025 | Red Teaming |
| Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | Jan 9, 2025 | Red Teaming |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | May 24, 2025 | Intent Detection, Natural Language Understanding |
| Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | Feb 22, 2025 | Diversity, In-Context Learning |
| A Framework for Evaluating Emerging Cyberattack Capabilities of AI | Mar 14, 2025 | Red Teaming |
| KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to Jailbreak LLMs | Feb 5, 2025 | Diversity, Prompt Engineering |
| Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | Mar 6, 2025 | Benchmarking, Language Modeling |
| CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring | May 29, 2025 | Red Teaming |
| Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations | Jun 25, 2024 | Red Teaming, Reinforcement Learning (RL) |