| Title | Date | Tags | Status | Count |
| --- | --- | --- | --- | --- |
| A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI | Apr 23, 2024 | Prompt Engineering, Red Teaming | —Unverified | 0 |
| Fast Proxies for LLM Robustness Evaluation | Feb 14, 2025 | Red Teaming | —Unverified | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | Nov 16, 2023 | Red Teaming | —Unverified | 0 |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | May 24, 2025 | Intent Detection, Natural Language Understanding | —Unverified | 0 |
| Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | Feb 22, 2025 | Diversity, In-Context Learning | —Unverified | 0 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | May 30, 2024 | Red Teaming | —Unverified | 0 |
| A Framework for Evaluating Emerging Cyberattack Capabilities of AI | Mar 14, 2025 | Red Teaming | —Unverified | 0 |
| Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency | Jan 9, 2025 | Red Teaming | —Unverified | 0 |
| Exploring Straightforward Conversational Red-Teaming | Sep 7, 2024 | Red Teaming | —Unverified | 0 |
| Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity | Jan 30, 2023 | Ethics, Language Modelling | —Unverified | 0 |