| Effective Red-Teaming of Policy-Adherent Agents | Jun 11, 2025 | Red Teaming | —Unverified | 0 |
| ELAB: Extensive LLM Alignment Benchmark in Persian Language | Apr 17, 2025 | Fairness, Red Teaming | —Unverified | 0 |
| Embodied Red Teaming for Auditing Robotic Foundation Models | Nov 27, 2024 | Red Teaming | —Unverified | 0 |
| EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | May 20, 2025 | Red Teaming | —Unverified | 0 |
| Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity | Jan 30, 2023 | Ethics, Language Modelling | —Unverified | 0 |
| Exploring Straightforward Conversational Red-Teaming | Sep 7, 2024 | Red Teaming | —Unverified | 0 |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | May 24, 2025 | Intent Detection, Natural Language Understanding | —Unverified | 0 |
| Fast Proxies for LLM Robustness Evaluation | Feb 14, 2025 | Red Teaming | —Unverified | 0 |
| Finding Safety Neurons in Large Language Models | Jun 20, 2024 | Misinformation, Red Teaming | —Unverified | 0 |
| FLIRT: Feedback Loop In-context Red Teaming | Aug 8, 2023 | In-Context Learning, Red Teaming | —Unverified | 0 |
| Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols | Sep 12, 2024 | Decision Making, Red Teaming | —Unverified | 0 |
| GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization | May 25, 2025 | Large Language Model, Red Teaming | —Unverified | 0 |
| Gradient-Based Language Model Red Teaming | Jan 30, 2024 | Language Modeling, Language Modelling | —Unverified | 0 |
| h4rm3l: A language for Composable Jailbreak Attack Synthesis | Aug 9, 2024 | Benchmarking, Program Synthesis | —Unverified | 0 |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | May 20, 2025 | Image Generation, Red Teaming | —Unverified | 0 |
| Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | May 20, 2025 | Contrastive Learning, Red Teaming | —Unverified | 0 |
| HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | Mar 13, 2024 | Language Modelling, Large Language Model | —Unverified | 0 |
| In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models | Nov 25, 2024 | Red Teaming, Semantic Similarity | —Unverified | 0 |
| Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | Oct 21, 2024 | Red Teaming | —Unverified | 0 |
| Investigating Bias Representations in Llama 2 Chat via Activation Steering | Feb 1, 2024 | Decision Making, Red Teaming | —Unverified | 0 |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | Mar 27, 2024 | Red Teaming | —Unverified | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | Nov 16, 2023 | Red Teaming | —Unverified | 0 |
| Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | Nov 15, 2023 | Adversarial Attack, Red Teaming | —Unverified | 0 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | May 30, 2024 | Red Teaming | —Unverified | 0 |
| Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act | Jun 2, 2025 | Red Teaming | —Unverified | 0 |