| Title | Date | Topics |
|---|---|---|
| Red Teaming Contemporary AI Models: Insights from Spanish and Basque Perspectives | Mar 13, 2025 | Red Teaming |
| JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | Mar 12, 2025 | Red Teaming, Safety Alignment |
| Reinforced Diffuser for Red Teaming Large Vision-Language Models | Mar 8, 2025 | Large Language Model, Red Teaming |
| MAD-MAX: Modular And Diverse Malicious Attack MiXtures for Automated LLM Red Teaming | Mar 8, 2025 | Red Teaming |
| Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | Mar 6, 2025 | Benchmarking, Language Modeling |
| LLM-Safety Evaluations Lack Robustness | Mar 4, 2025 | Red Teaming, Response Generation |
| Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models | Mar 3, 2025 | Red Teaming, Survey |
| Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | Feb 22, 2025 | Diversity, In-Context Learning |
| Fast Proxies for LLM Robustness Evaluation | Feb 14, 2025 | Red Teaming |
| A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management | Feb 10, 2025 | Management, Red Teaming |