| Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods | May 8, 2025 | Red Teaming, Systematic Literature Review | —Unverified | 0 |
| SAGE-RT: Synthetic Alignment data Generation for Safety Evaluation and Red Teaming | Aug 14, 2024 | Red Teaming, Safety Alignment | —Unverified | 0 |
| Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses | May 30, 2023 | Red Teaming | —Unverified | 0 |
| Shaping Influence and Influencing Shaping: A Computational Red Teaming Trust-based Swarm Intelligence Model | Feb 26, 2018 | Red Teaming | —Unverified | 0 |
| STACK: Adversarial Attacks on LLM Safeguard Pipelines | Jun 30, 2025 | Red Teaming | —Unverified | 0 |
| STAR: SocioTechnical Approach to Red Teaming Language Models | Jun 17, 2024 | Red Teaming | —Unverified | 0 |
| SteerDiff: Steering towards Safe Text-to-Image Diffusion Models | Oct 3, 2024 | Image Generation, Red Teaming | —Unverified | 0 |
| Strategize Globally, Adapt Locally: A Multi-Turn Red Teaming Agent with Dual-Level Learning | Apr 2, 2025 | Red Teaming | —Unverified | 0 |
| Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming | Nov 10, 2023 | Red Teaming | —Unverified | 0 |
| Testing and Evaluation of Large Language Models: Correctness, Non-Toxicity, and Fairness | Aug 31, 2024 | Fairness, Language Modeling | —Unverified | 0 |