| LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded" | Oct 22, 2024 | Deep Reinforcement Learning, Red Teaming | Unverified | 0 |
| AdvAgent: Controllable Blackbox Red-teaming on Web Agents | Oct 22, 2024 | Decision Making, Red Teaming | Unverified | 0 |
| SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis | Oct 21, 2024 | LLM Jailbreak, Red Teaming | Code Available | 0 |
| Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis | Oct 21, 2024 | Red Teaming | Unverified | 0 |
| BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Oct 17, 2024 | Red Teaming, Safety Alignment | Code Available | 0 |
| A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation | Oct 15, 2024 | Anomaly Detection, Red Teaming | Unverified | 0 |
| VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | Oct 12, 2024 | Diversity, Hallucination | Unverified | 0 |
| Recent Advancements in LLM Red-Teaming: Techniques, Defenses, and Ethical Considerations | Oct 9, 2024 | Language Modeling | Unverified | 0 |
| SteerDiff: Steering towards Safe Text-to-Image Diffusion Models | Oct 3, 2024 | Image Generation, Red Teaming | Unverified | 0 |
| Automated Red Teaming with GOAT: the Generative Offensive Agent Tester | Oct 2, 2024 | Red Teaming | Unverified | 0 |