| Title | Date | Tasks | Code Status | Count |
| --- | --- | --- | --- | --- |
| EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | May 20, 2025 | Red Teaming | Unverified | 0 |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | May 20, 2025 | Image Generation, Red Teaming | Unverified | 0 |
| Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | May 20, 2025 | Contrastive Learning, Red Teaming | Unverified | 0 |
| CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models | May 19, 2025 | Benchmarking, Red Teaming | Unverified | 0 |
| LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | May 16, 2025 | Red Teaming | Unverified | 0 |
| Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety | May 11, 2025 | Outlier Detection, Red Teaming | Code Available | 0 |
| Offensive Security for AI Systems: Concepts, Practices, and Applications | May 9, 2025 | Red Teaming | Unverified | 0 |
| AgentVigil: Generic Black-Box Red-teaming for Indirect Prompt Injection against LLM Agents | May 9, 2025 | Navigate, Red Teaming | Unverified | 0 |
| Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods | May 8, 2025 | Red Teaming, Systematic Literature Review | Unverified | 0 |
| DMRL: Data- and Model-aware Reward Learning for Data Extraction | May 7, 2025 | Prompt Engineering, Red Teaming | Unverified | 0 |
| Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs | May 7, 2025 | Red Teaming | Unverified | 0 |
| Red Teaming Large Language Models for Healthcare | May 1, 2025 | Language Modeling | Unverified | 0 |
| OET: Optimization-based prompt injection Evaluation Toolkit | May 1, 2025 | Adversarial Robustness, Natural Language Understanding | Code Available | 1 |
| When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines | Apr 29, 2025 | Red Teaming | Unverified | 0 |
| SAGE: A Generic Framework for LLM Safety Evaluation | Apr 28, 2025 | Red Teaming, Safety Alignment | Code Available | 0 |
| RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models | Apr 25, 2025 | RAG, Red Teaming | Unverified | 0 |
| Understanding and Mitigating Risks of Generative AI in Financial Services | Apr 25, 2025 | Fairness, Red Teaming | Unverified | 0 |
| RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search | Apr 21, 2025 | Diversity, Evolutionary Algorithms | Code Available | 1 |
| ELAB: Extensive LLM Alignment Benchmark in Persian Language | Apr 17, 2025 | Fairness, Red Teaming | Unverified | 0 |
| X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents | Apr 15, 2025 | Diversity, Red Teaming | Unverified | 0 |
| The Structural Safety Generalization Problem | Apr 13, 2025 | Red Teaming | Code Available | 0 |
| Multi-lingual Multi-turn Automated Red Teaming for LLMs | Apr 4, 2025 | Red Teaming | Unverified | 0 |
| Strategize Globally, Adapt Locally: A Multi-Turn Red Teaming Agent with Dual-Level Learning | Apr 2, 2025 | Red Teaming | Unverified | 0 |
| sudo rm -rf agentic_security | Mar 26, 2025 | Adversarial Attack, AI and Safety | Code Available | 1 |
| Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review | Mar 25, 2025 | Articles, Red Teaming | Unverified | 0 |