| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| Effective Red-Teaming of Policy-Adherent Agents | Jun 11, 2025 | Red Teaming | Unverified | 0 |
| Quality-Diversity Red-Teaming: Automated Generation of High-Quality and Diverse Attackers for Large Language Models | Jun 8, 2025 | Diversity, Red Teaming | Unverified | 0 |
| RedDebate: Safer Responses through Multi-Agent Red Teaming Debates | Jun 4, 2025 | Red Teaming | Code Available | 0 |
| RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming | Jun 4, 2025 | Red Teaming | Code Available | 0 |
| BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage | Jun 3, 2025 | Prompt Engineering, Red Teaming | Code Available | 0 |
| Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act | Jun 2, 2025 | Red Teaming | Unverified | 0 |
| A Reward-driven Automated Webshell Malicious-code Generator for Red-teaming | May 30, 2025 | Code Generation, Diversity | Unverified | 0 |
| A Red Teaming Roadmap Towards System-Level Safety | May 30, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges | May 30, 2025 | Red Teaming | Unverified | 0 |
| TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis | May 30, 2025 | Diversity, Language Modeling | Code Available | 0 |
| CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring | May 29, 2025 | Red Teaming | Unverified | 0 |
| SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models? | May 29, 2025 | Diagnostic, Red Teaming | Unverified | 0 |
| Red-Teaming Text-to-Image Systems by Rule-based Preference Modeling | May 27, 2025 | Red Teaming | Unverified | 0 |
| Capability-Based Scaling Laws for LLM Red-Teaming | May 26, 2025 | MMLU, Prompt Engineering | Code Available | 0 |
| GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization | May 25, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | May 24, 2025 | Intent Detection, Natural Language Understanding | Unverified | 0 |
| Towards medical AI misalignment: a preliminary study | May 22, 2025 | Red Teaming | Unverified | 0 |
| RRTL: Red Teaming Reasoning Large Language Models in Tool Learning | May 21, 2025 | Red Teaming | Unverified | 0 |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | May 20, 2025 | Image Generation, Red Teaming | Unverified | 0 |
| Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | May 20, 2025 | Contrastive Learning, Red Teaming | Unverified | 0 |
| EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | May 20, 2025 | Red Teaming | Unverified | 0 |
| Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities | May 20, 2025 | Red Teaming | Code Available | 0 |
| CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models | May 19, 2025 | Benchmarking, Red Teaming | Unverified | 0 |
| LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | May 16, 2025 | Red Teaming | Unverified | 0 |
| Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety | May 11, 2025 | Outlier Detection, Red Teaming | Code Available | 0 |