SOTAVerified

Red Teaming

Papers

Showing 1–50 of 251 papers

Title | Status | Hype
RabakBench: Scaling Human Annotations to Construct Localized Multilingual Safety Benchmarks for Low-Resource Languages | Code | 0
STACK: Adversarial Attacks on LLM Safeguard Pipelines | - | 0
We Should Identify and Mitigate Third-Party Safety Risks in MCP-Powered Agent Systems | Code | 0
Effective Red-Teaming of Policy-Adherent Agents | - | 0
GenBreak: Red Teaming Text-to-Image Generators Using Large Language Models | - | 0
Quality-Diversity Red-Teaming: Automated Generation of High-Quality and Diverse Attackers for Large Language Models | - | 0
RedDebate: Safer Responses through Multi-Agent Red Teaming Debates | Code | 0
RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming | Code | 0
BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage | Code | 0
Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act | - | 0
A Red Teaming Roadmap Towards System-Level Safety | - | 0
Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges | - | 0
A Reward-driven Automated Webshell Malicious-code Generator for Red-teaming | - | 0
TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis | Code | 0
SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models? | - | 0
CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring | - | 0
RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments | Code | 1
Red-Teaming Text-to-Image Systems by Rule-based Preference Modeling | - | 0
Capability-Based Scaling Laws for LLM Red-Teaming | Code | 0
GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization | - | 0
Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | - | 0
MTSA: Multi-turn Safety Alignment for LLMs through Multi-round Red-teaming | Code | 1
Towards medical AI misalignment: a preliminary study | - | 0
RRTL: Red Teaming Reasoning Large Language Models in Tool Learning | - | 0
Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities | Code | 0
EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | - | 0
"Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | - | 0
Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | - | 0
CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models | - | 0
LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | - | 0
Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety | Code | 0
Offensive Security for AI Systems: Concepts, Practices, and Applications | - | 0
AgentVigil: Generic Black-Box Red-teaming for Indirect Prompt Injection against LLM Agents | - | 0
Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods | - | 0
DMRL: Data- and Model-aware Reward Learning for Data Extraction | - | 0
Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs | - | 0
Red Teaming Large Language Models for Healthcare | - | 0
OET: Optimization-based prompt injection Evaluation Toolkit | Code | 1
When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines | - | 0
SAGE: A Generic Framework for LLM Safety Evaluation | Code | 0
RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models | - | 0
Understanding and Mitigating Risks of Generative AI in Financial Services | - | 0
RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search | Code | 1
ELAB: Extensive LLM Alignment Benchmark in Persian Language | - | 0
X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents | - | 0
The Structural Safety Generalization Problem | Code | 0
Multi-lingual Multi-turn Automated Red Teaming for LLMs | - | 0
Strategize Globally, Adapt Locally: A Multi-Turn Red Teaming Agent with Dual-Level Learning | - | 0
sudo rm -rf agentic_security | Code | 1
Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review | - | 0
Page 1 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SUDO | Attack Success Rate | 41 | - | Unverified