SOTAVerified

Red Teaming

Papers

Showing 51–100 of 251 papers

| Title | Status | Hype |
|---|---|---|
| Aloe: A Family of Fine-tuned Open Healthcare LLMs | Code | 1 |
| Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo | Code | 1 |
| Defending Against Unforeseen Failure Modes with Latent Adversarial Training | Code | 1 |
| Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation | Code | 1 |
| Causality Analysis for Evaluating the Security of Large Language Models | Code | 1 |
| AI Control: Improving Safety Despite Intentional Subversion | Code | 1 |
| Control Risk for Potential Misuse of Artificial Intelligence in Science | Code | 1 |
| Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment | Code | 1 |
| Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases | Code | 1 |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Code | 1 |
| Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? | Code | 1 |
| Large Language Model Unlearning | Code | 1 |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Code | 1 |
| Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts | Code | 1 |
| Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Code | 1 |
| XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models | Code | 1 |
| Jailbroken: How Does LLM Safety Training Fail? | Code | 1 |
| Explore, Establish, Exploit: Red Teaming Language Models from Scratch | Code | 1 |
| Red Teaming Language Model Detectors with Language Models | Code | 1 |
| Query-Efficient Black-Box Red Teaming via Bayesian Optimization | Code | 1 |
| Red Teaming Language Models with Language Models | Code | 1 |
| RabakBench: Scaling Human Annotations to Construct Localized Multilingual Safety Benchmarks for Low-Resource Languages | Code | 0 |
| STACK: Adversarial Attacks on LLM Safeguard Pipelines | | 0 |
| We Should Identify and Mitigate Third-Party Safety Risks in MCP-Powered Agent Systems | Code | 0 |
| GenBreak: Red Teaming Text-to-Image Generators Using Large Language Models | | 0 |
| Effective Red-Teaming of Policy-Adherent Agents | | 0 |
| Quality-Diversity Red-Teaming: Automated Generation of High-Quality and Diverse Attackers for Large Language Models | | 0 |
| RedDebate: Safer Responses through Multi-Agent Red Teaming Debates | Code | 0 |
| RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming | Code | 0 |
| BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage | Code | 0 |
| Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act | | 0 |
| A Reward-driven Automated Webshell Malicious-code Generator for Red-teaming | | 0 |
| A Red Teaming Roadmap Towards System-Level Safety | | 0 |
| Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges | | 0 |
| TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis | Code | 0 |
| CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring | | 0 |
| SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models? | | 0 |
| Red-Teaming Text-to-Image Systems by Rule-based Preference Modeling | | 0 |
| Capability-Based Scaling Laws for LLM Red-Teaming | Code | 0 |
| GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization | | 0 |
| Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation | | 0 |
| Towards medical AI misalignment: a preliminary study | | 0 |
| RRTL: Red Teaming Reasoning Large Language Models in Tool Learning | | 0 |
| "Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | | 0 |
| Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents | | 0 |
| EVA: Red-Teaming GUI Agents via Evolving Indirect Prompt Injection | | 0 |
| Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities | Code | 0 |
| CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models | | 0 |
| LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs | | 0 |
| Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety | Code | 0 |
Page 2 of 6

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SUDO | Attack Success Rate | 41 | | Unverified |