| Understanding and Enhancing the Transferability of Jailbreaking Attacks | Feb 5, 2025 | Intent Recognition, Red Teaming | Code Available | 1 | 5 |
| Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner | Jun 17, 2024 | Language Modeling | Code Available | 1 | 5 |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Oct 10, 2023 | Red Teaming | Code Available | 1 | 5 |
| Causality Analysis for Evaluating the Security of Large Language Models | Dec 13, 2023 | Red Teaming | Code Available | 1 | 5 |
| Learning diverse attacks on large language models for robust red-teaming and safety tuning | May 28, 2024 | Diversity, Language Modeling | Code Available | 1 | 5 |
| Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits | Jun 3, 2024 | Red Teaming | Code Available | 1 | 5 |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | May 24, 2024 | Diversity, Language Modeling | Code Available | 1 | 5 |
| Control Risk for Potential Misuse of Artificial Intelligence in Science | Dec 11, 2023 | Red Teaming | Code Available | 1 | 5 |
| Jailbreaking as a Reward Misspecification Problem | Jun 20, 2024 | Red Teaming | Code Available | 1 | 5 |
| CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference | Jun 25, 2024 | Language Modeling | Code Available | 1 | 5 |
| Red Teaming Language Models with Language Models | Feb 7, 2022 | Chatbot, Diversity | Code Available | 1 | 5 |
| Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training | Mar 24, 2025 | Diversity, Large Language Model | Code Available | 1 | 5 |
| Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction | Sep 25, 2024 | Diversity, Red Teaming | Code Available | 1 | 5 |
| Jailbroken: How Does LLM Safety Training Fail? | Jul 5, 2023 | Red Teaming | Code Available | 1 | 5 |
| "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak | Jun 17, 2024 | Red Teaming | Code Available | 1 | 5 |
| UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning | Feb 28, 2025 | Large Language Model, Red Teaming | Code Available | 1 | 5 |
| AI Control: Improving Safety Despite Intentional Subversion | Dec 12, 2023 | Red Teaming | Code Available | 1 | 5 |
| XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models | Aug 2, 2023 | Language Modeling, Red Teaming | Code Available | 1 | 5 |
| Defending Against Unforeseen Failure Modes with Latent Adversarial Training | Mar 8, 2024 | Image Classification | Code Available | 1 | 5 |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Oct 19, 2023 | In-Context Learning, Red Teaming | Code Available | 1 | 5 |
| Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts | Sep 12, 2023 | Red Teaming, Text-to-Image Generation | Code Available | 1 | 5 |
| Code-Switching Red-Teaming: LLM Evaluation for Safety and Multilingual Understanding | Jun 17, 2024 | Language Modeling | Code Available | 0 | 5 |
| ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Oct 14, 2023 | Red Teaming | Code Available | 0 | 5 |
| Steering Without Side Effects: Improving Post-Deployment Control of Language Models | Jun 21, 2024 | Red Teaming, TruthfulQA | Code Available | 0 | 5 |
| Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | Jan 19, 2024 | Model Editing, Red Teaming | Code Available | 0 | 5 |
| Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities | May 20, 2025 | Red Teaming | Code Available | 0 | 5 |
| SeqAR: Jailbreak LLMs with Sequential Auto-Generated Characters | Jul 2, 2024 | Red Teaming, Safety Alignment | Code Available | 0 | 5 |
| Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Nov 15, 2023 | Red Teaming | Code Available | 0 | 5 |
| Distract Large Language Models for Automatic Jailbreak Attack | Mar 13, 2024 | Red Teaming | Code Available | 0 | 5 |
| Capability-Based Scaling Laws for LLM Red-Teaming | May 26, 2025 | MMLU, Prompt Engineering | Code Available | 0 | 5 |
| An Auditing Test To Detect Behavioral Shift in Language Models | Oct 25, 2024 | Benchmarking, Change Detection | Code Available | 0 | 5 |
| RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts | Jan 29, 2025 | Chatbot, Red Teaming | Code Available | 0 | 5 |
| Red-Teaming Segment Anything Model | Apr 2, 2024 | Image Segmentation | Code Available | 0 | 5 |
| SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis | Oct 21, 2024 | LLM Jailbreak, Red Teaming | Code Available | 0 | 5 |
| SAGE: A Generic Framework for LLM Safety Evaluation | Apr 28, 2025 | Red Teaming, Safety Alignment | Code Available | 0 | 5 |
| Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? | Apr 4, 2024 | Red Teaming | Code Available | 0 | 5 |
| Bias patterns in the application of LLMs for clinical decision support: A comprehensive study | Apr 23, 2024 | Decision Making, Question Answering | Code Available | 0 | 5 |
| Red Teaming for Large Language Models At Scale: Tackling Hallucinations on Mathematics Tasks | Dec 30, 2023 | Red Teaming | Code Available | 0 | 5 |
| Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety | May 11, 2025 | Outlier Detection, Red Teaming | Code Available | 0 | 5 |
| RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming | Jun 4, 2025 | Red Teaming | Code Available | 0 | 5 |
| RedDebate: Safer Responses through Multi-Agent Red Teaming Debates | Jun 4, 2025 | Red Teaming | Code Available | 0 | 5 |
| Red Teaming Language Models for Processing Contradictory Dialogues | May 16, 2024 | Red Teaming | Code Available | 0 | 5 |
| BitBypass: A New Direction in Jailbreaking Aligned Large Language Models with Bitstream Camouflage | Jun 3, 2025 | Prompt Engineering, Red Teaming | Code Available | 0 | 5 |
| Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models | Aug 27, 2024 | Red Teaming, Transfer Learning | Code Available | 0 | 5 |
| Overriding Safety protections of Open-source Models | Sep 28, 2024 | Red Teaming, Safety Alignment | Code Available | 0 | 5 |
| Aligners: Decoupling LLMs and Alignment | Mar 7, 2024 | Instruction Following, Red Teaming | Code Available | 0 | 5 |
| BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Oct 17, 2024 | Red Teaming, Safety Alignment | Code Available | 0 | 5 |
| RabakBench: Scaling Human Annotations to Construct Localized Multilingual Safety Benchmarks for Low-Resource Languages | Jul 8, 2025 | Red Teaming | Code Available | 0 | 5 |
| Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM | Dec 10, 2024 | Red Teaming | Code Available | 0 | 5 |
| Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models | Oct 31, 2024 | Red Teaming, Safety Alignment | Code Available | 0 | 5 |