| A Red Teaming Framework for Securing AI in Maritime Autonomous Systems | Dec 8, 2023 | Red Teaming | Unverified | 0 |
| Seamless: Multilingual Expressive and Streaming Speech Translation | Dec 8, 2023 | Automatic Speech Translation, Machine Translation | Code Available | 6 |
| DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions | Dec 7, 2023 | Code Generation, Red Teaming | Unverified | 0 |
| InfoPattern: Unveiling Information Propagation Patterns in Social Media | Nov 27, 2023 | Red Teaming, Stance Detection | Code Available | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | Nov 16, 2023 | Red Teaming | Unverified | 0 |
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | Nov 16, 2023 | Backdoor Attack, Data Poisoning | Unverified | 0 |
| Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Nov 15, 2023 | Red Teaming | Code Available | 0 |
| Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework | Nov 15, 2023 | Red Teaming | Unverified | 0 |
| Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment | Nov 15, 2023 | Red Teaming, Safety Alignment | Code Available | 1 |
| Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | Nov 15, 2023 | Adversarial Attack, Red Teaming | Unverified | 0 |
| AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications | Nov 14, 2023 | Diversity, Red Teaming | Unverified | 0 |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | Nov 13, 2023 | Instruction Following, Red Teaming | Unverified | 0 |
| Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming | Nov 10, 2023 | Red Teaming | Unverified | 0 |
| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | Oct 31, 2023 | GPU, Red Teaming | Unverified | 0 |
| Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases | Oct 22, 2023 | Language Modeling | Code Available | 1 |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Oct 19, 2023 | In-Context Learning, Red Teaming | Code Available | 1 |
| Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | Oct 17, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? | Oct 16, 2023 | Red Teaming | Code Available | 1 |
| Large Language Model Unlearning | Oct 14, 2023 | Language Modeling | Code Available | 1 |
| ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Oct 14, 2023 | Red Teaming | Code Available | 0 |
| Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Oct 10, 2023 | Red Teaming | Code Available | 1 |
| Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Oct 5, 2023 | Red Teaming, Safety Alignment | Code Available | 2 |
| Can Language Models be Instructed to Protect Personal Information? | Oct 3, 2023 | Adversarial Robustness, Red Teaming | Unverified | 0 |
| Low-Resource Languages Jailbreak GPT-4 | Oct 3, 2023 | Red Teaming | Unverified | 0 |
| No Offense Taken: Eliciting Offensiveness from Language Models | Oct 2, 2023 | Diversity, Red Teaming | Code Available | 0 |
| GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts | Sep 19, 2023 | Red Teaming | Code Available | 2 |
| Red Teaming Generative AI/NLP, the BB84 quantum cryptography protocol and the NIST-approved Quantum-Resistant Cryptographic Algorithms | Sep 17, 2023 | Red Teaming | Unverified | 0 |
| Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts | Sep 12, 2023 | Red Teaming, Text-to-Image Generation | Code Available | 1 |
| The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward | Aug 28, 2023 | Ethics, Philosophy | Unverified | 0 |
| Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Aug 18, 2023 | MMLU, Red Teaming | Code Available | 1 |
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Aug 12, 2023 | Ethics, Red Teaming | Code Available | 2 |
| FLIRT: Feedback Loop In-context Red Teaming | Aug 8, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models | Aug 2, 2023 | Language Modeling, Red Teaming | Code Available | 1 |
| Model Card and Evaluations for Claude Models | Jul 11, 2023 | Arithmetic Reasoning, Bug Fixing | Unverified | 0 |
| Jailbroken: How Does LLM Safety Training Fail? | Jul 5, 2023 | Red Teaming | Code Available | 1 |
| Explore, Establish, Exploit: Red Teaming Language Models from Scratch | Jun 15, 2023 | Red Teaming | Code Available | 1 |
| Red Teaming Language Model Detectors with Language Models | May 31, 2023 | Adversarial Robustness, Language Modeling | Code Available | 1 |
| Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses | May 30, 2023 | Red Teaming | Unverified | 0 |
| Query-Efficient Black-Box Red Teaming via Bayesian Optimization | May 27, 2023 | Bayesian Optimization, Language Modeling | Code Available | 1 |
| Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback | Mar 9, 2023 | Red Teaming | Unverified | 0 |
| Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity | Jan 30, 2023 | Ethics, Language Modeling | Unverified | 0 |
| Can Large Language Models Change User Preference Adversarially? | Jan 5, 2023 | Red Teaming | Unverified | 0 |
| Red-Teaming the Stable Diffusion Safety Filter | Oct 3, 2022 | Image Generation, Red Teaming | Unverified | 0 |
| Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents | Sep 5, 2022 | Red Teaming, Reinforcement Learning | Code Available | 0 |
| Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned | Aug 23, 2022 | Language Modeling, Red Teaming | Code Available | 3 |
| CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models | Aug 16, 2022 | Red Teaming | Unverified | 0 |
| Red Teaming Language Models with Language Models | Feb 7, 2022 | Chatbot, Diversity | Code Available | 1 |
| Automating Privilege Escalation with Deep Reinforcement Learning | Oct 4, 2021 | BIG-bench Machine Learning, Deep Reinforcement Learning | Unverified | 0 |
| Computational Red Teaming in a Sudoku Solving Context: Neural Network Based Skill Representation and Acquisition | Feb 27, 2018 | Red Teaming | Unverified | 0 |
| A Multi-Disciplinary Review of Knowledge Acquisition Methods: From Human to Autonomous Eliciting Agents | Feb 27, 2018 | General Classification, Red Teaming | Unverified | 0 |