| Title | Date | Tasks | Code | Count |
| --- | --- | --- | --- | --- |
| AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications | Nov 14, 2023 | Diversity, Red Teaming | Unverified | 0 |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | Nov 13, 2023 | Instruction Following, Red Teaming | Unverified | 0 |
| Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming | Nov 10, 2023 | Red Teaming | Unverified | 0 |
| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | Oct 31, 2023 | GPU, Red Teaming | Unverified | 0 |
| Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases | Oct 22, 2023 | Language Modelling | Code Available | 1 |
| Attack Prompt Generation for Red Teaming and Defending Large Language Models | Oct 19, 2023 | In-Context Learning, Red Teaming | Code Available | 1 |
| Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | Oct 17, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? | Oct 16, 2023 | Red Teaming | Code Available | 1 |
| Large Language Model Unlearning | Oct 14, 2023 | Language Modelling | Code Available | 1 |
| ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Oct 14, 2023 | Red Teaming | Code Available | 0 |