| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | Oct 31, 2023 | GPU, Red Teaming | Unverified | 0 |
| Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | Oct 17, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Oct 14, 2023 | Red Teaming | Code Available | 0 |
| Low-Resource Languages Jailbreak GPT-4 | Oct 3, 2023 | Red Teaming | Unverified | 0 |
| Can Language Models be Instructed to Protect Personal Information? | Oct 3, 2023 | Adversarial Robustness, Red Teaming | Unverified | 0 |
| No Offense Taken: Eliciting Offensiveness from Language Models | Oct 2, 2023 | Diversity, Red Teaming | Code Available | 0 |
| Red Teaming Generative AI/NLP, the BB84 Quantum Cryptography Protocol and the NIST-Approved Quantum-Resistant Cryptographic Algorithms | Sep 17, 2023 | Red Teaming | Unverified | 0 |
| The Promise and Peril of Artificial Intelligence: Violet Teaming Offers a Balanced Path Forward | Aug 28, 2023 | Ethics, Philosophy | Unverified | 0 |
| FLIRT: Feedback Loop In-context Red Teaming | Aug 8, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| Model Card and Evaluations for Claude Models | Jul 11, 2023 | Arithmetic Reasoning, Bug Fixing | Unverified | 0 |