| Title | Date | Tags | Code | Implementations |
|---|---|---|---|---|
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Aug 12, 2023 | Ethics, Red Teaming | Code Available | 2 |
| FLIRT: Feedback Loop In-context Red Teaming | Aug 8, 2023 | In-Context Learning, Red Teaming | Unverified | 0 |
| XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models | Aug 2, 2023 | Language Modelling, Red Teaming | Code Available | 1 |
| Model Card and Evaluations for Claude Models | Jul 11, 2023 | Arithmetic Reasoning, Bug Fixing | Unverified | 0 |
| Jailbroken: How Does LLM Safety Training Fail? | Jul 5, 2023 | Red Teaming | Code Available | 1 |
| Explore, Establish, Exploit: Red Teaming Language Models from Scratch | Jun 15, 2023 | Red Teaming | Code Available | 1 |
| Red Teaming Language Model Detectors with Language Models | May 31, 2023 | Adversarial Robustness, Language Modeling | Code Available | 1 |
| Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses | May 30, 2023 | Red Teaming | Unverified | 0 |
| Query-Efficient Black-Box Red Teaming via Bayesian Optimization | May 27, 2023 | Bayesian Optimization, Language Modeling | Code Available | 1 |
| Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback | Mar 9, 2023 | Red Teaming | Unverified | 0 |