| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Ruby Teaming: Improving Quality Diversity Search with Memory for Automated Red Teaming | Jun 17, 2024 | Diversity, Red Teaming | Unverified | 0 |
| "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak | Jun 17, 2024 | Red Teaming | Code Available | 1 |
| STAR: SocioTechnical Approach to Red Teaming Language Models | Jun 17, 2024 | Red Teaming | Unverified | 0 |
| CELL your Model: Contrastive Explanations for Large Language Models | Jun 17, 2024 | Red Teaming, Text Generation | Unverified | 0 |
| garak: A Framework for Security Probing Large Language Models | Jun 16, 2024 | Red Teaming | Code Available | 9 |
| MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models | Jun 11, 2024 | Red Teaming | Code Available | 1 |
| Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt | Jun 6, 2024 | Language Modelling, Large Language Model | Code Available | 2 |
| Unelicitable Backdoors in Language Models via Cryptographic Transformer Circuits | Jun 3, 2024 | Red Teaming | Code Available | 1 |
| Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | May 31, 2024 | Red Teaming | Code Available | 2 |
| Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters | May 30, 2024 | Red Teaming | Unverified | 0 |
| DiveR-CT: Diversity-enhanced Red Teaming Large Language Model Assistants with Relaxing Constraints | May 29, 2024 | Diversity, Language Modeling | Code Available | 1 |
| Learning diverse attacks on large language models for robust red-teaming and safety tuning | May 28, 2024 | Diversity, Language Modeling | Code Available | 1 |
| ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users | May 24, 2024 | Diversity, Language Modeling | Code Available | 1 |
| Safety Alignment for Vision Language Models | May 22, 2024 | Red Teaming, Safety Alignment | Unverified | 0 |
| Tiny Refinements Elicit Resilience: Toward Efficient Prefix-Model Against LLM Red-Teaming | May 21, 2024 | Red Teaming | Unverified | 0 |
| Red Teaming Language Models for Processing Contradictory Dialogues | May 16, 2024 | Red Teaming | Code Available | 0 |
| Aloe: A Family of Fine-tuned Open Healthcare LLMs | May 3, 2024 | Prompt Engineering, Red Teaming | Code Available | 1 |
| Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo | Apr 26, 2024 | Language Modelling, Prompt Engineering | Code Available | 1 |
| Bias patterns in the application of LLMs for clinical decision support: A comprehensive study | Apr 23, 2024 | Decision Making, Question Answering | Code Available | 0 |
| A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI | Apr 23, 2024 | Prompt Engineering, Red Teaming | Unverified | 0 |
| AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | Apr 21, 2024 | MMLU, Red Teaming | Code Available | 2 |
| CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge | Apr 10, 2024 | Red Teaming | Unverified | 0 |
| ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Apr 6, 2024 | Adversarial Robustness, Dialogue Safety Prediction | Code Available | 2 |
| Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? | Apr 4, 2024 | Red Teaming | Code Available | 0 |
| Red-Teaming Segment Anything Model | Apr 2, 2024 | Image Segmentation | Code Available | 0 |
| Against The Achilles' Heel: A Survey on Red Teaming for Generative Models | Mar 31, 2024 | Red Teaming, Survey | Code Available | 2 |
| Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code | Mar 30, 2024 | Continual Pretraining, Language Modelling | Unverified | 0 |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | Mar 27, 2024 | Red Teaming | Unverified | 0 |
| HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | Mar 13, 2024 | Language Modelling, Large Language Model | Unverified | 0 |
| Distract Large Language Models for Automatic Jailbreak Attack | Mar 13, 2024 | Red Teaming | Code Available | 0 |
| Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI | Mar 12, 2024 | Hyperspectral Image Analysis, HYPERVIEW Challenge | Unverified | 0 |
| Defending Against Unforeseen Failure Modes with Latent Adversarial Training | Mar 8, 2024 | Image Classification | Code Available | 1 |
| Aligners: Decoupling LLMs and Alignment | Mar 7, 2024 | Instruction Following, Red Teaming | Code Available | 0 |
| A Safe Harbor for AI Evaluation and Red Teaming | Mar 7, 2024 | Red Teaming | Unverified | 0 |
| Curiosity-driven Red-teaming for Large Language Models | Feb 29, 2024 | Red Teaming, Reinforcement Learning (RL) | Code Available | 2 |
| AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning | Feb 21, 2024 | Graph Neural Network, Red Teaming | Unverified | 0 |
| Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation | Feb 14, 2024 | Image Generation, Red Teaming | Code Available | 1 |
| Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Feb 13, 2024 | Language Modelling, Large Language Model | Code Available | 2 |
| HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal | Feb 6, 2024 | Red Teaming | Code Available | 4 |
| Investigating Bias Representations in Llama 2 Chat via Activation Steering | Feb 1, 2024 | Decision Making, Red Teaming | Unverified | 0 |
| Gradient-Based Language Model Red Teaming | Jan 30, 2024 | Language Modelling | Code Available | 0 |
| Towards Red Teaming in Multimodal and Multilingual Translation | Jan 29, 2024 | Machine Translation, Red Teaming | Unverified | 0 |
| Red-Teaming for Generative AI: Silver Bullet or Security Theater? | Jan 29, 2024 | Red Teaming | Unverified | 0 |
| Digital cloning of online social networks for language-sensitive agent-based modeling of misinformation spread | Jan 23, 2024 | Misinformation, Red Teaming | Unverified | 0 |
| Red Teaming Visual Language Models | Jan 23, 2024 | Fairness, Red Teaming | Unverified | 0 |
| Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | Jan 19, 2024 | Model Editing, Red Teaming | Code Available | 0 |
| Red Teaming for Large Language Models At Scale: Tackling Hallucinations on Mathematics Tasks | Dec 30, 2023 | Red Teaming | Code Available | 0 |
| Causality Analysis for Evaluating the Security of Large Language Models | Dec 13, 2023 | Red Teaming | Code Available | 1 |
| AI Control: Improving Safety Despite Intentional Subversion | Dec 12, 2023 | Red Teaming | Code Available | 1 |
| Control Risk for Potential Misuse of Artificial Intelligence in Science | Dec 11, 2023 | Red Teaming | Code Available | 1 |