| Title | Date | Topics |
| --- | --- | --- |
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | Nov 16, 2023 | Backdoor Attack, Data Poisoning |
| OpenAI o1 System Card | Dec 21, 2024 | Management, Red Teaming |
| Can Language Models be Instructed to Protect Personal Information? | Oct 3, 2023 | Adversarial Robustness, Red Teaming |
| The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward | Aug 28, 2023 | Ethics, Philosophy |
| Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback | Mar 9, 2023 | Red Teaming |
| Phi-3 Safety Post-Training: Aligning Language Models with a "Break-Fix" Cycle | Jul 18, 2024 | Benchmarking, Language Modeling |
| Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models | Jan 14, 2025 | Red Teaming |
| POEX: Understanding and Mitigating Policy Executable Jailbreak Attacks against Embodied AI | Dec 21, 2024 | LLM Jailbreak, Red Teaming |
| Predictive Red Teaming: Breaking Policies Without Breaking Robots | Feb 10, 2025 | Imitation Learning, Red Teaming |
| Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models | Mar 3, 2025 | Red Teaming, Survey |
| Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems | Jul 22, 2024 | Contrastive Learning, Gender Prediction |
| Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming | Feb 22, 2025 | Diversity, In-Context Learning |
| Purple-teaming LLMs with Adversarial Defender Training | Jul 1, 2024 | Generative Adversarial Network, Red Teaming |
| Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models | Jan 3, 2025 | Red Teaming |
| Quality-Diversity Red-Teaming: Automated Generation of High-Quality and Diverse Attackers for Large Language Models | Jun 8, 2025 | Diversity, Red Teaming |
| AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration | Mar 20, 2025 | Red Teaming |
| When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines | Apr 29, 2025 | Red Teaming |
| RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models | Apr 25, 2025 | RAG, Red Teaming |
| Automating Privilege Escalation with Deep Reinforcement Learning | Oct 4, 2021 | BIG-bench Machine Learning, Deep Reinforcement Learning |
| Recent advancements in LLM Red-Teaming: Techniques, Defenses, and Ethical Considerations | Oct 9, 2024 | Language Modeling, Language Modelling |
| RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent | Jul 23, 2024 | Red Teaming |
| Tiny Refinements Elicit Resilience: Toward Efficient Prefix-Model Against LLM Red-Teaming | May 21, 2024 | Red Teaming |
| Automated Red Teaming with GOAT: the Generative Offensive Agent Tester | Oct 2, 2024 | Red Teaming |
| Towards medical AI misalignment: a preliminary study | May 22, 2025 | Red Teaming |
| Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code | Mar 30, 2024 | Continual Pretraining, Language Modelling |