| Text-Diffusion Red-Teaming of Large Language Models: Unveiling Harmful Behaviors with Proximity Constraints | Jan 14, 2025 | Large Language Model, Red Teaming | Unverified | 0 |
| The Human Factor in AI Red Teaming: Perspectives from Social and Collaborative Computing | Jul 10, 2024 | Fairness, Red Teaming | Unverified | 0 |
| The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm | Jun 26, 2024 | Cross-Lingual Transfer, Red Teaming | Unverified | 0 |
| The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward | Aug 28, 2023 | Ethics, Philosophy | Unverified | 0 |
| Tiny Refinements Elicit Resilience: Toward Efficient Prefix-Model Against LLM Red-Teaming | May 21, 2024 | Red Teaming | Unverified | 0 |
| Towards medical AI misalignment: a preliminary study | May 22, 2025 | Red Teaming | Unverified | 0 |
| Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework | Nov 15, 2023 | Red Teaming | Unverified | 0 |
| Towards Red Teaming in Multimodal and Multilingual Translation | Jan 29, 2024 | Machine Translation, Red Teaming | Unverified | 0 |
| Towards Secure MLOps: Surveying Attacks, Mitigation Strategies, and Research Challenges | May 30, 2025 | Red Teaming | Unverified | 0 |
| Understanding and Mitigating Risks of Generative AI in Financial Services | Apr 25, 2025 | Fairness, Red Teaming | Unverified | 0 |
| VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | Oct 12, 2024 | Diversity, Hallucination | Unverified | 0 |
| When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines | Apr 29, 2025 | Red Teaming | Unverified | 0 |
| SeqAR: Jailbreak LLMs with Sequential Auto-Generated Characters | Jul 2, 2024 | Red Teaming, Safety Alignment | Code Available | 0 |
| RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming | Jun 4, 2025 | Red Teaming | Code Available | 0 |
| Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | Jan 19, 2024 | Model Editing, Red Teaming | Code Available | 0 |
| What Is Wrong with My Model? Identifying Systematic Problems with Semantic Data Slicing | Sep 14, 2024 | Red Teaming | Code Available | 0 |
| Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models | Aug 27, 2024 | Red Teaming, Transfer Learning | Code Available | 0 |
| Red Teaming for Large Language Models At Scale: Tackling Hallucinations on Mathematics Tasks | Dec 30, 2023 | Red Teaming | Code Available | 0 |
| Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Nov 15, 2023 | Red Teaming | Code Available | 0 |
| Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? | Apr 4, 2024 | Red Teaming | Code Available | 0 |
| RedDebate: Safer Responses through Multi-Agent Red Teaming Debates | Jun 4, 2025 | Red Teaming | Code Available | 0 |
| Red Teaming Language Models for Processing Contradictory Dialogues | May 16, 2024 | Red Teaming | Code Available | 0 |
| RabakBench: Scaling Human Annotations to Construct Localized Multilingual Safety Benchmarks for Low-Resource Languages | Jul 8, 2025 | Red Teaming | Code Available | 0 |
| Overriding Safety protections of Open-source Models | Sep 28, 2024 | Red Teaming, Safety Alignment | Code Available | 0 |
| Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents | Sep 5, 2022 | Red Teaming, Reinforcement Learning | Code Available | 0 |