| Title | Date | Topics | Code | Links |
|---|---|---|---|---|
| LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models | Apr 14, 2025 | Persuasion Strategies, Safety Alignment | Unverified | 0 |
| Do We Really Need Curated Malicious Data for Safety Alignment in Multi-modal Large Language Models? | Apr 14, 2025 | Safety Alignment | Unverified | 0 |
| AdaSteer: Your Aligned LLM is Inherently an Adaptive Jailbreak Defender | Apr 13, 2025 | Safety Alignment | Code Available | 1 |
| LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Apr 10, 2025 | Code Generation, Continual Learning | Code Available | 2 |
| SafeMLRM: Demystifying Safety in Multi-modal Large Reasoning Models | Apr 9, 2025 | Safety Alignment | Unverified | 0 |
| ERPO: Advancing Safety Alignment via Ex-Ante Reasoning Preference Optimization | Apr 3, 2025 | Safety Alignment | Unverified | 0 |
| More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | Apr 3, 2025 | ARC, HellaSwag | Unverified | 0 |
| STAR-1: Safer Alignment of Reasoning LLMs with 1K Data | Apr 2, 2025 | Diversity, Safety Alignment | Unverified | 0 |
| Effectively Controlling Reasoning Models through Thinking Intervention | Mar 31, 2025 | Instruction Following, Safety Alignment | Unverified | 0 |
| sudo rm -rf agentic_security | Mar 26, 2025 | Adversarial Attack, AI and Safety | Code Available | 1 |