SOTAVerified

Safety Alignment

Papers

Showing 1–50 of 288 papers

Title | Status | Hype
The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Safety Analysis | Code | 3
Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey | Code | 3
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs | Code | 2
PandaGuard: Systematic Evaluation of LLM Safety against Jailbreaking Attacks | Code | 2
Think Twice Before You Act: Enhancing Agent Behavioral Safety with Thought Correction | Code | 2
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Code | 2
STAIR: Improving Safety Alignment with Introspective Reasoning | Code | 2
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | Code | 2
Derail Yourself: Multi-turn LLM Jailbreak Attack through Self-discovered Clues | Code | 2
Cross-Modality Safety Alignment | Code | 2
Safety Alignment Should Be Made More Than Just a Few Tokens Deep | Code | 2
How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States | Code | 2
AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs | Code | 2
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion | Code | 2
DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers | Code | 2
Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning | Code | 2
ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs | Code | 2
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models | Code | 2
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Code | 2
GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Code | 2
Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models | Code | 1
Probing the Robustness of Large Language Models Safety to Latent Perturbations | Code | 1
DAVSP: Safety Alignment for Large Vision-Language Models via Deep Aligned Visual Safety Prompt | Code | 1
Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models | Code | 1
RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards | Code | 1
Lifelong Safety Alignment for Language Models | Code | 1
MPO: Multilingual Safety Alignment via Reward Gap Optimization | Code | 1
MTSA: Multi-turn Safety Alignment for LLMs through Multi-round Red-teaming | Code | 1
Safety Subspaces are Not Distinct: A Fine-Tuning Case Study | Code | 1
Linear Control of Test Awareness Reveals Differential Compliance in Reasoning Models | Code | 1
AdaSteer: Your Aligned LLM is Inherently an Adaptive Jailbreak Defender | Code | 1
VPO: Aligning Text-to-Video Generation Models with Prompt Optimization | Code | 1
sudo rm -rf agentic_security | Code | 1
LookAhead Tuning: Safer Language Models via Partial Answer Previews | Code | 1
SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging | Code | 1
Improving LLM Safety Alignment with Dual-Objective Optimization | Code | 1
Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable | Code | 1
Steering Dialogue Dynamics for Robustness against Multi-turn Jailbreaking Attacks | Code | 1
Reasoning-Augmented Conversation for Multi-Turn Jailbreak Attacks on Large Language Models | Code | 1
X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Compromising Usability | Code | 1
QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language | Code | 1
Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions | Code | 1
Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation | Code | 1
xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking | Code | 1
Autonomous Microscopy Experiments through Large Language Model Agents | Code | 1
PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Code | 1
Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment | Code | 1
SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types | Code | 1
Bayesian scaling laws for in-context learning | Code | 1
Locking Down the Finetuned LLMs Safety | Code | 1