SOTAVerified

Safety Alignment

Papers

Showing 51–75 of 288 papers

Title | Status | Hype
Targeted Vaccine: Safety Alignment for Large Language Models against Harmful Fine-Tuning via Layer-wise Perturbation | Code | 1
AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation | Code | 1
SCANS: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering | Code | 1
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | Code | 1
Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning | Code | 1
Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models | Code | 1
Can Editing LLMs Inject Harm? | Code | 1
Course-Correction: Safety Alignment Using Synthetic Preferences | Code | 1
Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation | Code | 1
From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks | Code | 1
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset | Code | 1
SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models | Code | 1
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates | Code | 1
SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model | Code | 1
Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations | Code | 1
OR-Bench: An Over-Refusal Benchmark for Large Language Models | Code | 1
Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack | Code | 1
Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models | Code | 1
PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition | Code | 1
Don't Say No: Jailbreaking LLM by Suppressing Refusal | Code | 1
Uncovering Safety Risks of Large Language Models through Concept Activation Vector | Code | 1
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | Code | 1
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! | Code | 1
Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space | Code | 1
Page 3 of 12

No leaderboard results yet.