SOTAVerified

Safety Alignment

Papers

Showing 151–175 of 288 papers

| Title | Status | Hype |
| --- | --- | --- |
| Na'vi or Knave: Jailbreaking Language Models via Metaphorical Avatars | — | 0 |
| SafeWorld: Geo-Diverse Safety Alignment | Code | 0 |
| PrivAgent: Agentic-based Red-teaming for LLM Privacy Leakage | Code | 1 |
| Safety Alignment Backfires: Preventing the Re-emergence of Suppressed Concepts in Fine-tuned Text-to-Image Diffusion Models | — | 0 |
| PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | — | 0 |
| Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment | Code | 1 |
| Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models | — | 0 |
| Don't Command, Cultivate: An Exploratory Study of System-2 Alignment | Code | 0 |
| Ensuring Safety and Trust: Analyzing the Risks of Large Language Models in Medicine | — | 0 |
| PSA-VLM: Enhancing Vision-Language Model Safety through Progressive Concept-Bottleneck-Driven Alignment | — | 0 |
| Playing Language Game with LLMs Leads to Jailbreaking | — | 0 |
| Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models | — | 0 |
| Stochastic Monkeys at Play: Random Augmentations Cheaply Break LLM Safety Alignment | Code | 0 |
| Code-Switching Curriculum Learning for Multilingual Transfer in LLMs | — | 0 |
| Audio Is the Achilles' Heel: Red Teaming Audio Large Multimodal Models | Code | 0 |
| Smaller Large Language Models Can Do Moral Self-Correction | — | 0 |
| SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types | Code | 1 |
| Enhancing Safety in Reinforcement Learning with Human Feedback via Rectified Policy Optimization | Code | 0 |
| Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities | Code | 0 |
| Towards Understanding the Fragility of Multilingual LLMs against Fine-Tuning Attacks | — | 0 |
| Bayesian scaling laws for in-context learning | Code | 1 |
| BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Code | 0 |
| A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement | Code | 0 |
| SPIN: Self-Supervised Prompt INjection | — | 0 |
| Locking Down the Finetuned LLMs Safety | Code | 1 |
Page 7 of 12
