SOTAVerified

Safety Alignment

Papers

Showing papers 76–100 of 288 (page 4 of 12)

Title | Status | Hype
MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance | Code | 1
Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment | Code | 1
FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts | Code | 1
SuperHF: Supervised Iterative Learning from Human Feedback | Code | 1
AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models | Code | 1
Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization | Code | 1
All Languages Matter: On the Multilingual Safety of Large Language Models | Code | 1
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench | Code | 1
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment | Code | 1
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset | Code | 1
TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data | - | 0
Trojan Horse Prompting: Jailbreaking Conversational Multimodal Models by Forging Assistant Message | - | 0
Just Enough Shifts: Mitigating Over-Refusal in Aligned Language Models with Targeted Representation Fine-Tuning | - | 0
Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks | - | 0
Safe Pruning LoRA: Robust Distance-Guided Pruning for Safety Alignment in Adaptation of LLMs | Code | 0
SAFEx: Analyzing Vulnerabilities of MoE-Based LLMs via Stable Safety-critical Expert Identification | - | 0
Don't Make It Up: Preserving Ignorance Awareness in LLM Fine-Tuning | - | 0
Mitigating Safety Fallback in Editing-based Backdoor Injection on LLMs | Code | 0
SecurityLingua: Efficient Defense of LLM Jailbreak Attacks via Security-Aware Prompt Compression | - | 0
Monitoring Decomposition Attacks in LLMs with Lightweight Sequential Monitors | Code | 0
From Judgment to Interference: Early Stopping LLM Harmful Outputs via Streaming Content Monitoring | - | 0
AdversariaL attacK sAfety aLIgnment (ALKALI): Safeguarding LLMs through GRACE: Geometric Representation-Aware Contrastive Enhancement - Introducing Adversarial Vulnerability Quality Index (AVQI) | - | 0
Refusal-Feature-guided Teacher for Safe Finetuning via Data Filtering and Alignment Distillation | - | 0
From Threat to Tool: Leveraging Refusal-Aware Injection Attacks for Safety Alignment | - | 0
Why LLM Safety Guardrails Collapse After Fine-tuning: A Similarity Analysis Between Alignment and Fine-tuning Datasets | - | 0
