SOTAVerified

Safety Alignment

Papers

Showing 201–250 of 288 papers

Title | Status | Hype
Finding Safety Neurons in Large Language Models | | 0
From Evaluation to Defense: Advancing Safety in Video Large Language Models | | 0
From Judgment to Interference: Early Stopping LLM Harmful Outputs via Streaming Content Monitoring | | 0
"Haet Bhasha aur Diskrimineshun": Phonetic Perturbations in Code-Mixed Hinglish to Red-Team LLMs | | 0
Internal Activation as the Polar Star for Steering Unsafe LLM Behavior | | 0
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations | | 0
Jailbreak Attacks and Defenses Against Large Language Models: A Survey | | 0
Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models | | 0
JBFuzz: Jailbreaking LLMs Efficiently and Effectively Using Fuzzing | | 0
JULI: Jailbreak Large Language Models by Self-Introspection | | 0
Just Enough Shifts: Mitigating Over-Refusal in Aligned Language Models with Targeted Representation Fine-Tuning | | 0
Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game | | 0
Llama-3.1-Sherkala-8B-Chat: An Open Large Language Model for Kazakh | | 0
LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models | | 0
LLM-Safety Evaluations Lack Robustness | | 0
LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper | | 0
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | | 0
LoRA-Guard: Parameter-Efficient Guardrail Adaptation for Content Moderation of Large Language Models | | 0
MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | | 0
Mimicking User Data: On Mitigating Fine-Tuning Risks in Closed Large Language Models | | 0
Model Card and Evaluations for Claude Models | | 0
Model-Editing-Based Jailbreak against Safety-aligned Large Language Models | | 0
Model Merging and Safety Alignment: One Bad Model Spoils the Bunch | | 0
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | | 0
Multilingual Blending: LLM Safety Alignment Evaluation with Language Mixture | | 0
Na'vi or Knave: Jailbreaking Language Models via Metaphorical Avatars | | 0
NeuRel-Attack: Neuron Relearning for Safety Disalignment in Large Language Models | | 0
No Free Lunch for Defending Against Prefilling Attack by In-Context Learning | | 0
Noise Injection Systemically Degrades Large Language Model Safety Guardrails | | 0
No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks | | 0
Off-Policy Risk Assessment in Markov Decision Processes | | 0
One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models | | 0
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | | 0
On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept | | 0
PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach | | 0
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | | 0
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference | | 0
Playing Language Game with LLMs Leads to Jailbreaking | | 0
PoisonSwarm: Universal Harmful Information Synthesis via Model Crowdsourcing | | 0
Probing the Safety Response Boundary of Large Language Models via Unsafe Decoding Path Generation | | 0
PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models | | 0
RealSafe-R1: Safety-Aligned DeepSeek-R1 without Compromising Reasoning Capability | | 0
Refining Positive and Toxic Samples for Dual Safety Self-Alignment of LLMs with Minimal Human Interventions | | 0
Refusal-Feature-guided Teacher for Safe Finetuning via Data Filtering and Alignment Distillation | | 0
Representation-based Reward Modeling for Efficient Safety Alignment of Large Language Model | | 0
Reshaping Representation Space to Balance the Safety and Over-rejection in Large Audio Language Models | | 0
Robustifying Safety-Aligned Large Language Models through Clean Data Curation | | 0
SafeArena: Evaluating the Safety of Autonomous Web Agents | | 0
SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models? | | 0
SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety | | 0
Page 5 of 6

No leaderboard results yet.