SOTAVerified

Red Teaming

Papers

Showing 201–250 of 251 papers

| Title | Status | Hype |
|---|---|---|
| CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge | | 0 |
| Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? | Code | 0 |
| Red-Teaming Segment Anything Model | Code | 0 |
| Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code | | 0 |
| IterAlign: Iterative Constitutional Alignment of Large Language Models | | 0 |
| HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | | 0 |
| Distract Large Language Models for Automatic Jailbreak Attack | Code | 0 |
| Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI | | 0 |
| A Safe Harbor for AI Evaluation and Red Teaming | | 0 |
| Aligners: Decoupling LLMs and Alignment | Code | 0 |
| AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning | | 0 |
| Investigating Bias Representations in Llama 2 Chat via Activation Steering | | 0 |
| Gradient-Based Language Model Red Teaming | Code | 0 |
| Red-Teaming for Generative AI: Silver Bullet or Security Theater? | | 0 |
| Towards Red Teaming in Multimodal and Multilingual Translation | | 0 |
| Red Teaming Visual Language Models | | 0 |
| Digital cloning of online social networks for language-sensitive agent-based modeling of misinformation spread | | 0 |
| Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models | Code | 0 |
| Red Teaming for Large Language Models At Scale: Tackling Hallucinations on Mathematics Tasks | Code | 0 |
| A Red Teaming Framework for Securing AI in Maritime Autonomous Systems | | 0 |
| DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions | | 0 |
| InfoPattern: Unveiling Information Propagation Patterns in Social Media | Code | 0 |
| JAB: Joint Adversarial Prompting and Belief Augmentation | | 0 |
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models | | 0 |
| Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework | | 0 |
| Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts | | 0 |
| Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | Code | 0 |
| AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications | | 0 |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | | 0 |
| Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming | | 0 |
| LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B | | 0 |
| Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models | | 0 |
| ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models | Code | 0 |
| Low-Resource Languages Jailbreak GPT-4 | | 0 |
| Can Language Models be Instructed to Protect Personal Information? | | 0 |
| No Offense Taken: Eliciting Offensiveness from Language Models | Code | 0 |
| Red Teaming Generative AI/NLP, the BB84 quantum cryptography protocol and the NIST-approved Quantum-Resistant Cryptographic Algorithms | | 0 |
| The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward | | 0 |
| FLIRT: Feedback Loop In-context Red Teaming | | 0 |
| Model Card and Evaluations for Claude Models | | 0 |
| Seeing Seeds Beyond Weeds: Green Teaming Generative AI for Beneficial Uses | | 0 |
| Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback | | 0 |
| Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity | | 0 |
| Can Large Language Models Change User Preference Adversarially? | | 0 |
| Red-Teaming the Stable Diffusion Safety Filter | | 0 |
| Red Teaming with Mind Reading: White-Box Adversarial Policies Against RL Agents | Code | 0 |
| CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models | | 0 |
| Automating Privilege Escalation with Deep Reinforcement Learning | | 0 |
| A Multi-Disciplinary Review of Knowledge Acquisition Methods: From Human to Autonomous Eliciting Agents | | 0 |
| Computational Red Teaming in a Sudoku Solving Context: Neural Network Based Skill Representation and Acquisition | | 0 |
Page 5 of 6

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SUDO | Attack Success Rate | 41 | | Unverified |