SOTAVerified

Confidential Prompting: Protecting User Prompts from Cloud LLM Providers

2024-09-27 · Code Available

In Gim, Caihua Li, Lin Zhong



Abstract

Our work tackles the challenge of securing user inputs in cloud-hosted large language model (LLM) serving while ensuring model confidentiality, output invariance, and compute efficiency. We introduce Secure Partitioned Decoding (SPD), which uses confidential computing to confine user prompts to a trusted execution environment (TEE), namely a confidential virtual machine (CVM), while allowing the service provider to generate tokens efficiently. We also introduce a novel cryptographic method, Prompt Obfuscation (PO), to ensure robustness against reconstruction attacks on SPD. We demonstrate that our approach preserves both prompt confidentiality and LLM serving efficiency. Our solution enables privacy-preserving cloud LLM serving that handles sensitive prompts, such as clinical records, financial data, and personal information.
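To make the partitioning idea concrete, here is a toy sketch of the trust split the abstract describes: a "CVM" object confines the user prompt, and an untrusted host runs the decoding loop while only ever receiving per-step values derived inside the trusted side. This is an illustrative simulation under assumed names (`ConfidentialVM`, `HostDecoder`), not the paper's actual SPD protocol or API; real SPD partitions transformer attention over the prompt's KV cache inside a hardware TEE.

```python
# Toy sketch of the SPD trust split (illustrative only; names are assumptions).
# The untrusted host drives decoding but never holds the prompt; it consumes
# per-step values produced inside the trusted side, standing in for attention
# computed over the prompt's private KV cache.
import hashlib


class ConfidentialVM:
    """Trusted side: holds the user prompt; never exposes it to the host."""

    def __init__(self, prompt: str):
        self._prompt = prompt  # confined to the TEE in the real system

    def prompt_contribution(self, step: int) -> int:
        # Stand-in for the prompt-dependent part of each decoding step:
        # a deterministic value the host can use without seeing the prompt.
        digest = hashlib.sha256(f"{self._prompt}:{step}".encode()).digest()
        return digest[0]


class HostDecoder:
    """Untrusted side: runs the public decoding loop over generated tokens."""

    def __init__(self, cvm: ConfidentialVM):
        self.cvm = cvm
        self.generated: list[int] = []

    def decode(self, steps: int) -> list[int]:
        for step in range(steps):
            private = self.cvm.prompt_contribution(step)  # from the TEE
            public = sum(self.generated) % 251            # host-visible state
            self.generated.append((private + public) % 251)
        return self.generated


tokens = HostDecoder(ConfidentialVM("clinical note: ...")).decode(4)
```

The host object only ever touches `prompt_contribution` outputs, mirroring the paper's goal of letting the provider generate tokens efficiently while the prompt itself stays inside the CVM.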
