
Model extraction

Model extraction attacks, also known as model stealing attacks, attempt to recover the parameters of a target model, typically by querying it. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
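
To make the threat model concrete, below is a minimal sketch of a query-based extraction attack. It assumes only black-box access to a predict() interface; the query budget, surrogate architecture, and use of scikit-learn are illustrative assumptions, not taken from any paper listed here.

```python
# Minimal sketch of a label-only model extraction attack (illustrative
# assumptions throughout; not drawn from any specific paper on this page).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# --- Victim side: a "private" target model the attacker cannot inspect ---
X_priv, y_priv = make_classification(n_samples=2000, n_features=20, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
target.fit(X_priv, y_priv)

# --- Attacker side: query the black box on synthetic inputs ---
query_budget = 5000                       # assumed number of allowed API calls
X_query = rng.normal(size=(query_budget, X_priv.shape[1]))
y_query = target.predict(X_query)         # only hard labels are observed

# Train a surrogate ("stolen") model on the query transcript
surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=1)
surrogate.fit(X_query, y_query)

# Fidelity: how often the surrogate agrees with the target on fresh inputs
X_test = rng.normal(size=(1000, X_priv.shape[1]))
fidelity = accuracy_score(target.predict(X_test), surrogate.predict(X_test))
print(f"Agreement with target on random inputs: {fidelity:.2%}")
```

The key point the sketch illustrates is that the attacker never sees the target's weights or training data; agreement (fidelity) with the target on fresh inputs is the usual measure of how well the stolen copy replicates it.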

Papers

Showing 61–70 of 176 papers

Title | Status | Hype
Beyond Labeling Oracles: What does it mean to steal ML models? |  | 0
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation |  | 0
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training |  | 0
Safe and Robust Watermark Injection with a Single OoD Image | Code | 0
Business Process Text Sketch Automation Generation Using Large Language Model |  | 0
The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning | Code | 0
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models |  | 0
Data-Free Model Extraction Attacks in the Context of Object Detection |  | 0
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator |  | 0
Page 7 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 |  | Unverified