SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters (or an equivalent copy) of a target model through query access. Ideally, the adversary replicates a model whose performance closely matches that of the target.
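The basic query-based attack can be sketched in a few lines: the adversary probes a black-box model, records only its hard-label answers, and fits a surrogate on that transcript. The snippet below is a minimal, hypothetical illustration against a linear target (all names such as `w_target` and `query` are illustrative, not from any of the papers listed here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret target model: a linear decision rule the adversary cannot inspect.
w_target = rng.normal(size=4)

def query(x):
    """Black-box API: returns only hard labels, never the weights."""
    return (x @ w_target > 0).astype(int)

# Adversary: label random probe inputs via the API, then fit a surrogate
# with plain gradient descent on the logistic loss.
X = rng.normal(size=(2000, 4))
y = query(X)

w = np.zeros(4)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))       # surrogate's predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(X)  # gradient step on logistic loss

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(1000, 4))
agreement = ((X_test @ w > 0).astype(int) == query(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2f}")
```

Real attacks on deep networks follow the same loop but must choose queries carefully (boundary sampling, data-free generators, and so on, as in several papers below); the defenses listed (watermarking, backdooring) try to make the stolen surrogate detectable or degraded.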

Papers

Showing 21–30 of 176 papers (page 3 of 18)

Titles ([Code] marks papers with released code):

- HoneypotNet: Backdoor Attacks Against Model Extraction
- Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors
- Few-shot Model Extraction Attacks against Sequential Recommender Systems
- A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks [Code]
- Your Semantic-Independent Watermark is Fragile: A Semantic Perturbation Attack against EaaS Watermark [Code]
- Robust and Minimally Invasive Watermarking for EaaS [Code]
- Efficient Model Extraction via Boundary Sampling
- Efficient and Effective Model Extraction [Code]
- CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble
- Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking

Benchmark Results

#  Model                Metric       Claimed  Verified  Status
1  three-step-original  Exact Match  0.17     —         Unverified