SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can steal and replicate a model whose performance closely matches that of the target.
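As a minimal sketch of the idea, the adversary queries the target as a black box, records its responses, and fits a surrogate to the (input, output) pairs. The example below is purely illustrative (not a method from any listed paper): it assumes a hypothetical linear target and recovers its hidden weights by least squares.

```python
import numpy as np

# Hypothetical target: a linear model whose weights the adversary cannot see.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def query_target(x):
    # Black-box API: returns only the model's output for input x.
    return x @ true_w

# Attack: query the target on chosen inputs, then fit a surrogate
# to the collected (query, response) pairs via least squares.
queries = rng.normal(size=(100, 5))
responses = np.array([query_target(x) for x in queries])
stolen_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The surrogate's parameters closely match the target's.
print(np.allclose(stolen_w, true_w, atol=1e-6))
```

Real attacks target nonlinear models and train a surrogate network on the query/response pairs, but the query-then-fit loop is the same.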

Papers

Showing 161–170 of 176 papers

Title | Status | Hype
SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0
On the Difficulty of Defending Self-Supervised Learning against Model Extraction | Code | 0
On the Effectiveness of Dataset Watermarking in Adversarial Settings | Code | 0
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data | Code | 0
Efficient and Effective Model Extraction | Code | 0
DAWN: Dynamic Adversarial Watermarking of Neural Networks | Code | 0
Defense Against Model Extraction Attacks on Recommender Systems | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
Your Semantic-Independent Watermark is Fragile: A Semantic Perturbation Attack against EaaS Watermark | Code | 0
Thieves on Sesame Street! Model Extraction of BERT-based APIs | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | — | Unverified