
Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary is able to steal and replicate a model whose performance closely matches that of the target model.
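As an illustration of the general idea (not the method of any specific paper listed below), the basic query-and-imitate loop can be sketched as follows. The `target_predict` oracle, the random query distribution, and the surrogate architecture are illustrative assumptions; real attacks typically use more carefully chosen query data and architectures.

```python
# Minimal sketch of a black-box model extraction attack, assuming the adversary
# can query the target model through a hypothetical `target_predict` function
# that returns predicted labels for a batch of inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_model(target_predict, input_dim, n_queries=10_000, seed=0):
    """Train a surrogate model on the target model's responses to synthetic queries."""
    rng = np.random.default_rng(seed)

    # 1. Generate query inputs (here: random noise; practical attacks often use
    #    natural or synthetically crafted data to improve fidelity per query).
    queries = rng.normal(size=(n_queries, input_dim))

    # 2. Label the queries by querying the target model (the "oracle").
    labels = target_predict(queries)

    # 3. Fit a surrogate model that imitates the target's input-output behavior.
    surrogate = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200)
    surrogate.fit(queries, labels)
    return surrogate
```

The surrogate can then be compared against the target on held-out inputs to measure how closely its predictions agree with the target's (often called fidelity).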

Papers

Showing 101–110 of 176 papers

Title | Status | Hype
Quantifying (Hyper) Parameter Leakage in Machine Learning | – | 0
QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ Machines | – | 0
QUEEN: Query Unlearning against Model Extraction | – | 0
Revealing Secrets From Pre-trained Models | – | 0
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack | – | 0
Security and Privacy Challenges in Deep Learning Models | – | 0
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models | – | 0
SEEK: model extraction attack against hybrid secure inference protocols | – | 0
Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption | – | 0
Stealing Deep Reinforcement Learning Models for Fun and Profit | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | – | Unverified