SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can replicate a model whose performance closely matches that of the target.
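The attack described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: it assumes a hypothetical linear target model behind a black-box query interface, and trains a surrogate by least squares on adversary-chosen queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim: a linear classifier whose weights are secret to the adversary.
# (Illustrative assumption; real targets are arbitrary black boxes.)
w_secret = np.array([1.5, -2.0, 0.5])

def target_predict(X):
    # The only interface the adversary has: query in, label out.
    return (X @ w_secret > 0).astype(int)

# Attack step 1: label adversary-chosen random queries via the black box.
X_query = rng.normal(size=(5000, 3))
y_query = target_predict(X_query)

# Attack step 2: fit a surrogate to the (query, output) pairs.
# Here: least squares on {-1, +1} labels, a crude but serviceable choice.
w_hat, *_ = np.linalg.lstsq(X_query, 2.0 * y_query - 1.0, rcond=None)

# Evaluation: fidelity = agreement with the target on fresh inputs.
X_test = rng.normal(size=(2000, 3))
agree = ((X_test @ w_hat > 0).astype(int) == target_predict(X_test)).mean()
print(f"surrogate-target agreement: {agree:.3f}")
```

With enough queries the surrogate's decision boundary converges toward the target's, which is why fidelity (agreement with the target), rather than raw accuracy, is the usual success metric for extraction.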

Papers

Showing 61–70 of 176 papers

Title | Status | Hype
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | | 0
Beyond Labeling Oracles: What does it mean to steal ML models? | | 0
Efficient Model Extraction via Boundary Sampling | | 0
Efficiently Learning One Hidden Layer ReLU Networks From Queries | | 0
Better Decisions through the Right Causal World Model | | 0
Adversarial Exploitation of Policy Imitation | | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | | 0
Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics | | 0
Page 7 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified