SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary replicates a model whose performance closely matches that of the target model.
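The core query-based extraction loop can be sketched as follows. This is a minimal illustration, not any specific paper's method: the `target_predict` function, the linear secret rule, and all parameters are hypothetical stand-ins for a black-box prediction API.

```python
import numpy as np

# Hypothetical black-box target: the attacker can only query it for labels.
def target_predict(x):
    # Secret decision rule, unknown to the attacker.
    w_secret = np.array([2.0, -1.0])
    return (x @ w_secret > 0).astype(int)

rng = np.random.default_rng(0)
queries = rng.normal(size=(500, 2))   # attacker-chosen query inputs
labels = target_predict(queries)      # labels harvested via the query API

# Train a surrogate model on the stolen input/label pairs
# (logistic regression fit by plain gradient descent).
w = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(queries @ w)))
    w -= 0.5 * (queries.T @ (p - labels)) / len(labels)

# Measure how often the stolen surrogate agrees with the target.
test_x = rng.normal(size=(200, 2))
agreement = np.mean((test_x @ w > 0).astype(int) == target_predict(test_x))
print(f"surrogate agrees with target on {agreement:.0%} of fresh queries")
```

Real attacks differ mainly in how queries are chosen (e.g. boundary sampling, data-free generators, as in several papers listed below) and in the surrogate architecture, but the query-then-fit structure is the same.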

Papers

Showing 51-60 of 176 papers

Title | Status | Hype
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | | 0
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | | 0
Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics | | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | | 0
A Framework for Double-Blind Federated Adaptation of Foundation Models | | 0
Efficient Model Extraction via Boundary Sampling | | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | | 0
Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | | 0
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | | 0
Page 6 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified