SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model, typically through query access alone. Ideally, the adversary can replicate a model whose performance closely matches that of the target model.
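As a minimal sketch of the idea, the toy example below assumes a hypothetical linear soft-max target exposed only through a black-box prediction API (`query_target`, with hidden weights `W_secret` — both names are illustrative, not from any paper above). The adversary queries random inputs and fits a surrogate by least squares on the returned log-probabilities; for this simple target class, the surrogate's predicted labels match the target's almost everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a linear soft-max model the adversary can
# only query, never inspect.
W_secret = rng.normal(size=(4, 3))

def query_target(x):
    """Black-box prediction API: returns class probabilities only."""
    logits = x @ W_secret
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Extraction: issue random queries and record the responses.
X = rng.normal(size=(5000, 4))
y = query_target(X)

# For a linear target, least squares on the log-probabilities recovers
# the weights up to a per-input offset shared across classes, which
# leaves the predicted label (argmax) unchanged.
W_surrogate, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

# The stolen surrogate should agree with the target on unseen queries.
X_test = rng.normal(size=(1000, 4))
agree = (query_target(X_test).argmax(1)
         == (X_test @ W_surrogate).argmax(1)).mean()
print(f"label agreement: {agree:.3f}")
```

Real attacks against deep models replace the least-squares fit with training a surrogate network on the query/response pairs, and much of the literature below is about doing this with fewer queries or defending against it.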

Papers

Showing 51–75 of 176 papers

Title | Status | Hype
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | — | 0
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | — | 0
Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics | — | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | — | 0
A Framework for Double-Blind Federated Adaptation of Foundation Models | — | 0
Efficient Model Extraction via Boundary Sampling | — | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | — | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | — | 0
Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | — | 0
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | — | 0
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | — | 0
Evaluating Query Efficiency and Accuracy of Transfer Learning-based Model Extraction Attack in Federated Learning | — | 0
Business Process Text Sketch Automation Generation Using Large Language Model | — | 0
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters | — | 0
GradEscape: A Gradient-Based Evader Against AI-Generated Text Detectors | — | 0
FDINet: Protecting against DNN Model Extraction via Feature Distortion Index | — | 0
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models | — | 0
Bound Your Models! How to Make OWL an ASP Modeling Language | — | 0
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles | — | 0
Extraction of Complex DNN Models: Real Threat or Boogeyman? | — | 0
Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors | — | 0
Few-shot Model Extraction Attacks against Sequential Recommender Systems | — | 0
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | — | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | — | 0
A Novel Watermarking Framework for Ownership Verification of DNN Architectures | — | 0
Page 3 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | — | Unverified