SOTA Verified

Model extraction

Model extraction attacks, also known as model-stealing attacks, aim to recover the parameters (or a functional copy) of a target model through query access alone. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
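A minimal sketch of the idea, assuming a hypothetical black-box API (here a secret linear scorer standing in for a real deployed model): the adversary submits queries, records the responses, and trains a surrogate to mimic them. All names (`target_model`, `extract_linear`, `SECRET_W`) are illustrative, not from any paper listed below.

```python
import random

# Hypothetical black-box target: a secret linear scorer the adversary
# can query but not inspect (stand-in for a real ML prediction API).
SECRET_W = [2.0, -1.0, 0.5]

def target_model(x):
    """Black-box query interface: returns a score for input x."""
    return sum(w * xi for w, xi in zip(SECRET_W, x))

def extract_linear(query, dim, n_queries=100, lr=0.01, epochs=500):
    """Fit a surrogate linear model to the target's responses
    using plain SGD on squared error."""
    random.seed(0)
    # 1. Collect query/response pairs from the black box.
    xs = [[random.uniform(-1, 1) for _ in range(dim)]
          for _ in range(n_queries)]
    ys = [query(x) for x in xs]
    # 2. Train the surrogate to reproduce those responses.
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

stolen_w = extract_linear(target_model, dim=3)
```

With a noiseless linear target, the surrogate weights converge to the secret ones; real attacks face nonlinear models, rate limits, and truncated outputs (labels instead of scores), which the papers below address.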

Papers

Showing 126–150 of 176 papers

Title | Status | Hype
A framework for the extraction of Deep Neural Networks by leveraging public data | | 0
A Framework for Understanding Model Extraction Attack and Defense | | 0
A Knowledge Representation Approach to Automated Mathematical Modelling | | 0
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | | 0
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network | | 0
A Novel Watermarking Framework for Ownership Verification of DNN Architectures | | 0
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters | | 0
A Review of Confidentiality Threats Against Embedded Neural Network Models | | 0
A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments | | 0
A Survey on Event-based News Narrative Extraction | | 0
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | | 0
Automated Data-Driven Model Extraction and Validation of Inverter Dynamics with Grid Support Function | | 0
Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics | | 0
Better Decisions through the Right Causal World Model | | 0
Beyond Labeling Oracles: What does it mean to steal ML models? | | 0
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs | | 0
BODAME: Bilevel Optimization for Defense Against Model Extraction | | 0
Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors | | 0
Bound Your Models! How to Make OWL an ASP Modeling Language | | 0
Business Process Text Sketch Automation Generation Using Large Language Model | | 0
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | | 0
CopyQNN: Quantum Neural Network Extraction Attack under Varying Quantum Noise | | 0
Data-Free Model Extraction Attacks in the Context of Object Detection | | 0
Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI | | 0
DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks | | 0
Page 6 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified