SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model through query access alone. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
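As a minimal illustration of the idea, the sketch below extracts a hypothetical black-box linear classifier: the adversary only calls a prediction API, labels random queries with it, and trains a surrogate (here with the perceptron rule) that agrees with the target on fresh inputs. The target, query budget, and training details are all illustrative assumptions, not any specific attack from the papers listed here.

```python
import random

# Hypothetical black-box target: a secret linear classifier that the
# adversary can query but never inspect directly.
SECRET_W = [2.0, -1.0]
SECRET_B = 0.5

def target_api(x):
    """Black-box prediction API: returns only a hard label (0 or 1)."""
    score = sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B
    return 1 if score > 0 else 0

def extract(n_queries=2000, epochs=20, lr=0.1, seed=0):
    """Label random queries via the API, then fit a surrogate classifier."""
    rng = random.Random(seed)
    # Each query to the victim yields one (input, label) training pair.
    data = [([rng.uniform(-3, 3), rng.uniform(-3, 3)],) for _ in range(n_queries)]
    data = [(x[0], target_api(x[0])) for x in data]
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # perceptron update on misclassified points
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def agreement(w, b, n=1000, seed=1):
    """Fraction of fresh random inputs where surrogate and target agree."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = [rng.uniform(-3, 3), rng.uniform(-3, 3)]
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        hits += (pred == target_api(x))
    return hits / n

w, b = extract()
print(f"surrogate agrees with target on {agreement(w, b):.0%} of fresh inputs")
```

Real attacks differ mainly in scale and target class (deep networks, APIs returning soft labels or explanations), but the query-then-train loop above is the common core.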

Papers

Showing 101–150 of 176 papers

Title | Status | Hype
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models | | 0
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters | | 0
Towards Automatically Extracting UML Class Diagrams from Natural Language Specifications | Code | 0
SEEK: model extraction attack against hybrid secure inference protocols | | 0
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | | 0
Revealing Secrets From Pre-trained Models | | 0
EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System | | 0
On the amplification of security and privacy risks by post-hoc explanations in machine learning models | | 0
A Framework for Understanding Model Extraction Attack and Defense | | 0
On the Difficulty of Defending Self-Supervised Learning against Model Extraction | Code | 0
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | | 0
Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions | Code | 0
Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption | | 0
On the Effectiveness of Dataset Watermarking in Adversarial Settings | Code | 0
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | | 0
Increasing the Cost of Model Extraction with Calibrated Proof of Work | | 0
Protecting Intellectual Property of Language Generation APIs with Lexical Watermark | Code | 0
Efficiently Learning One Hidden Layer ReLU Networks From Queries | | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | | 0
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories | | 0
Watermarking Graph Neural Networks based on Backdoor Attacks | | 0
Process Extraction from Text: Benchmarking the State of the Art and Paving the Way for Future Challenges | Code | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | | 0
HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | | 0
NASPY: Automated Extraction of Automated Machine Learning Models | | 0
A Novel Watermarking Framework for Ownership Verification of DNN Architectures | | 0
Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks | | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | | 0
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs | | 0
Power-Based Attacks on Spatial DNN Accelerators | | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | | 0
Stateful Detection of Model Extraction Attacks | Code | 0
HODA: Hardness-Oriented Detection of Model Extraction Attacks | | 0
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information | | 0
Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs | | 0
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network | | 0
A Review of Confidentiality Threats Against Embedded Neural Network Models | | 0
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models | | 0
Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack | | 0
Using Python for Model Inference in Deep Learning | | 0
BODAME: Bilevel Optimization for Defense Against Model Extraction | | 0
Model Extraction and Defenses on Generative Adversarial Networks | | 0
Grey-box Extraction of Natural Language Models | | 0
EXPLORING VULNERABILITIES OF BERT-BASED APIS | | 0
Sparsity-driven Digital Terrain Model Extraction | | 0
A Knowledge Representation Approach to Automated Mathematical Modelling | | 0
Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack | | 0
Leveraging Extracted Model Adversaries for Improved Black Box Attacks | | 0
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization | Code | 0
Model extraction from counterfactual explanations | Code | 0
Page 3 of 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified