SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters (or a functionally equivalent copy) of a target model, typically by querying it. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
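The query-based variant of this attack can be sketched in a few lines: the adversary sends inputs to a black-box prediction API, collects the returned labels, and trains a surrogate on those pairs. The snippet below is a minimal illustration, not any specific published attack; the secret linear "target" model, the query distribution, and the perceptron surrogate are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deployed model: a secret linear classifier. The adversary
# never sees SECRET_W, only the label outputs of query_target().
SECRET_W = np.array([1.5, -2.0])

def query_target(x):
    """Black-box API: returns hard labels only (no weights, no gradients)."""
    return (x @ SECRET_W > 0).astype(int)

# Step 1: sample query inputs from a distribution the adversary can access.
queries = rng.normal(size=(500, 2))
labels = query_target(queries)

# Step 2: fit a surrogate on the (query, label) pairs -- here a perceptron.
w = np.zeros(2)
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = int(x @ w > 0)
        w += (y - pred) * x  # standard perceptron update

# Step 3: measure agreement between surrogate and target on fresh inputs;
# high agreement means the stolen model replicates the target's behavior.
test = rng.normal(size=(1000, 2))
agreement = np.mean((test @ w > 0).astype(int) == query_target(test))
```

For a linearly separable target like this one, the surrogate typically agrees with the target on well over 90% of held-out queries, which is exactly the "very similar performance" outcome described above. Defenses surveyed in the papers below (watermarking, query-hardness detection) target different stages of this pipeline.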

Papers

Showing 111–120 of 176 papers

Title | Status | Hype
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories | — | 0
Watermarking Graph Neural Networks based on Backdoor Attacks | — | 0
Process Extraction from Text: Benchmarking the State of the Art and Paving the Way for Future Challenges | Code | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | — | 0
HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | — | 0
A Novel Watermarking Framework for Ownership Verification of DNN Architectures | — | 0
NASPY: Automated Extraction of Automated Machine Learning Models | — | 0
Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks | — | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | — | 0
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | — | Unverified