Model Extraction
Recently Published Documents


TOTAL DOCUMENTS: 332 (five years: 64)
H-INDEX: 18 (five years: 3)

Author(s): Mucong Gao, Chunfang Li, Rui Yang, Minyong Shi, Jintian Yang

Author(s): Karim Ben Alaya, Laszlo Czuni

2021
Author(s): Zhenrui Yue, Zhankui He, Huimin Zeng, Julian McAuley

Author(s): Xueluan Gong, Yanjiao Chen, Wenbin Yang, Guanghao Mei, Qian Wang

Cloud service providers such as Google, Amazon, and Alibaba now offer machine-learning-as-a-service (MLaaS) platforms that let clients access sophisticated cloud-based machine learning models via APIs. Unfortunately, the commercial value of these models makes them alluring targets for theft, and their strategic position in the IT infrastructure of many companies makes them an enticing springboard for further adversarial attacks. In this paper, we propose a novel and effective attack strategy, dubbed InverseNet, that steals the functionality of black-box cloud-based models with only a small number of queries. The crux of the innovation is that, unlike existing model extraction attacks that rely on public datasets or adversarial samples, InverseNet constructs inversed training samples to increase the similarity between the extracted substitute model and the victim model. Moreover, only a small set of data samples with high confidence scores (rather than an entire dataset) is used to reconstruct the inversed dataset, which substantially reduces the attack cost. Extensive experiments on three simulated victim models and Alibaba Cloud's commercially available API demonstrate that InverseNet yields a substitute model with significantly greater functional similarity to the victim than the current state-of-the-art attacks, at a substantially lower query budget.
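The abstract does not include the paper's implementation, but the pipeline it describes (query the black-box victim, keep only high-confidence outputs, invert those confidence vectors into synthetic training samples, then distill the victim's soft labels into a substitute) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the InverseNet authors' method: the helpers `query_victim` and `substitute`, the inversion objective, and all thresholds and hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn.functional as F


def invert(substitute, target_probs, input_shape, steps=200, lr=0.05):
    """Synthesize an input whose substitute output matches a high-confidence
    probability vector obtained from the victim. The substitute is white-box,
    so this optimization costs no API queries."""
    x = torch.randn(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = F.softmax(substitute(x), dim=1)
        F.mse_loss(probs, target_probs).backward()
        opt.step()
    return x.detach()


def extract(query_victim, substitute, seed_inputs, rounds=5, conf_thresh=0.9):
    """query_victim(x) -> probability vectors from the black-box API
    (hypothetical helper). Each round keeps only high-confidence victim
    outputs, inverts them into synthetic samples, and distills the victim's
    soft labels into the substitute."""
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    inputs = seed_inputs
    for _ in range(rounds):
        with torch.no_grad():
            victim_probs = query_victim(inputs)   # the only paid API queries
        keep = victim_probs.max(dim=1).values > conf_thresh
        if not keep.any():                        # nothing confident: stop early
            break
        inversed = torch.cat([
            invert(substitute, p.unsqueeze(0), inputs.shape[1:])
            for p in victim_probs[keep]
        ])
        for _ in range(10):                       # KL distillation on soft labels
            opt.zero_grad()
            log_p = F.log_softmax(substitute(inversed), dim=1)
            F.kl_div(log_p, victim_probs[keep], reduction="batchmean").backward()
            opt.step()
        inputs = inversed                         # query these next round
    return substitute
```

The query-efficiency claim in the abstract is visible in this sketch: the victim is queried only on the small seed set and the inversed samples, while the expensive optimization in `invert` runs entirely against the white-box substitute and therefore consumes no query budget.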


2021
Author(s): Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot