Privacy-Preserving Mechanisms for Multi-Label Image Recognition

2022 ◽  
Vol 16 (4) ◽  
pp. 1-21
Author(s):  
Honghui Xu ◽  
Zhipeng Cai ◽  
Wei Li

Multi-label image recognition is an indispensable building block of many real-world computer vision applications. However, a severe threat of privacy leakage in multi-label image recognition has been overlooked by existing studies. To fill this gap, two privacy-preserving models, Privacy-Preserving Multi-label Graph Convolutional Networks (P2-ML-GCN) and Robust P2-ML-GCN (RP2-ML-GCN), are developed in this article, in which a differential privacy mechanism is applied to the model’s outputs so as to defend against black-box attacks while avoiding large aggregated noise. In particular, a regularization term is added to the loss function of RP2-ML-GCN to increase the model’s prediction accuracy and robustness. A proper differential privacy mechanism is then designed to reduce the bias of the loss function in P2-ML-GCN and improve prediction accuracy. Moreover, we show analytically that a bounded global sensitivity mitigates the side effects of excessive noise and yields a performance improvement for multi-label image recognition in our models. Theoretical proof shows that both models guarantee differential privacy for the model’s outputs, weights, and input features while preserving model robustness. Finally, comprehensive experiments validate the advantages of the proposed models, including the implementation of differential privacy on the model’s outputs, the incorporation of the regularization term into the loss function, and the adoption of bounded global sensitivity for multi-label image recognition.
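The abstract's exact mechanism is not reproduced here, but the general idea it describes, output perturbation with a clipped (hence bounded) global sensitivity via the Laplace mechanism, can be sketched as follows; all function names and parameter values are illustrative, not the paper's:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_outputs(logits, clip=1.0, epsilon=1.0):
    """Clip each per-label logit to [-clip, clip], bounding the global
    sensitivity by 2*clip, then add Laplace noise calibrated to give
    epsilon-differential privacy on the released outputs."""
    sensitivity = 2.0 * clip
    scale = sensitivity / epsilon
    noisy = []
    for z in logits:
        z = max(-clip, min(clip, z))          # bounded global sensitivity
        noisy.append(z + laplace_noise(scale))
    return noisy

# Example: per-label scores for a 4-label image.
scores = [2.3, -0.7, 0.1, 5.0]
private_scores = privatize_outputs(scores, clip=1.0, epsilon=2.0)
print(private_scores)
```

Because noise is added only to the released outputs, the aggregated noise stays small compared with perturbing every gradient, which is the trade-off the abstract highlights.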

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Hanyi Wang ◽  
Kun He ◽  
Ben Niu ◽  
Lihua Yin ◽  
Fenghua Li

Group activities on social networks are growing rapidly with the spread of mobile devices and IoT terminals, creating a huge demand for group recommendation. However, group recommender systems face a serious risk of leaking users’ historical data and preferences. Existing solutions typically protect the historical data but ignore the privacy of preferences. In this paper, we design a privacy-preserving group recommendation scheme consisting of a personalized recommendation algorithm and a preference aggregation algorithm. By carefully introducing local differential privacy (LDP), the personalized recommendation algorithm protects each user’s historical data within a specific group. We also propose an Intra-group transfer Privacy-preserving Preference Aggregation algorithm (IntPPA), which protects each group member’s personal preferences against both untrusted servers and other users, and also defends against long-term observation attacks. We conduct several experiments comparing the privacy protection and utility of our scheme with closely related schemes. Results on two datasets demonstrate the utility and privacy of our scheme and further illustrate its advantages.
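The abstract does not spell out its LDP mechanism, but one standard choice for protecting binary interaction history on-device is per-bit randomized response; a minimal sketch (names and parameters illustrative, not the paper's algorithm) might look like:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it: the classic eps-LDP randomized response."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def perturb_history(history, epsilon=1.0):
    """Perturb each entry of a user's binary item-interaction vector
    locally, before anything leaves the device."""
    return [randomized_response(b, epsilon) for b in history]

user_history = [1, 0, 0, 1, 1, 0]   # 1 = user interacted with the item
reported = perturb_history(user_history, epsilon=2.0)
print(reported)
```

The server only ever sees the perturbed vector, so even an untrusted aggregator cannot recover any individual interaction with certainty, which is the property LDP provides here.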


2021 ◽  
Vol 17 (4) ◽  
pp. 1-30
Author(s):  
Qiben Yan ◽  
Jianzhi Lou ◽  
Mehmet C. Vuran ◽  
Suat Irmak

Precision agriculture has become a promising paradigm to transform modern agriculture. The recent revolution in big data and Internet-of-Things (IoT) provides unprecedented benefits including optimizing yield, minimizing environmental impact, and reducing cost. However, the mass collection of farm data in IoT applications raises serious concerns about potential privacy leakage that may harm the farmers’ welfare. In this work, we propose a novel scalable and private geo-distance evaluation system, called SPRIDE, to allow application servers to provide geographic-based services by computing the distances among sensors and farms privately. The servers determine the distances without learning any additional information about their locations. The key idea of SPRIDE is to perform efficient distance measurement and distance comparison on encrypted locations over a sphere by leveraging a homomorphic cryptosystem. To serve a large user base, we further propose SPRIDE+ with novel and practical performance enhancements based on pre-computation of cryptographic elements. Through extensive experiments using real-world datasets, we show SPRIDE+ achieves private distance evaluation on a large network of farms, attaining a more-than-3× runtime improvement over existing techniques. We further show SPRIDE+ can run on resource-constrained mobile devices, which offers a practical solution for privacy-preserving precision agriculture IoT applications.
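SPRIDE's actual protocol works over spherical distances; purely to illustrate the underlying homomorphic trick of evaluating a distance on encrypted coordinates, the sketch below substitutes planar squared distance and textbook Paillier encryption (toy key size, illustrative names, not the paper's protocol):

```python
import math
import secrets

# Textbook Paillier with g = n + 1 (toy key size, illustration only).
def keygen(bits=256):
    def prime(b):
        while True:
            p = secrets.randbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11, 13)):
                return p
    p = prime(bits // 2)
    q = prime(bits // 2)
    while q == p:
        q = prime(bits // 2)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    return (n,), (n, lam, pow(lam, -1, n))

def encrypt(pk, m):
    (n,) = pk
    r = secrets.randbelow(n - 1) + 1
    return (pow(1 + n, m % n, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    n, lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def add(pk, c1, c2):            # Enc(a) * Enc(b) = Enc(a + b)
    (n,) = pk
    return (c1 * c2) % (n * n)

def scale(pk, c, k):            # Enc(a) ^ k = Enc(k * a)
    (n,) = pk
    return pow(c, k % n, n * n)

pk, sk = keygen()
x1, y1 = 100, 200               # client's private location (grid units)
cx, cy = encrypt(pk, x1), encrypt(pk, y1)
cs = encrypt(pk, x1 * x1 + y1 * y1)   # client also encrypts x1^2 + y1^2

# Server knows a plaintext point (x2, y2) and computes Enc(d^2) blindly:
# d^2 = (x1^2 + y1^2) + (x2^2 + y2^2) - 2*x2*x1 - 2*y2*y1
x2, y2 = 103, 204
c = add(pk, cs, encrypt(pk, x2 * x2 + y2 * y2))
c = add(pk, c, scale(pk, cx, -2 * x2))
c = add(pk, c, scale(pk, cy, -2 * y2))
print(decrypt(sk, c))           # squared distance: 3^2 + 4^2 = 25
```

The server manipulates only ciphertexts, so it learns nothing about (x1, y1); only the key holder can decrypt the resulting distance. Pre-computing ciphertexts such as `encrypt(pk, x2*x2 + y2*y2)` ahead of time is the flavor of optimization SPRIDE+ applies at scale.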


Author(s):  
Dan Wang ◽  
Ju Ren ◽  
Zhibo Wang ◽  
Xiaoyi Pang ◽  
Yaoxue Zhang ◽  
...  

2021 ◽  
Vol 18 (11) ◽  
pp. 42-60
Author(s):  
Ting Bao ◽  
Lei Xu ◽  
Liehuang Zhu ◽  
Lihong Wang ◽  
Ruiguang Li ◽  
...  

Author(s):  
Shushu Liu ◽  
An Liu ◽  
Zhixu Li ◽  
Guanfeng Liu ◽  
Jiajie Xu ◽  
...  

Author(s):  
Cheng Huang ◽  
Rongxing Lu ◽  
Hui Zhu ◽  
Jun Shao ◽  
Abdulrahman Alamer ◽  
...  

2021 ◽  
Author(s):  
Jude TCHAYE-KONDI ◽  
Yanlong Zhai ◽  
Liehuang Zhu

<div>We address privacy and latency issues in the edge/cloud computing environment while training a centralized AI model. In our setting, the edge devices are the only data source for the model trained on the central server. Current solutions for preserving privacy and reducing network latency rely on a pre-trained feature extractor deployed on the devices to extract only important features from the sensitive dataset. However, finding a pre-trained model or a public dataset from which to build a feature extractor for certain tasks can be very challenging. With the large amount of data generated by edge devices, the edge environment does not lack data; rather, improper access to it raises privacy concerns. In this paper, we present DeepGuess, a new privacy-preserving and latency-aware deep learning framework. DeepGuess uses a new learning mechanism enabled by the AutoEncoder (AE) architecture, called Inductive Learning, which makes it possible to train a central neural network on data produced by end devices while preserving their privacy. With inductive learning, sensitive data remains on the devices and is not explicitly involved in any backpropagation process. The AE’s Encoder is deployed on the devices to extract and transfer important features to the server. To further enhance privacy, we propose a new local differentially private algorithm that lets the edge devices apply random noise to the features extracted from their sensitive data before they are transferred to an untrusted server. The experimental evaluation of DeepGuess demonstrates its effectiveness and its ability to converge across a series of experiments.</div>
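The device-side pipeline the abstract describes (encode, perturb, upload) can be sketched as follows; the one-layer "encoder" and the per-feature Laplace noise are stand-ins for the paper's AE Encoder and LDP algorithm, and privacy-budget composition across features is elided for brevity:

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def encode(sample, weights):
    """Stand-in for the AE's on-device Encoder: one linear layer
    followed by a sigmoid, mapping a raw record to features in (0, 1)."""
    feats = []
    for row in weights:
        z = sum(w * x for w, x in zip(row, sample))
        feats.append(1.0 / (1.0 + math.exp(-z)))
    return feats

def privatize_features(feats, epsilon):
    """Each sigmoid feature lies in (0, 1), so its range (and hence
    per-feature sensitivity) is bounded by 1; add Laplace(1/epsilon)
    noise locally before the features leave the device."""
    return [f + laplace(1.0 / epsilon) for f in feats]

sample = [0.4, -1.2, 0.9]                       # raw sensitive record
weights = [[0.5, -0.1, 0.3], [0.2, 0.4, -0.6]]  # illustrative encoder
upload = privatize_features(encode(sample, weights), epsilon=4.0)
print(upload)   # only noisy features ever reach the untrusted server
```

The raw `sample` never leaves the device and never enters the server's backpropagation directly; the server trains only on the noisy feature vectors, which is the separation inductive learning relies on.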


2019 ◽  
Vol 90 ◽  
pp. 158-174 ◽  
Author(s):  
Chunhui Piao ◽  
Yajuan Shi ◽  
Jiaqi Yan ◽  
Changyou Zhang ◽  
Liping Liu
