An Efficient Privacy-Preserving Outsourced Computation over Public Data

2017 ◽  
Vol 10 (5) ◽  
pp. 756-770 ◽  
Author(s):  
Ximeng Liu ◽  
Baodong Qin ◽  
Robert H. Deng ◽  
Yingjiu Li
Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 874
Author(s):  
Taehoon Kim ◽  
Jihoon Yang

There is a strong positive correlation between the development of deep learning and the amount of public data available. Not all data can be released in their raw form because of the risk to the privacy of the related individuals. The main objective of privacy-preserving data publication is to anonymize the data while maintaining their utility. In this paper, we propose a privacy-preserving semi-generative adversarial network (PPSGAN) that selectively adds noise to class-independent features of each image to enable the processed image to maintain its original class label. Our experiments on training classifiers with synthetic datasets anonymized with various methods confirm that PPSGAN shows better utility than other conventional methods, including blurring, noise-adding, filtering, and generation using GANs.
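The selective-noise idea can be illustrated outside the GAN setting. The sketch below is a minimal NumPy illustration that assumes per-feature class-relevance scores are already available (PPSGAN learns these adversarially inside the network; the `anonymize` function, threshold, and scores here are illustrative, not the paper's architecture):

```python
import numpy as np

def anonymize(features, class_relevance, noise_scale=1.0, threshold=0.5, rng=None):
    """Add Gaussian noise only to class-independent features.

    `class_relevance` holds per-feature scores in [0, 1]; features scoring
    below `threshold` are treated as class-independent and perturbed, so the
    label-carrying features survive anonymization.
    """
    rng = np.random.default_rng(rng)
    mask = class_relevance < threshold           # True where noise is allowed
    noise = rng.normal(0.0, noise_scale, size=features.shape)
    return features + noise * mask               # boolean mask zeroes out noise

# Toy example: 4 features, the last two carry the class signal.
x = np.array([0.2, 0.9, 0.5, 0.1])
relevance = np.array([0.1, 0.2, 0.8, 0.95])
x_anon = anonymize(x, relevance, noise_scale=0.5, rng=0)
```

Because only low-relevance features are perturbed, a classifier can still recover the original label from `x_anon`, which is the utility property the paper's experiments measure.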


2019 ◽  
Vol 9 (2) ◽  
pp. 79-98 ◽  
Author(s):  
Oladayo Olufemi Olakanmi ◽  
Adedamola Dada

In outsourced computation models, weak devices (clients) increasingly rely on remote servers (workers) for data storage and computation. However, many of these servers are hackable or untrustworthy, which makes their results questionable. Clients therefore need to validate the correctness of their outsourced computations and to ensure that servers learn nothing about them other than the outputs of the computation. In this work, an efficient privacy-preserving validation approach is developed that allows clients to store data and outsource computations to servers in a semi-honest model, such that the servers' results can be validated without re-executing the computation. The approach employs a morphism that lets the client efficiently prove the correctness of its outsourced computation without redoing the whole computation, and clients use traceable pseudonyms to enforce anonymity.
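The morphism-based check can be illustrated with a generic homomorphic spot check (this is not the authors' exact scheme, and the parameters are toy values): for a function satisfying f(a)·f(b) = f(a + b), the client splits its input into random shares and verifies the multiplicative relation, which is far cheaper than recomputing f itself.

```python
import secrets

# Toy parameters (illustrative only; real deployments use large safe primes).
P = 2**61 - 1          # Mersenne prime modulus
G = 3                  # generator

def server_eval(x):
    """Untrusted worker: computes f(x) = G^x mod P."""
    return pow(G, x, P)

def outsource_and_verify(x):
    """Client side: splits x into random shares x1 + x2 = x and checks the
    homomorphism f(x1) * f(x2) = f(x) without recomputing f itself."""
    x1 = secrets.randbelow(x - 1) + 1 if x > 1 else 0
    x2 = x - x1
    y, y1, y2 = server_eval(x), server_eval(x1), server_eval(x2)
    if (y1 * y2) % P != y:
        raise ValueError("server result failed verification")
    return y

result = outsource_and_verify(123456)
```

A semi-honest server passes the check; a server returning a wrong y (or inconsistent shares) is caught with high probability, since it cannot predict the random split.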


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Haoran Li ◽  
Li Xiong ◽  
Lucila Ohno-Machado ◽  
Xiaoqian Jiang

Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid differentially private support vector machine (SVM) model that uses public and private data together. The model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that use only public data, and (2) differentially private SVMs built from private data alone. Our method performed nearly as well as nonprivate SVMs trained on the private data.
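The differential-privacy ingredient can be sketched in isolation. The paper's hybrid model uses an RBF kernel over public and private data; the minimal NumPy sketch below shows only the standard output-perturbation mechanism on a linear SVM trained on private data, with illustrative parameters, assuming feature rows have norm at most 1 and labels in {-1, +1}:

```python
import numpy as np

def dp_linear_svm(X, y, epsilon, reg=1.0, lr=0.1, epochs=200, rng=None):
    """Differentially private linear SVM via output perturbation.

    Trains an L2-regularized hinge-loss classifier on the private data, then
    adds Laplace noise to the learned weights. The noise scale
    2 / (n * reg * epsilon) follows the usual output-perturbation sensitivity
    bound for regularized ERM (assumes ||x_i|| <= 1 and y_i in {-1, +1}).
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):                       # plain subgradient descent
        margins = y * (X @ w)
        grad = reg * w - (X * y[:, None])[margins < 1].sum(axis=0) / n
        w -= lr * grad
    scale = 2.0 / (n * reg * epsilon)             # Laplace noise scale
    return w + rng.laplace(0.0, scale, size=d)

# Toy linearly separable "private" data.
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(200, 2)) * 0.3
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w_priv = dp_linear_svm(X, y, epsilon=5.0, rng=1)
acc = np.mean(np.sign(X @ w_priv) == y)
```

The paper's contribution is then to combine such a privately trained model with a small open-consented public sample to reduce the accuracy cost of the added noise.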


Cybersecurity ◽  
2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Ximeng Liu ◽  
Robert H. Deng ◽  
Pengfei Wu ◽  
Yang Yang

2015 ◽  
Vol 20 (9) ◽  
pp. 3735-3744 ◽  
Author(s):  
Can Xiang ◽  
Chunming Tang ◽  
Yunlu Cai ◽  
Qiuxia Xu
