PP-VCA: A Privacy-Preserving and Verifiable Combinatorial Auction Mechanism

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Mingwu Zhang ◽  
Bingruolan Zhou

Combinatorial auctions can be employed in fields such as spectrum auctions, network routing, railroad segments, and energy auctions; they allow multiple goods to be sold simultaneously, any combination of goods to be bid on, and the maximum sum of combined bidding prices to be determined. However, in traditional combinatorial auction mechanisms, data concerning bidders’ prices and bundles might reveal sensitive information, such as personal preferences and competitive relations, since the winner determination problem must be resolved over exactly these sensitive data. To solve this issue, this paper presents a privacy-preserving and verifiable combinatorial auction protocol (PP-VCA) to protect bidders’ privacy and ensure the correct auction price in a secure manner, in which we design a one-way, monotonically increasing function that protects a bidder’s bid while enabling the auctioneer to pick out the largest bid without revealing any information about the bids. Moreover, we design and employ three subprotocols, namely a privacy-preserving winner determination protocol, a privacy-preserving scalar protocol, and a privacy-preserving verifiable payment determination protocol, to implement the combinatorial auction with bidder privacy and payment verifiability. The results of comprehensive experimental evaluations indicate that our proposed scheme provides better efficiency and flexibility across different data volumes in terms of the number of goods and bidders.
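
The bid-hiding idea can be illustrated with a toy monotone transform. This is only a sketch with assumed names and keys, not the paper's actual one-way construction: any strictly increasing mapping shared among bidders lets the auctioneer rank masked bids without seeing the underlying prices.

import secrets

# Toy sketch only: bidders apply a shared, strictly increasing affine transform
# so the auctioneer can rank the masked bids without seeing raw prices.
# k1 > 0 preserves the ordering; both keys are hidden from the auctioneer.
def make_transform():
    k1 = secrets.randbelow(10**6) + 1   # secret positive slope
    k2 = secrets.randbelow(10**6)       # secret offset
    return lambda bid: k1 * bid + k2

transform = make_transform()            # distributed among bidders only
masked = {"bidder_a": transform(120), "bidder_b": transform(95), "bidder_c": transform(130)}

# Monotonicity guarantees the largest masked value belongs to the highest bid.
winner = max(masked, key=masked.get)
print("winner:", winner)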

2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Qinghua Chen ◽  
Shengbao Zheng ◽  
Zhengqiu Weng

Mobile crowd sensing has become a very important paradigm for collecting sensing data from a large number of mobile nodes dispersed over a wide area. Although it provides a powerful means of sensing data collection, mobile nodes are subject to privacy leakage risks since the sensing data from a mobile node may contain sensitive information about that node, such as its physical locations. Therefore, it is essential for mobile crowd sensing to have a privacy-preserving scheme that protects the privacy of mobile nodes. A number of approaches have been proposed for preserving node privacy in mobile crowd sensing. Many of the existing approaches manipulate the sensing data so that attackers cannot obtain the privacy-sensitive data. The main drawback of these approaches is that the manipulated data have lower utility in real-world applications. In this paper, we propose an approach called P3 to preserve the privacy of the mobile nodes in a mobile crowd sensing system, leveraging node mobility. In essence, a mobile node determines a routing path that consists of a sequence of intermediate mobile nodes and then forwards the sensing data along this routing path. By using asymmetric encryption, it is ensured that a malicious node cannot identify the source node by tracing back along the path. With our approach, upper-layer applications are able to access the original sensing data from mobile nodes, while the privacy of the mobile nodes is not compromised. Our theoretical analysis shows that the proposed approach achieves a high level of privacy preservation. The simulation results also show that the proposed approach incurs only modest overhead.
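
The layered forwarding described above resembles onion routing. The following sketch assumes the PyNaCl library and hypothetical relay nodes, not the exact P3 packet format: the source wraps its payload once per intermediate node, so each relay can peel only its own layer.

from nacl.public import PrivateKey, SealedBox  # assumes the PyNaCl library

# Hypothetical relay nodes on the routing path and a sample sensing payload.
relay_keys = [PrivateKey.generate() for _ in range(3)]
payload = b"sensing reading: 23.5 C"

# Wrap from the last hop inward, so each relay can strip exactly one layer.
packet = payload
for key in reversed(relay_keys):
    packet = SealedBox(key.public_key).encrypt(packet)

# Each relay removes its own layer and forwards the remainder; no single
# relay can recover the innermost payload or trace it back to the source.
for key in relay_keys:
    packet = SealedBox(key).decrypt(packet)

assert packet == payload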


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yang Bai ◽  
Yu Li ◽  
Mingchuang Xie ◽  
Mingyu Fan

In recent years, machine learning approaches have been widely adopted for many applications, including classification. Machine learning models that deal with collected sensitive data are usually trained on a remote public cloud server, for instance, in a machine learning as a service (MLaaS) system. In this setting, users upload their local data and use the cloud's computation capability to train models, or users directly query models trained by the MLaaS provider. Unfortunately, recent works reveal that the curious server (which trains the model with users’ sensitive local data and is curious to learn information about individuals) and the malicious MLaaS user (who abuses queries to the MLaaS system) both pose privacy risks. The adversarial method, as one typical mitigation, has been studied in several recent works. However, most of them focus on preserving privacy against the malicious user; in other words, they commonly treat the data owner and the model provider as one role. Under this assumption, the privacy leakage risks from the curious server are neglected. Differential privacy methods can defend against privacy threats from both the curious server and the malicious MLaaS user by directly adding noise to the training data. Nonetheless, differential privacy heavily decreases the classification accuracy of the target model. In this work, we propose a generic privacy-preserving framework based on the adversarial method to defend against both the curious server and the malicious MLaaS user. The framework can be combined with several adversarial algorithms to generate adversarial examples directly from data owners’ original data. By doing so, sensitive information about the original data is hidden. We then explore the constraint conditions of this framework, which help us find the balance between privacy protection and model utility. The experimental results show that our defense framework with the AdvGAN method is effective against MIA and that our defense framework with the FGSM method can protect the sensitive data from direct content exposure attacks. In addition, our method achieves a better privacy-utility balance than the existing method.
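
As a concrete instance of the adversarial methods such a framework can plug in, the minimal FGSM sketch below perturbs a sample in the direction that increases the loss before it leaves the data owner's hands; model, x, and y are placeholders for the owner's classifier and one labeled sample, not names from the paper.

import torch
import torch.nn.functional as F

# Minimal FGSM sketch; model, x, and y stand in for the data owner's
# classifier and one labeled input (pixel values assumed in [0, 1]).
def fgsm_perturb(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the gradient, bounded by epsilon per pixel.
    return (x + epsilon * x.grad.sign()).detach().clamp(0.0, 1.0)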


2015 ◽  
Vol 26 (5) ◽  
pp. 1393-1404 ◽  
Author(s):  
He Huang ◽  
Xiang-Yang Li ◽  
Yu-e Sun ◽  
Hongli Xu ◽  
Liusheng Huang

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ruiqi Hou ◽  
Fei Tang ◽  
Shikai Liang ◽  
Guowei Ling

As a commonly used algorithm in data mining, clustering has been widely applied in many fields, such as machine learning, information retrieval, and pattern recognition. In reality, the data to be analyzed are often distributed across multiple parties. Moreover, the rapidly increasing data volume puts heavy computing pressure on data owners. Thus, data owners tend to outsource their data to cloud servers and obtain analysis results over the federated data. However, existing privacy-preserving outsourced k-means schemes cannot verify whether participants share consistent data. Considering scenarios with multiple data owners and sensitive information security in an outsourced environment, we propose a verifiable privacy-preserving federated k-means clustering scheme. In this article, cloud servers and participants perform the k-means clustering algorithm over encrypted data without exposing private data or intermediate results in any iteration. In particular, our scheme can verify the shares from participants when updating the cluster centers, based on secret sharing, hash functions, and blockchain, so that it can resist inconsistent-share attacks by malicious participants. Finally, security and experimental analyses show that our scheme protects private data and achieves high-accuracy clustering results.
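
The share-consistency idea can be sketched with additive secret sharing plus hash commitments. This is a simplified stand-in for the scheme, with the ledger and the k-means update steps omitted and all values hypothetical: commitments published before aggregation let the servers detect a participant who later submits an inconsistent share.

import hashlib
import random

PRIME = 2**61 - 1  # toy modulus for additive sharing

def share(value, n_parties):
    # Split a value into n additive shares modulo PRIME.
    parts = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

def commit(share_value):
    # Hash commitment published (e.g. on a ledger) before shares are used.
    return hashlib.sha256(str(share_value).encode()).hexdigest()

local_sum = 4217                     # hypothetical per-cluster partial sum
shares = share(local_sum, 3)
commitments = [commit(s) for s in shares]

# During center updates the servers reconstruct the sum and re-check each
# commitment; a swapped or altered share would fail the comparison.
assert sum(shares) % PRIME == local_sum
assert all(commit(s) == c for s, c in zip(shares, commitments))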


2020 ◽  
Vol 14 (2) ◽  
pp. 116-142
Author(s):  
Shelendra Kumar Jain ◽  
Nishtha Kesswani

Many emerging fields are adopting Internet of Things technologies to incorporate smartness into their respective areas. Several IoT-based application areas produce large volumes of real-time data. Data aggregated through sensor nodes may contain highly sensitive information. An effective and successful IoT system must protect sensitive data from being revealed to unauthorized persons. In this article, the authors present an efficient privacy-preserving mechanism called Internet of Things privacy (IoTp). The research simulates and analyzes the effectiveness of the proposed data aggregation and data access mechanism for a typical IoT system. The proposed IoTp scheme ensures privacy at the data collection, data storage, and data access phases of the IoT system. The authors have compared the proposed work with an existing model. Results show that the IoTp scheme is an efficient and lightweight mechanism for data collection and data access, suitable for resource-constrained IoT ecosystems.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Qi Dou ◽  
Tiffany Y. So ◽  
Meirui Jiang ◽  
Quande Liu ◽  
Varut Vardhanabhuti ◽  
...  

Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19-related CT abnormalities, with external validation on patients from a multinational study. We recruited 132 patients from seven different multinational centers, with three internal hospitals from Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden in hospitalized COVID-19 patients. We explore federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
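
A minimal federated-averaging sketch illustrates the general mechanism by which only model parameters, not patient CT images, leave each site; the paper's exact federated algorithm and model architecture are not reproduced here, and client_states is an assumed name for the per-hospital weight dictionaries.

import torch

# Minimal FedAvg-style sketch: the server averages client weights in
# proportion to local dataset sizes; raw images stay at the hospitals.
def federated_average(client_states, client_sizes):
    total = float(sum(client_sizes))
    averaged = {}
    for name in client_states[0]:
        averaged[name] = sum(
            state[name].float() * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return averaged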


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1367
Author(s):  
Raghida El Saj ◽  
Ehsan Sedgh Gooya ◽  
Ayman Alfalou ◽  
Mohamad Khalil

Privacy-preserving deep neural networks have become essential and have attracted the attention of many researchers due to the need to maintain the privacy and confidentiality of personal and sensitive data. The importance of privacy-preserving networks has increased with the widespread use of neural networks as a service in unsecured cloud environments. Different methods have been proposed and developed to solve the privacy-preserving problem using deep neural networks on encrypted data. In this article, we review some of the most relevant and well-known computational and perceptual image encryption methods. These methods and their results are presented and compared, and the conditions of their use, as well as the durability and robustness of some of them against attacks, are discussed. Some of the mentioned methods have demonstrated an ability to hide information and make it difficult for adversaries to retrieve, while maintaining high classification accuracy. Based on the obtained results, we suggest developing and using some of the cited privacy-preserving methods in applications other than classification.
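
One family of perceptual image encryption scrambles image blocks with a secret key before the image is sent for inference. The sketch below shows only this basic keyed block permutation under assumed parameters (NumPy, image dimensions divisible by the block size), not any specific scheme from the review; surveyed methods typically also transform pixels within each block.

import numpy as np

# Toy keyed block-scrambling sketch; assumes image height and width are
# multiples of `block`.
def scramble_blocks(image, block=8, key=1234):
    h, w = image.shape[:2]
    rows, cols = h // block, w // block
    blocks = [image[r*block:(r+1)*block, c*block:(c+1)*block]
              for r in range(rows) for c in range(cols)]
    order = np.random.default_rng(key).permutation(len(blocks))
    out = np.empty_like(image)
    for dst, src in enumerate(order):
        r, c = divmod(dst, cols)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blocks[src]
    return out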


2019 ◽  
Vol 140-141 ◽  
pp. 38-60 ◽  
Author(s):  
Josep Domingo-Ferrer ◽  
Oriol Farràs ◽  
Jordi Ribes-González ◽  
David Sánchez
