Privacy-Preserving Payment Splitting

2020 ◽  
Vol 2020 (2) ◽  
pp. 67-88
Author(s):  
Saba Eskandarian ◽  
Mihai Christodorescu ◽  
Payman Mohassel

Widely used payment splitting apps allow members of a group to keep track of debts between members by sending charges for expenses paid by one member on behalf of others. While offering a great deal of convenience, these apps gain access to sensitive data on users’ financial transactions. In this paper, we present a payment splitting app that hides all transaction data within a group from the service provider, provides privacy protections between users in a group, and provides integrity against malicious users or even a malicious server.

The core protocol proceeds in a series of rounds in which users either submit real data or cover traffic, and the server blindly updates balances, informs users of charges, and computes integrity checks on user-submitted data. Our protocol requires no cryptographic operations on the server, and after a group’s initial setup, the only cryptographic tool users need is AES.

We implement the payment splitting protocol as an Android app and the accompanying server. We find that, for realistic group sizes, it requires less than 50 milliseconds per round of computation on a user’s phone, and the server requires less than 300 microseconds per round for each group, meaning that our protocol enjoys excellent performance and scalability properties.
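The round structure lends itself to a compact illustration. Below is a minimal sketch, assuming a group-shared AES-GCM key from setup; the 16-byte message layout, the 0xFF dummy sentinel, and all function names are invented for illustration and are not the paper's wire format. The point is that every member submits a same-length ciphertext each round, so the server cannot distinguish real charges from cover traffic.

```python
# Illustrative sketch of one protocol round, not the authors' implementation.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MSG_LEN = 16  # fixed length so real charges and cover traffic look identical
assert struct.calcsize(">BBQ6x") == MSG_LEN

def encode_charge(payer: int, debtor: int, cents: int) -> bytes:
    return struct.pack(">BBQ6x", payer, debtor, cents)  # padded to MSG_LEN

def encode_dummy() -> bytes:
    return struct.pack(">BBQ6x", 0xFF, 0xFF, 0)  # sentinel: cover traffic

def submit(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def receive(key: bytes, blob: bytes):
    payer, debtor, cents = struct.unpack(
        ">BBQ6x", AESGCM(key).decrypt(blob[:12], blob[12:], None))
    return None if payer == 0xFF else (payer, debtor, cents)
```

Fixed-length messages are what make the cover traffic effective: a round in which nothing happened is indistinguishable, from the server's perspective, from a round carrying a real charge.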

2022 ◽  
Vol 12 (2) ◽  
pp. 842
Author(s):  
Junxin Huang ◽  
Yuchuan Luo ◽  
Ming Xu ◽  
Bowen Hu ◽  
Jian Long

Online ride-hailing (ORH) services allow people to enjoy on-demand transportation services through their mobile devices with short response times. Despite the great convenience, users need to submit their location information to the ORH service provider, which may incur unexpected privacy problems. In this paper, we mainly study the privacy and utility of the ride-sharing system, which enables multiple riders to share one driver. To solve the privacy problem and reduce detour waste in ride-sharing, we propose a privacy-preserving ride-sharing system named pShare. To hide users’ precise locations from the service provider, we apply a zone-based travel time estimation approach to privately compute over sensitive data while cloaking each rider’s location in a zone area. To compute the matching results along with the least-detouring route, the service provider first computes the shortest path for each eligible rider combination, then compares the additional traveling time (ATT) of all combinations, and finally selects the combination with minimum ATT. We design a secure comparison protocol utilizing the garbled circuit, which enables the ORH server to execute the protocol with a crypto server without privacy leakage. Moreover, we apply a data packing technique, by which multiple data items can be packed into one to reduce the communication and computation overhead. Through theoretical analysis and evaluation results, we show that pShare is a practical ride-sharing scheme that can find the sharing riders with minimum ATT at acceptable accuracy while protecting users’ privacy.
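To make the matching step concrete, here is a plaintext sketch of the ATT-minimizing selection over precomputed zone-based travel-time estimates. The ATT definition used here (shared-route time minus the longest solo time in the pair) and all names are assumptions for illustration; in pShare this comparison runs inside the garbled-circuit protocol between the ORH server and the crypto server, so neither party sees the underlying times.

```python
# Plaintext sketch of ATT-minimizing match selection; the real system
# performs the comparisons over garbled circuits, never on plaintexts.
from itertools import combinations

def best_match(riders, direct, shared):
    """direct[r]: a rider's solo travel time; shared[pair]: shortest shared
    route time. ATT here = shared time minus the longest solo time in the
    pair (an assumed definition for illustration)."""
    best, best_att = None, float("inf")
    for combo in combinations(riders, 2):  # eligible rider pairs
        att = shared[combo] - max(direct[r] for r in combo)
        if att < best_att:
            best, best_att = combo, att
    return best, best_att

# Example: three riders with precomputed zone-based estimates (minutes)
riders = ["A", "B", "C"]
direct = {"A": 20, "B": 25, "C": 18}
shared = {("A", "B"): 30, ("A", "C"): 26, ("B", "C"): 35}
print(best_match(riders, direct, shared))  # -> (('A', 'B'), 5)
```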


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Qi Dou ◽  
Tiffany Y. So ◽  
Meirui Jiang ◽  
Quande Liu ◽  
Varut Vardhanabhuti ◽  
...  

Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19 related CT abnormalities, with external validation on patients from a multinational study. We recruited 132 patients from seven centers in multiple countries, with three internal hospitals from Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden for hospitalized COVID-19 patients. We explore federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
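As a sketch of the underlying mechanism, the following is a generic federated-averaging (FedAvg) round in PyTorch, not the study's actual training code; the model, hospital data loaders, and hyperparameters are placeholders, and all parameters are assumed to be float tensors. The key property is that raw CT data never leaves a site; only model weights travel to the server for aggregation.

```python
# Generic FedAvg round: each site trains locally on its own data, and the
# server averages the resulting weights, weighted by local dataset size.
import copy
import torch

def fedavg_round(global_model, site_loaders, local_epochs=1, lr=1e-3):
    states, weights = [], []
    for loader in site_loaders:               # one loader per hospital
        local = copy.deepcopy(global_model)   # raw data stays on-site
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(local(x), y)
                loss.backward()
                opt.step()
        states.append(local.state_dict())
        weights.append(len(loader.dataset))
    total = sum(weights)
    avg = {k: sum(w / total * s[k].float() for w, s in zip(weights, states))
           for k in states[0]}                # size-weighted average
    global_model.load_state_dict(avg)
    return global_model
```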


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1367
Author(s):  
Raghida El Saj ◽  
Ehsan Sedgh Gooya ◽  
Ayman Alfalou ◽  
Mohamad Khalil

Privacy-preserving deep neural networks have become essential and have attracted the attention of many researchers due to the need to maintain the privacy and confidentiality of personal and sensitive data. The importance of privacy-preserving networks has increased with the widespread use of neural networks as a service in unsecured cloud environments. Different methods have been proposed and developed to solve the privacy-preserving problem using deep neural networks on encrypted data. In this article, we review some of the most relevant and well-known computational and perceptual image encryption methods. We present and compare these methods and their results, and discuss the conditions of their use and the durability and robustness of some of them against attacks. Some of the mentioned methods have demonstrated an ability to hide information and make it difficult for adversaries to retrieve while maintaining high classification accuracy. Based on the results obtained, we suggest developing and using some of the cited privacy-preserving methods in applications beyond classification.
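As a toy example of the perceptual-encryption family surveyed here, the sketch below implements keyed block scrambling only; real schemes in this literature typically add block rotation, negative-positive transforms, and channel shuffles, and the function names are our own.

```python
# Toy perceptual image encryption: permute fixed-size blocks under a secret
# key. The scrambled image hides content from a human viewer while keeping
# block-local statistics that a classifier can still learn from.
import numpy as np

def block_scramble(img: np.ndarray, block: int, key: int) -> np.ndarray:
    h, w = img.shape[:2]
    bh, bw = h // block, w // block
    blocks = [img[i*block:(i+1)*block, j*block:(j+1)*block]
              for i in range(bh) for j in range(bw)]
    perm = np.random.default_rng(key).permutation(len(blocks))  # keyed
    out = np.empty_like(img[:bh*block, :bw*block])
    for dst, src in enumerate(perm):
        i, j = divmod(dst, bw)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = blocks[src]
    return out
```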


2019 ◽  
Vol 140-141 ◽  
pp. 38-60 ◽  
Author(s):  
Josep Domingo-Ferrer ◽  
Oriol Farràs ◽  
Jordi Ribes-González ◽  
David Sánchez

2013 ◽  
Vol 2013 ◽  
pp. 1-5 ◽  
Author(s):  
Yi Sun ◽  
Qiaoyan Wen ◽  
Yudong Zhang ◽  
Hua Zhang ◽  
Zhengping Jin

As a powerful tool for solving privacy-preserving cooperative problems, secure multiparty computation is increasingly popular in electronic bidding, anonymous voting, and online auctions. The privacy-preserving sequencing problem, an essential building block, is regarded as the core issue in these applications. However, due to the difficulty of the multiparty privacy-preserving sequencing problem, related secure protocols are extremely rare. To break this deadlock, this paper presents an efficient secure multiparty computation protocol for the general privacy-preserving sequencing problem based on symmetric homomorphic encryption. The result is of value not only in theory, but also in practice.
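The protocol's goal can be illustrated by the standard reduction from sequencing to pairwise comparison, sketched below; `secure_less_than` is a plaintext stand-in for the paper's homomorphic sub-protocol, which would compute the comparison bit over ciphertexts rather than raw values.

```python
# Sketch of the reduction: each party's rank in the sequence equals the
# number of pairwise comparisons its value wins.
from itertools import combinations

def secure_less_than(a, b):
    # Placeholder: in the actual protocol this bit is derived from
    # homomorphically encrypted inputs, never from the plaintexts.
    return a < b

def sequence(values):
    n = len(values)
    rank = [0] * n
    for i, j in combinations(range(n), 2):
        if secure_less_than(values[j], values[i]):
            rank[i] += 1
        else:
            rank[j] += 1
    return rank  # rank[i] = position of party i's value, ascending

print(sequence([42, 7, 19]))  # -> [2, 0, 1]
```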


2018 ◽  
Vol 8 (2) ◽  
pp. 47-65 ◽  
Author(s):  
Ubaidullah Alias Kashif ◽  
Zulfiqar Ali Memon ◽  
Shafaq Siddiqui ◽  
Abdul Rasheed Balouch ◽  
Rakhi Batra

This article describes how the enormous potential benefits of cloud services have made enterprises show strong interest in adopting cloud computing. Because the service provider has control over all of an organization's data stored in the cloud, malicious activity, whether internal or external, can tamper with the data and computation. This makes enterprises reluctant to adopt such services due to privacy, security, and trust issues. Despite these issues, the consumer has no root-level access rights to secure and check the integrity of procured resources. To establish trust between the consumer and the provider, it is desirable to let the consumer check the procured platform hosted on the provider side for safety and security. This article proposes an architectural design of a trusted platform for IaaS cloud computing by means of which the consumer can check the integrity of a guest platform. The TCG's TPM is deployed as the core component of the proposed architecture, distributed between the service provider and the consumer.
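As a toy model of the kind of integrity check such an architecture enables, the sketch below mimics a TPM's hash-chained PCR extension and compares the result against known-good reference measurements. Real attestation additionally involves a TPM quote signed by an attestation key, which this sketch omits; all names and component strings are illustrative.

```python
# Toy measurement chain: each loaded component extends a PCR-like register,
# and the consumer compares the final value to a golden reference.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend: new = H(old || H(measurement)); order-dependent by design
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    pcr = bytes(32)  # register starts at zero
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

golden = measure_boot([b"bootloader-v2", b"hypervisor-v5", b"guest-image-v9"])
actual = measure_boot([b"bootloader-v2", b"hypervisor-v5", b"guest-image-v9"])
print("platform trusted" if actual == golden else "integrity check failed")
```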


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Hua Dai ◽  
Hui Ren ◽  
Zhiye Chen ◽  
Geng Yang ◽  
Xun Yi

Outsourcing data to clouds is adopted by more and more companies and individuals due to the benefits of data sharing and parallel, elastic, and on-demand computing. However, it forces data owners to lose control of their own data, which raises privacy-preserving problems for sensitive data. Sorting is a common operation in many areas, such as machine learning, service recommendation, and data query. It is a challenge to implement privacy-preserving sorting over encrypted data without leaking the privacy of sensitive data. In this paper, we propose privacy-preserving sorting algorithms based on the logistic map. Secure comparable codes are constructed by logistic map functions and can be used to compare the corresponding encrypted data items even without knowing their plaintext values. Data owners first encrypt their data and generate the corresponding comparable codes, then outsource them to clouds. Cloud servers can sort the outsourced encrypted data according to the corresponding comparable codes using the proposed privacy-preserving sorting algorithms. Security analysis and experimental results show that the proposed algorithms protect data privacy while providing efficient sorting on encrypted data.
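To illustrate the comparable-code idea, here is a toy order-preserving code that uses the logistic map x ← rx(1−x) only as a keyed source of positive increments; the paper's actual construction differs in detail, and the seed and parameters below are illustrative. Because every increment is positive, a larger plaintext sums more terms and always yields a larger code, so a server can sort by codes alone.

```python
# Toy comparable codes: strictly increasing in the plaintext, so ciphertexts
# can be sorted by code without revealing values. Not the paper's scheme.
def logistic_stream(x0: float, r: float = 3.99):
    x = x0
    while True:
        x = r * x * (1 - x)  # chaotic but confined to (0, 1) for x0 in (0, 1)
        yield x

def comparable_code(value: int, seed: float) -> float:
    gen = logistic_stream(seed)
    return sum(next(gen) for _ in range(value + 1))  # monotone in `value`

key = 0.61803  # shared secret seed held by the data owner
codes = {v: comparable_code(v, key) for v in (7, 19, 42)}
assert codes[7] < codes[19] < codes[42]  # order preserved
```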


2021 ◽  
Author(s):  
Jude TCHAYE-KONDI ◽  
Yanlong Zhai ◽  
Liehuang Zhu

We address privacy and latency issues in the edge/cloud computing environment while training a centralized AI model. In our particular case, the edge devices are the only data source for the model to train on the central server. Current solutions for preserving privacy and reducing network latency rely on a pre-trained feature extractor deployed on the devices to extract only important features from the sensitive dataset. However, finding a pre-trained model or a public dataset with which to build a feature extractor for certain tasks can be very challenging. With the large amount of data generated by edge devices, the edge environment does not really lack data, but improper access to it may lead to privacy concerns. In this paper, we present DeepGuess, a new privacy-preserving and latency-aware deep-learning framework. DeepGuess uses a new learning mechanism enabled by the AutoEncoder (AE) architecture, called inductive learning, which makes it possible to train a central neural network using the data produced by end devices while preserving their privacy. With inductive learning, sensitive data remains on devices and is not explicitly involved in any backpropagation process. The AE’s encoder is deployed on devices to extract and transfer important features to the server. To enhance privacy, we propose a new local differentially private algorithm that allows the edge devices to apply random noise to features extracted from their sensitive data before they are transferred to an untrusted server. The experimental evaluation of DeepGuess demonstrates its effectiveness and ability to converge on a series of experiments.
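The on-device step can be sketched as follows, assuming a trained encoder and per-feature Laplace noise under a clipping bound; epsilon, the bound, and the per-feature accounting are illustrative placeholders rather than DeepGuess's exact mechanism.

```python
# Sketch of the on-device privatization: encode, clip, add Laplace noise,
# and upload only the noisy features. Raw inputs never leave the device.
import torch

def privatize_features(encoder: torch.nn.Module, x: torch.Tensor,
                       epsilon: float = 1.0, clip: float = 1.0) -> torch.Tensor:
    with torch.no_grad():
        z = encoder(x)
        z = z.clamp(-clip, clip)        # bound each feature's sensitivity
        scale = 2 * clip / epsilon      # Laplace scale for that bound
        noise = torch.distributions.Laplace(0.0, scale).sample(z.shape)
        return z + noise                # only this tensor leaves the device
```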

