Efran (O): "Efficient Scalar Homomorphic Scheme on MapReduce for Data Privacy Preserving"

Author(s):  
Konan Martin ◽  
Wenyong Wang ◽  
Brighter Agyemang

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Qi Dou ◽  
Tiffany Y. So ◽  
Meirui Jiang ◽  
Quande Liu ◽  
Varut Vardhanabhuti ◽  
...  

Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19 related CT abnormalities with external validation on patients from a multinational study. We recruited 132 patients from seven different multinational centers, with three internal hospitals from Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden for hospitalized COVID-19 patients. We explored federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
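The aggregation step at the heart of such a federated approach can be sketched with the standard FedAvg rule: each site trains locally and the server averages parameters weighted by local dataset size. This is a minimal illustrative sketch, not the authors' actual training pipeline.

```python
def fed_avg(site_weights, site_sizes):
    """Average per-parameter model weights across sites, weighted by local dataset size.

    site_weights: list of per-site parameter vectors (same length each)
    site_sizes:   number of training samples contributed by each site
    """
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [sum(w[i] * s for w, s in zip(site_weights, site_sizes)) / total
            for i in range(n_params)]
```

Only the averaged parameters leave each hospital; the raw CT scans never do, which is what makes the scheme privacy-preserving.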



2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Hua Dai ◽  
Hui Ren ◽  
Zhiye Chen ◽  
Geng Yang ◽  
Xun Yi

Outsourcing data to clouds is adopted by more and more companies and individuals because of the benefits of data sharing and parallel, elastic, and on-demand computing. However, it forces data owners to lose control of their own data, which raises privacy-preserving problems for sensitive data. Sorting is a common operation in many areas, such as machine learning, service recommendation, and data query. It is a challenge to implement privacy-preserving sorting over encrypted data without leaking the privacy of sensitive data. In this paper, we propose privacy-preserving sorting algorithms based on the logistic map. Secure comparable codes are constructed by logistic map functions, which can be used to compare the corresponding encrypted data items even without knowing their plaintext values. Data owners first encrypt their data, generate the corresponding comparable codes, and then outsource them to clouds. Cloud servers can sort the outsourced encrypted data according to their corresponding comparable codes using the proposed privacy-preserving sorting algorithms. Security analysis and experimental results show that the proposed algorithms can protect data privacy while providing efficient sorting on encrypted data.
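The core idea of a comparable code is a keyed, strictly order-preserving encoding: the server compares codes, never plaintexts. The toy sketch below is a hypothetical illustration (not the paper's construction) that derives strictly positive increments from a logistic-map orbit seeded by a secret key, so codes preserve order while the mapping depends on the key.

```python
def comparable_code(value, key_seed):
    """Toy keyed order-preserving code for small non-negative integers.

    Accumulates strictly positive increments drawn from a logistic-map orbit
    seeded by the secret key, so code(a) < code(b) whenever a < b.
    """
    x, code = key_seed, 0.0
    for _ in range(value + 1):
        x = 3.99 * x * (1 - x)   # logistic map; orbit stays within (0, 1)
        code += x + 0.01         # each increment is strictly positive
    return code
```

A cloud server holding only the codes can sort records correctly without learning the underlying values, though a real scheme must also resist frequency and ordering attacks, which this toy does not address.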



2015 ◽  
pp. 426-458 ◽  
Author(s):  
S. R. Murugaiyan ◽  
D. Chandramohan ◽  
T. Vengattaraman ◽  
P. Dhavachelvan

This work focuses on a critical issue in cloud storage services: the handling of users' private information and its confidentiality. Preserving user data privacy is a vital facet of online storage in cloud computing. Information held in cloud storage is under constant threat from unknown, malicious access attempts, which can lead to serious breaches of user privacy. Privacy preservation is therefore an active research field in contemporary information technology development. Preserving User Data in Cloud Service (PUDCS) addresses breaches in which intruders penetrate highly confidential digital storage areas and profit by misappropriating others' information. This paper focuses on protecting confidential digital data using the proposed privacy-preserving framework. It also describes the protection of stored data, the detection and de-identification of unauthorized user attempts, and log monitoring maintained in the cloud for reference by providers and users.



2014 ◽  
Vol 25 (3) ◽  
pp. 48-71 ◽  
Author(s):  
Stepan Kozak ◽  
David Novak ◽  
Pavel Zezula

The general trend in data management is to outsource data to third-party systems that provide data retrieval as a service. This approach naturally brings privacy concerns about the (potentially sensitive) data. Recently, quite extensive research has been done on privacy-preserving outsourcing of traditional exact-match and keyword search. However, not much attention has been paid to outsourcing of similarity search, which is essential in content-based retrieval over current multimedia, sensor, or scientific data. In this paper, the authors propose a scheme for outsourcing similarity search. They define evaluation criteria for these systems with an emphasis on usability, privacy, and efficiency in real applications. These criteria can be used as a general guideline for practical system analysis, and the authors use them to survey and compare existing approaches. As the main result, the authors propose a novel dynamic similarity index, EM-Index, that works for an arbitrary metric space and ensures data privacy, making it suitable for search systems outsourced, for example, to a cloud environment. In comparison with other approaches, the index is fully dynamic (update operations are efficient) and aims to transfer as much load as possible from clients to the server.
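Metric-space indexes like the one described typically rely on pivot-based filtering: precomputed distances to a few reference objects (pivots) yield, via the triangle inequality, a lower bound on the true distance, letting the server discard candidates without computing (or revealing) the actual distance. A minimal sketch of that filtering step, under assumed pivot distances, not the EM-Index itself:

```python
def pivot_lower_bound(q_pivot_dists, o_pivot_dists):
    """Lower bound on d(q, o) from the triangle inequality:
    |d(q, p) - d(o, p)| <= d(q, o) for every pivot p."""
    return max(abs(a - b) for a, b in zip(q_pivot_dists, o_pivot_dists))

def may_match(q_pivot_dists, o_pivot_dists, radius):
    """Keep an object as a range-query candidate only if the bound
    cannot already exclude it."""
    return pivot_lower_bound(q_pivot_dists, o_pivot_dists) <= radius
```

Because only pivot distances are exchanged, most of the filtering load shifts to the server, consistent with the index's stated aim.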



Information ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 166
Author(s):  
Yuelei Xiao ◽  
Haiqi Li

Privacy preserving data publishing has received considerable attention for publishing useful information while preserving data privacy. The existing privacy preserving data publishing methods for multiple sensitive attributes do not consider the situation that different values of a sensitive attribute may have different sensitivity requirements. To solve this problem, we defined three security levels for sensitive attribute values with different sensitivity requirements, and gave an Lsl-diversity model for multiple sensitive attributes. Following this, we proposed three specific greedy algorithms based on the maximal-bucket first (MBF), maximal single-dimension-capacity first (MSDCF) and maximal multi-dimension-capacity first (MMDCF) algorithms and the maximal security-level first (MSLF) greedy policy, named MBF based on MSLF (MBF-MSLF), MSDCF based on MSLF (MSDCF-MSLF) and MMDCF based on MSLF (MMDCF-MSLF), to implement the Lsl-diversity model for multiple sensitive attributes. The experimental results show that the three algorithms can greatly reduce the information loss of the published microdata with only a small increase in runtime, and that their information loss tends to stabilize as data volume increases. They also avoid the sharp growth in information loss that MBF, MSDCF and MMDCF exhibit as the number of sensitive attributes increases.
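The maximal-bucket-first family of algorithms shares one greedy core: repeatedly draw one record from each of the largest remaining buckets so that every published group contains distinct sensitive values. The sketch below illustrates that core for a single sensitive attribute; it is a hypothetical simplification, not the paper's multi-attribute, security-level-aware variants.

```python
from collections import Counter

def mbf_partition(sensitive_values, group_size):
    """Greedy maximal-bucket-first grouping: each round takes one record
    from each of the `group_size` largest buckets, so every group holds
    `group_size` distinct sensitive values (a form of l-diversity)."""
    buckets = Counter(sensitive_values)
    groups = []
    while len(buckets) >= group_size:      # enough distinct values remain
        top = [v for v, _ in buckets.most_common(group_size)]
        for v in top:
            buckets[v] -= 1
            if buckets[v] == 0:
                del buckets[v]
        groups.append(top)
    return groups
```

Drawing from the largest buckets first keeps bucket sizes balanced, which is what lets the greedy policy publish more groups (and so lose less information) than arbitrary assignment.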



Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5463 ◽  
Author(s):  
Po-Wen Chi ◽  
Ming-Hung Wang

Cloud-assisted cyber–physical systems (CCPSs) integrate the physical space with cloud computing. To do so, sensors in the field collect real-life data and forward it to clouds for further data analysis and decision-making. Since multiple services may be accessed at the same time, sensor data should be forwarded to different cloud service providers (CSPs). In this scenario, attribute-based encryption (ABE) is an appropriate technique for securing data communication between sensors and clouds. Each cloud has its own attributes, and a broker can determine which cloud is authorized to access data according to the requirements set at encryption time. In this paper, we propose a privacy-preserving broker-ABE scheme for multiple CCPSs (MCCPS). The scheme separates the policy embedding job from the ABE task. To ease the computational burden on the sensors, it leaves the policy embedding task to the broker, which is generally more powerful than the sensors. Moreover, the proposed scheme provides a way for CSPs to protect data privacy from outside coercion.
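Setting the cryptography aside, the broker's authorization decision reduces to evaluating an attribute policy against a cloud's attribute set. A minimal sketch of that access-control logic only (the policy shape and attribute names are illustrative assumptions, and none of the actual ABE key machinery is shown):

```python
def authorized(cloud_attrs, policy):
    """Policy is a disjunction of attribute sets (an OR of ANDs):
    a cloud qualifies if its attributes cover at least one clause."""
    attrs = set(cloud_attrs)
    return any(clause <= attrs for clause in policy)
```

In the full scheme the policy is embedded into the ciphertext by the broker, so only CSPs whose attribute keys satisfy a clause can decrypt; the sensor itself never evaluates the policy.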



Author(s):  
Anastasiia Pika ◽  
Moe T. Wynn ◽  
Stephanus Budiono ◽  
Arthur H.M. ter Hofstede ◽  
Wil M.P. van der Aalst ◽  
...  

Process mining has been successfully applied in the healthcare domain and has helped to uncover various insights for improving healthcare processes. While the benefits of process mining are widely acknowledged, many people rightfully have concerns about irresponsible uses of personal data. Healthcare information systems contain highly sensitive information and healthcare regulations often require protection of data privacy. The need to comply with strict privacy requirements may result in a decreased data utility for analysis. Until recently, data privacy issues did not get much attention in the process mining community; however, several privacy-preserving data transformation techniques have been proposed in the data mining community. Many similarities between data mining and process mining exist, but there are key differences that make privacy-preserving data mining techniques unsuitable to anonymise process data (without adaptations). In this article, we analyse data privacy and utility requirements for healthcare process data and assess the suitability of privacy-preserving data transformation methods to anonymise healthcare data. We demonstrate how some of these anonymisation methods affect various process mining results using three publicly available healthcare event logs. We describe a framework for privacy-preserving process mining that can support healthcare process mining analyses. We also advocate the recording of privacy metadata to capture information about privacy-preserving transformations performed on an event log.
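One of the simplest privacy-preserving transformations applicable to event logs is keyed pseudonymisation of case identifiers: traces remain linkable for process mining, but patient identifiers are hidden. A minimal sketch under assumed log structure (tuples of case ID, activity, timestamp), not any specific method from the article:

```python
import hashlib
import hmac

def pseudonymise_case_ids(events, key):
    """Replace each case identifier with a keyed hash so events of the same
    patient stay linkable into one trace without exposing the identifier.
    Activities and timestamps are left untouched."""
    out = []
    for case_id, activity, timestamp in events:
        token = hmac.new(key, case_id.encode(), hashlib.sha256).hexdigest()[:12]
        out.append((token, activity, timestamp))
    return out
```

Recording that this transformation was applied (and with which scope) is exactly the kind of privacy metadata the article advocates attaching to an event log.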



Author(s):  
Nancy Victor ◽  
Daphne Lopez

Data privacy plays a noteworthy part in today's digital world, where information is gathered at exceptional rates from different sources. Privacy preserving data publishing refers to the process of publishing personal data without compromising the privacy of individuals in any manner. A variety of approaches have been devised to protect consumer privacy by applying traditional anonymization mechanisms. But these mechanisms are not well suited for Big Data, as the data generated nowadays is no longer purely structured. Data generated at very high velocity from various sources includes unstructured and semi-structured information, and thus becomes very difficult to process using traditional mechanisms. This chapter focuses on the various challenges of Big Data, on PPDM and PPDP techniques for Big Data, and on how well they can be scaled to process both historical and real-time data together using the Lambda architecture. A distributed framework for privacy preservation in Big Data that combines natural language processing techniques is also proposed in this chapter.



2014 ◽  
Vol 721 ◽  
pp. 732-735
Author(s):  
Hua Zhang

This paper proposes an integrity- and privacy-preserving data aggregation algorithm for WSNs, called IPPDA. First, it attaches a group of congruent numbers to the sensing data so that the sink node can perform integrity checking using the Chinese remainder theorem (CRT); then it computes hash-function-based message authentication codes, with time and key as parameters, to guarantee data freshness; finally, it adopts a homomorphic encryption scheme to provide privacy preservation. The simulation results show that IPPDA can effectively preserve data privacy, check data integrity, satisfy data freshness, and obtain accurate aggregation results, while incurring less computation and communication cost than iCPDA and iPDA.
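The CRT step the abstract relies on works as follows: if each sensor attaches residues of its reading modulo pairwise-coprime numbers, the sink can uniquely reconstruct the reading and flag tampering when the residues are inconsistent. A minimal sketch of the reconstruction (standard CRT, with illustrative moduli; not the full IPPDA protocol):

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct the unique x (mod prod(moduli)) with x ≡ r_i (mod m_i),
    assuming the moduli are pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M
```

At the sink, reconstructing a value whose residues do not match the attached congruent numbers signals that the aggregate was corrupted in transit.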


