Multi-Party Privacy-Preserving Logistic Regression with Poor Quality Data Filtering for IoT Contributors

Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2049
Author(s):  
Kennedy Edemacu ◽  
Jong Wook Kim

The internet of things (IoT) now generates data in many application domains. Logistic regression, a standard machine learning algorithm with a wide range of applications, is often built on such data. Nevertheless, building a powerful and effective logistic regression model requires large amounts of data, so collaboration between multiple IoT participants has often been the go-to approach. However, privacy concerns and poor data quality are two challenges that threaten the success of such a setting. Several studies have proposed different methods to address the privacy concern, but to the best of our knowledge, little attention has been paid to the poor data quality problem in the multi-party logistic regression setting. In this study, we therefore propose a multi-party privacy-preserving logistic regression framework with poor quality data filtering for IoT data contributors that addresses both problems. Specifically, we propose a new metric, gradient similarity, computed in a distributed setting and used to filter out parameters from data contributors with poor quality data. To address the privacy challenge, we employ homomorphic encryption. Theoretical analysis and experimental evaluations on real-world datasets demonstrate that our proposed framework is privacy-preserving and robust against poor quality data.
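For intuition, here is a minimal plaintext sketch of the filtering idea: an aggregator scores each contributor's local gradient against the consensus gradient and drops outliers before averaging. The use of cosine similarity and a fixed threshold are illustrative assumptions, not the paper's exact gradient-similarity metric, and the homomorphic-encryption layer is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradient(w, X, y):
    """Plaintext logistic-regression gradient for one contributor's batch."""
    preds = sigmoid(X @ w)
    return X.T @ (preds - y) / len(y)

def filter_contributors(gradients, threshold=0.0):
    """Keep only gradients whose cosine similarity to the mean gradient
    exceeds the threshold (a stand-in for the gradient-similarity test)."""
    mean_g = np.mean(gradients, axis=0)
    kept = []
    for g in gradients:
        sim = g @ mean_g / (np.linalg.norm(g) * np.linalg.norm(mean_g) + 1e-12)
        if sim > threshold:
            kept.append(g)
    return np.mean(kept, axis=0) if kept else mean_g

# Illustrative aggregator update with the filtered average:
# w -= learning_rate * filter_contributors(per_contributor_gradients)
```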

10.2196/22555 ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. e22555
Author(s):  
Yao Lu ◽  
Tianshu Zhou ◽  
Yu Tian ◽  
Shiqiang Zhu ◽  
Jingsong Li

Background Data sharing in multicenter medical research can improve the generalizability of research, accelerate progress, enhance collaborations among institutions, and lead to new discoveries from data pooled from multiple sources. Despite these benefits, many medical institutions are unwilling to share their data, as sharing may cause sensitive information to be leaked to researchers, other institutions, and unauthorized users. Great progress has been made in the development of secure machine learning frameworks based on homomorphic encryption in recent years; however, nearly all such frameworks use a single secret key and lack a description of how to securely evaluate the trained model, which makes them impractical for multicenter medical applications. Objective The aim of this study is to provide a privacy-preserving machine learning protocol for multiple data providers and researchers (eg, logistic regression). This protocol allows researchers to train models and then evaluate them on medical data from multiple sources while providing privacy protection for both the sensitive data and the learned model. Methods We adapted a novel threshold homomorphic encryption scheme to guarantee privacy requirements. We devised new relinearization key generation techniques for greater scalability and multiplicative depth and new model training strategies for simultaneously training multiple models through x-fold cross-validation. Results Using a client-server architecture, we evaluated the performance of our protocol. The experimental results demonstrated that, with 10-fold cross-validation, our privacy-preserving logistic regression model training and evaluation over 10 attributes in a data set of 49,152 samples took approximately 7 minutes and 20 minutes, respectively. Conclusions We present the first privacy-preserving multiparty logistic regression model training and evaluation protocol based on threshold homomorphic encryption. Our protocol is practical for real-world use and may promote multicenter medical research to some extent.
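As a plaintext reference for the training workflow (not the threshold homomorphic encryption protocol itself), the sketch below runs the kind of k-fold cross-validated logistic regression training and evaluation that the protocol performs over ciphertexts; all function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=50):
    """Gradient-descent logistic regression; in the protocol each step would
    be evaluated on ciphertexts under the threshold HE scheme."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def kfold_train_eval(X, y, k=10):
    """Train k models on k folds and report the mean held-out accuracy."""
    idx = np.arange(len(y))
    np.random.shuffle(idx)
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = train_logreg(X[train], y[train])
        accs.append(np.mean((sigmoid(X[test] @ w) > 0.5) == y[test]))
    return np.mean(accs)
```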


2019 ◽  
Vol 50 (1-2) ◽  
pp. 88-92
Author(s):  
Behrouz Ehsani-Moghaddam ◽  
Ken Martin ◽  
John A Queenan

Data quality (DQ) is the degree to which a given dataset meets a user’s requirements. In the primary healthcare setting, poor quality data can lead to poor patient care, negatively affect the validity and reproducibility of research results and limit the value that such data may have for public health surveillance. To extract reliable and useful information from a large quantity of data and to make more effective and informed decisions, data should be as clean and free of errors as possible. Moreover, because DQ is defined within the context of different user requirements that often change, DQ should be considered to be an emergent construct. As such, we cannot expect that a sufficient level of DQ will last forever. Therefore, the quality of clinical data should be constantly assessed and reassessed in an iterative fashion to ensure that appropriate levels of quality are sustained in an acceptable and transparent manner. This document is based on our hands-on experiences dealing with DQ improvement for the Canadian Primary Care Sentinel Surveillance Network database. The DQ dimensions that are discussed here are accuracy and precision, completeness and comprehensiveness, consistency, timeliness, uniqueness, data cleaning and coherence.
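As a concrete illustration of how a few of these dimensions can be checked repeatedly and transparently, the sketch below computes simple completeness, uniqueness, and timeliness indicators with pandas; the column names are hypothetical and the appropriate checks depend on the dataset's schema and the user's requirements.

```python
import pandas as pd

def dq_report(df: pd.DataFrame, id_col: str, date_col: str) -> dict:
    """Simple, repeatable checks for a few of the DQ dimensions discussed above.
    id_col and date_col are illustrative placeholders for a record identifier
    and a record date in the dataset being assessed."""
    return {
        # Completeness: share of non-missing values per column
        "completeness": (1 - df.isna().mean()).to_dict(),
        # Uniqueness: proportion of duplicated record identifiers
        "duplicate_ids": df[id_col].duplicated().mean(),
        # Timeliness: age (in days) of the most recent record
        "days_since_last_record": (
            pd.Timestamp.now() - pd.to_datetime(df[date_col]).max()
        ).days,
    }
```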


2018 ◽  
Vol 11 (S4) ◽  
Author(s):  
Andrey Kim ◽  
Yongsoo Song ◽  
Miran Kim ◽  
Keewoo Lee ◽  
Jung Hee Cheon

