Federated Learning Model with Adaptive Differential Privacy Protection in Medical IoT

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Lina Ni ◽  
Peng Huang ◽  
Yongshan Wei ◽  
Minglei Shu ◽  
Jinquan Zhang

With the proliferation of intelligent services and applications powered by artificial intelligence, the Internet of Things has penetrated many aspects of our daily lives, and the medical field is no exception. The medical Internet of Things (MIoT) can be applied to wearable devices, remote diagnosis, mobile medical treatment, and remote monitoring. The databases of various medical institutions hold large amounts of medical information. Nevertheless, because medical data is closely tied to personal privacy, it cannot be shared, resulting in data islands. Federated learning (FL), as a distributed collaborative artificial intelligence method, provides a solution; however, FL itself raises multiple security and privacy issues. This paper proposes an adaptive Differential Privacy Federated Learning Medical IoT (DPFL-MIoT) model. Specifically, for the user's local model update, we propose a differential privacy federated learning deep neural network with adaptive gradient descent (DPFLAGD-DNN) algorithm, which adaptively adds noise to the model parameters according to the characteristics and gradients of the training data. Since privacy leaks often occur on the downlink, we also present a differential privacy federated learning (DP-FL) algorithm in which adaptive noise is added to the parameters when the server distributes them. Our method effectively reduces the addition of unnecessary noise while preserving model quality. Experimental results on real-world data show that the proposed algorithm can effectively protect data privacy.
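The abstract describes DPFLAGD-DNN only at a high level, but its core step, clipping each gradient and adding noise calibrated to the clipping bound before the local update, follows the familiar differentially private gradient descent pattern. A minimal sketch of that pattern, not the paper's exact adaptation rule (function and parameter names are illustrative):

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm, noise_multiplier, rng):
    """Clip a gradient to clip_norm and add Gaussian noise scaled to it.

    Adapting clip_norm to recent gradient magnitudes (as the paper's
    adaptive scheme suggests) avoids adding more noise than necessary
    for the same privacy budget.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])  # gradient with L2 norm 5, clipped down to norm 1
noisy = dp_noisy_gradient(g, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

In a federated round, each client would apply this to its update before sending it to the server; the same idea applies on the downlink when the server perturbs the distributed parameters.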

Subject: IoT ecosystem.
Significance: The market for the Internet of Things (IoT), or connected devices, is expanding rapidly, with no manufacturer currently forecast to dominate the supply chain. This has fragmented the emerging IoT ecosystem, triggering questions about the interoperability and cybersecurity of IoT devices.
Impacts: Firms in manufacturing, transportation and logistics, and utilities are expected to see the highest IoT spending in coming years. The pace of IoT adoption is inextricably linked to that of related technologies such as 5G, artificial intelligence, and cloud computing. Data privacy and security will be the greatest constraint on IoT adoption.


2021 ◽  
Vol 7 ◽  
pp. e799
Author(s):  
Zhenlong Sun ◽  
Jing Yang ◽  
Xiaoye Li ◽  
Jianpei Zhang

Support vector machine (SVM) is a robust machine learning method widely used in classification. However, traditional SVM training methods may reveal personal privacy when the training data contains sensitive information. In the training process of SVMs, working set selection is a vital step for sequential-minimal-optimization-type decomposition methods. To avoid complex sensitivity analysis and the influence of high-dimensional data on the noise of existing privacy-preserving SVM classifiers, we propose a new differentially private working set selection algorithm (DPWSS) in this paper, which utilizes the exponential mechanism to privately select working sets. We theoretically prove that the proposed algorithm satisfies differential privacy. Extended experiments show that DPWSS achieves classification capability almost the same as the original non-private SVM under different parameters. The error in optimized objective value between the two algorithms is nearly always less than two; meanwhile, DPWSS has higher execution efficiency than the original non-private SVM, as shown by comparing iteration counts on different datasets. To the best of our knowledge, DPWSS is the first working set selection algorithm based on differential privacy.
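The exponential mechanism the abstract relies on selects one candidate from a set with probability weighted by a utility score, so higher-utility choices (here, working sets) are favored without being revealed deterministically. A generic sketch of the mechanism itself, not DPWSS (the working-set utility function is the paper's own):

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity, rng=random):
    """Privately pick one candidate with probability proportional to
    exp(epsilon * utility(c) / (2 * sensitivity))."""
    scores = [epsilon * utility(c) / (2.0 * sensitivity) for c in candidates]
    m = max(scores)  # subtract the max before exponentiating, for stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r < acc:
            return c
    return candidates[-1]

# Small epsilon: choice is nearly uniform; large epsilon: best utility dominates.
choice = exponential_mechanism([0, 1, 2], lambda c: c, epsilon=1.0, sensitivity=1.0)
```

For working set selection, `candidates` would be index pairs and `utility` a measure of how much each pair violates the optimality conditions.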


2020 ◽  
Vol 8 (6) ◽  
pp. 3892-3895

The Internet of Things (IoT) today comprises huge quantities of devices, from sensors upward, linked through a communication framework to provide value-added services to society. It allows equipment to be connected at any time, with anything, over any network and service. Forecasts suggest that by 2020 there will be 50 to 100 billion devices connected to the Internet, generating massive data that must be analyzed for knowledge mining. Data collected from individual IoT devices is not sufficient on its own to support analyses such as disaster management, sentiment analysis, smart-city planning, and surveillance. Privacy- and security-related research has been increasing over the last few years. IoT-generated data is very large, and existing mechanisms such as k-anonymity, l-diversity, and differential privacy have not been able to address these personal privacy issues, because the Internet of Things era is more vulnerable than the Internet era [10][20]. To solve personal privacy problems, researchers and IT professionals have to pay more attention to deriving policies and addressing the key issues of personal privacy preservation, so that the utility and privacy trade-off of Internet of Things applications can be improved. Personal Privacy Preserving Data Publication (PPPDP) is the area in which these problems are identified and fixed in the IoT era to ensure better personal privacy.
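As a reminder of what the k-anonymity guarantee mentioned above actually checks, every combination of quasi-identifier values must be shared by at least k records, so no individual stands out, a minimal sketch (the field names and generalized values are illustrative):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination appears in >= k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

rows = [
    {"zip": "021**", "age": "20-29", "disease": "flu"},
    {"zip": "021**", "age": "20-29", "disease": "cold"},
    {"zip": "021**", "age": "30-39", "disease": "flu"},
]
# The ("021**", "30-39") group has only one row, so 2-anonymity fails here.
ok = is_k_anonymous(rows, ["zip", "age"], 2)
```

At IoT data volumes and velocities, maintaining such groupings across continuously published streams is exactly where these classical mechanisms strain, which is the abstract's point.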


2021 ◽  
Author(s):  
Ali Hatamizadeh ◽  
Hongxu Yin ◽  
Pavlo Molchanov ◽  
Andriy Myronenko ◽  
Wenqi Li ◽  
...  

Federated learning (FL) allows the collaborative training of AI models without the need to share raw data. This capability makes it especially interesting for healthcare applications, where patient and data privacy are of utmost concern. However, recent works on the inversion of deep neural networks from model gradients have raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack that works in more realistic scenarios where the clients' training involves updating the Batch Normalization (BN) statistics. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.
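The gradient-inversion attacks discussed here typically work by optimizing dummy inputs so that the gradients they produce match the victim's shared gradients. A common matching objective from that literature, not this paper's specific attack or leakage metric, is a negative cosine similarity summed over the model's gradient tensors:

```python
import numpy as np

def gradient_match_loss(true_grads, dummy_grads):
    """Attack objective: drive dummy-data gradients toward the victim's.

    Sums (1 - cosine similarity) over corresponding gradient tensors;
    an attacker minimizes this by optimizing the dummy inputs.
    """
    loss = 0.0
    for g, d in zip(true_grads, dummy_grads):
        g, d = g.ravel(), d.ravel()
        loss += 1.0 - g @ d / (np.linalg.norm(g) * np.linalg.norm(d) + 1e-12)
    return loss
```

A loss near zero means the dummy data reproduces the observed gradients, at which point the dummy data may resemble the private training data, which is exactly the leakage the paper sets out to quantify.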


Author(s):  
Marmar Moussa ◽  
Steven A. Demurjian

This chapter presents a survey of the most important security and privacy issues related to large-scale data sharing and mining in big data, with a focus on differential privacy as a promising approach for achieving privacy, especially in the statistical databases often used in healthcare. A case study utilizing differential privacy in the healthcare domain is presented; the chapter analyzes and compares the major differentially private data release strategies and noise mechanisms, such as the Laplace and exponential mechanisms. The background section discusses several security and privacy approaches in big data, including authentication and encryption protocols and privacy-preserving techniques such as k-anonymity. Next, the chapter introduces the differential privacy concepts used in the interactive and non-interactive data sharing models and the various noise mechanisms used. An instrumental case study is then presented to examine the effect of applying differential privacy in analytics. The chapter then explores future trends and, finally, provides a conclusion.
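Of the noise mechanisms the chapter compares, the Laplace mechanism is the simplest: it perturbs a numeric query result with noise whose scale is the query's sensitivity divided by the privacy budget epsilon. A minimal sketch (the patient-count example is illustrative):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a numeric query result with Laplace(0, sensitivity/epsilon)
    noise, giving epsilon-differential privacy for that single query."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
# A patient count has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(128, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but noisier answers; the exponential mechanism plays the analogous role for non-numeric (selection) queries.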


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Yingbo Li ◽  
Yucong Duan ◽  
Zakaria Maamar ◽  
Haoyang Che ◽  
Anamaria-Beatrice Spulber ◽  
...  

Privacy protection has recently been in the spotlight for both academia and industry. Society protects individual data privacy through complex legal frameworks. The increasing number of applications of data science and artificial intelligence has resulted in higher demand for the ubiquitous use of data, while privacy protection across the broad Data-Information-Knowledge-Wisdom (DIKW) landscape, the next generation of information organization, has taken a secondary role. In this paper, we explore the DIKW architecture through applications of swarm intelligence and differential privacy. As differential privacy has proved to be an effective approach to data privacy, we examine it from a DIKW domain perspective. Swarm intelligence can effectively optimize and reduce the number of DIKW items used in differential privacy, thus improving both the effectiveness and the efficiency of differential privacy across multiple modalities of conceptual DIKW. The proposed approach is demonstrated through an application to personalized data based on the open-source IRIS dataset. The experiment demonstrates the efficiency of swarm intelligence in reducing computing complexity.


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Mimi Ma ◽  
Min Luo ◽  
Shuqin Fan ◽  
Dengguo Feng

The Industrial Internet of Things (IIoT), as a special form of the Internet of Things (IoT), has great potential for realizing intelligent transformation and industrial resource utilization. However, there are security and privacy concerns about industrial data, which is shared over an open channel via sensor devices. To address these issues, many searchable encryption schemes have been proposed to provide both data privacy protection and data searchability. However, due to the use of expensive pairing operations, most previous schemes are inefficient. Recently, a certificateless searchable public-key encryption (CLSPE) scheme was designed by Lu et al. to remove the pairing operation. Unfortunately, we find that Lu et al.'s scheme is vulnerable to user impersonation attacks. To enhance security, a new pairing-free dual-server CLSPE (DS-CLSPE) scheme for cloud-based IIoT deployment is designed in this paper. In addition, we provide security and efficiency analyses for DS-CLSPE. The analysis results show that DS-CLSPE can resist chosen keyword attacks (CKA) and is more efficient than other related schemes.




Web Services ◽  
2019 ◽  
pp. 1623-1645
Author(s):  
Marmar Moussa ◽  
Steven A. Demurjian

This chapter presents a survey of the most important security and privacy issues related to large-scale data sharing and mining in big data, with a focus on differential privacy as a promising approach for achieving privacy, especially in the statistical databases often used in healthcare. A case study utilizing differential privacy in the healthcare domain is presented; the chapter analyzes and compares the major differentially private data release strategies and noise mechanisms, such as the Laplace and exponential mechanisms. The background section discusses several security and privacy approaches in big data, including authentication and encryption protocols and privacy-preserving techniques such as k-anonymity. Next, the chapter introduces the differential privacy concepts used in the interactive and non-interactive data sharing models and the various noise mechanisms used. An instrumental case study is then presented to examine the effect of applying differential privacy in analytics. The chapter then explores future trends and, finally, provides a conclusion.


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Zhanyang Xu ◽  
Wentao Liu ◽  
Jingwang Huang ◽  
Chenyi Yang ◽  
Jiawei Lu ◽  
...  

With the explosive growth of data generated by Internet of Things (IoT) devices, the traditional cloud computing model, which transfers all data to the cloud for processing, has gradually failed to meet the real-time requirements of IoT services due to high network latency. Edge computing (EC), as a new computing paradigm, shifts data processing from the cloud to edge nodes (ENs), greatly improving the Quality of Service (QoS) for IoT applications with low-latency requirements. However, compared with other endpoint devices such as smartphones or computers, distributed ENs are more vulnerable to attacks because of their restricted computing resources and storage. In a context where security and privacy preservation have become urgent issues for EC, great progress in artificial intelligence (AI) opens many possible windows to address the security challenges. The powerful learning ability of AI enables a system to identify malicious attacks more accurately and efficiently. Meanwhile, to a certain extent, transferring model parameters instead of raw data avoids privacy leakage. In this paper, a comprehensive survey of the contribution of AI to IoT security in EC is presented. First, the research status and some basic definitions are introduced. Next, the IoT service framework with EC is discussed. A survey of privacy preservation and blockchain for edge-enabled IoT services with AI is then presented. Finally, the open issues and challenges in applying AI to IoT services based on EC are discussed.

