Security and Privacy Challenges of Deep Learning

Author(s):  
J. Andrew Onesimu ◽  
Karthikeyan J. ◽  
D. Samuel Joshua Viswas ◽  
Robin D Sebastian

Deep learning is the buzzword of recent times in the research community owing to its advantages in fields such as healthcare, medicine, and automobiles. Deep learning requires a huge amount of data to achieve good accuracy; it is therefore important to protect that data from security and privacy breaches. In this chapter, a comprehensive survey of security and privacy challenges in deep learning is presented. Security attacks such as poisoning attacks, evasion attacks, and black-box attacks are explored along with their prevention and defence techniques, and a comparative analysis of the techniques that protect data from such attacks is provided. Privacy is another major challenge in deep learning. The authors also present an in-depth survey of privacy-preserving techniques for deep learning, such as differential privacy, homomorphic encryption, secret sharing, and secure multi-party computation, together with a detailed table comparing these techniques and approaches.
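
To make one of the surveyed techniques concrete, here is a minimal Python sketch of the Laplace mechanism, the textbook building block of differential privacy (not taken from the chapter; function and parameter names are illustrative):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with epsilon-differential privacy by adding
    Laplace(0, sensitivity / epsilon) noise, so datasets differing in
    one record produce statistically similar outputs."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release the mean of 1,000 values bounded in [0, 1].
# The sensitivity of the mean of n values in [0, 1] is 1/n.
data = np.random.rand(1000)
noisy_mean = laplace_mechanism(data.mean(), sensitivity=1 / len(data), epsilon=0.5)
```

A smaller epsilon gives stronger privacy at the cost of noisier answers, which is exactly the utility trade-off such comparison tables weigh.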


2021 ◽  
Author(s):  
Faris. A. Almalki ◽  
Ben othman Soufiene

Abstract The Internet of Things (IoT) connects various kinds of intelligent objects and devices over the internet to collect and exchange data. The IoT is now used in diverse application domains, including healthcare, where IoT devices collect patient data and forward it to healthcare professionals for review. Because IoT devices are usually resource-constrained in terms of energy consumption, storage capacity, computational capability, and communication range, data aggregation techniques are used to reduce the communication overhead. However, in IoT-based healthcare systems, the heterogeneity of technologies, the large number of devices and systems, and the different types of users and roles create important security challenges, making the security and privacy of health data aggregation critical. In this paper, we propose a novel secure data aggregation scheme based on homomorphic primitives for IoT-based healthcare systems, called EPPDA (an Efficient and Privacy-Preserving Data Aggregation scheme with authentication for IoT-based healthcare applications). EPPDA uses a verification and authorization phase to verify the legitimacy of nodes that want to join the aggregation process. It protects data privacy with additive homomorphic encryption and combines it with a homomorphic MAC to check data integrity. The security analysis and experimental results show that the proposed scheme guarantees data privacy, message authenticity, and integrity with lightweight communication and computation overhead.
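
The additive homomorphism that such a scheme relies on can be illustrated with the Paillier cryptosystem via the python-paillier (phe) library; this is a generic sketch assuming a Paillier-style additive scheme, not EPPDA's actual construction, and the homomorphic MAC for integrity is omitted:

```python
# pip install phe  (python-paillier)
from phe import paillier

# The authorized reader publishes a key pair; IoT nodes encrypt readings.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

readings = [72, 68, 75]  # e.g., heart-rate samples from three sensor nodes
ciphertexts = [public_key.encrypt(r) for r in readings]

# Additive homomorphism: an aggregator sums ciphertexts without decrypting,
# so intermediate nodes never see individual patient values.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

assert private_key.decrypt(encrypted_sum) == sum(readings)
```

Only the final aggregate is ever decrypted, which is what keeps per-patient readings private along the aggregation path.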


Author(s):  
Artrim Kjamilji

Nowadays many different entities collect data of the same nature, but in slightly different environments: different hospitals collect data about their patients' symptoms and corresponding disease diagnoses, different banks collect transactions of their customers' bank accounts, multiple cyber-security companies collect data about log files and corresponding attacks, and so on. It has been shown that if those entities merged their privately collected data into a single dataset and used it to train a machine learning (ML) model, they would often end up with a trained model that outperforms human experts of the corresponding fields in terms of accurate predictions. However, there is a drawback: due to privacy concerns, backed by laws and ethical reasons, no entity is willing to share its privately collected data with others. The same problem appears in the classification case over an already trained ML model. On one hand, a user with an unclassified query (record) wants to share with the server that owns the trained model neither the content of the query (which might contain private data such as a credit card number, IP address, etc.) nor the final prediction (classification) of the query. On the other hand, the owner of the trained model does not want to leak any parameter of the model to the user. To overcome these shortcomings, several cryptographic and probabilistic techniques have been proposed in recent years to enable both privacy-preserving training and privacy-preserving classification. They include anonymization and k-anonymity, differential privacy, secure multiparty computation (MPC), federated learning, private information retrieval (PIR), oblivious transfer (OT), garbled circuits, and homomorphic encryption, to name a few. Theoretical analyses and experimental results show that current privacy-preserving schemes are suitable for real-world deployment, while the accuracy of most of them differs little or not at all from that of schemes that work in a non-privacy-preserving fashion.
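
As a concrete taste of the MPC building blocks listed above, the following minimal Python sketch shows additive secret sharing over a prime field (the parameters and function names are illustrative, not drawn from the paper):

```python
import secrets

PRIME = 2**61 - 1  # field modulus; any prime larger than the inputs works

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two hospitals jointly compute a sum without revealing their inputs:
# each of three compute parties adds the two shares it holds locally.
a_shares = share(120, 3)
b_shares = share(45, 3)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 165
```

Addition is essentially free in this scheme; secure multiplication needs extra machinery (e.g., Beaver triples), which is where most of the cost of MPC-based training comes from.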


Author(s):  
Helen Chen ◽  
Shubhankar Mohapatra ◽  
George Michalopoulos ◽  
Xi He ◽  
Ian McKillop

Using deep learning to advance personalized healthcare requires data about patients to be collected and aggregated from disparate sources that often span institutions and geographies. Researchers regularly come face-to-face with legitimate security and privacy policies that constrain access to these data. In this work, we present a vision for privacy-preserving federated neural network architectures that permit data to remain at a custodian's institution while enabling the data to be discovered and used in neural network modeling. Using a diabetes dataset, we demonstrate that the accuracy and processing efficiency of federated deep learning architectures are equivalent to those of models built on centralized datasets.
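
A minimal sketch of the aggregation step behind such a federated architecture is the standard FedAvg update, shown below in Python (a generic illustration, not the authors' exact architecture; names and dataset sizes are assumptions):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: average each layer's parameters across clients,
    weighting each client by the size of its local dataset."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Three institutions train locally and send only parameters, never records.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
sizes = [1200, 800, 500]  # illustrative local dataset sizes
global_model = federated_average(clients, sizes)
```

The raw patient records never leave each custodian; only model parameters cross institutional boundaries in each training round.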


2021 ◽  
Author(s):  
Tamilarasi G ◽  
Rajiv Gandhi K ◽  
Palanisamy V

Abstract In recent years, vehicular ad hoc networks (VANETs) have gained significant interest in the field of intelligent transportation systems (ITS) owing to the safety and preventive measures they offer drivers and passengers. Regardless of these merits, VANETs face several issues, particularly with respect to the security and privacy of users and messages. Because of the decentralized structure and dynamic topology of a VANET, it is hard to detect malicious or faulty nodes or users. With this motivation, this paper designs a new privacy-preserving partially homomorphic encryption technique with optimal key generation using an improved grasshopper optimization algorithm (IGOA-PHE) for VANETs. The proposed IGOA-PHE technique aims to achieve privacy and security in VANETs and involves a two-stage process, namely the ElGamal public key cryptosystem (EGPKC) for PHE and IGOA-based optimal key generation. To improve the security of the EGPKC technique, the keys are optimally chosen using the IGOA, which is derived by incorporating the concepts of Gaussian mutation (GM) and Levy flights. The proposed IGOA-PHE technique is examined in a wide range of experiments, and the resultant outcomes exhibit the maximum performance of the presented technique over recent state-of-the-art methods.
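
Since the scheme builds on the ElGamal public key cryptosystem, the following toy Python sketch shows textbook ElGamal and its multiplicative homomorphism, which is what makes it partially homomorphic (toy parameters for illustration only; real deployments use 2048-bit groups or elliptic curves, and key generation here is ordinary random selection, not the paper's IGOA optimization):

```python
import secrets

p = 30803  # safe prime: (p - 1) / 2 is also prime
g = 2      # generator of Z_p* for this prime

def keygen():
    x = secrets.randbelow(p - 2) + 1           # private key
    return x, pow(g, x, p)                     # (private, public)

def encrypt(y: int, m: int):
    k = secrets.randbelow(p - 2) + 1           # fresh randomness per message
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(x: int, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p    # c2 / c1^x (mod p)

x, y = keygen()
ca, cb = encrypt(y, 12), encrypt(y, 5)
# Multiplicative homomorphism: the component-wise product of two
# ciphertexts decrypts to the product of the plaintexts.
product = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)
assert decrypt(x, product) == 12 * 5
```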


2021 ◽  
Vol 13 (11) ◽  
pp. 2221
Author(s):  
Munirah Alkhelaiwi ◽  
Wadii Boulila ◽  
Jawad Ahmad ◽  
Anis Koubaa ◽  
Maha Driss

Satellite images have drawn increasing interest from a wide variety of users, including business and government, ever since their increased usage in important fields ranging from weather, forestry and agriculture to surface changes and biodiversity monitoring. Recent updates in the field have also introduced various deep learning (DL) architectures to satellite imagery as a means of extracting useful information. However, this new approach comes with its own issues, including the fact that many users utilize ready-made cloud services (both public and private) in order to take advantage of built-in DL algorithms and thus avoid the complexity of developing their own DL architectures. This presents new challenges in protecting data against unauthorized access, mining, and usage of sensitive information extracted from that data, raising new privacy concerns regarding sensitive data in satellite images. This research proposes an efficient approach that takes advantage of privacy-preserving deep learning (PPDL)-based techniques to address privacy concerns regarding data from satellite images when applying public DL models. In this paper, we propose a partially homomorphic encryption scheme (a Paillier scheme), which enables processing of confidential information without exposure of the underlying data. Our method achieves robust results when applied to a custom convolutional neural network (CNN) as well as to existing transfer learning methods. The proposed encryption scheme also allows training CNN models on encrypted data directly, which incurs lower computational overhead. Our experiments were performed on a real-world dataset covering several regions across Saudi Arabia. The results demonstrate that our CNN-based models were able to retain data utility while maintaining data privacy. Security parameters such as correlation coefficient (−0.004), entropy (7.95), energy (0.01), contrast (10.57), number of pixel change rate (4.86), unified average change intensity (33.66), and more are in favor of our proposed encryption scheme. To the best of our knowledge, this research is also one of the first studies that applies PPDL-based techniques to satellite image data in any capacity.
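
A minimal from-scratch Paillier sketch in Python illustrates the additive homomorphism that lets encrypted pixel values be processed without exposure (toy primes for readability; the paper's actual parameters and CNN integration are not reproduced):

```python
import secrets
from math import gcd

p, q = 293, 433                                # toy primes; use >= 2048-bit moduli in practice
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
g = n + 1                                      # standard simplified generator

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # precomputed decryption constant

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1           # per-message randomness (assumed coprime to n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism on encrypted pixel values: E(m1) * E(m2) = E(m1 + m2).
c = encrypt(120) * encrypt(37) % n2
assert decrypt(c) == 157
```

Because addition (and multiplication by plaintext scalars) can be carried out on ciphertexts, the linear layers of a CNN can in principle operate on encrypted inputs, which is the kind of property a Paillier-based PPDL pipeline exploits.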

