NoPeek: Information leakage reduction to share activations in distributed deep learning

Author(s):  
Praneeth Vepakomma ◽  
Abhishek Singh ◽  
Otkrist Gupta ◽  
Ramesh Raskar


2021 ◽
Vol 21 (3) ◽  
pp. 1-20
Author(s):  
Mohamad Ali Mehrabi ◽  
Naila Mukhtar ◽  
Alireza Jolfaei

Many Internet of Things applications in smart cities use elliptic-curve cryptosystems due to their efficiency compared to other well-known public-key cryptosystems such as RSA. One of the important components of an elliptic-curve-based cryptosystem is the elliptic-curve point multiplication, which has been shown to be vulnerable to various types of side-channel attacks. Recently, substantial progress has been made in applying deep learning to side-channel attacks. Conceptually, the idea is to monitor a core while it runs encryption and record a side channel of a certain kind, for example its power consumption. Knowledge of the underlying encryption algorithm can then be used to train a model to recognise the key used for encryption. The model is then applied to traces gathered from the crypto core in order to recover the encryption key. In this article, we propose an RNS GLV elliptic curve cryptography core which is immune to machine-learning- and deep-learning-based side-channel attacks. The experimental analysis confirms that the proposed crypto core does not leak any information about the private key and is therefore suitable for hardware implementations.
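
A minimal sketch of the profiled attack the abstract describes, for orientation only: a model is trained on side-channel traces labelled with the key byte in use, then applied to victim traces to rank key candidates. The traces, labels, and the small MLP below are placeholders, not the authors' setup.

```python
# Sketch of a profiled deep-learning side-channel attack, assuming we
# hold power traces labelled with the key byte used during capture.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical profiling set: power traces captured from a device we
# control, each labelled with the key byte in use (0..255).
X_profile = rng.normal(size=(2_000, 200))       # placeholder traces
y_profile = rng.integers(0, 256, size=2_000)    # placeholder key bytes

# Train a model to recognise the key byte from the leakage pattern.
model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=20)
model.fit(X_profile, y_profile)

# Attack phase: apply the model to traces gathered from the victim
# core and sum log-probabilities across traces to rank key candidates.
X_attack = rng.normal(size=(50, 200))           # placeholder victim traces
log_probs = np.log(model.predict_proba(X_attack) + 1e-40).sum(axis=0)
best_key_byte = int(model.classes_[np.argmax(log_probs)])
print(f"most likely key byte: {best_key_byte:#04x}")
```

A core that is immune to this class of attack must keep the trace statistics independent of the key, so that no such classifier can do better than chance.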


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1730
Author(s):  
Dan Deng ◽  
Xingwang Li ◽  
Ming Zhao ◽  
Khaled M. Rabie ◽  
Rupak Kharel

Perfect channel state information (CSI) is required in most classical physical-layer security techniques, yet ideal CSI is difficult to obtain over a time-varying wireless fading channel. Although imperfect CSI has a great impact on the security of MIMO communications, deep learning is emerging as a promising way to handle its negative effects. In this work, we propose two types of deep-learning-based secure MIMO detectors for heterogeneous networks, where the macro base station (BS) chooses the null-space eigenvectors to prevent information leakage to the femto BS. The bit error rate of the associated user is therefore adopted as the metric to evaluate system performance. With the help of deep convolutional neural networks (CNNs), the macro BS obtains a refined version of the imperfect CSI. Simulation results are provided to validate the proposed algorithms. The impacts of system parameters, such as the correlation factor of the imperfect CSI, the normalized Doppler frequency, and the number of antennas, are investigated in different setup scenarios. The results show that considerable performance gains can be obtained with the deep-learning-based detectors compared with the classical maximum-likelihood algorithm.
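
The null-space idea can be sketched in a few lines of NumPy: the macro BS precodes along right singular vectors of the macro-to-femto channel that have zero singular value, so the transmitted signal lands in that channel's null space and nothing leaks toward the femto BS. All dimensions and channel draws below are illustrative, and the channels are assumed perfectly known for the sketch.

```python
# Minimal sketch of null-space precoding at the macro BS over
# flat-fading channels; dimensions and channel draws are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_tx = 8       # macro BS antennas
n_femto = 2    # femto BS antennas (leakage direction)
n_user = 2     # macro user antennas

# Macro BS -> femto BS channel; leakage must be suppressed along it.
H_femto = (rng.normal(size=(n_femto, n_tx))
           + 1j * rng.normal(size=(n_femto, n_tx))) / np.sqrt(2)
# Macro BS -> macro user channel.
H_user = (rng.normal(size=(n_user, n_tx))
          + 1j * rng.normal(size=(n_user, n_tx))) / np.sqrt(2)

# The last (n_tx - n_femto) right singular vectors span the null space
# of H_femto; precoding with them sends nothing toward the femto BS.
_, _, Vh = np.linalg.svd(H_femto)
V_null = Vh[n_femto:].conj().T          # shape (n_tx, n_tx - n_femto)

# QPSK symbols transmitted along two null-space directions.
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j],
               size=(n_user, 1)) / np.sqrt(2)
x = V_null[:, :n_user] @ s

print("leakage power at femto BS:", np.linalg.norm(H_femto @ x) ** 2)  # ~0
print("signal power at macro user:", np.linalg.norm(H_user @ x) ** 2)
```

With imperfect CSI the estimated null space no longer matches the true channel, which is exactly the residual-leakage problem the CNN-refined CSI is meant to reduce.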


2021 ◽  
Vol 11 (18) ◽  
pp. 8497
Author(s):  
Yazhou Wang ◽  
Bing Li ◽  
Yan Zhang ◽  
Jiaxin Wu ◽  
Qianya Ma

Biometric keys are widely used in digital identity systems due to the inherent uniqueness of biometrics. However, existing biometric key generation methods may expose the biometric data itself, which would render a user's biometric traits permanently unusable in a secure authentication system. To enhance security and privacy, we propose a secure biometric key generation method based on deep learning. Firstly, to prevent information leakage of the biometric data, we represent each user's biometric data with a random binary code and adopt a deep learning model to establish the relationship between the biometric data and that code. Secondly, to protect privacy and guarantee revocability of the biometric key, we add a random permutation operation that shuffles the elements of the binary code and thereby issues a new biometric key. Thirdly, to further enhance the reliability and security of the biometric key, we construct a fuzzy commitment module to generate the helper data without revealing any biometric information during enrollment. Three benchmark datasets, ORL, Extended YaleB, and CMU-PIE, are used for evaluation. The experimental results show that our scheme achieves a higher genuine accept rate (GAR) at a 1% false accept rate (FAR) than state-of-the-art methods, while satisfying the revocability and randomness properties of biometric keys. The security analyses show that our model can effectively resist information leakage, cross-matching, and other attacks. Moreover, the proposed model was applied to a data encryption scenario on our local computer, where the whole encryption and decryption takes less than 0.5 s at different key lengths.
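
A minimal sketch of the revocability mechanism, under stated assumptions: each user is enrolled with a random binary code (which a deep model would learn to reproduce from the user's biometric), and a secret permutation shuffles that code before key derivation, so revoking a key only means issuing a new permutation. The model is stood in for by the stored code, and all names and the SHA-256 key-derivation step are illustrative, not the authors' implementation.

```python
# Sketch: random binary code + secret permutation = revocable key.
import hashlib
import numpy as np

rng = np.random.default_rng(2)
CODE_BITS = 256

# Enrollment: draw a random binary code for this user; a deep model
# would be trained to output this code from the user's biometric data.
code = rng.integers(0, 2, size=CODE_BITS, dtype=np.uint8)

def derive_key(code_bits: np.ndarray, permutation: np.ndarray) -> bytes:
    """Shuffle the code with the user's secret permutation, then hash
    the permuted bits down to a fixed-length key."""
    permuted = code_bits[permutation]
    return hashlib.sha256(permuted.tobytes()).digest()

perm_v1 = rng.permutation(CODE_BITS)
key_v1 = derive_key(code, perm_v1)

# Revocation: discard the old permutation and issue a fresh one; the
# biometric and the learned model stay the same, but the key changes.
perm_v2 = rng.permutation(CODE_BITS)
key_v2 = derive_key(code, perm_v2)
assert key_v1 != key_v2  # old key is revoked, biometric untouched
```

The fuzzy commitment module described in the abstract would sit in front of this, using error-correcting helper data to tolerate noisy biometric readings without storing the biometric itself.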


2021 ◽  
Vol 15 ◽  
Author(s):  
Sunao Yotsutsuji ◽  
Miaomei Lei ◽  
Hiroyuki Akama

Recently, several deep learning methods have been applied to decoding in task-related fMRI, and their advantages have been exploited in a variety of ways. However, this paradigm is sometimes problematic, owing to the difficulty of applying deep learning to high-dimensional data under small-sample-size conditions. The difficulties of gathering large amounts of data to develop predictive multi-layer machine learning models from fMRI experiments with complicated designs and tasks are well recognized. Group-level multi-voxel pattern analysis with small sample sizes results in low statistical power and large accuracy-evaluation errors; failure in such instances is ascribed to the individual variability that risks information leakage, a particular issue when dealing with a limited number of subjects. In this study, using a small fMRI dataset on bilingual language switching in a property generation task, we evaluated the relative fit of different deep learning models, incorporating moderate split methods to control the amount of information leakage. Our results indicated that the session shuffle split as the data folding method, combined with the multichannel 2D convolutional neural network (M2DCNN) classifier, recorded the best authentic classification accuracy, outperforming the 3D convolutional neural network (3DCNN). In this manuscript, we discuss the tolerability of within-subject or within-session information leakage, whose impact is generally considered small but is complex and essentially unknown; this requires clarification in future studies.
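
One plausible reading of a session-level fold, sketched with scikit-learn's GroupShuffleSplit: grouping on the session keeps all trials from one scanning session on a single side of the split, while the same subject may still appear on both sides, which is the moderate, tolerated form of leakage the abstract discusses. The labels, shapes, and group counts below are illustrative, not the authors' dataset.

```python
# Sketch of a session-grouped split, assuming each trial carries a
# session label; trials from one session never straddle train/test.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(3)
n_trials = 240
X = rng.normal(size=(n_trials, 4096))               # placeholder fMRI features
y = rng.integers(0, 2, size=n_trials)               # e.g. L1 vs. L2 condition
session = np.repeat(np.arange(12), n_trials // 12)  # 12 hypothetical sessions

splitter = GroupShuffleSplit(n_splits=5, test_size=0.25, random_state=0)
for train_idx, test_idx in splitter.split(X, y, groups=session):
    # No session appears on both sides of the split.
    assert not set(session[train_idx]) & set(session[test_idx])
    # ...fit the classifier on X[train_idx], evaluate on X[test_idx]...
```

Grouping on the subject instead of the session would remove within-subject leakage entirely, at the cost of far fewer effective folds in a small cohort.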


2020 ◽  
Vol 34 (04) ◽  
pp. 6372-6379
Author(s):  
Bingzhe Wu ◽  
Chaochao Chen ◽  
Shiwan Zhao ◽  
Cen Chen ◽  
Yuan Yao ◽  
...  

Bayesian deep learning has recently been regarded as an intrinsic way to characterize the weight uncertainty of deep neural networks (DNNs). Stochastic Gradient Langevin Dynamics (SGLD) is an effective method for enabling Bayesian deep learning on large-scale datasets. Previous theoretical studies have shown various appealing properties of SGLD, ranging from convergence properties to generalization bounds. In this paper, we study the properties of SGLD from the novel perspective of membership privacy protection (i.e., preventing the membership attack). The membership attack, which aims to determine whether a specific sample was used to train a given DNN model, has emerged as a common threat against deep learning algorithms. To this end, we build a theoretical framework to analyze the information leakage (w.r.t. the training dataset) of a model trained using SGLD. Based on this framework, we demonstrate that SGLD can prevent information leakage of the training dataset to a certain extent. Moreover, our theoretical analysis extends naturally to other types of Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods. Empirical results on different datasets and models verify our theoretical findings and suggest that the SGLD algorithm can not only reduce information leakage but also improve the generalization ability of DNN models in real-world applications.
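
For reference, the SGLD update itself is standard: a stochastic gradient step on the minibatch-rescaled negative log-posterior plus Gaussian noise whose variance equals the step size, and it is this injected noise that yields the posterior-sampling behaviour the paper connects to membership privacy. The logistic-regression setting and all hyperparameters below are toy choices, not the paper's experiments.

```python
# One SGLD epoch on a toy logistic regression in plain NumPy.
import numpy as np

rng = np.random.default_rng(4)
N, d, batch = 1_000, 10, 50
X = rng.normal(size=(N, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=N) > 0).astype(float)

theta = np.zeros(d)
step = 1e-3           # epsilon_t, kept constant for simplicity
prior_prec = 1.0      # Gaussian prior N(0, 1/prior_prec) on the weights

for t in range(0, N, batch):
    idx = rng.integers(0, N, size=batch)
    p = 1.0 / (1.0 + np.exp(-X[idx] @ theta))
    # Gradient of the negative log-posterior, minibatch rescaled by N/batch.
    grad = (N / batch) * X[idx].T @ (p - y[idx]) + prior_prec * theta
    # SGLD: half-step-size gradient move plus N(0, step) noise.
    theta += -0.5 * step * grad + np.sqrt(step) * rng.normal(size=d)
```

Dropping the noise term recovers plain minibatch SGD; the paper's analysis concerns how the noise limits what the final parameters reveal about any individual training sample.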


2020 ◽  
Vol 2020 (3) ◽  
pp. 5-24 ◽  
Author(s):  
Mohammad Saidur Rahman ◽  
Payap Sirinam ◽  
Nate Mathews ◽  
Kantha Girish Gangadhara ◽  
Matthew Wright

A passive local eavesdropper can leverage Website Fingerprinting (WF) to deanonymize the web browsing activity of Tor users. The value of timing information to WF has often been discounted in recent works due to the volatility of low-level timing information. In this paper, we examine more carefully the extent to which packet timing can be used to facilitate WF attacks. We first propose a new set of timing-related features based on burst-level characteristics, to identify further ways that timing patterns could be used by classifiers to identify sites. We then evaluate the effectiveness of both raw timing and directional timing, which combines raw timing and direction, in a deep-learning-based WF attack. Our closed-world evaluation shows that directional timing performs best in most of the settings we explored, achieving: (i) 98.4% on undefended Tor traffic; (ii) 93.5% on WTF-PAD traffic, several points higher than when only directional information is used; and (iii) 64.7% against onion sites, 12% higher than using direction alone. Further evaluations in the open-world setting show small increases in both precision (+2%) and recall (+6%) with directional timing on WTF-PAD traffic. To further investigate the value of timing information, we perform an information leakage analysis on our proposed handcrafted features. Our results show that while the timing features leak less information than the directional features, the information contained in each set is mutually exclusive of the other, so timing features can improve the robustness of a classifier.
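
A minimal sketch of burst-level timing features, for orientation: a trace is a sequence of (timestamp, direction) pairs, a burst is a maximal run of packets in one direction, and per-burst durations, sizes, and inter-burst gaps are summarized into a feature vector. The specific statistics below are illustrative, not the paper's exact feature set.

```python
# Burst-level timing features from a trace of (timestamp, direction)
# pairs, with direction in {+1, -1} (outgoing / incoming).
import numpy as np

def burst_timing_features(times: np.ndarray, dirs: np.ndarray) -> np.ndarray:
    # Split the trace at every direction change to obtain bursts.
    change = np.flatnonzero(np.diff(dirs) != 0) + 1
    bursts = np.split(np.arange(len(dirs)), change)
    durations = np.array([times[b[-1]] - times[b[0]] for b in bursts])
    sizes = np.array([len(b) for b in bursts])
    gaps = np.array([times[bursts[i + 1][0]] - times[bursts[i][-1]]
                     for i in range(len(bursts) - 1)] or [0.0])
    return np.array([
        durations.mean(), durations.std(), durations.max(),
        sizes.mean(), sizes.max(),
        gaps.mean(), gaps.std(),
        float(len(bursts)),
    ])

# Hypothetical trace: timestamps in seconds, +1 outgoing / -1 incoming.
t = np.array([0.00, 0.01, 0.05, 0.30, 0.31, 0.33, 0.90, 1.20])
d = np.array([+1, +1, +1, -1, -1, -1, +1, +1])
print(burst_timing_features(t, d))
```

Directional timing in the abstract's sense can be obtained by signing each timestamp with its packet's direction, so one input stream carries both kinds of information to the deep-learning classifier.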


2019 ◽  
Vol 53 (3) ◽  
pp. 281-294
Author(s):  
Jean-Michel Foucart ◽  
Augustin Chavanne ◽  
Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) and, more recently, for digital models (automatic model analysis, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, both in terms of digitization and of segmentation. Comparing the model analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should make it possible, in the medium term, to move toward a more preventive and more predictive orthodontics.

