Masked Implementation of Format Preserving Encryption on Low-End AVR Microcontrollers and High-End ARM Processors

Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1294
Author(s):  
Hyunjun Kim ◽  
Minjoo Sim ◽  
Kyoungbae Jang ◽  
Hyeokdong Kwon ◽  
Siwoo Uhm ◽  
...  

Format-Preserving Encryption (FPE) for the Internet of Things (IoT) enables data encryption while preserving the format and length of the original data. With these advantages, FPE can be utilized in many IoT applications. However, FPE requires complicated computations, and these impose a high overhead on IoT embedded devices. In this paper, we propose an efficient implementation of the Format-preserving Encryption Algorithm (FEA), the Korean standard of FPE, together with the first-order masked implementation of FEA on both low-end (i.e., AVR microcontroller) and high-end (i.e., ARM processor) IoT devices. First, we show the vulnerability of FEA to the Correlation Power Analysis (CPA) approach. Afterward, we propose an efficient implementation method and a masking technique for both low-end and high-end IoT devices. The proposed method is secure against power analysis attacks, while the performance degradation of the masked implementation is only 2.53∼3.77% compared with the naïve FEA implementation.
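The abstract does not detail the masking construction; as a rough, hedged illustration of what a first-order Boolean mask does in general, the toy sketch below masks a table lookup with random input and output masks so the unmasked value never appears in memory. The 4-bit S-box and function names are placeholders, not part of FEA.

```python
# Illustrative first-order Boolean masking of an S-box lookup (toy example, not FEA code).
import secrets

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # toy 4-bit S-box

def masked_sbox_lookup(x_masked, m_in, m_out):
    """Return SBOX[x] ^ m_out given x ^ m_in, without ever handling x in the clear."""
    # Recompute a masked table: table[i ^ m_in] = SBOX[i] ^ m_out
    table = [0] * 16
    for i in range(16):
        table[i ^ m_in] = SBOX[i] ^ m_out
    return table[x_masked]

x = 0x7
m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)  # fresh random masks
y_masked = masked_sbox_lookup(x ^ m_in, m_in, m_out)
assert y_masked ^ m_out == SBOX[x]  # removing the output mask recovers the true result
```

Because every intermediate value is XORed with a fresh random mask, its power consumption is (to first order) statistically independent of the secret, which is what defeats a first-order CPA.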

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
E. Bertino ◽  
M. R. Jahanshahi ◽  
A. Singla ◽  
R.-T. Wu

This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. Such a problem requires the development of techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical for reducing transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later. The ability to autonomously decide about further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce both the training time and the size of the networks deployed on IoT devices. We also discuss approaches based on reinforcement learning techniques that enhance the autonomy of IoT devices.
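The pruning mentioned above is commonly realized as magnitude-based pruning; the following minimal sketch (our own generic illustration, not the paper's code) zeroes the smallest-magnitude weights of a layer to shrink the model for edge deployment.

```python
# Illustrative magnitude-based weight pruning (generic sketch, not the paper's implementation).
import numpy as np

def prune_by_magnitude(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights so that `sparsity` fraction is removed."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)  # magnitude cutoff
    mask = np.abs(weights) >= threshold                         # keep only the largest weights
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64))                                  # a hypothetical dense layer
w_pruned, mask = prune_by_magnitude(w, sparsity=0.8)
print(f"kept {mask.mean():.0%} of the weights")                 # roughly 20% survive
```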


Author(s):  
Hamza Sajjad Ahmad ◽  
Muhammad Junaid Arshad ◽  
Muhammad Sohail Akram

To send data over a network, devices need to authenticate themselves within the network. Only after authentication can a device send data over the network, and from that point on, secure communication between devices is an important task accomplished with an encryption method. IoT network devices have very small circuits with low resources and low computation power. Considering the low power, limited memory, low computation capability, and other constraints of IoT devices, an encryption technique suitable for this type of device is needed. Because IoT networks are heterogeneous, each device has different hardware properties, and not all devices operate at the same scale. To make IoT networks secure, this paper starts with a secure authentication mechanism to verify any device that wants to become part of the network. After that, an encryption algorithm is presented that makes the communication secure. This encryption algorithm is designed by considering all the important constraints of IoT devices (low computation, low memory, and cost).
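The paper's concrete protocol is not reproduced here; purely as a hedged illustration of the kind of lightweight challenge-response authentication such a scheme might build on, the sketch below uses a pre-shared key with HMAC from Python's standard library. All names and the key-provisioning step are hypothetical.

```python
# Generic pre-shared-key challenge-response authentication (illustrative only, not the paper's scheme).
import hmac, hashlib, secrets

PSK = b"per-device pre-shared key"              # hypothetical key provisioned out of band

def gateway_challenge():
    return secrets.token_bytes(16)              # fresh nonce per authentication attempt

def device_response(psk, challenge):
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def gateway_verify(psk, challenge, response):
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time comparison

challenge = gateway_challenge()
response = device_response(PSK, challenge)
assert gateway_verify(PSK, challenge, response)
```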


2011 ◽  
Vol 121-126 ◽  
pp. 867-871 ◽  
Author(s):  
Jie Li ◽  
Wei Wei Shan ◽  
Chao Xuan Tian

In order to evaluate the security of Application Specific Integrated Circuit (ASIC) implementations of cryptographic algorithms at an early design stage, a Hamming distance model based power analysis is proposed. The Data Encryption Standard (DES) algorithm is taken as an example to illustrate the threat of a differential power analysis (DPA) attack against the security of an ASIC chip. A DPA attack against the ASIC implementation of the DES algorithm is realized based on the Hamming distance power model (HD model), and it successfully recovers the correct 48-bit subkey. This result indicates that power analysis based on the HD model is a simple, rapid, and effective tool for the design and evaluation of security chips.
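As a hedged illustration of how an HD-model power analysis correlates hypothetical leakage with measured traces, the sketch below attacks a synthetic 6-bit toy target using Pearson correlation (a CPA-style variant of the analysis, not the authors' DES/ASIC setup or measurement data).

```python
# Illustrative Hamming-distance (HD) model correlation power attack on synthetic traces.
import numpy as np

rng = np.random.default_rng(1)

def hamming_weight(v):
    return bin(int(v)).count("1")

# Toy target: a 6-bit register is updated as after = before ^ (input ^ key);
# the leakage is modeled as HD(before, after) plus Gaussian noise.
true_key = 0x2B
n_traces = 2000
inputs = rng.integers(0, 64, size=n_traces)
before = rng.integers(0, 64, size=n_traces)
after = before ^ (inputs ^ true_key)
hd = np.array([hamming_weight(b ^ a) for b, a in zip(before, after)], dtype=float)
traces = hd[:, None] + rng.normal(0.0, 1.0, size=(n_traces, 50))   # 50 noisy samples per trace

def cpa_rank_keys(traces, before, inputs):
    """Correlate an HD-model hypothesis with the traces for every 6-bit key guess."""
    tr_c = traces - traces.mean(axis=0)
    tr_norm = np.linalg.norm(tr_c, axis=0) + 1e-12
    scores = []
    for guess in range(64):
        after_hyp = before ^ (inputs ^ guess)
        hyp = np.array([hamming_weight(b ^ a) for b, a in zip(before, after_hyp)], dtype=float)
        hyp_c = hyp - hyp.mean()
        corr = hyp_c @ tr_c / (np.linalg.norm(hyp_c) * tr_norm)    # Pearson per sample point
        scores.append(np.max(np.abs(corr)))
    return np.array(scores)

scores = cpa_rank_keys(traces, before, inputs)
print("recovered key:", int(np.argmax(scores)), "true key:", true_key)
```

The key guess whose predicted Hamming distances correlate most strongly with the traces is reported as the recovered subkey, which is the core of the HD-model attack flow.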


Author(s):  
Tianhang Zheng ◽  
Changyou Chen ◽  
Kui Ren

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)}[L(x)], with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this issue by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial-data distribution: a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution so as to maximally increase the generalization risk. Algorithmically, DAA performs optimization over the space of potential data distributions, which introduces direct dependencies between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab's white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
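For contrast with DAA, a standard PGD adversary perturbs each sample independently; a minimal framework-free sketch of that per-sample L∞ PGD loop could look like the following (the caller-supplied gradient function is an assumption of this sketch, e.g., obtained from a framework's autodiff).

```python
# Minimal per-sample L-infinity PGD sketch (the baseline that DAA improves on), not the DAA code.
import numpy as np

def pgd_attack(x, grad_loss, eps=0.3, step=0.01, iters=40, rng=None):
    """Projected gradient ascent on the loss within an eps-ball around x.

    grad_loss(x_adv) must return dL/dx for the fixed true label; supplying it
    is an assumption of this sketch.
    """
    rng = rng or np.random.default_rng()
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)          # random start inside the ball
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_loss(x_adv))      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)              # project back into the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                      # stay in the valid pixel range
    return x_adv
```

DAA differs from this baseline by coupling the updates across samples, so that the perturbed empirical distribution as a whole, rather than each point in isolation, maximizes the risk.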


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1549
Author(s):  
Jin-Kwan Jeon ◽  
In-Won Hwang ◽  
Hyun-Jun Lee ◽  
Younho Lee

We propose an improved RLizard implementation method that enables the RLizard key encapsulation mechanism (KEM) to run in a resource-constrained Internet of Things (IoT) environment with an 8-bit microcontroller unit (MCU) and 8–16 KB of SRAM. Existing research has shown that the proposed method can function in a relatively high-end IoT environment, but the existing implementation cannot be applied directly to our environment because of insufficient SRAM space. We improve the implementation of the RLizard KEM by utilizing the electrically erasable programmable read-only memory (EEPROM) and flash memory possessed by all 8-bit ATmega MCUs. In addition, to avoid the slowdown their use would otherwise cause, we improve the multiplication between polynomials by exploiting the special property of the second multiplicand in each algorithm of the RLizard KEM, thereby reducing the required MCU clock cycles. The results show that, compared to the existing code submitted to the National Institute of Standards and Technology (NIST) PQC standardization competition, the required MCU clock cycles are reduced by an average of 52%, and the memory used is reduced by approximately 77%. In this way, we verified that the RLizard KEM works well in our low-end IoT environment.
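The abstract only hints at the "special property of the second multiplicand"; in Lizard-family schemes the secret polynomial is typically sparse and ternary, and under that assumption multiplication in Z_q[x]/(x^n + 1) needs only additions and subtractions at the nonzero positions, as in the illustrative sketch below (our own, not the paper's code).

```python
# Illustrative a(x) * s(x) mod (x^n + 1, q) where s is sparse ternary (+1/-1 at few indices).
def sparse_poly_mul(a, s_plus, s_minus, n, q):
    """a: list of n coefficients; s_plus / s_minus: indices where s has +1 / -1."""
    r = [0] * n
    for idx in s_plus:                      # shifted additions for +1 coefficients
        for i, ai in enumerate(a):
            j = i + idx
            if j < n:
                r[j] = (r[j] + ai) % q
            else:
                r[j - n] = (r[j - n] - ai) % q   # wrap with sign flip: x^n = -1
    for idx in s_minus:                     # shifted subtractions for -1 coefficients
        for i, ai in enumerate(a):
            j = i + idx
            if j < n:
                r[j] = (r[j] - ai) % q
            else:
                r[j - n] = (r[j - n] + ai) % q
    return r

# Tiny example: n = 4, q = 17, a = 1 + 2x + 3x^2, s = x - x^3
print(sparse_poly_mul([1, 2, 3, 0], s_plus=[1], s_minus=[3], n=4, q=17))  # [2, 4, 2, 2]
```

Avoiding general coefficient multiplications in this way is the kind of optimization that keeps clock-cycle counts low on an 8-bit MCU without an efficient hardware multiplier.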


Author(s):  
Hassan B. Hassan ◽  
Qusay I. Sarham

Introduction: With the rapid deployment of embedded databases across a wide range of embedded devices, such as mobile devices and Internet of Things (IoT) devices, the amount of data generated by such devices is also growing rapidly. For this reason, performance is a crucial criterion when selecting the most suitable embedded database management system for storing and retrieving the data of these devices. Currently, many embedded databases are available for this purpose. Materials and Methods: In this paper, four popular open-source relational embedded databases, namely H2, HSQLDB, Apache Derby, and SQLite, have been compared experimentally to evaluate their operational performance in terms of creating database tables and retrieving, inserting, updating, and deleting data. Results and Discussion: The experimental results are presented in Table 4. Conclusions: The experimental results and analysis showed that HSQLDB outperformed the other databases in most evaluation scenarios.
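As a hedged sketch of the CRUD-style micro-benchmark such a comparison involves, the snippet below times basic operations on SQLite using Python's built-in sqlite3 module; the other three engines are JVM-based and would need their own drivers, so only the measurement pattern is illustrated, not the paper's actual harness.

```python
# Illustrative CRUD timing on SQLite (measurement pattern only, not the paper's benchmark suite).
import sqlite3, time

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.4f} s")

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
rows = [(i, f"sensor-{i % 10}", i * 0.5) for i in range(50_000)]

timed("create", lambda: cur.execute(
    "CREATE TABLE readings (id INTEGER PRIMARY KEY, sensor TEXT, value REAL)"))
timed("insert", lambda: (cur.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)", rows), conn.commit()))
timed("select", lambda: cur.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor").fetchall())
timed("update", lambda: (cur.execute(
    "UPDATE readings SET value = value * 2 WHERE id % 2 = 0"), conn.commit()))
timed("delete", lambda: (cur.execute(
    "DELETE FROM readings WHERE value > 100"), conn.commit()))
conn.close()
```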


Energies ◽  
2021 ◽  
Vol 14 (20) ◽  
pp. 6636
Author(s):  
Fouad Sakr ◽  
Riccardo Berta ◽  
Joseph Doyle ◽  
Alessandro De Gloria ◽  
Francesco Bellotti

The trend of bringing machine learning (ML) to Internet of Things (IoT) field devices is becoming ever more relevant, also reducing the overall energy needs of the applications. ML models are usually trained in the cloud and then deployed on edge devices. Most IoT devices generate large amounts of unlabeled data, which are expensive and challenging to annotate. This paper introduces the self-learning autonomous edge learning and inferencing pipeline (AEP), deployable on a resource-constrained embedded system, which can be used for unsupervised local training and classification. AEP combines two complementary approaches: pseudo-label generation with a confidence measure using k-means clustering, and periodic training of one of the supported classifiers, namely a decision tree (DT) or a k-nearest neighbor (k-NN) classifier, exploiting the pseudo-labels. We tested the proposed system on two IoT datasets. The AEP, running on the STM NUCLEO-H743ZI2 microcontroller, achieves accuracy levels comparable to same-type models trained on actual labels. The paper makes an in-depth performance analysis of the system, particularly addressing the limited memory footprint of embedded devices and the need to support remote training robustness.
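A desktop-scale sketch of the pseudo-labeling idea described above is shown below, using scikit-learn for brevity, whereas the paper's pipeline runs on-device with its own embedded code; the distance-ratio confidence measure here is our assumption, not necessarily the paper's.

```python
# Illustrative pseudo-labeling pipeline: k-means clusters unlabeled data, a distance-based
# confidence filters the pseudo-labels, and a k-NN classifier is trained on the survivors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(4, 1, (200, 3))])  # synthetic unlabeled data

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
pseudo_labels = kmeans.labels_

# Confidence (our assumption): ratio of distance to the second-closest centroid over the closest.
dists = np.sort(kmeans.transform(X), axis=1)
confidence = dists[:, 1] / (dists[:, 0] + 1e-9)
keep = confidence > 2.0                          # retain only high-confidence pseudo-labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X[keep], pseudo_labels[keep])
print(f"trained on {keep.sum()} of {len(X)} samples")
```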

