Realization of SMS4 Algorithm Based on Share Memory of the Heterogeneous Multi-Core Password Chip System

2014 ◽  
Vol 668-669 ◽  
pp. 1368-1373
Author(s):  
Lei Zhang ◽  
Ren Ping Dong ◽  
Ya Ping Yu

To meet the demands of modern high-speed cryptographic communication, this paper builds a heterogeneous multi-core password system architecture based on shared memory on a Xilinx XUP Virtex-II Pro chip. Under this architecture, SMS4 encryption and decryption are realized at high speed. The SMS4 execution time, throughput, and resource utilization of the heterogeneous multi-core system are compared with those of a homogeneous multi-core password system and a heterogeneous single-core password system. The experimental results show that the heterogeneous multi-core password system based on shared memory achieves better performance.

2019 ◽  
Vol 4 (2) ◽  
Author(s):  
Rozali Toyib ◽  
Ardi Wijaya

Abstract: Data stored on storage media is often lost or opened by unauthorized parties, to the great detriment of the data's owner, so the data needs to be secured so that it cannot be opened by irresponsible parties. RC5 and RC6 are symmetric block ciphers that accept a variable-length key; RC6 operates on blocks of exactly 128 bits. An RC6 password is one form of protection for a user securing data on a PC. Based on the test results, the following conclusions were drawn: in the trials of the RC5 algorithm, the execution time for key generation (key set-up) is very fast, about 9-10 ns; in the trials of the RC6 algorithm, key set-up takes about 10-11 ns. In the encryption and decryption processes, execution time depends on the size of the plaintext file: the larger the plaintext file, the longer the execution time.
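The key set-up step being timed above can be illustrated with a minimal sketch of the textbook RC5 key-expansion schedule (this follows Rivest's published description of RC5-32/12, not the authors' implementation; the word size, round count, and magic constants P and Q are the standard 32-bit values):

```python
def rc5_key_setup(key: bytes, rounds: int = 12, w: int = 32) -> list:
    """Expand a secret key into the round-subkey table S (RC5-32/12 layout)."""
    P, Q = 0xB7E15163, 0x9E3779B9       # magic constants for w = 32
    mask = (1 << w) - 1

    def rotl(x, s):
        s %= w
        return ((x << s) | (x >> (w - s))) & mask

    u = w // 8                          # bytes per word
    c = max(1, (len(key) + u - 1) // u)
    L = [0] * c                         # secret key packed into c words
    for i in range(len(key) - 1, -1, -1):
        L[i // u] = ((L[i // u] << 8) + key[i]) & mask

    t = 2 * (rounds + 1)                # 26 subkeys for 12 rounds
    S = [(P + i * Q) & mask for i in range(t)]

    A = B = i = j = 0
    for _ in range(3 * max(t, c)):      # three mixing passes over S and L
        A = S[i] = rotl((S[i] + A + B) & mask, 3)
        B = L[j] = rotl((L[j] + A + B) & mask, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S
```

Since the schedule is deterministic, timing it (as the abstract does) only requires calling `rc5_key_setup` repeatedly with the same key.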


Author(s):  
Keith M. Martin

In this chapter, we introduce public-key encryption. We first consider the motivation behind the concept of public-key cryptography and introduce the hard problems on which popular public-key encryption schemes are based. We then discuss two of the best-known public-key cryptosystems, RSA and ElGamal. For each of these public-key cryptosystems, we discuss how to set up key pairs and perform basic encryption and decryption. We also identify the basis for security for each of these cryptosystems. We then compare RSA, ElGamal, and elliptic-curve variants of ElGamal from the perspectives of performance and security. Finally, we look at how public-key encryption is used in practice, focusing on the popular use of hybrid encryption.
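The RSA key set-up, encryption, and decryption steps described in the chapter can be sketched with the classic textbook parameters (the primes below are toy values for illustration only and are far too small for real security):

```python
# Toy RSA parameters: the classic small-prime textbook example.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+): 2753

m = 65                    # message, must be smaller than n
c = pow(m, e, n)          # encryption: c = m^e mod n -> 2790
assert pow(c, d, n) == m  # decryption: m = c^d mod n recovers the message
```

The security basis mentioned in the chapter is visible here: recovering `d` from `(n, e)` alone requires factoring `n` into `p` and `q`, which is infeasible for moduli of realistic size.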


2021 ◽  
Vol 11 (15) ◽  
pp. 7169
Author(s):  
Mohamed Allouche ◽  
Tarek Frikha ◽  
Mihai Mitrea ◽  
Gérard Memmi ◽  
Faten Chaabane

To bridge the current gap between Blockchain expectations and their intensive computation constraints, the present paper advances a lightweight processing solution, based on a load-balancing architecture, compatible with lightweight/embedded processing paradigms. In this way, the execution of complex operations is securely delegated to an off-chain general-purpose computing machine, while the intimate Blockchain operations are kept on-chain. The illustrations correspond to an on-chain Tezos configuration and to a multiprocessor ARM embedded platform (integrated into a Raspberry Pi). Performance is assessed in terms of security, execution time, and CPU consumption when carrying out a visual document-fingerprint task. It is thus demonstrated that the advanced solution makes it possible for a computation-intensive application to be deployed under the severely constrained computation and memory resources of a Raspberry Pi 3. The experimental results show that up to nine Tezos nodes can be deployed on a single Raspberry Pi 3 and that the limitation derives not from memory but from computation resources. The execution time with a limited number of fingerprints is 40% higher than with a classical PC solution (relative error lower than 5% at 95% confidence).


Author(s):  
VINCENT ROBERGE ◽  
MOHAMMED TARBOUCHI ◽  
FRANÇOIS ALLAIRE

In this paper, we present a parallel hybrid metaheuristic that combines the strengths of particle swarm optimization (PSO) and the genetic algorithm (GA) to produce an improved path-planning algorithm for fixed-wing unmanned aerial vehicles (UAVs). The proposed solution uses a multi-objective cost function we developed and generates, in real time, feasible and quasi-optimal trajectories in complex 3D environments. Our parallel hybrid algorithm simulates multiple GA populations and PSO swarms in parallel while allowing migration of solutions between them. This collaboration between the GA and the PSO leads to an algorithm that exhibits the strengths of both optimization methods and produces superior solutions. Moreover, by using the "single-program, multiple-data" parallel programming paradigm, we maximize the use of today's multicore CPUs and significantly reduce the execution time of the parallel program compared to a sequential implementation. We observed a quasi-linear speedup of 10.7 on a 12-core shared-memory system, resulting in an execution time of 5 s, which allows in-flight planning. Finally, we show with statistical significance that our parallel hybrid algorithm produces trajectories superior to those of the parallel GA or the parallel PSO we previously developed.
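The GA/PSO collaboration with migration described above can be sketched in miniature: a toy GA population and PSO swarm optimize a sphere test function and periodically exchange their best solutions (all parameters here — population size, inertia weight, migration interval — are illustrative assumptions, not the authors' settings, and the cost function stands in for their multi-objective UAV cost):

```python
import random

random.seed(42)  # reproducible toy run

def fitness(x):  # sphere function: minimize the sum of squares
    return sum(v * v for v in x)

def ga_step(pop):
    # Minimal GA: keep two elites, blend-crossover two of the five best, mutate.
    pop.sort(key=fitness)
    new = pop[:2]
    while len(new) < len(pop):
        a, b = random.sample(pop[:5], 2)
        new.append([(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)])
    return new

def pso_step(swarm, vel, gbest):
    # Minimal PSO: inertia plus attraction toward the shared global best.
    for i, x in enumerate(swarm):
        for d in range(len(x)):
            vel[i][d] = 0.7 * vel[i][d] + 1.5 * random.random() * (gbest[d] - x[d])
            x[d] += vel[i][d]
    return swarm, vel

dim, size = 3, 20
ga_pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
swarm = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
vel = [[0.0] * dim for _ in range(size)]

for it in range(100):
    ga_pop = ga_step(ga_pop)
    gbest = min(swarm + ga_pop, key=fitness)[:]  # best across both populations
    swarm, vel = pso_step(swarm, vel, gbest)
    if it % 10 == 0:  # migration: each method receives the other's best solution
        ga_pop[-1] = min(swarm, key=fitness)[:]
        swarm[-1] = min(ga_pop, key=fitness)[:]

best = min(ga_pop + swarm, key=fitness)
```

The parallel version in the paper runs many such populations concurrently; the sketch only shows the migration mechanism that lets the two metaheuristics share progress.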


1974 ◽  
Vol 96 (1) ◽  
pp. 118-126 ◽  
Author(s):  
G. G. Hirs

Turbulent film flow theories can only be verified on the basis of a large number of experimental results. Since it will be useful to handle these experimental results more or less systematically and to get some idea of the amount of work yet to be done, the first objective of this paper is to set up a classification system for turbulent film flow experiments. The second objective is to verify the bulk flow theory on the basis of the limited number of experimental results available in the literature and to show this theory to be compatible with these results.


2014 ◽  
Vol 631-632 ◽  
pp. 1053-1056
Author(s):  
Hui Xia

This paper addresses the issue of limited resources for data optimization, with regard to efficiency, reliability, scalability, and security of data in distributed cluster systems with huge datasets. The experimental results showed that the MapReduce tool developed improved data optimization. The system exhibits poor speedup on smaller datasets, but reasonable speedup is achieved once the dataset is large enough to match the number of computing nodes, reducing execution time by 30% compared with conventional data mining and processing. The MapReduce tool handles data growth well, especially with larger numbers of computing nodes, and scaleup grows gracefully as the data volume and the number of computing nodes increase. Data security and reliability are supported at all computing nodes, since data is replicated at various nodes of the cluster system. Our MapReduce implementation runs on the distributed cluster computing environment of a national education web portal and is highly scalable.


2020 ◽  
Vol 8 (4) ◽  
pp. 475
Author(s):  
Maria Okta Safira ◽  
I Komang Ari Mogi

In this paper two methods are used: the Vigenère cipher, an example of a symmetric algorithm, and RSA, an example of an asymmetric algorithm. The combination of these two methods is called hybrid cryptography, which has an advantage in terms of speed during the encryption process. Each process, encryption and decryption, is carried out twice, so that security can be ensured. The RSA method is used to form the key pair. The encryption process uses the public key generated earlier during key formation; this public key is used when sending data to the recipient of a secret message, where it encrypts the data. The secret key is kept and used during the decryption process. A system architecture describes how the client and server communicate with each other over the internet using the TCP protocol, where the client is an IoT device.
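The hybrid pattern described above can be sketched as follows: a Vigenère session key encrypts the message (lowercase letters only, for brevity), while toy-parameter RSA protects that session key in transit. The key names and parameters are illustrative assumptions; a real deployment would use a far larger RSA modulus and proper padding:

```python
def vigenere_encrypt(plain: str, key: str) -> str:
    # Classic letter-shift cipher; each key letter gives a per-position shift.
    out = []
    for i, ch in enumerate(plain):
        k = ord(key[i % len(key)]) - ord('a')
        out.append(chr((ord(ch) - ord('a') + k) % 26 + ord('a')))
    return ''.join(out)

def vigenere_decrypt(cipher: str, key: str) -> str:
    # Decrypt by encrypting with the inverse key (shift 26 - k).
    inv = ''.join(chr((26 - (ord(c) - ord('a'))) % 26 + ord('a')) for c in key)
    return vigenere_encrypt(cipher, inv)

# Toy RSA protects the Vigenère session key in transit (tiny primes, demo only).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                                  # private exponent (Python 3.8+)

session_key = "lemon"
enc_key = [pow(ord(ch), e, n) for ch in session_key]        # sent with the ciphertext
ciphertext = vigenere_encrypt("attackatdawn", session_key)  # -> "lxfopvefrnhr"

# Receiver: recover the session key with the private exponent, then decrypt.
recovered = ''.join(chr(pow(v, d, n)) for v in enc_key)
plaintext = vigenere_decrypt(ciphertext, recovered)         # -> "attackatdawn"
```

This illustrates the speed advantage claimed above: the slow asymmetric operation touches only the short session key, while the bulk data goes through the fast symmetric cipher.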


Author(s):  
Nurcin Celik ◽  
Esfandyar Mazhari ◽  
John Canby ◽  
Omid Kazemi ◽  
Parag Sarfare ◽  
...  

Simulating large-scale systems usually entails exhaustive computational power and lengthy execution times. The goal of this research is to reduce the execution time of large-scale simulations, without sacrificing their accuracy, by automatically partitioning a monolithic model into multiple pieces and executing them in a distributed computing environment. While this partitioning allows us to distribute the required computational power across multiple computers, it creates a new challenge of synchronizing the partitioned models. In this article, a partitioning methodology based on a modified Prim's algorithm is proposed to minimize the overall simulation execution time, considering 1) internal computation in each of the partitioned models and 2) time synchronization between them. In addition, the authors seek to find the most advantageous number of partitioned models from the monolithic model by evaluating the tradeoff between reduced computation and increased time-synchronization requirements. Epoch-based synchronization is employed to synchronize the logical times of the partitioned simulations, where an appropriate time interval is determined from off-line simulation analyses. A computational grid framework is employed for execution of the simulations partitioned by the proposed methodology. The experimental results reveal that the proposed approach reduces simulation execution time significantly while maintaining accuracy, as compared with the monolithic simulation execution approach.


Author(s):  
Joaquín Pérez Ortega ◽  
Nelva Nely Almanza Ortega ◽  
Andrea Vega Villalobos ◽  
Marco A. Aguirre L. ◽  
Crispín Zavala Díaz ◽  
...  

In recent years, the amount of natural-language text in digital format has increased impressively. To obtain useful information from a large volume of data, new specialized techniques and efficient algorithms are required. Text mining consists of extracting meaningful patterns from texts; one of its basic approaches is clustering, and the most widely used clustering algorithm is k-means. This chapter proposes an improvement of the k-means algorithm in the convergence step: the process stops whenever the number of objects that change their assigned cluster in the current iteration is bigger than the number that changed in the previous iteration. Experimental results showed a reduction in execution time of up to 93%. Remarkably, better results are in general obtained as the volume of text increases, particularly for texts in big-data environments.
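The modified convergence step can be sketched as follows: a minimal k-means where the loop stops as soon as the number of reassigned objects grows relative to the previous iteration, or when nothing changes at all. The data points and fixed initial centroids are illustrative assumptions, not the chapter's text-mining features:

```python
import random

def kmeans_early_stop(points, k, init=None, max_iter=100):
    # Convergence rule described above: stop when the number of objects that
    # change cluster this iteration exceeds the number that changed in the
    # previous iteration (or when nothing changes at all).
    centroids = list(init) if init else random.sample(points, k)
    assign = [None] * len(points)
    prev_changes = float('inf')
    for _ in range(max_iter):
        changes = 0
        for i, p in enumerate(points):
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            if nearest != assign[i]:
                assign[i] = nearest
                changes += 1
        if changes == 0 or changes > prev_changes:
            break
        prev_changes = changes
        for j in range(k):  # recompute each centroid as the mean of its members
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centroids[j] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return assign, centroids

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (10.0, 10.0), (10.1, 9.9), (9.9, 10.2)]
assign, centroids = kmeans_early_stop(pts, 2, init=[(0.0, 0.0), (10.0, 10.0)])
# assign -> [0, 0, 0, 1, 1, 1]
```

The saving comes from skipping late iterations in which reassignments no longer decrease, which is where the reported reduction in execution time would originate.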

