Text Encryption based on Huffman Coding and ElGamal Cryptosystem

2020 ◽  
Vol 14 ◽  
Author(s):  
Khoirom Motilal Singh ◽  
Laiphrakpam Dolendro Singh ◽  
Themrichon Tuithung

Background: Data in the form of text, audio, image and video are used everywhere in our modern scientific world. These data are stored in physical storage, cloud storage and other storage devices. Some of these data are very sensitive and require efficient security during storage as well as during transmission from the sender to the receiver. Objective: With the increase in data transfer operations, more space is also required to store these data. Many researchers have been working to develop different encryption schemes, yet many limitations remain in their works. There is always a need for encryption schemes with smaller cipher data, faster execution time and low computation cost. Methods: A text encryption based on Huffman coding and the ElGamal cryptosystem is proposed. Initially, the text data are converted to their corresponding binary bits using Huffman coding. Next, the binary bits are grouped and converted into large integer values, which are used as the input to the ElGamal cryptosystem. Results: Encryption and decryption are performed successfully, where the data size is reduced by Huffman coding and enhanced security with a smaller key size is provided by the ElGamal cryptosystem. Conclusion: Simulation results and performance analysis show that our encryption algorithm is better than the existing algorithms under consideration.
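
As an illustration of the pipeline described in the Methods section, the toy sketch below groups a Huffman-coded bit string into one large integer and encrypts it with textbook ElGamal. The bit string, prime, base and key values are illustrative placeholders, not parameters from the paper.

```python
import random

# Stand-in for the Huffman-coded bits of some text; in the proposed scheme
# these would come from the Huffman encoder.
huffman_bits = "1011001110001011010011"
m = int(huffman_bits, 2)                  # group the bits into one integer plaintext

# Toy ElGamal parameters: a prime p larger than the plaintext integer and a base g.
p = 2**61 - 1                             # a Mersenne prime, used here for illustration
g = 3
x = random.randrange(2, p - 1)            # private key
h = pow(g, x, p)                          # public key component

# Encryption: pick an ephemeral k and output the ciphertext pair (c1, c2).
k = random.randrange(2, p - 1)
c1 = pow(g, k, p)
c2 = (m * pow(h, k, p)) % p

# Decryption: recover m = c2 / c1^x mod p, then restore the bit string
# (the original bit length must be known so leading zeros can be re-padded).
s = pow(c1, x, p)
m_rec = (c2 * pow(s, -1, p)) % p
assert format(m_rec, "0{}b".format(len(huffman_bits))) == huffman_bits
```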

2014 ◽  
Vol 22 (2) ◽  
pp. 173-185 ◽  
Author(s):  
Eli Dart ◽  
Lauren Rotman ◽  
Brian Tierney ◽  
Mary Hester ◽  
Jason Zurawski

The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, which together create an optimized network environment for science. We describe use cases from universities, supercomputing centers and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.


2021 ◽  
pp. 2141001
Author(s):  
Sanqiang Wei ◽  
Hongxia Hou ◽  
Hua Sun ◽  
Wei Li ◽  
Wenxia Song

The plots in certain literary works are very complicated and hinder readers from understanding them. Tools should therefore be proposed that support readers' comprehension of complex literary works by providing the most important information to readers. A human reader must capture multiple levels of abstraction and meaning to formulate an understanding of a document. Hence, in this paper, an Improved K-means Clustering Algorithm (IKCA) is proposed for literary word classification. For text data, the words that can express exact semantics within a class are generally better features. The proposed technique captures numerous cluster centroids for every class and then selects the high-frequency words in the centroids as the text features for classification. Furthermore, neural networks are used to classify the text documents and K-means is used to cluster them, so that the model combines unsupervised and supervised techniques to identify the similarity between documents. The numerical results show that the suggested model improves on the existing algorithm and the standard K-means algorithm, with an accuracy of 95.2% in the comparison of ALA and IKCA, a clustering time of less than 2 hours, a success rate of 97.4% and a performance ratio of 98.1%.
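
The simplified sketch below, assuming scikit-learn, illustrates the per-class, centroid-based feature selection idea described above using plain k-means; it is only a stand-in for the paper's IKCA, and the class labels, documents and cluster counts are illustrative.

```python
# Plain k-means stand-in for IKCA: cluster each class's documents on TF-IDF
# vectors, then keep the highest-weighted terms of each centroid as features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs_by_class = {
    "plot":      ["the hero leaves home", "a journey begins at dawn"],
    "character": ["the old sailor speaks slowly", "a stubborn child answers back"],
}

vectorizer = TfidfVectorizer()
vectorizer.fit([d for docs in docs_by_class.values() for d in docs])
terms = vectorizer.get_feature_names_out()

features = {}
for label, docs in docs_by_class.items():
    X = vectorizer.transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    top = set()
    for centroid in km.cluster_centers_:
        # the highest-weighted terms of each centroid become candidate features
        top.update(terms[i] for i in centroid.argsort()[-3:])
    features[label] = sorted(top)

print(features)   # per-class candidate feature words for the later classifier
```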


2021 ◽  
Vol 24 (3) ◽  
pp. 68-77
Author(s):  
T.G. Chikurov ◽  
M.V. Kibardin ◽  
S.L. Shirokih

A solution is given to the problem of insufficient voltage for fully turning on the MOSFETs used as switches in the shunt circuits of the active balancing cells of ionistor (supercapacitor) storage devices. In particular, a revision of the widespread two-pole circuit of the active balancing cell of the ionistor, consisting of a comparison circuit and a shunt circuit with a MOSFET switch, is presented. The relevance of the problem is confirmed by an analysis of the characteristics of the switching MOSFETs at an unlocking voltage of 2.5...2.7 V from the output of the comparison circuit. It is shown that this voltage is not sufficient to provide the channel resistance of a fully open transistor and to carry the specified shunt currents over the entire range of external influencing factors (VVF), especially at reduced temperatures from plus 15 to minus 60 °C. The solution presented in the paper is to introduce voltage boost circuits between the comparison node and the shunt circuit. Their use makes it possible to increase the voltage at the gate of the switching MOSFET by a factor of two, three, four and so on, which ensures reliable operation of the shunt circuit switch at different shunt currents. A special feature of the developed cell circuits is the three-pole connection, in which an additional output is connected to the adjacent ionistor cell. This way of connecting the developed active balancing cells doubles the unlocking voltage at the gate and is sufficient for reliably turning on the MOSFET switch at all shunt currents when the charging voltage of the ionistors in the storage device is 2.5...2.7 V. For shunt currents on the order of tens of amperes, it is shown that it is necessary to switch to a quasi-four-pole connection of the developed active balancing cell by separating the supply (measuring) circuits of the comparison circuit from the power buses of the level-up circuit and the shunt circuit. Connection methods for the developed cells that multiply the unlocking voltage at the gate of the switching MOSFET by a factor of three, four or more are also shown, together with the corresponding schemes and the criteria for when such a connection is necessary. Practical testing of the developed three-pole and quasi-four-pole active balancing cells, carried out on the ionistor energy storage unit (NEE) of JSC “Elecond”, showed satisfactory stability and performance under the influence of the entire set of VVF.
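
As a rough numerical illustration of the gate-voltage doubling provided by the three-pole connection (the cell voltage figures are taken from the abstract; treating the result as comfortably above a typical MOSFET full-enhancement voltage is an added assumption):

```latex
V_{\mathrm{GS}} = n \cdot V_{\mathrm{cell}}, \qquad
n = 2,\; V_{\mathrm{cell}} = 2.5\ldots2.7\ \mathrm{V}
\;\Rightarrow\; V_{\mathrm{GS}} = 5.0\ldots5.4\ \mathrm{V}
```

i.e. roughly twice the 2.5...2.7 V available directly from the comparison circuit, which is what allows the MOSFET channel to open fully over the whole range of external influencing factors.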


Compiler ◽  
2018 ◽  
Vol 7 (2) ◽  
pp. 85
Author(s):  
Sudaryanto Sudaryanto

The need for efficient, stable, fast and reliable network access is influenced by network quality. One of the factors influencing network quality is the management of network devices, such as the LAN card, cable, switch, router, Wi-Fi access point and computer system. In this study, the researchers focus on the influence of multilayer switch network devices on data transfer speeds in computer networks. The data transfer speed at layer 2 for text, image and video data is 0.85% faster than the transfer speed at layer 3 for text, image and video data. Keywords: Network, Multilayer Switch, Data Transfer, OSI Layer


Author(s):  
Pratiksha Bongale

Today’s world is mostly data-driven. To deal with the humongous amount of data, Machine Learning and Data Mining strategies are put into use. Traditional ML approaches presume that the model is tested on a dataset extracted from the same domain from which the training data has been taken. Nevertheless, some real-world situations require machines to provide good results with very little domain-specific training data. This creates room for the development of machines that are capable of predicting accurately by being trained on easily found data. Transfer Learning is the key to this. It is the scientific art of applying the knowledge gained while learning one task to another task that is similar to the previous one in some way. This article focuses on building a model that is capable of separating text data into two classes, one covering text data that is spam and the other containing non-spam text, using BERT’s pre-trained model (bert-base-uncased). This pre-trained model has been trained on Wikipedia and Book Corpus data, and the goal of this paper is to highlight the pre-trained model’s ability to transfer the knowledge it has learned from its training (Wiki and Book Corpus) to classifying spam texts from the rest.
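
A minimal sketch of the fine-tuning workflow the article describes, assuming the Hugging Face transformers and PyTorch libraries; the example texts, labels and hyperparameters are illustrative and not taken from the article.

```python
# Fine-tune bert-base-uncased for binary spam classification (toy data).
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["WIN a FREE prize, click now!!!", "Are we still meeting for lunch tomorrow?"]
labels = torch.tensor([1, 0])               # 1 = spam, 0 = not spam

batch = tokenizer(texts, padding=True, truncation=True, max_length=64, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                           # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    out = model(**batch, labels=labels)      # reuses the pre-trained weights (transfer learning)
    out.loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())                        # predicted classes for the two texts
```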


2011 ◽  
Vol 2 (1) ◽  
Author(s):  
Victor Amrizal

The process of minimizing a file is performed by compressing the file. Text compression aims to reduce the repetition of the symbols or characters that make up the text by encoding those symbols or characters so that the required storage space can be reduced and the data transfer time can be made faster. Text compression can be performed by encoding segments of the original text, which are then placed in a lexicon. Compression can be carried out with various algorithms, among them Huffman coding, which is a compression technique that uses the frequency distribution of symbols to form unique codes. The frequency distribution of a symbol determines the length of its Huffman code: the more frequently the symbol appears in the text, the shorter the resulting Huffman code. This method encodes symbols or characters with the help of a binary tree, built by merging the two characters with the smallest frequencies of occurrence until the code tree is formed.   Keywords: Huffman coding, data compression, algorithm.
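
A minimal sketch of the classic Huffman construction summarized above (not the paper's implementation): the two least frequent symbols are merged repeatedly until one code tree remains, after which the codes are read off the tree.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(text):
    freq = Counter(text)
    tiebreak = count()                      # keeps heap entries comparable
    # each heap entry: (frequency, tiebreak, subtree); a leaf is just a symbol
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two smallest frequencies...
        f2, _, right = heapq.heappop(heap)  # ...are merged into one subtree
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: descend left/right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the accumulated code
            codes[node] = prefix or "0"     # single-symbol edge case
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
print(codes)                                # frequent symbols get the shorter codes
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```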


Cloud Computing enables users to use remote resources and thus reduces the burden on local storage. However, the use of such services gives rise to a new set of problems. The users have no control over the data they have stored on those storages, so achieving data authentication with confidentiality is of utmost importance. As every user may not have that expertise, they can delegate the data verification task to a Trusted Verifier (TV), an authorized party that checks the intactness of outsourced data. Since the data owner stores the data on the cloud in an encrypted format, it becomes difficult to check the integrity of the data without decrypting it. But by using homomorphic encryption schemes, the integrity check can be made possible without the original copy. In this paper, we give implementation and performance details of two homomorphic encryption schemes, Rivest Shamir Adleman (RSA) and Paillier. RSA is a multiplicatively homomorphic scheme, whereas Paillier is an additively homomorphic scheme. Both algorithms are partially homomorphic and thus limited in their operations. Due to the homomorphic property of these algorithms, the original contents are not revealed in the verification process. This framework achieves authentication of data while maintaining confidentiality.
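
The toy sketch below, with deliberately tiny parameters and no padding, illustrates the two homomorphic properties the abstract refers to: textbook RSA is multiplicatively homomorphic and Paillier is additively homomorphic. The parameter sizes and helper names are illustrative, not the paper's implementation.

```python
from math import gcd
import random

# --- textbook RSA: the product of two ciphertexts decrypts to the product
# --- of the plaintexts (multiplicative homomorphism)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                                    # public exponent, coprime to phi
d = pow(e, -1, phi)                       # private exponent

def rsa_enc(m): return pow(m, e, n)
def rsa_dec(c): return pow(c, d, n)

m1, m2 = 7, 11
assert rsa_dec(rsa_enc(m1) * rsa_enc(m2) % n) == (m1 * m2) % n

# --- Paillier with the common g = N + 1 choice: the product of two
# --- ciphertexts decrypts to the sum of the plaintexts (additive homomorphism)
P, Q = 293, 433
N, N2 = P * Q, (P * Q) ** 2
lam = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)    # lcm(P - 1, Q - 1)
g = N + 1
mu = pow(lam, -1, N)                      # valid because L(g^lam mod N^2) = lam mod N

def L(u): return (u - 1) // N

def paillier_enc(m):
    r = random.randrange(2, N)
    while gcd(r, N) != 1:                 # random blinding factor coprime to N
        r = random.randrange(2, N)
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

def paillier_dec(c): return (L(pow(c, lam, N2)) * mu) % N

a, b = 123, 456
assert paillier_dec(paillier_enc(a) * paillier_enc(b) % N2) == (a + b) % N
print("multiplicative (RSA) and additive (Paillier) homomorphism verified")
```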

