Poly Encrypted Text Based on Dynamic Selection and Chaotic Behavior

Author(s):  
Amaria Wael ◽  
Seddik Hassene ◽  
Bouslehi Hamdi

Current cryptography suffers from the rise in the computing power of classical computers, and the advent of quantum computers could be the death knell of these algorithms. In this paper, we therefore present a new encryption approach based on chaotic outputs to ensure stronger protection. The approach combines two encryption techniques in addition to a random permutation: the first puts the binary data in disorder, and the second is based on a conditional logical function. The choice between these two techniques is entirely random and is generated from chaotic outputs. Each process has its own keys, which makes the encryption harder to break.
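The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the general idea, under my own assumptions: a logistic-map chaotic sequence drives the per-block choice between a bit-permutation step and a conditional logical (XOR-like) step, with a separate key for each technique. None of these names or parameters come from the paper.

import numpy as np

def logistic_stream(x0, r=3.99, n=1024):
    """Chaotic sequence from the logistic map x -> r*x*(1-x) (illustrative generator)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt_block(bits, chaos_value, perm_key, xor_key):
    """Encrypt one block of bits, the technique being selected by the chaotic value."""
    if chaos_value < 0.5:
        # Technique 1: put the binary data in disorder (bit permutation).
        return bits[perm_key]
    # Technique 2: a simple conditional logical function (equivalent to XOR here).
    return np.where(bits == xor_key, 0, 1)

# Toy usage: 8 blocks of 16 bits, with an independent key for each technique.
rng = np.random.default_rng(0)
blocks = rng.integers(0, 2, size=(8, 16))
chaos = logistic_stream(x0=0.3731, n=8)
perm_key = rng.permutation(16)          # key for the permutation technique
xor_key = rng.integers(0, 2, size=16)   # key for the logical technique
cipher = np.array([encrypt_block(b, c, perm_key, xor_key)
                   for b, c in zip(blocks, chaos)])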

2020 ◽  
Vol 9 (01) ◽  
pp. 24919-24920
Author(s):  
Viplove Divyasheesh ◽  
Rakesh Jain

Quantum computers consist of a quantum processor – sets of quantum bits, or qubits, operating at an extremely low temperature – and a classical electronic controller to read out and control the processor. The machines exploit the unusual properties of matter at extremely small scales: a qubit can represent "1" and "0" at the same time, a phenomenon known as superposition. (In traditional digital computing, a transistor in a silicon chip exists in one of two states, represented in binary by a 1 or a 0, not both.) Under the right conditions, computations carried out with qubits are equivalent to numerous classical computations performed in parallel, greatly enhancing computing power compared with today's most powerful supercomputers and enabling the solution of complex problems without the sort of experiments otherwise necessary to generate quantum phenomena. This technology is unstable, however, and must be kept in a very cold environment for faster and more reliable operation. In this paper, we discuss the possibility of integrating quantum computers with electronics at deep cryogenic temperatures.
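As a minimal numerical illustration of superposition (my own example, not drawn from the paper): the state of a single qubit can be written as a two-component complex vector, and a Hadamard gate maps the basis state |0> into an equal superposition of |0> and |1>.

import numpy as np

# Basis states |0> and |1> as complex vectors.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# Hadamard gate: creates an equal superposition from a basis state.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0              # (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2    # measurement probabilities
print(psi, probs)           # amplitudes ~0.707, probabilities 0.5 and 0.5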


2021 ◽  
pp. 2150343
Author(s):  
Xiao-Jun Wen ◽  
Yong-Zhi Chen ◽  
Xin-Can Fan ◽  
Zheng-Zhong Yi ◽  
Zoe L. Jiang ◽  
...  

Blockchain technology, represented by Bitcoin and Ethereum, has been deeply developed and widely used owing to its broad application prospects, such as digital currency and the IoT. However, the security of existing blockchain technologies built on classical cryptography depends on computational complexity assumptions. With the enhancement of attackers' computing power, and especially the upcoming quantum computers, this kind of security is seriously threatened. Based on a quantum hash, the quantum SWAP test and quantum teleportation, a quantum blockchain system with quantum secure communication is proposed. In the sense of classical cryptographic theory, the security of this system is unconditional, since it does not depend on the attackers' computing power or computing resources.
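The paper's construction is not reproduced here, but the quantum SWAP test it relies on has a simple description: for two pure states |a> and |b>, the test accepts with probability 1/2 + |<a|b>|^2/2, so repeated runs estimate how similar the states are. A small sketch of that probability (my own illustration, not the authors' code):

import numpy as np

def swap_test_accept_probability(a, b):
    """Probability that a SWAP test on pure states |a>, |b> reports 'equal' (outcome 0)."""
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    overlap = np.vdot(a, b)                  # <a|b>
    return 0.5 + 0.5 * np.abs(overlap) ** 2

# Identical states give probability 1.0; orthogonal states give 0.5.
print(swap_test_accept_probability([1, 0], [1, 0]))   # 1.0
print(swap_test_accept_probability([1, 0], [0, 1]))   # 0.5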


2019 ◽  
Vol 10 (4) ◽  
pp. 17
Author(s):  
Tae L. Aderman

Quantum computers leverage the remarkable and dynamic properties of quantum physics. In doing so, they are able to solve mathematical problems that, as of now, cannot be solved using conventional computers. Realizing the potential that quantum computers represent, banking institutions are beginning to analyze and apply them, particularly to increase the efficiency and speed of complex transactions. At the same time, banking institutions must carefully examine quantum computers' ability to bolster cybersecurity defenses. In the age of quantum computers, existing defenses will prove inadequate, even lattice-based cryptography. At the dawn of the quantum age, banking institutions are in a unique position to leverage quantum computers' vast computing power not only for completing complex transactions but also for countering quantum-era cybersecurity threats.


2021 ◽  
Vol 16 (93) ◽  
pp. 120-133
Author(s):  
Aleksei A. Gavrishev ◽  
Vladimir A. Burmistrov

In this paper, we evaluate the cryptographic strength of known cryptographic methods, and of methods based on the use of noise-like signals (similar in properties to "limited" white noise and used to spread the spectrum of transmitted messages), against the destructive effect of "viewing transmitted data" (deciphering) based on an exhaustive search of code structures (brute force) carried out by quantum computers. It is established that, taking into account the constantly improving and developing computing power of quantum computers, the required number of code structures (key space) for the next few years should be taken as 10^32 or higher, which provides cryptographic strength for a minimum of 3 years. It is shown that Grover's algorithm is analogous to the destructive effect of "viewing transmitted data" (deciphering) based on a complete search of all code structures (brute force) using modern supercomputers. It is established that well-known symmetric cryptographic methods can potentially be used in the post-quantum era, whereas methods based on noise-like signals, once they are detected and the methods underlying them are known (even without knowledge of the key), cannot be applied in the post-quantum era. In the authors' view, a promising approach to information security in the post-quantum era is the use of chaotic signals.
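As a back-of-the-envelope illustration of the Grover comparison above (my own numbers, not the authors'): Grover's algorithm finds a key among N candidates in roughly sqrt(N) evaluations, so a key space of 10^32 behaves, against a quantum attacker, like a classical key space of about 10^16.

import math

def brute_force_years(key_space, ops_per_second, quantum=False):
    """Rough time to exhaust a key space, classically or with Grover's sqrt speedup."""
    evaluations = math.sqrt(key_space) if quantum else key_space
    seconds = evaluations / ops_per_second
    return seconds / (3600 * 24 * 365)

KEY_SPACE = 10 ** 32      # key space discussed in the paper
RATE = 10 ** 12           # assumed 10^12 key tests per second (purely illustrative)

print(f"classical search: {brute_force_years(KEY_SPACE, RATE):.3g} years")
print(f"Grover search   : {brute_force_years(KEY_SPACE, RATE, quantum=True):.3g} years")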


2015 ◽  
Vol 7 (2) ◽  
pp. 216-238
Author(s):  
Richárd Forster ◽  
Ágnes Fülöp

Abstract Yang-Mills fields play an important role in the strong interaction, which describes the quark-gluon plasma. Non-Abelian gauge theory provides the theoretical background for understanding this topic. The real-time evolution of the classical fields is derived from the Hamiltonian for the SU(2) gauge field tensor. The microcanonical equations of motion are solved on a 3-dimensional lattice, and chaotic dynamics is investigated via the monodromy matrix. The entropy-energy relation is presented through the Kolmogorov-Sinai entropy. We used block Hessenberg reduction to compute the eigenvalues of the current matrix. While a purely CPU-based algorithm can handle only a small number of values effectively, GPUs provide enough performance to bring more computing power to bear on the problem.
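A minimal CPU-side sketch of the eigenvalue step described above, using a standard Hessenberg reduction from SciPy rather than the authors' block/GPU implementation (which is not reproduced here); the matrix is a random stand-in for the monodromy matrix.

import numpy as np
from scipy.linalg import hessenberg

# Stand-in matrix; in the paper it would come from the linearized SU(2) lattice dynamics.
rng = np.random.default_rng(42)
M = rng.standard_normal((256, 256))

# Reduce to upper Hessenberg form (which preserves the spectrum), then diagonalize.
Hm, Q = hessenberg(M, calc_q=True)     # M = Q @ Hm @ Q.T
eigvals = np.linalg.eigvals(Hm)

print("largest |eigenvalue|:", np.max(np.abs(eigvals)))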


Author(s):  
S.J.B. Reed

Characteristic fluorescence. The theory of characteristic fluorescence corrections was first developed by Castaing. The same approach, with an improved expression for the relative primary x-ray intensities of the exciting and excited elements, was used by Reed, who also introduced some simplifications, which may be summarized as follows (with reference to K-K fluorescence, i.e. K radiation of element 'B' exciting K radiation of 'A'):
1. The exciting radiation is assumed to be monochromatic, consisting of the Kα line only (neglecting the Kβ line).
2. Various parameters are lumped together in a single tabulated function J(A), which is assumed to be independent of B.
3. For calculating the absorption of the emerging fluorescent radiation, the depth distribution of the primary radiation B is represented by a simple exponential.
These approximations may no longer be justifiable given the much greater computing power now available. For example, the contribution of the Kβ line can easily be calculated separately.
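To illustrate simplification 3 in isolation (my own sketch, not Reed's published correction formula): if the depth distribution of the fluorescence-generating primary B radiation is taken as a simple exponential exp(-sigma*rho*z), and the emerging A fluorescence is absorbed with an effective coefficient chi, the escaping fraction reduces to sigma/(sigma + chi), which the short numerical check below reproduces.

import numpy as np
from scipy.integrate import quad

def absorption_factor(sigma, chi):
    """Fraction of fluorescent radiation escaping, assuming an exponential
    depth distribution exp(-sigma*t) and absorption exp(-chi*t) along depth t."""
    generated, _ = quad(lambda t: np.exp(-sigma * t), 0.0, np.inf)
    emerging, _ = quad(lambda t: np.exp(-(sigma + chi) * t), 0.0, np.inf)
    return emerging / generated          # analytically: sigma / (sigma + chi)

# Illustrative (made-up) coefficients in mass-depth units.
sigma, chi = 4500.0, 1200.0
print(absorption_factor(sigma, chi), sigma / (sigma + chi))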


Author(s):  
Stuart McKernan

For many years the concept of quantitative diffraction contrast experiments might have consisted of the determination of dislocation Burgers vectors using a g·b = 0 criterion from several different 2-beam images. Since the advent of the personal computer revolution, the computing power available for image-processing and image-simulation calculations has become enormous and ubiquitous. Several programs now exist to perform simulations of diffraction contrast images using various approximations. The most common approximations are the use of only 2 beams or a single systematic row to calculate the image contrast, or the use of a column approximation. The growing body of literature comparing experimental and simulated images shows that very close agreement between the two can be obtained, provided the choice of parameters and the assumptions made in performing the calculation are properly dealt with. The simulation of images of defects in materials has therefore, in many cases, become a tractable problem.
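As an illustration of the two-beam, column-approximation calculation mentioned above (a generic Howie-Whelan integration for a single perfect column, not taken from any of the programs cited; extinction distance and excitation error are made-up values):

import numpy as np
from scipy.integrate import solve_ivp

XI_G = 30.0   # extinction distance in nm (illustrative)
S = 0.02      # excitation error in 1/nm (illustrative)

def howie_whelan(z, phi):
    """Two-beam Howie-Whelan equations, no absorption, perfect (defect-free) column."""
    phi0, phig = phi
    dphi0 = 1j * np.pi / XI_G * phig
    dphig = 1j * np.pi / XI_G * phi0 + 2j * np.pi * S * phig
    return [dphi0, dphig]

thickness = 150.0   # column thickness in nm
sol = solve_ivp(howie_whelan, (0.0, thickness), [1.0 + 0j, 0.0 + 0j], rtol=1e-8)

phi0_t, phig_t = sol.y[:, -1]
print("bright-field intensity:", abs(phi0_t) ** 2)
print("dark-field intensity  :", abs(phig_t) ** 2)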


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term future is full of suggestions entailing an even greater need for computing power. Up to now all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should therefore be considered for the most computing-intensive tasks.

We have studied both shared-memory-based computer architectures, like the BBN Butterfly, and local-memory-based architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens ("single particles") using the so-called Random Conical Tilt Series Method. We start from a pair of images of the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From there on, the 3D reconstruction process may be run automatically.
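The step of deriving "a matrix describing the geometrical relationships between the tilted and untilted fields" from a few interactively marked point pairs can be sketched as a least-squares fit of an affine transform (my own illustration; the actual software may parameterize this step differently):

import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Least-squares affine transform mapping untilted (x, y) to tilted (x', y')."""
    untilted_pts = np.asarray(untilted_pts, dtype=float)
    tilted_pts = np.asarray(tilted_pts, dtype=float)
    ones = np.ones((len(untilted_pts), 1))
    A = np.hstack([untilted_pts, ones])                 # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, tilted_pts, rcond=None)
    return coeffs                                       # 3x2 matrix (linear part + offset)

# A few hypothetical corresponding features marked in the two fields.
untilted = [(10, 12), (85, 40), (33, 90), (70, 75)]
tilted = [(6, 13), (49, 42), (20, 91), (41, 77)]
M = fit_affine(untilted, tilted)

# Map any untilted coordinate into the tilted field.
print(np.array([50.0, 50.0, 1.0]) @ M)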


2016 ◽  
Vol 32 (2) ◽  
pp. 111-118 ◽  
Author(s):  
Marianna Szabó ◽  
Veronika Mészáros ◽  
Judit Sallay ◽  
Gyöngyi Ajtay ◽  
Viktor Boross ◽  
...  

Abstract. The aim of the present study was to examine the construct and cross-cultural validity of the Beck Hopelessness Scale (BHS; Beck, Weissman, Lester, & Trexler, 1974). Beck et al. applied exploratory Principal Components Analysis and argued that the scale measured three specific components (affective, motivational, and cognitive). Subsequent studies identified one, two, three, or more factors, highlighting a lack of clarity regarding the scale’s construct validity. In a large clinical sample, we tested the original three-factor model and explored alternative models using both confirmatory and exploratory factor analytical techniques appropriate for analyzing binary data. In doing so, we investigated whether method variance needs to be taken into account in understanding the structure of the BHS. Our findings supported a bifactor model that explicitly included method effects. We concluded that the BHS measures a single underlying construct of hopelessness, and that an incorporation of method effects consolidates previous findings where positively and negatively worded items loaded on separate factors. Our study further contributes to establishing the cross-cultural validity of this instrument by showing that BHS scores differentiate between depressed, anxious, and nonclinical groups in a Hungarian population.
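The authors' confirmatory and bifactor models for binary items cannot be reconstructed from the abstract; purely as a rough, hypothetical sketch of the exploratory step, one could run a three-factor exploratory factor analysis on item-level data with the factor_analyzer package, keeping in mind that treating binary items as continuous is only an approximation to the binary-appropriate estimation the authors describe.

import numpy as np
from factor_analyzer import FactorAnalyzer

# Hypothetical binary item responses: 500 respondents x 20 BHS-style items.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20)).astype(float)

# Exploratory three-factor solution with an oblique rotation.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(X)
print(fa.loadings_)                # item-by-factor loading matrix
print(fa.get_factor_variance())    # variance explained per factor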

