Zooming into chaos as a pathway for the creation of a fast, light and reliable cryptosystem

Author(s):  
Jeaneth Machicao ◽  
Odemir M. Bruno ◽  
Murilo S. Baptista

Abstract: Motivated by today's huge volume of data that needs to be handled in secrecy, there is a wish to develop cryptosystems that are not only fast and light but also reliably secure. Chaos allows for the creation of pseudo-random numbers (PRNs) by low-dimensional transformations that need to be applied only a small number of times. These two properties may translate into a chaos-based cryptosystem that is both fast (short running time) and light (little computational effort). What we propose here is an approach to generate PRNs, and consequently digital secret keys, that can serve as a seed for an enhanced chaos-based cryptosystem. We use low-dimensional chaotic maps to quickly generate PRNs that have little correlation, and then we quickly ("fast") enhance secrecy by several orders of magnitude ("reliability") at very little computational cost ("light") by simply looking at the less significant digits of the initial chaotic trajectory. This paper demonstrates this idea with rigor, showing that a transformation applied a small number of times to chaotic trajectories significantly increases their entropy and Lyapunov exponents, as a consequence of the smoothing out of the probability density towards a uniform distribution.
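A minimal sketch of the core idea, assuming the logistic map as the low-dimensional chaotic map (the paper's exact map, parameters and digit count are not specified here, so x0, r and k below are illustrative choices): iterate the map and keep only the less significant decimal digits of each trajectory point, which pushes the sample density towards uniform.

```python
import numpy as np

def logistic_prn(x0=0.37, r=3.99, n=10_000, k=6):
    """Generate PRNs from a logistic-map trajectory by discarding the
    k most significant decimal digits of each point (a sketch of the
    'zooming' transformation; x0, r and k are illustrative)."""
    x = x0
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)        # low-dimensional chaotic map
        out[i] = (x * 10**k) % 1.0   # keep only the less significant digits
    return out

prns = logistic_prn()
print(prns[:5])   # approximately uniform on [0, 1)
```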

2021 ◽  
Vol 39 (6) ◽  
pp. 9-22
Author(s):  
Rabah Bououden ◽  
Mohamed Salah Abdelouahab

Chaos optimization algorithms (COAs) usually utilize different chaotic maps (logistic, tent, Hénon, Lozi, ...) to generate the pseudo-random numbers mapped as the design variables for global optimization. In this paper we propose a new technique to improve the chaotic optimization algorithm by using transformations to modify the density of the map, instead of changing the map itself.
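As an illustration of a density-modifying transformation, here is a hedged sketch: for the fully chaotic logistic map (r = 4), the conjugacy y = (2/pi) * arcsin(sqrt(x)) is known to yield a uniform invariant density, and the transformed samples can then drive the global search stage of a COA. The search loop below is a toy version under these assumptions, not the authors' algorithm.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def uniformized(x):
    # For the fully chaotic logistic map (r = 4), this conjugacy
    # transformation yields a uniform invariant density.
    return (2.0 / np.pi) * np.arcsin(np.sqrt(x))

def chaos_search(f, lo, hi, n_iter=5000, x0=0.4321):
    """Toy first-stage chaos optimization: map density-corrected chaotic
    samples onto the search interval and keep the best point
    (illustrative only; real COAs add a local refinement stage)."""
    x, best_x, best_f = x0, None, np.inf
    for _ in range(n_iter):
        x = logistic(x)
        cand = lo + (hi - lo) * uniformized(x)
        val = f(cand)
        if val < best_f:
            best_x, best_f = cand, val
    return best_x, best_f

print(chaos_search(lambda z: (z - 1.7) ** 2, 0.0, 4.0))
```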


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1313
Author(s):  
Wenhao Yan ◽  
Qun Ding

In this paper, a method to enhance the dynamic characteristics of one-dimensional (1D) chaotic maps is first presented: linear combinations and nonlinear transforms based on existing chaotic systems (LNECS) are introduced. Then, a numerical chaotic map (CLS), based on the Logistic map and the Sine map, is given. Through the analysis of the bifurcation diagram, Lyapunov exponent (LE), and sample entropy (SE), we can see that the CLS overcomes the shortcomings of a low-dimensional chaotic system and can be used in the field of cryptology. In addition, eight constructed functions are designed to obtain an S-box. Finally, five security criteria of the S-box are evaluated, indicating that the S-box based on the map proposed in this paper has strong encryption characteristics. The research in this paper supports the development of cryptographic studies such as dynamic construction methods based on chaotic systems.
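The abstract does not give the combination formula, so the sketch below uses one common way to blend the Logistic and Sine maps (a weighted linear combination followed by a mod-1 nonlinear fold); the paper's actual LNECS/CLS construction may differ.

```python
import numpy as np

def cls_map(x, r):
    """One plausible Logistic-Sine combination (a sketch; the paper's
    exact construction may differ): a weighted linear combination of
    the two seed maps followed by a mod-1 fold."""
    logistic = r * x * (1.0 - x)
    sine = (4.0 - r) * np.sin(np.pi * x) / 4.0
    return (logistic + sine) % 1.0

# Quick look at the orbit of the combined map
x = 0.3
for _ in range(5):
    x = cls_map(x, r=3.6)
    print(x)
```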


2020 ◽  
Vol 26 (3) ◽  
pp. 193-203
Author(s):  
Shady Ahmed Nagy ◽  
Mohamed A. El-Beltagy ◽  
Mohamed Wafa

Abstract: Monte Carlo (MC) simulation depends on pseudo-random numbers. The generation of these numbers is examined in connection with Brownian motion. We present the low-discrepancy sequence known as the Halton sequence, which generates stochastic samples in an equidistributed form. This increases convergence and accuracy when the generated samples are used in the multilevel Monte Carlo (MLMC) method. We compare algorithms using a pseudo-random generator and a generator based on a Halton sequence. The computational cost for different stochastic differential equations increases in a standard MC technique; it is greatly reduced using a Halton sequence, especially for multiplicative stochastic differential equations.
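A minimal sketch of the idea: generate a Halton (radical-inverse) sequence and map it through the inverse normal CDF to obtain quasi-random Brownian increments. The base, sample count and horizon below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

def halton(n, base=2):
    """Radical-inverse (van der Corput) sequence in the given base;
    pairing coprime bases gives a multi-dimensional Halton sequence."""
    seq = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

# Drive Brownian increments with Halton points instead of pseudo-random
# draws by mapping (0,1) samples through the inverse normal CDF.
n, T = 1024, 1.0
u = halton(n, base=2)
dW = norm.ppf(u) * np.sqrt(T / n)   # quasi-random Gaussian increments
print(dW[:4])
```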


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic. The work presented here focuses on aleatory uncertainty, which causes natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. It is therefore necessary to have a robust tool that can perform the uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods and demonstrate the strengths and weaknesses of each approach. The first method considered is Monte Carlo simulation, a sampling method that can give high accuracy but needs a relatively large computational effort. The second is Polynomial Chaos, an approximation method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third is the Mid-range Approximation Method, based on assembling multiple meta-models into one model to perform optimization under uncertainty. The fourth is the application of the first two methods not directly to the model but to a response surface representing the simulation model, to decrease computational cost. All these methods were applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerged as the most promising methodology and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
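To make the contrast concrete, here is a hedged sketch on a one-dimensional analytical test function: plain Monte Carlo propagation next to a regression-based polynomial chaos expansion in probabilists' Hermite polynomials. The test function, sample sizes and truncation order are illustrative, not those of the paper.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def model(x):
    return np.exp(0.3 * x) + 0.1 * x**2   # analytical test function

rng = np.random.default_rng(0)

# 1) Plain Monte Carlo: accurate but needs many model evaluations.
xs = rng.standard_normal(100_000)
mc_mean, mc_std = model(xs).mean(), model(xs).std()

# 2) Polynomial chaos by regression: fit Hermite coefficients on a
#    small design, then read statistics off the coefficients.
x_train = rng.standard_normal(50)
V = hermevander(x_train, deg=4)               # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, model(x_train), rcond=None)
norms = np.array([factorial(k) for k in range(5)])  # <He_k, He_k> = k!
pce_mean = coef[0]
pce_std = np.sqrt(np.sum(coef[1:] ** 2 * norms[1:]))
print(mc_mean, pce_mean, mc_std, pce_std)
```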


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. S317-S331 ◽  
Author(s):  
Jianfeng Zhang ◽  
Zhengwei Li ◽  
Linong Liu ◽  
Jin Wang ◽  
Jincheng Xu

We have improved the so-called deabsorption prestack time migration (PSTM) by introducing a dip-angle-domain stationary-phase implementation. Deabsorption PSTM compensates absorption and dispersion along the actual wave-propagation path using effective Q parameters that are obtained during migration. However, noise induced by the compensation degrades the resolution gained, and deabsorption PSTM requires more computational effort than conventional PSTM. Our stationary-phase implementation improves deabsorption PSTM by determining an optimal migration aperture based on an estimate of the Fresnel zone. This significantly attenuates the noise and reduces the computational cost of 3D deabsorption PSTM. We estimate the 2D Fresnel zone in terms of two dip angles by building a pair of 1D migrated dip-angle gathers using PSTM. Our stationary-phase QPSTM (deabsorption PSTM) is implemented as a two-stage process. First, we use conventional PSTM to obtain the Fresnel zones. Then, we perform deabsorption PSTM with the Fresnel-zone-based optimized migration aperture. We applied stationary-phase QPSTM to a 3D field dataset. Comparison with a synthetic seismogram generated from well-log data validates the resolution enhancement.
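As a rough illustration of a Fresnel-zone-limited aperture, the sketch below uses the common zero-offset approximation R = (v/2) * sqrt(t / f_dom); note that the paper itself estimates the zone from dip-angle gathers rather than from this formula, and the velocity, time and frequency values are illustrative.

```python
import numpy as np

def fresnel_radius(v, t, f_dom):
    """First-Fresnel-zone radius for zero-offset time migration, using
    the common approximation R = (v/2) * sqrt(t / f_dom) (an assumption;
    the paper estimates the zone from migrated dip-angle gathers)."""
    return 0.5 * v * np.sqrt(t / f_dom)

# Cap the migration aperture with the Fresnel zone instead of using
# a fixed wide aperture.
v, t, f = 2500.0, 2.0, 25.0   # m/s, s, Hz (illustrative values)
print(f"optimal half-aperture ~ {fresnel_radius(v, t, f):.0f} m")
```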


Author(s):  
Sibo Li ◽  
Hongtao Qiao

Abstract: Real-time or faster-than-real-time flow simulation is crucial for studying airflow and heat transfer in buildings, with applications such as building design, building emergency management and building energy performance evaluation. Computational Fluid Dynamics (CFD) with the Pressure Implicit with Splitting of Operator (PISO) or Semi-Implicit Method for Pressure Linked Equations (SIMPLE) algorithm is accurate but requires great computational resources. Fast Fluid Dynamics (FFD) can reduce the computational effort but generally lacks prediction accuracy due to its simplifications. This study developed a fast computational method based on FFD in combination with the PISO algorithm. The Boussinesq approximation is adopted to simulate the buoyancy effect. The proposed solver is tested on a two-dimensional case and a three-dimensional case with experimental data, and the predicted results show good agreement with the experimental results. In the two test cases, the proposed solver produces lower Root Mean Square Error (RMSE) than FFD and, at the same time, reduces computational cost by factors of 10 and 13, respectively, compared to CFD.
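A minimal 1D sketch of the FFD ingredient that permits large time steps, semi-Lagrangian advection: trace each grid node back along the velocity and interpolate. The actual solver couples this with a PISO pressure correction in 2D/3D; the grid, velocity and time step below are illustrative.

```python
import numpy as np

def semi_lagrangian_advect(phi, u, dt, dx):
    """One FFD-style semi-Lagrangian advection step in 1D: trace each
    node back along the velocity and interpolate, which remains stable
    at large CFL numbers (a sketch; not the paper's full solver)."""
    n = phi.size
    x = np.arange(n) * dx
    x_back = np.clip(x - u * dt, 0.0, (n - 1) * dx)  # departure points
    return np.interp(x_back, x, phi)

phi = np.exp(-((np.arange(100) * 0.01 - 0.3) ** 2) / 0.002)  # scalar blob
for _ in range(10):
    phi = semi_lagrangian_advect(phi, u=0.5, dt=0.02, dx=0.01)
print(phi.argmax() * 0.01)   # the blob has moved downstream to ~0.4
```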


Author(s):  
Stefan Lammens ◽  
Marc Brughmans ◽  
Jan Leuridan ◽  
Ward Heylen ◽  
Paul Sas

Abstract: This paper presents a model updating method based on experimental receptances. The method minimises the so-called 'indirect receptance difference'. First, the reduced analytical dynamic stiffness matrix is expressed as an approximate, linearised function of the updating parameters. In a numerically stable, iterative procedure, this reduced analytical dynamic stiffness matrix is changed in such a way that the analytical receptances match the experimental receptances at the updating frequencies, a set of selected frequency points in the frequency range of interest. Some considerations about an optimal selection of the updating frequencies are given. Finally, a mixed static-dynamic reduction scheme is discussed. Dynamic reduction of the analytical dynamic stiffness matrix at each updating frequency is physically exact, but it involves a great computational effort. The presented mixed static-dynamic reduction scheme is a simple strategy to reduce the computational cost with a minor loss of accuracy.
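A toy sketch of the quantities involved, assuming a 2-DOF spring-mass system: the receptance H(w) = (K - w^2 M)^(-1) is evaluated at a few updating frequencies, and one stiffness parameter is fitted so the analytical receptances match synthetic 'experimental' ones. Plain least-squares fitting here stands in for the paper's iterative indirect-receptance-difference procedure.

```python
import numpy as np
from scipy.optimize import least_squares

M = np.diag([1.0, 1.0])

def K(k1):
    # 2-DOF spring-mass chain; k1 is the updating parameter
    return np.array([[k1 + 100.0, -100.0], [-100.0, 100.0]])

def receptance(k1, omegas):
    """First column of H(w) = (K - w^2 M)^(-1), the quantity matched
    at the updating frequencies (a sketch, not the paper's reduction)."""
    return np.array([np.linalg.solve(K(k1) - w**2 * M, [1.0, 0.0])
                     for w in omegas])

omegas = np.array([3.0, 7.0, 12.0])      # updating frequencies
h_exp = receptance(80.0, omegas)         # synthetic 'experimental' data

res = least_squares(lambda p: (receptance(p[0], omegas) - h_exp).ravel(),
                    x0=[120.0])
print(res.x)   # recovers k1 close to 80
```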


2019 ◽  
Vol 11 (1) ◽  
pp. 168781401881917
Author(s):  
Fang Lv ◽  
Yuliang Wei ◽  
Xixian Han ◽  
Bailing Wang

With the explosive growth of surveillance data, exact-match queries become much more difficult because of the data's high dimensionality and high volume. Owing to its good balance between retrieval performance and computational cost, hash learning is widely used to solve approximate nearest neighbour search problems. Dimensionality reduction plays a critical role in hash learning, as its target is to preserve as much of the original information as possible in low-dimensional vectors. However, existing dimensionality reduction methods neglect to unify the diverse resources of the original space when learning a downsized subspace. In this article, we propose a numeric and semantic consistency semi-supervised hash learning method, which unifies numeric features and supervised semantic features into a low-dimensional subspace before hash encoding, and improves a multiple-table hash method with a complementary numeric local distribution structure. A consistency-based learning method, which confers semantic meaning on numeric features during dimensionality reduction, is presented. Experiments are conducted on two public datasets: the web image dataset NUS-WIDE and the text dataset DBLP. The results demonstrate that the semi-supervised hash learning method, with the consistency-based information subspace, is more effective in preserving useful information for hash encoding than state-of-the-art methods and achieves high-quality retrieval performance in a multi-table context.
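For orientation, the sketch below shows the generic hashing pipeline this line of work builds on: random-projection dimensionality reduction followed by sign binarisation and Hamming ranking. It is a plain unsupervised baseline under illustrative data, not the proposed semi-supervised, consistency-based learner.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, n_bits=16, seed=0):
    """Baseline hashing for approximate nearest neighbour search:
    random-projection dimensionality reduction followed by sign
    binarisation (a generic sketch, not the paper's method)."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((X.shape[1], n_bits))
    return (X @ W > 0).astype(np.uint8)

X = rng.standard_normal((1000, 128))       # e.g. image feature vectors
codes = hash_codes(X)
query = hash_codes(X[:1])
hamming = (codes != query).sum(axis=1)     # Hamming ranking
print(hamming.argsort()[:5])               # nearest candidates
```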


2019 ◽  
pp. 100018
Author(s):  
Aleksandra V. Tutueva ◽  
Erivelton G. Nepomuceno ◽  
Artur I. Karimov ◽  
Valery S. Andreev ◽  
Denis N. Butusov

Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high-fidelity models undesirable due to the increased computational complexity and simulation time. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. In the event that unmodeled dynamics cause the controller to fail on a higher-fidelity model, an iterative design-and-validation approach may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining high accuracy in system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
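A hedged sketch of the switched-fidelity idea on a scalar closed-loop system: run a cheap model during quasi-steady operation and switch to an expensive one during transients. Both models, the controller gain and the switching rule are illustrative assumptions, not the paper's vapor compression models.

```python
import numpy as np

def high_fidelity(x, u, dt):
    # stand-in for an expensive model with fast unmodeled dynamics
    return x + dt * (-0.5 * x + u + 0.05 * np.sin(20 * x))

def low_fidelity(x, u, dt):
    # simplified model: cheap but misses the fast dynamics
    return x + dt * (-0.5 * x + u)

def switched_sim(x0, r_of_t, dt=0.01, n=2000, threshold=0.2):
    """Sketch of a switched-fidelity loop: use the cheap model during
    quasi-steady operation and the expensive one during transients
    (the switching rule here is an illustrative assumption)."""
    x, traj = x0, []
    for k in range(n):
        r = r_of_t(k * dt)
        u = 5.0 * (r - x)                       # simple proportional control
        step = high_fidelity if abs(r - x) > threshold else low_fidelity
        x = step(x, u, dt)
        traj.append(x)
    return np.array(traj)

traj = switched_sim(0.0, r_of_t=lambda t: 1.0 if t > 5.0 else 0.0)
print(traj[-1])
```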

