complexity of algorithm
Recently Published Documents


TOTAL DOCUMENTS

21
(FIVE YEARS 5)

H-INDEX

2
(FIVE YEARS 1)

2022 ◽  
pp. 107754632110576
Author(s):  
Cong Wang ◽  
Hongwei Xia ◽  
Shunqing Ren

In conventional reaching law approaches, disturbance suppression is achieved at the cost of high-frequency chattering or of increased algorithmic complexity, such as adding a high-order disturbance compensator. This paper presents the design and analysis of a novel implicit discretization-based adaptive reaching law for discrete-time sliding mode control systems. First, the implicit Euler technique is introduced into the design of discrete reaching laws and is proved to eliminate numerical chattering completely. By using a self-adaptive power term, the newly designed reaching law can obtain an arbitrarily small boundary layer of the sliding surface, and at different phases of the sliding mode motion the adaptive power parameter automatically regulates its value to guarantee globally fast convergence without degrading the accuracy of the sliding variable. Then, based on a predefined trajectory of the sliding variable, the discrete-time sliding mode control law is developed to achieve high control accuracy without additional design. Compared with previous methods, the main contribution of the proposed reaching law is that it acquires high-precision sliding mode motion and simultaneously eliminates numerical chattering despite complex uncertainties, simply by adjusting the adaptive power parameter. Finally, a simulation example on a piezomotor-driven linear stage is provided to verify the theoretical results.
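The chattering-elimination property of implicit (backward) Euler discretization can be illustrated with a minimal sketch. The snippet below discretizes only the basic constant-rate reaching law s' = -ε·sign(s) implicitly (the paper's adaptive power term is not reproduced); solving the implicit relation turns the sign function into a projection, so the sliding variable stops exactly at zero instead of oscillating across it.

```python
def implicit_reaching_step(s, h, eps):
    """One implicit-Euler step of the reaching law s' = -eps*sign(s).

    Solving s_next = s - h*eps*sign(s_next) yields a projection:
    move toward zero by h*eps, but stop exactly at zero once inside
    the band |s| <= h*eps, instead of chattering across it.
    """
    if abs(s) > h * eps:
        return s - h * eps * (1 if s > 0 else -1)
    return 0.0

# Drive the sliding variable to zero: it settles at exactly 0.0,
# whereas an explicit discretization would oscillate around it.
s = 1.0
for _ in range(20):
    s = implicit_reaching_step(s, h=0.1, eps=0.8)
print(s)  # 0.0
```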


2021 ◽  
Vol 31 (10) ◽  
pp. 2150152
Author(s):  
Xiaojun Tong ◽  
Xudong Liu ◽  
Jing Liu ◽  
Miao Zhang ◽  
Zhu Wang

Due to their high computational cost, traditional encryption algorithms are not suitable for resource-constrained environments. In view of this problem, we first propose a combined chaotic map to enlarge the chaotic interval and increase the Lyapunov exponent of existing one-dimensional chaotic maps. Then, an S-box based on the proposed combined chaotic map is constructed. The properties of the designed S-box, such as bijectivity, nonlinearity, the strict avalanche criterion, differential uniformity, the bit independence criterion, and the linear approximation probability, are tested to show that it has good cryptographic performance. Finally, we present a lightweight block encryption algorithm using the above S-box. The algorithm is based on the generalized Feistel structure and the SPN structure. In addition, the encryption and decryption processes of our algorithm are almost identical, which reduces the implementation complexity. The experimental results show that the proposed encryption algorithm meets the requirements of lightweight algorithms and has good cryptographic characteristics.
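As a rough illustration of the idea, the sketch below combines two classic one-dimensional maps (logistic and sine) into a single map and uses its orbit to fill a bijective 8-bit S-box. The specific combination, parameters, and seed here are illustrative assumptions, not the construction from the paper.

```python
import math

def combined_map(x, r):
    # Hypothetical logistic-sine combination (the paper's exact map differs):
    # chaotic over a wider parameter range than either map alone.
    return (r * x * (1 - x) + (4 - r) * math.sin(math.pi * x) / 4) % 1

def make_sbox(seed=0.37, r=3.62, skip=500):
    """Iterate the chaotic orbit and record byte values in first-visit
    order, yielding a bijective 8-bit S-box (a permutation of 0..255)."""
    x, seen, sbox = seed, set(), []
    for _ in range(skip):          # discard the transient
        x = combined_map(x, r)
    while len(sbox) < 256:
        x = combined_map(x, r)
        b = int(x * 256) % 256
        if b not in seen:
            seen.add(b)
            sbox.append(b)
    return sbox

sbox = make_sbox()
print(sorted(sbox) == list(range(256)))  # bijectivity check
```

Bijectivity is the first S-box criterion listed in the abstract; the nonlinearity and avalanche tests would operate on the same table.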


Trudy MAI ◽  
2021 ◽  
Author(s):  
Maxim Tanygin ◽  
Haider Alshaea ◽  
Alexey Mitrofanov

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Wenjie Liu ◽  
Junxiu Chen ◽  
Yuxiang Wang ◽  
Peipei Gao ◽  
Zhibin Lei ◽  
...  

Complex systems with edge computing require huge amounts of multi-feature data to extract appropriate insights for decision making, so it is important to find a feasible feature selection method that improves computational efficiency and saves resource consumption. In this paper, a quantum-based feature selection algorithm for the multiclassification problem, namely QReliefF, is proposed, which can effectively reduce the complexity of the algorithm and improve its computational efficiency. First, all features of each sample are encoded into a quantum state by performing the CMP and Ry operations, and amplitude estimation is then applied to calculate the similarity between any two quantum states (i.e., two samples). According to these similarities, the Grover–Long method is utilized to find the k nearest neighbor samples, and the weight vector is then updated. After a certain number of iterations of the above process, the desired features can be selected according to the final weight vector and the threshold τ. Compared with the classical ReliefF algorithm, our algorithm reduces the complexity of the similarity calculation from O(MN) to O(M), the complexity of finding the nearest neighbors from O(M) to O(√M), and the resource consumption from O(MN) to O(M log N). Meanwhile, compared with the quantum Relief algorithm, our algorithm is superior in finding the nearest neighbors, reducing the complexity from O(M) to O(√M). Finally, in order to verify the feasibility of our algorithm, a simulation experiment on Rigetti with a simple example is performed.
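For reference, the classical (binary) Relief baseline that the quantum variants accelerate can be sketched in a few lines. This is the textbook algorithm, not the quantum circuit, and the toy data are invented for illustration: weights fall by the per-feature distance to the nearest same-class neighbour (hit) and rise by the distance to the nearest other-class neighbour (miss).

```python
import random

def relief(X, y, n_iter=100, seed=0):
    """Classical binary Relief: informative features end up with
    large positive weights, noise features stay near zero."""
    rnd = random.Random(seed)
    m = len(X[0])
    w = [0.0] * m
    for _ in range(n_iter):
        i = rnd.randrange(len(X))
        xi, ci = X[i], y[i]

        def nearest(cls):
            # index of the closest sample (other than i) with label cls
            cand = [j for j in range(len(X)) if j != i and y[j] == cls]
            return min(cand, key=lambda j: sum(abs(a - b)
                                               for a, b in zip(X[j], xi)))

        hit, miss = X[nearest(ci)], X[nearest(1 - ci)]
        for f in range(m):
            w[f] += (abs(xi[f] - miss[f]) - abs(xi[f] - hit[f])) / n_iter
    return w

# Toy data: feature 0 tracks the class label, feature 1 is pure noise.
rnd = random.Random(1)
y = [0] * 20 + [1] * 20
X = [[c + 0.05 * rnd.gauss(0, 1), rnd.gauss(0, 1)] for c in y]
w = relief(X, y)
print(w[0] > w[1])  # the informative feature outranks the noise feature
```

The O(MN) similarity cost visible in `nearest` is exactly what the quantum encoding and Grover–Long search compress.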


Author(s):  
Andrei Lissovoi ◽  
Pietro S. Oliveto ◽  
John Alasdair Warwicker

Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper, we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hillclimb efficiently and prove that it can optimise a standard hillclimbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFFd, where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMPd as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
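The elitist/non-elitist switching idea can be sketched as a move-acceptance hyper-heuristic: each step either accepts only improving moves (elitist) or accepts any move (non-elitist), the latter chosen with a small probability p. The snippet below runs this on ONEMAX purely as an illustration; the parameter values are assumptions, not the exact operator set analysed in the paper.

```python
import random

def mahh_onemax(n=30, p=0.01, seed=3, budget=20000):
    """Move-acceptance hyper-heuristic sketch on ONEMAX (count of ones).

    Each step flips one uniformly chosen bit; with probability p the
    non-elitist rule is used (accept any move, enabling escapes from
    local optima on harder landscapes), otherwise the elitist rule
    (accept only non-worsening moves).
    """
    rnd = random.Random(seed)
    x = [rnd.randint(0, 1) for _ in range(n)]
    fit = sum(x)
    for _ in range(budget):
        if fit == n:               # global optimum reached
            break
        i = rnd.randrange(n)
        x[i] ^= 1                  # flip one bit
        new = fit + (1 if x[i] else -1)
        accept_all = rnd.random() < p
        if new >= fit or accept_all:
            fit = new
        else:
            x[i] ^= 1              # reject: undo the flip
    return fit

print(mahh_onemax())
```

On a unimodal function the occasional non-elitist acceptances barely slow hillclimbing; on Cliff-type functions they are what lets the search leave a local optimum.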


2018 ◽  
Vol 161 ◽  
pp. 02004
Author(s):  
Eugene Larkin ◽  
Alexey Bogomolov ◽  
Sergey Feofilov

Specific problems arising when a von Neumann-type computer is used as a feedback element are considered. It is shown that, due to the specifics of its operation, this element introduces a pure lag into the control loop, and the lag time depends on the complexity of the control algorithm. A method is proposed for evaluating the runtime between reading data from the sensors of the controlled object and writing data to the actuator, based on the theory of semi-Markov processes. Formulae for estimating the time characteristics are obtained. The lag-time characteristics are then used to investigate the stability of linear systems. The digital PID controller is divided into a linear part, realized in software, and a pure-lag unit, realized in both hardware and software. Using the notions of amplitude and phase margins, conditions for stable system operation are obtained. The theoretical results are confirmed by a computer experiment carried out on a third-order system.
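The destabilising effect of computation-induced pure lag can be demonstrated with a toy loop. The sketch below applies proportional control to an integrator plant and delays the command by a configurable number of samples; the plant, gain, and delay values are illustrative assumptions (the paper's semi-Markov runtime model is not reproduced). For this loop the classical delay margin is τ < π/(2k_p) ≈ 1.57 s, so a 3 s lag destabilises it.

```python
def tail_error(lag_steps, kp=1.0, dt=0.05, steps=800):
    """Proportional control of an integrator plant dy/dt = u, with the
    command delayed by lag_steps samples to model the pure lag the
    computer inserts into the loop. Returns the largest tracking error
    over the last 150 samples of the run."""
    y = 0.0
    buf = [0.0] * lag_steps        # delay line: computation time
    errs = []
    for _ in range(steps):
        e = 1.0 - y                # unit-step reference
        errs.append(abs(e))
        buf.append(kp * e)         # controller output enters the queue
        y += dt * buf.pop(0)       # plant receives the delayed command
    return max(errs[-150:])

print(tail_error(0) < 0.05)    # no lag: the loop settles on the reference
print(tail_error(60) > 1.0)    # 3 s lag > delay margin: growing oscillation
```

The same experiment with the lag set just below 31 samples (≈1.55 s) still converges, matching the phase-margin condition the paper derives analytically.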


Filomat ◽  
2018 ◽  
Vol 32 (5) ◽  
pp. 1727-1736 ◽  
Author(s):  
Binbin Sang ◽  
Xiaoyan Zhang ◽  
Weihua Xu

At present, attribute reduction based on relative knowledge granularity is an important research area; it provides a new viewpoint for simplifying feature sets. The aim of attribute reduction is to delete redundant attributes quickly and accurately while keeping the decision information unchanged. Since the discriminating ability of an attribute set can be well described by relative knowledge granularity over the domain, using relative knowledge granularity to simplify the computation of attribute reduction is an important research direction. To improve the efficiency and accuracy of attribute reduction, this paper investigates an attribute reduction method based on relative knowledge granularity in an intuitionistic fuzzy ordered decision table (IFODT). More precisely, we redefine knowledge granularity and relative knowledge granularity via the ordered relation and prove their relevant properties. On the premise that the decision results remain unchanged, in order to accurately calculate the relative importance of any condition attribute with respect to the decision attribute set, the internal and external significance of condition attributes are defined via relative knowledge granularity, and important properties of relative attribute significance are proved. The importance of condition attributes is then determined by the size of the relative attribute significance. On the computational side, the corresponding algorithm is designed and its time complexity is calculated. Moreover, the efficiency and accuracy of the relative-knowledge-granularity attribute reduction model are verified experimentally. Finally, the validity of the algorithm is demonstrated by a case study on an IFODT.
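In the classical (crisp, unordered) rough-set setting, knowledge granularity and relative knowledge granularity reduce to simple counts over equivalence classes: GK(B) = Σ|Xᵢ|²/|U|² and GK(D|B) = GK(B) − GK(B∪D). The sketch below computes these on a four-object toy table; the intuitionistic fuzzy ordered relations of the paper are not modelled.

```python
from itertools import groupby

def partition(universe, attrs, table):
    """Equivalence classes of the indiscernibility relation IND(attrs)."""
    key = lambda obj: tuple(table[obj][a] for a in attrs)
    return [list(g) for _, g in groupby(sorted(universe, key=key), key=key)]

def granularity(universe, attrs, table):
    # GK(B) = sum(|X_i|^2) / |U|^2 over the classes of IND(B)
    n = len(universe)
    return sum(len(x) ** 2 for x in partition(universe, attrs, table)) / n ** 2

def relative_granularity(universe, cond, dec, table):
    # GK(D | B) = GK(B) - GK(B ∪ D): uncertainty the decision removes from B
    return (granularity(universe, cond, table)
            - granularity(universe, cond + dec, table))

# Toy decision table: condition attributes a, b and decision d.
table = {1: {'a': 0, 'b': 0, 'd': 0}, 2: {'a': 0, 'b': 1, 'd': 0},
         3: {'a': 1, 'b': 0, 'd': 1}, 4: {'a': 1, 'b': 1, 'd': 1}}
U = [1, 2, 3, 4]
print(relative_granularity(U, ['a'], ['d'], table))  # 0.0: 'a' determines d
print(relative_granularity(U, ['b'], ['d'], table))  # 0.25: 'b' leaves uncertainty
```

A zero relative granularity signals that the condition attribute already resolves the decision, which is the intuition behind the internal/external significance measures in the paper.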


Author(s):  
Muhammad Ibn Ibrahimy

This paper illustrates the design and implementation of a floating-point multiplier on a Field Programmable Gate Array (FPGA). Floating-point operations are used in many fields, such as digital signal processing, digital image processing, and multimedia data analysis. Implementing floating-point multiplication is straightforward in a high-level language; however, implementing it at the hardware level in a low-level language is challenging due to the complexity of the algorithm. A top-down approach has been applied to prototype an IEEE 754-2008 standard floating-point multiplier module using the Verilog Hardware Description Language (HDL). The Electronic Design Automation (EDA) tool Altera Quartus II has been used for the floating-point multiplier. The hardware implementation was carried out by downloading the Verilog code onto an Altera DE2 FPGA development board, and satisfactory performance was observed.
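The datapath of a binary32 multiplier is easy to mirror in software: XOR the signs, add the biased exponents, multiply the 24-bit significands, and renormalise. The sketch below does exactly that for normal operands with truncation only (no rounding modes, zeros, subnormals, or NaNs); it is a rough software model of such a datapath, not the paper's Verilog module.

```python
import struct

def f32_bits(x):
    """Raw bit pattern of a Python float stored as IEEE 754 binary32."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def fmul32(a_bits, b_bits):
    """Multiply two binary32 values given as raw bit patterns."""
    sa, ea, ma = a_bits >> 31, (a_bits >> 23) & 0xFF, a_bits & 0x7FFFFF
    sb, eb, mb = b_bits >> 31, (b_bits >> 23) & 0xFF, b_bits & 0x7FFFFF
    s = sa ^ sb                                  # sign: XOR
    e = ea + eb - 127                            # exponents add, one bias removed
    m = ((1 << 23) | ma) * ((1 << 23) | mb)      # 24x24-bit significand product
    if m & (1 << 47):                            # product in [2,4): shift right
        e += 1                                   # and bump the exponent
        m >>= 1
    frac = (m >> 23) & 0x7FFFFF                  # truncate to 23 fraction bits
    return (s << 31) | (e << 23) | frac

# 3.5 * -2.25 = -7.875 is exact in binary32, so truncation loses nothing here.
prod = fmul32(f32_bits(3.5), f32_bits(-2.25))
print(prod == f32_bits(-7.875))
```

In general the truncating model can differ from the hardware's round-to-nearest result by one unit in the last place; the example operands were chosen so the product is exact.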


2014 ◽  
Vol 687-691 ◽  
pp. 4105-4109 ◽  
Author(s):  
Yong Jun Zhang ◽  
She Nan Li

A novel indoor positioning algorithm, BPNN-LANDMARC, is proposed in this paper to increase positioning accuracy and reduce the high time complexity of the classical LANDMARC algorithm. Simulation results show that the proposed BPNN-LANDMARC algorithm improves the average positioning accuracy by 24.35%. In addition, the improved algorithm noticeably reduces the algorithm's time complexity.
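The classical LANDMARC baseline that the BPNN variant improves is a k-nearest-neighbour scheme in signal space: rank reference tags by RSSI distance Eᵢ to the target, then average the k nearest tags' known coordinates with weights 1/Eᵢ². A toy sketch (tag layout and RSSI values are invented for illustration):

```python
def landmarc(target_rss, ref_rss, ref_pos, k=3):
    """Classical LANDMARC position estimate.

    E[i] is the Euclidean distance between the target's RSSI vector and
    reference tag i's; the k nearest reference tags vote on the position
    with weights proportional to 1/E^2.
    """
    E = [sum((t - r) ** 2 for t, r in zip(target_rss, rss)) ** 0.5
         for rss in ref_rss]
    nearest = sorted(range(len(E)), key=lambda i: E[i])[:k]
    wts = [1.0 / (E[i] ** 2 + 1e-9) for i in nearest]  # avoid divide-by-zero
    total = sum(wts)
    x = sum(w * ref_pos[i][0] for w, i in zip(wts, nearest)) / total
    y = sum(w * ref_pos[i][1] for w, i in zip(wts, nearest)) / total
    return x, y

# Four reference tags on a unit square; the target's RSSI resembles tag 0's.
ref_pos = [(0, 0), (0, 1), (1, 0), (1, 1)]
ref_rss = [(-40, -60), (-60, -40), (-60, -60), (-70, -70)]
x, y = landmarc((-42, -58), ref_rss, ref_pos)
print(x < 0.5 and y < 0.5)  # estimate lands in tag 0's quadrant
```

The exhaustive distance ranking over all reference tags is the step whose cost grows with deployment size, which is what the BPNN replacement targets.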

