ESTIMATION OF COMPLEXITY OF EFFECTIVE ALGORITHMS FOR CHECKING A PRESENCE OF JOINT ACTION OF BINARY FACTORS

Author(s):  
Yuliya Nagrebeckaya ◽  
Vladimir Panov

Effective algorithms are provided for checking the presence of a joint action of k factors in a given outcome that depends on n factors (k < n), and for calculating the degrees of that joint action for any k. It is demonstrated that the asymptotic time complexity of the proposed algorithms does not exceed the square of the size of the input data representing the given outcome.

2021 ◽  
Vol 2113 (1) ◽  
pp. 012038
Author(s):  
Mingzheng Yuan

Abstract This research presents an absolute-value detector with a built-in threshold comparison, an essential block in spike detection for brain-machine interfaces. The optimized design accomplishes the main functions of spike detection and performs well in both delay and energy consumption. Two candidate designs are proposed, and both are discussed to keep the study reliable and comprehensive. The first design uses a full adder, a multiplexer, and a comparator. Its logic adds a logic one to the input when the input data is negative and keeps the original information when the input data is positive. Full adders perform the addition, and multiplexers select between the processed input and the original input, with the selection controlled by the most significant bit (MSB) of the input data. A multi-bit comparator then compares the absolute value of the input with the given threshold. The second design is based on elementary arithmetic: when the input is negative it is combined with the threshold value through a subtractor, while an adder can be used when it is positive. Applying logic optimization, this design uses only subtractors and focuses on the borrow bit, which indicates the larger number. By connecting the MSB of the input to the subtractors through XOR gates, the selection is achieved without any multiplexer; removing and replacing devices in this way optimizes the design. The minimum delay of the two designs is then compared by sizing each stage, and the second design is found to be better. Finally, the energy consumption of the circuit is computed for both designs, and VDD optimization is applied to obtain the minimum energy.
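
As a rough behavioral illustration of the two designs (not the gate-level circuits evaluated in the paper), the Python sketch below conditionally negates an 8-bit two's-complement input with an adder-plus-multiplexer path, and alternatively approximates the absolute value with sign-controlled XOR gates followed by a threshold subtraction; the bit width, function names, and threshold are assumptions made for the example.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def design1_abs_over_threshold(x_bits: int, threshold: int) -> bool:
    """Full adder + multiplexer + comparator: negate negative inputs, then compare."""
    msb = (x_bits >> (WIDTH - 1)) & 1            # sign bit of the two's-complement input
    negated = ((~x_bits) + 1) & MASK             # invert and add logic one (full adders)
    magnitude = negated if msb else x_bits       # multiplexer selected by the MSB
    return magnitude > threshold                 # multi-bit comparator

def design2_abs_over_threshold(x_bits: int, threshold: int) -> bool:
    """XOR gates + subtractor: the borrow bit of the subtraction gives the answer."""
    msb = (x_bits >> (WIDTH - 1)) & 1
    conditioned = x_bits ^ (MASK if msb else 0)  # one's-complement magnitude when negative
    # A hardware subtractor computing threshold - conditioned produces a borrow
    # exactly when conditioned is the larger number; we model that borrow directly.
    borrow = conditioned > threshold
    return borrow

if __name__ == "__main__":
    for value in (-100, -5, 0, 5, 100):
        bits = value & MASK                      # encode as two's complement
        print(value,
              design1_abs_over_threshold(bits, 50),
              design2_abs_over_threshold(bits, 50))
```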


2012 ◽  
Vol 23 (07) ◽  
pp. 1451-1464 ◽  
Author(s):  
AMIR M. BEN-AMRAM ◽  
LARS KRISTIANSEN

We investigate the decidability of the feasibility problem for imperative programs with bounded loops. A program is called feasible if all values it computes are polynomially bounded in terms of the input. The feasibility problem is representative of a group of related properties, such as polynomial time complexity. It is well known that such properties are undecidable for a Turing-complete programming language. They may be decidable, however, for languages that are not Turing-complete; but if these languages are expressive enough, they still pose a challenge for analysis. We are interested in tracing the edge of decidability for the feasibility problem and similar problems. In previous work, we proved that such problems are decidable for a language in which loops are bounded but indefinite (that is, a loop may exit before completing the given iteration count). In this paper, we consider definite loops. A second language feature that we vary is the kind of assignment statement. With ordinary assignment, we prove undecidability for a very tiny language fragment. We also prove undecidability with lossy assignment (that is, assignment where the modified variable may receive any value bounded by the given expression, even zero). But we prove decidability with max assignment (that is, assignment where the modified variable never decreases its value).
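
To make the notion of feasibility concrete, here is a small illustration written in ordinary Python rather than the formal language studied in the paper: a definite bounded loop with ordinary assignment whose output is not polynomially bounded in the input, and hence not feasible in the sense above.

```python
def doubling(n: int) -> int:
    """A definite bounded loop: it always runs exactly n iterations."""
    x = 1
    for _ in range(n):
        x = x + x          # ordinary assignment; x reaches 2**n, so the program is infeasible
    return x

# Under a lossy assignment semantics, x could instead receive any value between
# 0 and x + x at each step; under a max assignment semantics, x would never
# decrease. The paper shows feasibility is undecidable for ordinary and lossy
# assignments with definite loops, but decidable for max assignments.
```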


Author(s):  
Yaroslav Matviychuk ◽  
Tomáš Peráček ◽  
Natalya Shakhovska

The paper proposes a new principle for finding and removing elements of a mathematical model that are redundant with respect to parametric identification of the model. It reduces the computational and time complexity of applications built on the model, which is especially important for AI-based systems, systems based on IoT solutions, distributed systems, and similar applications. In addition, the complexity reduction makes it possible to increase the accuracy of the implemented mathematical models. Although model order reduction methods are well known, they depend strongly on the problem area; the proposed reduction principle, in contrast, can be used in different areas, as demonstrated in this paper. The proposed method for reducing mathematical models of dynamic systems also allows the requirements on the parameters of simulator elements to be assessed so that a specified accuracy of dynamic similarity is ensured. The efficiency of the principle is shown on ordinary differential equations and on a neural network model. The given examples demonstrate the efficient normalizing properties of the reduction principle for mathematical models in the form of neural networks.
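
As a generic illustration only (not the reduction principle defined in the paper), the sketch below drops terms of a least-squares polynomial model whose removal barely changes the identification error; the model, the tolerance, and the backward-elimination criterion are assumptions made for the example.

```python
import numpy as np

def fit_error(X, y, keep):
    """Least-squares residual norm of the model restricted to columns `keep`."""
    coeffs, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    return float(np.linalg.norm(X[:, keep] @ coeffs - y))

def reduce_terms(X, y, tol=1e-8):
    """Backward elimination: remove a column while the fit error stays within tol."""
    keep = list(range(X.shape[1]))
    base = fit_error(X, y, keep)
    for j in list(keep):
        trial = [k for k in keep if k != j]
        if trial and fit_error(X, y, trial) <= base + tol:
            keep = trial
    return keep

# Example: y depends only on 1, x and x**2, so the cubic term is redundant.
x = np.linspace(-1, 1, 50)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
y = 2.0 + 3.0 * x - 1.5 * x**2
print(reduce_terms(X, y))   # the cubic column is expected to be dropped
```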


Cryptography ◽  
2020 ◽  
pp. 129-141
Author(s):  
Filali Mohamed Amine ◽  
Gafour Abdelkader

The Advanced Encryption Standard (AES) is one of the most popular symmetric-key encryption algorithms, and many works have implemented modified versions of it. In this paper, a modification of the AES algorithm is proposed that decreases its time complexity on bulky data and increases its security, using images as the input data. The proposed modification alters the MixColumns and ShiftRows transformations of the AES encryption algorithm, embedding confusion and diffusion. This work has been implemented on the most recent Xilinx Spartan FPGA.
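
For reference, the two standard AES transformations that the modification targets are sketched below in Python; the modified MixColumns and ShiftRows themselves are not specified in the abstract, so only the baseline operations are shown.

```python
def shift_rows(state):
    """state is a 4x4 list of byte rows, state[row][col]; row r is rotated left by r."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def xtime(b):
    """Multiply a byte by x (i.e. 0x02) in GF(2^8) with the AES reduction polynomial."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mix_single_column(col):
    """Multiply one 4-byte column by the fixed AES MixColumns matrix."""
    a = col
    t = a[0] ^ a[1] ^ a[2] ^ a[3]
    return [
        a[0] ^ t ^ xtime(a[0] ^ a[1]),
        a[1] ^ t ^ xtime(a[1] ^ a[2]),
        a[2] ^ t ^ xtime(a[2] ^ a[3]),
        a[3] ^ t ^ xtime(a[3] ^ a[0]),
    ]
```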


2020 ◽  
Vol 1 (1) ◽  
pp. 1-7
Author(s):  
Kumarjit Banerjee ◽  
Satyendra Nath Mandal ◽  
Sanjoy Kumar Das

The RSA cryptosystem, invented by Ron Rivest, Adi Shamir and Len Adleman, was first publicized in the August 1977 issue of Scientific American. The security level of this algorithm depends largely on two large prime numbers, which have been represented here by BigInteger in Java. An algorithm has been proposed to calculate the exact square root of a given number. Three methods have been used to check whether a given number is prime. In the trial division approach, the number is divided by candidates from 2 up to half of its square root, and it is not prime if trial division finds any factor. A prime number can be represented in the form 6n±1, but not every number of the form 6n±1 is prime. A set of linear forms, 30k+1, 30k+7, 30k+11, 30k+13, 30k+17, 30k+19, 30k+23 and 30k+29, has also been used to produce pseudo-primes. In this paper, an effort has been made to apply all three methods in an implementation of the RSA algorithm with large integers. A comparison has been made based on their time complexity and the number of pseudo-primes, and it has been observed that the set of linear forms gives better results than the other methods.
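
A minimal Python sketch of the three screening ideas mentioned above (trial division, the 6n±1 form, and the mod-30 residue forms); the exact division bound and the BigInteger-based Java implementation used in the paper are not reproduced here.

```python
from math import isqrt   # integer square root, analogous to the exact square root routine mentioned above

def is_prime_trial_division(n: int) -> bool:
    """Trial division: n is composite if any candidate divisor up to sqrt(n) divides it."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def candidates_6k(limit: int):
    """Candidates of the form 6n +/- 1; every prime > 3 has this form, but not conversely."""
    yield from (c for c in (2, 3) if c <= limit)
    n = 1
    while 6 * n - 1 <= limit:
        yield 6 * n - 1
        if 6 * n + 1 <= limit:
            yield 6 * n + 1
        n += 1

def candidates_30k(limit: int):
    """Candidates 30k + r for r in {1,7,11,13,17,19,23,29}; a coarser wheel than 6n +/- 1."""
    residues = (1, 7, 11, 13, 17, 19, 23, 29)
    yield from (c for c in (2, 3, 5) if c <= limit)
    for k in range(limit // 30 + 1):
        for r in residues:
            value = 30 * k + r
            if 1 < value <= limit:
                yield value

# Wheel candidates that fail trial division are the "pseudo-primes" counted in the comparison above.
print([c for c in candidates_30k(60) if not is_prime_trial_division(c)])   # e.g. 49
```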


2019 ◽  
Vol 10 (1) ◽  
pp. 178 ◽  
Author(s):  
Matevž Pesek ◽  
Aleš Leonardis ◽  
Matija Marolt

This paper presents a model capable of learning the rhythmic characteristics of a music signal through unsupervised learning. The model learns a multi-layer hierarchy of rhythmic patterns ranging from simple structures on lower layers to more complex patterns on higher layers. The learned hierarchy is fully transparent, which enables observation and explanation of the structure of the learned patterns. The model employs tempo-invariant encoding of patterns and can thus learn and perform inference on tempo-varying and noisy input data. We demonstrate the model’s capabilities of learning distinctive rhythmic structures of different music genres using unsupervised learning. To test its robustness, we show how the model can efficiently extract rhythmic structures in songs with changing time signatures and live recordings. Additionally, the model’s time-complexity is empirically tested to show its usability for analysis-related applications.


2020 ◽  
pp. 1-10
Author(s):  
M. Ghorani ◽  
S. Garhwal

In this paper, we study fuzzy top-down tree automata over lattices (LTAs, for short). The purpose of this contribution is to investigate the minimization problem for LTAs. We first define the concept of statewise equivalence between two LTAs. Thereafter, we show the existence of the statewise minimal form for an LTA. To this end, we find a statewise irreducible LTA which is equivalent to a given LTA. Then, we provide an algorithm to find the statewise minimal LTA and, by a theorem, we show that the output statewise minimal LTA is statewise equivalent to the given input. Moreover, we compute the time complexity of the given algorithm. The proposed algorithm can be applied to any given LTA and, unlike some minimization algorithms given in the literature, the input does not need to be a complete, deterministic, or reduced lattice-valued tree automaton. Finally, we provide some examples to show the efficiency of the presented algorithm.


2017 ◽  
Vol 33 (2) ◽  
pp. 131-142
Author(s):  
Quang Minh Hoang ◽  
Vu Duc Thi ◽  
Nguyen Ngoc San

Rough set theory is a useful mathematical tool developed to deal with vagueness and uncertainty. As an important concept of rough set theory, an attribute reduct is a subset of attributes that is jointly sufficient and individually necessary for preserving a particular property of the given information table. Rough set theory is also widely used for generating decision rules from a decision table. In this paper, we propose an algorithm for finding an object reduct of a consistent decision table. We also give an algorithm for finding some attribute reducts, and the correctness of our algorithms is proved theoretically. Both algorithms have polynomial time complexity. The object reduct we find helps other algorithms for finding attribute reducts work more effectively, especially when working with huge consistent decision tables.
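
As a toy illustration of what a reduct preserves (a generic greedy sketch, not the algorithms proposed in the paper), the Python code below drops condition attributes from a small consistent decision table as long as objects that agree on the remaining attributes still agree on the decision; the table and the elimination order are assumptions made for the example.

```python
def is_consistent(rows, attrs, decision):
    """True if objects that agree on `attrs` always agree on the decision attribute."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row[decision]:
            return False
        seen[key] = row[decision]
    return True

def greedy_reduct(rows, condition_attrs, decision):
    """Drop condition attributes one by one as long as consistency is preserved."""
    reduct = list(condition_attrs)
    for a in condition_attrs:
        trial = [x for x in reduct if x != a]
        if trial and is_consistent(rows, trial, decision):
            reduct = trial
    return reduct

# Hypothetical consistent decision table: three condition attributes, decision "flu".
table = [
    {"headache": 1, "temp": 2, "cough": 1, "flu": 1},
    {"headache": 1, "temp": 0, "cough": 0, "flu": 0},
    {"headache": 0, "temp": 2, "cough": 1, "flu": 1},
    {"headache": 0, "temp": 1, "cough": 0, "flu": 0},
]
print(greedy_reduct(table, ["headache", "temp", "cough"], "flu"))   # a single attribute suffices here
```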

