EXECUTION TIME SAVINGS OF A PRIME NUMBER GENERATOR USING A BIT-ARRAY STRUCTURE

2021 ◽  
Vol 1 (1) ◽  
pp. 20-27
Author(s):  
Letnan Kolonel Elektronika Imat Rakhmat Hidayat, S.T., M.Eng

Prime numbers are central to number theory and its applications in computer science, so an efficient tool is needed to generate regular prime number sequences with hardware-level effectiveness. The bit-array structure, a method of subdividing an aggregate of data into elements of the same type, can be used both to generate such sequences and to store the resulting numbers. Prime numbers are useful as the basis of public-key cryptographic algorithms and of hash tables, where prime table sizes minimize collisions. Determining the sequence of prime numbers up to a very large bound is not an easy task, so the fastest way of producing such sequences must be found. Searching for very large prime sequences serially on a single processor is inefficient because of the long computing time required, while using multiple processors raises cost problems and requires new software. A prime number generator using a bit-array structure is therefore expected to overcome the difficulty of finding prime number sequences without resorting to multiple processors, while also minimizing time complexity. The execution time savings gained from this research are evident in the experimental data: on the input 676,999,999, the Atkin algorithm took 4,235,747.00 seconds to execute, whereas the bit-array algorithm took 13,955.00 seconds, a difference in execution time of 4,221,792.00 seconds.
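The abstract describes the technique but includes no code. The following is a minimal Python sketch of a bit-packed sieve, illustrating the general bit-array idea (one bit per odd candidate) rather than the authors' exact generator; the function name and memory layout are illustrative assumptions.

```python
def bit_array_sieve(n: int) -> list[int]:
    """Primes <= n via a bit-packed sieve: one bit per odd number >= 3,
    so index i stands for the value 2*i + 3. A sketch of the general
    bit-array technique, not the paper's exact generator."""
    if n < 2:
        return []
    m = (n - 1) // 2                 # number of odd candidates in [3, n]
    bits = bytearray((m + 7) // 8)   # bit = 1 marks a composite
    i = 0
    while (2 * i + 3) ** 2 <= n:
        if not (bits[i >> 3] >> (i & 7)) & 1:   # 2*i+3 is still prime
            p = 2 * i + 3
            j = (p * p - 3) // 2                # index of p*p
            while j < m:                        # strike odd multiples of p
                bits[j >> 3] |= 1 << (j & 7)
                j += p
        i += 1
    return [2] + [2 * i + 3 for i in range(m)
                  if not (bits[i >> 3] >> (i & 7)) & 1]
```

Packing one bit per odd candidate keeps the table for n = 676,999,999 around n/16 (roughly 42 MB), versus several hundred megabytes for a byte-per-number array, which is the kind of storage saving the bit-array structure provides.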

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lin Yang

In recent years, people have paid more and more attention to cloud data. However, because users do not have absolute control over data stored on a cloud server, the cloud storage server must provide evidence that the data are stored intact for users to retain control over their data. Users are given full management rights: they can independently install operating systems and applications, and can choose self-service platforms and various remote management tools to manage and control the host according to personal habits. This paper introduces a cloud data integrity verification algorithm for the accounting informatization of sustainable computing and studies the advantages and disadvantages of existing data integrity proof mechanisms, together with the new requirements of the cloud storage environment. An LBT-based big data integrity proof mechanism is proposed, which introduces a multibranch path tree as the data structure used in the integrity proof mechanism and proposes a rank-annotated multibranch path structure and a data integrity detection algorithm. The proposed data integrity verification algorithm and two other integrity verification algorithms are compared in simulation experiments. The results show that, for 500 data blocks, the proposed scheme is about 10% faster than scheme 1 and about 5% faster than scheme 2 in computing time; as the number of operated data blocks grows, the execution times of schemes 1 and 2 increase with the number of data blocks, whereas the execution time of the proposed scheme remains unchanged, and its computational cost is also lower than that of schemes 1 and 2. The proposed scheme not only verifies the integrity of cloud storage data but also offers clear verification advantages, which gives it practical significance for big data integrity verification.
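The paper's LBT (multibranch path tree) is only outlined in the abstract. Below is a minimal Python sketch of a k-ary hash tree with proof generation and verification, offered as an illustration of how a multibranch integrity proof can work; all names are hypothetical and the paper's rank annotations are omitted.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks: list[bytes], k: int = 4) -> list[list[bytes]]:
    """Build a multibranch (k-ary) hash tree bottom-up.
    levels[0] holds the leaf hashes, levels[-1] the single root."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + k])) for i in range(0, len(level), k)]
        levels.append(level)
    return levels

def prove(levels: list[list[bytes]], index: int, k: int = 4):
    """Per level, record the node's position in its k-group and the
    sibling hashes needed to recompute the parent."""
    proof = []
    for level in levels[:-1]:
        g = (index // k) * k
        group = level[g:g + k]
        pos = index - g
        proof.append((pos, [x for i, x in enumerate(group) if i != pos]))
        index //= k
    return proof

def verify(block: bytes, proof, root: bytes) -> bool:
    cur = h(block)
    for pos, siblings in proof:            # rebuild the path to the root
        cur = h(b"".join(siblings[:pos] + [cur] + siblings[pos:]))
    return cur == root

# Usage: the server stores the tree; the verifier holds only the root.
blocks = [b"block0", b"block1", b"block2", b"block3", b"block4"]
levels = build_tree(blocks)
assert verify(blocks[4], prove(levels, 4), levels[-1][0])
```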


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 235 ◽  
Author(s):  
Bruno Colonetti ◽  
Erlon Cristian Finardi ◽  
Welington de Oliveira

Independent System Operators (ISOs) worldwide face the ever-increasing challenge of coping with uncertainties, which requires sophisticated algorithms for solving unit-commitment (UC) problems of increasing complexity in less and less time. Hence, decomposition methods are appealing options to produce easier-to-handle problems that can hopefully return good solutions in reasonable times. When applied to two-stage stochastic models, decomposition often yields subproblems that are embarrassingly parallel. Synchronous parallel-computing techniques are applied to the decomposable subproblem and frequently result in considerable time savings. However, due to the inherent run-time differences amongst the subproblems' optimization models, heterogeneous equipment, and communication overheads, synchronous approaches may underuse the computing resources. Consequently, asynchronous computing constitutes a natural enhancement to existing methods. In this work, we propose a novel extension of the asynchronous level decomposition to solve stochastic hydrothermal UC problems with mixed-integer variables in the first stage. In addition, we combine this novel method with an efficient task allocation to yield an innovative algorithm that far outperforms the current state-of-the-art. We provide convergence analysis of our proposal and assess its computational performance on a testbed consisting of 54 problems from a 46-bus system. Results show that our asynchronous algorithm outperforms its synchronous counterpart in terms of wall-clock computing time in 40% of the problems, providing time savings averaging about 45%, while also reducing the standard deviation of running times over the testbed in the order of 25%.
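The abstract contrasts synchronous barriers with asynchronous updates over embarrassingly parallel subproblems. The toy Python sketch below shows only that generic master-worker pattern (via concurrent.futures), not the authors' level decomposition or task-allocation scheme; the subproblem is a hypothetical stand-in.

```python
import random
import time
from concurrent.futures import ProcessPoolExecutor, as_completed

def solve_subproblem(scenario: int) -> float:
    """Toy stand-in for one scenario subproblem; the uneven run times
    are what make a synchronous barrier wasteful."""
    time.sleep(random.uniform(0.1, 1.0))
    return float(scenario)          # placeholder objective value

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(solve_subproblem, s) for s in range(8)]
        # Asynchronous: update the master step as each result arrives,
        # instead of waiting for the slowest subproblem of the batch.
        for fut in as_completed(futures):
            update = fut.result()
            # ... feed `update` into the master/level step here ...
```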


2019 ◽  
Vol 57 (3) ◽  
pp. 344
Author(s):  
Dung Xuan Nguyen ◽  
Ban Van Doan ◽  
Ngoc Thi Bich Do

Betweenness centrality is an important metric in graph theory and can be applied to the analysis of social networks. Research on betweenness centrality mainly focuses on reducing its computational complexity. Nowadays, the number of users in social networks is huge, so improving the computing time of betweenness centrality for social network applications is necessary. In this paper, we propose an algorithm that computes betweenness centrality by reducing similar nodes in the graph in order to reduce computing time. The proposed algorithm is compared with the Brandes algorithm [3] in terms of execution time, and our experiments on graph networks show that its computing time is lower.
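The proposed similar-node reduction is not detailed in the abstract. As the baseline the authors compare against, here is a compact Python sketch of the Brandes algorithm [3] for unweighted graphs: one BFS per source to count shortest paths, then a back-propagation of pair dependencies, for O(VE) total time.

```python
from collections import deque

def brandes_betweenness(adj: dict) -> dict:
    """Betweenness centrality for an unweighted graph given as an
    adjacency dict {node: [neighbors]}. For undirected graphs,
    halve the returned scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                     # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:                      # first visit
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:           # edge on a shortest path
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                 # reverse BFS order
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```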


2011 ◽  
Vol 36 (1) ◽  
pp. 60-63 ◽  
Author(s):  
Ratna Nandakumar ◽  
Lawrence Hotchkiss

The PROC NLMIXED procedure in the Statistical Analysis System (SAS) can be used to estimate parameters of item response theory (IRT) models. The data for this procedure are conventionally set up in a format called the "long format," and programs using it take a substantial amount of time to execute. This article describes a format called the "wide format" that estimates the parameters of an IRT model more efficiently. The wide format substantially reduces execution time for models with few parameters, but the time savings decline as the number of parameters increases.
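A minimal illustration of the two layouts, shown in pandas rather than SAS and with hypothetical response data: the wide format keeps one row per person with one column per item, while the long format expands to one row per person-item response.

```python
import pandas as pd

# Hypothetical binary responses of 3 persons to 2 items (1 = correct).
wide = pd.DataFrame({"person": [1, 2, 3],
                     "item1":  [1, 0, 1],
                     "item2":  [0, 0, 1]})

# Long format: one row per person-item response, so a dataset with
# P persons and I items has P*I rows instead of P.
long = wide.melt(id_vars="person", var_name="item", value_name="resp")
print(long.sort_values(["person", "item"]))
```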


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 755 ◽  
Author(s):  
Lukas Jancar ◽  
Marek Pagac ◽  
Jakub Mesicek ◽  
Petr Stefek

This article describes the design procedure of a topologically optimized scooter frame part: the rear heel of the frame, one of the four main parts of a scooter produced by stainless-steel 3D printing. The first part of the article deals with the definition of the design space and the determination of load cases for the topology calculation. The second part describes the topology optimization process itself and the creation of a volume body based on the calculation results. Finally, the final check using FEM (Finite Element Method) analysis and the optimization of the created Computer-Aided Design (CAD) data are shown. The article also reviews the partial iterations and the resulting versions of the designed part. Symmetry was used to define the boundary conditions, which led to computing time savings, and also during CAD model creation, where non-parametric surfaces were mirrored to shorten the design time.


2015 ◽  
Vol 7 (4) ◽  
pp. 43
Author(s):  
Raul Alberto Ribeiro Correia de Sousa

Euler's formula establishes the relationship between the trigonometric and exponential functions. In doing so it unifies two waves, a real and an imaginary one, that propagate through the complex numbers, establishing relations between integers. A complex wave anchored by zero and by a defined integer N can only assume certain oscillation modes. The first oscillation mode always corresponds to a prime number N, and the other modes to its multiples.

\(\psi(x) = x\, e^{i\left(\frac{n\pi}{N}x\right)}\)

Under the conditions described above, these waves and their admissible oscillation modes allow for primality testing of integers, the deduction of a new formula \(\pi(x)\) for counting prime numbers, and the identification of patterns in the prime number distribution, with computing time gains in the calculations. In this article, four theorems and one factorization rule are put forward, with consequences for prime number signaling, counting, and distribution. Furthermore, a relationship is established between this complex wave and a time-independent semi-classical harmonic oscillator whose spectrum of allowed energy levels consists only of prime numbers. Thus, the question of whether the prime number distribution is related to the energy levels of a physical system is answered in the affirmative.
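One plausible reading of the primality-testing claim, offered purely as an illustration and not as the paper's stated theorems: the imaginary component of the wave, \(x \sin(n\pi x / N)\), vanishes at an integer 0 < x < N exactly when n·x is a multiple of N, so some mode n in 2..N-1 produces an interior integer node only when N is composite. The Python sketch below encodes this assumed reading.

```python
import math

def interior_integer_nodes(N: int, n: int) -> list[int]:
    """Integers x in (0, N) where Im psi(x) = x*sin(n*pi*x/N) = 0,
    i.e. where n*x is a multiple of N."""
    step = N // math.gcd(n, N)
    return list(range(step, N, step))

def is_prime_via_modes(N: int) -> bool:
    # Assumed reading: N is prime iff no mode n = 2..N-1 yields an
    # interior integer node of the imaginary wave.
    return N > 1 and all(not interior_integer_nodes(N, n)
                         for n in range(2, N))

assert is_prime_via_modes(7) and not is_prime_via_modes(9)
```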


2019 ◽  
Vol 8 (4) ◽  
pp. 5160-5165

Feature selection is a powerful tool for identifying the important characteristics of data for prediction, and can therefore help avoid overfitting, improve prediction accuracy, and reduce execution time. Feature selection procedures are particularly important for support vector machines (SVMs), which are used for prediction on large datasets: the larger the dataset, the more computationally exhaustive and challenging it is to build a predictive model using the support vector classifier. This paper investigates how a feature selection approach based on the analysis of variance (ANOVA) can be optimized for SVMs to improve their execution time and accuracy. We introduce new conditions on the SVMs prior to running the ANOVA to optimize the performance of the support vector classifier, and we establish the bootstrap procedure as an alternative to cross-validation for model selection. We run our experiments on popular datasets and compare our results to existing modifications of SVMs with feature selection. We propose a number of ANOVA-SVM modifications that are simple to perform while significantly improving the accuracy and computing time of SVMs in comparison to existing methods such as the Mixed Integer Linear Feature Selection approach.
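A generic ANOVA-filter-plus-SVM pipeline in scikit-learn, shown as a baseline rather than the paper's specific modifications (and using cross-validation where the paper advocates the bootstrap): f_classif scores each feature by its ANOVA F-statistic against the class labels, and only the top-k features reach the SVM.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load a popular benchmark dataset (30 numeric features).
X, y = load_breast_cancer(return_X_y=True)

# Scale, keep the 10 features with the highest ANOVA F-score,
# then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=10),
                    SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```

Filtering before the SVM shrinks the kernel computation, which is where the execution time savings on large datasets come from.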


PAMM ◽  
2014 ◽  
Vol 14 (1) ◽  
pp. 45-46
Author(s):  
Tobias Gail ◽  
Sigrid Leyendecker ◽  
Sina Ober-Blöbaum
