CPU Time
Recently Published Documents

TOTAL DOCUMENTS: 402 (five years: 113)
H-INDEX: 21 (five years: 3)

Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 238
Author(s):  
Weiwei Li ◽  
Fajie Wang

This paper presents a precorrected-FFT (pFFT) accelerated singular boundary method (SBM) for acoustic radiation and scattering in the high-frequency regime. The SBM is a boundary-type collocation method that is truly free of mesh and integration and is easy to program. However, owing to the expensive CPU time and memory required to solve a fully populated interpolation matrix equation, the method is usually limited to low-frequency acoustic problems. A new pFFT scheme is introduced to overcome this drawback. Since models with large numbers of collocation points can be handled by the new pFFT-accelerated SBM (pFFT-SBM), high-frequency acoustic problems can be simulated. The results of numerical examples show that the new pFFT-SBM possesses an obvious advantage for high-frequency acoustic problems.
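
The core of a pFFT-type acceleration is replacing the dense O(N²) matrix-vector product inside an iterative solver with an FFT-based product. The SBM kernels themselves are not reproduced here; the sketch below is only a minimal illustration of the same idea for a toy translation-invariant (circulant) kernel, with the matrix and right-hand side invented for the example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 4096
# First column of a toy circulant "interpolation matrix": a smooth decaying
# kernel, a stand-in for the SBM kernel. The circulant structure is what
# lets the FFT do the work.
dist = np.minimum(np.arange(n), n - np.arange(n))
col = 1.0 / (1.0 + dist.astype(float))
col[0] = 2.0                      # boosted diagonal keeps the system well posed
eig = np.fft.fft(col)             # eigenvalues of a circulant matrix

def matvec(x):
    # O(n log n) circulant matrix-vector product via the FFT
    return np.fft.ifft(eig * np.fft.fft(x)).real

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.random.default_rng(0).standard_normal(n)
x, info = gmres(A, b)             # iterative solve; the matrix is never formed
print(info, np.linalg.norm(matvec(x) - b))
```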


MAUSAM ◽  
2022 ◽  
Vol 44 (2) ◽  
pp. 135-142
Author(s):  
DHANNA SINGH ◽  
SUMAN GOYAL

The functions of a software package of six programmes developed for retrieving, decoding, quality control and formatting of surface and upper-air coded data are presented here in brief. Intelligent use has been made of Fortran-77 facilities to make these programmes extremely efficient. Global surface and upper-air data received on the GTS for an entire day are sorted, decoded and formatted after quality control in about three and a half minutes (CPU time) on a VAX 8810 system. The programmes manage the files and can also be used for decoding monthly data files of hard-copy data. For coding of data, the FGGE code has been used with very minor modifications. The results of the quality-control checks and the number of reports received hour-wise for each synoptic hour for each WMO block are monitored; information from both is displayed on the terminal in tabular form and also recorded on disk for monthly archival.
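
As a rough illustration of the decode-then-quality-control pipeline described above (the actual package is Fortran-77 and decodes FGGE-coded GTS bulletins), here is a Python sketch using a made-up report layout and gross-limit checks; the field format and limits are assumptions for the example.

```python
from collections import Counter

# Hypothetical, simplified coded report: IIiii TTT PPPPP
# (station number whose first two digits are the WMO block, temperature in
# tenths of a degree C, pressure in tenths of a hPa). This layout is
# invented for illustration only.
RAW = ["72201 215 10132", "03772 187 09987", "72845 9999 10051"]

def decode(report):
    station, temp, pres = report.split()
    return station[:2], int(temp) / 10.0, int(pres) / 10.0

def passes_qc(t, p):
    # Gross-limit range checks in the spirit of the package's quality control
    return -90.0 <= t <= 60.0 and 850.0 <= p <= 1100.0

per_block = Counter()
for rep in RAW:
    block, t, p = decode(rep)
    if passes_qc(t, p):
        per_block[block] += 1
print("reports passing QC per WMO block:", dict(per_block))
```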


Molecules ◽  
2021 ◽  
Vol 26 (24) ◽  
pp. 7584
Author(s):  
Iryna O. Kravets ◽  
Dmytro V. Dudenko ◽  
Alexander E. Pashenko ◽  
Tatiana A. Borisova ◽  
Ganna M. Tolstanova ◽  
...  

We elaborate new models for the ACE and ACE2 receptors with excellent predictive power compared with previous models. We propose promising workflows for handling huge compound collections, enabling us to discover optimized protocols for virtual-screening management. The efficacy of the elaborated roadmaps is demonstrated through the cost-effective molecular docking of 1.4 billion compounds; savings of up to 10-fold in CPU time are demonstrated. These developments allowed us to evaluate ACE2/ACE selectivity in silico, a crucial checkpoint for developing chemical probes for ACE2.
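
The abstract does not spell out the protocols, but the usual way such CPU savings arise is a staged funnel: a cheap surrogate score prunes the library so the expensive docking call runs only on the survivors. The sketch below illustrates that idea on synthetic scores; the 5% cutoff, the score model and `expensive_dock` are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # stand-in for a much larger library

# Hypothetical scores: a fast surrogate correlated with an expensive
# docking score (lower is better, as for docking energies).
true_score = rng.standard_normal(n)
cheap_score = true_score + 0.5 * rng.standard_normal(n)

def expensive_dock(idx):
    # Placeholder for a costly docking call; returns the "true" score here.
    return true_score[idx]

# Stage 1: keep the best 5% by the cheap surrogate. Stage 2: dock only those.
keep = np.argsort(cheap_score)[: n // 20]
docked = {i: expensive_dock(i) for i in keep}
hits = sorted(docked, key=docked.get)[:100]
print(f"{len(hits)} hits shortlisted with {len(docked)} expensive calls "
      f"instead of {n} ({n / len(docked):.0f}x fewer)")
```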


Author(s):  
Richard Olatokunbo Akinola

Aims/Objectives: To compare the performance of four Sinc methods for the numerical approximation of indefinite integrals with algebraic or logarithmic end-point singularities. Methodology: The first two quadrature formulas were proposed by Haber based on the sinc method; the third is Stenger's Single Exponential (SE) formula, and the fourth is Tanaka et al.'s Double Exponential (DE) sinc method. An application of the four quadrature formulas to numerical examples reveals faster convergence to the exact solution by Tanaka et al.'s DE sinc method than by the other three formulae. In addition, we compared the CPU time of the four quadrature methods, which was not done in an earlier work by the same author. Conclusion: Haber's formula A is the fastest, as revealed by the CPU time.
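
For flavor, here is a compact double-exponential (tanh-sinh) rule of the kind underlying Tanaka et al.'s DE method, applied to a definite integral with an algebraic end-point singularity, ∫ from -1 to 1 of (1-x)^(-1/2) dx = 2√2. The step size h and truncation N are illustrative choices, and this is the classic DE rule for definite integrals rather than the indefinite-integration formulas compared in the paper.

```python
import math
import time

def de_quadrature(f, h=0.1, N=50):
    # Tanh-sinh rule for the interval (-1, 1):
    #   x_k = tanh((pi/2) sinh(kh)),
    #   w_k = h (pi/2) cosh(kh) / cosh((pi/2) sinh(kh))^2.
    total = 0.0
    for k in range(-N, N + 1):
        u = 0.5 * math.pi * math.sinh(k * h)
        x = math.tanh(u)
        w = h * 0.5 * math.pi * math.cosh(k * h) / math.cosh(u) ** 2
        total += w * f(x, u)   # u is passed so f can stay stable near x = 1
    return total

def f(_x, u):
    # (1-x)^(-1/2) with 1 - tanh(u) = 2 exp(-2u) / (1 + exp(-2u)),
    # which avoids catastrophic cancellation near the singular endpoint.
    one_minus_x = 2.0 * math.exp(-2.0 * u) / (1.0 + math.exp(-2.0 * u))
    return 1.0 / math.sqrt(one_minus_x)

exact = 2.0 * math.sqrt(2.0)
t0 = time.perf_counter()
approx = de_quadrature(f)
cpu = time.perf_counter() - t0
print(f"error = {abs(approx - exact):.2e}, cpu = {cpu * 1e3:.3f} ms")
```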


2021 ◽  
Vol 20 ◽  
pp. 362-371
Author(s):  
Alexander Zemliak

The minimization of the processor time of design can be formulated as a problem of time minimization for the transitional process of a dynamic system. A special control vector that changes the internal structure of the equations of the optimization procedure serves as the principal tool for searching for the best strategies with minimal CPU time. In this case the well-known maximum principle of Pontryagin is the best theoretical approach for finding the optimal structure of the control vector. A practical approach to realizing the maximum principle is based on analyzing the behavior of a Hamiltonian for various optimization strategies. The possibility of applying the maximum principle to the problem of optimization of electronic circuits is analyzed. It is shown that, although the optimization problem is formulated as a nonlinear task and the maximum principle is therefore not a sufficient condition for obtaining a minimum of the functional, solutions can be obtained in the form of local minima. The relative acceleration in CPU time for the best strategy found by means of the maximum principle, compared with the traditional approach, is two to three orders of magnitude.
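
For reference, the Hamiltonian apparatus invoked here is the standard one; this is textbook material, not the paper's specific circuit formulation. For a controlled system with state x, costate ψ and control u:

```latex
\dot{x} = f(x,u), \qquad
H(x,\psi,u) = \psi^{\mathsf{T}} f(x,u), \qquad
\dot{\psi} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t),\, \psi(t),\, u\bigr).
```

Comparing the value of H along trajectories generated by different control strategies, as the author does, singles out the strategy whose control keeps H maximal at each instant.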


Author(s):  
Alexander Zemliak

Purpose: In this paper, the previously developed idea of generalized optimization of circuits for deterministic methods is extended to the genetic algorithm (GA) to demonstrate new possibilities for solving an optimization problem that enhance accuracy and significantly reduce computing time.
Design/methodology/approach: The disadvantages of GAs are premature convergence to local minima and an increase in computer operation time when a sufficiently high accuracy for the minimum is required. The idea of generalized optimization of circuits, previously developed for deterministic optimization methods, is built into the GA and allows one to implement various optimization strategies based on the GA. The shape of the fitness function, as well as the length and structure of the chromosomes, is determined by a control vector artificially introduced within the framework of generalized optimization. This study found that changing the control vector that determines the method for calculating the fitness function makes it possible to bypass local minima and find the global minimum with high accuracy and a significant reduction in central processing unit (CPU) time.
Findings: A structure of the control vector is found that makes it possible to reduce the CPU time by several orders of magnitude and increase the accuracy of the optimization process compared with the traditional approach for GAs.
Originality/value: It was demonstrated that incorporating the idea of generalized optimization into the body of a stochastic optimization method leads to qualitatively new properties of the optimization process, increasing the accuracy and minimizing the CPU time.
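
The paper's control vector operates on the circuit-optimization model itself, which is not reproduced here. As a minimal stand-in, the sketch below shows a real-coded GA whose fitness formulation is switched by a control flag, the structural idea in miniature; the objective, operators and parameter values are all invented for the illustration.

```python
import random

random.seed(0)
DIM, POP, GENS = 5, 40, 200

def fitness(ind, control):
    # The control flag switches the fitness formulation, a toy analogue of
    # the paper's control vector (which alters the optimization model itself).
    s = sum(v * v for v in ind)
    return s if control == 0 else s + 0.1 * sum(abs(v) for v in ind)

def evolve(control):
    pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda ind: fitness(ind, control))
        nxt = pop[:2]                                # elitism
        while len(nxt) < POP:
            a, b = random.sample(pop[:POP // 2], 2)  # truncation selection
            cut = random.randrange(1, DIM)
            child = a[:cut] + b[cut:]                # one-point crossover
            child[random.randrange(DIM)] += random.gauss(0, 0.1)  # mutation
            nxt.append(child)
        pop = nxt
    return min(sum(v * v for v in ind) for ind in pop)  # raw objective

for c in (0, 1):
    print(f"control={c}: best objective {evolve(c):.3e}")
```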


Author(s):  
Mark I. Modebei ◽  
Olumide O. Olaiya ◽  
Ignatius P. Ngwongwo

A block hybrid method with three off-step points is presented in this work for the direct approximation of solutions of third-order initial and boundary value problems (IVPs and BVPs). The off-step points are formulated such that they exist only on a single step at a time; hence, these points are shifted to three positions, respectively, in order to obtain three different integrators for computational analysis. This analysis includes the order of the methods, consistency, stability and convergence, global error, number of function evaluations and CPU time. The superiority of these methods over existing methods is established numerically on different test problems from the literature.
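
The block integrators' coefficients are not given in the abstract, so they are not reproduced here. Below is only a sketch of the reported evaluation metrics (global error and CPU time) on a representative third-order test problem, using SciPy's general-purpose solver as a stand-in integrator; the test equation and tolerances are choices made for the illustration.

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

# Third-order test problem y''' = -y', y(0)=0, y'(0)=1, y''(0)=0,
# with exact solution y = sin(t), reduced to the first-order system
# u = (y, y', y'').
def rhs(t, u):
    return [u[1], u[2], -u[1]]

t_end = 10.0
t0 = time.perf_counter()
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
cpu = time.perf_counter() - t0

ts = np.linspace(0.0, t_end, 401)
global_error = np.max(np.abs(sol.sol(ts)[0] - np.sin(ts)))
print(f"max global error = {global_error:.2e}, CPU time = {cpu * 1e3:.2f} ms")
```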


Author(s):  
Rizk M. Rizk-Allah ◽  
O. Saleh ◽  
Enas A. Hagag ◽  
Abd Allah A. Mousa

Nowadays, optimization problems are becoming difficult and complex, and traditional methods are becoming inefficient at reaching globally optimal solutions. Meanwhile, a huge number of meta-heuristic algorithms have been suggested to overcome the shortcomings of traditional methods. The Tunicate Swarm Algorithm (TSA) is a new biologically inspired meta-heuristic optimization algorithm that mimics jet propulsion and swarm intelligence during the search for a food source. In this paper, we suggest an enhancement to TSA, named the Enhanced Tunicate Swarm Algorithm (ETSA), based on a novel searching strategy to improve the exploration and exploitation abilities. The proposed ETSA is applied to 20 unimodal, multimodal and fixed-dimensional benchmark test functions and compared with other algorithms. Statistical measures, error analysis and the Wilcoxon test have affirmed the robustness and effectiveness of the ETSA. Furthermore, the scalability of the ETSA is confirmed using high dimensions, and the results show that the ETSA is least affected by increasing the dimension. Additionally, the CPU time of the proposed algorithms is reported; the ETSA requires less CPU time than the others for most functions. Finally, the proposed algorithm is applied to an important electrical application, the Economic Dispatch Problem, and the results affirm its applicability to practical optimization tasks.
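
The TSA/ETSA update equations are not given in the abstract, so the sketch below only mirrors the evaluation protocol described: repeated runs on a benchmark function, error against the known optimum, total CPU time, and a Wilcoxon signed-rank test on the paired errors. SciPy's differential evolution and dual annealing stand in for the compared algorithms; the benchmark, dimension and run count are illustrative.

```python
import time
import numpy as np
from scipy.stats import wilcoxon
from scipy.optimize import differential_evolution, dual_annealing

# Sphere benchmark; the known optimum is 0 at the origin.
def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

bounds = [(-100.0, 100.0)] * 5
errors = {"DE": [], "DA": []}
times = {"DE": 0.0, "DA": 0.0}
for seed in range(10):
    for name, solver in (("DE", differential_evolution), ("DA", dual_annealing)):
        t0 = time.perf_counter()
        res = solver(sphere, bounds, seed=seed)
        times[name] += time.perf_counter() - t0
        errors[name].append(res.fun)        # error vs the known optimum 0

stat, p = wilcoxon(errors["DE"], errors["DA"])  # paired signed-rank test
print(f"median errors: DE={np.median(errors['DE']):.2e}, "
      f"DA={np.median(errors['DA']):.2e}")
print(f"total CPU: DE={times['DE']:.2f}s, DA={times['DA']:.2f}s, "
      f"Wilcoxon p={p:.3g}")
```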


Author(s):  
Luciano Costa ◽  
Claudio Contardo ◽  
Guy Desaulniers ◽  
Julian Yarkony

Column generation (CG) algorithms are well known to suffer from convergence issues due mainly to the degenerate structure of their master problem and the instability of the dual variables involved in the process. In the literature, several strategies have been proposed to overcome this issue. These techniques rely either on the modification of the standard CG algorithm or on some prior information about the set of dual optimal solutions. In this paper, we propose a new stabilization framework, which relies on the dynamic generation of aggregated rows from the CG master problem. To evaluate the performance of our method and its flexibility, we consider instances of three different problems, namely, vehicle routing with time windows (VRPTW), bin packing with conflicts (BPPC), and multiperson pose estimation (MPPEP). When solving the VRPTW, the proposed stabilized CG method yields significant improvements in terms of CPU time and number of iterations with respect to a standard CG algorithm. Huge reductions in CPU time are also achieved when solving the BPPC and the MPPEP. For the latter, our method has been shown to be competitive with a tailored method. Summary of Contribution: Column generation (CG) algorithms are among the most important and studied solution methods in operations research. CG algorithms are suitable for coping with large-scale problems arising from several real-life applications. The present paper proposes a generic stabilization framework to address two of the main issues found in a CG method: degeneracy in the master problem and massive instability of the dual variables. The newly devised method, called dynamic separation of aggregated rows (dyn-SAR), relies on an extended master problem that contains redundant constraints obtained by aggregating constraints from the original master problem formulation. This new formulation is solved in a column/row generation fashion. The efficacy of the proposed method is tested through an extensive experimental campaign, in which we solve three different problems that differ considerably in their constraints and objective function. Despite being a generic framework, dyn-SAR requires the embedded CG algorithm to be tailored to the application at hand.
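
For readers new to CG, a minimal (unstabilized) column generation loop is sketched below on the classic cutting-stock problem: a restricted master LP priced by an unbounded-knapsack subproblem. It is the dual oscillation of this alternation that stabilization methods such as dyn-SAR are designed to damp; the instance data are invented for the example, and this is not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

L = 100                                # roll length
sizes = np.array([45, 36, 31, 14])     # piece lengths
demand = np.array([97, 610, 395, 211]) # pieces required

def price(duals):
    """Most valuable cutting pattern under the duals (unbounded knapsack DP)."""
    best = np.zeros(L + 1)
    choice = np.full(L + 1, -1)
    for cap in range(1, L + 1):
        for i, s in enumerate(sizes):
            if s <= cap and best[cap - s] + duals[i] > best[cap]:
                best[cap] = best[cap - s] + duals[i]
                choice[cap] = i
    pattern, cap = np.zeros(len(sizes)), L
    while choice[cap] >= 0:            # reconstruct the optimal pattern
        pattern[choice[cap]] += 1
        cap -= sizes[choice[cap]]
    return best[L], pattern

# Restricted master starts with one single-piece pattern per size.
columns = [np.eye(len(sizes))[i] * (L // s) for i, s in enumerate(sizes)]
for it in range(50):
    A = np.column_stack(columns)
    # Master LP: minimize rolls used s.t. A x >= demand, x >= 0.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals     # duals of the >= demand rows
    value, pattern = price(duals)
    if value <= 1.0 + 1e-9:            # reduced cost 1 - value >= 0: optimal
        break
    columns.append(pattern)            # enter the new column
print(f"LP bound after {it + 1} master solves: {res.fun:.2f} rolls")
```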


2021 ◽  
Author(s):  
Phillip Lee-Ming Wong

One of the greater issues in Genetic Programming (GP) is the computational effort required to run the evolution and discover a good solution. Phenomena such as program bloat (where genetic programs rapidly grow in size) can quickly exhaust available memory resources and slow down the evolutionary process, while the heavy cost of performing fitness evaluation can make problems with a lot of available data very slow to solve. These issues may limit the tasks to which GP can appropriately be applied, as well as inhibit its use in time- or space-sensitive environments. In this thesis, we develop solutions to some of these issues of GP computational cost.

First, we develop an algebraic program simplification method based on simple rules and hashing techniques, and use this method in conjunction with standard GP on a variety of tasks. Our results suggest that program simplification can lead to a significant reduction in program size without significantly changing the effectiveness of the systems in finding solution programs.

Secondly, we analyse the effects of program simplification on the internal GP "building blocks" to investigate whether simplification is a destructive or constructive force. Using two models for building blocks (numerical nodes and the more complex fixed-depth subtree), we track building blocks through GP runs on a symbolic regression problem, both with and without simplification. We find that the program simplification process can both disrupt and construct building blocks in the GP populations. However, GP systems using simplification appear to retain important building blocks, and the simplification process appears to lead to an increase in genetic diversity. These findings may help explain why using simplification does not reduce the effectiveness of GP systems in solving tasks.

Lastly, we develop two methods of reducing the cost of fitness evaluation by reducing the number of node evaluations performed. The first method is elitism avoidance, which avoids re-evaluating programs that have been placed in the population through elitism reproduction. This method reduces the CPU time for evolving solutions on six different GP tasks. The second method is a subtree caching mechanism, which stores fitness evaluations for subtrees in a cache so that they may be fetched when the subtrees are encountered in future fitness evaluations. Results suggest that this mechanism can significantly reduce both the number of node evaluations and the overall CPU time used in evolving solutions, without reducing the fitness of the solutions produced.
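
As a miniature of the subtree-caching idea (hashing makes the cache lookups cheap, here via Python's built-in tuple hashing rather than the thesis's scheme), the sketch below counts node evaluations for a small program tree with and without a cache keyed by (subtree, input); the program and inputs are invented for the example.

```python
# Programs are nested tuples; leaves are the variable 'x' or float constants.
# PROG contains the repeated subtree ('add', 'x', 1.0), so caching pays off.
PROG = ('add', ('mul', ('add', 'x', 1.0), ('add', 'x', 1.0)),
               ('mul', ('add', 'x', 1.0), 2.0))

node_evals = 0

def evaluate(node, x, cache=None):
    """Evaluate a program tree; cache maps (subtree, input) -> value."""
    global node_evals
    if cache is not None and (node, x) in cache:
        return cache[(node, x)]        # cache hit: no node evaluation
    node_evals += 1
    if node == 'x':
        val = x
    elif isinstance(node, float):
        val = node
    else:
        op, a, b = node
        va, vb = evaluate(a, x, cache), evaluate(b, x, cache)
        val = va + vb if op == 'add' else va * vb
    if cache is not None:
        cache[(node, x)] = val
    return val

for use_cache in (False, True):
    node_evals, cache = 0, ({} if use_cache else None)
    total = sum(evaluate(PROG, float(x), cache) for x in range(100))
    print(f"cache={use_cache}: node evaluations = {node_evals}")
```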

