Limitations of computing time savings in variational multirate integrations

PAMM ◽  
2014 ◽  
Vol 14 (1) ◽  
pp. 45-46
Author(s):  
Tobias Gail ◽  
Sigrid Leyendecker ◽  
Sina Ober-Blöbaum


2021 ◽  
Vol 1 (1) ◽  
pp. 20-27
Author(s):  
Letnan Kolonel Elektronika Imat Rakhmat Hidayat, S.T., M.Eng

Prime numbers are of growing importance in computer science and number theory, creating the need for tools that generate them with both hardware effectiveness and storage efficiency. A bit-array structure, which subdivides an aggregate of equally typed data elements into individual bits, can be used both to generate regular prime-number sequence patterns and to store the resulting sequences. Primes underpin public-key cryptography algorithms, and hash tables perform best when their size is prime, since this minimizes collisions. Determining the pattern of a prime sequence of very large size is not easy, so the problem becomes finding the quickest way to generate very large prime sequences. Searching for such sequences on a single processor is inefficient because of the long computing time required, while using multiple processors raises cost issues and requires new software. A prime-number generator based on a bit-array structure is therefore expected to overcome the difficulty of finding prime-sequence patterns without resorting to multiple processors, while also minimizing time complexity. The execution-time savings are visible in the research data: on the input 676,999,999 the Atkin algorithm takes 4,235,747.00 seconds to execute, whereas the bit-array algorithm on the same input takes 13,955.00 seconds, a difference in execution time of 4,221,792.00 seconds.
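A minimal sketch of the bit-array idea (my illustration; the paper pairs the bit array with the Atkin sieve, while this sketch uses the simpler Eratosthenes marking scheme). Storing one bit per odd candidate keeps an input like 676,999,999 at roughly 40 MB, which is what makes a single-processor run of this size tractable:

```python
def bit_array_sieve(limit):
    """Sieve storing one bit per odd number; bit i represents 2*i + 3."""
    size = (limit - 1) // 2
    bits = bytearray((size + 7) // 8)  # all bits 0 = "assumed prime"

    def is_marked(i):
        return bits[i >> 3] & (1 << (i & 7))

    def mark(i):
        bits[i >> 3] |= 1 << (i & 7)

    primes = [2]
    for i in range(size):
        if not is_marked(i):
            p = 2 * i + 3
            primes.append(p)
            # Mark odd multiples p*p, p*p + 2p, ... as composite.
            for j in range((p * p - 3) // 2, size, p):
                mark(j)
    return primes

print(bit_array_sieve(100))  # [2, 3, 5, 7, 11, ..., 97]
```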


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 235 ◽  
Author(s):  
Bruno Colonetti ◽  
Erlon Cristian Finardi ◽  
Welington de Oliveira

Independent System Operators (ISOs) worldwide face the ever-increasing challenge of coping with uncertainties, which requires sophisticated algorithms for solving unit-commitment (UC) problems of increasing complexity in less and less time. Hence, decomposition methods are appealing options to produce easier-to-handle problems that can hopefully return good solutions in reasonable times. When applied to two-stage stochastic models, decomposition often yields subproblems that are embarrassingly parallel. Synchronous parallel-computing techniques are applied to the decomposable subproblems and frequently result in considerable time savings. However, due to inherent run-time differences among the subproblems' optimization models, heterogeneous hardware, and communication overheads, synchronous approaches may underuse the computing resources. Consequently, asynchronous computing constitutes a natural enhancement to existing methods. In this work, we propose a novel extension of the asynchronous level decomposition to solve stochastic hydrothermal UC problems with mixed-integer variables in the first stage. In addition, we combine this novel method with an efficient task allocation to yield an innovative algorithm that far outperforms the current state of the art. We provide a convergence analysis of our proposal and assess its computational performance on a testbed consisting of 54 problems from a 46-bus system. Results show that our asynchronous algorithm outperforms its synchronous counterpart in terms of wall-clock computing time in 40% of the problems, providing time savings averaging about 45%, while also reducing the standard deviation of running times over the testbed on the order of 25%.
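The synchronous/asynchronous distinction can be made concrete with a small scheduling sketch (my illustration, not the authors' algorithm); threads stand in for the parallel workers that would solve the scenario subproblems:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_subproblem(scenario):
    """Stand-in for one scenario subproblem; a real code would call a MIP/LP solver."""
    time.sleep(random.uniform(0.1, 1.0))  # uneven run times across subproblems
    return scenario, random.random()      # (scenario id, e.g. cut information)

scenarios = range(8)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Synchronous scheme: the master waits until *every* subproblem of the
    # iteration has finished, so the slowest subproblem paces each iteration.
    sync_results = list(pool.map(solve_subproblem, scenarios))

    # Asynchronous scheme: the master consumes results as they arrive and can
    # update its model (e.g. add a cut, adjust the level) without waiting
    # for stragglers.
    futures = [pool.submit(solve_subproblem, s) for s in scenarios]
    for done in as_completed(futures):
        scenario, cut_info = done.result()
        print(f"updating master with result from scenario {scenario}")
```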


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 755 ◽  
Author(s):  
Lukas Jancar ◽  
Marek Pagac ◽  
Jakub Mesicek ◽  
Petr Stefek

This article describes the design procedure for a topologically optimized scooter frame part: the rear heel of the frame, one of the four main parts of a scooter produced by stainless-steel 3D printing. The first part of the article deals with the definition of the design area and the determination of load cases for the topology calculation. The second part describes the topology optimization process itself and the creation of a volume body based on the calculation results. Finally, the final check using an FEM (Finite Element Method) analysis and the optimization of the created Computer-Aided Design (CAD) data are shown. The article also reviews the partial iterations and resulting versions of the designed part. Symmetry was used to define the boundary conditions, which led to computing time savings, as well as during the CAD model creation, where non-parametric surfaces were mirrored to shorten the design time.
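A minimal sketch of how symmetry halves the model (my illustration, not the article's workflow): only half the part is meshed and solved, nodes on the symmetry plane are constrained against motion normal to it, and the full geometry is recovered by mirroring.

```python
import numpy as np

# Node coordinates of the half-model (x, y, z); the plane x = 0 is the
# symmetry plane, so all these nodes have x >= 0.  Illustrative data only.
half_nodes = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.5, 0.0],
    [2.0, 1.0, 0.5],
])

# Symmetry boundary condition for the FEM solve: nodes lying on the plane
# x = 0 are constrained against displacement normal to the plane (u_x = 0).
on_plane = np.isclose(half_nodes[:, 0], 0.0)

# After solving on the half-model, mirror across x = 0 to rebuild the full
# geometry, dropping the duplicated nodes on the plane itself.
mirrored = half_nodes[~on_plane] * np.array([-1.0, 1.0, 1.0])
full_nodes = np.vstack([half_nodes, mirrored])
print(full_nodes)
```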


Author(s):  
M.A. O'Keefe ◽  
Sumio Iijima

We have extended the multi-slice method of computing many-beam lattice images of perfect crystals to calculations for imperfect crystals using the artificial superlattice approach. Electron waves scattered from faulted regions of crystals are distributed continuously in reciprocal space, and all these waves interact dynamically with each other to give diffuse scattering patterns. In the computation, this continuous distribution can be sampled only at a finite number of regularly spaced points in reciprocal space, so finer sampling gives an improved approximation. A larger cell also allows us to defocus the objective lens further before adjacent defect images overlap and produce spurious computational Fourier images. However, smaller cells allow us to sample the direct-space cell more finely; since the two-dimensional arrays in our program are limited to 128×128 and the sampling interval should be less than 1/2 Å (and preferably only 1/4 Å), superlattice sizes are limited to 40 to 60 Å. Apart from finding a compromise superlattice cell size, computing time must be conserved.
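The quoted size limit follows directly from the array dimensions and the sampling requirement; a quick check of the arithmetic (my illustration):

```python
# Maximum superlattice edge = array dimension * direct-space sampling interval.
array_dim = 128
for interval in (0.5, 0.25):  # Å, the two sampling intervals quoted above
    print(f"{interval} Å sampling -> max cell {array_dim * interval:.0f} Å")
# 0.5 Å gives 64 Å and 0.25 Å gives 32 Å, bracketing the quoted
# 40-60 Å compromise between cell size and sampling fineness.
```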


Author(s):  
P.-F. Staub ◽  
C. Bonnelle ◽  
F. Vergand ◽  
P. Jonnard

Characterizing dimensionally and chemically nanometric structures, such as surface segregation or interface phases, can be performed efficiently using electron probe (EP) techniques at very low excitation conditions, i.e. using small incident energies (0.5 < E0 < 5 keV) and low incident overvoltages (1 < U0 < 1.7). In such extreme conditions, classical analytical EP models are generally pushed to their validity limits in terms of accuracy and physical consistency, and Monte Carlo simulations are not convenient as routine tools because of their cost in computing time. In this context, we have developed an intermediate procedure, called IntriX, in which the ionization depth distributions Φ(ρz) are numerically reconstructed by integration of basic macroscopic physical parameters describing the electron beam/matter interaction, all of them being available in pre-established analytical forms. IntriX's procedure consists of dividing the ionization depth distribution into three separate contributions:
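Schematically, the reconstruction amounts to summing three analytic contributions and integrating the result. The sketch below uses placeholder functional forms only, since the excerpt truncates before naming the paper's actual three contributions:

```python
import numpy as np

# Hypothetical placeholder forms; the paper's three actual contributions
# are not given in this excerpt.
def contribution_a(rho_z):
    return np.exp(-rho_z / 40.0)

def contribution_b(rho_z):
    return 0.5 * np.exp(-((rho_z - 30.0) / 25.0) ** 2)

def contribution_c(rho_z):
    return 0.1 * np.exp(-rho_z / 100.0)

rho_z = np.linspace(0.0, 200.0, 400)  # mass depth grid (arbitrary units)
phi = contribution_a(rho_z) + contribution_b(rho_z) + contribution_c(rho_z)

# Total generated intensity: trapezoidal integration of the depth distribution.
total = np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(rho_z))
print(f"integrated ionization ~ {total:.1f} (arbitrary units)")
```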


Author(s):  
D.S. Patrick ◽  
L.C. Wagner ◽  
P.T. Nguyen

Abstract Failure isolation and debug of CMOS integrated circuits has, over the past several years, become increasingly difficult to perform on standard failure-analysis functional testers. Due to the increase in pin counts, clock speeds, and complexity, and the large number of power supply pins on current ICs, smaller and less well equipped testers are often unable to test these newer devices. To reduce the time of analysis and improve failure-isolation capabilities for failing ICs, failure isolation is now performed using the same production testers used in product development, multiprobe, and final test. With these production testers, the test hardware, program, and pattern sets are already available and ready for use. By using a special interface that docks the production test head to failure-isolation equipment such as the emission microscope, liquid crystal station, and E-beam prober, the analyst can quickly and easily isolate the failure on an IC. This also enables engineers in design, product engineering, and wafer-fab yield enhancement groups to use this equipment to quickly solve critical design and yield issues. Significant cycle-time savings have been achieved with the migration to this method of electrical stimulation for failure isolation.


2000 ◽  
Vol 41 (7) ◽  
pp. 103-110 ◽  
Author(s):  
G. Stanfield ◽  
E. Carrington ◽  
F. Albinet ◽  
B. Compagnon ◽  
N. Dumoutier ◽  
...  

With funding from the European Commission, a consortium of members of the European Water Research Institutes is carrying out a programme of work with the objective of optimising and standardising a method for determining the presence in water of (oo)cysts of Cryptosporidium and Giardia. Each of the stages of the conventional analysis procedure (initial concentration, recovery, and identification and enumeration) is being investigated, and the relative merits of existing and new methods are being assessed. Newly developed filters (Envirochek and Filta-Max) have been shown to be more efficient for initial recovery of (oo)cysts from water than the previously used Cuno cartridge filters. In addition, for the analysis of raw waters, flocculation with ferric sulphate has been shown to give recoveries similar to the Envirochek and Filta-Max. Modern purification systems such as immunomagnetic separation have also been assessed and found to offer some advantages over flotation, although optimisation of the latter has brought improved efficiency. Preliminary assessment of solid-phase cytometry has indicated that this technique could offer significant time savings compared to conventional microscopic counting. The results of the study will be used to propose a revised standard method to CEN.


Author(s):  
Vaishali R. Kulkarni ◽  
Veena Desai ◽  
Raghavendra Kulkarni

Background & Objective: The location of sensors is important information in wireless sensor networks for monitoring, tracking, and surveillance applications. Accurate and quick estimation of the location of sensor nodes plays an important role. Localization refers to creating location awareness for as many sensor nodes as possible. Multi-stage localization of sensor nodes using bio-inspired heuristic algorithms is the central theme of this paper. Methodology: Biologically inspired heuristic algorithms offer the advantages of simplicity, resource efficiency, and speed. Four such algorithms have been evaluated in this paper for distributed localization of sensor nodes. Two evolutionary-computation-based algorithms, namely the cultural algorithm and the genetic algorithm, have been presented to optimize the localization process and minimize the localization error. The results of these algorithms have been compared with those of swarm intelligence-based optimization algorithms, namely the firefly algorithm and the bee algorithm. Simulation results and an analysis of stage-wise localization in terms of number of localized nodes, computing time, and accuracy have been presented. The trade-off between localization accuracy and speed has been investigated. Results: The comparative analysis shows that the firefly algorithm performs the localization most accurately but takes the longest time to converge. Conclusion: Further, the cultural algorithm performs the localization very quickly but results in high localization error.
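As an illustration of the swarm-based localization being compared (a simplified firefly update with made-up anchor coordinates, not the paper's implementation): each candidate position is scored by how well its distances to known anchor nodes match the measured ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # nodes with known positions
true_pos = np.array([4.0, 6.0])                              # unknown node
ranges = np.linalg.norm(anchors - true_pos, axis=1)          # measured distances

def error(p):
    """Localization error: squared mismatch between candidate and measured ranges."""
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

# Simplified firefly algorithm: dimmer fireflies move toward brighter ones,
# with attractiveness decaying with squared distance, plus a small random walk.
fireflies = rng.uniform(0, 10, size=(20, 2))
for _ in range(100):
    brightness = np.array([-error(p) for p in fireflies])
    for i in range(len(fireflies)):
        for j in range(len(fireflies)):
            if brightness[j] > brightness[i]:
                r2 = np.sum((fireflies[j] - fireflies[i]) ** 2)
                beta = 1.0 * np.exp(-0.1 * r2)           # attractiveness
                step = 0.05 * (rng.random(2) - 0.5)      # random walk term
                fireflies[i] += beta * (fireflies[j] - fireflies[i]) + step

best = min(fireflies, key=error)
print(best)  # should approach the true position [4, 6]
```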


Author(s):  
Peter Scott

The vacuum cleaner was an archetypal new economy product of the early twentieth century. It offered both major time savings and qualitative advantages over previous household cleaning methods—the brush, broom, and manual carpet sweeper—and was sold in a novel way (by household demonstration). The direct sales techniques pioneered by vacuum manufacturers in the United States were to have a profound impact on the way vacuums were sold in Britain, and globally. Yet by 1939 their household diffusion was relatively slow compared to refrigerators or washing machines. This chapter explores why the industry evolved a structure based on high prices, high cost distribution methods (door-to-door sales), and a strong emphasis on non-price competition, based on differentiation through features. It also shows how door-to-door selling eventually came to constitute both a key firm-level competitive advantage and a substantial industry-level constraint on product diffusion.


2021 ◽  
pp. 073490412199344
Author(s):  
Wolfram Jahn ◽  
Frane Sazunic ◽  
Carlos Sing-Long

Synthesising data from fire scenarios using fire simulations requires running these simulations iteratively. For real-time synthesising, faster-than-real-time simulations are thus necessary. In this article, different model types are assessed according to their complexity to determine the trade-off between the accuracy of the output and the required computing time. A threshold grid size for real-time computational fluid dynamics simulations is identified, and the implications of simplifying existing field fire models by turning off sub-models are assessed. In addition, a temperature correction for two-zone models based on conservation of energy of the hot layer is introduced, to account for spatial variations of temperature in the near field of the fire. The main conclusions are that real-time fire simulations with spatial resolution are possible and that it is not necessary to solve all fine-scale physics to reproduce temperature measurements accurately. There remains, however, a gap in performance between computational fluid dynamics models and zone models that must be explored to achieve faster-than-real-time fire simulations.
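The two-zone energy balance underlying such a temperature correction can be sketched as follows (an illustrative formulation using standard textbook correlations, not the authors' exact model): the hot-layer temperature follows from conserving energy on the upper layer, which the paper then corrects for near-field spatial variation.

```python
# Hot-layer temperature from an energy balance on the upper layer:
# Q_conv = m_dot * c_p * (T_hot - T_amb), with the plume mass flow taken
# from a standard correlation.  Illustrative values only.
c_p = 1.0         # kJ/(kg K), specific heat of air
T_amb = 293.0     # K, ambient temperature
Q = 500.0         # kW, heat release rate
Q_conv = 0.7 * Q  # convective fraction (a common ~0.7 assumption)

def plume_mass_flow(q_conv, z):
    """Heskestad-type plume mass flow (kg/s) at height z, above the flame."""
    return 0.071 * q_conv ** (1.0 / 3.0) * z ** (5.0 / 3.0)

z = 2.0  # m, height of the layer interface
m_dot = plume_mass_flow(Q_conv, z)
T_hot = T_amb + Q_conv / (m_dot * c_p)
print(f"hot-layer temperature ~ {T_hot:.0f} K")  # ~510 K for these inputs
```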

