Challenges & Effects of Increasing Computational Speed for ICT Applications

Author(s): Mahipal Singh Deora, Satyendra Kumar Sharma


Author(s): John T. Armstrong

One of the most cited papers in the geological sciences has been that of Albee and Bence on the use of empirical "α-factors" to correct quantitative electron microprobe data. During the past 25 years this method has remained the most commonly used correction for geological samples, despite several facts: few investigators have actually determined empirical α-factors, instead employing tables of α-factors calculated with one of the conventional "ZAF" correction programs; a number of investigators have shown that the assumption of a constant α-factor is incorrect in binary systems with large matrix corrections (e.g., 2-3); and the procedure's advantages in program size and computational speed matter much less today, given the developments in computing capabilities. The question thus exists whether it is time to honorably retire the Bence-Albee procedure and turn to more modern, robust correction methods. This paper proposes that, although it is perhaps time to retire the original Bence-Albee procedure, it should be replaced by a similar method based on composition-dependent polynomial α-factor expressions.
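
The α-factor correction itself is a short fixed-point iteration: each element's concentration is its measured k-ratio times a matrix factor β, where β is a concentration-weighted sum of α-factors, so the unknown composition must be solved for iteratively. Below is a minimal sketch of that iteration, as the method is commonly described; the function name and the binary Fe-Si α values are illustrative stand-ins. The paper's proposal would replace the constant α table with composition-dependent polynomial expressions α(C).

```python
# A minimal sketch of a Bence-Albee-style alpha-factor iteration.
# The alpha values used below are illustrative placeholders, not measured factors.
import numpy as np

def alpha_correction(k_ratios, alpha, tol=1e-6, max_iter=50):
    """Convert measured k-ratios to concentrations by fixed-point iteration.

    k_ratios : dict, element -> measured k-ratio
    alpha    : dict, (analyte, matrix_element) -> alpha-factor
    """
    elems = list(k_ratios)
    k = np.array([k_ratios[e] for e in elems])
    A = np.array([[alpha[(a, b)] for b in elems] for a in elems])
    c = k / k.sum()                              # initial guess: normalised k-ratios
    for _ in range(max_iter):
        beta = A @ c                             # beta_A = sum_j alpha_(A,j) * C_j
        c_next = beta * k
        c_next /= c_next.sum()                   # renormalise so concentrations sum to 1
        if np.max(np.abs(c_next - c)) < tol:
            return dict(zip(elems, c_next))
        c = c_next
    return dict(zip(elems, c))

# Hypothetical binary example (alpha values are made up; alpha_(A,A) = 1 by definition):
alpha = {("Fe", "Fe"): 1.0, ("Fe", "Si"): 1.2,
         ("Si", "Si"): 1.0, ("Si", "Fe"): 0.9}
print(alpha_correction({"Fe": 0.62, "Si": 0.30}, alpha))
```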


2021, Vol. 22 (1)
Author(s): Jeongmin Bae, Hajin Jeon, Min-Soo Kim

Abstract Background Design of valid, high-quality primers is essential for qPCR experiments. MRPrimer is a powerful MapReduce-based pipeline that combines primer design for target sequences with homology tests against off-target sequences. It takes an entire sequence DB as input and returns all feasible and valid primer pairs existing in the DB. Because primers designed by MRPrimer have proven effective in qPCR analysis, it has been widely used for developing online design tools and building primer databases. However, MRPrimer is too slow to keep pace with sequence DBs whose sizes grow exponentially, and its computational speed must therefore be improved. Results We develop a fast GPU-based pipeline for primer design (GPrimer) that takes the same input and returns the same output as MRPrimer. MRPrimer consists of seven MapReduce steps in total, two of which are very time-consuming. GPrimer significantly improves the speed of those two steps by exploiting the computational power of GPUs. In particular, it designs data structures for coalesced memory access on the GPU and for workload balancing among GPU threads, and it copies the data structures between main memory and GPU memory in a streaming fashion. For the human RefSeq DB, GPrimer achieves a speedup of 57 times over the entire pipeline and of 557 times for the most time-consuming step, using a single machine with 4 GPUs, compared with MRPrimer running on a cluster of six machines. Conclusions We propose a GPU-based pipeline for primer design that takes an entire sequence DB as input and returns, at once, all feasible and valid primer pairs existing in the DB, without an additional step using BLAST-like tools. The software is available at https://github.com/qhtjrmin/GPrimer.git.
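
The two GPU techniques the abstract names, coalesced memory access and streamed host-GPU copies, are generic and easy to illustrate. The sketch below (using CuPy; the chunk size, filter predicate, and function name are our own stand-ins, not GPrimer's actual code) keeps data in a flat contiguous array so neighbouring threads read neighbouring elements, and alternates chunk transfers across two CUDA streams; true copy/compute overlap additionally requires pinned host memory.

```python
# Illustrative double-buffered, chunked GPU filtering (CuPy); not GPrimer's code.
import numpy as np
import cupy as cp

def filter_on_gpu(values, lo, hi, chunk=1_000_000):
    """Stream a large flat array through the GPU, keeping values in [lo, hi]."""
    streams = [cp.cuda.Stream(non_blocking=True) for _ in range(2)]
    kept = []
    for i, start in enumerate(range(0, len(values), chunk)):
        with streams[i % 2]:                             # alternate streams so the
            d = cp.asarray(values[start:start + chunk])  # next copy can be enqueued
            mask = (d >= lo) & (d <= hi)                 # coalesced element-wise reads
            kept.append(cp.asnumpy(d[mask]))             # copy survivors back to host
    for s in streams:
        s.synchronize()
    return np.concatenate(kept)

# e.g. keep candidate primers whose melting temperature lies in a target band:
tm = np.random.uniform(50.0, 70.0, size=10_000_000).astype(np.float32)
print(len(filter_on_gpu(tm, 58.0, 62.0)))
```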


Author(s): Maximilian Moll, Leonhard Kunczik

Abstract In recent history, reinforcement learning (RL) has proved its capability by mastering several games and thereby solving complex decision problems. Increased computational power and advances in approximation with neural networks (NNs) paved the path to RL's successful applications. Even though RL can tackle more complex problems nowadays, it still demands substantial computational power and runtime. Quantum computing promises to address these issues through its capacity to encode information compactly and its potential quadratic speedup in runtime. We compare tabular Q-learning and Q-learning using either a quantum or a classical approximation architecture on the frozen lake problem. Furthermore, the three algorithms are analyzed in terms of iterations until convergence to the optimal behavior, memory usage, and runtime. Within the paper, NNs are utilized for approximation in the classical domain, while in the quantum domain variational quantum circuits are used as a hybrid quantum approximation method. Our simulations show that a quantum approximator is beneficial in terms of memory usage and provides better sample complexity than NNs; however, it still lacks the computational speed to be competitive.
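
For reference, the tabular baseline in such comparisons is the classic Q-learning update Q(s,a) ← Q(s,a) + α[r + γ·max_a′ Q(s′,a′) − Q(s,a)]. A minimal sketch on the frozen lake environment follows, using the gymnasium API; the hyperparameters are illustrative, not the paper's settings.

```python
# Minimal tabular Q-learning on FrozenLake (illustrative hyperparameters).
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=True)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1      # learning rate, discount, exploration

for episode in range(20_000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        target = reward + gamma * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```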


2007, Vol. 91 (22), pp. 224104
Author(s): L. Gammaitoni


Author(s): R. Mukundan

Geometric moments have been used in several applications in the field of Computer Vision, and many techniques for their fast computation have been proposed in the recent past. These algorithms, however, mainly rely on properties of the moment integral such as piecewise differentiability and separability. This paper explores an alternative approach: approximating the moment kernel itself in order to obtain a notable improvement in computational speed. Using Schlick's approximation for the normalized kernel of geometric moments, the computational overhead can be significantly reduced and the numerical stability increased. The paper also analyses the properties of the modified moment functions, and shows that the proposed method can be used effectively in all applications where normalized Cartesian moment kernels are used. Several experimental results showing the invariant characteristics of the modified moments are also presented.
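
For context, one common form of Schlick's rational approximation replaces the power function x^p on [0, 1] by x / (p − (p − 1)x), which is exact at both endpoints and needs only one division instead of a transcendental call; we cannot confirm from the abstract that this is precisely the variant used in the paper. A small sketch comparing exact and approximated normalized moments:

```python
# Schlick-style rational approximation of the normalised moment kernel:
# x**p on [0, 1] is approximated by x / (p - (p - 1) * x).  Illustrative form.
import numpy as np

def schlick_pow(x, p):
    return x / (p - (p - 1.0) * x)

def geometric_moment(img, p, q, kernel=np.power):
    """Normalised geometric moment m_pq of a 2-D image."""
    h, w = img.shape
    y = (np.arange(h) + 0.5) / h          # normalised coordinates in (0, 1)
    x = (np.arange(w) + 0.5) / w
    return np.sum(kernel(y, q)[:, None] * kernel(x, p)[None, :] * img)

img = np.random.rand(64, 64)
exact = geometric_moment(img, 2, 3)                      # kernel = true power
approx = geometric_moment(img, 2, 3, kernel=schlick_pow)  # kernel = rational approx.
print(exact, approx)                                     # the values should be close
```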


Perception, 1986, Vol. 15 (4), pp. 373-386
Author(s): Nigel D. Haig

For recognition of a target there must be some form of comparison between the image of the target and a stored representation of it. In the case of faces there must be a very large number of such stored representations, yet human beings seem able to perform comparisons at phenomenal speed. It is possible that faces are memorised by fitting unusual features, or combinations of features, onto a bland prototypical face; such a data-compression technique would help to explain our computational speed. If humans do indeed function in this fashion, it is necessary to ask just what the features are that distinguish one face from another, and also what features form the basic set of the prototypical face. The distributed-apertures technique was further developed in an attempt to answer both questions. Four target faces, stored in an image-processing computer, were each divided into 162 contiguous squares that could be displayed in their correct positions in any combination of 24 or fewer squares. Each observer judged which of the four target faces was displayed during a 1 s presentation, and the proportion of correct responses for each individual square was computed. The resultant response distributions, displayed as brightness maps, give a vivid impression of the relative saliency of each feature square, both for the individual targets and for all of them combined. The results, while broadly confirming previous work, contain some very interesting and surprising details about the differences between the target faces.
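
The per-square analysis reduces to simple bookkeeping: over many trials, count how often each square was on screen and how often those trials were answered correctly, then render the ratio as brightness. A schematic sketch follows; the responses are random stand-ins and the trial count is ours, only the 162-square, 24-aperture geometry comes from the paper.

```python
# Schematic per-square saliency analysis for the distributed-apertures method
# (bookkeeping only; responses are random stand-ins, not observer data).
import numpy as np

N_SQUARES, APERTURES_PER_TRIAL, N_TRIALS = 162, 24, 5000
rng = np.random.default_rng(0)

shown = np.zeros(N_SQUARES)     # how often each square was displayed
correct = np.zeros(N_SQUARES)   # how often its trials were answered correctly

for _ in range(N_TRIALS):
    squares = rng.choice(N_SQUARES, size=APERTURES_PER_TRIAL, replace=False)
    response_correct = rng.random() < 0.5   # stand-in for the observer's judgement
    shown[squares] += 1
    correct[squares] += response_correct

saliency = correct / np.maximum(shown, 1)   # proportion correct per square
# Rendering 'saliency' over the 162-square face grid gives the brightness map.
```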


2018, Vol. 11 (8), pp. 3391-3407
Author(s): Zacharias Marinou Nikolaou, Jyh-Yuan Chen, Yiannis Proestos, Jos Lelieveld, Rolf Sander

Abstract. Chemical mechanism reduction is common practice in combustion research for accelerating numerical simulations; however, this practice has seen limited application in atmospheric chemistry. In this study, we employ a powerful reduction method to produce a skeletal mechanism of an atmospheric chemistry code that is commonly used in air quality and climate modelling. The skeletal mechanism is developed using input data from a model scenario. Its performance is then evaluated both a priori, against the model scenario results, and a posteriori, by implementing the skeletal mechanism in a chemistry transport model, namely the Weather Research and Forecasting code with Chemistry. Preliminary results indicate a substantial computational speed-up in both cases, with minimal loss of accuracy in the simulated spatio-temporal mixing ratio of the target species, which was selected to be ozone.
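
The a priori evaluation amounts to integrating the full and skeletal mechanisms from identical initial conditions and comparing the target-species trajectories. A toy illustration with a three-species NOx/O3-like system follows; the species, rate constants, and the reaction removed in the "skeletal" version are all invented for the sketch, and the actual mechanisms are far larger.

```python
# A priori comparison of a full vs. skeletal mechanism on a toy system
# (illustrative only; not the actual atmospheric mechanism).
import numpy as np
from scipy.integrate import solve_ivp

def full_rhs(t, y):
    o3, no, no2 = y
    j, k1, k2 = 8e-3, 4e-4, 1e-6                 # invented rate constants
    return [j * no2 - k1 * no * o3 - k2 * o3,    # O3: photolysis source - titration - slow loss
            j * no2 - k1 * no * o3,              # NO
            k1 * no * o3 - j * no2]              # NO2

def skeletal_rhs(t, y):
    # "Skeletal" version: the slow O3 loss channel (k2) has been removed.
    o3, no, no2 = y
    j, k1 = 8e-3, 4e-4
    return [j * no2 - k1 * no * o3,
            j * no2 - k1 * no * o3,
            k1 * no * o3 - j * no2]

y0, t_span = [40.0, 10.0, 20.0], (0.0, 3600.0)   # initial mixing ratios, 1-hour run
full = solve_ivp(full_rhs, t_span, y0, dense_output=True)
skel = solve_ivp(skeletal_rhs, t_span, y0, dense_output=True)
t = np.linspace(*t_span, 200)
err = np.max(np.abs(full.sol(t)[0] - skel.sol(t)[0]))   # max error in target species (O3)
print(f"max ozone error: {err:.4f}")
```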


Author(s): Ting Cai, Anna G. Stefanopoulou, Jason B. Siegel

This paper presents a model describing lithium-ion battery thermal runaway triggered by an internal short. Using a three-section representation, the model predicts the temperature and the heat generation from the internal short circuit and the side reactions. The three sections correspond to the core, middle, and surface layers; at each layer, the temperature-dependent heat release and the progression of the three major side reactions are modeled. A thermal runaway test was conducted on a 4.5 Ah nickel manganese cobalt oxide pouch cell, and the temperature measurements are used for model validation. The proposed reduced-order model based on three sections balances computational speed against the model complexity required to predict the fast core temperature evolution and the slower surface temperature growth. The model shows good agreement with the experimental data, and it will be further improved with formal tuning in a follow-up effort to enable early detection of thermal runaway induced by an internal short.
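
Structurally, such a reduced-order model is a three-node thermal network with temperature-dependent (Arrhenius) heat sources. The sketch below collapses the paper's three side reactions into a single reaction at the core and uses invented parameter values throughout; it shows the shape of the model, not its calibrated form.

```python
# Three-node (core / middle / surface) lumped thermal model with one
# Arrhenius side reaction at the core.  All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

C = 30.0                           # heat capacity per section, J/K
G = 0.3                            # section-to-section conductance, W/K
G_AMB = 0.1                        # surface-to-ambient conductance, W/K
T_AMB = 300.0                      # ambient temperature, K
A, EA, R = 1.7e15, 1.35e5, 8.314   # frequency factor (1/s), activation energy (J/mol)
H = 3000.0                         # total reaction enthalpy released at the core, J
Q_SHORT, T_SHORT = 300.0, 15.0     # internal-short heating (W) and duration (s)

def rhs(t, y):
    tc, tm, ts, c = y                        # section temperatures; reactant fraction left
    rate = c * A * np.exp(-EA / (R * tc))    # temperature-dependent reaction rate, 1/s
    q = H * rate + (Q_SHORT if t < T_SHORT else 0.0)   # core heat generation, W
    dtc = (q - G * (tc - tm)) / C
    dtm = (G * (tc - tm) - G * (tm - ts)) / C
    dts = (G * (tm - ts) - G_AMB * (ts - T_AMB)) / C
    return [dtc, dtm, dts, -rate]

# Stiff once the reaction ignites, hence an implicit solver:
sol = solve_ivp(rhs, (0.0, 300.0), [300.0, 300.0, 300.0, 1.0],
                method="BDF", max_step=0.1)
print(f"peak core {sol.y[0].max():.0f} K, peak surface {sol.y[2].max():.0f} K")
```

The fast core spike and the slower, smaller surface rise that the paper describes fall out of this structure: the core node carries the heat sources, while the surface node only sees conducted heat filtered through the middle layer.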

