The physics of numerical analysis: a climate modelling case study

Author(s):  
T. N. Palmer

The case is made for a much closer synergy between climate science, numerical analysis and computer science. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.

Author(s):  
Jack Dongarra
Laura Grigori
Nicholas J. Higham

A number of features of today’s high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
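
To make the multiple-precision point concrete, the following is a minimal sketch, in Python/NumPy, of mixed-precision iterative refinement (the matrix, iteration count and the use of an explicit inverse as a stand-in for a low-precision LU factorization are illustrative assumptions, not material from the article): the system is "factorized" once in single precision, and the solution is then corrected using residuals computed in double precision.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10):
    """Illustrative mixed-precision iterative refinement for A x = b."""
    A32 = A.astype(np.float32)
    # Cheap low-precision "factorization": an explicit inverse stands in
    # for an LU factorization purely to keep this sketch short.
    Ainv32 = np.linalg.inv(A32)
    x = (Ainv32 @ b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                       # residual in double precision
        d = Ainv32 @ r.astype(np.float32)   # correction in single precision
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well conditioned
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```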


2020, Vol. 8 (10), pp. 793
Author(s):  
Demián García-Violini
Nicolás Faedo
Fernando Jaramillo-Lopez
John V. Ringwood

The design of controllers for wave energy devices has evolved from early monochromatic impedance-matching methods to complex numerical algorithms that can handle panchromatic seas, constraints, and nonlinearity. However, the potentially high performance of such numerical controllers comes at a computational cost: some algorithms are difficult to implement in real time, and the convergence of numerical optimisers raises further issues. Within the broader area of control engineering, practitioners have always displayed a fondness for simple and intuitive controllers, as evidenced by the continued popularity of the ubiquitous PID controller. Recently, a number of energy-maximising wave energy controllers have been developed based on relatively simple strategies, stemming from the fundamentals behind impedance-matching. This paper documents this set of five controllers, developed over the period 2010–2020, and compares and contrasts their characteristics in terms of energy-maximising performance, the handling of physical constraints, and computational complexity. The comparison is carried out both analytically and numerically, including a detailed case study considering a state-of-the-art CorPower-like device.
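
As a reference point for the impedance-matching fundamentals those simple controllers build on, the sketch below computes the classical monochromatic result: the optimal power take-off (PTO) impedance is the complex conjugate of the device's intrinsic impedance, and the corresponding maximum absorbed power is |Fe|^2 / (8 B). All numerical values are illustrative assumptions, not CorPower parameters.

```python
import numpy as np

# Illustrative device parameters for a single wave frequency (assumed values).
omega = 0.8            # wave frequency [rad/s]
M, A_r = 2.0e5, 1.0e5  # mass and added mass [kg]
B = 5.0e4              # radiation damping [N s/m]
K = 3.0e5              # hydrostatic stiffness [N/m]
Fe = 1.0e5             # excitation force amplitude [N]

# Intrinsic impedance Z_i(w) = B + j * (w * (M + A_r) - K / w).
Zi = B + 1j * (omega * (M + A_r) - K / omega)

# Impedance matching: the optimal PTO impedance is the complex conjugate,
# i.e. the PTO cancels the reactive part and matches the damping.
Zu = np.conj(Zi)

# At the matching condition the absorbed power reaches |Fe|^2 / (8 B).
P_max = abs(Fe) ** 2 / (8 * B)

print("optimal PTO impedance:", Zu)
print("maximum absorbed power [W]:", P_max)
```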


Author(s):  
Hartwig Anzt
Erik Boman
Rob Falgout
Pieter Ghysels
Michael Heroux
...  

Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
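
For orientation only, the sketch below uses standard SciPy (not the ECP solver stacks themselves; the matrix and preconditioner are illustrative) to show the kind of preconditioned sparse iterative solve that such libraries provide at vastly larger scale: a conjugate-gradient solve of a sparse symmetric positive definite system with a simple Jacobi preconditioner.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# 1-D Poisson (tridiagonal) matrix: sparse and symmetric positive definite.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner exposed as a LinearOperator.
d = A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: v / d)

# Preconditioned conjugate gradients; info == 0 signals convergence.
x, info = spla.cg(A, b, M=M)
print("cg info:", info, "| true residual norm:", np.linalg.norm(b - A @ x))
```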


Author(s):  
Erin Carson
Zdeněk Strakoš

With exascale-level computation on the horizon, the art of predicting the cost of computations has acquired a renewed focus. This task is especially challenging in the case of iterative methods, for which convergence behaviour often cannot be determined with certainty a priori (unless we are satisfied with potentially outrageous overestimates) and which typically suffer from performance bottlenecks at scale due to synchronization cost. Moreover, the amplification of rounding errors can substantially affect the practical performance, in particular for methods with short recurrences. In this article, we focus on what we consider to be key points which are crucial to understanding the cost of iteratively solving linear algebraic systems. This naturally leads us to questions on the place of numerical analysis in relation to mathematics, computer science and sciences, in general. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
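
One of the rounding-error effects mentioned above can be seen in a few lines. In the sketch below (a textbook conjugate gradient loop applied to an ill-conditioned Hilbert test matrix; the example is an assumption for illustration, not material from the article), the recursively updated residual of the short recurrence typically keeps decreasing, while the true residual b - Ax, recomputed explicitly, stagnates at a level set by finite-precision arithmetic; the gap between the two is exactly the kind of behaviour that complicates a priori cost prediction.

```python
import numpy as np

def cg_with_residual_gap(A, b, iters):
    """Plain conjugate gradients, tracking updated vs. true residual norms."""
    x = np.zeros_like(b)
    r = b.copy()                  # recursively updated residual
    p = r.copy()
    history = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap    # short recurrence for the residual
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        # True residual, recomputed explicitly from x.
        history.append((np.linalg.norm(r), np.linalg.norm(b - A @ x)))
    return x, history

# Ill-conditioned SPD test matrix (Hilbert matrix), chosen for illustration.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = np.ones(n)
_, history = cg_with_residual_gap(A, b, 50)
print("final (updated, true) residual norms:", history[-1])
```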


2019, pp. 123-130

Scientific research in mechanical engineering, covering the manufacture of the machine slate for soil tillage, sowing, harvesting and crop transportation in line with the agrotechnical requirements for plant cultivation, has advanced through the development and mastering of new types of high-performance, energy-saving machines, the creation of multifunctional machines that combine soil cultivation with several planting operations, and the integration of agricultural machine designs; these advances were taken into account in manufacturing the local universal tractor, designed around high ergonomic indicators. For this reason, this article explores the use of case studies in teaching agricultural terminology by analyzing research in machine building. The case-study method was first used in 1870 at Harvard Law School in the United States. The article also gives examples of agricultural machine-building terms, of teaching terminology with case methods, of the case-study process and of the case-study method itself. Research in mechanical engineering and the use of case studies in teaching terminology have also been analyzed. In addition, the requirements for developing case-study tasks are given, emphasizing their practical didactic nature, and case-study models are presented that allow students' activities to be analyzed and evaluated.


Author(s):  
Abeer A. Amer
Soha M. Ismail

The following article has been withdrawn at the request of the author from the journal Recent Advances in Computer Science and Communications (Recent Patents on Computer Science):

Title: Diabetes Mellitus Prognosis Using Fuzzy Logic and Neural Networks. Case Study: Alexandria Vascular Center (AVC)
Authors: Abeer A. Amer and Soha M. Ismail*

Bentham Science apologizes to the readers of the journal for any inconvenience this may cause.

BENTHAM SCIENCE DISCLAIMER: It is a condition of publication that manuscripts submitted to this journal have not been published and will not be simultaneously submitted or published elsewhere. Furthermore, any data, illustration, structure or table that has been published elsewhere must be reported, and copyright permission for reproduction must be obtained. Plagiarism is strictly forbidden, and by submitting the article for publication the authors agree that the publishers have the legal right to take appropriate action against the authors if plagiarism or fabricated information is discovered. By submitting a manuscript, the authors agree that the copyright of their article is transferred to the publishers if and when the article is accepted for publication.


2021, Vol. 47 (2), pp. 1-28
Author(s):  
Goran Flegar
Hartwig Anzt
Terry Cojean
Enrique S. Quintana-Ortí

The use of mixed precision in numerical algorithms is a promising strategy for accelerating scientific applications. In particular, the adoption of specialized hardware and data formats for low-precision arithmetic in high-end GPUs (graphics processing units) has motivated numerous efforts aiming at carefully reducing the working precision in order to speed up the computations. For algorithms whose performance is bound by memory bandwidth, the idea of compressing their data before (and after) memory accesses has received considerable attention. One idea is to store an approximate operator, such as a preconditioner, in lower than working precision, hopefully without impacting the algorithm output. We realize the first high-performance implementation of an adaptive precision block-Jacobi preconditioner, which selects the precision format used to store the preconditioner data on the fly, taking into account the numerical properties of the individual preconditioner blocks. We implement the adaptive block-Jacobi preconditioner as production-ready functionality in the Ginkgo linear algebra library, considering not only the precision formats that are part of the IEEE standard, but also customized formats which optimize the length of the exponent and significand to the characteristics of the preconditioner blocks. Experiments run on a state-of-the-art GPU accelerator show that our implementation offers attractive runtime savings.
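
The sketch below is a conceptual illustration of the adaptive-precision idea only: the block size, the condition-number thresholds and the restriction to the IEEE fp16/fp32/fp64 formats are assumptions for exposition, and Ginkgo's production implementation targets GPUs and additionally uses customized formats. The core idea is that each inverted diagonal block is stored in the lowest precision its conditioning allows, while the preconditioner is applied in the working precision.

```python
import numpy as np

def adaptive_block_jacobi(A, block_size):
    """Build block-Jacobi data: (inverted block, storage dtype) per block."""
    n = A.shape[0]
    blocks = []
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        D = A[start:end, start:end]
        Dinv = np.linalg.inv(D)
        kappa = np.linalg.cond(D)
        # Choose storage precision from the block's condition number
        # (thresholds are illustrative assumptions).
        if kappa < 1e2:
            dtype = np.float16
        elif kappa < 1e6:
            dtype = np.float32
        else:
            dtype = np.float64
        blocks.append((Dinv.astype(dtype), dtype))
    return blocks

def apply_block_jacobi(blocks, r, block_size):
    """Apply z = M^{-1} r, promoting each stored block to working precision."""
    z = np.empty_like(r)
    for i, (Dinv, _) in enumerate(blocks):
        s = i * block_size
        e = s + Dinv.shape[0]
        z[s:e] = Dinv.astype(np.float64) @ r[s:e]
    return z

# Small demonstration with an assumed test matrix.
A = np.diag(np.linspace(1.0, 1.0e4, 16)) + 0.1 * np.ones((16, 16))
blocks = adaptive_block_jacobi(A, block_size=4)
print("chosen block precisions:", [dt.__name__ for _, dt in blocks])
z = apply_block_jacobi(blocks, np.ones(16), block_size=4)
```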

