Parallel Computing
Recently Published Documents

TOTAL DOCUMENTS: 3910 (last five years: 632)
H-INDEX: 50 (last five years: 9)

2022 · Vol 18 (2) · pp. 1-24
Author(s): Saman Froehlich, Saeideh Shirinzadeh, Rolf Drechsler

Resistive Random Access Memory (ReRAM) is an emerging non-volatile memory technology. Besides its low power consumption and high scalability, its inherent computation capabilities make ReRAM especially interesting for future computer architectures. Merging computation into the memory is a promising solution for overcoming the memory bottleneck. To perform computations in ReRAM, efficient synthesis strategies for Boolean functions have to be developed. In this article, we give a thorough presentation of how to employ the parallel computing capabilities of ReRAM for the synthesis of functions given in the state-of-the-art graph-based representations AIGs or BDDs. Additionally, we introduce a new graph-based representation called m-And-Inverter Graphs (m-AIGs), which allows us to fully exploit the computing capabilities of ReRAM. In our simulations, we show that the proposed approaches outperform state-of-the-art synthesis strategies, and we demonstrate the superiority of m-AIGs over the standard AIG representation for ReRAM-based synthesis.
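As background to the representations the article builds on: an And-Inverter Graph encodes a Boolean function using only 2-input AND nodes and inversion flags on edges (an m-AIG, per the abstract, generalizes the AND nodes to m inputs). The following is a minimal, generic sketch of an AIG and its evaluation; the node structure and the XOR example are illustrative assumptions, not the authors' ReRAM mapping.

```python
from dataclasses import dataclass
from itertools import product
from typing import Optional

# Minimal And-Inverter Graph (AIG): internal nodes are 2-input ANDs and
# every edge may carry an inversion flag. Illustration only; it does not
# model the ReRAM crossbar operations used in the article.

@dataclass
class Node:
    left: Optional["Node"] = None      # None marks a primary input
    right: Optional["Node"] = None
    neg_left: bool = False             # inversion flag on the left edge
    neg_right: bool = False            # inversion flag on the right edge
    name: str = ""                     # input name for leaves

def evaluate(node: Node, assignment: dict) -> bool:
    if node.left is None:              # leaf: read the primary input
        return assignment[node.name]
    l = evaluate(node.left, assignment) ^ node.neg_left
    r = evaluate(node.right, assignment) ^ node.neg_right
    return l and r

# XOR built from three AND nodes plus edge/output inversions:
# XOR(a, b) = NOT( NOT(a AND NOT b) AND NOT(NOT a AND b) )
a, b = Node(name="a"), Node(name="b")
n1 = Node(a, b, neg_right=True)                    # a AND NOT b
n2 = Node(a, b, neg_left=True)                     # NOT a AND b
out = Node(n1, n2, neg_left=True, neg_right=True)  # output is inverted

for va, vb in product([False, True], repeat=2):
    assert (not evaluate(out, {"a": va, "b": vb})) == (va ^ vb)
```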


Water · 2022 · Vol 14 (2) · pp. 234
Author(s): Antonio Pasculli, Roberto Longo, Nicola Sciarra, Carmine Di Nucci

The analysis and prevention of hydrogeological risks play a very important role, and much attention is currently paid to advanced numerical models that correspond more closely to physical reality and that aim to reproduce complex environmental phenomena over long times and large spatial scales. Within this context, the feasibility of performing an effective balance of surface water flow over several months was explored, based on accurate hydraulic and mathematical-numerical models applied to a system at the scale of a hydrographic basin. To pursue this target, a 2D Riemann–Godunov shallow-water approach, solved in parallel on a graphics processing unit (GPU) to drastically reduce calculation time and implemented in the RiverFlow2D code (2017 version), was selected. Infiltration and evapotranspiration were included, but in a simplified way, to keep the calibration and validation simulations tractable: even with the parallel approach, they remain very demanding in computer time. As a test case, the Pescara river basin, located in Abruzzo, Central Italy, covering an area of 813 km² and well representative of a typical medium-sized basin, was selected. The topography was described by a 10 × 10 m digital terrain model (DTM), covered by about 1,700,000 triangular elements, and equipped with 11 rain gauges distributed over the entire area, together with some hydrometers and fluviometric stations. Calibration and validation were performed against the flow data measured at a station located close to the mouth of the river. The comparison between numerical and measured data, including a statistical analysis, was quite satisfactory. A further important outcome was the capability to highlight differences between the numerical flow-rate balance, computed from the contributions of all known sources, and the values actually measured. This characteristic of the applied modelling allows better calibration and verification not only of the effectiveness of much simpler approaches, but also of the entire network of measurement stations, and could suggest the need for a more in-depth exploration of the territory in question. It would also enable the identification of further hidden contributions to the water inventory from underground sources and, accordingly, an enlargement of the hydrographic and hydrogeological boundary of the basin under study. Moreover, the parallel computing platform would also allow the development of effective early warning systems, for example for floods.
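To make the numerical core concrete: a Riemann–Godunov scheme advances cell-averaged water depth and momentum by exchanging fluxes computed from (approximate) Riemann solutions at cell interfaces. The sketch below is a deliberately reduced 1D version with a Rusanov (local Lax–Friedrichs) flux; RiverFlow2D itself solves the full 2D equations on unstructured triangular meshes on the GPU, so this is an illustration of the scheme family under simplifying assumptions, not the production code.

```python
import numpy as np

# 1D shallow-water equations advanced with a first-order Godunov-type
# finite-volume scheme and a Rusanov (local Lax-Friedrichs) approximate
# Riemann solver at the cell interfaces.

g = 9.81  # gravitational acceleration, m/s^2

def flux(h, hu):
    """Physical flux F(U) for the state U = (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hL, huL, hR, huR):
    """Interface flux from left/right states, damped by the max wave speed."""
    smax = max(abs(huL / hL) + np.sqrt(g * hL),
               abs(huR / hR) + np.sqrt(g * hR))
    return 0.5 * (flux(hL, huL) + flux(hR, huR)) \
         - 0.5 * smax * np.array([hR - hL, huR - huL])

def step(h, hu, dx, cfl=0.45):
    """One explicit finite-volume update; returns the stable time step."""
    dt = cfl * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))
    F = np.array([rusanov(h[i], hu[i], h[i + 1], hu[i + 1])
                  for i in range(len(h) - 1)]).T    # shape (2, n-1)
    # boundary cells are held fixed for simplicity
    h[1:-1]  -= dt / dx * (F[0, 1:] - F[0, :-1])
    hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    return dt

# Dam-break test in a 1 km channel: deeper water in the left half.
n, dx = 200, 5.0
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
t = 0.0
while t < 30.0:            # simulate 30 s of flow
    t += step(h, hu, dx)
```

On a GPU, the per-interface flux evaluations are independent and map naturally onto one thread per interface, which is where the drastic speed-up cited in the abstract comes from.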


Author(s): Maksym Spiryagin, Qing Wu, Oldrich Polach, John Thorburn, Wenhsi Chua, ...

Locomotive design is a highly complex task that requires systems engineering drawing on knowledge from a range of disciplines, and it is strongly oriented toward designing and managing complex systems that operate under a wide range of train operational conditions on various types of track. Field investigation programs for locomotive operational scenarios involve high costs and disrupt train operations on real railway networks, while recent developments in rollingstock compliance standards in Australia and overseas allow some aspects of rail vehicle behaviour to be assessed through computer simulations. As a result, a great number of multidisciplinary research studies have been performed, and these can contribute to further improvement of locomotive design techniques by increasing the share of computer-based studies. This paper presents the key components required for locomotive studies, starting from the development of a realistic locomotive design model, through its validation, to further applications in train studies. The integration of all engineering disciplines is achieved by means of advanced simulation approaches that can incorporate existing AC and DC locomotive designs, hybrid locomotive designs, full locomotive traction system models, rail friction processes, the application of simplified and exact wheel-rail contact theories, wheel-rail wear and rolling contact fatigue, train dynamic behaviour and in-train forces, comprehensive track infrastructure details, and the use of co-simulation and parallel computing. The co-simulation and parallel computing approaches implemented on Central Queensland University's High-Performance Computing cluster for locomotive studies are presented. Confidence in these approaches is based on specific validation procedures that include a locomotive model acceptance procedure and field test data. The problems and limitations of locomotive traction studies as they are conducted at present are summarised and discussed.
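Co-simulation in this setting typically means that separate solvers (for example, a traction/drive model and a vehicle dynamics model) advance independently and exchange coupling variables at a fixed macro time step. The sketch below shows that explicit coupling loop with two invented stand-in models (a first-order torque lag and point-mass longitudinal dynamics); the names, parameters, and dynamics are illustrative assumptions, not the paper's locomotive models.

```python
# Explicit co-simulation sketch: two solvers advance independently and
# exchange coupling variables (motor torque vs. wheel speed) at a fixed
# macro time step. Both stand-in models are invented for illustration;
# the paper couples full locomotive traction and multibody dynamics codes.

class TractionModel:
    """First-order lag toward a commanded torque, with a crude power limit."""
    def __init__(self, torque_cmd=20e3, tau=0.5):
        self.torque, self.torque_cmd, self.tau = 0.0, torque_cmd, tau

    def advance(self, dt, wheel_speed):
        # back the torque off at high wheel speed (constant-power region)
        target = self.torque_cmd / max(1.0, wheel_speed / 10.0)
        self.torque += dt / self.tau * (target - self.torque)
        return self.torque

class VehicleModel:
    """Point-mass longitudinal dynamics with quadratic running resistance."""
    def __init__(self, mass=120e3, wheel_radius=0.5):
        self.v, self.mass, self.r = 0.0, mass, wheel_radius

    def advance(self, dt, torque):
        drive = torque / self.r                    # tractive force, N
        resistance = 1.5e3 + 8.0 * self.v ** 2     # Davis-style resistance
        self.v += dt * (drive - resistance) / self.mass
        return self.v / self.r                     # wheel angular speed

traction, vehicle = TractionModel(), VehicleModel()
dt_macro, speed = 0.01, 0.0
for _ in range(int(60.0 / dt_macro)):              # 60 s of coupled run
    torque = traction.advance(dt_macro, speed)     # subsystem 1
    speed = vehicle.advance(dt_macro, torque)      # subsystem 2
print(f"speed after 60 s: {vehicle.v:.1f} m/s")
```

In practice each subsystem sub-cycles internally with its own solver and step size between exchange points, and independent subsystems can be advanced in parallel, which is what makes the approach attractive on an HPC cluster.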


2022 · Vol 6 (1)
Author(s): Shinji Sakane, Tomohiro Takaki, Takayuki Aoki

In the phase-field simulation of dendrite growth during the solidification of an alloy, the computational cost becomes extremely high when the diffusion length is significantly larger than the curvature radius of the dendrite tip. In such cases, the adaptive mesh refinement (AMR) method is effective for improving computational performance. In this study, we perform a three-dimensional dendrite growth phase-field simulation in which AMR is implemented via parallel computing on multiple graphics processing units (GPUs), which provide high parallel computation performance. In the multi-GPU computation, we apply dynamic load balancing to equalize the computational cost per GPU. The accuracy of the AMR refinement condition is confirmed through single-GPU computations of columnar dendrite growth during the directional solidification of a binary alloy. Next, we evaluate the efficiency of dynamic load balancing by performing multi-GPU parallel computations for three different directional solidification simulations using a moving-frame algorithm. Finally, weak scaling tests are performed to confirm the parallel efficiency of the developed code.
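Dynamic load balancing for AMR usually amounts to periodically redistributing mesh blocks so the estimated work per device stays even as refinement changes. The sketch below uses a greedy longest-processing-time (LPT) heuristic over per-block costs; this is a generic illustration under assumed inputs, not the authors' partitioning scheme, which must also account for data locality and GPU-to-GPU transfer costs.

```python
import heapq

def balance(block_costs, n_gpus):
    """Assign each block to a GPU: sort blocks by cost (largest first) and
    place each on the currently least-loaded GPU (LPT heuristic)."""
    heap = [(0.0, gpu) for gpu in range(n_gpus)]   # (accumulated load, id)
    heapq.heapify(heap)
    assignment = {}
    for blk in sorted(block_costs, key=block_costs.get, reverse=True):
        load, gpu = heapq.heappop(heap)            # least-loaded GPU so far
        assignment[blk] = gpu
        heapq.heappush(heap, (load + block_costs[blk], gpu))
    return assignment

# Example: per-block costs proportional to active cells after refinement;
# rebalancing would be re-run every few time steps as the mesh adapts.
costs = {0: 4096, 1: 512, 2: 4096, 3: 1024, 4: 512, 5: 2048}
print(balance(costs, n_gpus=2))
```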


Author(s): A. G. Aleksandrova, V. A. Avdyushev, N. A. Popandopulo, T. V. Bordovitsyna

2022 · pp. 1-15
Author(s): Peter S. Pacheco, Matthew Malensek

2022 · pp. 105030
Author(s): Octavio Castillo-Reyes, David Modesto, Pilar Queralt, Alex Marcuello, Juanjo Ledo, ...
