WearGP: A UQ/ML Wear Prediction Framework for Slurry Pump Impellers and Casings

Author(s):
Anh Tran
Yan Wang
John Furlan
Krishnan V. Pagalthivarthi
Mohamed Garman
et al.

Abstract
Dedicated to the memory of John Furlan. Wear prediction is important for designing reliable machinery in the slurry industry. It usually relies on multi-phase computational fluid dynamics (CFD), which is accurate but computationally expensive: each simulation run can take hours or days even on a high-performance computing platform. This high computational cost prohibits running the large number of simulations needed for design optimization. In contrast to physics-based simulations, data-driven approaches such as machine learning can provide accurate wear predictions at a small fraction of the computational cost, provided the models are trained properly. In this paper, the recently developed WearGP framework [1] is extended to predict global wear quantities of interest by constructing Gaussian process surrogates. The effects of different operating conditions are investigated. The advantages of the WearGP framework are demonstrated by its high accuracy and low computational cost in predicting wear rates.
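To make the surrogate-modeling idea concrete, below is a minimal sketch of a Gaussian process surrogate for a scalar wear quantity of interest, assuming scikit-learn is available; the input features (flow rate, solids concentration, particle size) and the training data are hypothetical placeholders, not the paper's actual CFD dataset.

```python
# Minimal sketch of a GP surrogate for a scalar wear quantity of interest
# (QoI), in the spirit of surrogate frameworks like WearGP. All inputs and
# outputs below are hypothetical stand-ins for expensive CFD results.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical training set: operating conditions (flow rate, solids
# concentration, particle size) mapped to a wear-rate QoI.
X_train = rng.uniform([100.0, 0.05, 50e-6], [500.0, 0.30, 500e-6], size=(40, 3))
y_train = np.sin(X_train[:, 0] / 100.0) + 5.0 * X_train[:, 1]  # stand-in for CFD output

kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 0.1, 1e-4])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
gp.fit(X_train, y_train)

# Cheap surrogate predictions with uncertainty, replacing hours-long CFD runs.
X_new = rng.uniform([100.0, 0.05, 50e-6], [500.0, 0.30, 500e-6], size=(5, 3))
mean, std = gp.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted wear rate: {m:.3f} +/- {2 * s:.3f}")
```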

Sensors
2021
Vol 21 (21)
pp. 7006
Author(s):
Mohamed Wassim Baba
Gregoire Thoumyre
Erwin W. J. Bergsma
Christopher J. Daly
Rafael Almar

Coasts are vital areas because they host numerous activities worldwide. Despite their major importance, knowledge of the main characteristics of most coastal areas (e.g., coastal bathymetry) is still very limited. This is mainly due to the scarcity of accurate measurements and the sparse observation coverage of coastal waters. Moreover, the high cost of performing observations with conventional methods does not allow the monitoring chain to be expanded to different coastal areas. In this study, we suggest that the advent of remote sensing data (e.g., Sentinel-2A/B) and high-performance computing could open a new perspective to overcome the lack of coastal observations. Indeed, previous research has shown that it is possible to derive large-scale coastal bathymetry from S-2 images. The large S-2 coverage, however, leads to a high computational cost when post-processing the images. Thus, we develop a methodology implemented on a high-performance computing (HPC) cluster to derive bathymetry from S-2 images over the globe. In this paper, we describe the conceptualization and implementation of this methodology. Moreover, we give a general overview of the generated bathymetry map for NA compared with the reference GEBCO global bathymetric product. Finally, we highlight some hotspots by looking closely at their outputs.
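The physical principle behind wave-based satellite-derived bathymetry is depth inversion through the linear dispersion relation for surface gravity waves. The sketch below illustrates only that inversion step, with hypothetical wavelength and period values; in practice these wave kinematics are estimated from Sentinel-2 image sequences.

```python
# Minimal sketch of depth inversion via the linear dispersion relation
# omega^2 = g * k * tanh(k * h). Wavelength and period are hypothetical;
# real values would come from processed Sentinel-2 imagery.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(wavelength_m, period_s):
    """Invert omega^2 = g*k*tanh(k*h) for the local water depth h."""
    k = 2.0 * np.pi / wavelength_m   # wavenumber
    omega = 2.0 * np.pi / period_s   # angular frequency
    ratio = omega**2 / (G * k)       # equals tanh(k*h)
    if ratio >= 1.0:
        return np.inf                # deep-water limit: depth not resolvable
    return np.arctanh(ratio) / k

# Example: a 120 m wavelength swell with a 10 s period near the coast
# yields a depth of roughly 19 m.
print(f"estimated depth: {depth_from_dispersion(120.0, 10.0):.1f} m")
```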


Author(s):  
Jon Calhoun ◽  
Franck Cappello ◽  
Luke N Olson ◽  
Marc Snir ◽  
William D Gropp

Checkpoint restart plays an important role in high-performance computing (HPC) applications, allowing simulation runtime to extend beyond a single job allocation and facilitating recovery from hardware failure. Yet, as machines grow in size and complexity, traditional approaches to checkpoint restart are becoming prohibitive. Current methods store a subset of the application's state and exploit the memory hierarchy in the machine. However, as the energy cost of data movement continues to dominate, further reductions in checkpoint size are needed. Lossy compression, which can significantly reduce checkpoint sizes, offers the potential to reduce the computational cost of checkpoint restart. This article investigates the use of numerical properties of partial differential equation (PDE) simulations, such as bounds on the truncation error, to evaluate the feasibility of using lossy compression in checkpointing PDE simulations. Restart from a lossy compressed checkpoint is considered for a fail-stop error in two time-dependent HPC application codes: PlasComCM and Nek5000. Results show that the error in application variables due to a restart from a lossy compressed checkpoint can be masked by the numerical error in the discretization, leading to increased efficiency in checkpoint restart without influencing the overall accuracy of the simulation.
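As an illustration of the core idea, the sketch below uses a simple uniform quantizer with a pointwise absolute error bound as a stand-in for production error-bounded lossy compressors (e.g., SZ or ZFP): if the bound is kept below the discretization's truncation error, the restart error is masked by numerical error already present in the simulation.

```python
# Illustrative sketch of error-bounded lossy checkpoint compression.
# A uniform quantizer guarantees |decompressed - original| <= bound;
# real HPC workflows would use a dedicated compressor instead.
import numpy as np

def compress_lossy(field, abs_error_bound):
    """Quantize so the pointwise reconstruction error stays within the bound."""
    step = 2.0 * abs_error_bound
    q = np.round(field / step).astype(np.int32)  # small integers compress well
    return q, step

def decompress_lossy(q, step):
    return q.astype(np.float64) * step

# Hypothetical PDE state vector and an error bound chosen below the
# truncation error of the discretization.
state = np.random.default_rng(1).standard_normal(1_000_000)
q, step = compress_lossy(state, abs_error_bound=1e-4)
restored = decompress_lossy(q, step)
print("max restart error:", np.max(np.abs(restored - state)))  # <= 1e-4
```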


Water
2020
Vol 12 (9)
pp. 2463
Author(s):
Yelena Medina
Enrique Muñoz

Time-varying sensitivity analysis (TVSA) estimates sensitivity within a moving window, identifying the time periods in which specific components of a model affect its performance. However, one disadvantage of TVSA is its high computational cost, since the sensitivity calculation is repeated for every window position in the analyzed series. In this article, a function implementing a simple, low-cost TVSA based on regional sensitivity analysis is presented. As an example of its application, an analysis of hydrological model results in daily, monthly, and annual time windows is carried out. The results show that the method detects the time-varying sensitivity of a model with respect to its parameters, making it a suitable tool for assessing the temporal variability of processes in models that include time series analysis. In addition, it is observed that the size of the moving window can influence the estimated sensitivity; therefore, analysis of different time windows is recommended.
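A minimal sketch of the moving-window regional sensitivity analysis idea follows: within each window, parameter sets are split into behavioral and non-behavioral groups by model performance, and the Kolmogorov-Smirnov distance between the two groups' parameter distributions serves as the sensitivity measure. The model errors, parameters, and behavioral threshold below are hypothetical, not the article's published function.

```python
# Minimal sketch of time-varying regional sensitivity analysis (RSA):
# per moving window, split parameter sets by performance and measure the
# KS distance between behavioral and non-behavioral parameter samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
n_sets, n_steps, window = 500, 365, 30

params = rng.uniform(0.0, 1.0, size=(n_sets, 2))  # two hypothetical parameters
# Stand-in for per-time-step model errors: parameter 0 matters, parameter 1 does not.
errors = np.abs(params[:, [0]] - 0.5) + 0.1 * rng.random((n_sets, n_steps))

sensitivity = np.zeros((n_steps - window, params.shape[1]))
for t in range(n_steps - window):
    window_err = errors[:, t:t + window].mean(axis=1)
    behavioral = window_err <= np.quantile(window_err, 0.2)  # best 20% of sets
    for p in range(params.shape[1]):
        sensitivity[t, p] = ks_2samp(params[behavioral, p],
                                     params[~behavioral, p]).statistic

print("mean KS sensitivity per parameter:", sensitivity.mean(axis=0))
```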


Sensors
2018
Vol 18 (11)
pp. 4045
Author(s):
Wesllen Sousa Lima
Hendrio de Souza Bragança
Kevin Montero Quispe
Eduardo Pereira Souto

Mobile sensing has allowed the emergence of a variety of solutions for human activity recognition (HAR). Such solutions have been implemented on smartphones for the purpose of better understanding human behavior. However, they still suffer from the limited computing resources found on smartphones, so the HAR field has focused on developing solutions with low computational cost. In general, these solutions are based on shallow and deep learning algorithms, but not all such strategies are feasible to implement on smartphones because of the high computational cost required, mainly by the data preparation and classification model training steps. In this context, this article evaluates a new set of alternative strategies based on the Symbolic Aggregate Approximation (SAX) and Symbolic Fourier Approximation (SFA) algorithms, with the purpose of developing solutions with low computational cost in terms of memory and processing. In addition, this article evaluates several classification algorithms adapted to manipulate symbolic data, such as SAX-VSM, BOSS, BOSS-VS, and WEASEL. Experiments were performed on the UCI-HAR, SHOAIB, and WISDM databases, which are commonly used in the literature to validate smartphone-based HAR solutions. The results show that the symbolic representation algorithms are, on average, 84.81% faster in the feature extraction phase and reduce memory consumption by an average of 94.48%, while achieving accuracy equivalent to conventional algorithms.
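For concreteness, the sketch below shows the classic SAX pipeline (z-normalization, piecewise aggregate approximation, symbol mapping via equiprobable Gaussian breakpoints) applied to a hypothetical signal; it illustrates the low-cost feature-extraction step in general, not the article's exact implementation.

```python
# Minimal sketch of SAX (Symbolic Aggregate approXimation): z-normalize,
# reduce with piecewise aggregate approximation (PAA), then map segment
# means to symbols using equiprobable Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def sax(signal, n_segments=8, alphabet_size=4):
    x = (signal - signal.mean()) / (signal.std() + 1e-12)  # z-normalize
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])           # PAA segment means
    # Breakpoints splitting the standard normal into equiprobable regions:
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    symbols = np.searchsorted(breakpoints, paa)            # 0..alphabet_size-1
    return "".join(chr(ord("a") + int(s)) for s in symbols)

# Hypothetical "accelerometer" trace: two periods of a sine wave become
# a short 8-symbol word instead of 128 floating-point samples.
t = np.linspace(0, 4 * np.pi, 128)
print(sax(np.sin(t)))
```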


2020
Vol 238
pp. 12001
Author(s):
Luzia Hahn
Peter Eberhard

In this work, methods and procedures are investigated for the holistic simulation of the dynamical-thermal behavior of high-performance optics such as lithography objectives. Flexible multibody systems, in combination with model order reduction methods, finite element thermal analysis, and optical system analyses, are used for transient simulations of the dynamical-thermal behavior of optical systems at low computational cost.
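A minimal sketch of one common model order reduction technique, modal truncation with Galerkin projection, is given below; the system matrices are random stand-ins for finite element matrices, and the sketch is illustrative rather than a reproduction of the authors' toolchain.

```python
# Minimal sketch of model order reduction by modal truncation for a linear
# second-order structural model M*qdd + K*q = f. Matrices are random
# stand-ins for assembled finite element system matrices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n, r = 200, 10  # full and reduced model dimensions

A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)   # symmetric positive definite stiffness stand-in
M = np.eye(n)                 # mass matrix (identity for simplicity)

# Solve the generalized eigenproblem K*phi = lambda*M*phi and keep the r
# lowest-frequency modes as the projection basis V.
eigvals, eigvecs = eigh(K, M)
V = eigvecs[:, :r]

# Galerkin projection: the reduced matrices are r x r instead of n x n,
# which is what makes long transient simulations cheap.
K_r = V.T @ K @ V
M_r = V.T @ M @ V
print("reduced stiffness shape:", K_r.shape)  # (10, 10)
```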


2017
Vol 29 (3)
Author(s):
Mabule Samuel Mabakane
Daniel Mojalefa Moeketsi
Anton Lopis

This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system, and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY under weak and strong scaling. It was found that the speed-up results for small systems were better than those for large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster, as compared to the e1350 and Sun supercomputers.
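For reference, strong scaling is typically quantified by speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p; the sketch below computes both from hypothetical timings, not from the paper's measurements.

```python
# Minimal sketch of strong-scaling metrics: speedup and parallel efficiency
# from wall-clock runtimes at increasing core counts. Timings are hypothetical.
core_counts = [1, 2, 4, 8, 16, 32]
runtimes_s = [1000.0, 520.0, 270.0, 145.0, 80.0, 50.0]  # hypothetical

t1 = runtimes_s[0]
for p, t in zip(core_counts, runtimes_s):
    speedup = t1 / t
    efficiency = speedup / p
    print(f"{p:3d} cores: speedup {speedup:5.2f}, efficiency {efficiency:6.2%}")
```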

