scientific applications
Recently Published Documents


TOTAL DOCUMENTS

1058
(FIVE YEARS 132)

H-INDEX

39
(FIVE YEARS 4)

2022 ◽  
pp. 1-28
Author(s):  
Marcelo de Carvalho Alves ◽  
Luciana Sanches

SIMULATION ◽  
2021 ◽  
pp. 003754972110641
Author(s):  
Aurelio Vivas ◽  
Harold Castro

Since simulation became the third pillar of scientific research, several forms of computers have become available to drive computer-aided simulations, and clusters are now the most popular type of computer supporting these tasks. Cluster settings such as supercomputers, clusters of workstations (COW), clusters of desktops (COD), and clusters of virtual machines (COV) have been considered in the literature to serve a variety of scientific applications. However, scientific applications categorized as high-performance computing (HPC) are conventionally assumed to be addressable only by supercomputers. To examine this assumption, we introduce the notions of cluster overhead and cluster coupling to assess the capacity of non-HPC systems to handle HPC applications. We also compare the cluster overhead with an existing measure of overhead in computing systems, the total parallel overhead, to validate our methodology. The capacity evaluation considers the seven dwarfs of scientific computing, a well-known set of building blocks used in the development of HPC applications. Evaluating these building blocks provides insights into the strengths and weaknesses of non-HPC systems for future HPC applications developed from one or a combination of these algorithmic building blocks.
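As a rough, non-authoritative sketch of the kind of comparison the abstract describes, the snippet below contrasts the classical total parallel overhead (T_o = p·T_p − T_s) with a hypothetical relative "cluster overhead"; the paper's actual definitions of cluster overhead and cluster coupling may differ.

```python
# Minimal sketch (not the paper's formulation): contrasting the classical
# total parallel overhead with a hypothetical cluster overhead that compares
# a non-HPC cluster (COW/COD/COV) against a supercomputer baseline.

def total_parallel_overhead(t_serial: float, t_parallel: float, p: int) -> float:
    """Classical total parallel overhead: T_o = p * T_p - T_s."""
    return p * t_parallel - t_serial

def cluster_overhead(t_cluster: float, t_supercomputer: float) -> float:
    """Hypothetical relative overhead of running the same workload on a
    non-HPC cluster instead of a supercomputer (assumed metric)."""
    return (t_cluster - t_supercomputer) / t_supercomputer

if __name__ == "__main__":
    # Illustrative numbers only.
    print(total_parallel_overhead(t_serial=100.0, t_parallel=15.0, p=8))  # 20.0
    print(cluster_overhead(t_cluster=130.0, t_supercomputer=100.0))       # 0.3
```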


Author(s):  
C. A. Haswell

The Ariel mission will execute an ambitious survey to measure transit and/or secondary eclipse spectra of the atmospheres of about 1000 exoplanets. I outline here some possible scientific applications of the exquisite Ariel Core Survey data, beyond the science for which they are primarily designed.


2021 ◽  
Author(s):  
Marco Del Giudice

In this paper, I highlight a problem that has become ubiquitous in scientific applications of machine learning methods, and can lead to seriously distorted inferences about the phenomena under study. I call it the prediction-explanation fallacy. The fallacy occurs when researchers use prediction-optimized models for explanatory purposes, without considering the tradeoffs between explanation and prediction. This is a problem for at least two reasons. First, prediction-optimized models are often deliberately biased and unrealistic in order to prevent overfitting, and hence fail to accurately explain the phenomenon of interest. In other cases, they have an exceedingly complex structure that is hard or impossible to interpret, which greatly limits their explanatory value. Second, different predictive models trained on the same or similar data can be biased in different ways, so that multiple models may predict equally well but suggest conflicting explanations of the underlying phenomenon. In this note I introduce the tradeoffs between prediction and explanation in a non-technical fashion, present some illustrative examples from neuroscience, and end by discussing some mitigating factors and methods that can be used to limit or circumvent the problem.
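The toy example below (not taken from the paper) illustrates the point about conflicting explanations: on nearly collinear predictors, a ridge and a lasso model predict about equally well yet attribute the effect to the features very differently.

```python
# Illustrative sketch: two prediction-optimized models can fit equally well
# yet "explain" the data differently. With two highly correlated predictors,
# ridge spreads the effect across both while lasso concentrates it on one,
# even though held-out accuracy is nearly identical.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # nearly collinear with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.coef_.round(2),
          round(model.score(X_te, y_te), 3))
# Comparable R^2, but the coefficient-based "explanations" disagree.
```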


2021 ◽  
Vol 27 (12) ◽  
pp. 619-625
Author(s):  
I. V. Bychkov ◽  
◽  
S. A. Gorsky ◽  
A. G. Feoktistov ◽  
R. O. Kostromin ◽  
...  

Tools for designing scientific applications often lack the continuous integration capabilities required for the applied software. As a result, overheads such as application development time and experiment execution makespan increase substantially. We therefore propose a new approach to developing scientific applications and carrying out experiments with them. It is based on applying continuous integration to both the applied and system software when developing distributed applied software packages with a modular architecture using the Orlando Tools framework. Within the proposed approach, we integrate the Orlando Tools subsystems with the GitLab system and automate the development of package modules. At the same time, Orlando Tools fully supports constructing and testing problem-solving schemes (workflows) that combine package modules located on environment resources with different computational characteristics. To this end, Orlando Tools provides the necessary configuration and setup of computational resources. The practical significance of our study lies in a substantial reduction of the overheads needed to carry out experiments and in an increase in resource use efficiency. A toy sketch of the problem-solving-scheme idea is given below.
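As an illustration only, the sketch below mimics the idea of composing package modules into a problem-solving scheme; the module names and the `scheme` helper are hypothetical and do not reflect the actual Orlando Tools API, in which modules run on distributed resources and are built and tested through the GitLab-based continuous integration.

```python
# Toy sketch of a "problem-solving scheme" (workflow) composed from modules.
# All names here are hypothetical stand-ins for applied package code.
from typing import Callable, Dict, List

Module = Callable[[Dict], Dict]

def scheme(modules: List[Module]) -> Module:
    """Compose package modules into a sequential problem-solving scheme."""
    def run(data: Dict) -> Dict:
        for module in modules:
            data = module(data)
        return data
    return run

def preprocess(d): return {**d, "clean": True}
def solve(d):      return {**d, "result": sum(d["values"])}
def report(d):     return {**d, "report": f"sum={d['result']}"}

workflow = scheme([preprocess, solve, report])
print(workflow({"values": [1, 2, 3]})["report"])   # sum=6
```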


2021 ◽  
pp. 569-599
Author(s):  
Stephen Chin ◽  
Johan Vos ◽  
James Weaver

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7238
Author(s):  
Zulfiqar Ahmad ◽  
Ali Imran Jehangiri ◽  
Mohammed Alaa Ala’anzy ◽  
Mohamed Othman ◽  
Arif Iqbal Umar

Cloud computing is a fully fledged, mature, and flexible computing paradigm that provides services to scientific and business applications in a subscription-based environment. Scientific applications such as Montage and CyberShake are organized as scientific workflows with data- and compute-intensive tasks, and they have some special characteristics: their tasks are executed in terms of integration, disintegration, pipelining, and parallelism, and thus require special attention to task management and data-oriented resource scheduling and management. Tasks executed in a pipeline are bottleneck executions whose failure renders the whole execution futile, and so they require fault-tolerance-aware execution. Tasks executed in parallel require similar instances of cloud resources, so cluster-based execution can improve system performance in terms of makespan and execution cost. Therefore, this work presents a cluster-based, fault-tolerant, and data-intensive (CFD) scheduling strategy for scientific applications in cloud environments. The CFD strategy addresses the data intensiveness of scientific-workflow tasks with cluster-based, fault-tolerant mechanisms. The Montage scientific workflow was used for simulation, and the results of the CFD strategy were compared with three well-known heuristic scheduling policies: (a) MCT, (b) Max-min, and (c) Min-min. The simulation results show that the CFD strategy reduces the makespan by 14.28%, 20.37%, and 11.77%, respectively, compared with these three policies. Similarly, CFD reduces the execution cost by 1.27%, 5.3%, and 2.21%, respectively. With the CFD strategy, the SLA is not violated with regard to time and cost constraints, whereas the existing policies violate it numerous times.
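For context, the sketch below implements the Min-min baseline heuristic referenced in the comparison (not the CFD strategy itself); the expected-execution-time matrix is an assumed toy input.

```python
# Sketch of the Min-min scheduling heuristic used as a baseline above.
# ect[i][j] is an assumed matrix of the expected execution time of task i on VM j.
from typing import List, Tuple

def min_min(ect: List[List[float]]) -> List[Tuple[int, int]]:
    """Return (task, vm) assignments following the Min-min policy."""
    n_tasks, n_vms = len(ect), len(ect[0])
    ready = [0.0] * n_vms                 # when each VM becomes free
    unscheduled = set(range(n_tasks))
    schedule = []
    while unscheduled:
        # Pick the task whose earliest completion time over all VMs is smallest.
        task, vm, finish = min(
            ((t, v, ready[v] + ect[t][v]) for t in unscheduled for v in range(n_vms)),
            key=lambda x: x[2],
        )
        schedule.append((task, vm))
        ready[vm] = finish
        unscheduled.remove(task)
    return schedule

# Tiny illustrative instance: 3 tasks, 2 VMs.
print(min_min([[4.0, 6.0], [3.0, 5.0], [7.0, 2.0]]))  # [(2, 1), (1, 0), (0, 0)]
```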


2021 ◽  
Author(s):  
Mariza Ferro ◽  
Vinicius P. Klôh ◽  
Matheus Gritz ◽  
Vitor de Sá ◽  
Bruno Schulze

Understanding the runtime impact of scientific applications on computational architectures should guide the use of resources in high-performance computing systems. In this work, we analyze Machine Learning (ML) algorithms to gather knowledge about the performance of these applications through hardware events and derived performance metrics. Nine NAS benchmarks were executed and their hardware events collected. These experimental results were used to train a Neural Network, a Decision Tree Regressor, and a Linear Regression model to predict the runtime of scientific applications from the performance metrics.
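A minimal sketch of this modelling step, using synthetic stand-ins for the hardware-event features rather than the NAS benchmark measurements reported in the paper:

```python
# Sketch only: predicting runtime from hardware-event features with the three
# model families mentioned above. The feature matrix is synthetic placeholder
# data (e.g. instructions, cache misses, branch misses, FLOPs), not NAS results.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(size=(200, 4))                                       # assumed features
y = 2.0 * X[:, 0] + 5.0 * X[:, 1] + rng.normal(scale=0.1, size=200)  # synthetic runtime

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=4, random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))   # R^2 on held-out data
```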

