CHARACTERIZING LARGE-SCALE HPC APPLICATIONS THROUGH TRACE EXTRAPOLATION

2013 ◽  
Vol 23 (04) ◽  
pp. 1340008 ◽  
Author(s):  
LAURA CARRINGTON ◽  
MICHAEL LAURENZANO ◽  
ANANTA TIWARI

The analysis and understanding of large-scale application behavior is critical for effectively utilizing existing HPC resources and making design decisions for upcoming systems. In this work we utilize information about the behavior of an MPI application at a series of smaller core counts to characterize its behavior at a much larger core count. Our methodology first captures the application's behavior via a set of features that are important for both performance and energy (cache hit rates, floating point intensity, ILP, etc.). We then find the best statistical fit from among a set of canonical functions in terms of how these features change across a series of small core counts. The models for a given feature can then be utilized to generate an extrapolated trace of the application at scale. The accuracy of the extrapolated traces is evaluated by calculating the error of the extrapolated trace relative to an actual trace for two large-scale applications, UH3D and SPECFEM3D. The accuracy of the fully extrapolated traces is further evaluated by comparing performance models built from the extrapolated trace and from an actual trace when predicting application performance. For these two full-scale HPC applications, performance models built using the extrapolated traces predicted the runtime with absolute relative errors of less than 5%.
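A minimal sketch of the extrapolation idea described above (not the authors' implementation): a per-feature trend is fitted against core count using a few canonical forms, the best fit is selected by residual error, and the winning model is evaluated at the target scale. The feature name, measurements, and candidate forms below are hypothetical.

```python
# Sketch of feature extrapolation across core counts: fit canonical forms to a
# feature observed at small core counts, keep the best fit, evaluate at scale.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: L1 hit rate observed at small core counts.
cores = np.array([64, 128, 256, 512, 1024], dtype=float)
l1_hit_rate = np.array([0.952, 0.948, 0.945, 0.943, 0.941])

# Candidate canonical functions of core count p.
candidates = {
    "constant": lambda p, a: a + 0.0 * p,
    "log":      lambda p, a, b: a + b * np.log(p),
    "power":    lambda p, a, b: a * p ** b,
}

best_name, best_fn, best_params, best_err = None, None, None, np.inf
for name, fn in candidates.items():
    try:
        params, _ = curve_fit(fn, cores, l1_hit_rate, maxfev=10000)
    except RuntimeError:
        continue  # fit failed to converge; skip this form
    residual = np.sum((fn(cores, *params) - l1_hit_rate) ** 2)
    if residual < best_err:
        best_name, best_fn, best_params, best_err = name, fn, params, residual

# Extrapolate the feature to a much larger core count (e.g., 16384 cores).
target_cores = 16384.0
print(best_name, best_fn(target_cores, *best_params))
```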

Mission Performance Models (MPMs) are important to the design of modern digital avionic systems because flight deck information is no longer self-evident. In large-scale dynamic systems, the necessary responses should correspond directly to the incoming information model. A Mission Performance Model is an abstract representation of the activity clusters necessary to achieve mission success. The three core activity clusters, trajectory management, energy management, and attitude control, are covered in detail. Their combined performance characteristics highlight the vehicle's kinematic attributes, which in turn allows unstable conditions to be anticipated. Six MPMs are necessary for the effective design and employment of a modern mission-ready flight deck. We describe MPMs and their structure, purpose, and operational application. Performance models have many important uses, including training system definition and design, avionic system design, and safety programs.


2001 ◽  
Vol 12 (03) ◽  
pp. 341-363 ◽  
Author(s):  
JENNIFER M. SCHOPF ◽  
FRANCINE BERMAN

Prediction is a critical component in the achievement of application execution performance. The development of adequate and accurate prediction models is especially difficult in local-area clustered environments where resources are distributed and performance varies due to the presence of other users in the system. This paper discusses the use of stochastic values to parameterize cluster application performance models. Stochastic values represent a range of likely behavior and can be used effectively as model parameters. We describe two representations for stochastic model parameters and demonstrate their effectiveness in predicting the behavior of several applications under different workloads on a contended network of workstations.
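An illustrative sketch of the stochastic-parameter idea, under assumed distributions and a hypothetical compute-plus-communicate model (the paper's actual representations differ): contended resources are sampled as distributions rather than point values, so the prediction comes out as a range of likely runtimes.

```python
# Stochastic model parameters: each contended resource is a distribution, and
# the runtime prediction is reported as a likely range instead of one number.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic parameters for one cluster node: available CPU
# fraction and achievable network bandwidth (MB/s), estimated from observations
# of the shared system.
cpu_fraction  = rng.normal(loc=0.6, scale=0.1, size=10000).clip(0.05, 1.0)
bandwidth_mbs = rng.normal(loc=40.0, scale=8.0, size=10000).clip(1.0, None)

# Simple compute + communicate model for one task (hypothetical workload).
work_mflop, data_mb, peak_mflops = 5000.0, 200.0, 800.0
runtime = work_mflop / (peak_mflops * cpu_fraction) + data_mb / bandwidth_mbs

# Report the prediction as an interval rather than a single value.
lo, hi = np.percentile(runtime, [10, 90])
print(f"predicted runtime: {runtime.mean():.1f}s, likely range {lo:.1f}-{hi:.1f}s")
```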


2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high-level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved, both in terms of a better understanding of THG processes and of more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns in a wide range of THG conditions enhances confidence in their prediction capabilities. Numerical THG models of heating and hydration experiments performed on small-scale lab cells provide excellent results for temperatures, water inflow and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of the measured data. In general, the fit of concentrations of dissolved species is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model simultaneously reproduce the observed water inflows and bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the “in situ” test is based on the same bentonite TH parameters and assumptions as the “mock-up” test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model were calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS.
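As a loose illustration of the calibration step described above (not the FEBEX THG codes), the sketch below adjusts a single free parameter of a toy radial temperature profile so that it reproduces hypothetical sensor readings; the real models calibrate many coupled TH parameters against measured temperatures, humidities and water inflows.

```python
# Toy parameter calibration: tune one shape parameter of a simple radial
# temperature profile so it matches (hypothetical) sensor observations.
import numpy as np
from scipy.optimize import least_squares

r_heater, r_wall = 0.5, 1.1                # radii (m) of heater surface and granite wall
T_heater, T_wall = 100.0, 25.0             # boundary temperatures (deg C)
sensor_r = np.array([0.6, 0.8, 1.0])       # sensor positions (m), hypothetical
observed_T = np.array([82.0, 58.0, 36.0])  # observed temperatures (deg C), hypothetical

def model_T(radii, exponent):
    # Simple radial interpolation whose shape depends on one free parameter;
    # it stands in for a full coupled THG simulation in this sketch.
    s = ((radii - r_heater) / (r_wall - r_heater)) ** exponent
    return T_heater + (T_wall - T_heater) * s

fit = least_squares(lambda p: model_T(sensor_r, p[0]) - observed_T, x0=[1.0])
print("calibrated shape parameter:", fit.x[0])
```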


1996 ◽  
Vol 13 (3) ◽  
pp. 259-270 ◽  
Author(s):  
Julia Winterson

Originally, the creative music workshop involving professional players was intended to give direct support to school teachers and to enhance music in the classroom, but today's large-scale, high-profile projects mounted by orchestras and opera companies appear to be developing into a full-scale industry on their own. Their role in partnership with schools and colleges now requires clarification: a survey of education policies has revealed some confusion of aims with few bodies looking closely at objectives, outcomes and effects. Music companies could profit from the experience of museums and art galleries.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Yvkai Zhang

Assistive technologies and evaluation systems are expected to effectively solve the challenges of using robots in construction activities and to improve efficiency and building performance. Still, challenges need to be addressed before robots can be used for construction on a large scale. Studies corresponding to the identified areas were critically reviewed, and the development of research and application in each area was systematically analyzed to identify future directions for both academia and industry. More specifically, this review focuses on determining the technology requirements and application profile for construction robots; based on this analysis, a complete construction framework for robots is proposed, which integrates robots and activities with various modular systems and digital technologies to obtain a globally optimal solution.


2014 ◽  
Vol 71 (7) ◽  
pp. 2476-2488 ◽  
Author(s):  
Dale R. Durran ◽  
Mark Gingrich

Abstract The spectral turbulence model of Lorenz, as modified for surface quasigeostrophic dynamics by Rotunno and Snyder, is further modified to more smoothly approach nonlinear saturation. This model is used to investigate error growth starting from different distributions of the initial error. Consistent with an often overlooked finding by Lorenz, the loss of predictability generated by initial errors of small but fixed absolute magnitude is essentially independent of their spatial scale when the background saturation kinetic energy spectrum is proportional to the −5/3 power of the wavenumber. Thus, because the background kinetic energy increases with scale, very small relative errors at long wavelengths have similar impacts on perturbation error growth as large relative errors at short wavelengths. To the extent that this model applies to practical meteorological forecasts, the influence of initial perturbations generated by butterflies would be swamped by unavoidable tiny relative errors in the large scales. The rough applicability of the authors’ modified spectral turbulence model to the atmosphere over scales ranging between 10 and 1000 km is supported by the good estimate that it provides for the ensemble error growth in state-of-the-art ensemble mesoscale model simulations of two winter storms. The initial-error spectrum for the ensemble perturbations in these cases has maximum power at the longest wavelengths. The dominance of large-scale errors in the ensemble suggests that mesoscale weather forecasts may often be limited by errors arising from the large scales instead of being produced solely through an upscale cascade from the smallest scales.
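A small worked example of the scale argument above, using an assumed k^(-5/3) saturation spectrum and arbitrary units: an initial error of fixed absolute magnitude corresponds to a far smaller relative error at 1000 km than at 10 km.

```python
# With a background saturation spectrum E(k) ~ k**(-5/3), a fixed absolute
# error is a tiny *relative* error at long wavelengths but a large one at
# short wavelengths, yet both degrade predictability similarly.
import numpy as np

wavelengths_km = np.array([1000.0, 100.0, 10.0])   # large to small scales
k = 2.0 * np.pi / wavelengths_km                   # wavenumber (1/km)

E_background = k ** (-5.0 / 3.0)        # saturation spectrum, arbitrary units
E_error = 1e-3 * np.min(E_background)   # same small absolute error at every scale

relative_error = E_error / E_background
for lam, rel in zip(wavelengths_km, relative_error):
    print(f"wavelength {lam:7.1f} km: relative error {rel:.2e}")
# The relative error grows as wavelength shrinks (a factor of ~100**(5/3) over
# the two decades from 1000 km to 10 km), which is the sense in which
# large-scale errors dominate.
```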


Leonardo ◽  
2013 ◽  
Vol 46 (3) ◽  
pp. 288-289
Author(s):  
Andrew Pepper

The author discusses an architectural installation incorporating the holographic shadows of water installed in a miniature townscape as part of a collaborative, developmental, group exhibition. The installation concept is outlined and its impact on the ability to stimulate architectural interventions, which would not normally be possible within a full-scale environment, is considered. The influence of a major large-scale holographic installation is outlined and the requirement to suspend belief, within the current exhibition, is discussed.


2021 ◽  
Author(s):  
Eija Tanskanen ◽  
Tero Raita ◽  
Joni Tammi ◽  
Jouni Pulliainen ◽  
Hannu Koivula ◽  
...  

The near-Earth environment is continuously changing due to disturbances from external and internal sources. A combined research ecosystem is needed to monitor short- and long-term changes and mitigate their societal effects. Observatories and large-scale infrastructures are the best way to guarantee continuous 24/7 observations and full-scale monitoring capability. Sodankylä Geophysical Observatory carries out continuous geoenvironmental monitoring in Finland and, together with national infrastructures such as FIN-EPOS and E2S, enables extending and expanding the monitoring capability. The European Plate Observing System of Finland (FIN-EPOS) and the flexible instrument network of FIN-EPOS (FLEX-EPOS) will create a national pool of instruments, including geophysical instruments targeted at solving topical questions of solid Earth physics. The scientific and hardware-building work of FLEX-EPOS is essential in order to identify and reduce the impact of seismic, magnetic and geodetic hazards and to understand the underlying processes.

The new national infrastructure Earth-Space Research Ecosystem (E2S) will combine measurements from the atmosphere to near-Earth and distant space. This combined infrastructure will make it possible to resolve how the Arctic environment changes over seasons, years, decades and centuries. We target our joint efforts at improving situational awareness in the near-Earth and space environments, and in the Arctic, to enhance safety on the ground and in space. This presentation will give details on the large-scale Earth-space infrastructures and research ecosystems and will give examples of how they can improve the safety of society.


Author(s):  
Yuan-Shun Dai ◽  
Jack Dongarra

Grid computing is a newly developed technology for complex systems with large-scale resource sharing, wide-area communication, and multi-institutional collaboration. It is hard to analyze and model Grid reliability because of the Grid's scale, complexity and stiffness. Therefore, this chapter introduces Grid computing technology, presents different types of failures in grid systems, models grid reliability with star and tree structures, and finally studies optimization problems for grid task partitioning and allocation. The chapter then presents models for the star topology considering data dependence and for the tree structure considering failure correlation. Evaluation tools and algorithms are developed, building on the universal generating function and graph theory. Failure correlation and data dependence are then incorporated into the models. Numerical examples are presented to illustrate the modeling and analysis.
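A toy sketch of the star-topology reliability idea under assumed independent exponential failure rates (the chapter's universal-generating-function models are more general): a task succeeds only if the resource management system, the executing resource, and the connecting link all survive the task's duration, so allocation trades a resource's speed against its failure rate. All rates and times below are made up.

```python
# Star-topology grid reliability sketch: series system of the resource
# management system (star centre), one resource, and the link between them,
# each with an independent constant failure rate.
import math

def element_reliability(failure_rate_per_hr: float, hours: float) -> float:
    """Probability that an element survives 'hours' at a constant failure rate."""
    return math.exp(-failure_rate_per_hr * hours)

def star_task_reliability(task_hours: float, rms_rate: float,
                          resource_rate: float, link_rate: float) -> float:
    """Series system: the RMS, one resource, and one link must all survive."""
    return (element_reliability(rms_rate, task_hours)
            * element_reliability(resource_rate, task_hours)
            * element_reliability(link_rate, task_hours))

# A faster but less reliable resource can still give higher task reliability
# because the exposure time is shorter; this is the allocation trade-off.
slow = star_task_reliability(2.0, rms_rate=0.001, resource_rate=0.01, link_rate=0.005)
fast = star_task_reliability(0.5, rms_rate=0.001, resource_rate=0.02, link_rate=0.005)
print(f"slow reliable resource: {slow:.4f}, fast flaky resource: {fast:.4f}")
```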

