Enabling coupled multi-scale, multi-field experiments through choreographies of data-driven scientific simulations

Computing ◽  
2014 ◽  
Vol 98 (4) ◽  
pp. 439-467 ◽  
Author(s):  
Andreas Weiß ◽  
Dimka Karastoyanova


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions, thereby accelerating the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
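The abstract includes no code; as a rough illustration of importance-driven adaptive compression, the sketch below partitions a 2-D field into blocks, scores each block with a hypothetical gradient-magnitude importance metric, and applies lossless `zlib` to important regions and quantize-then-`zlib` (lossy) elsewhere. All function names, the metric, and the thresholds are illustrative assumptions, not the authors' implementation.

```python
import zlib
import numpy as np

def importance(block):
    """Hypothetical importance metric: mean gradient magnitude of the block."""
    gy, gx = np.gradient(block)
    return float(np.mean(np.hypot(gx, gy)))

def compress_region(block, important, quant_step=0.05):
    """Lossless zlib for important regions; quantize-then-zlib (lossy) otherwise."""
    if important:
        return zlib.compress(block.astype(np.float64).tobytes()), "lossless"
    q = np.round(block / quant_step).astype(np.int16)  # coarse quantization
    return zlib.compress(q.tobytes()), "lossy"

def adaptive_compress(field, block_size=32, threshold=0.1):
    """Split the field into square blocks and compress each by importance."""
    out = []
    for i in range(0, field.shape[0], block_size):
        for j in range(0, field.shape[1], block_size):
            block = field[i:i + block_size, j:j + block_size]
            data, mode = compress_region(block, importance(block) > threshold)
            out.append(((i, j), mode, data))
    return out

# A field that is flat except for one "interesting" noisy region.
rng = np.random.default_rng(0)
field = np.zeros((64, 64))
field[16:32, 16:32] = rng.normal(size=(16, 16))
regions = adaptive_compress(field, block_size=32, threshold=0.01)
modes = [mode for _, mode, _ in regions]
```

Only the block containing the noisy region exceeds the importance threshold and is kept lossless; the flat blocks compress lossily (and, here, to almost nothing).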


2021 ◽  
Author(s):  
Alex Chin ◽  
Dean Eckles ◽  
Johan Ugander

When trying to maximize the adoption of a behavior in a population connected by a social network, it is common to strategize about where in the network to seed the behavior, often with an element of randomness. Selecting seeds uniformly at random is a basic but compelling strategy in that it distributes seeds broadly throughout the network. A more sophisticated stochastic strategy, one-hop targeting, is to select random network neighbors of random individuals; this exploits a version of the friendship paradox, whereby the friend of a random individual is expected to have more friends than a random individual, with the hope that seeding a behavior at more connected individuals leads to more adoption. Many seeding strategies have been proposed, but empirical evaluations have demanded large field experiments designed specifically for this purpose and have yielded relatively imprecise comparisons of strategies. Here we show how stochastic seeding strategies can be evaluated more efficiently in such experiments, how they can be evaluated “off-policy” using existing data arising from experiments designed for other purposes, and how to design more efficient experiments. In particular, we consider contrasts between stochastic seeding strategies and analyze nonparametric estimators adapted from policy evaluation and importance sampling. We use simulations on real networks to show that the proposed estimators and designs can substantially increase precision while yielding valid inference. We then apply our proposed estimators to two field experiments, one that assigned households to an intensive marketing intervention and one that assigned students to an antibullying intervention. This paper was accepted by Gui Liberali, Special Issue on Data-Driven Prescriptive Analytics.
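As a minimal sketch of the ideas above (not the authors' estimators), the code below implements single-seed one-hop targeting and a simple importance-sampling off-policy estimate: outcomes observed under one seeding policy are reweighted by the ratio of seed probabilities under the target and behavior policies. The star-graph example and all names are illustrative.

```python
import random

def one_hop_seed(adj, rng):
    """One-hop targeting: a random neighbor of a random individual."""
    v = rng.choice(list(adj))
    return rng.choice(adj[v])

def seed_probability(adj, s, strategy):
    """P(seed = s) under each single-seed stochastic strategy."""
    n = len(adj)
    if strategy == "uniform":
        return 1.0 / n
    # one-hop: sum over individuals v of P(pick v) * P(pick s among v's friends)
    return sum(1.0 / (n * len(adj[v])) for v in adj if s in adj[v])

def off_policy_estimate(records, adj, target):
    """Importance-sampling estimate of the mean outcome under `target`,
    from (seed, outcome, behavior_strategy) records collected under another policy."""
    total = 0.0
    for seed, outcome, behavior in records:
        w = seed_probability(adj, seed, target) / seed_probability(adj, seed, behavior)
        total += w * outcome
    return total / len(records)

# A 4-node star network: node 0 is the hub, nodes 1-3 are leaves.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
example_seed = one_hop_seed(adj, random.Random(0))
# Data collected under uniform seeding; outcome is 1 only when the hub is seeded.
records = [(s, 1.0 if s == 0 else 0.0, "uniform") for s in range(4)]
est = off_policy_estimate(records, adj, "one_hop")
```

On the star, one-hop targeting selects the hub with probability 3/4 (the friendship paradox at work), and the reweighted estimate recovers exactly that expected outcome from uniformly seeded data.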


2019 ◽  
Author(s):  
Wentao Zhu ◽  
Yufang Huang ◽  
Mani A Vannan ◽  
Shizhen Liu ◽  
Daguang Xu ◽  
...  

Abstract Echocardiography has become routinely used in the diagnosis of cardiomyopathy and abnormal cardiac blood flow. However, manually measuring myocardial motion and cardiac blood flow from echocardiograms is time-consuming and error-prone. Computer algorithms that can automatically track and quantify myocardial motion and cardiac blood flow are highly sought after, but have not been very successful due to noise and the high variability of echocardiography. In this work, we propose a neural multi-scale self-supervised registration (NMSR) method for automated myocardial and cardiac blood flow dense tracking. NMSR incorporates two novel components: 1) utilizing a deep neural net to parameterize the velocity field between two image frames, and 2) optimizing the parameters of the neural net in a sequential multi-scale fashion to account for large variations within the velocity field. Experiments demonstrate that NMSR yields significantly better registration accuracy than state-of-the-art methods, such as Advanced Normalization Tools (ANTs) and VoxelMorph, for both myocardial and cardiac blood flow dense tracking. Our approach promises to provide a fully automated method for fast and accurate analyses of echocardiograms.
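NMSR itself is a neural method; as a drastically simplified stand-in for its sequential multi-scale idea, the sketch below performs classical coarse-to-fine registration of a rigid integer translation (not a neural velocity field): the shift is estimated at the coarsest pyramid level, doubled, and refined at each finer level.

```python
import numpy as np

def downsample(img):
    """2x2 average pooling: the crudest image-pyramid level."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_shift(fixed, moving, search=2):
    """Brute-force integer shift minimizing SSD within +/- search pixels."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def multiscale_register(fixed, moving, levels=3):
    """Coarse-to-fine: estimate at the coarsest level, double, refine upward."""
    pyr = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyr[-1]
        pyr.append((downsample(f), downsample(m)))
    dy, dx = 0, 0
    for f, m in reversed(pyr):                      # coarsest level first
        dy, dx = dy * 2, dx * 2                     # propagate to the finer grid
        m_warp = np.roll(np.roll(m, dy, axis=0), dx, axis=1)
        ddy, ddx = best_shift(f, m_warp, search=2)
        dy, dx = dy + ddy, dx + ddx
    return dy, dx

# A Gaussian blob shifted by (4, -8); registration should recover (-4, 8).
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / 50.0)
moving = np.roll(np.roll(fixed, 4, axis=0), -8, axis=1)
shift = multiscale_register(fixed, moving, levels=3)  # -> (-4, 8)
```

The point of the coarse-to-fine loop is that each level only needs a small (+/- 2 pixel) search, yet the composed estimate covers a much larger displacement, which is the same motivation NMSR gives for sequential multi-scale optimization of its velocity field.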


Author(s):  
Zhuo Wang ◽  
Chen Jiang ◽  
Mark F. Horstemeyer ◽  
Zhen Hu ◽  
Lei Chen

Abstract One of the significant challenges in metallic additive manufacturing (AM) is the presence of many sources of uncertainty that lead to variability in the microstructure and properties of AM parts. Consequently, it is extremely challenging to repeat the manufacturing of a high-quality product in mass production, and a trial-and-error approach usually needs to be employed to attain a product with high quality. To achieve a comprehensive uncertainty quantification (UQ) study of AM processes, we present a physics-informed data-driven modeling framework in which multi-level data-driven surrogate models are constructed from extensive computational data obtained with multi-scale, multi-physics AM models. The framework starts with computationally inexpensive metamodels, followed by experimental calibration of the as-built metamodels and then efficient UQ analysis of the AM process. For illustration purposes, this study uses the thermal level of the AM process as an example, choosing the temperature field and the melt pool as quantities of interest. We demonstrate surrogate modeling in the presence of a high-dimensional response (e.g., the temperature field) during the AM process, and illustrate the parameter calibration and model correction of an as-built surrogate model for reliable uncertainty quantification. The experimental calibration takes advantage of the high-quality AM benchmark data from the National Institute of Standards and Technology (NIST). This study demonstrates the potential of the proposed data-driven UQ framework for efficiently investigating uncertainty propagation from process parameters to material microstructures, and then to macro-level mechanical properties, through a combination of advanced AM multi-physics simulations, data-driven surrogate modeling, and experimental calibration.
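As a toy sketch of the two-step pattern described above (fit a cheap metamodel to a few expensive runs, then do UQ through it), the code below fits a cubic polynomial surrogate to a hypothetical melt-pool-depth model and propagates laser-power uncertainty by Monte Carlo. The model, parameters, and distributions are invented for illustration and are not the paper's calibrated surrogates.

```python
import numpy as np

def expensive_model(p):
    """Stand-in for a multi-physics AM thermal simulation: melt-pool depth
    as a hypothetical nonlinear function of laser power p."""
    return 0.02 * p ** 1.5 / (1 + 0.001 * p)

# Step 1: build a cheap metamodel from a handful of "simulation" runs.
train_p = np.linspace(100, 400, 8)
train_y = expensive_model(train_p)
coef = np.polyfit(train_p, train_y, deg=3)           # cubic polynomial surrogate
surrogate = np.poly1d(coef)

# Step 2: propagate process-parameter uncertainty through the cheap surrogate,
# which would be far too costly to do with the full simulation directly.
rng = np.random.default_rng(42)
power_samples = rng.normal(250.0, 20.0, size=20_000)  # uncertain laser power
depth = surrogate(power_samples)
mean, std = depth.mean(), depth.std()
```

The experimental-calibration step of the framework would then correct `coef` against measured benchmark data (e.g., the NIST AM benchmarks mentioned above) before trusting the propagated statistics.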


2021 ◽  
Author(s):  
Elnaz Naghibi ◽  
Sergey Karabasov ◽  
Vassili Toropov ◽  
Vasily Gryazev

In this study, we investigate Genetic Programming as a data-driven approach to reconstruct eddy-resolved simulations of the double-gyre problem. Stemming from Genetic Algorithms, Genetic Programming is a method of symbolic regression which can be used to extract temporal or spatial functionalities from simulation snapshots. The double-gyre circulation is simulated by a stratified quasi-geostrophic model, which is solved using the high-resolution CABARET scheme. The simulation results are compressed using proper orthogonal decomposition, and the time-variant coefficients of the reduced-order model are fed into a Genetic Programming code. Due to the multi-scale nature of the double-gyre problem, we decompose the time signal into a meandering and a fluctuating component. We then explore the parameter space of objective functions in Genetic Programming to capture the two components separately. The data-driven predictions are cross-compared with the original double-gyre signal in terms of statistical moments such as the variance and the auto-correlation function.
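The POD compression step can be sketched in a few lines (a toy stand-in, not the authors' CABARET/quasi-geostrophic pipeline): a thin SVD of the snapshot matrix yields spatial modes and the time-variant coefficients a_k(t) that would then be fed to the symbolic-regression code. The two-component snapshot data below is fabricated to mimic a slow meandering mode plus a fast fluctuation.

```python
import numpy as np

# Toy snapshot matrix: a slow "meandering" mode plus a fast fluctuating mode,
# standing in for double-gyre simulation snapshots (columns are time steps).
nx, nt = 128, 200
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)
snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * t / 10))
             + 0.3 * np.outer(np.sin(3 * x), np.sin(2 * np.pi * t)))

def pod(snapshots, r):
    """Proper orthogonal decomposition via thin SVD, truncated to r modes.
    Returns spatial modes, singular values, and time coefficients a_k(t)."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s[:r], s[:r, None] * Vt[:r]   # rows of a_k(t) feed the GP code

modes, sing_vals, coeffs = pod(snapshots, r=2)
energy = sing_vals ** 2 / np.sum(np.linalg.svd(snapshots, compute_uv=False) ** 2)
```

Because the toy data is exactly rank two, two modes capture essentially all of the energy and `modes @ coeffs` reconstructs the snapshots; on real eddy-resolved data one would truncate at the rank where the cumulative `energy` plateaus.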


2020 ◽  
Vol 12 (12) ◽  
pp. 5059
Author(s):  
Xinzheng Lu ◽  
Donglian Gu ◽  
Zhen Xu ◽  
Chen Xiong ◽  
Yuan Tian

To improve the ability to prepare for and adapt to potential hazards in a city, efforts are being invested in evaluating the performance of the built environment under multiple hazard conditions. An integrated physics-based multi-hazard simulation framework covering both individual buildings and urban areas can help improve analysis efficiency and is significant for urban planning and emergency management activities. Therefore, a city information model-powered multi-hazard simulation framework is proposed considering three types of hazards (i.e., earthquake, fire, and wind hazards). The proposed framework consists of three modules: (1) data transformation, (2) physics-based hazard analysis, and (3) high-fidelity visualization. Three advantages are highlighted: (1) the database with multi-scale models is capable of meeting the various demands of stakeholders, (2) hazard analyses are all based on physics-based models, leading to rational and scientific simulations, and (3) high-fidelity visualization can help non-professional users better understand the disaster scenario. A case study of the Tsinghua University campus is performed. The results indicate the proposed framework is a practical method for multi-hazard simulations of both individual buildings and urban areas and has great potential in helping stakeholders to assess and recognize the risks faced by important buildings or the whole city.


2019 ◽  
Vol 870 ◽  
pp. 988-1036 ◽  
Author(s):  
M. A. Mendez ◽  
M. Balabane ◽  
J.-M. Buchlin

Data-driven decompositions are becoming essential tools in fluid dynamics, allowing for tracking the evolution of coherent patterns in large datasets, and for constructing low-order models of complex phenomena. In this work, we analyse the main limits of two popular decompositions, namely the proper orthogonal decomposition (POD) and the dynamic mode decomposition (DMD), and we propose a novel decomposition which allows for enhanced feature detection capabilities. This novel decomposition is referred to as multi-scale proper orthogonal decomposition (mPOD) and combines multi-resolution analysis (MRA) with a standard POD. Using MRA, the mPOD splits the correlation matrix into the contribution of different scales, retaining non-overlapping portions of the correlation spectra; using the standard POD, the mPOD extracts the optimal basis from each scale. After introducing a matrix factorization framework for data-driven decompositions, the MRA is formulated via one- and two-dimensional filter banks for the dataset and the correlation matrix respectively. The validation of the mPOD, and a comparison with the discrete Fourier transform (DFT), DMD and POD are provided in three test cases. These include a synthetic test case, a numerical simulation of a nonlinear advection–diffusion problem and an experimental dataset obtained by the time-resolved particle image velocimetry (TR-PIV) of an impinging gas jet. For each of these examples, the decompositions are compared in terms of convergence, feature detection capabilities and time–frequency localization.
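A heavily simplified sketch of the mPOD idea follows, with plain FFT masks standing in for the paper's one- and two-dimensional filter banks: split the temporal correlation matrix K = X^T X into non-overlapping frequency bands, then extract an orthogonal temporal basis from each band by eigendecomposition. This is an illustrative approximation, not the mPOD algorithm of Mendez et al.

```python
import numpy as np

def mpod_sketch(X, bands):
    """Crude mPOD-like decomposition: band-limit the temporal correlation
    matrix with 2-D FFT masks, then do a POD (eigendecomposition) per band."""
    n_t = X.shape[1]
    K = X.T @ X
    Kf = np.fft.fft2(K)
    freqs = np.abs(np.fft.fftfreq(n_t))
    results = []
    for lo, hi in bands:
        m = (freqs >= lo) & (freqs < hi)
        Km = np.real(np.fft.ifft2(Kf * np.outer(m, m)))  # keep this band only
        Km = 0.5 * (Km + Km.T)                           # re-symmetrize after filtering
        w, psi = np.linalg.eigh(Km)
        order = np.argsort(w)[::-1]                      # strongest modes first
        results.append((w[order], psi[:, order]))
    return results

# Synthetic data: one slow "large-scale" frequency and one fast one.
tau = np.arange(200)
xg = np.arange(64) * 2 * np.pi / 64
slow_t = np.cos(2 * np.pi * 0.02 * tau)
fast_t = np.sin(2 * np.pi * 0.30 * tau)
X = np.outer(np.sin(xg), slow_t) + 0.5 * np.outer(np.cos(2 * xg), fast_t)
(w_slow, psi_slow), (w_fast, psi_fast) = mpod_sketch(X, bands=[(0.0, 0.1), (0.1, 0.51)])
```

Each band's leading temporal eigenvector recovers the corresponding frequency component, which is the feature-separation property that a DFT achieves with perfect frequency localization and POD with none; mPOD sits between the two.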


Author(s):  
Zeng Deliang ◽  
Liu Jiwei ◽  
Liu Jizhen

To improve the security and reliability of equipment and reduce its failure rate, a data-driven state detection algorithm was proposed. The concepts of a multi-scale system, multi-scale entropy, and multi-scale exergy were defined. The algorithm applies to multi-scale systems whose state parameters change over time and increase monotonically on a dominant scale. An abrasion index for the medium-speed roller ring mill was constructed and used to monitor the state of the equipment. Noise that affected the accuracy of the results was analyzed. The results of simulation experiments demonstrate the effectiveness of the algorithm, which can provide a technical basis for condition-based maintenance.
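Multi-scale entropy is a standard construction (coarse-graining followed by sample entropy); the sketch below follows the common recipe, with the match tolerance recomputed per scale (one of several conventions). It is illustrative only, not the authors' abrasion index.

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-grain a series by averaging non-overlapping windows of `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where A and B count template matches of
    length m+1 and m under a Chebyshev tolerance of r times the std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates) - 1):          # i < j pairs, no self-matches
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    b, a = count(m), count(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 4, 8)):
    """Entropy profile across scales; a state index could track its drift."""
    return [sample_entropy(coarse_grain(x, s)) for s in scales]

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)                        # irregular signal
sine = np.sin(2 * np.pi * np.arange(1000) / 50)      # regular signal
mse_noise = multiscale_entropy(noise, scales=(1, 2, 4))
mse_sine = multiscale_entropy(sine, scales=(1, 2, 4))
```

As expected, the irregular signal scores much higher entropy than the periodic one at every scale; in a condition-monitoring setting, a drift of this profile over time on the dominant scale would flag equipment degradation.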

