On the speed and numerical stability of ice-dynamics approximations

Author(s):  
Alexander Robinson ◽  
William Lipscomb ◽  
Daniel Goldberg ◽  
Jorge Alvarez-Solas

The Stokes solution to ice dynamics is computationally expensive, and in many cases unnecessary. Many approximations have been developed that reduce the complexity of the problem and thus reduce computational cost. Most approximations can generally be tuned to give reasonable solutions to ice-dynamics problems, depending on the domain and scale being simulated. However, the inherent numerical stability of time-stepping with different solvers has not been studied in detail. Here we investigate how different approximations lead to limits on the maximum timestep in mass conservation calculations for both idealized and realistic geometries. The ice-sheet models Yelmo and CISM are used to compare the following approximations: the shallow-ice approximation (SIA), the shallow-shelf approximation (SSA), the SIA+SSA approximation (Hybrid) and two variants of the L1L2 solver, namely one that reduces to SIA in the case of no-sliding (dubbed L1L2-SIA here) and the so-called depth-integrated viscosity approximation (DIVA). We find that these approaches vary significantly with respect to numerical stability. The extreme dependence on the local surface gradient of the SIA-based approximations (SIA, Hybrid, L1L2-SIA) leads to an amplified local velocity response and greater potential for instability, especially as grid resolution increases. In contrast, the SSA and DIVA approximations allow for longer time steps, because numerical oscillations in ice thickness are damped with increasing resolution. Given its high fidelity to the Stokes solution and its favorable stability properties, we demonstrate the strong case for using the DIVA approximation in many contexts.

2017 ◽  
Author(s):  
Aurélie Bellemans

Performing high-fidelity plasma simulations remains computationally expensive because of their large dimension and complex chemistry. Atmospheric re-entry plasmas, for instance, involve hundreds of species in thousands of reactions used in detailed physical models. These models are very complex, as they describe the non-equilibrium phenomena due to finite-rate processes in the flow. Chemical non-equilibrium arises because of the many dissociation, ionization and excitation reactions at various time-scales. Vibrational, rotational, electronic and translational temperatures characterize the flow and exchange energy between species, which leads to thermal non-equilibrium. With current computational resources, detailed three-dimensional simulations are still out of reach, and detailed calculations using the full dynamics are often restricted to a zero- or one-dimensional description. A trade-off has to be made between the level of accuracy of the model and its computational cost. This thesis presents various methods to develop accurate reduced kinetic models for plasma flows. Starting from detailed chemistry, high-fidelity reductions are achieved through the application of either physics-based techniques, such as binning methods and time-scale-based reductions, or empirical techniques such as principal component analysis. As an original contribution to the existing methods, the physics-based techniques are combined with principal component analysis, uniting both communities. The different techniques are trained on a 34-species collisional-radiative model for argon plasma by comparing shock relaxation simulations. The best-performing method is applied to the large N-N2 mechanism, containing 9391 species and 23 million reactions, calculated by the NASA Ames Research Center. As a preliminary step, the system dynamics are analyzed to improve our understanding of the various processes occurring in plasma flows.
The reactions are analyzed and classified according to their importance. A deep investigation of the kinetics enables finding the main variables and parameters characterizing the plasma, which can thereafter be used to develop or improve existing reductions. As a result, a novel coarse-grain model has been developed for argon by binning the electronic excited levels and the ionized species into two Boltzmann-averaged energy bins. The ground state is solved individually together with the free electrons, reducing the number of species mass conservation equations from 34 to 4. Principal component analysis has been transferred from the combustion community to plasma flows by investigating the Manifold-Generated and Score-PCA techniques. PCA identifies low-dimensional manifolds empirically, projecting the full kinetics onto its basis of principal components. A novel approach combines the binning techniques with PCA, finding an optimized model for reducing the N3 rovibrational collisional model.
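The projection step described above (mapping the full kinetics onto a basis of principal components) can be sketched as follows. The data here are synthetic, and the sizes (34 species, 4 retained components) mirror the argon case only for illustration; the paper's actual PCA variants (Manifold-Generated, Score-PCA) involve additional modeling choices not shown.

```python
import numpy as np

# Synthetic stand-in for a set of thermochemical states:
# 500 snapshots x 34 species (illustrative sizes only).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 34))
X_mean = X.mean(axis=0)
Xc = X - X_mean                      # center before computing components

# Principal components from the SVD of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4                                # retained components = reduced dimension
A = Vt[:k].T                         # 34 x k basis of principal components

Z = Xc @ A                           # reduced-state scores (500 x k)
X_rec = Z @ A.T + X_mean             # reconstruction from the low-dim manifold

# The relative reconstruction error shrinks as more components are kept
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

In a reduced kinetic model, the governing equations would then be evolved in the k-dimensional score space `Z` instead of the full 34-species space.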


2020 ◽  
Author(s):  
Ali Raza ◽  
Arni Sturluson ◽  
Cory Simon ◽  
Xiaoli Fern

Virtual screenings can accelerate and reduce the cost of discovering metal-organic frameworks (MOFs) for their applications in gas storage, separation, and sensing. In molecular simulations of gas adsorption/diffusion in MOFs, the adsorbate-MOF electrostatic interaction is typically modeled by placing partial point charges on the atoms of the MOF. For the virtual screening of large libraries of MOFs, it is critical to develop computationally inexpensive methods to assign atomic partial charges to MOFs that accurately reproduce the electrostatic potential in their pores. Herein, we design and train a message passing neural network (MPNN) to predict the atomic partial charges on MOFs under a charge neutral constraint. A set of ca. 2,250 MOFs labeled with high-fidelity partial charges, derived from periodic electronic structure calculations, serves as training examples. In an end-to-end manner, from charge-labeled crystal graphs representing MOFs, our MPNN machine-learns features of the local bonding environments of the atoms and learns to predict partial atomic charges from these features. Our trained MPNN assigns high-fidelity partial point charges to MOFs with orders of magnitude lower computational cost than electronic structure calculations. To enhance the accuracy of virtual screenings of large libraries of MOFs for their adsorption-based applications, we make our trained MPNN model and MPNN-charge-assigned computation-ready, experimental MOF structures publicly available.
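The charge-neutral constraint can be illustrated with a minimal post-processing sketch: raw per-atom predictions are shifted so the charges of a unit cell sum to zero. The uniform shift below is an assumption for illustration; the paper's exact constraint mechanism inside the MPNN may distribute the correction differently.

```python
import numpy as np

def enforce_neutrality(raw_charges):
    """Shift raw per-atom charge predictions so the cell is charge neutral.

    A uniform shift is the simplest choice (an assumption here); it is the
    minimal correction in the least-squares sense.
    """
    q = np.asarray(raw_charges, dtype=float)
    return q - q.mean()              # guarantees sum(q) == 0

# Hypothetical raw MPNN outputs for a 4-atom cell:
q = enforce_neutrality([0.31, -0.52, 0.27, 0.10])
```

After the shift, the electrostatic potential computed from `q` corresponds to a physically meaningful, neutral periodic cell.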


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions, of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies, namely a three-finger soft gripper actuated using a PVDF-based terpolymer, and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE), where the key steps are elaborated in detail, including the variable filter, metrics to select the best design, determination of design domains, and material conversion methods from low- to high-fidelity models. In this paper, analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work.
Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline design were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, which would otherwise be over 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.


2018 ◽  
Vol 140 (9) ◽  
Author(s):  
R. Maffulli ◽  
L. He ◽  
P. Stein ◽  
G. Marinescu

The emerging renewable energy market calls for more advanced prediction tools for turbine transient operations in fast startup/shutdown cycles. Reliable numerical analysis of such transient cycles is complicated by the disparity in time scales of the thermal responses in the fluid and solid domains. Obtaining fully coupled, time-accurate unsteady conjugate heat transfer (CHT) results under these conditions would require marching in both domains using the time step dictated by the fluid domain: typically several orders of magnitude smaller than the one required by the solid. This requirement has a strong impact on the computational cost of the simulation and is potentially detrimental to the accuracy of the solution due to the accumulation of round-off errors in the solid. A novel loosely coupled CHT methodology, which removes these requirements through a source-term-based modeling (STM) approach for the physical time-derivative terms in the relevant equations, has recently been proposed and successfully applied to both natural and forced convection cases. The method has been shown to be numerically stable for very large time steps with adequate accuracy. The present effort aims to further exploit the potential of the methodology through a new adaptive time-stepping approach. The proposed method allows for automatic time-step adjustment based on an estimate of the magnitude of the truncation error of the time discretization. The developed automatic time-stepping strategy is applied to natural convection cases under long (2000 s) transients, relevant to the prediction of turbine thermal loads during fast startups/shutdowns. The results of the method are compared with fully coupled unsteady simulations, showing comparable accuracy with a significant reduction of the computational cost.
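Truncation-error-based time-step adaptation of the kind described above can be illustrated with a step-doubling sketch on a scalar ODE. The forward-Euler integrator, tolerance, and safety factor below are illustrative assumptions, not the paper's actual scheme.

```python
def euler_step(f, t, y, dt):
    """One forward-Euler step (illustrative low-order integrator)."""
    return y + dt * f(t, y)

def adaptive_step(f, t, y, dt, tol=1e-4, safety=0.9):
    """Advance one step; grow or shrink dt from a truncation-error estimate.

    The difference between one full step and two half steps estimates the
    local truncation error of the full step (step doubling).
    """
    y_big = euler_step(f, t, y, dt)
    y_half = euler_step(f, t, y, dt / 2)
    y_small = euler_step(f, t + dt / 2, y_half, dt / 2)
    err = abs(y_small - y_big)
    if err < tol:
        # Accept. Forward Euler's local error is O(dt^2), hence the sqrt
        # when rescaling dt toward the tolerance; growth is capped at 2x.
        dt_new = dt * min(2.0, safety * (tol / max(err, 1e-15)) ** 0.5)
        return t + dt, y_small, dt_new
    return t, y, dt / 2                  # reject and retry with a smaller dt

# Exponential decay y' = -y over t in [0, 2]: dt adapts automatically,
# clipped so the final step lands on the end time.
t, y, dt = 0.0, 1.0, 0.5
while t < 2.0:
    t, y, dt = adaptive_step(lambda t, y: -y, t, y, min(dt, 2.0 - t))
```

The same accept/reject logic carries over to the CHT setting, where the error estimate controls the much larger time steps permitted in the solid domain.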


2012 ◽  
Vol 9 (4) ◽  
pp. 1493-1511 ◽  
Author(s):  
Huaibin Wang ◽  
Yuanquan Wang ◽  
Wenqi Ren

In this paper, novel second-order and fourth-order diffusion models are proposed for image denoising. Both models are based on the gradient vector convolution (GVC) model. The second-order model is obtained by incorporating the GVC model into the anisotropic diffusion model, and the fourth-order one by introducing the GVC into the You-Kaveh fourth-order model. Since the GVC model can be implemented in real time using the FFT and possesses high robustness to noise, both proposed models have many advantages over traditional ones, such as low computational cost, high numerical stability and a remarkable denoising effect. Moreover, the proposed fourth-order model is an anisotropic filter, so it can markedly improve edge and texture preservation while further improving denoising. Experiments are presented to demonstrate the effectiveness of the proposed models.
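The FFT-based convolution of a gradient vector field, which underlies the GVC model's real-time implementation, can be sketched as follows. The Gaussian kernel is an assumed choice for illustration; the GVC model's actual kernel may differ.

```python
import numpy as np

def gvc_sketch(image, sigma=2.0):
    """Smooth an image's gradient vector field by FFT-based convolution.

    Illustrative sketch: each gradient component is convolved with a
    Gaussian kernel (an assumption) via the convolution theorem, giving
    O(N log N) cost per component.
    """
    gy, gx = np.gradient(image.astype(float))
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a Gaussian of width sigma (in pixels)
    kernel_hat = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))

    def smooth(g):
        return np.real(np.fft.ifft2(np.fft.fft2(g) * kernel_hat))

    return smooth(gx), smooth(gy)

# White noise stands in for a noisy image; the convolved field is far
# smoother than the raw gradient, which is what stabilizes the diffusion.
noisy = np.random.default_rng(1).standard_normal((64, 64))
vx, vy = gvc_sketch(noisy)
```

The smoothed vector field `(vx, vy)` would then drive the diffusion direction in the second- and fourth-order models.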


2020 ◽  
Author(s):  
Shine Win Naung ◽  
Mohammad Rahmati ◽  
Hamed Farokhi

Abstract The high-fidelity computational fluid dynamics (CFD) simulation of a complete wind turbine model usually requires significant computational resources. Far more resources are required if the fluid-structure interactions between the blades and the flow are considered, which has been a major challenge in the industry. The aeromechanical analysis of a complete wind turbine model using a high-fidelity CFD method is discussed in this paper. The distinctiveness of this paper is the application of the nonlinear frequency-domain solution method to analyse the forced response and flutter instability of the blade, as well as to investigate the unsteady flow field across the wind turbine rotor and the tower. This method also enables aeromechanical simulations of wind turbines for various inter-blade phase angles in combination with a phase-shift solution method. Extensive validations of the nonlinear frequency-domain solution method against the conventional time-domain solution method reveal that the proposed frequency-domain method can reduce the computational cost by one to two orders of magnitude.


2021 ◽  
Author(s):  
Alexander Robinson ◽  
Daniel Goldberg ◽  
William H. Lipscomb

Abstract. In the last decade, the number of ice-sheet models has increased substantially, in line with the growth of the glaciological community. These models use solvers based on different approximations of ice dynamics. In particular, several depth-integrated dynamics approximations have emerged as fast solvers capable of resolving the relevant physics of ice sheets at the continental scale. However, the numerical stability of these schemes has not been studied systematically to evaluate their effectiveness in practice. Here we focus on three such solvers, the so-called Hybrid, L1L2-SIA and DIVA solvers, as well as the well-known SIA and SSA solvers as boundary cases. We investigate the numerical stability of these solvers as a function of grid resolution and the state of the ice sheet. Under simplified conditions with constant viscosity, the maximum stable timestep of the Hybrid solver, like the SIA solver, has a quadratic dependence on grid resolution. In contrast, the DIVA solver has a maximum timestep that is independent of resolution, like the SSA solver. Analysis indicates that the L1L2-SIA solver should behave similarly, but in practice the complexity of its implementation can make it difficult to maintain stability. In realistic simulations of the Greenland ice sheet with a non-linear rheology, the DIVA and SSA solvers maintain superior numerical stability, while the SIA, Hybrid and L1L2-SIA solvers show markedly poorer performance. At a grid resolution of ∆x = 4 km, the DIVA solver runs approximately 15 times faster than the Hybrid and L1L2-SIA solvers. Our analysis shows that as resolution increases, the ice-dynamics solver can act as a bottleneck to model performance. The DIVA solver emerges as a clear outlier in terms of both model performance and its representation of the ice-flow physics itself.
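The resolution dependence described above can be sketched numerically. The function names and the CFL-style constant below are illustrative assumptions, not the solvers' actual implementations; they only encode the reported scaling laws.

```python
def dt_max_sia(dx, diffusivity, cfl=0.25):
    """Diffusive stability limit: quadratic in grid spacing dx.

    SIA-type solvers (SIA, Hybrid) behave like a nonlinear diffusion
    problem, so halving dx cuts the stable timestep by a factor of four.
    The constant 0.25 is an assumed CFL-style safety factor.
    """
    return cfl * dx**2 / diffusivity

def dt_max_diva(dt_limit):
    """Resolution-independent limit, per the reported DIVA/SSA behaviour.

    The maximum stable timestep is set by the physics and time scheme,
    not by the grid spacing.
    """
    return dt_limit

# Refining from dx = 2 to dx = 1 shrinks the SIA-type timestep 4x,
# while the DIVA-type limit is unchanged:
ratio = dt_max_sia(2.0, 1.0) / dt_max_sia(1.0, 1.0)
```

This quadratic-versus-constant scaling is why the depth-integrated DIVA solver pulls ahead of SIA-based solvers as grid resolution increases.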


Author(s):  
Sriram Shankaran ◽  
Brian Barr

The objective of this study is to develop and assess a gradient-based algorithm that efficiently traverses the Pareto front for multi-objective problems. We use high-fidelity, computationally intensive simulation tools (e.g., computational fluid dynamics (CFD) and finite element (FE) structural analysis) for function and gradient evaluations. The use of evolutionary algorithms with these high-fidelity simulation tools results in prohibitive computational costs. Hence, in this study we use an alternative gradient-based approach. We first outline an algorithm that can be proven to recover Pareto fronts. The performance of this algorithm is then tested on three academic problems: a convex front with uniform spacing of Pareto points, a convex front with non-uniform spacing, and a concave front. The algorithm is shown to retrieve the Pareto front in all three cases, thus overcoming a common deficiency of gradient-based methods that rely on scalarization. The algorithm is then applied to a practical problem in concurrent design for the aerodynamic and structural performance of an axial turbine blade. For this problem, with 5 design variables and 10 points to approximate the front, the computational cost of the gradient-based method was roughly the same as that of a method that builds the front from a sampling approach. However, as the sampling approach involves building a surrogate model to identify the Pareto front, there is the possibility that validating this predicted front with CFD and FE analysis yields a different location of the "Pareto" points. This can be avoided with the gradient-based method. Additionally, as the number of design variables increases and/or the number of required points on the Pareto front is reduced, the computational cost favors the gradient-based approach.
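The abstract does not specify the traversal algorithm itself. As one illustration of a gradient-based approach that, unlike plain weighted-sum scalarization, recovers points along an entire front, the sketch below sweeps an epsilon-constraint formulation over a toy bi-objective problem (this is a named stand-in technique, not the paper's method).

```python
import numpy as np

# Toy bi-objective problem in one design variable: the Pareto front is
# traced by x in [0, 1], trading f1 against f2.
f1 = lambda x: x**2
f2 = lambda x: (x - 1.0) ** 2
df2 = lambda x: 2.0 * (x - 1.0)

def pareto_point(eps, x=2.0, lr=0.1, iters=500):
    """min f2 subject to f1 <= eps, by projected gradient descent.

    The projection onto {x**2 <= eps} is a simple clip in 1D; in higher
    dimensions a constrained optimizer would replace it.
    """
    for _ in range(iters):
        x -= lr * df2(x)                              # gradient step on f2
        x = float(np.clip(x, -np.sqrt(eps), np.sqrt(eps)))  # enforce f1 <= eps
    return x

# Sweeping the constraint level eps walks along the Pareto front:
front = [(f1(x), f2(x))
         for x in (pareto_point(e) for e in np.linspace(0.01, 1.0, 10))]
```

Each gradient evaluation here stands in for one adjoint CFD/FE solve; the cost of tracing the front scales with the number of desired points rather than with the dimension of the design space.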


2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy requires a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories, namely (a) data fits, which construct an explicit mapping (e.g., using polynomials, Gaussian processes) from the system's parameters to the system response of interest, (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics), and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply a projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation, called rank-2 Galerkin, of the Galerkin ROM for linear time-invariant (LTI) dynamical systems which converts the nature of the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that the rank-2 Galerkin ROM is 970 times more efficient than the full-order model, while maintaining excellent accuracy in both the mean and statistics of the field.
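The rank-1 versus rank-2 distinction can be sketched for an LTI reduced system dx/dt = Ax. The sizes and the forward-Euler update below are illustrative assumptions; the point is only that stacking parameter samples as columns turns many small matrix-vector products into one large matrix-matrix product per step.

```python
import numpy as np

# Reduced operator (k x k) and an ensemble of initial conditions,
# one column per parameter sample (illustrative sizes).
rng = np.random.default_rng(2)
k, n_samples, dt, steps = 50, 200, 1e-3, 100
A = -np.eye(k) + 0.01 * rng.standard_normal((k, k))
X0 = rng.standard_normal((k, n_samples))

# Rank-1 formulation: loop over samples, one small matvec at a time.
# Each matvec streams A from memory, so the kernel is bandwidth bound.
X1 = X0.copy()
for _ in range(steps):
    for j in range(n_samples):
        X1[:, j] = X1[:, j] + dt * (A @ X1[:, j])

# Rank-2 formulation: the whole ensemble advances with a single GEMM
# per step, reusing A across all columns (compute-bound kernel).
X2 = X0.copy()
for _ in range(steps):
    X2 = X2 + dt * (A @ X2)
```

Both formulations compute the same trajectories; the rank-2 variant simply exposes the sample dimension to the BLAS level, which is where the reported speedup on many-core nodes comes from.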


Author(s):  
Li Wang ◽  
Boris Diskin ◽  
Leonard V. Lopes ◽  
Eric J. Nielsen ◽  
Elizabeth Lee-Rausch ◽  
...  

A high-fidelity multidisciplinary analysis and gradient-based optimization tool for rotorcraft aero-acoustics is presented. Tightly coupled discipline models include physics-based computational fluid dynamics, rotorcraft comprehensive analysis, and noise prediction and propagation. A discretely consistent adjoint methodology accounts for sensitivities of unsteady flows and unstructured, dynamically deforming, overset grids. The sensitivities of structural responses to blade aerodynamic loads are computed using a complex-variable approach. Sensitivities of acoustic metrics are computed by chain-rule differentiation. Interfaces are developed for interactions between the discipline models for rotorcraft aeroacoustic analysis and the integrated sensitivity analysis. The multidisciplinary sensitivity analysis is verified through a complex-variable approach. To verify functionality of the multidisciplinary analysis and optimization tool, an optimization problem for a 40% Mach-scaled HART-II rotor-and-fuselage configuration is crafted with the objective of reducing thickness noise subject to aerodynamic and geometric constraints. The optimized configuration achieves a noticeable noise reduction, satisfies all required constraints, and produces thinner blades as expected. Computational cost of the optimization cycle is assessed in a high-performance computing environment and found to be acceptable for design of rotorcraft in general level-flight conditions.
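The complex-variable approach mentioned above for structural sensitivities is, at its core, complex-step differentiation: the derivative of an analytic response is read off the imaginary part of an evaluation at a complex-perturbed input, with no subtractive cancellation. A toy sketch on a scalar function:

```python
import cmath
import math

def complex_step(f, x, h=1e-30):
    """First derivative of an analytic f at x via the complex step.

    f(x + ih) = f(x) + ih f'(x) + O(h^2), so Im(f(x + ih)) / h gives
    f'(x) to machine precision even for extremely small h, because no
    difference of nearly equal quantities is ever formed.
    """
    return f(complex(x, h)).imag / h

# Illustrative response function (any analytic function works):
f = lambda z: cmath.exp(z) * cmath.sin(z)
x = 0.7
exact = math.exp(x) * (math.sin(x) + math.cos(x))   # analytic derivative
approx = complex_step(f, x)
```

In the multidisciplinary setting, the same trick applied to the structural solver yields sensitivities of blade responses to aerodynamic loads that are then chained into the acoustic metrics.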

