Implicit method for the time marching analysis of flutter

2001 ◽  
Vol 105 (1046) ◽  
pp. 199-214 ◽  
Author(s):  
G. S. L. Goura ◽  
K. J. Badcock ◽  
M. A. Woodgate ◽  
B. E. Richards

Abstract This paper evaluates a time marching simulation method for flutter which is based on a solution of the Euler equations and a linear modal structural model. Jameson’s pseudo time method is used for the time stepping, allowing sequencing errors to be avoided without incurring additional computational cost. Transfinite interpolation of displacements is used for grid regeneration and a constant volume transformation for inter-grid interpolation. The flow pseudo steady state is calculated using an unfactored implicit method which features a Krylov subspace solution of an approximately linearised system. The spatial discretisation is made using Osher’s approximate Riemann solver with MUSCL interpolation. The method is evaluated against available results for the AGARD 445.6 wing. This wing, which is made of laminated mahogany, was tested at NASA Langley in the 1960s and has been the standard test case for simulation methods ever since. The structural model in the current work was built in NASTRAN using homogeneous plate elements. The comparisons show good agreement for the prediction of flutter boundaries. The solution method allows larger time steps to be taken than other methods.
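
As a rough illustration of the dual (pseudo) time-stepping idea described in this abstract, the sketch below advances a fluid state and a set of modal coordinates through one physical time step inside a single pseudo-time loop, so no sequencing error is introduced between fluid and structure. The residual callables, state layout and the explicit relaxation are placeholders; the paper itself converges the pseudo-steady state with an implicit, Krylov-based solver.

```python
# Hedged sketch of dual time stepping for a coupled fluid / modal-structure update.
# All names are illustrative; this is not the paper's implementation.
import numpy as np

def bdf2_residual(R_steady, w, w_n, w_nm1, dt):
    """Second-order backward-difference unsteady residual R*(w)."""
    return (3.0 * w - 4.0 * w_n + w_nm1) / (2.0 * dt) + R_steady(w)

def dual_time_step(R_fluid, R_struct, w, q, w_n, w_nm1, q_n, q_nm1, dt,
                   dtau=0.1, max_iters=200, tol=1e-8):
    """Advance fluid state w and modal coordinates q by one physical time step.

    Both fields are converged inside the same pseudo-time loop, so no
    sequencing (lag) error is introduced between fluid and structure.
    """
    for _ in range(max_iters):
        rw = bdf2_residual(R_fluid, w, w_n, w_nm1, dt)
        # The structural residual depends on the current aerodynamic loads,
        # hence the closure over the fluid state w.
        rq = bdf2_residual(lambda qq: R_struct(qq, w), q, q_n, q_nm1, dt)
        if max(np.abs(rw).max(), np.abs(rq).max()) < tol:
            break
        # Explicit pseudo-time relaxation for illustration only; an implicit
        # Krylov-based update would replace these two lines.
        w = w - dtau * rw
        q = q - dtau * rq
    return w, q
```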

Author(s):  
Tom Verstraete ◽  
Lasse Müller ◽  
Jens-Dominik Müller

The design optimization of turbomachinery components has attracted increasing attention over the last decade and is now part of the daily design cycle in many companies. The adjoint method has the highest potential in this field, but it still has two major shortcomings that prevent its full potential from being exploited: 1) the shape is usually parameterized by its grid, so the connection to the CAD model is lost, and 2) the optimization process considers only aerodynamic performance and neglects stress and vibration requirements. In this paper a methodology is developed to include stress calculations in a gradient-based framework, which requires the differentiation of a stress analysis tool. To allow the sensitivities from the structural model to be combined with those from the aerodynamic performance, the CAD model is used to parameterize the shape, effectively defining a parametrization that controls both the fluid and solid domains, which remain linked to each other without creating voids between the two models. The method is tested on a radial turbine case in which the meridional layout is optimized to reduce the maximum von Mises stress in the material. The results demonstrate a significant reduction in stress concentrations at a limited computational cost.
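
A minimal sketch of how fluid and structural sensitivities can be assembled through a shared CAD parametrization via the chain rule. The array names and the simple weighted sum are assumptions for illustration, not the paper's actual framework.

```python
# Hedged sketch: combining aerodynamic and stress sensitivities through a common
# set of CAD parameters alpha. The gradient arrays stand in for outputs of the
# differentiated CFD and FEM solvers described in the abstract.
import numpy as np

def combined_gradient(dJaero_dX, dsigma_dX, dX_dalpha, weight_stress=1.0):
    """Chain-rule assembly of a combined objective gradient w.r.t. CAD parameters.

    dJaero_dX : (n_surf,) sensitivity of the aero objective to surface node positions
    dsigma_dX : (n_surf,) sensitivity of the max von Mises stress to node positions
    dX_dalpha : (n_surf, n_cad) sensitivity of surface nodes to CAD parameters
    """
    return dX_dalpha.T @ (dJaero_dX + weight_stress * dsigma_dX)
```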


2019 ◽  
Vol 29 (01) ◽  
pp. 1950001 ◽  
Author(s):  
Ambra Abdullahi Hassan ◽  
Valeria Cardellini ◽  
Pasqua D’Ambra ◽  
Daniela di Serafino ◽  
Salvatore Filippone

Many scientific applications require the solution of large and sparse linear systems of equations using Krylov subspace methods; in this case, the choice of an effective preconditioner may be crucial for the convergence of the Krylov solver. Algebraic MultiGrid (AMG) methods are widely used as preconditioners because of their optimal computational cost and their algorithmic scalability. The wide availability of GPUs, now found in many of the fastest supercomputers, poses the problem of efficiently implementing these methods on such high-throughput processors. In this work we focus on the application phase of AMG preconditioners, and in particular on the choice and implementation of smoothers and coarsest-level solvers capable of exploiting the computational power of clusters of GPUs. We consider block-Jacobi smoothers using sparse approximate inverses in the solve phase associated with the local blocks. The choice of approximate inverses instead of sparse matrix factorizations is driven by the large amount of parallelism exposed by the matrix-vector product as compared to the solution of large triangular systems on GPUs. The selected smoothers and solvers are implemented within the AMG preconditioning framework provided by the MLD2P4 library, using suitable sparse matrix data structures from the PSBLAS library. Their behaviour is illustrated in terms of execution speed and scalability on a test case concerning groundwater modelling, provided by the Jülich Supercomputing Center within the Horizon 2020 Project EoCoE.
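
A small sketch of the kind of smoothing sweep described here, in which each local block is applied through a precomputed sparse approximate inverse (one sparse matrix-vector product) rather than a triangular solve. The data layout and names are illustrative, and the construction of the approximate inverses themselves is omitted.

```python
# Hedged sketch of one damped block-Jacobi sweep using sparse approximate inverses.
import numpy as np
import scipy.sparse as sp

def block_jacobi_ainv_sweep(A, b, x, block_ranges, approx_invs, omega=1.0):
    """One sweep: x <- x + omega * D_approx^{-1} (b - A x).

    block_ranges : list of (start, end) row ranges owned by each local block
    approx_invs  : list of sparse approximate inverses of the diagonal blocks
    """
    r = b - A @ x
    for (s, e), Minv in zip(block_ranges, approx_invs):
        # Applying the approximate inverse is an SpMV, which maps well to GPUs.
        x[s:e] += omega * (Minv @ r[s:e])
    return x
```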


Author(s):  
Benjamin D. Youngman ◽  
David B. Stephenson

We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters, while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements.
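
The core simulation idea can be sketched as follows: draw a spatially correlated Student-t process, convert it to uniform margins, then map to site-specific generalized Pareto margins. The covariance model, GPD parameters and threshold below are illustrative assumptions, not the fitted models of the paper.

```python
# Hedged sketch: t-process spatial dependence with generalized Pareto margins.
import numpy as np
from scipy import stats
from scipy.spatial.distance import cdist

def simulate_hazard_field(coords, nu=5.0, length_scale=200.0,
                          xi=0.1, sigma=5.0, threshold=20.0, rng=None):
    rng = np.random.default_rng(rng)
    # Exponential correlation as a stand-in for the fitted spatial model.
    C = np.exp(-cdist(coords, coords) / length_scale)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    z = L @ rng.standard_normal(len(coords))
    t = z / np.sqrt(rng.chisquare(nu) / nu)   # Student-t process realisation
    u = stats.t.cdf(t, df=nu)                 # uniform margins
    # Map to generalized Pareto exceedances above a common threshold.
    return threshold + stats.genpareto.ppf(u, c=xi, scale=sigma)
```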


2021 ◽  
Author(s):  
Samier Pierre ◽  
Raguenel Margaux ◽  
Darche Gilles

Abstract Solving the equations governing multiphase flow in geological formations involves the generation of a mesh that faithfully represents the structure of the porous medium. This challenging mesh generation task can be greatly simplified by the use of unstructured (tetrahedral) grids that conform to the complex geometric features present in the subsurface. However, running a million-cell simulation problem using an unstructured grid on a real, faulted field case remains a challenge for two main reasons. First, the workflow typically used to construct and run the simulation problems has been developed for structured grids and needs to be adapted to the unstructured case. Second, the use of unstructured grids that do not satisfy the K-orthogonality property may require advanced numerical schemes that preserve the accuracy of the results and reduce potential grid orientation effects. These two challenges are at the center of the present paper. We describe in detail the steps of our workflow to prepare and run a large-scale unstructured simulation of a real field case with faults. We perform the simulation using four different discretization schemes, including the cell-centered Two-Point and Multi-Point Flux Approximation (respectively, TPFA and MPFA) schemes, the cell- and vertex-centered Vertex Approximate Gradient (VAG) scheme, and the cell- and face-centered hybrid Mimetic Finite Difference (MFD) scheme. We compare the results in terms of accuracy, robustness, and computational cost to determine which scheme offers the best compromise for the test case considered here.
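
For concreteness, the simplest of the four schemes, the cell-centered two-point flux approximation (TPFA), can be sketched in a few lines; the geometry inputs and isotropic permeabilities below are illustrative, and the MPFA, VAG and MFD schemes require considerably more machinery.

```python
# Hedged sketch of a TPFA transmissibility and flux between two neighbouring cells.
import numpy as np

def tpfa_transmissibility(center_i, center_j, face_center, face_normal,
                          face_area, K_i, K_j):
    """Harmonic average of the two half-cell transmissibilities.

    face_normal is the unit normal of the shared face; K_i, K_j are scalar
    (isotropic) cell permeabilities.
    """
    def half_trans(center, K):
        d = face_center - center
        return face_area * (K * face_normal) @ d / (d @ d)
    t_i, t_j = half_trans(center_i, K_i), half_trans(center_j, K_j)
    return t_i * t_j / (t_i + t_j)

def tpfa_flux(p_i, p_j, T_ij):
    """Single-phase flux from cell i to cell j for a given transmissibility."""
    return T_ij * (p_i - p_j)
```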


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic uncertainty. The work presented here is focused on aleatory uncertainty, which can cause natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. Therefore, it is necessary to have a robust tool that can perform the uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods to demonstrate the strengths and weaknesses of each approach. The first method considered is Monte Carlo simulation, a sampling method that can give high accuracy but needs a relatively large computational effort. The second method is Polynomial Chaos, an approximation method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third method considered is the Mid-range Approximation Method. This approach is based on the assembly of multiple meta-models into one model to perform optimization under uncertainty. The fourth method applies the first two methods not to the model directly but to a response surface representing it, in order to decrease the computational cost. All these methods have been applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerged as the most promising methodology and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
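
To make the contrast between the first two approaches concrete, here is a toy sketch of sampling-based Monte Carlo and a least-squares Hermite polynomial-chaos expansion for a generic model with a single standard-normal input. The function names and the one-dimensional setting are assumptions for illustration only.

```python
# Hedged sketch: Monte Carlo vs. 1-D polynomial chaos for uncertainty propagation.
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def monte_carlo_stats(f, sampler, n=10_000, rng=None):
    """Plain Monte Carlo: accurate but needs many model evaluations."""
    rng = np.random.default_rng(rng)
    y = np.array([f(sampler(rng)) for _ in range(n)])
    return y.mean(), y.std()

def pce_stats_1d(f, order=4, n_fit=50, rng=None):
    """Fit a probabilists' Hermite expansion in a standard-normal input by
    least squares, then read off mean and standard deviation analytically."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_fit)
    y = np.array([f(xi) for xi in x])
    c = He.hermefit(x, y, order)
    # With x ~ N(0, 1): E[He_j He_k] = k! * delta_jk, so
    # mean = c_0 and variance = sum_{k>0} c_k^2 * k!.
    var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))
    return c[0], np.sqrt(var)
```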


2021 ◽  
Author(s):  
Anuj Dhoj Thapa

Gillespie's algorithm, also known as the Stochastic Simulation Algorithm (SSA), is an exact simulation method for the Chemical Master Equation model of well-stirred biochemical systems. However, this method is computationally intensive when some fast reactions are present in the system. The tau-leap scheme developed by Gillespie can speed up the stochastic simulation of these biochemically reacting systems with negligible loss in accuracy. A number of tau-leaping methods have been proposed, including the explicit tau-leaping and the implicit tau-leaping strategies. Nonetheless, these schemes have a low order of accuracy. In this thesis, we investigate tau-leap strategies which achieve high accuracy at reduced computational cost. These strategies are tested on several biochemical systems of practical interest.
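
The two baseline algorithms discussed here, Gillespie's exact SSA step and an explicit tau-leap step, can be sketched compactly for a generic network with user-supplied propensity functions and stoichiometry. This is illustrative only; the higher-accuracy leaping strategies investigated in the thesis are not shown.

```python
# Hedged sketch of one exact SSA step and one explicit tau-leap step.
import numpy as np

def ssa_step(x, propensities, stoich, rng):
    """One exact SSA step: sample the time to, and index of, the next reaction.

    propensities : list of callables a_j(x)
    stoich       : (n_reactions, n_species) array of state-change vectors
    """
    a = np.array([p(x) for p in propensities])
    a0 = a.sum()
    if a0 == 0.0:
        return x, np.inf
    tau = rng.exponential(1.0 / a0)
    j = rng.choice(len(a), p=a / a0)
    return x + stoich[j], tau

def explicit_tau_leap_step(x, propensities, stoich, tau, rng):
    """One explicit tau-leap step: fire a Poisson number of each reaction."""
    a = np.array([p(x) for p in propensities])
    k = rng.poisson(a * tau)
    return x + k @ stoich, tau
```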


2021 ◽  
Author(s):  
Pauline Mourlanette

Uncertainties related to permeability heterogeneity can be estimated using geostatistical simulation methods. Usually, these methods are applied on regular grids with cells of constant size, whereas unstructured grids are more flexible for honoring geological structures and offer local refinements for fluid-flow simulations. However, cells of different sizes require accounting for the support dependency of permeability statistics (support effect). This work presents a novel workflow based on the power-averaging technique. The averaging exponent ω is estimated using a response surface calibrated from numerical upscaling experiments. Using spectral turning bands, permeability is simulated on points in each unstructured cell and later averaged with a local value of ω that depends on the cell size and shape, but also on the proportion of each facies inside the cell. The method is first illustrated on a synthetic case with a single facies. The simulation of a tracer experiment is used to compare this novel geostatistical simulation method with a conventional approach based on a fine-scale Cartesian grid. The results show the consistency of both the simulated permeability fields and the tracer breakthrough curves. The application to an industrial case with two facies is then presented and shows both consistent permeability fields and computational costs acceptable for industry. Indeed, the computational cost for several realizations is much lower than that of the conventional approach based on pressure-solver upscaling. The method works for the presented cases, but its theoretical robustness can still be improved. A discussion of the selection of pressure-solver upscaling parameters and of the limits of power averaging is given in the conclusion, together with research perspectives on the inclusion of multiple facies and non-stationary proportions, the management of anisotropy, and the extension to multiphase flow.
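
The core averaging step can be sketched in a few lines. The facies- and cell-dependent choice of the local exponent ω (via the calibrated response surface) is not shown, and the names are illustrative.

```python
# Hedged sketch of the power-averaging step applied to fine-scale point values
# of permeability simulated inside one unstructured cell.
import numpy as np

def power_average(k_points, omega):
    """Power average K = (mean(k^omega))^(1/omega).

    omega = 1 gives the arithmetic mean, omega = -1 the harmonic mean,
    and omega -> 0 recovers the geometric mean.
    """
    k_points = np.asarray(k_points, dtype=float)
    if abs(omega) < 1e-8:
        return np.exp(np.mean(np.log(k_points)))
    return np.mean(k_points ** omega) ** (1.0 / omega)
```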


Author(s):  
José Roberto F. Arruda ◽  
Carlson Antonio M. Verçosa

Abstract A new structural model updating method based on the dynamic force balance is presented. The method consists of rearranging the spectral equation so that measured modes and natural frequencies can be used to compute updated stiffness coefficients directly. The proposed method preserves both the structural connectivity and reciprocity, which translate into sparsity and symmetry of the stiffness matrix, respectively. Large changes in small-valued stiffness coefficients are avoided using parameter weighting in the solution of the rearranged spectral equation. It is shown that the proposed method produces results similar to those obtained using Alvar Kabe’s method, with the advantages of a simpler formulation and smaller computational cost. A simple example of an eight-degree-of-freedom mass-spring system, originally used by Kabe to present his method, is used here to evaluate the proposed method.
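
One way to read the rearranged spectral equation is sketched below, as an assumption-laden illustration rather than the paper's exact formulation: the stiffness matrix is written as a sum of known connectivity patterns scaled by unknown coefficients, so the force balance K·Φ = M·Φ·Ω² becomes linear in those coefficients and can be solved directly. The paper's parameter weighting of the solution is not shown.

```python
# Hedged sketch: direct least-squares recovery of stiffness coefficients from
# measured modes via the dynamic force balance. All names are illustrative.
import numpy as np

def update_stiffness_coeffs(patterns, M, Phi, omegas):
    """Solve for coefficients k_j in K = sum_j k_j * K_j from measured modal data.

    patterns : list of (n, n) connectivity pattern matrices K_j
               (fixing the patterns preserves sparsity and symmetry of K)
    M        : (n, n) mass matrix
    Phi      : (n, m) measured mode shapes
    omegas   : (m,) measured natural frequencies in rad/s
    """
    # K*phi_i = omega_i^2 * M*phi_i, stacked over all modes, is linear in k_j.
    b = (M @ Phi * omegas**2).flatten(order="F")
    A = np.column_stack([(Kj @ Phi).flatten(order="F") for Kj in patterns])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k
```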


Author(s):  
Alexander Liefke ◽  
Peter Jaksch ◽  
Sebastian Schmitz ◽  
Vincent Marciniak ◽  
Uwe Janoske ◽  
...  

Abstract This paper shows how to use discrete CFD and FEM adjoint surface sensitivities to derive objective-based tolerances for turbine blades, instead of relying on geometric tolerances. For this purpose a multidisciplinary adjoint evaluation tool chain is introduced to quantify the effect of real manufacturing imperfections on aerodynamic efficiency and probabilistic low-cycle fatigue lifetime. Before the adjoint method is applied, a numerical validation of the CFD and FEM adjoint gradients is performed using 102 heavy-duty turbine vane scans. The results show that the relative error of the adjoint CFD gradients is below 0.5%, while the relative errors of the FEM lifetime gradients are below 5%. The adjoint assessment tool chain further reduces the computational cost by around 85% for the investigated test case compared to non-linear methods. Through the application of the presented tool chain, the definition of specified objective-based tolerances becomes available as a design assessment tool and makes it possible to improve overall turbine efficiency and the accuracy of lifetime prediction.
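
The cost advantage quoted above rests on the standard discrete-adjoint identity, sketched here with generic placeholder Jacobians: a single adjoint solve yields the sensitivity of one objective to all parameters at once, instead of one perturbed analysis per parameter.

```python
# Hedged sketch of a discrete adjoint sensitivity evaluation for an objective
# J(u, alpha) constrained by the residual equation R(u, alpha) = 0.
import numpy as np

def adjoint_sensitivities(dRdu, dRdalpha, dJdu, dJdalpha):
    """Return the total derivative dJ/dalpha.

    dRdu     : (n, n) residual Jacobian w.r.t. the state
    dRdalpha : (n, p) residual Jacobian w.r.t. the parameters
    dJdu     : (n,)  objective gradient w.r.t. the state
    dJdalpha : (p,)  explicit objective gradient w.r.t. the parameters
    """
    lam = np.linalg.solve(dRdu.T, dJdu)     # single adjoint solve
    return dJdalpha - dRdalpha.T @ lam      # sensitivities for all p parameters
```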


2014 ◽  
Vol 4 (3) ◽  
pp. 267-282
Author(s):  
Akira Imakura

Abstract Subspace projection methods based on the Krylov subspace using powers of a matrix A have often been standard for solving large matrix computations in many areas of application. Recently, projection methods based on the extended Krylov subspace using powers of A and A⁻¹ have attracted attention, particularly for functions of a matrix times a vector and matrix equations. In this article, we propose an efficient algorithm for constructing an orthonormal basis for the extended Krylov subspace. Numerical experiments indicate that this algorithm has less computational cost and approximately the same accuracy as the traditional algorithm.
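
For reference, a naive construction of such a basis, alternating products with A and solves with A and orthogonalising by modified Gram-Schmidt, can be sketched as follows; the algorithm proposed in the article is more efficient than this baseline.

```python
# Hedged sketch: orthonormal basis of span{v, A v, A^{-1} v, ..., A^m v, A^{-m} v}.
import numpy as np

def extended_krylov_basis(A, v, m):
    """Return the basis as columns of a matrix (up to 2*m + 1 vectors)."""
    def add(w, V):
        for q in V.T:
            w = w - (q @ w) * q          # modified Gram-Schmidt against current basis
        nrm = np.linalg.norm(w)
        return V if nrm < 1e-12 else np.column_stack([V, w / nrm])

    V = (v / np.linalg.norm(v)).reshape(-1, 1)
    u_plus, u_minus = v.copy(), v.copy()
    for _ in range(m):
        u_plus = A @ u_plus                    # next positive power of A
        u_minus = np.linalg.solve(A, u_minus)  # next power of A^{-1} (factorised solve in practice)
        V = add(u_plus, V)
        V = add(u_minus, V)
    return V
```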

