Radial Turbine Thermo-Mechanical Stress Optimization by Multidisciplinary Discrete Adjoint Method

Author(s):  
Alberto Racca ◽  
Tom Verstraete ◽  
Lorenzo Casalino

This paper addresses the design optimization of turbomachinery components under thermo-mechanical constraints, with a focus on a radial turbine impeller for turbocharger applications. Turbine components typically operate at high temperatures and are exposed to significant thermal gradients, leading to thermal stresses. Meeting such structural requirements forces the optimization algorithm to couple fluid and structural solvers, which is computationally intensive. To reduce this cost, a novel multiphysics gradient-based approach is developed in this work, integrating a Conjugate Heat Transfer procedure by means of a partitioned coupling technique. The discrete adjoint framework allows for the efficient computation of the gradients of the thermo-mechanical constraint with respect to a large number of design variables. Including the contribution of the thermal strains in the sensitivities of the cost function extends the multidisciplinary scope of the optimization and the accuracy of its predictions, with the aim of reducing the empirical safety factors applied in the design process. Finally, a turbine impeller is analyzed at a demanding operating condition, and the gradient information is used to perturb the grid coordinates, reducing the stresses at the rotor back-plate and demonstrating the suitability of the presented method.
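
As a hedged sketch of the workflow described above (not the authors' implementation), the loop below shows how adjoint-computed sensitivities of a stress constraint can drive a design update: one adjoint evaluation returns the gradient with respect to every design variable, so the cost does not grow with the number of grid-coordinate perturbations. The surrogate stress function and all numerical values are illustrative assumptions.

```python
import numpy as np

def stress_and_gradient(x):
    """Toy surrogate for one coupled CHT + structural solve plus its discrete
    adjoint: returns a scalar peak stress and its gradient with respect to all
    design variables (here, grid-coordinate perturbations)."""
    stress = float(np.sum((x - 0.3) ** 2) + 1.0)
    grad = 2.0 * (x - 0.3)
    return stress, grad

def optimize(x0, step=0.1, iters=100, stress_limit=1.05):
    """Steepest-descent update on the stress constraint using adjoint gradients."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        stress, grad = stress_and_gradient(x)
        if stress <= stress_limit:      # constraint satisfied, stop perturbing
            break
        x = x - step * grad             # perturb the design (grid coordinates)
    return x, stress

x_opt, stress_opt = optimize(np.zeros(200))   # e.g. a few hundred design variables
```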

Author(s):  
James Farrow

ABSTRACT Objectives: The SA NT DataLink Next Generation Linkage Management System (NGLMS) stores linked data as a graph (in the computer-science sense) composed of nodes (records) and edges (record relationships or similarities). This permits efficient pre-clustering techniques based on transitive closure to form groups of records which relate to the same individual (or satisfy other selection criteria). Approach: Only information known (or at least highly likely) to be relevant is extracted from the graph as superclusters. This operation is computationally inexpensive when the underlying information is stored as a graph and may be performed on-the-fly for typical clusters. More computationally intensive analysis and/or further clustering may then be performed on this smaller subgraph. Canopy clustering and the use of blocking to reduce pairwise comparisons are expressions of the same type of approach. Results: Subclusters for manual review based on transitive closure are typically inexpensive enough to extract from the NGLMS that they are generated on demand during manual clerical review activities; there is no need to pre-calculate these clusters. Once extracted, further analysis is undertaken on these smaller data groupings for visualisation and presentation for review and quality analysis. More computationally expensive techniques can be used at this point to prepare data for visualisation or to provide hints to manual reviewers.
Extracting high-recall groups of data records for review, but presenting them to reviewers grouped further into high-precision groups as the result of a second pass, has reduced the time taken by clerical reviewers at SA NT DataLink to review a group by 30–40%. The reviewers are able to manipulate whole groups of related records at once rather than individual records. Conclusion: Pre-clustering reduces the computational cost associated with higher-order clustering and analysis algorithms. Algorithms which scale as n^2 (or worse) are typical in comparison scenarios. By breaking the problem into pieces, the computational cost can be reduced, typically in proportion to the number of pieces the problem can be broken into. This cost reduction can make techniques possible which would otherwise be computationally prohibitive.
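
To make the pre-clustering step concrete, here is a minimal, hedged sketch of transitive closure over a record-similarity graph using union-find; the data layout and names are illustrative and do not reflect the actual NGLMS API.

```python
from collections import defaultdict

def connected_components(edges):
    """Union-find over record-similarity edges: each connected component is a
    supercluster of records that transitively relate to the same individual."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b

    for a, b in edges:
        union(a, b)

    clusters = defaultdict(set)
    for node in parent:
        clusters[find(node)].add(node)
    return list(clusters.values())

# Records r1-r3 are linked pairwise and collapse into one supercluster;
# r4 and r5 form a second, independent cluster.
print(connected_components([("r1", "r2"), ("r2", "r3"), ("r4", "r5")]))
```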


2014 ◽  
Vol 2014 (1) ◽  
pp. 000783-000786 ◽  
Author(s):  
Farhang Yazdani

The silicon interposer is emerging as a vehicle for integrating dies with sub-50 µm bump pitch in 2.5D/3D configurations. The benefits of 2.5D/3D integration are well explained in the literature; however, cost and reliability are major concerns, especially as interposer size increases. Among the challenges, reliability issues such as warpage, cracks and thermal stresses must be managed; in addition, the cost of the multi-layer build-up flip-chip substrate and its impact on overall yield must be considered. Because of these challenges, the 2.5D/3D silicon interposer has developed a reputation as a costly process. To overcome the reliability challenges and cost associated with typical thin-interposer manufacturing and assembly, a rigid silicon interposer type structure is disclosed. In this study, an interposer with a thickness greater than 300 µm is referred to as a rigid interposer. The rigid silicon interposer is assembled directly on the PCB, which eliminates the intermediary substrate, thin-wafer handling, wafer bonding/debonding procedures and Through Silicon Via (TSV) reveal processes, thus substantially reducing the cost of 2.5D/3D integrated products while improving reliability. A 10 × 10 mm² rigid silicon interposer test vehicle with 310 µm thickness was designed and fabricated. The BGA side of the interposer, with a 1 mm ball pitch, was bumped with eutectic solder balls through a reflow process. The interposer was then assembled on a 50 × 50 mm² FR-4 PCB. We present the design and direct assembly of the rigid silicon interposer on the PCB, followed by temperature-cycle results using CSAM images at 250, 500, 750 and 1000 cycles. It is shown that all samples successfully passed the temperature-cycle stress test.


2020 ◽  
pp. 1-15
Author(s):  
Y. Zhang ◽  
X. Zhang ◽  
G. Chen

ABSTRACT The aerodynamic performance of a deployable and low-cost unmanned aerial vehicle (UAV) is investigated and improved in the present work. The configuration parameters, such as the airfoil and winglet, are determined via an optimisation process based on a discrete adjoint method. The optimisation target is an increased lift-to-drag ratio with a limited variation of the pitching moment. The flow separation that would lead to stall is delayed after optimisation. Up to 128 design variables are used by the optimisation solver to give sufficient flexibility to the geometric transformation. An enhancement of the lift-to-drag ratio of as much as 20% is gained at the cruise angle of attack; that is, a significant improvement in the lift-to-drag ratio is obtained for the preferred configuration, with an increased lift coefficient and a decreased drag coefficient, entailing improved aerodynamic performance.
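
As an illustrative sketch only (toy surrogate functions stand in for the CFD and adjoint solvers, and the bounds are assumed), the optimisation can be posed as maximising the lift-to-drag ratio over the shape design variables subject to a limited pitching-moment variation:

```python
import numpy as np
from scipy.optimize import minimize

x0 = np.zeros(128)                       # 128 shape design variables (baseline)
cm0 = 0.02                               # baseline pitching-moment coefficient (assumed)

def lift_to_drag(x):                     # toy surrogate for the CFD-evaluated L/D
    return 18.0 + 4.0 * np.tanh(np.sum(x) / 10.0) - 0.01 * float(np.dot(x, x))

def pitching_moment(x):                  # toy surrogate for the CFD-evaluated Cm
    return cm0 + 0.001 * float(np.sum(x))

result = minimize(
    lambda x: -lift_to_drag(x),          # maximise L/D by minimising its negative
    x0,
    method="SLSQP",
    constraints=[                        # keep Cm within an assumed band around cm0
        {"type": "ineq", "fun": lambda x: 0.005 - (pitching_moment(x) - cm0)},
        {"type": "ineq", "fun": lambda x: 0.005 + (pitching_moment(x) - cm0)},
    ],
)
print("optimised L/D:", -result.fun)
```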


2018 ◽  
Vol 140 (10) ◽  
Author(s):  
Sebastian Schuster ◽  
Dieter Brillert ◽  
Friedrich-Karl Benra

In this two-part paper, the investigation of condensation in the impeller of radial turbines is discussed. In Paper I, a solution strategy for the investigation of condensation in radial turbines using computational fluid dynamics (CFD) methods is presented. In Paper II, the investigation methodology is applied to a radial turbine type series that is used for waste heat recovery. First, the basic CFD approach for the calculation of the gas-droplet-liquid-film flow is introduced. Thereafter, the equations connecting the subparts are explained and a validation of the models is performed. Finally, in Paper I, condensation phenomena for a selected radial turbine impeller are discussed on a qualitative basis. Paper II continues with a detailed quantitative analysis. The aim of Paper I is to explain the models that are necessary to study condensation in radial turbines and to validate the implementation against available experiments conducted on isolated effects. This study aims to develop a procedure that is applicable to the investigation of condensation in radial turbines. Furthermore, the main processes occurring in a radial turbine once the steam temperature falls below the saturation temperature are explained and analyzed.


Author(s):  
Chulho Yang ◽  
Douglas E. Adams

To improve noise, vibration, and harshness (NVH) performance in a mechanical system, engineers make changes to the mass, damping, or stiffness properties of components in the system. A system response prediction method using sensitivity functions is suggested to reduce the cost of the design modification process. Embedded sensitivity functions derived solely from empirical data have been applied to identify optimal design modifications for reducing vibration resonance problems. In this paper, those sensitivity functions are used to predict the changes in the vibration behavior of a system with respect to design parameter modifications. The cost and time of building many prototypes and testing actual parts can be reduced by identifying the best parameters to change and determining the amount of modification in those design variables by predicting the system response before actual components are built. The method is applied to a single-degree-of-freedom analytical model to study the accuracy of the predictions. Finite element analyses are then conducted on a three-story structure with modifications to the stiffness and mass distributions to demonstrate the feasibility of these predictions for more complicated structural systems.
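
As a minimal sketch of the prediction idea (an analytical single-degree-of-freedom model and a finite-difference sensitivity stand in for the empirically derived embedded sensitivity functions), the response after a stiffness modification is estimated to first order as H_new ≈ H + (∂H/∂k)·Δk:

```python
import numpy as np

def frf(omega, m, c, k):
    """Receptance FRF of a single-degree-of-freedom system."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

omega = np.linspace(1.0, 200.0, 2000)        # frequency axis [rad/s]
m, c, k = 1.0, 2.0, 1.0e4                    # baseline mass, damping, stiffness (assumed)

dk = 1.0                                     # small perturbation for the sensitivity
dH_dk = (frf(omega, m, c, k + dk) - frf(omega, m, c, k)) / dk

delta_k = 500.0                              # proposed design modification
H_predicted = frf(omega, m, c, k) + dH_dk * delta_k   # first-order prediction
H_reanalysed = frf(omega, m, c, k + delta_k)          # direct re-analysis for comparison
```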


2012 ◽  
Vol 19 (2) ◽  
pp. 177-184 ◽  
Author(s):  
V. Shutyaev ◽  
I. Gejadze ◽  
G. J. M. Copeland ◽  
F.-X. Le Dimet

Abstract. The problem of variational data assimilation (DA) for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition, boundary conditions and/or model parameters. The input data contain observation and background errors, hence there is an error in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can be approximated by the inverse Hessian of the cost function. For problems with strongly nonlinear dynamics, a new statistical method based on the computation of a sample of inverse Hessians is suggested. This method relies on the efficient computation of the inverse Hessian by means of iterative methods (Lanczos and quasi-Newton BFGS) with preconditioning. Numerical examples are presented for the model governed by the Burgers equation with a nonlinear viscous term.
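
A hedged toy illustration of the mildly nonlinear case (a small algebraic observation operator stands in for the Burgers-equation model, and all error statistics are assumed): the inverse Hessian accumulated by a quasi-Newton BFGS minimisation of the cost function serves as the estimate of the optimal-solution error covariance.

```python
import numpy as np
from scipy.optimize import minimize

B_inv = np.eye(3) / 0.5**2              # background-error precision (assumed)
R_inv = 1.0 / 0.1**2                    # observation-error precision (assumed)
x_b = np.zeros(3)                       # background (prior) state
y_obs = np.array([0.9, 1.1, 1.0])       # observations

def model(x):
    """Toy weakly nonlinear observation operator replacing the Burgers model."""
    return x + 0.05 * x**3

def cost(x):
    """Variational data-assimilation cost: background term plus observation term."""
    d_b = x - x_b
    d_o = model(x) - y_obs
    return 0.5 * d_b @ B_inv @ d_b + 0.5 * R_inv * float(d_o @ d_o)

res = minimize(cost, x_b, method="BFGS")
covariance_estimate = res.hess_inv      # inverse Hessian ~ optimal-solution error covariance
print(res.x, covariance_estimate.diagonal())
```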


2016 ◽  
Vol 846 ◽  
pp. 294-299
Author(s):  
Grant P. Steven ◽  
Jacob Celermajer

Long before FEA was developed, people were participating in sports, and as competition intensified it became clear that for many sports the equipment plays as important a part in performance as does the athlete. With the use of modern materials and manufacturing processes there is always scope for maximizing the performance of sporting equipment. Traditionally, improvements were incremental, as athletes fed back suggestions to manufacturers and new prototypes were built and tested. Given the cost of tooling for many current manufacturing methods (carbon fibre with resin infusion, to mention one), such build-and-test iterations are less attractive given the potential for limited success and the high cost. Modern simulation techniques are capable of examining a "day-in-the-life" of an object, and from an examination of the envelope of response the most sensitive regions can be detected. Iteration on the design variables, provided they remain within any constraints, be they physical or otherwise, can be incorporated to investigate their effect on performance. In this paper, non-linear transient dynamic (NLTD) FEA is undertaken on a 3-iron golf club impacting a golf ball. During the impact, which lasts less than 0.5 milliseconds, the whole outcome of the shot is established. Design changes that can lead to improved performance are studied. From the FEA simulation, information on ball top spin, side spin and take-off velocity is investigated.


2014 ◽  
Vol 40 (1) ◽  
pp. 2-32 ◽  
Author(s):  
John D. Finnerty

Purpose – More than 80 percent of S&P 500 firms that issue ESOs use the Black-Scholes-Merton (BSM) model and substitute the estimated average term for the contractual expiration to calculate ESO expense. This simplification systematically overprices ESOs, and the overpricing worsens as the stock's volatility increases. The purpose of this paper is to present a modification of the BSM model that explicitly incorporates the rates of forfeiture pre- and post-vesting and the rate of early exercise. Design/methodology/approach – The paper demonstrates the model's usefulness by employing historical exercise and forfeiture data for 127 separate ESO grants and 1.31 billion ESOs to calculate the exercise and forfeiture parameters and value ESOs for nine firms. Findings – The modified BSM model is just as accurate as, but easier to use than, the more computationally intensive utility-maximization and trinomial-lattice models, and it avoids the ASC 718 BSM model's overpricing bias. Originality/value – If firms prefer the BSM model over more mathematically elegant alternatives, they should at least use a BSM model that is free of overpricing bias.
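
To show the general idea only (this is not Finnerty's model; the exponential survival adjustment and all parameter values are assumptions for illustration), an ESO value can be formed by pricing a BSM call over the expected term and discounting it for the probability that the option is forfeited before exercise:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bsm_call(S, K, T, r, q, sigma):
    """Standard Black-Scholes-Merton call price with continuous dividend yield q."""
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def eso_value(S, K, T_expected, r, q, sigma, forfeit_rate):
    """Hypothetical adjustment: BSM value over the expected term, scaled by an
    exponential survival probability for pre-/post-vesting forfeiture."""
    survival = exp(-forfeit_rate * T_expected)
    return survival * bsm_call(S, K, T_expected, r, q, sigma)

print(eso_value(S=40.0, K=40.0, T_expected=5.0, r=0.03, q=0.01,
                sigma=0.35, forfeit_rate=0.07))
```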


Author(s):  
Hyun-Moo Koh ◽  
Kwan-Soon Park ◽  
Junho Song

A procedure for evaluating the cost-effectiveness of seismically isolated pool structures is presented. The ground motion is modeled as a spectral density function matching the response spectrum, which is specified in codes in terms of acceleration and site coefficients. The interaction between the flexible walls and the contained fluid is considered in the form of an added mass matrix. The wall thickness and the isolator stiffness are used as the main design variables while the minimum cost for comparison is estimated. The transfer function vector of the system is derived, and a spectral analysis method based on random vibration theory is used to calculate the probability of failure. Evaluation results for the examples show that the cost-effectiveness of seismically isolated pool structures is relatively high in regions of low to moderate seismicity.
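
A simplified sketch of the random-vibration step (a single-degree-of-freedom isolated system and a flat ground-acceleration spectrum stand in for the code-spectrum-compatible input; every numerical value is an assumption): the response spectral density is formed through the transfer function, integrated for the response variance, and a first-passage failure probability is estimated from the up-crossing rate.

```python
import numpy as np

m, c, k = 2.0e5, 2.0e5, 8.0e6              # effective mass, damping, isolator stiffness (assumed)
omega = np.linspace(0.1, 100.0, 5000)      # frequency axis [rad/s]
S0 = 0.02                                  # white-noise ground-acceleration PSD (assumed)

H = 1.0 / (k - m * omega**2 + 1j * c * omega)    # base-excited displacement transfer function
S_x = np.abs(H) ** 2 * m**2 * S0                 # displacement response PSD

var_x = np.trapz(S_x, omega)                     # displacement variance
var_v = np.trapz(omega**2 * S_x, omega)          # velocity variance

threshold = 0.10                                 # allowable isolator displacement [m] (assumed)
duration = 20.0                                  # strong-motion duration [s] (assumed)

nu = (1.0 / (2.0 * np.pi)) * np.sqrt(var_v / var_x) \
     * np.exp(-threshold**2 / (2.0 * var_x))     # mean up-crossing rate (Rice formula)
P_f = 1.0 - np.exp(-nu * duration)               # Poisson first-passage failure probability
print(P_f)
```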

