Design Centering of Compact Microwave Components Using Response Features and Trust Regions

Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8550
Author(s):  
Anna Pietrenko-Dabrowska ◽  
Slawomir Koziel

Fabrication tolerances, as well as uncertainties of other kinds, e.g., concerning material parameters or operating conditions, are detrimental to the performance of microwave circuits. Mitigating their impact requires accounting for possible parameter deviations already at the design stage. This involves optimization of appropriately defined statistical figures of merit, such as yield. Although important, robust (or tolerance-aware) design is an intricate endeavor because manufacturing inaccuracies are normally described using probability distributions, and their quantification has to be based on statistical analysis. The major bottleneck here is the high computational cost: for reliability reasons, miniaturized microwave components are evaluated using full-wave electromagnetic (EM) models, whereas conventionally utilized analysis methods (e.g., Monte Carlo simulation) require massive numbers of circuit evaluations. Surrogate modeling techniques, which have been a dominant trend in recent years, offer a practical way of circumventing the aforementioned obstacles. Nevertheless, the construction of accurate metamodels may require considerable computational investment, especially for higher-dimensional cases. This paper introduces a novel design-centering approach, which combines forward surrogates constructed at the level of response features with a trust-region framework for direct optimization of the system yield. Formulating the problem with the use of characteristic points of the system response alleviates the issues related to response nonlinearities. At the same time, as the surrogate is a linear regression model, rapid yield estimation is possible through numerical integration of the input probability distributions. As a result, the expenditures related to design centering amount to merely a few dozen EM analyses. The introduced technique is demonstrated using three microstrip couplers.
It is compared to recently reported techniques, and its reliability is corroborated using EM-based Monte Carlo analysis.
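The rapid yield estimation described above can be illustrated with a minimal sketch (not the authors' code): a hypothetical linear surrogate of a single response feature is sampled under Gaussian fabrication deviations, so each Monte Carlo sample costs one dot product rather than one EM simulation. All numbers (nominal design, regression slopes, specification window) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear surrogate of a response feature (e.g., a center
# frequency): f(x) ~ f0 + g.(x - x0), fitted from a handful of EM runs.
x0 = np.array([1.0, 0.5, 2.0])     # nominal design parameters (mm), invented
f0 = 3.5                           # feature value at x0 (GHz), invented
g = np.array([0.8, -0.3, 0.1])     # fitted regression slopes (GHz/mm), invented

def yield_estimate(x_center, sigma=0.02, spec=(3.45, 3.55), n=100_000):
    """Monte Carlo yield: fraction of fabricated deviates meeting the spec.
    Cheap because each sample is a dot product, not an EM simulation."""
    dx = rng.normal(0.0, sigma, size=(n, x_center.size))
    f = f0 + (x_center + dx - x0) @ g
    return np.mean((f >= spec[0]) & (f <= spec[1]))

y = yield_estimate(x0)
```

Design centering would then move `x_center` to maximize this estimate; with a linear surrogate, re-evaluating the yield for a new center requires no additional EM simulations.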

2019 ◽  
Vol 37 (4) ◽  
pp. 1179-1193
Author(s):  
Anna Pietrenko-Dabrowska ◽  
Slawomir Koziel

Purpose The purpose of this study is to propose a framework for expedited antenna optimization with numerical derivatives involving gradient variation monitoring throughout the optimization run, and to demonstrate it using a benchmark set of real-world wideband antennas. A comprehensive analysis of the algorithm performance involving multiple starting points is provided. The optimization results are compared with a conventional trust-region (TR) procedure, as well as state-of-the-art accelerated TR algorithms. Design/methodology/approach The proposed algorithm is a modification of the TR gradient-based algorithm with numerical derivatives, in which the changes of the system response gradients are monitored throughout the algorithm run. The gradient variations between consecutive iterations are quantified by an appropriately developed metric. Upon detecting stable patterns for particular parameter sensitivities, the costly finite-differentiation (FD)-based gradient updates are suppressed; hence, the overall number of full-wave electromagnetic (EM) simulations is significantly reduced. This leads to considerable computational savings without compromising the design quality. Findings Monitoring the antenna response sensitivity variations during the optimization process makes it possible to detect the parameters for which the gradient information does not need to be updated at every iteration. When incorporated into TR gradient-search procedures, the approach permits a reduction of the computational cost of the optimization process. The proposed technique is dedicated to expediting direct optimization of antenna structures, but it can also be applied to speed up surrogate-assisted tasks, especially solving sub-problems that involve performing numerous evaluations of coarse-discretization models. Research limitations/implications The introduced methodology opens up new possibilities for future developments of accelerated antenna optimization procedures.
In particular, the presented routine can be combined with previously reported techniques that involve replacing FD with the Broyden formula for directions that are sufficiently well aligned with the most recent design relocation, and/or performing FD in a sparse manner based on the relative design relocation (with respect to the current search region) in consecutive algorithm iterations. Originality/value Benchmarking against a conventional TR procedure, as well as previously reported methods, confirms the improved efficiency and reliability of the proposed approach. The applications of the framework include direct EM-driven design closure, along with surrogate-based optimization within variable-fidelity surrogate-assisted procedures. To the best of the authors’ knowledge, no comparable approach to antenna optimization has been reported elsewhere. In particular, it goes beyond established methodology by continuously monitoring the antenna response gradients across successive algorithm iterations and using the gathered observations to guide the optimization routine.
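The suppression mechanism described in the Findings can be sketched as follows (an illustrative reconstruction, not the authors' code): the relative change of each Jacobian column between consecutive iterations serves as the variation metric, and columns whose change falls below a threshold are deemed stable, so their finite-difference refresh (one EM simulation each) is skipped.

```python
import numpy as np

def columns_to_refresh(J_hist, tol=0.05):
    """Decide which Jacobian columns need a fresh finite-difference update.

    J_hist: Jacobians from consecutive iterations (each m x n).
    A column whose relative change between the last two iterations is
    below tol shows a stable sensitivity pattern, so its costly FD-based
    update is suppressed; only the remaining indices are returned.
    The threshold value is an invented placeholder."""
    J_new, J_old = J_hist[-1], J_hist[-2]
    refresh = []
    for k in range(J_new.shape[1]):
        denom = np.linalg.norm(J_old[:, k]) + 1e-12
        change = np.linalg.norm(J_new[:, k] - J_old[:, k]) / denom
        if change > tol:
            refresh.append(k)
    return refresh
```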


2016 ◽  
Vol 33 (7) ◽  
pp. 2007-2018 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose The purpose of this study is the development of techniques for expedited design optimization of complex and numerically expensive electromagnetic (EM) simulation models of antenna structures, validated both numerically and experimentally. The paper aims to discuss these issues. Design/methodology/approach The optimization task is performed using a technique that combines gradient search with adjoint sensitivities, a trust-region framework, and EM simulation models of various levels of fidelity (coarse, medium and fine). An adaptive procedure for switching between the models of increasing accuracy in the course of the optimization process is implemented. Numerical and experimental case studies are provided to validate the correctness of the design approach. Findings An appropriate combination of a suitable design optimization algorithm embedded in a trust-region framework and model selection techniques allows for considerable reduction of the antenna optimization cost compared to conventional methods. Research limitations/implications The study demonstrates the feasibility of EM-simulation-driven design optimization of antennas at low computational cost. The presented techniques reach beyond the common design approaches based on direct optimization of EM models using conventional gradient-based or derivative-free methods, particularly in terms of reliability and reduction of the computational cost of the design process. Originality/value Simulation-driven design optimization of contemporary antenna structures is very challenging when high-fidelity EM simulations are utilized for performance evaluation of the structure at hand. The proposed variable-fidelity optimization technique with adjoint sensitivities and trust regions permits rapid optimization of numerically demanding antenna designs (here, a dielectric resonator antenna and a compact monopole), which cannot be achieved when conventional methods are used.
The design cost of the proposed strategy is up to 60 percent lower than direct optimization exploiting adjoint sensitivities. Experimental validation of the results is also provided.
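For context, the trust-region framework in which such gradient searches are embedded adjusts the search region from the gain ratio, i.e., actual improvement divided by the improvement predicted by the local model. A generic sketch (the 0.25/0.75 thresholds and radius factors are conventional textbook choices, not taken from the paper):

```python
def trust_region_update(rho, radius, r_min=1e-4, r_max=1.0):
    """Classic trust-region radius control based on the gain ratio
    rho = (actual improvement) / (model-predicted improvement)."""
    if rho < 0.25:            # model was poor: shrink the search region
        radius = max(radius / 3.0, r_min)
    elif rho > 0.75:          # model was good: allow a larger step
        radius = min(radius * 2.0, r_max)
    accept = rho > 0.0        # accept the step only if the objective improved
    return radius, accept
```

In a variable-fidelity setting, a persistently poor gain ratio can also trigger a switch to a more accurate (and more expensive) EM model.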


Author(s):  
Paolo Pennacchi ◽  
Andrea Vania ◽  
Steven Chatterton ◽  
Ezio Tanzi

Hydraulic stability is one of the key problems at the design stage of hydraulic turbines. Despite modern computational tools that help to define dangerous operating conditions and optimize runner design, hydraulic instabilities may still arise unexpectedly during the turbine's life, as a consequence of the variable operating conditions to which a hydraulic turbine can be subjected. In general, the presence of unsteady flow reveals itself in two different ways: at small flow rates, the swirling flow in the draft tube conical inlet occupies a large portion of the inlet and causes a strong helical vortex rope; at large flow rates, the unsteady flow starts midway and causes a breakdown-like vortex bubble, followed by weak helical waves. In any case, hydraulic instability causes mechanical effects on the runner, on the whole turbine and on the draft tube, which may eventually produce severe damage to the turbine unit and whose most evident symptoms are vibrations. Notwithstanding this, condition monitoring systems are seldom installed for this purpose in hydraulic power plants, and no examples are reported in the literature of the use of model-based methods to detect hydraulic instability onset. In this paper, taking advantage of a testing campaign performed during the commissioning of a 23 MW Kaplan hydraulic turbine unit, a rotordynamic model-based method is proposed. The turbine was equipped with proximity and vibration velocity probes, which allowed measuring lateral and axial vibrations of the shaft-line under many different operating conditions, including some off-design ones. The turbine mechanical model, realized by means of finite beam elements and considering lateral and axial degrees of freedom, is used to predict the turbine unit's response to the unsteady flow. The mechanical system response is then compared to the measured one, and the possibility of detecting instability onset, especially in real time, is discussed.


2019 ◽  
Vol 37 (3) ◽  
pp. 851-862 ◽  
Author(s):  
Slawomir Koziel ◽  
Anna Pietrenko-Dabrowska

Purpose A technique for accelerated design optimization of antenna input characteristics is developed and comprehensively validated using real-world wideband antenna structures. A comparative study against a conventional trust-region algorithm is provided. Investigations of the effects of the algorithm control parameters are also carried out. Design/methodology/approach An optimization methodology is introduced that replaces finite differentiation (FD) with a combination of FD and a selectively used Broyden updating formula for antenna response Jacobian estimation. The updating formula is used for directions that are sufficiently well aligned with the design relocation that occurred in the most recent algorithm iteration. This allows for a significant reduction of the number of full-wave electromagnetic simulations necessary for the algorithm to converge; hence, it reduces the overall design cost. Findings Incorporating the updating formula into the Jacobian estimation process in a selective manner considerably reduces the computational cost of the optimization process without compromising the design quality. The algorithm proposed in the study can be used to speed up direct optimization of antenna structures as well as surrogate-assisted procedures involving variable-fidelity models. Research limitations/implications This study sets a direction for further research on accelerated procedures for the local optimization of antenna structures. Further investigations of the effects of the control parameters on the algorithm performance are necessary, along with the development of means to automate the algorithm setup for a particular antenna structure, especially from the point of view of the search space dimensionality. Originality/value The proposed algorithm proved useful for reduced-cost optimization of antennas and has been demonstrated to outperform conventional algorithms.
To the authors’ knowledge, this is one of the first attempts to address the problem in this manner. In particular, it goes beyond traditional approaches by combining various sensitivity estimation update measures in an adaptive fashion.
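The selectively used Broyden formula can be sketched as follows (an illustrative reconstruction with an invented alignment threshold, not the authors' code): the rank-one secant update refines the Jacobian estimate along the most recent design relocation for free, while parameter directions poorly aligned with that relocation are flagged for a conventional FD update.

```python
import numpy as np

def broyden_or_fd(J, s, y, align_tol=0.3):
    """Selectively refine a Jacobian estimate J.

    s: most recent design relocation (x_new - x_old); y: response change.
    The rank-one Broyden formula updates J so that the secant condition
    J_new @ s == y holds. Parameter directions e_k whose alignment with s
    (|s_k| / ||s||) falls below align_tol are poorly informed by the step
    and are returned as still needing a finite-difference EM simulation.
    The threshold value is an invented placeholder."""
    s_hat = s / (np.linalg.norm(s) + 1e-12)
    J_new = J + np.outer(y - J @ s, s) / (s @ s)
    need_fd = [k for k in range(len(s)) if abs(s_hat[k]) < align_tol]
    return J_new, need_fd
```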


1997 ◽  
Vol 35 (2-3) ◽  
pp. 85-91
Author(s):  
D. A. Barton ◽  
J. D. Woodruff ◽  
T. M. Bousquet ◽  
A. M. Parrish

If promulgated as proposed, effluent guidelines for the U.S. pulp and paper industry will impose average monthly and maximum daily numerical limits on discharged AOX (adsorbable organic halogen). At this time, it is unclear whether the maximum-day variability factor used to establish the proposed effluent guidelines will provide sufficient margin for mills to achieve compliance during periods of normal but variable operating conditions within the pulping and bleaching processes. Consequently, additional information is needed to relate transient AOX loadings to final AOX discharges. This paper presents a simple dynamic model of AOX decay during treatment. The model consists of a hydraulic characterization of an activated sludge process and a first-order decay coefficient for AOX removal. Data for model development were acquired by frequent collection of influent and effluent samples at a bleached kraft mill during a bleach plant shutdown and startup sequence.
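A model of this kind can be sketched in a few lines (illustrative only; the tank count, residence time and decay rate are invented, not the mill's values): the hydraulic characterization is represented as stirred tanks (CSTRs) in series, each with first-order AOX decay.

```python
import numpy as np

def aox_effluent(c_in, t, tau=12.0, k=0.05, n_tanks=3):
    """Dynamic effluent AOX of an activated-sludge basin modelled as
    n_tanks equal CSTRs in series with first-order decay (rate k, 1/h).
    c_in: influent AOX concentration series (mg/L) sampled at times t (h).
    Explicit Euler integration; parameter values are invented."""
    dt = t[1] - t[0]
    tau_i = tau / n_tanks                 # residence time per tank (h)
    c = np.zeros(n_tanks)                 # tank concentrations, initially clean
    out = np.empty_like(t, dtype=float)
    for i, cin in enumerate(c_in):
        upstream = cin
        for j in range(n_tanks):
            # mass balance: inflow/outflow exchange plus first-order decay
            c[j] += dt * ((upstream - c[j]) / tau_i - k * c[j])
            upstream = c[j]
        out[i] = c[-1]
    return out
```

At steady state each tank attenuates its inflow by a factor 1/(1 + k·tau_i), so a constant 100 mg/L influent settles at 100/1.2³ ≈ 57.9 mg/L with these invented parameters.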


Processes ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 93
Author(s):  
Alessandro Di Pretoro ◽  
Francesco D’Iglio ◽  
Flavio Manenti

Fouling is a substantial economic, energy, and safety issue for all process industry applications, heat transfer units in particular. Although this phenomenon can be mitigated, it cannot be avoided, and proper cleaning cycle scheduling is the best way to deal with it. After a thorough literature review of the most reliable fouling model descriptions, cleaning procedures were optimized by minimizing the Time Average Losses (TAL) under nominal operating conditions according to the well-established procedure. For this purpose, different cleaning actions, namely chemical and mechanical, were accounted for. However, this procedure is strictly tied to nominal operating conditions; therefore, perturbations, when present, could considerably compromise process profitability due to unexpected shutdowns or extraordinary maintenance operations. After a preliminary sensitivity analysis, the uncertain variables and the corresponding disturbance likelihood were estimated. Hence, cleaning cycles were rescheduled on the basis of a stochastic flexibility index for different probability distributions to show how the uncertainty characterization affects the optimal time and economic losses. A decision algorithm was finally conceived in order to assess the best number of chemical cleaning cycles to include in a cleaning supercycle. In conclusion, this study highlights how optimal scheduling is affected by external perturbations and provides an important tool for the decision-maker to make a more conscious design choice based on a robust multi-criteria optimization.
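The nominal-condition step of such a procedure can be sketched as follows (all parameter values are invented for illustration, and a single cleaning action is assumed): with an asymptotic fouling law R(t) = R∞(1 − e^(−t/tc)), the TAL of a cycle of length t has a closed form, and the optimal cleaning interval minimizes it.

```python
import numpy as np

def time_average_losses(t_cycle, r_inf=0.8, t_c=30.0, loss_rate=2.0,
                        clean_cost=50.0, clean_time=8.0):
    """Time Average Losses for one cleaning cycle of length t_cycle (days).
    Fouling resistance follows R(t) = r_inf * (1 - exp(-t / t_c)); the
    extra energy cost is proportional to its time integral, evaluated in
    closed form. All parameter values are invented placeholders."""
    fouling_loss = loss_rate * r_inf * (
        t_cycle - t_c * (1.0 - np.exp(-t_cycle / t_c)))
    return (fouling_loss + clean_cost) / (t_cycle + clean_time)

# nominal optimal schedule via a simple grid search over the cycle length
grid = np.linspace(1.0, 200.0, 2000)
t_opt = grid[np.argmin([time_average_losses(t) for t in grid])]
```

Short cycles are dominated by the fixed cleaning cost, long cycles by the accumulated fouling loss, so the minimum lies at an interior cycle length.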


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 662
Author(s):  
Mateu Sbert ◽  
Jordi Poch ◽  
Shuning Chen ◽  
Víctor Elvira

In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered, that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show its relationship with increasing concave order and increasing convex order, and we observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.
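A small numeric check of the first-stochastic-order invariance result (illustrative, with invented weights and series): w2 places more probability mass on the larger values than w1, i.e., w2 dominates w1 in first stochastic order, so every quasi-arithmetic mean of the increasing series must move in the same direction.

```python
import numpy as np

def quasi_arithmetic_mean(x, w, f, f_inv):
    """Weighted quasi-arithmetic (Kolmogorov-Nagumo) mean:
    M_f(x; w) = f^{-1}( sum_i w_i f(x_i) ), weights normalized to 1."""
    w = np.asarray(w, float) / np.sum(w)
    return f_inv(np.dot(w, f(np.asarray(x, float))))

x = np.array([1.0, 2.0, 4.0, 8.0])       # monotonic (increasing) series
w1 = np.array([0.4, 0.3, 0.2, 0.1])      # cumulative sums of w2 never exceed
w2 = np.array([0.1, 0.2, 0.3, 0.4])      # those of w1: w2 >=_st w1

# arithmetic mean: f(t) = t ; harmonic mean: f(t) = 1/t (self-inverse)
arith = lambda w: quasi_arithmetic_mean(x, w, lambda t: t, lambda t: t)
harm = lambda w: quasi_arithmetic_mean(x, w, lambda t: 1.0 / t, lambda t: 1.0 / t)
```

Moving from w1 to w2, both the arithmetic and the harmonic mean increase together, as the theorem predicts.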


Vibration ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 49-63
Author(s):  
Waad Subber ◽  
Sayan Ghosh ◽  
Piyush Pandita ◽  
Yiming Zhang ◽  
Liping Wang

Industrial dynamical systems often exhibit multi-scale responses due to material heterogeneity and complex operating conditions. The smallest length-scale of the system's dynamics controls the numerical resolution required to resolve the embedded physics. In practice, however, high numerical resolution is only required in a confined region of the domain where fast dynamics or localized material variability is exhibited, whereas a coarser discretization can be sufficient in the rest of the domain. Partitioning the complex dynamical system into smaller, easier-to-solve problems based on the localized dynamics and material variability can reduce the overall computational cost. The region of interest can be specified based on the localized features of the solution, user interest, and the correlation length of the material properties. For problems where a region of interest is not evident, Bayesian inference can provide a feasible solution. In this work, we employ a Bayesian framework to update the prior knowledge of the localized region of interest using measurements of the system response. Once the region of interest is identified, the localized uncertainty is propagated forward through the computational domain. We demonstrate our framework using numerical experiments on a three-dimensional elastodynamic problem.
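The Bayesian update of the region of interest can be caricatured on a discrete grid (a toy sketch, not the paper's elastodynamic setting; all response levels and the noise scale are invented): each candidate region predicts a different response level, and a noisy measurement of the system response reweights a uniform prior over the candidates.

```python
import numpy as np

# Hypothetical setting: the refined region is one of three candidate
# subdomains, each predicting a different response level if it is the
# one containing the fast dynamics.
candidates = np.array([0.2, 0.5, 0.9])   # predicted responses, invented
prior = np.full(3, 1.0 / 3.0)            # uniform prior belief

def posterior(measurement, sigma=0.1, prior=prior, pred=candidates):
    """Bayes update: p(k | y) proportional to p(k) * N(y; pred_k, sigma^2)."""
    like = np.exp(-0.5 * ((measurement - pred) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

p = posterior(0.85)                       # one noisy measurement
region = int(np.argmax(p))                # most probable region of interest
```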


Author(s):  
Nishesh Jain ◽  
Esfand Burman ◽  
Dejan Mumovic ◽  
Mike Davies

To manage concerns regarding the energy performance gap in buildings, a structured and longitudinal performance assessment of buildings, covering design through to operation, is necessary. Modelling can form an integral part of this process by ensuring that good-practice design-stage modelling is followed by an ongoing evaluation of operational-stage performance using a robust calibration protocol. In this paper, we demonstrate, via a case study of an office building, how a good-practice design-stage model can be fine-tuned for the operational stage using a new framework that helps validate the causes of deviations of actual performance from design intents. This paper maps the modelling-based process of tracking building performance from design to operation, identifying the various types of performance gaps. Further, during the operational stage, the framework provides a systematic way to separate the effect of (i) operating conditions that are driven by the building’s actual function and occupancy as compared with the design assumptions, and (ii) potential technical issues that cause underperformance. As the identification of issues is based on energy modelling, the process requires the use of advanced and well-documented simulation tools. The paper concludes by providing an outline of the software platform requirements needed to generate robust design models and their calibration for operational performance assessments. Practical application The paper’s findings are a useful guide for building industry professionals to manage the performance gap with appropriate accuracy through a robust methodology in an easy-to-use workflow. The methodological framework to analyse building energy performance in use links best-practice design-stage modelling guidance with a robust operational-stage investigation.
It helps designers, contractors, building managers and other stakeholders with an understanding of procedures to follow to undertake an effective measurement and verification exercise.


2017 ◽  
Vol 139 (4) ◽  
Author(s):  
Samuel F. Asokanthan ◽  
Soroush Arghavan ◽  
Mohamed Bognash

The effect of stochastic fluctuations in angular velocity on the stability of two degrees-of-freedom ring-type microelectromechanical systems (MEMS) gyroscopes is investigated. The governing stochastic differential equations (SDEs) are discretized using the higher-order Milstein scheme in order to numerically predict the system response, assuming the fluctuations to be white noise. Simulations via the Euler scheme, as well as a measure of the largest Lyapunov exponents (LLEs), are employed for validation purposes due to the lack of comparable analytical or experimental data. The response of the gyroscope under different noise fluctuation magnitudes has been computed to ascertain the stability behavior of the system. External noise affecting the gyroscope's dynamic behavior typically results from environmental factors and the nature of the system operation, and can be exerted on the system over any frequency range depending on the source. Hence, a parametric study is performed to assess the noise-intensity stability threshold for a number of damping ratio values. The stability investigation predicts how the threshold fluctuation intensity depends on the damping ratio. Under typical gyroscope operating conditions, the nominal input angular velocity magnitude and mass mismatch appear to have minimal influence on system stability.
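The Milstein discretization used here can be sketched for a scalar SDE dX = a(X)dt + b(X)dW (a generic illustration, not the gyroscope equations): the extra 0.5·b·b′·(ΔW² − Δt) correction term is what lifts the strong convergence order from 0.5 (Euler–Maruyama) to 1.

```python
import numpy as np

def milstein(a, b, db, x0, t_end, n, rng):
    """Milstein discretization of the scalar SDE dX = a(X)dt + b(X)dW.
    db is the derivative b'(x); the 0.5*b*b'*(dW^2 - dt) term is the
    Milstein correction absent from Euler-Maruyama."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + a(x) * dt + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - dt)
    return x

# demonstration on geometric Brownian motion (coefficients invented),
# a standard test case because its exact solution is known
mu, sigma = 0.05, 0.2
rng = np.random.default_rng(1)
xT = milstein(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma,
              x0=1.0, t_end=1.0, n=1000, rng=rng)
```

For the noise-driven gyroscope model, the same stepping rule is applied componentwise to the discretized two degrees-of-freedom equations.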

