Compensation and Weights for Trade-offs in Engineering Design: Beyond the Weighted Sum

2005 ◽  
Vol 127 (6) ◽  
pp. 1045-1055 ◽  
Author(s):  
Michael J. Scott ◽  
Erik K. Antonsson

Multicriteria decision support methods are common in engineering design. These methods typically rely on a summation of weighted attributes to accomplish trade-offs among competing objectives. It has long been known that a weighted sum, when used for multicriteria optimization, may fail to locate all points on a nonconvex Pareto frontier. More recent results from the optimization literature relate the curvature of an objective function to its ability to capture Pareto points, but do not consider the significance of the objective function parameters in choosing one Pareto point over another. A parametrized family of aggregations appropriate for engineering design is shown to model decisions capturing all possible trade-offs, and therefore can direct the solution to any Pareto optimum. This paper gives a mathematical and theoretical interpretation of the parameters of this family of aggregations as defining a degree of compensation among criteria as well as a measure of their relative importance. The inability to reach all Pareto optima is shown to be surmounted by this consideration of degree of compensation as an additional parameter of the decision. Additionally, the direct specification of importance weights is common to many decision methods. The choice of a single point from a Pareto frontier by specifying importance weights alone is shown to depend on the degree of compensation implicit in the aggregation. Thus both the degree of compensation and weights must be considered to capture all potentially acceptable decisions. A simple truss design example is used here to illustrate the concepts.
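To make the weighted-sum limitation concrete, the sketch below compares a weighted sum against a weighted power mean on a synthetic nonconvex frontier; the power-mean exponent plays the role of a crude degree-of-compensation parameter. The frontier, the function names, and the exponent value are illustrative assumptions and are not taken from the paper, whose specific family of aggregations is defined there.

```python
import numpy as np

# Synthetic nonconvex Pareto frontier for two attributes to be maximized
# (an illustrative stand-in, not the truss example from the paper).
t = np.linspace(0.0, 1.0, 2001)
mu1 = 0.1 + 0.9 * t
mu2 = 1.0 - 0.9 * np.sqrt(t)

def weighted_sum(m1, m2, w):
    return w * m1 + (1.0 - w) * m2

def weighted_power_mean(m1, m2, w, s):
    # s = 1 recovers the weighted sum; s -> -inf approaches min(m1, m2),
    # i.e. no compensation between the attributes.
    return (w * m1**s + (1.0 - w) * m2**s) ** (1.0 / s)

weights = np.linspace(0.01, 0.99, 99)
ws_picks = {int(np.argmax(weighted_sum(mu1, mu2, w))) for w in weights}
pm_picks = {int(np.argmax(weighted_power_mean(mu1, mu2, w, s=-2.0))) for w in weights}

# The weighted sum only ever selects the two endpoints of this frontier,
# while the compensation-controlled aggregation reaches interior Pareto points.
print("distinct points reached by weighted sum:   ", len(ws_picks))
print("distinct points reached by power mean s=-2:", len(pm_picks))
```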

Author(s):  
Michael J. Scott ◽  
Erik K. Antonsson

Abstract Multi-criteria decision support methods are common in engineering design. These methods typically rely on the specification of importance weights to accomplish trade-offs among competing objectives. Such methods can have difficulties, however: they may not be able to select all possible Pareto optima, and the direct specification of importance weights can be arbitrary and ad hoc. The inability to reach all Pareto optima is shown to be surmountable by the consideration of trade-off strategy as an additional parameter of a decision. The use of indifference points to select a best solution, as an alternative to direct specification of importance weights, is presented, and a simple truss design example is used to illustrate the concepts.


Author(s):  
Katsuhiro Honda ◽  
Hidetomo Ichihashi

Fuzzy c-means (FCM) is the fuzzy version of c-means clustering, in which memberships are fuzzified by introducing an additional parameter into the linear objective function of the weighted sum of distances between data points and cluster centers. Regularization of hard c-means clustering is another approach to fuzzification, in which regularization terms such as entropy and quadratic terms have been adopted. We generalize the fuzzification concept and propose a new approach to fuzzy clustering in which the linear weights of hard c-means clustering are replaced by nonlinear ones through regularization. Numerical experiments demonstrate that the proposed algorithm has the characteristic features of both the standard FCM algorithm and the regularization approaches. One of the proposed nonlinear weights makes it possible both to attract data to clusters and to repel different clusters from one another. This feature yields different types of fuzzy classification functions in both probabilistic and possibilistic models.
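The two fuzzification routes mentioned above can be sketched side by side. The snippet below implements the standard FCM membership update (fuzzifier m) and an entropy-regularized hard c-means update (parameter lam); the nonlinear-weight generalization proposed in the paper itself is not reproduced here, and all function names and parameter values are illustrative.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard FCM update: u_ik proportional to d_ik^(-2/(m-1))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    w = d2 ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)

def entropy_regularized_memberships(X, centers, lam=0.5):
    """Entropy-regularized hard c-means update: u_ik proportional to exp(-d_ik^2 / lam)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / lam)
    return w / w.sum(axis=1, keepdims=True)

def update_centers(X, U, m=2.0):
    """Membership-weighted mean center update shared by both schemes."""
    Um = U ** m
    return (Um.T @ X) / Um.sum(axis=0)[:, None]

# Tiny alternating-optimization loop on synthetic two-cluster data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    U = fcm_memberships(X, centers, m=2.0)
    centers = update_centers(X, U, m=2.0)

U_fcm = fcm_memberships(X, centers, m=2.0)
U_ent = entropy_regularized_memberships(X, centers, lam=0.5)
print("estimated centers:\n", centers)
print("FCM vs entropy-regularized membership of point 0:", U_fcm[0], U_ent[0])
```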


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yoav Kolumbus ◽  
Noam Nisan

Abstract We study the effectiveness of tracking and testing policies for suppressing epidemic outbreaks. We evaluate the performance of tracking-based intervention methods on a network SEIR model, which we augment with an additional parameter to model pre-symptomatic and asymptomatic individuals, and study the effectiveness of these methods in combination with, or as an alternative to, quarantine and global lockdown policies. Our focus is on the basic trade-off between human lives lost and economic costs, and on how this trade-off changes under different quarantine, lockdown, tracking, and testing policies. Our main findings are as follows: (1) Tests combined with patient quarantines reduce both economic costs and mortality; however, an extensive testing capacity is required to achieve a significant improvement. (2) Tracking significantly reduces both economic costs and mortality. (3) Tracking combined with a moderate testing capacity can achieve containment without lockdowns. (4) In the presence of a flow of new incoming infections, dynamic “On–Off” lockdowns are more efficient than fixed lockdowns; in this setting as well, tracking strictly improves efficiency. The results show the extreme usefulness of policies that combine tracking and testing for reducing mortality and economic costs, and their potential to contain outbreaks without imposing any social distancing restrictions. This highlights the difficult social question of trading off these gains against patient privacy, which is inevitably infringed by tracking.
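The modeling idea of augmenting SEIR dynamics with an asymptomatic/pre-symptomatic parameter can be illustrated with a deliberately simplified, well-mixed sketch; the paper's model is a network SEIR with tracking and testing policies, none of which is reproduced here, and all parameter values below are arbitrary assumptions.

```python
import numpy as np

def seir_step(S, E, I_sym, I_asym, R, beta, sigma, gamma, p_asym, quarantine_frac):
    """One discrete-time step of a well-mixed SEIR variant with an asymptomatic split.

    p_asym:          fraction of newly infectious cases that are asymptomatic/pre-symptomatic
    quarantine_frac: fraction of *symptomatic* infectious individuals removed by quarantine
    """
    N = S + E + I_sym + I_asym + R
    force = beta * (I_sym * (1.0 - quarantine_frac) + I_asym) / N
    new_E = force * S
    new_I = sigma * E
    new_R = gamma * (I_sym + I_asym)
    S -= new_E
    E += new_E - new_I
    I_sym += (1.0 - p_asym) * new_I - gamma * I_sym
    I_asym += p_asym * new_I - gamma * I_asym
    R += new_R
    return S, E, I_sym, I_asym, R

state = (9990.0, 0.0, 10.0, 0.0, 0.0)
for day in range(120):
    state = seir_step(*state, beta=0.3, sigma=0.2, gamma=0.1,
                      p_asym=0.4, quarantine_frac=0.6)
print("final recovered (attack size):", round(state[4]))
```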


Author(s):  
J.-F. Fu ◽  
R. G. Fenton ◽  
W. L. Cleghorn

Abstract An algorithm for solving nonlinear programming problems containing integer, discrete, and continuous variables is presented. It builds on a commonly employed optimization algorithm: penalties on integer and/or discrete violations are added to the objective function to force the search to converge onto standard values. Examples are included to illustrate the practical use of this algorithm.
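A generic exterior-penalty sketch of the idea described above is shown below: violations of integer or standard-value requirements are penalized in the objective, and the penalty weight is increased until the search settles on standard values. The base objective, the discrete set, and the use of SciPy's Nelder-Mead solver are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

DISCRETE_SET = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # allowed standard values for x[1]

def base_objective(x):
    # Smooth objective whose unconstrained optimum (3.2, 1.7) is off the standard values.
    return (x[0] - 3.2) ** 2 + (x[1] - 1.7) ** 2

def violation(x):
    # Distance of x[0] from the nearest integer and of x[1] from the nearest standard value.
    d_int = x[0] - np.round(x[0])
    d_disc = x[1] - DISCRETE_SET[np.argmin(np.abs(DISCRETE_SET - x[1]))]
    return d_int ** 2 + d_disc ** 2

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:                  # gradually stiffen the penalty
    res = minimize(lambda z: base_objective(z) + r * violation(z), x, method="Nelder-Mead")
    x = res.x
print("converged design:", np.round(x, 3))            # expected near (3.0, 1.5)
```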


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. R553-R563
Author(s):  
Sagar Singh ◽  
Ilya Tsvankin ◽  
Ehsan Zabihi Naeini

The nonlinearity of full-waveform inversion (FWI) and parameter trade-offs can prevent convergence toward the actual model, especially for elastic anisotropic media. The problems with parameter updating become particularly severe if ultra-low-frequency seismic data are unavailable and the initial model is not sufficiently accurate. We introduce a robust way to constrain the inversion workflow using borehole information obtained from well logs. These constraints are included in the form of rock-physics relationships for different geologic facies (e.g., shale, sand, salt, and limestone). We develop a multiscale FWI algorithm for transversely isotropic media with a vertical symmetry axis (VTI media) that incorporates facies information through a regularization term in the objective function. That term is updated during the inversion using the models obtained at the previous inversion stage. To account for lateral heterogeneity between sparse borehole locations, we use an image-guided smoothing algorithm. Numerical testing for structurally complex anisotropic media demonstrates that the facies-based constraints may ensure convergence of the objective function toward the global minimum in the absence of ultra-low-frequency data and for simple (even 1D) initial models. We test the algorithm on clean data and on surface records contaminated by Gaussian noise. The algorithm also produces a high-resolution facies model, which should be instrumental in reservoir characterization.
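A heavily simplified sketch of a facies-regularized objective is given below: a linear toy operator stands in for elastic VTI modeling, and the regularization term pulls the model toward values predicted by the nearest facies, with the target rebuilt from the previous stage's model. Everything here (operator, facies values, gradient-descent loop) is an illustrative assumption; the paper's multiscale FWI workflow and image-guided smoothing are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "forward operator" standing in for elastic VTI modeling (placeholder assumption).
n_cells, n_data = 30, 60
G = rng.normal(size=(n_data, n_cells))
m_true = np.where(np.arange(n_cells) < 15, 2.0, 3.0)      # two facies, e.g. shale vs sand
d_obs = G @ m_true

FACIES_VALUES = np.array([2.0, 3.0])                       # stand-in for per-facies rock physics

def facies_projection(m):
    """Map each cell to the value predicted by its nearest facies."""
    return FACIES_VALUES[np.argmin(np.abs(m[:, None] - FACIES_VALUES[None, :]), axis=1)]

def objective_and_gradient(m, m_prev_stage, alpha):
    """Data misfit plus a facies regularization term built from the previous-stage model."""
    m_reg = facies_projection(m_prev_stage)
    r = G @ m - d_obs
    J = 0.5 * r @ r + 0.5 * alpha * np.sum((m - m_reg) ** 2)
    g = G.T @ r + alpha * (m - m_reg)
    return J, g

# Multiscale-flavored loop: the regularization target is refreshed from the previous stage.
m = np.full(n_cells, 2.5)                                  # simple initial model
for stage in range(3):
    m_prev = m.copy()
    for _ in range(200):                                   # plain gradient descent per stage
        J, g = objective_and_gradient(m, m_prev, alpha=1.0)
        m -= 1e-3 * g
print("max model error:", np.abs(m - m_true).max())
```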


Author(s):  
Huizhuo Cao ◽  
Xuemei Li ◽  
Vikrant Vaze ◽  
Xueyan Li

Multi-objective pricing of high-speed rail (HSR) passenger fares becomes a challenge when the HSR operator needs to deal with multiple conflicting objectives. Although many studies have tackled the challenge of calculating optimal fares over railway networks, none of them has focused on characterizing the trade-offs between multiple objectives under multi-modal competition. We formulate the multi-objective HSR fare optimization problem over a linear network by introducing the epsilon-constraint method within a bi-level programming model and develop an iterative algorithm to solve this model. This is the first HSR pricing study to use an epsilon-constraint methodology. We obtain two single-objective solutions and four multi-objective solutions and compare them on a variety of metrics. We also derive the Pareto frontier between the objectives of profit and passenger welfare to enable the operator to choose the best trade-off. Our results, based on computational experiments with the Beijing–Shanghai regional network, provide several new insights. First, we find that small changes in fares can lead to a significant improvement in passenger welfare with no reduction in profitability under multi-objective optimization. Second, the multi-objective optimization solutions show considerable improvements over the single-objective solutions. Third, the Pareto frontier enables decision-makers to make more informed decisions about choosing the best trade-offs. Overall, the explicit modeling of multiple objectives leads to better pricing solutions, which have the potential to guide pricing decisions for HSR operators.
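The epsilon-constraint mechanics can be illustrated on a toy single-link fare problem: profit is maximized subject to a lower bound on passenger welfare, and sweeping the bound traces a Pareto frontier. The linear demand curve, cost parameters, and SLSQP solver below are illustrative assumptions and are unrelated to the paper's bi-level network model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-segment fare model (illustrative, not the paper's bi-level network model).
a, b, c = 100.0, 2.0, 5.0            # linear demand q = a - b*p, unit operating cost c

def demand(p):
    return max(a - b * p, 0.0)

def profit(p):
    return (p - c) * demand(p)

def welfare(p):
    # Consumer surplus under linear demand.
    return demand(p) ** 2 / (2.0 * b)

# Epsilon-constraint sweep: maximize profit subject to welfare(p) >= eps.
frontier = []
for eps in np.linspace(100.0, 2200.0, 8):
    res = minimize(lambda x: -profit(x[0]), x0=[20.0], method="SLSQP",
                   bounds=[(0.0, a / b)],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: welfare(x[0]) - e}])
    p_star = res.x[0]
    frontier.append((profit(p_star), welfare(p_star)))

for pr, wf in frontier:
    print(f"profit = {pr:8.1f}   welfare = {wf:8.1f}")
```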


1988 ◽  
Vol 37 (3) ◽  
pp. 367-378 ◽  
Author(s):  
Richard R. Egudo

The concept of efficiency (Pareto optimality) is used to formulate duality for multiobjective fractional programming problems. We consider programs in which the components of the objective function have non-negative, convex numerators and positive, concave denominators. For this case, the Mond-Weir extension of the Bector dual analogy is given. We also give the Schaible-type vector dual. The case where the functions are ρ-convex (weakly or strongly convex) is also considered.
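For readers unfamiliar with the problem class, a LaTeX sketch of the primal multiobjective fractional program described above is given below; the inequality constraints and their role are added here as a standard assumption rather than quoted from the paper, and the Mond-Weir and Schaible duals themselves are left to the paper.

```latex
% Sketch of the primal problem class (the constraints h_j are an assumed
% standard setup, not quoted from the abstract):
\begin{align*}
\text{(P)}\qquad
  &\min_{x}\ \left(\frac{f_1(x)}{g_1(x)},\ \dots,\ \frac{f_k(x)}{g_k(x)}\right)
  \quad\text{in the sense of Pareto efficiency,}\\
  &\text{s.t.}\quad h_j(x) \le 0,\quad j = 1,\dots,m,
\end{align*}
% where each numerator f_i is convex and non-negative and each denominator
% g_i is concave and positive on the feasible set.
```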


Author(s):  
Luca Bagnato ◽  
Antonio Punzo

Abstract Many statistical problems involve the estimation of a (d × d) orthogonal matrix Q. Such an estimation is often challenging due to the orthonormality constraints on Q. To cope with this problem, we use the well-known PLU decomposition, which factorizes any invertible (d × d) matrix as the product of a (d × d) permutation matrix P, a (d × d) unit lower triangular matrix L, and a (d × d) upper triangular matrix U. Thanks to the QR decomposition, we find the formulation of U when the PLU decomposition is applied to Q. We call the result the PLR decomposition; it produces a one-to-one correspondence between Q and the d(d − 1)/2 entries below the diagonal of L, which are advantageously unconstrained real values. Thus, once the decomposition is applied, regardless of the objective function under consideration, we can use any classical unconstrained optimization method to find the minimum (or maximum) of the objective function with respect to L. For illustrative purposes, we apply the PLR decomposition in common principal components analysis (CPCA) for the maximum likelihood estimation of the common orthogonal matrix when a multivariate leptokurtic-normal distribution is assumed in each group. Compared to the commonly used normal distribution, the leptokurtic-normal has an additional parameter governing the excess kurtosis; this makes the estimation of Q in CPCA more robust against mild outliers. The usefulness of the PLR decomposition in leptokurtic-normal CPCA is illustrated by two biometric data analyses.
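The correspondence between an orthogonal matrix and the d(d − 1)/2 unconstrained entries below the diagonal of L can be demonstrated with a short round trip: extract L from the PLU decomposition of an orthogonal matrix, then rebuild the matrix (up to column signs) from those entries via a QR decomposition. The reconstruction route and the function name below are an assumption for illustration; the paper derives the exact form of U.

```python
import numpy as np
from scipy.linalg import lu, qr

def orthogonal_from_subdiagonal(theta, d):
    """Map d*(d-1)/2 unconstrained reals to an orthogonal matrix.

    theta fills the strictly lower triangle of a unit lower-triangular L;
    the Q factor of L's QR decomposition (with a positive-diagonal sign fix)
    is the resulting orthogonal matrix.  This mirrors the one-to-one
    correspondence described in the abstract, here taking P = I inside L.
    """
    L = np.eye(d)
    L[np.tril_indices(d, k=-1)] = theta
    Q, R = qr(L)
    signs = np.sign(np.diag(R))
    return Q * signs            # flip column signs so the triangular factor has a positive diagonal

# Round trip: start from a random orthogonal matrix, extract L from its PLU
# decomposition, and rebuild the matrix from the subdiagonal entries alone.
rng = np.random.default_rng(0)
d = 4
Q0, _ = qr(rng.normal(size=(d, d)))
P, L, U = lu(Q0)                                   # Q0 = P @ L @ U
theta = L[np.tril_indices(d, k=-1)]
Q_rebuilt = P @ orthogonal_from_subdiagonal(theta, d)
# Agreement up to column sign choices, hence the element-wise absolute values.
print("reconstruction error (up to column signs):",
      np.max(np.abs(np.abs(Q_rebuilt) - np.abs(Q0))))
```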


2004 ◽  
Vol 126 (6) ◽  
pp. 945-949 ◽  
Author(s):  
Maarten Franssen ◽  
Louis L. Bucciarelli

Rationality has different meanings in different contexts. In engineering design, to be rational usually means to be instrumentally rational, that is, to take a measured decision aimed at the realization of a particular goal, as in attempts to optimize an objective function. But in many engineering design problems, especially those that involve several engineers collaborating on a design task, there is no obvious, uncontested, unique objective function. An alternative approach then takes the locus of optimization to be the individual engineers’ utility functions. In this paper, we address an argument claiming that unless the engineers hold a common utility function over design alternatives, a suboptimal, and hence irrational, design is bound to ensue. We challenge this claim and show that, while retaining the utility-function approach but adopting a game-theoretic perspective, rational outcomes to the problem at issue are possible.

