On the Optimal Decomposition of High-Dimensional Solution Spaces of Complex Systems

Author(s):  
Stefan Erschen ◽  
Fabian Duddeck ◽  
Matthias Gerdts ◽  
Markus Zimmermann

In the early development phase of complex technical systems, uncertainties caused by unknown design restrictions must be considered. In order to avoid premature design decisions, sets of good designs, i.e., designs which satisfy all design goals, are sought rather than one optimal design that may later turn out to be infeasible. A set of good designs is called a solution space and serves as a target region for design variables, including those that quantify properties of components or subsystems. Often, the solution space is approximated, e.g., to enable independent development work. Algorithms are available that approximate the solution space as a high-dimensional box whose edges represent permissible intervals for single design variables. The box size is maximized to provide large target regions and facilitate design work. As a result of geometrical mismatch, however, boxes typically capture only a small portion of the complete solution space. To reduce this loss of solution space while still enabling independent development work, this paper presents a new approach that optimizes a set of permissible two-dimensional (2D) regions for pairs of design variables, so-called 2D-spaces. Each 2D-space is confined by a polygon. The Cartesian product of all 2D-spaces forms a solution space for all design variables. An optimization problem is formulated that maximizes the size of the solution space and is solved using an interior-point algorithm. The approach is applicable to arbitrary systems with performance measures that can be expressed or approximated as linear functions of their design variables. Its effectiveness is demonstrated in a chassis design problem.
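
To illustrate the decomposition, the following sketch (hypothetical polygons and design vector; not the authors' implementation) checks whether a design lies in the Cartesian product of 2D-spaces: the design vector is split into consecutive pairs, and each pair must fall inside its polygon.

```python
# A minimal sketch of membership in a Cartesian product of 2D-spaces.
# The polygons and the design vector below are hypothetical.
import numpy as np

def point_in_polygon(p, poly):
    """Ray-casting test: is 2D point p inside the polygon (vertex list)?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def design_is_good(x, polygons):
    """x is split into consecutive pairs; each pair must lie in its 2D-space."""
    pairs = np.asarray(x).reshape(-1, 2)
    return all(point_in_polygon(p, poly) for p, poly in zip(pairs, polygons))

# Two hypothetical 2D-spaces for a four-dimensional design vector.
polygons = [
    [(0, 0), (2, 0), (2, 1), (0, 1)],        # box-like polygon for (x1, x2)
    [(0, 0), (1, 0), (1.5, 1), (0.5, 1.5)],  # general polygon for (x3, x4)
]
print(design_is_good([1.0, 0.5, 0.8, 0.6], polygons))  # True
print(design_is_good([3.0, 0.5, 0.8, 0.6], polygons))  # False: (x1, x2) outside
```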

Author(s):  
Helmut Harbrecht ◽  
Dennis Tröndle ◽  
Markus Zimmermann

Solution spaces are regions of good designs in a potentially high-dimensional design space. Good designs satisfy, by definition, all requirements that are imposed on them as mathematical constraints. In previous work, the complete solution space was approximated by a hyper-rectangle, i.e., the Cartesian product of permissible intervals for design variables. These intervals serve as independent target regions for distributed and separated design work. For a better approximation, i.e., a larger resulting solution space, this article proposes to compute the Cartesian product of two-dimensional regions, so-called 2d-spaces, that are enclosed by polygons. 2d-spaces serve as target regions for pairs of variables and are independent of other 2d-spaces. A numerical algorithm for non-linear problems is presented that is based on iterative Monte Carlo sampling.
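
A crude sketch of the sampling idea follows; the constraint function, the use of convex hulls as a stand-in for the polygon update, and the bound-refocusing step are all simplifying assumptions, not the authors' exact scheme (in particular, the good set need not be convex).

```python
# A highly simplified sketch of iterative Monte Carlo sampling for 2d-spaces.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def is_good(x):
    # Hypothetical nonlinear requirements on a 4-dimensional design.
    return x[0] ** 2 + x[1] <= 1.5 and abs(x[2] - x[3]) <= 0.5

lower, upper = np.zeros(4), np.ones(4)
for it in range(3):
    samples = rng.uniform(lower, upper, size=(2000, 4))
    good = np.array([x for x in samples if is_good(x)])
    # Refocus the sampling region on the good designs found so far.
    lower, upper = good.min(axis=0), good.max(axis=0)

# One polygon per pair of variables: here simply the convex hull of the
# good samples projected onto (x1, x2) and (x3, x4).
polygons = [ConvexHull(good[:, 2 * i:2 * i + 2]) for i in range(2)]
for i, hull in enumerate(polygons):
    print(f"2d-space {i}: {len(hull.vertices)} vertices, area {hull.volume:.3f}")
```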


2014 ◽  
Vol 136 (4) ◽  
Author(s):  
Johannes Fender ◽  
L. Graff ◽  
H. Harbrecht ◽  
Markus Zimmermann

Key parameters may be used to turn a bad design into a good design with comparatively little effort. The proposed method identifies key parameters in high-dimensional nonlinear systems that are subject to uncertainty. A numerical optimization algorithm seeks a solution space in which all designs are good, that is, they satisfy a specified design criterion. The solution space is box-shaped and provides target intervals for each parameter. A bad design may be turned into a good design by moving its key parameters into their target intervals. The solution space is computed so as to minimize the effort for design work: its shape is controlled by particular constraints such that it can be reached by changing only a small number of key parameters. Wide target intervals provide tolerance against the uncertainty that is naturally present in a design process when design parameters are unknown or cannot be controlled exactly. The accuracy of the algorithm is demonstrated in a simple two-dimensional example problem. In a high-dimensional vehicle crash design problem, an underperforming vehicle front structure is improved by identifying and appropriately changing a relevant key parameter.
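
As a sketch of this workflow (intervals and design values are hypothetical), the key parameters of a bad design are those that violate their target intervals, and a repair changes only those:

```python
# A minimal sketch of repairing a bad design via its key parameters.
import numpy as np

# Box-shaped solution space: one permissible interval per design parameter.
intervals = np.array([[0.0, 1.0],
                      [2.0, 5.0],
                      [-1.0, 1.0],
                      [0.5, 0.8]])

def key_parameters(x, intervals):
    """Indices of parameters that violate their target interval."""
    lo, hi = intervals[:, 0], intervals[:, 1]
    return np.where((x < lo) | (x > hi))[0]

def repair(x, intervals):
    """Move only the violating (key) parameters into their intervals."""
    x = x.copy()
    keys = key_parameters(x, intervals)
    x[keys] = np.clip(x[keys], intervals[keys, 0], intervals[keys, 1])
    return x, keys

bad_design = np.array([0.4, 6.2, 0.3, 0.65])
good_design, keys = repair(bad_design, intervals)
print("key parameters:", keys)        # [1] -- only one parameter must change
print("repaired design:", good_design)
```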


Author(s):  
Marco Daub ◽  
Fabian Duddeck

The consideration of uncertainty is especially important for the design of complex systems. Because of high complexity, the total system is normally divided into subsystems, which are treated in a hierarchical and ideally independent manner. In recent publications, e.g., (Zimmermann, M., and von Hoessle, J. E., 2013, “Computing Solution Spaces for Robust Design,” Int. J. Numer. Methods Eng., 94(3), pp. 290–307; Fender, J., Duddeck, F., and Zimmermann, M., 2017, “Direct Computation of Solution Spaces,” Struct. Multidiscip. Optim., 55(5), pp. 1787–1796), a decoupling strategy is realized by first identifying the complete solution space (solutions not violating any design constraints) and then deriving a subset, a so-called box-shaped solution space, which allows for decoupling and therefore independent development of subsystems. An analysis of the types of uncertainty occurring in early design stages shows that lack-of-knowledge uncertainty dominates. Often, knowledge about overall manufacturing tolerances, such as limitations in production, is missing, or subsystems are not even completely defined. Furthermore, flexibility is required to handle new requirements and shifting preferences concerning single subsystems that arise later in the development. Hence, a set-based approach using intervals for design variables (i.e., interaction quantities between subsystems and the total system) is useful. Because the published approaches did not take uncertainty into account when computing these intervals, the intervals can be inappropriately sized, i.e., too narrow. The work presented here proposes to include these uncertainties related to design variables. This makes it possible to consider the lack-of-knowledge uncertainty that is specific to early-phase development in the framework of complex systems design. An example taken from a standard crash load case (frontal impact against a rigid wall) illustrates the proposed methodology.
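
The following minimal sketch conveys one way such interval uncertainty can be accounted for: each permissible interval is shrunk by a tolerance margin so that any realization within ±δ of a nominal value stays good. The intervals and tolerances are hypothetical, and this is only one simple interpretation of the idea, not the authors' method.

```python
# A minimal sketch of tolerance-aware permissible intervals.
import numpy as np

intervals = np.array([[0.0, 1.0],   # permissible interval per design variable
                      [2.0, 5.0]])
delta = np.array([0.1, 0.4])        # assumed manufacturing/definition tolerance

def robust_intervals(intervals, delta):
    """Shrink each interval by its tolerance; fail if an interval collapses."""
    lo = intervals[:, 0] + delta
    hi = intervals[:, 1] - delta
    if np.any(lo > hi):
        raise ValueError("tolerance too large: an interval collapses")
    return np.column_stack([lo, hi])

print(robust_intervals(intervals, delta))
# [[0.1 0.9]
#  [2.4 4.6]]
```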


Author(s):  
Xiaodong Ren ◽  
Daofu Guo ◽  
Zhigang Ren ◽  
Yongsheng Liang ◽  
An Chen

By remarkably reducing the number of real fitness evaluations, surrogate-assisted evolutionary algorithms (SAEAs), especially hierarchical SAEAs, have been shown to be effective in solving computationally expensive optimization problems. The success of hierarchical SAEAs mainly profits from the potential benefit of their global surrogate models, known as the “blessing of uncertainty,” and from the high accuracy of their local models. However, their performance leaves room for improvement on high-dimensional problems, since it is still challenging to build sufficiently accurate local models in the huge solution space. To address this issue, this study proposes a new hierarchical SAEA that trains local surrogate models with the help of the random projection technique. Instead of training in the original high-dimensional solution space, the new algorithm first randomly projects training samples onto a set of low-dimensional subspaces, then trains a surrogate model in each subspace, and finally evaluates candidate solutions by averaging the predictions of the resulting models. Experimental results on seven benchmark functions of 100 and 200 dimensions demonstrate that random projection can significantly improve the accuracy of local surrogate models and that the proposed hierarchical SAEA possesses a clear edge over state-of-the-art SAEAs.
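
The sketch below illustrates this project-train-average pipeline (not the paper's exact algorithm): the RBF surrogates, the subspace dimension, the Gaussian projection matrices, and the sphere test function are all assumptions for illustration.

```python
# A minimal sketch of surrogate training via random projection.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)
dim, sub_dim, n_subspaces, n_train = 100, 4, 10, 150

def expensive_fitness(x):           # stand-in for a costly simulation
    return np.sum(x ** 2, axis=-1)  # sphere function

X = rng.uniform(-5, 5, size=(n_train, dim))
y = expensive_fitness(X)

# Random Gaussian projection matrices, one per low-dimensional subspace.
projections = [rng.normal(size=(dim, sub_dim)) / np.sqrt(sub_dim)
               for _ in range(n_subspaces)]
surrogates = [RBFInterpolator(X @ P, y) for P in projections]

def predict(X_new):
    """Average the subspace surrogates' predictions for candidate solutions."""
    return np.mean([s(X_new @ P) for s, P in zip(surrogates, projections)],
                   axis=0)

X_test = rng.uniform(-5, 5, size=(5, dim))
print("true:     ", expensive_fitness(X_test).round(1))
print("predicted:", predict(X_test).round(1))
```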


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Jian Zhang ◽  
Huanzhou Li ◽  
Zhangguo Tang ◽  
Qiuping Lu ◽  
Xiuqing Zheng ◽  
...  

A multilevel thresholding algorithm for histogram-based image segmentation is presented in this paper. The proposed algorithm introduces an adaptive adjustment strategy for the quantum rotation angle and a cooperative learning strategy into the quantum genetic algorithm (the resulting algorithm is called IQGA). The adaptive adjustment of the rotation angle improves the convergence speed, search ability, and stability. Cooperative learning enhances the search ability in the high-dimensional solution space by splitting a high-dimensional vector into several one-dimensional vectors. The experimental results demonstrate the good performance of IQGA in solving the multilevel thresholding segmentation problem compared with QGA, GA, and PSO.
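
A minimal sketch of the cooperative-learning idea in isolation follows (the quantum representation and rotation-gate updates of IQGA are not reproduced; the objective is hypothetical): each one-dimensional component is improved in turn while the other components are held fixed in a shared context vector.

```python
# A minimal sketch of cooperative learning via one-dimensional subproblems.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                      # hypothetical fitness to minimize
    return np.sum((x - 0.3) ** 2)

dim = 20
context = rng.uniform(0, 1, size=dim)  # current best full solution

for sweep in range(50):
    for i in range(dim):               # one 1D subproblem per dimension
        candidate = context.copy()
        candidate[i] = rng.uniform(0, 1)
        if objective(candidate) < objective(context):
            context = candidate        # cooperative update of the shared vector

print("best fitness:", objective(context).round(4))
```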


Author(s):  
George H. Cheng ◽  
Adel Younis ◽  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

Mode Pursuing Sampling (MPS) was developed as a global optimization algorithm for optimization problems involving expensive black box functions. MPS has been found to be effective and efficient for problems of low dimensionality, i.e., where the number of design variables is less than ten. A previous conference publication integrated the concept of trust regions into the MPS framework to create a new algorithm, TRMPS, which dramatically improved performance and efficiency for high dimensional problems. However, although TRMPS performed better than MPS, it was unproven against other established algorithms such as the genetic algorithm (GA). This paper introduces an improved algorithm, TRMPS2, which incorporates guided sampling and a low function value criterion to further improve performance on high dimensional problems. TRMPS2 is benchmarked against MPS and GA using a suite of test problems. The results show that TRMPS2 performs better than MPS and GA on average for high dimensional, expensive, and black box (HEB) problems.
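
The following sketch conveys only the generic trust-region mechanism that TRMPS builds on (the actual mode-pursuing, guided sampling, and low-function-value criteria of TRMPS2 are not reproduced): sample inside a region around the incumbent, expand it on improvement, and contract it otherwise.

```python
# A minimal sketch of a trust-region loop for an expensive black box function.
import numpy as np

rng = np.random.default_rng(7)

def black_box(x):                      # stand-in for an expensive function
    return np.sum(x ** 2)

dim, radius = 30, 2.0
best_x = rng.uniform(-5, 5, size=dim)
best_f = black_box(best_x)

for it in range(200):
    lo = np.maximum(best_x - radius, -5)
    hi = np.minimum(best_x + radius, 5)
    candidates = rng.uniform(lo, hi, size=(20, dim))
    f_vals = np.array([black_box(c) for c in candidates])
    i = np.argmin(f_vals)
    if f_vals[i] < best_f:             # success: accept and enlarge the region
        best_x, best_f = candidates[i], f_vals[i]
        radius = min(radius * 1.5, 5.0)
    else:                              # failure: contract around the incumbent
        radius *= 0.7

print("best value found:", round(best_f, 4))
```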


Author(s):  
Ken Kobayashi ◽  
Naoki Hamada ◽  
Akiyoshi Sannai ◽  
Akinori Tanaka ◽  
Kenichi Bannai ◽  
...  

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets with small samples. In practice, such an approximation will be conducted in the postoptimization process and enable a better trade-off analysis.
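
A minimal sketch of evaluating a Bézier simplex follows: for M = 3 objectives the model is a curved triangle parameterized over the 2-simplex through multinomial Bernstein polynomials, b(t) = Σ_{|α|=d} (d choose α) t^α p_α. The degree and control points here are hypothetical, and the paper's fitting algorithm is not reproduced.

```python
# A minimal sketch of a degree-2 Bezier simplex (curved triangle) in R^3.
import numpy as np
from itertools import product
from math import factorial

def multi_indices(degree, n_params):
    """All alpha with len(alpha) == n_params and sum(alpha) == degree."""
    return [a for a in product(range(degree + 1), repeat=n_params)
            if sum(a) == degree]

def bezier_simplex(t, control):
    """Evaluate sum over |alpha| = d of C(d, alpha) * t^alpha * p_alpha."""
    d = sum(next(iter(control)))
    value = np.zeros_like(next(iter(control.values())), dtype=float)
    for alpha, p in control.items():
        coef = factorial(d) / np.prod([factorial(a) for a in alpha])
        value += coef * np.prod(np.asarray(t) ** np.asarray(alpha)) * p
    return value

# Hypothetical control points indexed by multi-indices of degree 2.
degree = 2
rng = np.random.default_rng(3)
control = {a: rng.uniform(0, 1, size=3) for a in multi_indices(degree, 3)}

t = (0.2, 0.5, 0.3)                    # barycentric coordinates, sum to 1
print(bezier_simplex(t, control))      # a point on the curved triangle
```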


2020 ◽  
Vol 32 (12) ◽  
pp. 2332-2388 ◽  
Author(s):  
Spencer J. Kent ◽  
E. Paxon Frady ◽  
Friedrich T. Sommer ◽  
Bruno A. Olshausen

We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product of a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
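
The following sketch shows a two-factor resonator network for bipolar vectors under simplifying assumptions (vector dimension, codebook sizes, and iteration count are arbitrary): given s = x ⊙ y, each step unbinds one estimated factor from s and cleans up the result against that factor's codebook, starting from a superposition of all codevectors.

```python
# A minimal sketch of a two-factor resonator network with bipolar vectors.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 21                        # vector dimension, codebook size

X = rng.choice([-1, 1], size=(m, n))   # codebook for factor 1 (rows are codes)
Y = rng.choice([-1, 1], size=(m, n))   # codebook for factor 2
s = X[3] * Y[7]                        # composite: Hadamard product of factors

def cleanup(v, codebook):
    """Project onto the codebook's span and binarize (search in superposition)."""
    return np.sign(codebook.T @ (codebook @ v))

x_hat = np.sign(X.sum(axis=0))         # start from a superposition of all codes
y_hat = np.sign(Y.sum(axis=0))
for _ in range(20):
    x_hat = cleanup(s * y_hat, X)      # unbind y estimate, clean up against X
    y_hat = cleanup(s * x_hat, Y)      # unbind x estimate, clean up against Y

print("factor 1:", np.argmax(X @ x_hat))  # expected: 3
print("factor 2:", np.argmax(Y @ y_hat))  # expected: 7
```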

