Volume 2B: 40th Design Automation Conference
Latest Publications

Total documents: 56 (five years: 0)
H-index: 5 (five years: 0)
Published by: American Society of Mechanical Engineers
ISBN: 9780791846322

Author(s):  
Greg Burton

In this paper we present a new, efficient algorithm for computing the “raw offset” curves of 2D polygons with holes. Prior approaches focus on (a) complete computation of the Voronoi diagram, or (b) pair-wise techniques for generating a raw offset followed by removal of “invalid loops” using a sweepline algorithm. Both have drawbacks in practice. Robust implementation of Voronoi diagram algorithms has proven complex. Sweepline approaches take O((n + k) log n) time and O(n + k) memory, where n is the number of vertices and k is the number of self-intersections of the raw offset curve. It has been shown that k can be O(n²) when the offset distance is greater than or equal to the local radius of curvature of the polygon, a regular occurrence in the creation of contour-parallel offset curves for NC pocket machining. Our O(n log n) recursive algorithm, derived from Voronoi diagram algorithms, computes the velocities of polygon vertices as a function of the overall offset rate. By construction, our algorithm prunes a large proportion of locally invalid loops from the raw offset curve, eliminating all self-intersections in raw offsets of convex polygons and avoiding the “near-circular” worst cases in non-convex polygons where k is proportional to n².
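The paper's own algorithm is not given in the abstract; for reference, a minimal sketch of the pair-wise construction it compares against, in which each edge of a counter-clockwise polygon (without holes, for simplicity) is offset inward by d and adjacent offset lines are intersected, leaving any invalid loops in place:

import math

def raw_offset(polygon, d):
    """Offset every edge of a counter-clockwise polygon inward by d and
    intersect adjacent offset lines (pair-wise construction, no loop removal)."""
    n = len(polygon)
    offset_lines = []
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        nx, ny = -ey / length, ex / length      # inward normal of a CCW edge
        offset_lines.append(((x1 + d * nx, y1 + d * ny), (ex, ey)))
    offset_polygon = []
    for i in range(n):
        (px, py), (dx1, dy1) = offset_lines[i - 1]   # offset edge entering vertex i
        (qx, qy), (dx2, dy2) = offset_lines[i]       # offset edge leaving vertex i
        denom = dx1 * dy2 - dy1 * dx2
        if abs(denom) < 1e-12:                       # parallel edges: keep translated vertex
            offset_polygon.append((qx, qy))
            continue
        t = ((qx - px) * dy2 - (qy - py) * dx2) / denom
        offset_polygon.append((px + t * dx1, py + t * dy1))
    return offset_polygon

print(raw_offset([(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)], 1.0))

For a unit square offset by 1.0 this prints the inner square (1, 1), (3, 1), (3, 3), (1, 3); for non-convex input the output can self-intersect, which is exactly the raw-offset cleanup problem the paper addresses.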


Author(s):  
Yan Wang

One of the significant breakthroughs in quantum computation is Grover's algorithm for unsorted database search. Recently, applications of Grover's algorithm to global optimization problems have been demonstrated, where unknown optimum solutions are found by iteratively improving the threshold value for the selective phase-shift operator in the Grover rotation. In this paper, a hybrid approach that combines continuous-time quantum walks with Grover search is proposed. By taking advantage of the quantum tunneling effect, local barriers are overcome and better threshold values can be found at an early stage of the search process. The new algorithm based on this formalism is demonstrated with benchmark examples of global optimization, and its results are compared with those of the Grover search method.
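The quantum-walk and Grover-rotation machinery cannot be reproduced from the abstract alone; the sketch below is only a classical stand-in for the threshold-improvement loop it describes, with each Grover search step replaced by uniform random sampling and the threshold tightened whenever a better objective value is found (the objective and parameters are illustrative):

import math
import random

def rastrigin(x):
    """Benchmark objective (global minimum 0 at the origin)."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def threshold_search(f, dim=2, bound=5.12, iters=2000, seed=0):
    """Classical stand-in for Grover-style adaptive search: keep the threshold
    equal to the best objective value seen and accept only samples below it."""
    rng = random.Random(seed)
    best_x = [rng.uniform(-bound, bound) for _ in range(dim)]
    threshold = f(best_x)
    for _ in range(iters):
        x = [rng.uniform(-bound, bound) for _ in range(dim)]
        y = f(x)
        if y < threshold:          # the role played by Grover amplification
            threshold, best_x = y, x
    return best_x, threshold

best_x, best_y = threshold_search(rastrigin)
print("best point:", best_x, "objective:", best_y)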


Author(s):  
Hyunkyoo Cho ◽  
K. K. Choi ◽  
David Lamb

An accurate input probabilistic model is necessary to obtain trustworthy results in reliability analysis and reliability-based design optimization (RBDO). However, an accurate input probabilistic model is not always available; very often only insufficient input data are available in practical engineering problems. When only limited input data are provided, uncertainty is induced in the input probabilistic model, and this uncertainty propagates to the reliability output, which is defined as the probability of failure. The confidence level of the reliability output then decreases. To resolve this problem, the reliability output is treated in this paper as having a probability distribution. The probability of the reliability output is obtained as a combination of consecutive conditional probabilities of the input distribution type and parameters using a Bayesian approach. The conditional probabilities are obtained under certain assumptions, and the Monte Carlo simulation (MCS) method is used to calculate the probability of the reliability output. Using the probability of the reliability output as a constraint, a confidence-based RBDO (C-RBDO) problem is formulated. In the new probabilistic constraint of the C-RBDO formulation, two threshold values, the target reliability output and the target confidence level, are used. For an effective C-RBDO process, the design sensitivity of the new probabilistic constraint is derived. C-RBDO is performed for a mathematical problem with different numbers of input data, and the results show that the C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
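The Bayesian construction of the conditional probabilities is not reproducible from the abstract; the sketch below only illustrates the underlying idea that the probability of failure becomes a random quantity when the input distribution parameters are estimated from limited data. It uses a double-loop Monte Carlo simulation with a bootstrap stand-in for the Bayesian posterior and a hypothetical limit-state function:

import numpy as np

rng = np.random.default_rng(0)

def limit_state(x1, x2):
    # hypothetical limit state: failure when g < 0
    return x1 + x2 - 5.0

data = rng.normal(2.0, 0.5, size=20)           # limited input data for X1
n_outer, n_inner = 500, 10_000

pf_samples = []
for _ in range(n_outer):
    # outer loop: sample plausible distribution parameters for X1
    # (bootstrap stand-in for the Bayesian posterior in the paper)
    boot = rng.choice(data, size=data.size, replace=True)
    mu, sigma = boot.mean(), boot.std(ddof=1)
    # inner loop: ordinary MCS estimate of the probability of failure
    x1 = rng.normal(mu, sigma, n_inner)
    x2 = rng.normal(3.0, 0.3, n_inner)          # X2 assumed fully known
    pf_samples.append(np.mean(limit_state(x1, x2) < 0.0))

pf_samples = np.array(pf_samples)
print("median Pf:", np.median(pf_samples))
print("Pf at 95% confidence:", np.quantile(pf_samples, 0.95))

A confidence-based constraint in this spirit would bound the upper quantile of pf_samples rather than a single point estimate.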


Author(s):  
Zhe Zhang ◽  
Chao Jiang ◽  
G. Gary Wang ◽  
Xu Han

Evidence theory has a strong ability to deal with epistemic uncertainty, and with it the uncertain parameters found in many complex engineering problems with limited information can be conveniently treated. However, the heavy computational cost caused by its discrete property severely limits the practicability of evidence theory and has become a main difficulty in structural reliability analysis using it. This paper aims to develop an efficient method to evaluate the reliability of structures with evidence variables, and hence to improve the applicability of evidence theory to engineering problems. A non-probabilistic reliability index approach is introduced to obtain a design point on the limit-state surface. An assistant area is then constructed through the obtained design point, based on which a small number of focal elements can be picked out for extreme analysis instead of using all the elements. The vertex method is used in the extreme analysis to obtain the minimum and maximum values of the limit-state function over a focal element. A reliability interval composed of the belief measure and the plausibility measure is finally obtained for the structure. Two numerical examples are investigated to demonstrate the effectiveness of the proposed method.
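As a terminology anchor, a minimal sketch of the belief/plausibility interval for a limit-state function over interval focal elements, with the vertex method used for the extreme values (the paper's design-point-based screening of focal elements is omitted, and the limit-state function and basic probability assignments below are illustrative only):

import itertools

def g(x1, x2):
    # illustrative limit-state function; "safe" means g >= 0
    return x1 * x2 - 6.0

# focal elements: (interval for x1, interval for x2, basic probability assignment)
focal_elements = [
    ((1.0, 2.0), (2.0, 3.0), 0.3),
    ((2.0, 3.0), (2.0, 3.0), 0.4),
    ((2.0, 3.0), (3.0, 4.0), 0.3),
]

belief = plausibility = 0.0
for i1, i2, bpa in focal_elements:
    # vertex method: for a monotonic g, the extremes over the box occur at its corners
    values = [g(x1, x2) for x1, x2 in itertools.product(i1, i2)]
    g_min, g_max = min(values), max(values)
    if g_min >= 0.0:          # the whole focal element is safe
        belief += bpa
    if g_max >= 0.0:          # at least part of the focal element is safe
        plausibility += bpa

print("reliability interval: [%.2f, %.2f]" % (belief, plausibility))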


Author(s):  
George H. Cheng ◽  
Chao Qi ◽  
G. Gary Wang

A practical, flexible, versatile, and heterogeneous distributed computing framework is presented that simplifies the creation of small-scale local distributed computing networks for the execution of computationally expensive black-box analyses. The framework, called the Dynamic Service-oriented Optimization Computing Framework (DSOCF), is designed to parallelize black-box computation to speed up optimization runs. It is developed in Java and leverages the Apache River project, a dynamic Service-Oriented Architecture (SOA). A roulette-based real-time load-balancing algorithm is implemented that supports multiple users and balances against task priorities, which is superior to the rigid preset wall-clock limits commonly seen in grid computing. The framework accounts for constraints on resources and incorporates a credit-based system to ensure fair usage of and access to computing resources. Experimental results demonstrate the effectiveness of the framework.
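DSOCF itself is not shown in the abstract; the sketch below only illustrates the general idea behind a roulette-based, priority-weighted scheduler (the task structure is illustrative, not DSOCF's API):

import random

def roulette_pick(tasks, rng=random):
    """Pick a task with probability proportional to its priority weight."""
    total = sum(priority for _, priority in tasks)
    threshold = rng.uniform(0.0, total)
    running = 0.0
    for name, priority in tasks:
        running += priority
        if running >= threshold:
            return name
    return tasks[-1][0]

# (task name, priority weight); a higher weight is scheduled more often
queue = [("userA/job1", 5.0), ("userB/job1", 1.0), ("userB/job2", 2.0)]
counts = {name: 0 for name, _ in queue}
for _ in range(10_000):
    counts[roulette_pick(queue)] += 1
print(counts)   # roughly in the ratio 5:1:2

Unlike a fixed wall-clock quota, the weights can be recomputed in real time, for example from each user's remaining credit.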


Author(s):  
Jeffrey D. Allen ◽  
Jason D. Watson ◽  
Christopher A. Mattson ◽  
Scott M. Ferguson

The challenge of designing complex engineered systems with long service lives can be daunting. As customer needs change over time, such systems must evolve to meet those needs. This paper presents a method for evaluating the reconfigurability of systems to meet future needs. Specifically, we show that excess capability is a key factor in evaluating the reconfigurability of a system to a particular need, and that overall system reconfigurability is a function of the system's reconfigurability to all future needs combined. There are many examples of complex engineered systems, such as aircraft, ships, communication systems, spacecraft, and automated assembly lines. These systems cost millions of dollars to design and millions to replicate, and they often need to stay in service for a long time. However, their service life is often limited by an inability to adapt to meet future needs. Using an automated assembly line as an example, we show that system reconfigurability can be modeled as a function of usable excess capability.
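Purely as an illustration of the idea (not the authors' metric), one could score the system's reconfigurability to each future need by how far its usable excess capability covers that need, and aggregate over the needs weighted by their likelihood:

def reconfigurability(excess, needs):
    """excess: usable excess capability per resource (illustrative units).
    needs: list of (required capability per resource, likelihood of that need)."""
    score = 0.0
    for required, likelihood in needs:
        coverage = min(
            1.0 if required[k] == 0 else min(excess.get(k, 0.0) / required[k], 1.0)
            for k in required
        )
        score += likelihood * coverage
    return score / sum(likelihood for _, likelihood in needs)

# hypothetical assembly line: spare cycle time and one free station
excess = {"cycle_time_s": 12.0, "stations": 1.0}
future_needs = [
    ({"cycle_time_s": 10.0, "stations": 1.0}, 0.6),   # add a fastening step
    ({"cycle_time_s": 30.0, "stations": 2.0}, 0.4),   # add a new sub-assembly
]
print("reconfigurability score:", reconfigurability(excess, future_needs))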


Author(s):  
Teng Long ◽  
Lv Wang ◽  
Di Wu ◽  
Xiaosong Guo ◽  
Li Liu

With the aim of reducing the computational time of engineering design optimization problems using metamodeling technologies, we have developed a flexible distributed framework, independent of any third-party parallel computing software, to implement simultaneous sampling during metamodel-based design optimization (MBDO) procedures. In this paper, the idea and implementation of the hardware configuration, software structure, main functional modules, and interfaces of this framework are presented in detail. The proposed framework is capable of integrating black-box functions and legacy analysis software as well as common MBDO methods for design space exploration. In addition, a message-based communication infrastructure based on the TCP/IP protocol is developed for distributed data exchange. The client/server architecture and a computing budget allocation algorithm that accounts for software dependency enable samples to be effectively allocated to the distributed computing nodes for simultaneous execution, which decreases the elapsed time and improves the efficiency of MBDO. Tests on several numerical benchmark problems give favorable results, demonstrating that the proposed framework markedly reduces computational time and is practical for engineering MBDO problems.
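The framework itself is not given in the abstract; the sketch below only illustrates the same message-based, client/server idea, with one worker node evaluating a black-box sample received as JSON over a TCP socket (host, port, and message fields are illustrative):

import json
import math
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50507           # illustrative address

def branin(x, y):
    """Benchmark black-box function standing in for an expensive simulation."""
    return (y - 5.1 * x * x / (4 * math.pi ** 2) + 5 * x / math.pi - 6) ** 2 \
        + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x) + 10

def worker_node():
    """Computing node: accept one sample message, evaluate it, return the result."""
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            task = json.loads(conn.recv(4096).decode())
            reply = {"id": task["id"], "f": branin(*task["x"])}
            conn.sendall(json.dumps(reply).encode())

threading.Thread(target=worker_node, daemon=True).start()
time.sleep(0.2)                           # give the worker time to start listening

# client side: dispatch one sample of a design of experiments to the worker node
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(json.dumps({"id": 0, "x": [3.14, 2.27]}).encode())
    print(json.loads(cli.recv(4096).decode()))

A budget allocation layer would sit above this, deciding how many samples each node receives per iteration.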


Author(s):  
Jing Wang ◽  
Mian Li

In reliability design, allocating redundancy through various optimization methods is one of the important ways to improve system reliability. Generally, in these redundancy allocation problems, it is assumed that failures of components are independent. However, under this assumption failure rates can be underestimated, since failure interactions can significantly affect the performance of systems. This paper first proposes an analytical model describing failure rates with failure interactions. A Modified Analytic Hierarchy Process (MAHP) is then proposed to solve redundancy allocation problems for systems with failure interactions. This method decomposes the system into several blocks and deals with those down-sized blocks before drilling down to the most appropriate component for redundancy allocation. Being simple and flexible, MAHP provides an intuitive way to design a complex system, and complex explicit objective functions for the entire system are not required in the proposed approach. More importantly, with the help of the proposed analytical failure interaction model, MAHP can capture the effect of failure interactions. Results from case studies clearly demonstrate the applicability of the analytical model for failure interactions and of MAHP for reliability design.
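The paper's analytical interaction model is not given in the abstract; as a hedged illustration of why the independence assumption underestimates failure rates, the sketch below inflates each component's failure rate with a linear contribution from the other components (a generic interaction form, not necessarily the authors') and compares the reliability of a simple series system with and without interactions:

import math

base_rate = [0.002, 0.004, 0.003]           # failures per hour, components 1..3
# theta[i][j]: fraction of component j's failure rate added onto component i
theta = [
    [0.0, 0.3, 0.0],
    [0.2, 0.0, 0.1],
    [0.0, 0.4, 0.0],
]

def effective_rates(base, theta):
    """Linear interaction model: lambda_i_eff = lambda_i + sum_j theta_ij * lambda_j."""
    return [lam + sum(t * l for t, l in zip(row, base))
            for lam, row in zip(base, theta)]

def series_reliability(rates, t):
    """Series system of exponential components: R(t) = exp(-t * sum(lambda))."""
    return math.exp(-t * sum(rates))

t = 100.0
print("independent:      R =", round(series_reliability(base_rate, t), 4))
print("with interaction: R =", round(series_reliability(effective_rates(base_rate, theta), t), 4))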


Author(s):  
Meng Xu ◽  
Georges Fadel ◽  
Margaret M. Wiecek

As system design problems increase in complexity, researchers seek approaches to optimize such problems by coordinating the optimizations of decomposed sub-problems. Many methods for optimization by decomposition have been proposed in the literature, among which the Augmented Lagrangian Coordination (ALC) method has drawn much attention due to its efficiency and flexibility. The ALC method involves a quadratic penalty term, and the initial setting and update strategy of the penalty weight are critical to the performance of ALC. The weight in the traditional update strategy always increases, and previous research shows that an inappropriate initial value of the penalty weight may cause the method not to converge to optimal solutions. Inspired by research on Augmented Lagrangian Relaxation in convex optimization, a new weight update strategy in which the weight can either increase or decrease is introduced into engineering optimization. As a first step, the primal and dual residuals for optimization by decomposition are derived. This derivation shows that the traditional weight update strategy considers only the primal residual, which may result in a duality gap and cause a relatively large solution error. A new weight update strategy considering both the primal and dual residuals is developed, which drives the dual residual to zero in the optimization process and thus guarantees the solution accuracy of the decomposed problem. Finally, the developed strategy is applied to both mathematical and engineering test problems, and the results show significant improvements in solution accuracy. Additionally, the proposed approach makes the ALC method more robust, since it allows the coordination to converge with an initial weight selected from a much wider range of possible values, whereas the selection of the initial weight is a major concern under the traditional weight update strategy.
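The paper's exact update rule is not given in the abstract; for flavor, a residual-balancing penalty update of the same kind is well known from ADMM-type methods: increase the weight when the primal residual dominates and decrease it when the dual residual dominates. A minimal sketch, with conventional thresholds rather than the paper's:

def update_weight(w, primal_res, dual_res, mu=10.0, tau=2.0):
    """Residual-balancing penalty update (ADMM-style, not necessarily the paper's rule):
    grow w when the primal residual dominates, shrink it when the dual residual does."""
    if primal_res > mu * dual_res:
        return w * tau
    if dual_res > mu * primal_res:
        return w / tau
    return w

# usage inside a coordination loop (residual values here are made up)
w = 1.0
for primal_res, dual_res in [(5.0, 0.1), (0.5, 0.2), (0.01, 0.4)]:
    w = update_weight(w, primal_res, dual_res)
    print("updated weight:", w)

A traditional monotone strategy corresponds to keeping only the first branch, which is why it can leave a nonzero dual residual at termination.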


Author(s):  
Souma Chowdhury ◽  
Ali Mehmani ◽  
Achille Messac

One of the primary drawbacks plaguing wider acceptance of surrogate models is their generally low fidelity. This issue can be attributed in large part to the lack of automated model selection techniques, particularly ones that do not make limiting assumptions regarding the choice of model types and kernel types. A novel model selection technique was recently developed to perform optimal model search concurrently at three levels: (i) optimal model type (e.g., RBF), (ii) optimal kernel type (e.g., multiquadric), and (iii) optimal values of hyper-parameters (e.g., the shape parameter) that are conventionally kept constant. The error measures to be minimized in this optimal model selection process are determined by the Predictive Estimation of Model Fidelity (PEMF) method, which has been shown to be significantly more accurate than typical cross-validation-based error metrics. In this paper, we make the following important advancements to the PEMF-based model selection framework, now called the Concurrent Surrogate Model Selection (COSMOS) framework: (i) the optimization formulation is modified through binary coding to allow surrogates with differing numbers of candidate kernels and kernels with differing numbers of hyper-parameters (which was previously not allowed); (ii) a robustness criterion, based on the variance of errors, is added to the existing criteria for model selection; and (iii) a larger candidate pool of 16 surrogate-kernel combinations is considered for selection, possibly making COSMOS one of the most comprehensive surrogate model selection frameworks (in theory and implementation) currently available. The effectiveness of the COSMOS framework is demonstrated by successfully applying it to four benchmark problems (with 2–30 variables) and an airfoil design problem. The optimal model selection results illustrate how diverse models provide important tradeoffs for different problems.
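PEMF and the full 16-combination pool cannot be reconstructed from the abstract; the sketch below only illustrates the outer selection idea, searching concurrently over kernel type and shape parameter of an RBF surrogate, with leave-one-out cross-validation error as a stand-in for the PEMF error measures and a tiny candidate pool:

import numpy as np

rng = np.random.default_rng(1)

def f(x):                                   # benchmark function to be surrogated
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# candidate kernel pool (a small stand-in for COSMOS's 16 surrogate-kernel combinations)
kernels = {
    "gaussian":     lambda r, c: np.exp(-(c * r) ** 2),
    "multiquadric": lambda r, c: np.sqrt(1.0 + (c * r) ** 2),
}

def rbf_loocv_rmse(X, y, kernel, c):
    """Leave-one-out RMSE of an RBF interpolant with the given kernel and shape c."""
    errors = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xt, yt = X[mask], y[mask]
        r = np.linalg.norm(Xt[:, None, :] - Xt[None, :, :], axis=-1)
        w = np.linalg.solve(kernel(r, c) + 1e-10 * np.eye(len(Xt)), yt)
        r_new = np.linalg.norm(Xt - X[i], axis=-1)
        errors.append(kernel(r_new, c) @ w - y[i])
    return float(np.sqrt(np.mean(np.square(errors))))

X = rng.uniform(-2, 2, size=(30, 2))
y = f(X)

# concurrent search over kernel type and hyper-parameter (a grid stand-in for the
# optimization-based search in COSMOS)
best = min(
    ((name, c, rbf_loocv_rmse(X, y, k, c))
     for name, k in kernels.items() for c in (0.5, 1.0, 2.0)),
    key=lambda t: t[2],
)
print("selected kernel: %s, shape parameter: %.1f, LOO RMSE: %.4f" % best)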

