Volume 2B: 40th Design Automation Conference


TOTAL DOCUMENTS: 56 (five years: 0)
H-INDEX: 5 (five years: 0)

Published by the American Society of Mechanical Engineers
ISBN: 9780791846322

Author(s): Yan Wang

One of the significant breakthroughs in quantum computation is Grover’s algorithm for unsorted database search. Recently, applications of Grover’s algorithm to global optimization problems have been demonstrated, where unknown optimum solutions are found by iteratively improving the threshold value for the selective phase-shift operator in the Grover rotation. In this paper, a hybrid approach that combines continuous-time quantum walks with Grover search is proposed. By taking advantage of the quantum tunneling effect, local barriers are overcome and better threshold values can be found at an early stage of the search process. The new algorithm based on this formalism is demonstrated on benchmark global optimization examples, and its results are compared with those of the Grover search method.
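The threshold-improvement loop at the heart of this family of methods can be illustrated classically. Below is a minimal sketch, assuming a toy 1-D multimodal objective and idealizing each Grover rotation as a uniform draw from the set of candidates below the current threshold; the function, domain, and sampling model are illustrative, not the paper's.

```python
# Classical idealization of threshold-based Grover search for global
# minimization: each "Grover step" is modeled as drawing a uniform sample
# from the candidates whose objective value lies below the current threshold.
import math
import random

def objective(x):
    # Toy multimodal benchmark (assumed; not from the paper).
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def grover_threshold_search(lo, hi, n_candidates=2000, n_rounds=25, seed=1):
    rng = random.Random(seed)
    pool = [lo + (hi - lo) * rng.random() for _ in range(n_candidates)]
    best = rng.choice(pool)
    threshold = objective(best)
    for _ in range(n_rounds):
        # "Marked" states: candidates strictly better than the threshold.
        marked = [x for x in pool if objective(x) < threshold]
        if not marked:
            break  # no better state exists in the pool
        best = rng.choice(marked)    # idealized amplitude amplification
        threshold = objective(best)  # tighten the phase-shift threshold
    return best, threshold

if __name__ == "__main__":
    x_star, f_star = grover_threshold_search(-5.12, 5.12)
    print(f"approximate minimizer: {x_star:.4f}, value: {f_star:.4f}")
```

The paper's contribution is to improve the early thresholds via quantum-walk tunneling; this sketch only shows the iterative threshold mechanism that the walk accelerates.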


Author(s): Greg Burton

In this paper we present a new, efficient algorithm for computing the “raw offset” curves of 2D polygons with holes. Prior approaches focus on (a) complete computation of the Voronoi diagram, or (b) pair-wise techniques for generating a raw offset followed by removal of “invalid loops” using a sweepline algorithm. Both have drawbacks in practice. Robust implementation of Voronoi diagram algorithms has proven complex. Sweeplines take O((n + k) log n) time and O(n + k) memory, where n is the number of vertices and k is the number of self-intersections of the raw offset curve. It has been shown that k can be O(n²) when the offset distance is greater than or equal to the local radius of curvature of the polygon, a regular occurrence in the creation of contour-parallel offset curves for NC pocket machining. Our O(n log n) recursive algorithm, derived from Voronoi diagram algorithms, computes the velocities of polygon vertices as a function of the overall offset rate. By construction, our algorithm prunes a large proportion of locally invalid loops from the raw offset curve, eliminating all self-intersections in raw offsets of convex polygons and the “near-circular”, k = O(n²) worst-case scenarios in non-convex polygons.
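The vertex-velocity view is easy to make concrete for the convex case: offsetting both incident edges inward at unit rate moves each vertex along its interior angle bisector at speed 1/sin(θ/2), where θ is the interior angle. A minimal sketch, assuming a CCW convex polygon (the paper's recursive algorithm also handles non-convex inputs and loop pruning, which this does not):

```python
# Vertex-velocity offsetting of a convex polygon: each vertex travels along
# its interior angle bisector at speed 1/sin(theta/2).
import math

def normalize(vx, vy):
    n = math.hypot(vx, vy)
    return vx / n, vy / n

def inward_offset_convex(poly, d):
    """Offset a CCW convex polygon inward by distance d."""
    out = []
    m = len(poly)
    for i in range(m):
        px, py = poly[i - 1]          # previous vertex
        cx, cy = poly[i]              # current vertex
        nx, ny = poly[(i + 1) % m]    # next vertex
        # Unit directions along the two incident edges, pointing away from c.
        ax, ay = normalize(px - cx, py - cy)
        bx, by = normalize(nx - cx, ny - cy)
        # Interior bisector direction and half-angle at the vertex.
        hx, hy = normalize(ax + bx, ay + by)
        half = 0.5 * math.acos(max(-1.0, min(1.0, ax * bx + ay * by)))
        speed = 1.0 / math.sin(half)  # vertex velocity magnitude
        out.append((cx + d * speed * hx, cy + d * speed * hy))
    return out

if __name__ == "__main__":
    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(inward_offset_convex(square, 1.0))  # expect the inner 2x2 square
```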


Author(s): George H. Cheng, Chao Qi, G. Gary Wang

A practical, flexible, versatile, and heterogeneous distributed computing framework is presented that simplifies the creation of small-scale local distributed computing networks for the execution of computationally expensive black-box analyses. The framework, called the Dynamic Service-oriented Optimization Computing Framework (DSOCF), is designed to parallelize black-box computation to speed up optimization runs. It is developed in Java and leverages the Apache River project, a dynamic Service-Oriented Architecture (SOA). A roulette-based real-time load balancing algorithm is implemented that supports multiple users and balances against task priorities, in contrast to the rigid pre-set wall-clock limits commonly seen in grid computing. The framework accounts for constraints on resources and incorporates a credit-based system to ensure fair usage of and access to computing resources. Experimental results are presented to demonstrate the effectiveness of the framework.
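A roulette-based, credit-aware assignment policy of the kind described can be sketched in a few lines. The field names, the one-credit-per-task charge, and the weighting are illustrative assumptions, not DSOCF's actual API or policy.

```python
# Roulette-wheel task assignment: workers are picked with probability
# proportional to their remaining credits; higher-priority tasks are
# placed first, so they see the fullest credit pool.
import random

def roulette_pick(weights, rng):
    """Pick an index with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def assign_tasks(tasks, workers, seed=42):
    """tasks: list of (name, priority); workers: dict name -> credits."""
    rng = random.Random(seed)
    assignment = {}
    for name, _priority in sorted(tasks, key=lambda t: -t[1]):
        ids = list(workers)
        weights = [max(workers[i], 1e-9) for i in ids]
        chosen = ids[roulette_pick(weights, rng)]
        workers[chosen] -= 1.0  # charge one credit per task (assumed model)
        assignment[name] = chosen
    return assignment

if __name__ == "__main__":
    tasks = [("cfd_run", 3), ("fea_run", 2), ("doe_point", 1)]
    workers = {"node-a": 5.0, "node-b": 2.0}
    print(assign_tasks(tasks, workers))
```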


Author(s): Jeffrey D. Allen, Jason D. Watson, Christopher A. Mattson, Scott M. Ferguson

The challenge of designing complex engineered systems with long service lives can be daunting. As customer needs change over time, such systems must evolve to meet those needs. This paper presents a method for evaluating the reconfigurability of systems to meet future needs. Specifically, we show that excess capability is a key factor in evaluating the reconfigurability of a system to a particular need, and that overall system reconfigurability is a function of the system’s reconfigurability to all future needs combined. There are many examples of complex engineered systems: aircraft, ships, communication systems, spacecraft, and automated assembly lines. These systems cost millions of dollars to design and millions to replicate, and they often need to stay in service for a long time. However, their service lives are often limited by an inability to adapt to future needs. Using an automated assembly line as an example, we show that system reconfigurability can be modeled as a function of usable excess capability.
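The aggregation idea can be made concrete with a simple scoring sketch. A minimal version, assuming a linear coverage score and notional assembly-line capability figures (the scoring function and numbers are illustrative, not the paper's model):

```python
# Scoring reconfigurability from usable excess capability: a system is
# reconfigurable to a future need to the degree its excess capability covers
# that need; overall reconfigurability aggregates over all anticipated needs.

def reconfigurability_to_need(excess, need):
    """Fraction of one future need covered by excess capability (0..1)."""
    covered = sum(min(excess.get(k, 0.0), req) for k, req in need.items())
    required = sum(need.values())
    return covered / required if required > 0 else 1.0

def overall_reconfigurability(excess, future_needs):
    """Average coverage across all anticipated future needs."""
    scores = [reconfigurability_to_need(excess, n) for n in future_needs]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Excess capability of an automated assembly line (notional units).
    excess = {"station_slots": 2.0, "throughput": 50.0, "floor_area": 30.0}
    future_needs = [
        {"station_slots": 1.0, "throughput": 40.0},   # new product variant
        {"station_slots": 4.0, "floor_area": 60.0},   # major line rework
    ]
    score = overall_reconfigurability(excess, future_needs)
    print(f"overall reconfigurability: {score:.2f}")
```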


Author(s): Teng Long, Lv Wang, Di Wu, Xiaosong Guo, Li Liu

With the aim of reducing the computational time of engineering design optimization using metamodeling technologies, we developed a flexible distributed framework, independent of any third-party parallel computing software, to implement simultaneous sampling during metamodel-based design optimization (MBDO) procedures. In this paper, the idea and implementation of the hardware configuration, software structure, and the main functional modules and interfaces of the framework are presented in detail. The proposed framework is capable of integrating black-box functions and legacy analysis software with common MBDO methods for design space exploration. In addition, a message-based communication infrastructure based on the TCP/IP protocol is developed for distributed data exchange. The client/server architecture and a computing budget allocation algorithm that accounts for software dependency enable samples to be effectively allocated to the distributed computing nodes for simultaneous execution, which decreases the elapsed time and improves MBDO efficiency. Tests on several numerical benchmark problems yield favorable results, demonstrating that the proposed framework saves considerable computational time and is practical for engineering MBDO problems.
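The payoff of simultaneous sampling is that a batch of expensive evaluations finishes in roughly the time of the slowest one rather than their sum. A minimal sketch, using a local process pool as a stand-in for the paper's TCP-based client/server nodes; the objective and timings are illustrative assumptions.

```python
# Simultaneous sampling: dispatch one MBDO iteration's sample batch to
# workers in parallel instead of evaluating points sequentially.
import time
from concurrent.futures import ProcessPoolExecutor

def expensive_black_box(x):
    time.sleep(0.5)  # stand-in for a costly simulation run
    return (x - 3.0) ** 2 + 1.0

def evaluate_batch(samples, n_workers=4):
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(expensive_black_box, samples))

if __name__ == "__main__":
    batch = [0.0, 1.5, 3.0, 4.5]  # one iteration's metamodel sample plan
    t0 = time.time()
    values = evaluate_batch(batch)
    print(f"batch of {len(batch)} evaluated in {time.time() - t0:.2f}s: {values}")
```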


Author(s): Hyunkyoo Cho, K. K. Choi, David Lamb

An accurate input probabilistic model is necessary to obtain a trustworthy result in reliability analysis and reliability-based design optimization (RBDO). However, an accurate input probabilistic model is not always available; very often only insufficient input data exist in practical engineering problems. When only limited input data are provided, uncertainty is induced in the input probabilistic model, and this uncertainty propagates to the reliability output, which is defined as the probability of failure. The confidence level of the reliability output then decreases. To resolve this problem, the reliability output is treated in this paper as having a probability distribution. The probability of the reliability output is obtained as a combination of consecutive conditional probabilities of the input distribution type and parameters using a Bayesian approach. The conditional probabilities are obtained under certain assumptions, and the Monte Carlo simulation (MCS) method is used to calculate the probability of the reliability output. Using the probability of the reliability output as a constraint, a confidence-based RBDO (C-RBDO) problem is formulated. In the new probabilistic constraint of the C-RBDO formulation, two threshold values are used: the target reliability output and the target confidence level. For an effective C-RBDO process, the design sensitivity of the new probabilistic constraint is derived. C-RBDO is performed for a mathematical problem with different numbers of input data, and the results show that the C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
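The "reliability output as a random variable" idea can be sketched with a two-level Monte Carlo loop: sample the uncertain input-distribution parameter from its posterior, compute the probability of failure for each realization, and read a confidence level off the resulting distribution. A minimal sketch, assuming a normal input with known standard deviation, a flat prior on the mean, and a toy limit state; none of these come from the paper.

```python
# Distribution of the reliability output under limited input data:
# outer loop samples the posterior of the input mean, inner loop runs MCS.
import random
import statistics

def probability_of_failure(mu, sigma, n_mcs, rng):
    # Limit state g(x) = 5 - x; failure when g < 0 (assumed toy problem).
    fails = sum(1 for _ in range(n_mcs) if 5.0 - rng.gauss(mu, sigma) < 0.0)
    return fails / n_mcs

def reliability_output_distribution(data, sigma=1.0, n_outer=300,
                                    n_mcs=2000, seed=7):
    rng = random.Random(seed)
    n = len(data)
    xbar = statistics.fmean(data)
    pf_samples = []
    for _ in range(n_outer):
        # Posterior of the input mean given n observations (flat prior,
        # known sigma): N(xbar, sigma / sqrt(n)).
        mu = rng.gauss(xbar, sigma / n ** 0.5)
        pf_samples.append(probability_of_failure(mu, sigma, n_mcs, rng))
    return pf_samples

if __name__ == "__main__":
    data = [3.1, 2.7, 3.4, 2.9, 3.2]  # scarce input observations
    pfs = reliability_output_distribution(data)
    target_pf = 0.05
    confidence = sum(pf <= target_pf for pf in pfs) / len(pfs)
    print(f"P(pf <= {target_pf}) ~= {confidence:.2f}")
```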


Author(s): Zhe Zhang, Chao Jiang, G. Gary Wang, Xu Han

Evidence theory has a strong ability to deal with epistemic uncertainty, and with it the uncertain parameters found in many complex engineering problems with limited information can be conveniently treated. However, the heavy computational cost caused by its discrete property severely limits the practicability of evidence theory, and this has become a main difficulty in structural reliability analysis using evidence theory. This paper aims to develop an efficient method to evaluate the reliability of structures with evidence variables, and hence to improve the applicability of evidence theory for engineering problems. A non-probabilistic reliability index approach is introduced to obtain a design point on the limit-state surface. An assistant area is then constructed through the obtained design point, based on which a small number of focal elements can be picked out for extreme analysis instead of using all the elements. The vertex method is used in the extreme analysis to obtain the minimum and maximum values of the limit-state function over a focal element. A reliability interval composed of the belief measure and the plausibility measure is finally obtained for the structure. Two numerical examples are investigated to demonstrate the effectiveness of the proposed method.
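The belief/plausibility bookkeeping over focal elements, with extremes taken by the vertex method, is compact enough to sketch directly. The limit state and the basic probability assignment (BPA) structure below are illustrative, not the paper's examples, and the design-point screening that limits which focal elements are examined is omitted.

```python
# Belief/plausibility of failure via the vertex method: each focal element
# is a box of intervals with a BPA mass; for a limit state monotonic in each
# variable, its extremes over a box occur at the box corners.
from itertools import product

def g(x1, x2):
    # Limit state: failure when g < 0 (assumed monotonic in each variable).
    return x1 + x2 - 5.0

def belief_plausibility(focal_elements):
    """focal_elements: list of (mass, [(lo1, hi1), (lo2, hi2)])."""
    bel = pl = 0.0
    for mass, box in focal_elements:
        corners = [g(*v) for v in product(*box)]  # vertex method
        g_min, g_max = min(corners), max(corners)
        if g_max < 0.0:   # entire focal element lies in the failure region
            bel += mass
        if g_min < 0.0:   # focal element intersects the failure region
            pl += mass
    return bel, pl

if __name__ == "__main__":
    focal_elements = [
        (0.5, [(1.0, 2.0), (1.0, 2.0)]),  # surely fails: max g = -1
        (0.3, [(2.0, 3.0), (2.0, 3.0)]),  # may fail: g spans [-1, 1]
        (0.2, [(3.0, 4.0), (3.0, 4.0)]),  # surely safe: min g = 1
    ]
    bel, pl = belief_plausibility(focal_elements)
    print(f"failure probability interval: [{bel:.2f}, {pl:.2f}]")
```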


Author(s): Jing Wang, Mian Li

In reliability design, allocating redundancy through various optimization methods is one of the important ways to improve system reliability. Generally, in these redundancy allocation problems, it is assumed that failures of components are independent. However, under this assumption failure rates can be underestimated, since failure interactions can significantly affect the performance of systems. This paper first proposes an analytical model describing failure rates with failure interactions. A Modified Analytic Hierarchy Process (MAHP) is then proposed to solve redundancy allocation problems for systems with failure interactions. This method decomposes the system into several blocks and deals with those down-sized blocks before drilling down to the most appropriate component for redundancy allocation. Being simple and flexible, MAHP provides an intuitive way to design a complex system, and complex explicit objective functions for the entire system are not required in the proposed approach. More importantly, with the help of the proposed analytical failure-interaction model, MAHP can capture the effect of failure interactions. Results from case studies clearly demonstrate the applicability of the analytical model for failure interactions and of MAHP for reliability design.
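One way to see how interactions change the allocation decision is a sketch in which each component's effective failure rate is its intrinsic rate plus contributions induced by other components' failures, and each block grants redundancy to its worst effective rate. The linear interaction model and all numbers are illustrative assumptions, not the paper's analytical model or the full MAHP weighting.

```python
# Failure-interaction-aware redundancy choice: effective rates combine
# intrinsic rates with induced contributions; the most failure-prone
# component inside each block receives the redundancy.

def effective_rates(base_rates, interaction):
    """interaction[i][j]: fraction of j's failure rate induced onto i."""
    n = len(base_rates)
    return [
        base_rates[i] + sum(interaction[i][j] * base_rates[j]
                            for j in range(n) if j != i)
        for i in range(n)
    ]

def allocate_redundancy(blocks, base_rates, interaction):
    """blocks: dict name -> list of component indices; returns per-block pick."""
    eff = effective_rates(base_rates, interaction)
    return {name: max(members, key=lambda i: eff[i])
            for name, members in blocks.items()}

if __name__ == "__main__":
    base = [0.010, 0.004, 0.008]
    inter = [[0.0, 0.5, 0.2],   # component 0 is stressed when 1 or 2 fails
             [0.1, 0.0, 0.0],
             [0.0, 0.3, 0.0]]
    blocks = {"power": [0, 1], "control": [2]}
    print(allocate_redundancy(blocks, base, inter))
```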


Author(s): Wei Song, Hae Chang Gea, Bin Zheng

Conventionally, the design domain in topology optimization is predefined and is not adjusted during the design optimization process, since designers are required to specify the design domain in advance. However, it is difficult for a fixed design domain to satisfy design requirements such as domain resizing or boundary changes. In this paper, the Domain Composition Method (DCM) for structural optimization is presented; it handles design domain adjustment and material distribution optimization in one framework. Instead of treating the design domain as a whole, DCM divides the domain into several subdomains. Additional scaling factors and subdomain transformations are applied to describe changes between different designs. The method then composites the subdomains and solves them as a whole in the updated domain. Based on the domain composition, static analysis with DCM and the corresponding sensitivity analysis are derived. Consequently, the design domain and the topology of the structure are optimized simultaneously. Finally, the effectiveness of the proposed DCM for structural optimization is demonstrated through several numerical examples.
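The composition idea is easiest to see in one dimension: split a bar into subdomains, give each its own length-scaling factor and material area, and assemble them into one global system so that domain size and material distribution vary together. A minimal sketch, assuming a two-segment series bar; this is far simpler than the paper's 2-D/3-D finite-element formulation.

```python
# Domain composition in 1-D: subdomain compliances (with per-subdomain
# length-scaling factors) add in series; the composed system is solved
# as a whole for the tip displacement.

def composed_tip_displacement(E, segments, force):
    """segments: list of (base_length, scale_factor, area); bar fixed at the
    left end with an axial load at the free tip. k_i = E*A_i / (s_i*L_i)."""
    compliance = sum((s * L) / (E * A) for L, s, A in segments)
    return force * compliance

if __name__ == "__main__":
    E = 200e9        # steel, Pa
    force = 1e4      # N
    # Two candidate compositions: nominal vs. rescaled subdomains.
    design_a = [(0.5, 1.0, 1e-4), (0.5, 1.0, 1e-4)]
    design_b = [(0.5, 1.2, 1.5e-4), (0.5, 0.8, 0.5e-4)]
    for name, seg in [("A", design_a), ("B", design_b)]:
        d = composed_tip_displacement(E, seg, force)
        print(f"design {name}: tip displacement = {d:.3e} m")
```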


Author(s): Zequn Wang, Pingfeng Wang

This paper presents an integrated performance measure approach (iPMA) for system reliability assessment considering multiple dependent failure modes. An integrated performance function is developed to envelope all component-level failure events, thereby enabling system reliability approximation by considering only one integrated system limit state. The developed integrated performance function possesses two critical properties. First, it represents the exact joint failure surface defined by the multiple component failure events, so no error is induced by the integrated limit-state function in the system reliability computation. Second, smoothness of the integrated performance on the system failure surface can be guaranteed, so advanced response surface techniques can be conveniently employed for response approximation. With the developed integrated performance function, the maximum-confidence-enhancement-based sequential sampling method is adopted as an efficient component reliability analysis tool for system reliability approximation. To further improve computational efficiency, a new constraint filtering technique is developed to adaptively identify active limit states during the iterative sampling process without inducing any extra computational cost. A case study is used to demonstrate the effectiveness of system reliability assessment using the developed iPMA methodology.
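For a series system, the simplest envelope of this kind is g_sys = min_i g_i, which fails exactly when any component limit state fails, so a single analysis of the envelope reproduces the joint failure probability. A minimal sketch, assuming two toy limit states and plain MCS in place of the paper's surrogate-assisted sequential sampling:

```python
# Integrated (envelope) limit state for a series system: one MCS on
# g_sys = min(g1, g2) captures both dependent failure modes at once,
# since the modes share the same random inputs.
import random

def g1(x1, x2):
    return x1 ** 2 - 2.0 * x2 + 6.0

def g2(x1, x2):
    return x1 + x2 + 3.5

def integrated_g(x1, x2):
    # Failure when g_sys < 0, i.e., when either mode fails.
    return min(g1(x1, x2), g2(x1, x2))

def system_pf(n=200_000, seed=3):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
        fails += integrated_g(x1, x2) < 0.0
    return fails / n

if __name__ == "__main__":
    print(f"system probability of failure ~= {system_pf():.4f}")
```

The min-envelope is exact on the failure surface but non-smooth where modes intersect; the paper's integrated performance function is constructed to avoid exactly that non-smoothness.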

