Volume 2B: 42nd Design Automation Conference
Latest Publications

TOTAL DOCUMENTS: 61 (five years: 0)
H-INDEX: 4 (five years: 0)

Published by the American Society of Mechanical Engineers
ISBN: 9780791850114

Author(s): Chen-Ming Kuo, Chung-Hsin Kuo, Shu-Ping Lin, Mark Christian E. Manuel, Po Ting Lin, ...

Public infrastructures such as bridges are common civil structures for road and railway transport. In Poland, many steel truss bridges were constructed in the 1950s or earlier. Aging management and damage assessment are required to ensure the safe operation of these old bridges. The first step of damage assessment is usually visual inspection. This inspection procedure can be expensive, laborious and dangerous, as it is typically performed on site by trained personnel. As a solution, we have developed and used a custom-designed, modular aerial robot equipped with a CCD camera to collect high-resolution images. The images were merged into a single high-resolution facade map that serves as the basis for subsequent evaluation by bridge inspectors. It was observed that the collected images contained irregularities, which decrease the reliability of the facade map. We conducted experiments to estimate the image perspective correction as a function of the attitude and position of the unmanned aerial vehicle (UAV). A Kriging model was used to parametrically model this nonlinear relationship, and the image reliability is then evaluated based on the variance of the parametric model. The generated information is further used for high-fidelity automated image correction and stitching.
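
A minimal sketch of the idea described above, assuming Python with scikit-learn and hypothetical variable names: a Kriging (Gaussian-process) surrogate maps UAV attitude/position to a perspective-correction parameter, and the predictive variance is turned into a reliability weight for stitching.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: UAV pose (roll, pitch, yaw, x, y, z) vs. the
# measured perspective-correction parameter for each captured image.
rng = np.random.default_rng(0)
poses = rng.uniform(-1.0, 1.0, size=(40, 6))            # normalized pose samples
corrections = np.sin(poses[:, 0]) + 0.3 * poses[:, 3]   # placeholder ground truth

# Kriging surrogate: correction = f(pose), with predictive uncertainty.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(poses, corrections)

new_pose = rng.uniform(-1.0, 1.0, size=(1, 6))
pred, std = gp.predict(new_pose, return_std=True)

# Images whose predicted correction is highly uncertain get a low reliability
# weight before being stitched into the facade map.
reliability = 1.0 / (1.0 + std[0] ** 2)
print(f"correction={pred[0]:.3f}, std={std[0]:.3f}, reliability={reliability:.3f}")
```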


Author(s): Adarsh Venkiteswaran, Sayed Mohammad Hejazi, Deepanjan Biswas, Jami J. Shah, Joseph K. Davidson

Industries are continuously trying to improve time to market through automation and optimization of existing product development processes. Large companies aim to save significant time and resources through seamless communication of data between design, manufacturing, supply chain and quality assurance teams. In this context, Model Based Definition / Model Based Engineering (MBD/MBE) has gained popularity, particularly in its effort to replace traditional engineering drawings and documentation with a unified digital product model in a multi-disciplinary environment. Widely used 3D data exchange models (STEP AP 203, AP 214) contain only shape information, which does not provide much value for reuse in downstream manufacturing applications. However, the latest STEP AP 242 (ISO 10303-242), "Managed model based 3D engineering", aims to support smart manufacturing by capturing semantic Product Manufacturing Information (PMI) within the 3D model and also helps with long-term archival. To support interoperability of Geometric Dimensions & Tolerances (GD&T) through AP 242, the CAx Implementor Forum has published a set of recommended practices for the implementation of a translator. In line with these recommendations, this paper discusses the implementation of an AP 203 to AP 242 translator that attaches the semantic GD&T available in an in-house Constraint Tolerance Graph (CTF) file. The semantic GD&T data can then be automatically consumed by downstream applications such as Computer Aided Process Planning (CAPP), Computer Aided Inspection (CAI), Computer Aided Tolerance Systems (CATS) and Coordinate Measuring Machines (CMM). This paper also briefly discusses the important elements that constitute a comprehensive product data model for model-based interoperability.
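
A schematic sketch only, not a validated AP 242 population: it shows the structural idea of mapping CTF-like semantic tolerance records onto simplified Part 21-style entity lines that could be appended to the geometry translated from AP 203. Entity attributes and record layout here are assumed for illustration.

```python
# Hypothetical CTF records: (feature_entity_ref, tolerance_type, value, datums)
ctf_records = [
    ("#101", "FLATNESS_TOLERANCE", 0.05, []),
    ("#102", "POSITION_TOLERANCE", 0.10, ["A", "B"]),
]

def to_part21_lines(records, start_id=1000):
    """Emit simplified STEP-like entity lines for each semantic GD&T record."""
    lines, eid = [], start_id
    for feature_ref, tol_type, value, datums in records:
        datum_refs = ",".join(f"'{d}'" for d in datums) or "$"
        lines.append(f"#{eid}={tol_type}('',{value},{feature_ref},({datum_refs}));")
        eid += 1
    return lines

for line in to_part21_lines(ctf_records):
    print(line)
```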


Author(s): Chanyoung Park, Raphael T. Haftka, Nam H. Kim

Surrogates have been used as approximate tools to emulate simulation responses based on a handful of response samples. However, for high-fidelity simulations, often only a small number of samples are affordable, which increases the risk of extrapolation with surrogates. Frequently, most of the sampling domain lies outside the interpolation domain (called coverage), usually defined as the convex hull of the samples. For example, when we build a surrogate with 20 samples in a six-dimensional space, the coverage is merely 2% of the sampling domain. Multi-fidelity surrogates (MFS) may mitigate this problem because they use a large number of low-fidelity simulations, so that most of the domain is covered with at least some simulations. This paper explores the extrapolation capability of MFS frameworks through examples including algebraic functions. To examine the effects of different MFS frameworks, we consider six frameworks in terms of their functional forms and the frameworks used to fit those forms to data. We consider three functional forms based on different approaches: 1) a model discrepancy function, 2) model calibration, and 3) both. Bayesian MFS frameworks based on these functional forms are considered, and we also include their counterparts among simple frameworks, which have the same functional forms but can be built with ready-made surrogates. We examined the effect of the high-fidelity sample coverage on extrapolation while the number of high-fidelity samples remained the same. The root mean square errors (RMSE) of the interpolation and extrapolation domains are calculated to assess their contributions to the overall RMSE of the whole MFS. For the examples considered, we found that the presence of a regression scalar can be important for extrapolation, and that the Bayesian framework is useful for finding a good regression scalar, which simplifies the discrepancy function.
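
A minimal sketch, under assumptions, of two ingredients mentioned above: estimating the convex-hull coverage of a small high-fidelity sample set by Monte Carlo, and fitting a simple (non-Bayesian) discrepancy-based MFS of the form y_hf(x) ≈ rho * y_lf(x) + delta(x), where rho is the regression scalar. The model functions are placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
dim, n_hf = 6, 20
x_hf = rng.uniform(size=(n_hf, dim))          # hypothetical high-fidelity sites

# Coverage: fraction of the unit hypercube inside the convex hull of the samples.
hull = Delaunay(x_hf)
probes = rng.uniform(size=(100_000, dim))
coverage = np.mean(hull.find_simplex(probes) >= 0)
print(f"interpolation coverage ~ {coverage:.2%}")   # typically only a few percent

# Discrepancy-based MFS: fit the regression scalar rho by least squares, then
# the residuals would be fed to a discrepancy surrogate delta(x).
def y_lf(x): return np.sin(3 * x[:, 0]) + x[:, 1]           # placeholder low fidelity
def y_hf(x): return 1.2 * y_lf(x) + 0.1 * x[:, 0] ** 2      # placeholder high fidelity

rho = np.linalg.lstsq(y_lf(x_hf)[:, None], y_hf(x_hf), rcond=None)[0][0]
residual = y_hf(x_hf) - rho * y_lf(x_hf)      # training data for the discrepancy model
print(f"fitted regression scalar rho = {rho:.3f}")
```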


Author(s): Jesper Kristensen, You Ling, Isaac Asher, Liping Wang

Adaptive sampling methods have been used to build accurate meta-models across large design spaces, from which engineers can explore data trends, investigate optimal designs, and study the sensitivity of objectives to the modeling design features. For global design optimization applications, adaptive sampling methods need to be extended to sample more efficiently near the optimal regions of the design space (i.e., the Pareto front in multi-objective optimization). Expected Improvement (EI) methods have been shown to solve design optimization problems efficiently using meta-models by incorporating prediction uncertainty. In this paper, a set of state-of-the-art methods (the hypervolume EI method and the centroid EI method) are presented and implemented for selecting sampling points in multi-objective optimization. The classical hypervolume EI method uses hyperrectangles to represent the Pareto front, which leads to undesirable behavior at the tails of the front. This issue is addressed by utilizing concepts from physical programming to shape the Pareto front. The modified hypervolume EI method can be extended to increase local Pareto front accuracy in any area identified by an engineer, and it can be applied to Pareto frontiers of any shape. A novel hypervolume EI method is also developed that does not rely on the assumption of hyperrectangles but instead assumes that the Pareto frontier can be represented by a convex hull. The method exploits fast algorithms for convex hull construction and numerical integration, and results in a Pareto front shape that is desired in many practical applications. Various performance metrics are defined in order to quantitatively compare and discuss all methods applied to a 2D optimization problem from the literature. The modified hypervolume EI methods lead to dramatic resource savings while improving the predictive capabilities near the optimal objective values.
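
A minimal sketch, assuming a 2D minimization problem and a fixed reference point, of the hypervolume-improvement quantity underlying hypervolume EI: the gain in dominated hypervolume when a candidate point is added to the current Pareto set. A full EI step would weight this improvement by the meta-model's predictive distribution at the candidate; the numbers below are illustrative.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a mutually non-dominated 2D set (minimization)."""
    pts = front[np.argsort(front[:, 0])]       # ascending f1, so f2 is descending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # add the new rectangle strip
        prev_f2 = f2
    return hv

front = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])   # current Pareto set
ref = np.array([1.0, 1.0])                               # reference point
candidate = np.array([0.25, 0.6])                        # non-dominated w.r.t. front

improvement = (hypervolume_2d(np.vstack([front, candidate]), ref)
               - hypervolume_2d(front, ref))
print(f"hypervolume improvement of candidate: {improvement:.4f}")
```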


Author(s): Trung Pham, Christopher Hoyle, Yue Zhang, Tam Nguyen

Topology optimization (TO) aims to find a material distribution within a reference domain that optimizes the objective function(s) and satisfies certain constraints. Topology optimization has various potential applications in early phases of structural design, e.g., reducing structural weight or maximizing structural stiffness. However, most research on TO has focused on linear elastic materials, which severely restricts applications of TO to hyperelastic structures made of, e.g., rubber or elastomer. While there is some work in the literature on TO of nonlinear continua, to the best knowledge of the authors there is no work that investigates the different models of hyperelastic materials. Furthermore, topology-optimized designs often possess complex geometries and intermediate densities, making it difficult to manufacture such designs using conventional methods. Additive Manufacturing (AM) is capable of handling such complexities, and continuing advances in AM will allow the use of rubber-like materials, which are modeled by hyperelastic constitutive laws, in producing complex structures designed by TO. The contribution of this paper is an investigation of different models of hyperelastic materials, taking account of both geometric and material nonlinearities, and of their influence on the resulting topologies. Topology optimization of nonlinear continua has been the main topic of only a few papers. This paper considers different isotropic hyperelastic models, including the Ogden, Arruda-Boyce and Yeoh models under finite deformations, which have not yet been implemented in the context of topology optimization of continua. The proposed method starts with a reference domain having known boundary and loading conditions; the material parameters of the different models that fill the domain are also known. Maximizing the stiffness of the structure subject to a volume constraint is used as the design objective. The domain is meshed into a large number of finite elements, and each element is assigned a density between 0 and 1, which becomes a design variable of the optimization problem. These densities are penalized to make intermediate densities (i.e., neither 0 nor 1) less favorable. The optimized material distribution is constructed from the optimized values of the design variables. Because the penalization makes the problem nonlinear, the Method of Moving Asymptotes (MMA) is utilized to update the design variables iteratively. At each iteration the nonlinear finite element problem is solved using the Finite Element Analysis Program (FEAP), which has been modified to accept penalized densities on element stiffness matrices and internal nodal forces, and a filtering scheme is applied to the sensitivities of the objective function to guarantee the existence of a solution. The proposed method is tested on several numerical examples. The first two examples are common benchmark models: a simply supported beam and a beam fixed at both ends, each subjected to a concentrated force at the midpoint of an edge. The effects of linear and nonlinear material behaviors on the resulting designs are compared. The third example is a first attempt at applying TO to the design of an airless tire through a simple model, which demonstrates the capability of the method in solving real-world design problems.
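
A minimal sketch, under assumptions, of the two density-based ingredients described above on a regular 2D element grid: SIMP-style penalization of element densities and a radius-based sensitivity filter. The nonlinear FEAP solve and the MMA update are replaced by placeholders here.

```python
import numpy as np

penal = 3.0                                   # assumed SIMP penalization exponent

def penalized_stiffness_scale(rho, rho_min=1e-3):
    """Scale factor applied to element stiffness / internal forces for density rho."""
    return rho_min + (1.0 - rho_min) * rho ** penal

def filter_sensitivities(dc, rho, rmin=1.5):
    """Classic sensitivity filter over a 2D element grid (ny x nx)."""
    ny, nx = rho.shape
    dcf = np.zeros_like(dc)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            wsum, acc = 0.0, 0.0
            for k in range(max(i - r, 0), min(i + r + 1, ny)):
                for l in range(max(j - r, 0), min(j + r + 1, nx)):
                    w = max(0.0, rmin - np.hypot(i - k, j - l))
                    wsum += w
                    acc += w * rho[k, l] * dc[k, l]
            dcf[i, j] = acc / (max(rho[i, j], 1e-3) * wsum)
    return dcf

rho = np.full((20, 40), 0.5)                  # initial uniform densities
dc = -np.ones_like(rho)                       # placeholder compliance sensitivities
dc_filtered = filter_sensitivities(dc, rho)
print(penalized_stiffness_scale(rho[0, 0]), dc_filtered[0, 0])
```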


Author(s): Zequn Wang, Yan Fu, Ren-Jye Yang, Saeed Barbat, Wei Chen

Validating dynamic engineering models is critically important in practical applications, as it assesses the agreement between simulation results and experimental observations. Though significant progress has been made, existing metrics lack the capability of managing uncertainty in both simulations and experiments, which may stem from computer model instability, imperfections in material fabrication and manufacturing processes, and variations in experimental conditions. In addition, it is challenging to validate a dynamic model aggregately over both the time domain and a model input space with data at multiple validation sites. To overcome these difficulties, this paper presents an area-based metric to systematically handle uncertainty and validate computational models for dynamic systems over an input space by simultaneously integrating the information from multiple validation sites. To manage the complexity associated with a high-dimensional data space, eigen analysis is performed on the time-series data from simulations at each validation site to extract the important features. A truncated Karhunen-Loeve (KL) expansion is then constructed to represent the responses of dynamic systems, resulting in a set of uncorrelated random coefficients with unit variance. With the development of a hierarchical data fusion strategy, the probability integral transform is then employed to pool all the resulting random coefficients from multiple validation sites across the input space into a single aggregated metric. The dynamic model is thus validated by calculating the cumulative area difference of the cumulative distribution functions. The proposed model validation metric for dynamic systems is illustrated with a mathematical example, a supported beam problem with stochastic loads, and real data from a vehicle occupant restraint system.
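
A minimal sketch, under assumptions and with synthetic placeholder data, of the feature-extraction and comparison steps: replicate time histories are reduced with a truncated KL (eigen) expansion, and the pooled coefficients from simulation and test are compared with an area metric between their empirical CDFs (the hierarchical data fusion across multiple validation sites is omitted).

```python
import numpy as np

rng = np.random.default_rng(2)
n_rep, n_t = 30, 200
t = np.linspace(0, 1, n_t)
sim = np.sin(2 * np.pi * t) + 0.10 * rng.standard_normal((n_rep, n_t))  # placeholder
exp = np.sin(2 * np.pi * t) + 0.15 * rng.standard_normal((n_rep, n_t))  # placeholder

# Truncated KL expansion built from the simulation ensemble.
mean = sim.mean(axis=0)
cov = np.cov(sim, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1][:5]                  # keep the 5 dominant modes
phi, lam = eigvec[:, order], eigval[order]

def kl_coefficients(data):
    """Project centered histories onto the KL modes (near unit-variance coefficients)."""
    return (data - mean) @ phi / np.sqrt(lam)

def area_metric(a, b):
    """Area between two empirical CDFs (equal sample sizes assumed)."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

pooled_sim = kl_coefficients(sim).ravel()
pooled_exp = kl_coefficients(exp).ravel()
print(f"area validation metric: {area_metric(pooled_sim, pooled_exp):.4f}")
```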


Author(s): Prakhar Jaiswal, Rahul Rai, Saigopal Nelaturi

Various 3D solid model representation schemes have been developed to capture and process geometric information of physical 3D objects as accurately and precisely as possible while considering storage and computational complexity. These representation schemes are error prone, and their limitations prevent them from capturing all the pertinent information perfectly for a complex 3D object. Many applications in design involve repetitive conversions between several representation schemes to efficiently evaluate and operate on solid models. Mapping one representation to another degrades the quality, correctness, and completeness of the information content. In this paper, we quantify the degradation of proxy representation models by taking inspiration from the hysteresis concept applied in fields such as magnetism, mechanics, control systems, cell biology, and economics. We propose a method to compute the error remanence using quantitative measures of the information content and quality of proxy models. We also discuss areas of future research, such as sequencing of operations in computational workflows, that would benefit from utilizing the error remanence metric.
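
A minimal sketch, under assumptions, of the kind of round-trip degradation measure the abstract alludes to: a point-sampled surface is converted to a coarse voxel proxy and back, and the residual symmetric distance is taken as a simple stand-in for the error remanence of that conversion. The specific shape, resolution, and distance measure are illustrative choices, not the authors' metric.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # unit-sphere surface samples

def to_voxels(points, res=16):
    """Quantize points to voxel centers in [-1, 1]^3 (the proxy representation)."""
    idx = np.clip(((points + 1) / 2 * res).astype(int), 0, res - 1)
    return (np.unique(idx, axis=0) + 0.5) / res * 2 - 1

def symmetric_error(a, b):
    """Mean nearest-neighbor distance in both directions (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

proxy = to_voxels(pts)
print(f"round-trip error remanence ~ {symmetric_error(pts, proxy):.4f}")
```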


Author(s): Sayed Mohammad Hejazi, Deepanjan Biswas, Adarsh Venkiteswaran, Jami J. Shah, Joseph K. Davidson

Tolerances are specified by a designer to allow the manufacturer reasonable freedom for imperfections and inherent variability without compromising performance. A group of tolerance classes, tolerance values and datums specified in design that control the variations in a part to be manufactured is called a tolerance scheme. It takes knowledge and experience to create a good tolerance scheme, and it is a tedious process driven by the type of parts, their features and the controls needed for each of them. In this paper, we investigate the development and implementation of first-order automated Geometric Dimensioning and Tolerancing (GD&T) schema generation. Prior to schema generation, some assembly information, such as the presence of assembly features and pattern features in the given assembly CAD file, is required. Mohan et al. [6] proposed and implemented three preprocessing modules that provide the required assembly information. Rao [8] developed and reported five experiential GD&T rulesets that can support auto-tolerancing. Haghighi et al. [5] proposed a procedure for automating first-order GD&T schema generation and value allocation based on the provided assembly information and the GD&T rulesets. In this paper we present the development and implementation of first-order GD&T schema generation; the output of this toolset is a complete GD&T schema for a given assembly, without tolerance values. Biswas et al. [12] proposed and developed a toolset for tolerance value allocation and analysis. Once the GD&T schema is generated and tolerance values are allocated, the recommended GD&T is translated into the STEP AP242 file format. Venkiteswaran et al. [13] developed a module that reads the nominal geometry in STEP AP203 and the GD&T information in CTF format and translates them to STEP AP242. Combining these three modules with the preprocessing modules completes first-order auto-tolerancing.
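
A minimal sketch, under assumptions, of what a first-order rule-based schema generator can look like: a toy rule table maps feature type and mating condition to tolerance controls (no values). The rules, feature record layout, and control names here are illustrative; the actual rulesets [5, 8] are far richer and operate on the assembly graph produced by the preprocessing modules.

```python
# Toy rule table: (feature_type, mating_condition) -> tolerance controls.
RULES = {
    ("planar_face", "mating"):   ["flatness", "profile_of_surface"],
    ("planar_face", "free"):     ["flatness"],
    ("cylindrical_hole", "fit"): ["position", "perpendicularity"],
    ("pattern_of_holes", "fit"): ["position_composite"],
}

def generate_schema(features):
    """Assign tolerance classes (no values yet) to each feature record."""
    schema = []
    for name, ftype, condition, datum_refs in features:
        controls = RULES.get((ftype, condition), ["profile_of_surface"])
        schema.append({"feature": name, "controls": controls, "datums": datum_refs})
    return schema

features = [
    ("base_plane", "planar_face", "mating", []),
    ("pin_hole_1", "cylindrical_hole", "fit", ["A", "B"]),
]
for entry in generate_schema(features):
    print(entry)
```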


Author(s): Tingting Xia, Mian Li, Jianhua Zhou

The real challenge in obtaining a robust solution to multi-disciplinary design optimization (MDO) problems is the propagation of uncertainty from one discipline to another. Most existing methods either treat the MDO problem deterministically or find a solution that is robust only for a single-disciplinary optimization problem, and the few methods that solve MDO problems under uncertainty are usually computationally expensive. This research proposes a robust sequential MDO (RS-MDO) approach based on a sequential MDO (S-MDO) framework. First, a robust solution is obtained by giving each discipline full autonomy to perform optimization, and a tolerance range is specified for the coupling variables to model uncertainty propagation in the original coupled system. The obtained robust extreme points of the global and coupling variables are then dispatched to the subsystems, which perform optimization sequentially. Additional constraints are added to maintain consistency and guarantee a robust solution. To find a solution under such strict constraints, a genetic algorithm (GA) is used as the solver in each optimization stage. Since all iterations in the sequential optimization stage can be processed in parallel, this robust MDO approach can be more time-saving. Numerical examples are provided to demonstrate the applicability and effectiveness of the proposed approach.
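
A minimal sketch, under assumptions, of the robustness idea on a toy two-discipline problem: the coupling variable is allowed to vary within a tolerance band, the objective is evaluated at the extreme points of that band, and a very small (mu+lambda)-style evolutionary loop stands in for the GA solver. The disciplines, tolerance value, and GA settings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
tol = 0.1                                        # tolerance range on the coupling variable

def worst_case_objective(x):
    """Objective at the extreme points of the coupling tolerance band."""
    y_nominal = 0.5 * x[0] + x[1]                # placeholder discipline-1 output
    worst = -np.inf
    for y in (y_nominal - tol, y_nominal + tol): # robust extreme points
        f = (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 0.2 * y ** 2   # discipline 2
        worst = max(worst, f)
    return worst

# Minimal evolutionary loop: Gaussian mutation plus truncation selection.
pop = rng.uniform(-2, 2, size=(40, 2))
for _ in range(100):
    children = pop + 0.1 * rng.standard_normal(pop.shape)
    both = np.vstack([pop, children])
    fitness = np.array([worst_case_objective(x) for x in both])
    pop = both[np.argsort(fitness)[:40]]         # keep the best 40 designs

best = pop[0]
print(f"robust design ~ {best}, worst-case objective = {worst_case_objective(best):.4f}")
```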


Author(s): Xiangxue Zhao, Zhimin Xi, Hongyi Xu, Ren-Jye Yang

Model bias is normally modeled as a regression model that predicts potential model errors in the design space given sufficient training data sets. Typically, only continuous design variables are considered, since the regression model is mainly designed for response approximation in a continuous space. In reality, many engineering problems have discrete design variables mixed with continuous design variables. Although the regression model of the model bias can still approximate the model errors under various design/operation conditions, the accuracy of the bias model degrades quickly as the number of discrete design variables increases. This paper proposes an effective model bias modeling strategy to better approximate the potential model errors in the design/operation space. The essential idea is to first determine an optimal base model from all combination models derived from the discrete design variables, then allocate the majority of the bias training samples to this base model, and finally build relationships between the base model and the other combination models. Two engineering examples are used to demonstrate that the proposed approach achieves better bias modeling accuracy than the traditional regression modeling approach. Furthermore, it is shown that bias modeling combined with the baseline simulation model can achieve higher model accuracy than the direct meta-modeling approach using the same amount of training data.
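
A minimal sketch, under assumptions, of the allocation idea: one binary discrete variable splits the data into two combination models; most bias samples go to the chosen base combination, whose bias is fit with a Gaussian-process regressor, and the other combination is related to the base through a simple fitted offset. The bias function, sample sizes, and offset relationship are placeholders, not the paper's formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)

def true_bias(x, d):                    # placeholder model error vs. test data
    return 0.3 * np.sin(4 * x) + (0.2 if d == 1 else 0.0)

x_base = rng.uniform(0, 1, 40)          # majority of samples: base combination d=0
x_other = rng.uniform(0, 1, 8)          # few samples for the other combination d=1

gp = GaussianProcessRegressor(normalize_y=True)
gp.fit(x_base[:, None], true_bias(x_base, 0))            # bias model for the base

# Relationship between combinations: a constant offset fitted on the few samples.
offset = np.mean(true_bias(x_other, 1) - gp.predict(x_other[:, None]))

x_new = np.array([[0.37]])
print("predicted bias, d=0:", gp.predict(x_new)[0])
print("predicted bias, d=1:", gp.predict(x_new)[0] + offset)
```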

