Beyond the Known: Detecting Novel Feasible Domains Over an Unbounded Design Space

2017 ◽  
Vol 139 (11) ◽  
Author(s):  
Wei Chen ◽  
Mark Fuge

To solve a design problem, it is sometimes necessary to identify the feasible design space. For design spaces with implicit constraints, sampling methods are usually used. These methods typically bound the design space, i.e., limit the range of the design variables. But bounds that are too small will fail to cover all possible designs, while bounds that are too large will waste sampling budget. This paper addresses the problem of efficiently discovering (possibly disconnected) feasible domains in an unbounded design space. We propose a data-driven adaptive sampling technique, ε-margin sampling, which learns the domain boundary of feasible designs and also expands our knowledge of the design space as the available budget increases. The technique is data-efficient, in that it makes principled probabilistic trade-offs between refining existing domain boundaries and expanding the design space. We demonstrate that this method identifies feasible domains on standard test functions better than both random sampling and active sampling (via uncertainty sampling). However, a fundamental problem when applying adaptive sampling to real-world designs is that designs often have high dimensionality, so the number of samples required grows (in the worst case) exponentially with the dimension. We show how coupling design manifolds with ε-margin sampling allows us to actively expand high-dimensional design spaces without incurring this exponential penalty. We demonstrate this on real-world examples of glassware and bottle design, where our method discovers designs whose appearance and functionality differ from those in the initial design set.
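
As a rough illustration of the adaptive-sampling baseline the abstract compares against (uncertainty sampling, not the authors' ε-margin method), the sketch below assumes a toy 2D feasibility oracle, a bounded candidate pool, and scikit-learn's Gaussian process classifier; all names are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def feasible(x):
    # Toy implicit constraint (placeholder for an expensive feasibility check):
    # designs are feasible inside an annulus of radii 1 and 2.
    return 1.0 <= np.linalg.norm(x) <= 2.0

rng = np.random.default_rng(0)
# Seed with a few labelled samples, making sure both classes are present.
X = np.vstack([[1.5, 0.0], [0.0, 0.0], rng.uniform(-3.0, 3.0, size=(18, 2))])
y = np.array([feasible(x) for x in X])

for _ in range(30):                                      # adaptive sampling loop
    clf = GaussianProcessClassifier().fit(X, y)
    cand = rng.uniform(-3.0, 3.0, size=(500, 2))         # candidate pool (bounded, unlike the paper)
    p_feasible = clf.predict_proba(cand)[:, 1]
    x_new = cand[np.argmin(np.abs(p_feasible - 0.5))]    # most uncertain candidate, near the boundary
    X = np.vstack([X, x_new])
    y = np.append(y, feasible(x_new))
```

Each iteration queries the point whose predicted feasibility is closest to 0.5, which refines the domain boundary but, unlike ε-margin sampling, never pushes beyond the initial bounding box.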

Author(s):  
Hyunseung Bang ◽  
Daniel Selva

One of the major challenges faced by the decision maker in the design of complex engineering systems is information overload. When the size and dimensionality of the data exceed a certain level, a designer may become overwhelmed and no longer be able to perceive and analyze the underlying dynamics of the design problem at hand, which can result in premature or poor design selection. There exist various knowledge discovery and visual analytic tools designed to relieve this information overload, such as BrickViz, Cloud Visualization, ATSV, and LIVE, to name a few. However, most of them do not explicitly support the discovery of key knowledge about the mapping between the design space and the objective space, such as the set of high-level design features that drive most of the trade-offs between objectives. In this paper, we introduce a new interactive method, called iFEED, that supports the designer in the process of high-level knowledge discovery in a large, multiobjective design space. The primary goal of the method is to iteratively mine the design space dataset for driving features, i.e., combinations of design variables that appear to consistently drive designs towards specific target regions set by the user. This is implemented using a data mining algorithm that mines interesting patterns in the form of association rules. The extracted patterns are then used to build a surrogate classification model based on a decision tree that predicts whether a design is likely to be located in the target region of the tradespace or not. Higher-level features generate more compact classification trees while improving classification accuracy. If the mined features are not satisfactory, the user can go back to the first step and extract higher-level features. Such an iterative process helps the user gain insights and build a mental model of how design variables map to objective values. A controlled experiment with human subjects is designed to test the effectiveness of the proposed method. A preliminary result from the pilot experiment is presented.
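
As a rough sketch of the surrogate-classification step only (the association-rule mining is omitted, and this is not the iFEED implementation), the snippet below assumes hypothetical binary design decisions and a user-defined target label.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 6)).astype(bool)    # hypothetical binary design decisions
# Hypothetical driving feature: designs using decisions 0 and 3 together tend to
# land in the user's target region of the tradespace (with some label noise).
in_target = (X[:, 0] & X[:, 3]) ^ (rng.random(200) < 0.1)

tree = DecisionTreeClassifier(max_depth=3).fit(X, in_target)
print(export_text(tree, feature_names=[f"decision_{i}" for i in range(6)]))
```

If a compound feature such as "decision_0 AND decision_3" were added as an explicit column, a depth-1 tree could reach comparable accuracy, which is the sense in which higher-level features yield more compact classification trees.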


2006 ◽  
Author(s):  
Ryan Jason Hamilton

Fibre Reinforced Plastics (FRPs) have been used in many practical structural applications due to their excellent strength and weight characteristics as well as the ability to tailor their properties to the requirements of a given application. This tailorability, however, means that designing with FRPs can be extremely challenging, particularly when the number of design variables in the design space is large. For example, optimally determining the ply orientations and material properties is typically difficult without a considered approach, yet optimization of composite structures with respect to the ply angles is necessary to realize the full potential of fibre-reinforced materials. Evaluating the fitness of each candidate in the design space and selecting the most efficient one can be very time consuming and costly. Structures composed of composite materials often contain components that may be modelled as rectangular plates or cylindrical shells, for example; modelling components in this way simplifies the analysis of structural elements and can save time and therefore cost. Variations in manufacturing processes and the user environment may affect the quality and performance of a product, so it is usually beneficial, and sometimes crucial, to account for such tolerances in the design process. The work conducted within this project focused on methodologies for optimally designing fibre-reinforced laminated composite structures with the effects of manufacturing tolerances included. It is assumed that every tolerance value within the tolerance band is equally likely, so the techniques are aimed at designing for the worst-case scenario. This thesis accordingly presents four new procedures for the optimization of composite structures with the effects of manufacturing uncertainties included.
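
As a rough illustration of the worst-case view of tolerances described above (not the thesis's actual procedures), the sketch below assumes a placeholder performance function and evaluates every corner of a symmetric ply-angle tolerance band.

```python
import itertools
import numpy as np

def performance(ply_angles_deg):
    # Placeholder objective standing in for a laminate analysis (e.g., a buckling load factor).
    return float(np.sum(np.cos(np.radians(ply_angles_deg)) ** 2))

def worst_case_performance(nominal_angles_deg, tol_deg=5.0):
    # With every value in the tolerance band equally likely, a conservative design
    # metric is the worst performance over the corners of the band.
    nominal = np.asarray(nominal_angles_deg, dtype=float)
    worst = np.inf
    for offsets in itertools.product((-tol_deg, +tol_deg), repeat=len(nominal)):
        worst = min(worst, performance(nominal + np.asarray(offsets)))
    return worst

print(worst_case_performance([0.0, 45.0, -45.0, 90.0]))
```

Corner enumeration is only a common simplification; for responses that are not monotonic in each tolerance, the true worst case can lie in the interior of the band.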


2009 ◽  
Vol 43 (2) ◽  
pp. 48-60 ◽  
Author(s):  
M. Martz ◽  
W. L. Neu

Abstract The design of complex systems involves a number of choices, the implications of which are interrelated. If these choices are made sequentially, each choice may limit the options available in subsequent choices. Early choices may unknowingly limit the effectiveness of a final design in this way. Only a formal process that considers all possible choices (and combinations of choices) can ensure that the best option has been selected. Complex design problems may easily present a prohibitive number of choices to evaluate. Modern optimization algorithms attempt to navigate a multidimensional design space in search of an optimal combination of design variables. A design optimization process for an autonomous underwater vehicle is developed using a multiple objective genetic optimization algorithm that searches the design space, evaluating designs based on three measures of performance: cost, effectiveness, and risk. A synthesis model evaluates the characteristics of a design having any chosen combination of design variable values. The effectiveness determined by the synthesis model is based on nine attributes identified in the U.S. Navy’s Unmanned Undersea Vehicle Master Plan and four performance-based attributes calculated by the synthesis model. The analytical hierarchy process is used to synthesize these attributes into a single measure of effectiveness. The genetic algorithm generates a set of Pareto optimal, feasible designs from which decision makers can choose designs for further analysis.
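
As a rough illustration of the final selection step (the Pareto filter, not the genetic algorithm or the AUV synthesis model itself), the sketch below assumes hypothetical cost, effectiveness, and risk scores for a batch of candidate designs.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical evaluations of 200 candidate designs. Effectiveness is negated so
# that all three columns (cost, -effectiveness, risk) are minimized.
objectives = np.column_stack([
    rng.random(200),        # cost
    -rng.random(200),       # negated effectiveness
    rng.random(200),        # risk
])

def pareto_mask(F):
    # A design is Pareto optimal if no other design is at least as good in every
    # objective and strictly better in at least one.
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

print("Pareto-optimal candidates:", np.flatnonzero(pareto_mask(objectives)))
```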


2015 ◽  
Vol 2015 ◽  
pp. 1-20
Author(s):  
Gongyu Wang ◽  
Greg Stitt ◽  
Herman Lam ◽  
Alan George

Field-programmable gate arrays (FPGAs) provide a promising technology that can improve the performance of many high-performance computing and embedded applications. However, compared with software design tools, FPGA tools are relatively immature, which significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE) process often must be iterated to meet the application requirements. Previous work has enabled model-based design-space exploration to reduce DTE iterations, but it is limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD) methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area), and latency. We evaluate CMD’s prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared with 13.9% average error and 48.2% worst-case error for the best existing high-level prediction methods. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL), significantly reducing the total design time.
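
As a small worked example of the error metrics quoted above, the snippet below computes average and worst-case relative clock-frequency prediction error for a handful of hypothetical designs; the numbers are illustrative, not the paper's data.

```python
import numpy as np

# Hypothetical predicted vs. measured post-place-and-route clock frequencies (MHz).
predicted = np.array([210.0, 185.0, 150.0, 320.0])
measured = np.array([205.0, 190.0, 160.0, 310.0])

relative_error = np.abs(predicted - measured) / measured * 100.0
print(f"average error: {relative_error.mean():.1f}%, worst case: {relative_error.max():.1f}%")
```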


2019 ◽  
Vol 36 (3) ◽  
pp. 245-256
Author(s):  
Yoonki Kim ◽  
Sanga Lee ◽  
Kwanjung Yee ◽  
Young-Seok Kang

Abstract The purpose of this study is to optimize the first stage of a transonic high-pressure turbine (HPT) to enhance its aerodynamic performance. Isentropic total-to-total efficiency is designated as the objective function. Since the isentropic efficiency can be improved by modifying the vane and rotor blade geometry, lean angle and sweep angle, which effectively alter the blade geometry, are chosen as design variables. The sensitivity of each design variable is investigated by applying lean and sweep angles to the base nozzle and rotor, respectively, and the design space is determined from the results of this parametric study. For the design of experiments (DoE), optimal Latin hypercube sampling is adopted, selecting 25 evenly distributed samples in the design space. A Kriging surrogate model is then constructed from the CFD results and refined using expected improvement (EI). With the converged surrogate model, the optimum is sought using a genetic algorithm. As a result, the efficiency of the optimum turbine first stage is increased by 1.07 percentage points compared with that of the base first stage. The blade loading, pressure distribution, static entropy, shock structure, and secondary flow are also discussed in detail.
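
As a rough sketch of the surrogate-based loop described above (Kriging refined with expected improvement), the snippet below assumes a placeholder objective in place of the CFD evaluation and a random candidate pool in place of the genetic algorithm; variable ranges and names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def efficiency(x):
    # Placeholder objective standing in for a CFD evaluation of stage efficiency,
    # with (lean angle, sweep angle) scaled to [0, 1].
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.4) ** 2)

rng = np.random.default_rng(3)
X = rng.random((25, 2))                    # 25 initial samples (stand-in for the Latin hypercube plan)
y = np.array([efficiency(x) for x in X])

for _ in range(15):                        # surrogate refinement loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.random((2000, 2))           # random candidates (the paper searches with a genetic algorithm)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement for maximization
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, efficiency(x_new))

print("best sampled efficiency:", y.max(), "at", X[np.argmax(y)])
```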


2018 ◽  
Vol 11 (3) ◽  
pp. 12 ◽  
Author(s):  
Kanokrat Jirasatjanukul ◽  
Namon Jeerungsuwan

The objectives of the research were to (1) design an instructional model based on Connectivism and Constructivism to create innovation in real world experience, and (2) evaluate the designed instructional model. The research involved 2 stages: (1) the instructional model design and (2) the instructional model rating. The sample consisted of 7 experts selected using the purposive sampling technique. The research instruments were the instructional model and the instructional model evaluation form. The statistics used in the research were means and standard deviations. The research results were that (1) the Instructional Model based on Connectivism and Constructivism to Create Innovation in Real World Experience consisted of 3 components: Connectivism, Constructivism, and Innovation in Real World Experience; and (2) the instructional model rating was at a high level (mean = 4.37, S.D. = 0.41). The results revealed that the model can be used in learning, in that it promotes the creation of innovation from real world experience.


Author(s):  
Femi A. Aderohunmu ◽  
Giacomo Paci ◽  
Davide Brunelli ◽  
Jeremiah Deng ◽  
Luca Benini ◽  
...  

1998 ◽  
Vol 5 (10) ◽  
Author(s):  
Jakob Pagter ◽  
Theis Rauhe

We study the fundamental problem of sorting in a sequential model of computation and in particular consider the time-space trade-off (product of time and space) for this problem. Beame has shown a lower bound of Ω(n^2) for this product, leaving a gap of a logarithmic factor up to the previously best known upper bound of O(n^2 log n) due to Frederickson. Since then, no progress has been made towards tightening this gap. The main contribution of this paper is a comparison-based sorting algorithm which closes this gap by meeting the lower bound of Beame. The time-space product O(n^2) upper bound holds for the full range of space bounds between log n and n/log n. Hence, in this range our algorithm is optimal for comparison-based models as well as for the very powerful general models considered by Beame.
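
For intuition about what a time-space trade-off for sorting means, the sketch below is a simple baseline, not the paper's optimal algorithm: it sorts a read-only input using O(S) working space by repeatedly selecting the next S smallest values, so its time-space product is O(n^2 log S), a logarithmic factor above the Ω(n^2) bound. Distinct keys are assumed.

```python
import heapq

def small_space_sort(xs, S):
    """Emit xs in sorted order using O(S) working space beyond the read-only input
    (assumes distinct keys). Each pass selects the next S smallest remaining values
    with a bounded heap, so there are about n/S passes of O(n log S) work each."""
    out = []                  # write-only output
    last = None               # largest value emitted so far
    n = len(xs)
    while len(out) < n:
        heap = []                                  # stores negated values: acts as a max-heap
        for x in xs:
            if last is not None and x <= last:
                continue                           # already emitted in an earlier pass
            if len(heap) < S:
                heapq.heappush(heap, -x)
            elif -x > heap[0]:                     # x is smaller than the current largest candidate
                heapq.heapreplace(heap, -x)
        batch = sorted(-v for v in heap)
        out.extend(batch)
        last = batch[-1]
    return out

print(small_space_sort([5, 3, 8, 1, 9, 2, 7, 4, 6], S=3))
```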


Author(s):  
D. A. Saravanos ◽  
C. C. Chamis

Abstract A method is developed for the optimal design of composite links based on dynamic performance criteria directly related to structural modal damping and dynamic stiffness. An integrated mechanics theory correlates structural composite damping to the parameters of basic composite material systems, laminate parameters, link shape, and modal deformations. The inclusion of modal properties allows the selective minimization of vibrations associated with specific modes. Ply angles and fiber volumes are tailored to obtain optimal combinations of damping and stiffness. Applications to simple composite links indicate wide margins for trade-offs and illustrate the importance of various design variables to the optimal design.
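
As a rough, heavily simplified illustration of tailoring ply angle and fiber volume for a damping-stiffness trade-off (the response functions below are toy stand-ins, not the paper's integrated mechanics theory), one could pose a weighted-objective optimization such as the following.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in responses for a single-ply link as functions of ply angle (degrees)
# and fiber volume fraction; the paper's mechanics theory would supply the real correlations.
def damping(theta, vf):
    return 0.02 + 0.03 * np.sin(np.radians(theta)) ** 2 * (1.0 - vf)

def stiffness(theta, vf):
    return 1.0 + 9.0 * np.cos(np.radians(theta)) ** 2 * vf

def neg_weighted_objective(x, w=0.5):
    theta, vf = x
    # Maximize a weighted combination of (roughly normalized) damping and stiffness.
    return -(w * damping(theta, vf) / 0.05 + (1.0 - w) * stiffness(theta, vf) / 10.0)

res = minimize(neg_weighted_objective, x0=[30.0, 0.5],
               bounds=[(0.0, 90.0), (0.3, 0.7)])
print("optimal ply angle and fiber volume fraction:", res.x)
```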

