Identification of High Performance Regions of High-Dimensional Design Spaces With Materials Design Applications

Author(s):  
Clinton Morris ◽  
Carolyn C. Seepersad

Design exploration methods seek to identify sets of candidate designs or regions of the design space that yield desirable performance. Commonly, the dimensionality of the design space exceeds the limited dimensions supported by standard graphical techniques, making it difficult for human designers to visualize or understand the underlying structure of the design space. With standard visualization tools, it is sometimes challenging to visualize a multi-dimensional Pareto frontier, but it is even more difficult to visualize the collections of design (input) variable values that yield those Pareto solutions. It is difficult for a designer to determine not only how many distinct regions of the design (input) space offer desirable performance but also how those regions are structured. In this paper, a form of spectral clustering known as ε-neighborhood clustering is proposed for identifying satisfactory regions in the design spaces of multilevel problems. By exploiting properties of graph theory, the number of satisfactory design regions can be determined accurately and efficiently, and the design space can be partitioned accordingly. The method is demonstrated to be effective at identifying clusters in a 10-dimensional space. It is also applied to a multilevel materials design problem to demonstrate its efficacy on a realistic design application. Future work will visualize each identified design region individually to produce an intuitive mapping of the design space.
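The graph-theoretic idea in this abstract can be sketched as follows: build an ε-neighborhood graph over sampled designs and count the near-zero eigenvalues of the graph Laplacian, which equals the number of connected components, i.e., distinct satisfactory regions. Everything below (ε, the toy data, the dimensions) is an illustrative assumption, not a value from the paper.

```python
# Sketch: count satisfactory design regions with an epsilon-neighborhood
# graph and the spectral (Laplacian null-space) property. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs of 10-dimensional points stand in for two
# satisfactory design regions.
designs = np.vstack([rng.normal(0.0, 0.05, (30, 10)),
                     rng.normal(1.0, 0.05, (30, 10))])

# Epsilon-neighborhood adjacency matrix: connect designs whose Euclidean
# distance in the design space is below epsilon (assumed value).
eps = 0.5
dists = np.linalg.norm(designs[:, None, :] - designs[None, :, :], axis=2)
A = (dists < eps).astype(float)
np.fill_diagonal(A, 0.0)

# Unnormalized graph Laplacian L = D - A. The number of (numerically)
# zero eigenvalues equals the number of connected components.
L = np.diag(A.sum(axis=1)) - A
eigvals = np.linalg.eigvalsh(L)
n_regions = int(np.sum(eigvals < 1e-8))
print(n_regions)  # 2 for this toy data
```

In practice the eigengap (the jump from near-zero to clearly nonzero eigenvalues) gives a more robust count when the regions are only weakly separated.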

Author(s):  
Clinton Morris ◽  
Carolyn C. Seepersad

Design space exploration can reveal the underlying structure of design problems of interest. In a set-based approach, for example, exploration can identify sets of designs or regions of the design space that meet specific performance requirements. For some problems, promising designs may cluster in multiple regions of the design space, and the boundaries of those clusters may be irregularly shaped and difficult to predict. Visualizing the promising regions can clarify the design space structure, but design spaces are typically high-dimensional, making it difficult to visualize the space in three dimensions. Techniques have been introduced to map high-dimensional design spaces to low-dimensional, visualizable spaces. Before the promising regions can be visualized, however, the first task is to identify how many clusters of promising designs exist in the high-dimensional design space. Unsupervised machine learning methods, such as spectral clustering, have been utilized for this task. Spectral clustering is generally accurate but becomes computationally intractable with large sets of candidate designs. Therefore, in this paper a technique for accurately identifying clusters of promising designs is introduced that remains viable with large sets of designs. The technique is based on spectral clustering but reduces its computational impact by leveraging the Nyström Method in the formulation of self-tuning spectral clustering. After validating the method on a simplified example, it is applied to identify clusters of high performance designs for a high-dimensional negative stiffness metamaterials design problem.
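A minimal sketch of the Nyström idea referenced above: approximate the eigenvectors of the full n × n affinity matrix from a small landmark subset, so the spectral step stays viable for large sets of candidate designs. The kernel, kernel width, landmark count, and data are assumed for illustration, not taken from the paper.

```python
# Sketch of the Nystrom approximation for spectral clustering affinities.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))           # candidate designs (illustrative)
m = 100                                   # number of landmark points
idx = rng.choice(len(X), m, replace=False)

sigma = 1.0                               # assumed RBF kernel width

def rbf(A, B):
    """Gaussian (RBF) affinities between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

W = rbf(X[idx], X[idx])                   # m x m landmark block
C = rbf(X, X[idx])                        # n x m cross affinities

# Nystrom extension: eigendecompose only the small landmark block, then
# extend its eigenvectors to all n points via the cross affinities.
vals, vecs = np.linalg.eigh(W)
keep = vals > 1e-10
U_approx = C @ vecs[:, keep] / vals[keep]  # approximate eigenvectors, n x k
print(U_approx.shape)
```

Rows of `U_approx` can then feed a standard k-means step, avoiding the O(n^3) eigendecomposition of the full affinity matrix.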


Author(s):  
Clinton B. Morris ◽  
Michael R. Haberman ◽  
Carolyn C. Seepersad

Abstract Design space exploration can reveal the underlying structure of design problems. In a set-based approach, for example, exploration can map sets of designs or regions of the design space that meet specific performance requirements. For some problems, promising designs may cluster in multiple regions of the input design space, and the boundaries of those clusters may be irregularly shaped and difficult to predict. Visualizing the promising regions can clarify the design space structure, but design spaces are typically high-dimensional, making it difficult to visualize the space in three dimensions. To convey the structure of such high-dimensional design regions, a two-stage approach is proposed to (1) identify and (2) visualize each distinct cluster or region of interest in the input design space. This paper focuses on the visualization stage of the approach. Rather than select a single technique to map high-dimensional design spaces to low-dimensional, visualizable spaces, a selection procedure is investigated. Metrics are available for comparing different visualizations, but the current metrics either overestimate the quality or favor selection of certain visualizations. Therefore, this work introduces and validates a more objective metric, termed preservation, to compare the quality of alternative visualization strategies. Furthermore, a visualization technique previously unexplored in the design automation community, t-Distributed Stochastic Neighbor Embedding (t-SNE), is introduced and compared to other visualization strategies. Finally, the new metric and visualization technique are integrated into a two-stage visualization strategy to identify and visualize clusters of high-performance designs for a high-dimensional negative stiffness metamaterials design problem.
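As a rough illustration of the visualization stage, the sketch below embeds clustered high-dimensional points with t-SNE and scores the embedding by k-nearest-neighbor overlap between the original and embedded spaces. This overlap score is a simplified stand-in for the paper's preservation metric, whose exact definition is not reproduced here; all data and parameters are assumed.

```python
# Sketch: t-SNE embedding plus a simple neighborhood-overlap quality score.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
# Three tight 10-dimensional clusters standing in for design regions.
X = np.vstack([rng.normal(c, 0.1, (50, 10)) for c in (0.0, 1.0, 2.0)])

# Map to 2-D for visualization (perplexity is an assumed setting).
Y = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

def knn_overlap(A, B, k=10):
    """Mean fraction of each point's k nearest neighbors shared by A and B."""
    na = NearestNeighbors(n_neighbors=k + 1).fit(A)
    nb = NearestNeighbors(n_neighbors=k + 1).fit(B)
    ia = na.kneighbors(A, return_distance=False)[:, 1:]  # drop self
    ib = nb.kneighbors(B, return_distance=False)[:, 1:]
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(ia, ib)]))

score = knn_overlap(X, Y)
print(round(score, 2))  # closer to 1.0 means neighborhoods are preserved
```

Comparing this score across alternative embeddings (t-SNE, PCA, SOM, etc.) is one way to make the selection procedure described above concrete.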


2006 ◽  
Vol 34 (3) ◽  
pp. 170-194 ◽  
Author(s):  
M. Koishi ◽  
Z. Shida

Abstract Since tires carry out many functions, many of which involve tradeoffs, it is important to find the combination of design variables that yields well-balanced performance in the conceptual design stage. Finding a good tire design means solving a multi-objective design problem, i.e., an inverse problem. However, due to the lack of suitable solution techniques, such problems are usually converted into single-objective optimization problems before being solved, which makes it difficult to find the Pareto solutions of multi-objective tire design problems. Recently, multi-objective evolutionary algorithms have become popular in many fields as a way to find Pareto solutions. In this paper, we propose a design procedure for solving multi-objective design problems as a comprehensive solver of inverse problems. First, a multi-objective genetic algorithm (MOGA) is employed to find the Pareto solutions of tire performance, which lie in the multi-dimensional space of objective functions. The response surface method is also used to evaluate objective functions in the optimization process, which reduces CPU time dramatically. In addition, a self-organizing map (SOM), as proposed by Kohonen, is used to map the Pareto solutions from the high-dimensional objective space onto a two-dimensional space. Using the SOM, design engineers can easily see the Pareto solutions of tire performance and find suitable design plans. The SOM can be considered an inverse function that defines the relation between the Pareto solutions and the design variables. To demonstrate the procedure, a tire tread design is conducted. The objective of the design is to improve uneven wear and wear life for both the front and rear tires of a passenger car. Wear performance is evaluated by finite element analysis (FEA), and the response surface is obtained from the design of experiments and FEA. Using both the MOGA and the SOM, we obtain a map of Pareto solutions. Suitable design plans that satisfy well-balanced performance can be found on this map, called a "multi-performance map," which helps tire design engineers make decisions in the conceptual design stage.
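The SOM step can be sketched in a few lines: train a small 2-D grid of nodes on points from the objective space, then assign each Pareto solution to its best-matching unit to obtain its coordinate on the "multi-performance map." Grid size, learning schedule, and data below are illustrative assumptions, not values from the paper.

```python
# Minimal self-organizing map (SOM) sketch mapping high-dimensional
# objective-space points onto a 2-D grid. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(3)
pareto = rng.random((200, 5))             # stand-in Pareto points, 5 objectives

gx, gy = 8, 8                             # 8 x 8 map of nodes
weights = rng.random((gx, gy, 5))         # one weight vector per node
grid = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), -1)

steps = 2000
for t in range(steps):
    lr = 0.5 * (1 - t / steps)            # decaying learning rate
    radius = 3.0 * (1 - t / steps) + 0.5  # decaying neighborhood radius
    x = pareto[rng.integers(len(pareto))]
    # Best-matching unit: the node whose weight vector is closest to x.
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (gx, gy))
    # Pull the BMU and its grid neighbors toward x, weighted by a Gaussian
    # over grid distance from the BMU.
    h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * radius ** 2))
    weights += lr * h[..., None] * (x - weights)

# Each Pareto point's 2-D map coordinate is its BMU on the trained grid.
coords = [np.unravel_index(np.argmin(((weights - p) ** 2).sum(-1)), (gx, gy))
          for p in pareto]
print(len(coords))
```

Coloring each grid node by the objective values of the designs it attracts produces the kind of multi-performance map described above.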


Author(s):  
Umar Ibrahim Minhas ◽  
Roger Woods ◽  
Georgios Karakonstantis

Abstract Whilst FPGAs have been used in cloud ecosystems, it is still extremely challenging to achieve high compute density when mapping heterogeneous multi-tasks onto shared resources at runtime. This work addresses the challenge by treating the FPGA resource as a service and employing multi-task processing at the high level, design space exploration, and static off-line partitioning to allow more efficient mapping of heterogeneous tasks onto the FPGA. In addition, a new, comprehensive runtime functional simulator is used to evaluate the effect of various spatial and temporal constraints on both the existing and new approaches when varying system design parameters. A comprehensive suite of real high-performance computing tasks was implemented on a Nallatech 385 FPGA card, and the results show that our approach can provide on average 2.9× and 2.3× higher system throughput for compute-intensive and mixed-intensity tasks, respectively, but 0.2× lower throughput for memory-intensive tasks due to external memory access latency and bandwidth limitations. The work has been extended by introducing a novel scheduling scheme to enhance temporal utilization of resources when using the proposed approach. Additional results for large queues of mixed-intensity (compute and memory) tasks show that the proposed partitioning and scheduling approach can provide more than 3× system speedup over previous schemes.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Liang Sun ◽  
Yu-Xing Zhou ◽  
Xu-Dong Wang ◽  
Yu-Han Chen ◽  
Volker L. Deringer ◽  
...  

Abstract The Ge2Sb2Te5 alloy has served as the core material in phase-change memories, offering high switching speed and persistent storage capability at room temperature. Despite its wide use, this composition is not suitable for embedded memories, for example in automotive applications, which require very high working temperatures above 300 °C. Ge–Sb–Te alloys with higher Ge content, most prominently Ge2Sb1Te2 (‘212’), have been studied as suitable alternatives, but their atomic structures and structure–property relationships have remained widely unexplored. Here, we report comprehensive first-principles simulations that give insight into these emerging materials, located on the compositional tie-line between Ge2Sb1Te2 and elemental Ge, allowing for a direct comparison with the established Ge2Sb2Te5 material. Electronic-structure computations and Smooth Overlap of Atomic Positions (SOAP) similarity analyses explain the role of excess Ge content in the amorphous phases. Together with energetic analyses, a compositional threshold is identified for the viability of a homogeneous amorphous phase (‘zero bit’), which is required for memory applications. Based on the knowledge acquired at the atomic scale, we provide a materials design strategy for high-performance embedded phase-change memories with balanced speed and stability, as well as potentially good cycling capability.


2015 ◽  
Vol 2015 ◽  
pp. 1-20
Author(s):  
Gongyu Wang ◽  
Greg Stitt ◽  
Herman Lam ◽  
Alan George

Field-programmable gate arrays (FPGAs) provide a promising technology that can improve the performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE) process often must be iterated to meet the application requirements. Previous works have enabled model-based design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD) methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area), and latency. We evaluate CMD's prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared with 13.9% average error and 48.2% worst-case error for the best existing high-level prediction methods. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL), significantly reducing the total design time.


Author(s):  
Sam E. Calisch ◽  
Neil A. Gershenfeld

Honeycomb sandwich panels are widely used for high-performance parts subject to bending loads, but their manufacturing costs remain high. In particular, for parts with non-flat, non-uniform geometry, honeycombs must be machined or thermoformed with great care and expense. The ability to produce shaped honeycombs would allow sandwich panels to replace monolithic parts in a number of high-performance, space-constrained applications, while also opening new areas of research in structural optimization, distributed sensing and actuation, and on-site production of infrastructure. Previous work has shown methods of directly producing shaped honeycombs by cutting and folding flat sheets of material. This research extends those methods by demonstrating progress toward a continuous process for the cutting and folding steps. An algorithm for producing a manufacturable cut-and-fold pattern from a three-dimensional volume is designed, and a machine for automatically performing the required cutting and parallel folding is proposed and prototyped. The accuracy of the creases placed by this machine is characterized, and the impact of creasing order is demonstrated. Finally, a prototype part is produced, and future work toward full process automation is sketched.

