When Decomposition Increases Complexity: How Decomposing Introduces New Information Into the Problem Space

2021 ◽  
Author(s):  
Suparna Mukherjee ◽  
Anthony Hennig ◽  
Taylan G. Topcu ◽  
Zoe Szajnfarber

Abstract Decomposition is a dominant design strategy because it enables complex problems to be broken up into more manageable modules. However, although it is well known that complex systems are rarely fully decomposable, much of the decomposition literature is framed around reordering or clustering processes that optimize an objective function to yield a module assignment. As illustrated in this study, these approaches overlook the fact that decoupling partially decomposable modules can require significant additional design work, with associated consequences that introduce considerable information to the design space. This paper draws on detailed empirical evidence from a NASA space robotics field experiment to elaborate mechanisms through which the processes of decomposing can add information and associated descriptive complexity to the problem space. Contrary to widely held expectations, we show that complexity can increase substantially when natural system modules are fully decoupled from one another to support parallel design. We explain this phenomenon through two mechanisms: interface creation and functional allocation. These findings have implications for the ongoing discussion of optimal module identification as part of the decomposition process. We contend that the sometimes-significant costs of later stages of design decomposition are not adequately considered in existing methods. With this work we lay a foundation for valuing these performance, schedule, and complexity costs earlier in the decomposition process.

2021 ◽  
pp. 1-53
Author(s):  
Taylan G. Topcu ◽  
Suparna Mukherjee ◽  
Anthony I Hennig ◽  
Zoe Szajnfarber

Abstract Decomposition is a dominant design strategy because it enables complex problems to be broken up into loosely-coupled modules that are easier to manage and can be designed in parallel. However, contrary to widely held expectations, we show that complexity can increase substantially when natural system modules are fully decoupled from one another to support parallel design. Drawing on detailed empirical evidence from a NASA space robotics field experiment, we explain how new information is introduced into the design space through three complexity addition mechanisms of the decomposition process: interface creation, functional allocation, and second order effects. These findings have important implications for how modules are selected early in the design process and how future decomposition approaches should be developed. Although it is well known that complex systems are rarely fully decomposable and that the decoupling process necessitates additional design work, the literature is predominantly focused on reordering, clustering, and/or grouping based approaches to define module boundaries within a fixed system representation. Consequently, these approaches are unable to account for the (often significant) new information that is added to the design space through the decomposition process. We contend that the observed mechanisms of complexity growth need to be better accounted for during the module selection process in order to avoid unexpected downstream costs. With this work we lay a foundation for valuing these complexity-induced impacts on performance, schedule, and cost earlier in the decomposition process.


Author(s):  
Matthew Henchey ◽  
Scott Rosen

In the Department of Defense, unmanned aerial vehicle (UAV) mission planning typically takes the form of a set of pre-defined waypoints and tasks, and results in optimized plans being implemented prior to the beginning of the mission. These include the order of waypoints, assignment of tasks, and assignment of trajectories. One emerging area that has been recently identified in the literature involves frameworks, simulations, and supporting algorithms for dynamic mission planning, which entails re-planning mid-mission based on new information. These frameworks require algorithmic support for flight path and flight time approximations, which can be computationally complex in nature. This article seeks to identify the leading academic algorithms that could support dynamic mission planning and to offer recommendations for future research on how they could be adopted and used in current applications. A survey of emerging UAV mission planning algorithms and academic UAV flight path algorithms is presented, beginning with a taxonomy of the problem space. Next, areas of future research related to current applications are presented.


2012 ◽  
Vol 134 (7) ◽  
Author(s):  
David W. Shahan ◽  
Carolyn Conner Seepersad

Complex engineering design problems are often decomposed into a set of interdependent, distributed subproblems that are solved by domain-specific experts. These experts must resolve couplings between the subproblems and negotiate satisfactory, system-wide solutions. Set-based approaches help resolve these couplings by systematically mapping satisfactory regions of the design space for each subproblem and then intersecting those maps to identify mutually satisfactory system-wide solutions. In this paper, Bayesian network classifiers are introduced for mapping sets of promising designs, thereby classifying the design space into satisfactory and unsatisfactory regions. The approach is applied to two example problems—a spring design problem and a simplified, multilevel design problem for an unmanned aerial vehicle (UAV). The method is demonstrated to offer several advantages over competing techniques, including the ability to represent arbitrarily shaped and potentially disconnected regions of the design space and the ability to be updated straightforwardly as new information about the satisfactory design space is discovered. Although not demonstrated in this paper, it is also possible to interface the classifier with automated search and optimization techniques and to combine expert knowledge with the results of quantitative simulations when constructing the classifiers.
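The classification idea above can be sketched with the simplest Bayesian network classifier, a Gaussian naive Bayes, trained on labeled design samples. This is an illustrative sketch only, not the paper's implementation; the spring-design features (wire diameter, coil count) and the training data below are invented for the example.

```python
# Sketch: classify points of a 2-D design space as satisfactory/unsatisfactory
# with a Gaussian naive Bayes model (the simplest Bayesian network classifier).
# Features and data are hypothetical placeholders, not from the paper.
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(samples, labels):
    """Estimate per-class priors and per-feature means/variances."""
    model = {}
    for cls in set(labels):
        pts = [s for s, lab in zip(samples, labels) if lab == cls]
        means = [sum(col) / len(pts) for col in zip(*pts)]
        varis = [max(sum((v - m) ** 2 for v in col) / len(pts), 1e-9)
                 for col, m in zip(zip(*pts), means)]
        model[cls] = (len(pts) / len(samples), means, varis)
    return model

def classify(model, point):
    """Return the class with the highest posterior log-probability."""
    def score(cls):
        prior, means, varis = model[cls]
        return math.log(prior) + sum(
            math.log(gaussian_pdf(x, m, v))
            for x, m, v in zip(point, means, varis))
    return max(model, key=score)

# Hypothetical data: (wire diameter, coil count) -> meets the stiffness spec?
designs = [(1.0, 10), (1.1, 12), (0.9, 11), (2.5, 4), (2.8, 3), (2.6, 5)]
labels  = ["sat", "sat", "sat", "unsat", "unsat", "unsat"]
model = fit(designs, labels)
assert classify(model, (1.05, 11)) == "sat"
assert classify(model, (2.7, 4)) == "unsat"
```

Because the model is probabilistic, it can represent disconnected satisfactory regions (via multiple classes or mixture extensions) and be refit cheaply as new labeled designs arrive, which is the updating property the abstract highlights.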


2020 ◽  
Author(s):  
Seyed Mohamad Moosavi ◽  
Aditya Nandy ◽  
Kevin Maik Jablonka ◽  
Daniele Ongari ◽  
Jon Paul Janet ◽  
...  

By combining metal nodes and organic linkers, one can make millions of different metal-organic frameworks (MOFs). At present, over 90,000 MOFs have been synthesized and there are databases with over 500,000 predicted structures. This raises the question of whether a new experimental or predicted structure adds new information. For MOF chemists, the chemical design space is a combination of pore geometry, metal nodes, organic linkers, and functional groups, but at present we do not have a formalism to quantify optimal coverage of this chemical design space. In this work, we show how machine learning can be used to quantify similarities of MOFs. This quantification allows us to use techniques from ecology to analyse the chemical diversity of these materials in terms of diversity metrics. In particular, we show that this diversity analysis can identify biases in the databases, and how such bias can lead to incorrect conclusions. This formalism provides us with a simple and powerful practical guideline to see whether a set of new structures has the potential to yield new insights or constitutes a relatively small variation on existing structures.
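The diversity analysis can be illustrated with a toy metric. The sketch below assumes MOFs have already been encoded as numeric feature vectors (the features and values are hypothetical) and uses mean pairwise distance as a simple stand-in for the ecology-style diversity metrics the authors describe; it is not their actual formalism.

```python
# Illustrative sketch (not the authors' code): score the diversity of a set
# of structures from feature vectors. A database of near-duplicates scores
# low; a well-spread database scores high, flagging redundancy vs. coverage.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_pairwise_distance(vectors):
    """Simple diversity proxy: average distance over all pairs."""
    n = len(vectors)
    if n < 2:
        return 0.0
    total = sum(euclidean(vectors[i], vectors[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

# Two hypothetical databases, each structure described by invented features
# (pore size, linker length, charge):
clustered = [(1.0, 1.0, 0.0), (1.1, 1.0, 0.0), (1.0, 1.1, 0.1)]
spread    = [(1.0, 1.0, 0.0), (3.0, 0.5, 1.0), (0.2, 2.5, -1.0)]

assert mean_pairwise_distance(clustered) < mean_pairwise_distance(spread)
```

Comparing such a score before and after adding candidate structures gives a crude version of the guideline in the abstract: a set that barely raises the score is likely a small variation on what the database already contains.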


Author(s):  
Chaoguang Wang ◽  
Lusha Huang

In recent years, there has been extensive research on serious games for educational purposes. However, the design space for collaboration in games remains substantially unexplored. In this study, we systematically reviewed 31 empirical research articles regarding game-based collaborative learning published from 2006 to 2020 and attempted to provide new information about designing serious games for collaborative learning. We surveyed a number of games and investigated their design features that encourage collaborative learning. Twenty game mechanics were identified and grouped into six main domains: (1) Space, (2) Objects, attributes and states, (3) Actions, (4) Rules and goals, (5) Skill, (6) Chance. The analysis of the user studies indicated that most of the game projects relied on self-report methods to test their learning effectiveness, and only a few studies adopted data mining methods based on game logs. The implications for research into facilitating collaborative learning and recommendations for future research directions are discussed.


Author(s):  
Khoo Zhi Yion ◽  
Ab Al-Hadi Ab Rahman

This paper presents the design space exploration of the hardware-based inverse fixed-point integer transform for High Efficiency Video Coding (HEVC). The designs are specified at a high level using the CAL dataflow language and automatically synthesized to HDL for FPGA implementation. Several parallel design alternatives are proposed, with trade-offs between performance and resource usage. The HEVC transform consists of several independent components, from the 4x4 to 32x32 discrete cosine transforms and the 4x4 discrete sine transform. This work explores strategies to efficiently compute the transforms by applying data parallelism to the different components. Results show that an intermediate degree of parallelism, whereby the 4x4 and 8x8 transforms are merged together and the 16x16 and 32x32 transforms are merged together, gives the best trade-off between performance and resource usage. The results presented in this work also give insight into how the HEVC transform can be designed efficiently in parallel for hardware implementation.


Author(s):  
Gary Osborne ◽  
Glen Prater ◽  
Rostyslav Lesiv ◽  
David Lamb ◽  
Matthew Castanier

Due to a lack of suitable analysis tools, automotive engineers are often forced to forego quantitative optimization early in the development process, when fundamental decisions establishing vehicle architecture are made. This lack of tools arises because traditional analysis models require detailed geometric descriptions of components and assembly joints in order to yield accurate results, but this information is simply not available early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in expedient solutions to performance problems that might have been more effectively addressed at the architecture level. Alternatively, late-cycle architecture changes may be imposed, but such modifications are equivalent to a huge optimization cycle covering almost the entire design process, and require discarding the detail design work used originally as the basis of the NVH model. Optimizing at the architecture level can both shorten and improve the results of a vehicle development process. In this paper we describe the requirements and implementation of a user interface for a software package supporting vehicle architecture conceptual design and analysis.


Author(s):  
Helmut Harbrecht ◽  
Dennis Tröndle ◽  
Markus Zimmermann

Abstract Solution spaces are regions of good designs in a potentially high-dimensional design space. Good designs satisfy by definition all requirements that are imposed on them as mathematical constraints. In previous work, the complete solution space was approximated by a hyper-rectangle, i.e., the Cartesian product of permissible intervals for design variables. These intervals serve as independent target regions for distributed and separated design work. For a better approximation, i.e., a larger resulting solution space, this article proposes to compute the Cartesian product of two-dimensional regions, so-called 2d-spaces, that are enclosed by polygons. 2d-spaces serve as target regions for pairs of variables and are independent of other 2d-spaces. A numerical algorithm for non-linear problems is presented that is based on iterative Monte Carlo sampling.
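The iterative Monte Carlo idea can be sketched for the earlier hyper-rectangle case: contract a candidate interval box toward a known good design until sampling finds no requirement violations inside it. This is a rough sketch under invented assumptions (the constraint, seed, and shrink factor are made up), not the authors' algorithm.

```python
# Sketch: approximate a solution space by a hyper-rectangle of permissible
# intervals via Monte Carlo sampling. The requirement g(x) <= 0 here is a
# hypothetical "designs inside the unit disk are good" constraint.
import random

def feasible(x):
    # Hypothetical requirement imposed as a mathematical constraint.
    return x[0] ** 2 + x[1] ** 2 <= 1.0

def shrink_box(box, seed, n_samples=2000, factor=0.9, rng=random.Random(0)):
    """Contract the box toward the seed design until every sample is good."""
    while True:
        pts = [tuple(rng.uniform(lo, hi) for lo, hi in box)
               for _ in range(n_samples)]
        if all(feasible(p) for p in pts):
            return box  # intervals now (with high confidence) contain only good designs
        box = [(s + factor * (lo - s), s + factor * (hi - s))
               for (lo, hi), s in zip(box, seed)]

seed = (0.0, 0.0)  # a design known to satisfy the requirements
box = shrink_box([(-2.0, 2.0), (-2.0, 2.0)], seed)
assert all(-1.0 <= lo <= hi <= 1.0 for lo, hi in box)
```

The resulting intervals act as independent target regions: each variable can be chosen anywhere in its interval without coordinating with the others. The article's contribution is to replace these per-variable intervals with polygon-bounded 2d-spaces per variable pair, which enclose a larger share of the true solution space.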


Author(s):  
J. Y. Koo ◽  
G. Thomas

High resolution electron microscopy has been shown to give new information on defects (1) and phase transformations in solids (2,3). In a continuing program of lattice fringe imaging of alloys, we have applied this technique to the martensitic transformation in steels in order to characterize the atomic environments near twin, lath, and α-martensite boundaries. This paper describes current progress in this program. Figures A and B show a lattice image and a conventional bright field image of the same area of a duplex Fe/2Si/0.1C steel described elsewhere (4). The microstructure consists of internally twinned martensite (M) embedded in a ferrite matrix (F). Use of the 2-beam tilted illumination technique incorporating a twin reflection produced {110} fringes across the microtwins.

