Aggregation and Controlled Interaction: Automated Mechanisms for Managing Design Complexity

Author(s):  
Timothy M. Jacobs ◽  
Elaine Cohen

Abstract. Complexity in modern product design is manifest through large numbers of diverse parts, functions, and design disciplines that require an intricate web of synergistic relationships to link them together. It is extremely difficult for designers to assimilate or represent such complex designs in their totality. In this research, we present a framework that utilizes the intricate relationships between design components to enhance the representational power of design models and to provide focal points for automating the management of design complexity. We introduce automated mechanisms, based on aggregation and interaction relationships between design components, that integrate model structure, a variety of conceptual and detailed design information, and product management controls into a single modeling framework. These mechanisms are easily incorporated into design models and they facilitate re-use and cooperative design by ensuring that related entities can be modified independently.

2021 ◽  
Vol 25 (10) ◽  
pp. 5603-5621
Author(s):  
Andrew J. Newman ◽  
Amanda G. Stone ◽  
Manabendra Saharia ◽  
Kathleen D. Holman ◽  
Nans Addor ◽  
...  

Abstract. This study employs a stochastic hydrologic modeling framework to evaluate the sensitivity of flood frequency analyses to different components of the hydrologic modeling chain. The major components of the stochastic hydrologic modeling chain, including model structure, model parameter estimation, initial conditions, and precipitation inputs, were examined across return periods from 2 to 100 000 years at two watersheds representing different hydroclimates across the western USA. A total of 10 hydrologic model structures were configured, calibrated, and run within the Framework for Understanding Structural Errors (FUSE) modular modeling framework for each of the two watersheds. Model parameters and initial conditions were derived from long-term calibrated simulations using a 100-member historical meteorology ensemble. A stochastic event-based hydrologic modeling workflow was developed using the calibrated models in which millions of flood event simulations were performed for each basin. The analysis of variance method was then used to quantify the relative contributions of model structure, model parameters, initial conditions, and precipitation inputs to flood magnitudes for different return periods. Results demonstrate that different components of the modeling chain have different sensitivities for different return periods. Precipitation inputs contribute most to the variance of rare floods, while initial conditions are most influential for more frequent events. However, the hydrological model structure and structure–parameter interactions together play an equally important role in specific cases, depending on the basin characteristics and type of flood metric of interest. This study highlights the importance of critically assessing model underpinnings, understanding flood generation processes, and selecting hydrological models consistent with that understanding.
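As an illustration of the analysis of variance step described above, the sketch below partitions the variance of flood magnitudes between two factors of the modeling chain, model structure and precipitation input. All factor levels and magnitude values here are invented for illustration and are not from the study.

```python
# Two-way sum-of-squares decomposition of flood magnitude variance.
# Factors and values are hypothetical stand-ins for the modeling chain.
structures = ["A", "B"]
precips = ["p1", "p2", "p3"]
# flood magnitude for each (structure, precip) combination (made-up numbers)
q = {("A", "p1"): 120.0, ("A", "p2"): 180.0, ("A", "p3"): 260.0,
     ("B", "p1"): 135.0, ("B", "p2"): 190.0, ("B", "p3"): 300.0}

grand = sum(q.values()) / len(q)
ss_total = sum((v - grand) ** 2 for v in q.values())

def main_effect_ss(levels, key_index):
    """Sum of squares attributable to one factor's level means."""
    ss = 0.0
    for lev in levels:
        vals = [v for k, v in q.items() if k[key_index] == lev]
        ss += len(vals) * (sum(vals) / len(vals) - grand) ** 2
    return ss

ss_structure = main_effect_ss(structures, 0)
ss_precip = main_effect_ss(precips, 1)
ss_interaction = ss_total - ss_structure - ss_precip  # residual = interaction here

print(round(ss_precip / ss_total, 3))  # fraction of variance from precipitation → 0.961
```

With these invented numbers, precipitation dominates the variance, mirroring the paper's finding for rare floods; a real analysis would run this over ensembles of simulations per factor combination.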


2020 ◽  
Author(s):  
Jack M Winters

Introduction. Effectively modeling SARS-CoV-2/COVID-19 dynamics requires careful integration of population health (public health motivation) and recovery dynamics (medical interventions motivation). This manuscript proposes a minimal pandemic model, which conceptually separates "complex adaptive systems" (CAS) associated with social behavior and infrastructure (e.g., tractable input events modulating exposure) from idealized bio-CAS (e.g., the immune system). The proposed model structure extends the classic simple SEIR (susceptible, exposed, infected, resistant/recovered) uni-causal compartmental model, widely used in epidemiology, into an 8th-order functional network SEI3R2S-Nrec model structure, with infection partitioned into three severity states (e.g., starts in I1 [mostly asymptomatic], then I2 if notable symptoms, then I3 if ideally hospitalized) that connect via a lattice of fluxes to two "resistant" (R) states. Here Nrec ("not recovered") represents a placeholder for better tying emerging COVID-19 medical research findings with those from epidemiology. Methods. Borrowing from fuzzy logic, a given model represents a "Universe of Discourse" (UoD) that is based on assumptions. Nonlinear flux rates are implemented using the classic Hill function, widely used in the biochemical and pharmaceutical fields and intuitive for inclusion within differential equations. There is support for "encounter" input events that modulate ongoing E (exposures) fluxes via S↔I1 and other I1/2/3 encounters, partitioned into a "social/group" (uSG(t)) behavioral subgroup (e.g., ideally informed by evolving science best-practices), and a smaller uTB(t) subgroup with added "spreader" lifestyle and event support. In addition to signal and flux trajectories (e.g., plotted over 300 days), key cumulative output metrics include fluxes such as I3→D deaths, I2→I3 hospital admittances, I1→I2 related to "cases" and R1+R2 resistant. 
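The Hill function mentioned above for nonlinear flux rates can be sketched minimally as follows; the parameter names and the example rate are illustrative, not taken from the model.

```python
def hill(x, half_sat, n):
    """Classic Hill function: rises sigmoidally from 0 toward 1.
    half_sat is the input at which the output is 0.5, and the
    exponent n sets the steepness of the transition."""
    return x ** n / (half_sat ** n + x ** n)

# e.g., a flux rate saturating with some normalized driver (made-up numbers)
rate = 0.2 * hill(0.5, half_sat=0.3, n=2)
```

Its intuitive shape (zero at zero, half-maximal at `half_sat`, saturating at large inputs) is what makes it convenient inside compartmental differential equations.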
The code, currently available as a well-commented Matlab Live Script file, uses a common modeling framework developed for a portfolio of other physiological models that tie to a planned textbook; an interactive web-based version will follow. Results. Default population results are provided for the USA as a whole, three states in which this author has lived (Arizona, Wisconsin, Oregon), and several special hypothetical cases of idealized UoDs (e.g., nursing home; healthy lower-risk mostly on I1→R1 path to evaluate reinfection possibilities). Known events were often included (e.g., pulses for holiday weekends; Trump/governor-inspired summer outbreak in Arizona). Runs were mildly tuned by the author in two stages: i) mild model-tuning (e.g., for risk demographics such as obesity), then ii) iterative input tuning to obtain similar overall March-thru-November curve shapes and appropriate cumulative numbers (recognizing limitations of data like "cases"). Predictions are consistent with reported deaths and with CDC estimates of actual cases and immunity (e.g., antibodies). Results could be further refined by groups with more resources (human, data access, computational). It is hoped that its structure and causal predictions might prove helpful to policymakers, medical professionals, and "on the ground" managers of science-based interventions. Discussion and Future Directions. 
These include: i) sensitivity of the model to parameters; ii) possible next steps for this SEI3R2S-Nrec framework such as dynamic sub-models to better address compartment-specific forms of population diversity (e.g., for E [host-parasite biophysics], I's [infection diversity], and/or R's [immune diversity]); iii) model's potential utility as a framework for applying optimal/feedback control engineering to help manage the ongoing pandemic response in the context of competing subcriteria and emerging new tools (e.g., more timely testing, vaccines); and iv) ways in which the Nrec medical submodel could be expanded to provide refined estimates of the types of tissue damage, impairments and dysfunction that are known byproducts of the COVID-19 disease process, including as a function of existing comorbidities.


2019 ◽  
Vol 11 (1) ◽  
pp. 258 ◽  
Author(s):  
Xijie Li ◽  
Ying Lv ◽  
Wei Sun ◽  
Li Zhou

This study focuses on an environment-friendly toll design problem, where an acceptable road network performance is promised. First, a Traffic Performance Index (TPI)-based evaluation method is developed to help identify the optimal congestion level and the management target of a transportation system. Second, environment-oriented cordon- and link-based road toll design models are respectively proposed through the use of bi-level programming. Both upper-level submodel objectives are to minimize gross revenue (the total collected toll minus the emissions treatment cost) under different pricing strategies. Both lower-level submodels quantify the user equilibrium (UE) condition under elastic demand. Moreover, the TPI-related constraints for the management requirements of the network performance are incorporated into the bi-level programming modeling framework, which can lead to 0–1 mixed integer bi-level nonlinear programming for toll design problems. Accordingly, a genetic algorithm-based heuristic searching method is proposed for the two pricing models. The proposed cordon- and link-based pricing models were then applied to a real-world road network in Beijing, China. The effects of the toll schemes generated from the two models were compared in terms of emissions reduction and congestion mitigation. The results indicated that a higher total collected toll may lead to more emissions and related treatment costs, and that tradeoffs exist among the toll scheme, emissions reduction, and congestion mitigation.
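To make the heuristic concrete, here is a minimal sketch of a genetic-algorithm search over per-link toll levels. The lower-level user-equilibrium response is replaced by a toy elastic-demand function and the TPI constraint by a simple total-flow cap; every name and number below is invented, so this illustrates the search pattern only, not the paper's actual models.

```python
import random

random.seed(0)
N_LINKS = 5
TOLL_LEVELS = [0.0, 1.0, 2.0, 3.0]          # candidate per-link tolls

def lower_level_flow(toll):
    # toy elastic-demand response: flow falls linearly with toll
    return max(0.0, 10.0 - 2.0 * toll)

def fitness(tolls):
    revenue, emissions_cost, total_flow = 0.0, 0.0, 0.0
    for t in tolls:
        f = lower_level_flow(t)
        revenue += t * f
        emissions_cost += 0.3 * f            # emissions treatment cost per unit flow
        total_flow += f
    penalty = 100.0 if total_flow > 45.0 else 0.0   # stand-in for TPI constraint
    return revenue - emissions_cost - penalty

def ga(pop_size=30, gens=40):
    pop = [[random.choice(TOLL_LEVELS) for _ in range(N_LINKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_LINKS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(N_LINKS)] = random.choice(TOLL_LEVELS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

In the real problem the fitness evaluation would solve the lower-level UE assignment for each candidate toll scheme, which is what makes the bi-level search expensive.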


2014 ◽  
Author(s):  
Christoph Lippert ◽  
Francesco Paolo Casale ◽  
Barbara Rakitsch ◽  
Oliver Stegle

Abstract. Multi-trait mixed models have emerged as a promising approach for joint analyses of multiple traits. In principle, the mixed model framework is remarkably general. However, current methods implement only a very specific range of tasks to optimize the necessary computations. Here, we present a multi-trait modeling framework that is versatile and fast: LIMIX makes it possible to flexibly adapt mixed models for a broad range of applications with different observed and hidden covariates and variable study designs. To highlight the novel modeling aspects of LIMIX we performed three vastly different genetic studies: joint GWAS of correlated blood lipid phenotypes, joint analysis of the expression levels of the multiple transcript-isoforms of a gene, and pathway-based modeling of molecular traits across environments. In these applications we show that LIMIX increases GWAS power and phenotype prediction accuracy, in particular when integrating stepwise multi-locus regression into multi-trait models, and when analyzing large numbers of traits. An open source implementation of LIMIX is freely available at: https://github.com/PMBio/limix.
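A hedged sketch of the core computation behind such multi-trait mixed models (this is not the LIMIX API): the stacked phenotype vector is modeled as multivariate normal with a Kronecker-structured covariance that combines a sample relatedness matrix with trait-by-trait genetic and noise covariances. All matrices below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2                                   # samples, traits

G = rng.standard_normal((n, 10))
K = G @ G.T / 10.0                            # genetic relatedness matrix (PSD)
C_g = np.array([[1.0, 0.5], [0.5, 1.0]])      # genetic trait covariance
C_n = np.eye(p)                               # residual trait covariance

# vec(Y) ~ N(0, C_g ⊗ K + C_n ⊗ I): the Kronecker structure that
# multi-trait mixed models exploit for fast inference
V = np.kron(C_g, K) + np.kron(C_n, np.eye(n))
y = rng.multivariate_normal(np.zeros(n * p), V)

def loglik(y, V):
    """Gaussian log-likelihood of y under covariance V."""
    sign, logdet = np.linalg.slogdet(V)
    quad = y @ np.linalg.solve(V, y)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad)

ll = loglik(y, V)
```

Fitting then amounts to maximizing this likelihood over the entries of C_g and C_n; dedicated frameworks avoid ever forming the dense Kronecker product, which is where the speed comes from.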


2020 ◽  
Vol 24 (12) ◽  
pp. 5835-5858
Author(s):  
Juliane Mai ◽  
James R. Craig ◽  
Bryan A. Tolson

Abstract. Model structure uncertainty is known to be one of the three main sources of hydrologic model uncertainty along with input and parameter uncertainty. Some recent hydrological modeling frameworks address model structure uncertainty by supporting multiple options for representing hydrological processes. It is, however, still unclear how best to analyze structural sensitivity using these frameworks. In this work, we apply the extended Sobol' sensitivity analysis (xSSA) method that operates on grouped parameters rather than individual parameters. The method can estimate not only traditional model parameter sensitivities but is also able to provide measures of the sensitivities of process options (e.g., linear vs. non-linear storage) and sensitivities of model processes (e.g., infiltration vs. baseflow) with respect to a model output. Key to the xSSA method's applicability to process option and process sensitivity is the novel introduction of process option weights in the Raven hydrological modeling framework. The method is applied to both artificial benchmark models and a watershed model built with the Raven framework. The results show that (1) the xSSA method provides sensitivity estimates consistent with those derived analytically for individual as well as grouped parameters linked to model structure. (2) The xSSA method with process weighting is computationally less expensive than the alternative aggregate sensitivity analysis approach performed for the exhaustive set of structural model configurations, with savings of 81.9 % for the benchmark model and 98.6 % for the watershed case study. 
(3) The xSSA method applied to the hydrologic case study analyzing simulated streamflow showed that model parameters adjusting forcing functions were responsible for 42.1 % of the overall model variability, while surface processes cause 38.5 % of the overall model variability in a mountainous catchment; such information may readily inform model calibration and uncertainty analysis. (4) The analysis of time-dependent process sensitivities regarding simulated streamflow is a helpful tool for understanding model internal dynamics over the course of the year.
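The grouped-parameter idea at the heart of such sensitivity analyses can be illustrated with a pick-freeze Sobol' estimator on a toy model: parameters inside a group are held fixed between paired model runs while the remaining parameters are resampled. The linear model and coefficients below are invented; this is not the Raven/xSSA implementation.

```python
import random

random.seed(1)
A, B, C = 3.0, 2.0, 1.0

def model(x1, x2, x3):
    # toy model standing in for a hydrologic simulation
    return A * x1 + B * x2 + C * x3

N = 100_000
ys, ys_frozen = [], []
for _ in range(N):
    x1, x2, x3 = (random.random() for _ in range(3))
    x3_new = random.random()                 # resample params outside the group
    ys.append(model(x1, x2, x3))
    ys_frozen.append(model(x1, x2, x3_new))  # group {x1, x2} frozen

mean = sum(ys) / N
var = sum((y - mean) ** 2 for y in ys) / N
cov = sum((a - mean) * (b - mean) for a, b in zip(ys, ys_frozen)) / N
s_group = cov / var   # analytic value here: (A**2 + B**2) / (A**2 + B**2 + C**2)
```

For independent uniform inputs the grouped first-order index of {x1, x2} is (A² + B²)/(A² + B² + C²) = 13/14, which the estimator should recover to Monte Carlo accuracy.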


2020 ◽  
Author(s):  
Nadezda Vasilyeva ◽  
Artem Vladimirov ◽  
Taras Vasiliev

The aim of our study is to characterize the uncertainty in soil organic carbon (SOC) models that stems from model structure. To that end, we developed a family of mathematical models for SOC dynamics with switchable biological and physical mechanisms. Studied mechanisms include microbial activity with constant or dynamic carbon use efficiency (CUE) and constant or dynamic microbial turnover rate; the priming effect (decay of the stable SOC pool in the presence of a labile SOC pool); temperature and moisture dependencies of SOC decomposition rates; and dynamic adsorption strength and occlusion. The modeled SOC cycle includes measurable C pools in soil size and density fractions, each comprised of two estimated theoretical C pools (labile and stable; the biochemical C cycle). Reaction rates of the biochemical cycle are modified according to its physical state: decay accelerates with size, accelerates with the amount of adsorbed C (density: heavy to light), and decelerates with soil microaggregation (occluded state). The model family was tested on detailed C and 13C dynamics data from a long-term bare fallow chronosequence.

Analysis of the SOC model family with different combinations of mechanisms showed that the best description of SOC dynamics in physical fractions (as estimated by BIC) was obtained with microbially-explicit models only when a feedback via dynamics of microbial turnover and CUE was included. First, we estimated the uncertainty of all mechanism-specific parameters for every model in the family. We then calculated density distributions for parameters characterizing functional properties and stability of soil components (such as activation energy, adsorption capacity, CUE, and the 13C distillation coefficient) for the model family weighted by model likelihoods. These parameter values were then compared with common experimental values.

We discuss the use of the study results to estimate the relevance of observed parameter and structural uncertainties for global SOC projections obtained using different model structures.
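The likelihood-based weighting of the model family can be sketched as BIC weights: each candidate model gets a BIC score, and differences from the best score are converted into normalized weights. The model names, log-likelihoods, and parameter counts below are hypothetical, chosen only to mirror the kind of comparison described.

```python
import math

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: lower is better."""
    return n_params * math.log(n_obs) - 2.0 * loglik

# hypothetical fits of three model variants to the same chronosequence data
fits = {"static_CUE": (-310.2, 5),
        "dynamic_CUE": (-298.7, 7),
        "dynamic_CUE_turnover": (-291.4, 9)}
n_obs = 120

scores = {m: bic(ll, k, n_obs) for m, (ll, k) in fits.items()}
best = min(scores.values())
raw = {m: math.exp(-0.5 * (s - best)) for m, s in scores.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}
# weights can now be used to average parameter density distributions
# across the model family, as in the study
```

Under these invented numbers the model with dynamic CUE and turnover dominates the weights despite its extra parameters, which is the pattern the abstract reports.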


Author(s):  
Xinbao Gao ◽  
Shane Farritor

The paper presents a modeling framework for rational selection between similar designs. Designers often make decisions between similar candidate designs, such as deciding which off-the-shelf component should be used or whether to design that component from scratch. Such relative decisions between similar designs are common and important. A theoretical formulation of Delta Performance Models (DPMs) is presented. A DPM is an engineering design model that represents only the changes in performance that result from the small changes between competing designs. This method explicitly considers uncertainty and utilizes the fact that some uncertainty and modeling error is common to both candidate designs. DPMs are intended to exploit the full resolution of engineering design models even if the models do not have high accuracy. The procedure is demonstrated on a simple design task using Monte Carlo simulations, and a comparison is made with a traditional model.
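The underlying idea can be sketched with a Monte Carlo comparison under invented numbers: when two candidate designs share most of their uncertainty, simulating their performance difference with common random draws cancels the shared error, giving a far tighter estimate than differencing two independent simulations. The performance model below is a made-up stand-in, not the paper's formulation.

```python
import random
import statistics

random.seed(42)

def performance(design_gain, load, noise):
    # shared uncertain load and shared model error dominate; the two
    # designs differ only in a small gain term
    return design_gain * load + noise

N = 20_000
deltas_common, deltas_indep = [], []
for _ in range(N):
    load = random.gauss(100.0, 15.0)
    noise = random.gauss(0.0, 10.0)
    # common random numbers: same load/noise for both designs
    deltas_common.append(performance(1.02, load, noise)
                         - performance(1.00, load, noise))
    # independent sampling: fresh draws for the second design
    load2 = random.gauss(100.0, 15.0)
    noise2 = random.gauss(0.0, 10.0)
    deltas_indep.append(performance(1.02, load, noise)
                        - performance(1.00, load2, noise2))

sd_common = statistics.stdev(deltas_common)   # small: shared error cancels
sd_indep = statistics.stdev(deltas_indep)     # large: shared error does not cancel
```

Here the true mean difference is 0.02 × 100 = 2.0, and the common-random-numbers estimate recovers it with a standard deviation orders of magnitude smaller than the independent one, which is why a delta model can be useful even when the absolute model is not highly accurate.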


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0248414
Author(s):  
Wentao Zhao ◽  
Dalin Zhou ◽  
Xinguo Qiu ◽  
Wei Jiang

Because large numbers of artworks are preserved in museums and galleries, much work must be done to classify these works into genres, styles and artists. Recent technological advancements have enabled an increasing number of artworks to be digitized. Thus, it is necessary to teach computers to analyze (e.g., classify and annotate) art to assist people in performing such tasks. In this study, we tested 7 different models on 3 different datasets under the same experimental setup to compare their art classification performance with and without transfer learning. The models were compared based on their abilities to classify genres, styles and artists. Comparing our results with previous work shows that model performance can be effectively improved by optimizing the model structure, and our results achieve state-of-the-art performance in all classification tasks on the three datasets. In addition, we visualized the process of style and genre classification to help us understand the difficulties that computers have when tasked with classifying art. Finally, we used the trained models described above to perform similarity searches and obtained performance improvements.


2018 ◽  
Vol 118 (1) ◽  
pp. 65-95 ◽  
Author(s):  
Mengru Tu ◽  
Ming K. Lim ◽  
Ming-Fang Yang

Purpose The lack of reference architecture for Internet of Things (IoT) modeling impedes the successful design and implementation of an IoT-based production logistics and supply chain system (PLSCS). The authors present this study in two parts to address this research issue. Part A proposes a unified IoT modeling framework to model the dynamics of distributed IoT processes, IoT devices, and IoT objects. The models of the framework can be leveraged to support the implementation architecture of an IoT-based PLSCS. The second part (Part B) of this study extends the discussion of implementation architecture proposed in Part A. Part B presents an IoT-based cyber-physical system framework and evaluates its performance. The paper aims to discuss this issue. Design/methodology/approach This paper adopts a design research approach, using ontology, process analysis, and a Petri net modeling scheme to support IoT system modeling. Findings The proposed IoT system-modeling approach reduces the complexity of system development and increases system portability for IoT-based PLSCS. The IoT design models generated from the modeling can also be transformed to implementation logic. Practical implications The proposed IoT system-modeling framework and the implementation architecture can be used to develop an IoT-based PLSCS in the real industrial setting. The proposed modeling methods can be applied to many discrete manufacturing industries. Originality/value The IoT modeling framework developed in this study is the first in this field that decomposes IoT system design into ontology-, process-, and object-modeling layers. A novel implementation architecture is also proposed to transform the above IoT system design models into implementation logic. The developed prototype system can track product and different parts of the same product along a manufacturing supply chain.
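As a sketch of the process-modeling layer, here is a minimal Petri net in which a transition fires only when all of its input places hold tokens. The net, place names, and production-logistics fragment below are invented for illustration and are not the paper's models.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1                # consume input tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce outputs

# toy production-logistics fragment: a part is scanned, then shipped
net = PetriNet({"part_at_station": 1, "rfid_reader_free": 1})
net.add_transition("scan", ["part_at_station", "rfid_reader_free"],
                   ["part_scanned", "rfid_reader_free"])
net.add_transition("ship", ["part_scanned"], ["part_shipped"])
net.fire("scan")
net.fire("ship")
```

A process model in this style makes ordering and resource constraints (here, the RFID reader) explicit, which is what allows such design models to be transformed into implementation logic.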


