Quantifying the Convergence Time of Distributed Design Processes

Author(s):  
Erich Devendorf
Kemper Lewis

Time is an asset of critical importance in the design process, and it is desirable to reduce the amount of time spent developing products and systems. Design is an iterative activity, and a significant portion of the time spent in the product development process is consumed by design engineers iterating towards a mutually acceptable solution. The amount of time necessary to complete a design can therefore be shortened by reducing the time required for each design iteration or by reducing the number of iterations. The focus of this paper is on reducing the number of iterations required to converge to a mutually acceptable solution in distributed design processes. In distributed design, large systems are decomposed into smaller, coupled design problems where individual designers have control over local design decisions and seek to satisfy their own individual objectives. The number of iterations required to reach equilibrium solutions in distributed design processes can vary depending on the starting location and the chosen process architecture. We investigate the influence of process architecture on the convergence behavior of distributed design systems. This investigation leverages concepts from game theory, classical controls, and discrete systems theory to develop a transient response model. As a result, we are able to evaluate process architectures without carrying out any solution iterations.
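The transient response idea can be made concrete with a small sketch. The example below is illustrative only (not the authors' code): it assumes the two designers' best responses are linear, so one pass of a parallel process is the map x_{k+1} = A x_k + b, and the spectral radius of the assumed coupling matrix A predicts convergence or divergence before any iterations are run.

```python
# Minimal sketch, assuming linear best responses (illustrative only):
# one pass of a two-designer parallel process is x_{k+1} = A @ x_k + b,
# so convergence is decided by the spectral radius of A alone.
import numpy as np

# Hypothetical coupling matrix for a parallel (simultaneous) architecture.
A_parallel = np.array([[0.0, -0.6],
                       [-0.5, 0.0]])

rho = max(abs(np.linalg.eigvals(A_parallel)))
print(f"spectral radius = {rho:.3f} ->",
      "converges" if rho < 1 else "diverges")
```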

2013, Vol. 2013, pp. 1-15
Author(s):  
Erich Devendorf
Kemper Lewis

Time is an asset of critical importance in a multidisciplinary design process, and it is desirable to reduce the amount of time spent designing products and systems. Design is an iterative activity, and designers spend a significant portion of the product development process negotiating a mutually acceptable solution. The amount of time necessary to complete a design depends on the number and duration of design iterations. This paper focuses on accurately characterizing the number of iterations required for designers to converge to an equilibrium solution in distributed design processes. In distributed design, systems are decomposed into smaller, coupled design problems where individual designers have control over local design decisions and seek to achieve their own individual objectives. These smaller coupled design optimization problems can be modeled as coupled games, and the number of iterations required to reach equilibrium solutions varies with the initial conditions and the process architecture. In this paper, we leverage concepts from game theory, classical controls, and discrete systems theory to evaluate and approximate process architectures without carrying out any solution iterations. To do so, we develop an analogy between discrete decisions and a continuous-time representation that we analyze using control theoretic techniques.
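One plausible form of that discrete-to-continuous analogy (my assumption, not taken from the paper) maps each discrete eigenvalue z of the iteration matrix to a continuous-time pole s through z = e^{sT}, after which a classical settling-time rule estimates how many iterations the process needs:

```python
# Sketch of a discrete-to-continuous mapping (assumed form): a discrete
# eigenvalue z maps to a continuous-time pole s through z = exp(s*T).
# The classical 2% settling-time rule t_s ~= 4/|Re(s)| then estimates
# the number of iterations needed to converge.
import numpy as np

T = 1.0             # one design iteration per unit time (assumption)
z = 0.548           # dominant discrete eigenvalue (from the sketch above)
s = np.log(z) / T   # equivalent continuous-time pole
t_settle = 4.0 / abs(np.real(s))
print(f"pole s = {s:.3f}, settling time ~ {t_settle:.1f} iterations")
```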


Author(s):  
Erich Devendorf
Kemper Lewis

In distributed design, individual designers have local control over design variables and seek to minimize their own individual objectives. The amount of time required to reach equilibrium solutions in distributed design can vary based on the design process architecture chosen. There are two primary design process architectures, sequential and parallel, and a number of possible combinations of the two. In this paper, a game theoretic approach is developed to determine the time required for parallel and sequential architectures to converge to a solution in a two-designer case. The equations derived solve for the convergence time in closed form, without any objective function evaluations. This result is validated by analyzing a distributed design case study, in which the equations accurately predict the convergence time for both the sequential and the parallel architecture. A second validation is performed by analyzing a large number of randomly generated two-designer systems. In this case, the approach predicts convergence to within three iterations for nearly 98% of the systems analyzed. The remaining 2% highlight one of the approach's weaknesses: it is susceptible to numerically ill-conditioned problems. Understanding the rate at which distributed design problems converge is of key importance when selecting design architectures. This work begins the investigation with a two-designer case and lays the groundwork for expanding to larger design systems with multiple design variables.
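The closed-form flavor of such a prediction is easy to illustrate for geometric error decay. In the hedged sketch below (the contraction ratios are assumed, not taken from the case study), the error shrinks by a factor rho per iteration, so reaching a tolerance tol takes about log(tol)/log(rho) iterations:

```python
# Hedged sketch: if the design error contracts geometrically with ratio
# rho per iteration (rho < 1), the iteration count needed to shrink the
# initial error by a factor `tol` follows in closed form, with no
# objective function evaluations. The ratios below are illustrative.
import numpy as np

def iterations_to_converge(rho, tol=1e-3):
    """Iterations for the error to fall by a factor `tol`, given ratio rho."""
    return int(np.ceil(np.log(tol) / np.log(rho)))

rho_sequential = 0.30   # assumed per-iteration contraction, sequential
rho_parallel = 0.55     # assumed per-iteration contraction, parallel
print("sequential:", iterations_to_converge(rho_sequential), "iterations")
print("parallel:  ", iterations_to_converge(rho_parallel), "iterations")
```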


Author(s):  
Erich Devendorf
Kemper Lewis

When designing complex systems, a design process is often subjected to a variety of unexpected inputs, interruptions, and changes. These disturbances can create unintended consequences, including changes to the design process architecture, the planned design responsibilities, or the design objectives and requirements. In this paper, a specific type of design disturbance, mistakes, is investigated. The impact of mistakes on the convergence time of a distributed multi-subsystem optimization problem is studied for several solution process architectures. A five-subsystem case study is used to understand the ability of certain architectures to absorb the impact of mistakes. These observations lead to the hypothesis that selecting distributed design architectures that minimize the number of iterations needed to propagate mistakes can significantly reduce their impact. It is also observed that design architectures that converge quickly tend to have these same error-damping properties. Considering these observations when selecting distributed design architectures can passively reduce the impact of mistakes.
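A mistake can be simulated as a one-off disturbance injected into an otherwise converging iteration; how quickly the process damps it back out is then directly measurable. The sketch below uses an assumed linear two-designer model, not the paper's five-subsystem study:

```python
# Illustrative sketch (assumed linear model, not the paper's case study):
# inject a one-off "mistake" into a converging design iteration and count
# the iterations needed to damp the resulting error back below a threshold.
import numpy as np

A = np.array([[0.0, -0.6],
              [-0.5, 0.0]])   # hypothetical coupling matrix (stable: rho < 1)
x = np.array([1.0, 1.0])      # starting design; equilibrium is the origin

recovery = None
for k in range(100):
    x = A @ x
    if k == 10:
        x = x + np.array([2.0, 0.0])   # the mistake: a sudden erroneous change
    if k > 10 and recovery is None and np.linalg.norm(x) < 1e-3:
        recovery = k - 10
print(f"iterations to damp the mistake: {recovery}")
```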


2021, Vol. 1, pp. 531-540
Author(s):  
Albert Albers
Miriam Wilmsen
Kilian Gericke

The implementation of agile frameworks, such as SAFe, in large companies causes conflicts between an overall product development process that is rigidly linked to calendar cycles and the continuous agile project planning. To resolve these conflicts, adaptive processes can be used to support the creation of realistic target-processes, i.e. project plans, while stabilizing process quality and simplifying process management. This enables the use of standardisation methods and module sets for design processes. The objective of this contribution is to support project managers in creating realistic target-processes through the use of target-process module sets, which also aim to stabilize process quality and simplify process management. This contribution provides an approach for the development and application of target-process module sets, in accordance with previously gathered requirements, and evaluates the approach in a case study with project managers at AUDI AG (N=21) and an interview study with process authors (N=4) from three different companies.


2011, Vol. 133 (10)
Author(s):  
Erich Devendorf
Kemper Lewis

In distributed design processes, individual design subsystems have local control over design variables and seek to satisfy their own individual objectives, which may also be influenced by some system level objectives. The resulting network of coupled subsystems will either converge to a stable equilibrium or diverge in an unstable manner. In this paper, we study the dependence of system stability on the solution process architecture. The solution process architecture describes how the design subsystems are ordered and can be either sequential, parallel, or a hybrid that incorporates both parallel and sequential elements. In this paper, we demonstrate that the stability of a distributed design system does indeed depend on the solution process architecture chosen, and we create a general process architecture model based on linear systems theory. The model allows the stability of equilibrium solutions to be analyzed for distributed design systems by converting any process architecture into an equivalent parallel representation. Moreover, we show that this approach can accurately predict when the equilibrium is unstable and the system divergent when previous models suggest that the system is convergent.
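Under the common assumption of linear best responses, converting a sequential architecture into an equivalent parallel representation can be sketched with the classical Jacobi/Gauss-Seidel splitting. The formulation below is my assumed analogue of that conversion, not the paper's exact model:

```python
# Sketch of an "equivalent parallel representation" (assumed formulation,
# analogous to the Jacobi vs. Gauss-Seidel splitting): split the parallel
# iteration matrix B into L (responses already updated this cycle) and
# U (responses carried over from the previous cycle). One full sequential
# cycle is then the single matrix M = (I - L)^(-1) @ U, whose eigenvalues
# decide the stability of the equilibrium.
import numpy as np

B = np.array([[ 0.0, -0.7,  0.4],
              [ 0.8,  0.0, -0.3],
              [-0.5,  0.6,  0.0]])    # hypothetical 3-subsystem coupling

L = np.tril(B, -1)                    # seen within the current cycle
U = np.triu(B, 1)                     # carried over from the last cycle
M = np.linalg.solve(np.eye(3) - L, U) # equivalent parallel matrix

for name, mat in [("parallel", B), ("sequential", M)]:
    rho = max(abs(np.linalg.eigvals(mat)))
    print(f"{name:10s} spectral radius = {rho:.3f} ->",
          "stable" if rho < 1 else "unstable")
```

With these illustrative numbers both architectures happen to be stable, but their spectral radii differ, which is precisely why the choice of architecture can flip a system between convergence and divergence.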


Author(s):  
Khadija Tahera
Chris Earl
Claudia Eckert

Testing components, prototypes, and products comprises a set of essential but time-consuming activities throughout the product development process, particularly for complex, iteratively designed products. To reduce product development time, testing and design processes are often overlapped. A key research question is how this overlapping can be planned and managed to minimise risks and costs. The first part of this research study investigates how a case study company plans its testing and design processes and how it manages these overlaps. The second part proposes a significant modification to the existing process configuration for design and testing, which explicitly identifies virtual testing, an extension of Computer Aided Engineering that mirrors the physical testing process through product modelling and simulation, as a distinct and significant activity used to (a) enhance and (b) replace some physical tests. The analysis shows how virtual testing can mediate information flows between overlapping (re)design and physical tests. The effect of virtual testing in supporting the overlap of testing and (re)design is analysed for the development phases of diesel engine design at a case study company. We assess the costs and risks of overlaps and their amelioration through targeted virtual testing. Finally, the analysis of the complex interactions between (re)design, physical testing, and virtual testing is used to examine the scope for replacing physical tests with virtual ones.


Author(s):  
Tomonori Honda
Francesco Ciucci
Kemper Lewis
Maria C. Yang

Frameworks for modeling the communication and coordination of subsystem stakeholders are valuable for the synthesis of large engineering systems. However, these frameworks can be resource intensive and challenging to implement. This paper compares three frameworks, Multidisciplinary Design Optimization (MDO), traditional Game Theory, and a Modified Game Theoretic approach, in terms of the form and flow of information passed between subsystems. The paper considers the impact of "complete" information sharing by determining the effect of merging subsystems. Comparisons of convergence time and robustness are made in a case study of the design of a satellite. Results comparing MDO in two- and three-player scenarios indicate that, when the information passed between subsystems is sufficiently linear, the two scenarios converge in a statistically indistinguishable number of iterations, but the additional "complete" information does reduce variability in the number of iterations. The Modified Game Theoretic approach converges to a smaller region of the Pareto set than MDO, but does so without a system facilitator. Finally, the traditional Game Theoretic approach converges to a limit cycle rather than a fixed point for the given initial design; there may also be a region of attraction within which it does converge.
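Best-response iteration failing to settle is easy to reproduce with a toy example (my own, unrelated to the satellite study): with the linear reaction functions below, simultaneous play orbits a period-4 limit cycle instead of reaching a fixed point.

```python
# Toy illustration (my own example, not the satellite case study): with
# these reaction functions, simultaneous best-response iteration never
# settles; it orbits a period-4 limit cycle instead of a fixed point.
import numpy as np

def best_responses(x):
    """Simultaneous (parallel) best-response update for two toy players."""
    x1, x2 = x
    return np.array([-x2, x1])   # each player counters the other's last move

x = np.array([1.0, 1.0])         # initial design
history = [tuple(x)]
for _ in range(8):
    x = best_responses(x)
    history.append(tuple(x))
print(history)   # the same four points repeat: a limit cycle, no equilibrium
```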


Author(s):  
Jonathan K. Niemeyer
Daniel E. Whitney

This paper looks at the product development process as an exercise in risk reduction and performs a critical analysis of how gas turbine engine manufacturers weigh the competing risks associated with on-time delivery, product quality, and development costs. Three frameworks are used to focus the analysis:
• Iteration, using multiple attempts to converge to an acceptable solution.
• Maintaining options in development and delaying convergence to a single design.
• Improving the organization's predictive capability prior to committing to a particular set of performance goals, designs, or technologies for a product; this is explored from the perspective of "technology readiness".
For six gas turbine engine development programs, case studies were performed to assess the effectiveness of the product development process by measuring how well each engine met its guaranteed level of fuel consumption. For each development program, performance against guarantees was compared with the technology readiness levels (TRL) at program launch, when performance was guaranteed by contract to customers, and with the degree of flexibility available to designers to react once performance shortfalls became known. Decomposition of the engine system into sub-systems was necessary to define TRL, parallel efforts, and iteration precisely. Risk strategies were compared in light of the time sensitivity of the quality of information, the cost of engineering changes, contractual penalties, and the lead times associated with implementing improvements.

