How One-Factor-at-a-Time Experimentation Can Lead to Greater Improvements Than Orthogonal Arrays

Author(s):  
Daniel D. Frey ◽  
Rajesh Jugulum

This paper attempts to explain the empirically demonstrated phenomenon that, under some conditions, one-at-a-time experiments outperform orthogonal arrays (on average) in the parameter design of engineering systems. Five case studies are presented, each based on data from previously published full factorial experiments on actual engineering systems. Computer simulations of adaptive one-at-a-time plans and orthogonal arrays were carried out with varying degrees of pseudo-random error added to the data, and the average outcomes are plotted for both approaches to optimization. For each of the five case studies, the main effects and interactions of the experimental factors are presented and analyzed to explain the observed simulation results. It is shown that, for some types of engineering systems, one-at-a-time designs consistently exploit interactions even though they lack the resolution to estimate interactions. It is also confirmed that orthogonal arrays are adversely affected by confounding of main effects and interactions.

2005 ◽  
Vol 128 (5) ◽  
pp. 1050-1060 ◽  
Author(s):  
Daniel D. Frey ◽  
Rajesh Jugulum

This paper examines mechanisms underlying the phenomenon that, under some conditions, adaptive one-factor-at-a-time experiments outperform fractional factorial experiments in improving the performance of mechanical engineering systems. Five case studies are presented, each based on data from previously published full factorial physical experiments at two levels. Computer simulations of adaptive one-factor-at-a-time and fractional factorial experiments were carried out with varying degrees of pseudo-random error. For each of the five case studies, the average outcomes are plotted for both approaches as a function of the strength of the pseudo-random error. The main effects and interactions of the experimental factors in each system are presented and analyzed to illustrate how the observed simulation results arise. The case studies show that, for certain arrangements of main effects and interactions, adaptive one-factor-at-a-time experiments exploit interactions with high probability despite the fact that these designs lack the resolution to estimate interactions. Generalizing from the case studies, four mechanisms are described and the conditions are stipulated under which these mechanisms act.
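
As a concrete illustration of the procedure these simulations model, the sketch below implements an adaptive one-factor-at-a-time plan on a generic two-level system: start from a baseline run, toggle one factor per subsequent run, and retain each change only if the observed (noisy) response improves. This is a minimal sketch assuming a scalar response to be maximized; the function name `adaptive_ofat` and the toy response with an interaction term are illustrative, not taken from the paper.

```python
import random

def adaptive_ofat(response, n_factors, noise_sd=0.0, seed=0):
    """Adaptive one-factor-at-a-time search over two-level (-1/+1) factors.

    `response` maps a tuple of factor levels to a scalar to be maximized;
    each observation is corrupted by pseudo-random Gaussian error.
    """
    rng = random.Random(seed)

    def observe(x):
        return response(x) + rng.gauss(0.0, noise_sd)

    current = tuple(rng.choice((-1, 1)) for _ in range(n_factors))
    best_obs = observe(current)            # baseline run
    for i in range(n_factors):
        trial = list(current)
        trial[i] = -trial[i]               # toggle factor i only
        trial = tuple(trial)
        obs = observe(trial)
        if obs > best_obs:                 # keep the change if it helped
            current, best_obs = trial, obs
    return current

# Toy system: a main effect on x0 plus an x0*x1 interaction.
demo = lambda x: 2.0 * x[0] + 1.5 * x[0] * x[1]
print(adaptive_ofat(demo, n_factors=3, noise_sd=0.5))
```

Because each retained toggle is judged on the actual, interaction-laden response, the plan can settle on level combinations that exploit an interaction it never estimates, which is the behavior the case studies document.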


2017 ◽  
Vol 2017 ◽  
pp. 1-14
Author(s):  
Ye Cheng ◽  
Jianhao Hu

In conventional stochastic computation, all the input streams are Bernoulli sequences (BSs), which may result in large random error. To reduce random error and improve computational accuracy, other sequences have been reported as alternatives to BSs. However, these sequences apply only to specific stochastic circuits, are difficult to generate in hardware, or are subject to length constraints, so new sequences free of these disadvantages are needed. This paper proposes a random error analysis method for stochastic computation based on autocorrelation sequences (ASs), which is more general than the conventional analysis based on BSs. The analysis shows that properly chosen ASs can be used as input streams of stochastic circuits to reduce random error. Building on that conclusion, we propose a random error reduction scheme based on the maximal concentrated autocorrelation sequence (MCAS) and the BS, both of which are ASs. MCAS and BS are applicable to any combinational stochastic circuit, are easily generated in hardware, and have no length constraints, thereby avoiding the disadvantages of the sequences in previous work. Moreover, we apply the proposed random error reduction scheme to several typical stochastic circuits as case studies. The simulation results confirm the effectiveness of the proposed scheme.
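
For readers unfamiliar with the BS baseline this scheme improves upon, the sketch below shows unipolar stochastic multiplication: ANDing two independent Bernoulli streams yields a stream whose ones-density estimates the product of the encoded values, and the random error shrinks as the streams lengthen. Generating MCASs is specific to the paper and is not attempted here; this is only a hedged illustration of why the choice of input streams governs random error.

```python
import random

def bernoulli_stream(p, n, rng):
    """Bernoulli sequence encoding value p as P(bit = 1)."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n, seed=0):
    """Unipolar stochastic multiplication: AND two independent streams."""
    rng = random.Random(seed)
    a = bernoulli_stream(p, n, rng)
    b = bernoulli_stream(q, n, rng)
    ones = sum(x & y for x, y in zip(a, b))
    return ones / n                        # estimate of p * q

for n in (64, 1024, 16384):
    est = stochastic_multiply(0.5, 0.8, n)
    print(n, est, abs(est - 0.4))          # random error shrinks with length
```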


1978 ◽  
Vol 22 (1) ◽  
pp. 598-598
Author(s):  
Steven M. Sidik ◽  
Arthur G. Holms

In many practical cases an experimenter has prior knowledge, of uncertain validity, concerning the main effects and interactions that would be estimable from a two-level full factorial experiment. Such information should be incorporated into the design of the experiment.
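
For reference, the quantities such prior knowledge concerns are the contrast-based effect estimates of the full factorial. The sketch below computes main effects and two-factor interactions from a two-level full factorial in the standard -1/+1 coding; the helper name `factorial_effects` and the toy response are invented for illustration.

```python
from itertools import combinations, product

def factorial_effects(response, n_factors):
    """Estimate main effects and two-factor interactions from a
    two-level (-1/+1) full factorial on `response`."""
    runs = list(product((-1, 1), repeat=n_factors))
    y = [response(x) for x in runs]
    n = len(runs)
    mains = {i: 2 * sum(x[i] * yi for x, yi in zip(runs, y)) / n
             for i in range(n_factors)}
    twofis = {(i, j): 2 * sum(x[i] * x[j] * yi for x, yi in zip(runs, y)) / n
              for i, j in combinations(range(n_factors), 2)}
    return mains, twofis

demo = lambda x: 3.0 * x[0] - 2.0 * x[1] + 1.5 * x[0] * x[1]
print(factorial_effects(demo, n_factors=2))
```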


Author(s):  
Vasily Bulatov ◽  
Wei Cai

This book presents a broad collection of models and computational methods, from atomistic to continuum, applied to crystal dislocations. Its purpose is to help students and researchers in computational materials science acquire practical knowledge of the relevant simulation methods. Because their behavior spans multiple length and time scales, crystal dislocations provide common ground for an in-depth discussion of a variety of computational approaches, including their relative strengths, weaknesses, and interconnections. The details of the covered methods are presented in the form of "numerical recipes" and illustrated by case studies. A suite of simulation codes and data files is made available on the book's website to help the reader learn by doing while solving the exercise problems offered in the book.


2010 ◽  
Vol 146-147 ◽  
pp. 966-971
Author(s):  
Qi Hua Jiang ◽  
Hai Dong Zhang ◽  
Bin Xiang ◽  
Hai Yun He ◽  
Ping Deng

This work studies the aggregation of a synthetic ultraviolet absorbent, 2-hydroxy-4-perfluoroheptanoate-benzophenone (HPFHBP), at the interface between two solvents that cannot completely dissolve in each other. The aggregation is studied by computer simulations based on a dynamic density functional method and mean-field interactions, as implemented in the MesoDyn and Blend modules of Materials Studio. The simulation results show that the synthetic ultraviolet absorbent diffuses to the interface phase, where its concentration is greater than in the solvent phases.


2015 ◽  
Vol 137 (9) ◽  
Author(s):  
Brian Sylcott ◽  
Jeremy J. Michalek ◽  
Jonathan Cagan

In conjoint analysis, interaction effects characterize how preference for the level of one product attribute depends on the level of another attribute. When interaction effects are negligible, a main effects fractional factorial experimental design can be used to reduce data requirements and survey cost. This is particularly important when the presence of many parameters or levels makes full factorial designs intractable. However, if interaction effects are relevant, a main effects design can produce biased estimates and lead to erroneous conclusions. This work investigates consumer preference interactions in the nontraditional context of visual choice-based conjoint analysis, where the conjoint attributes are parameters that define a product's shape. Although many conjoint studies assume interaction effects to be negligible, they may play a larger role for shape parameters. The role of interaction effects is explored in two visual conjoint case studies. The results suggest that interactions can be either negligible or dominant in visual conjoint, depending on consumer preferences. Generally, we suggest using randomized designs to avoid any bias resulting from the presence of interaction effects.
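
The bias described above is easy to reproduce numerically. In the sketch below (invented attribute names and utility coefficients, not data from the study), a 2^(3-1) main-effects design aliases the A*B interaction with attribute C's main effect, so the fitted part-worth for C absorbs the interaction.

```python
import numpy as np

# Hypothetical part-worth utility with an interaction between A and B.
def utility(a, b, c):
    return 1.0 * a + 0.5 * b + 0.2 * c + 0.8 * a * b

# 2^(3-1) fractional factorial with defining relation c = a*b, so the
# A*B interaction is aliased with the main effect of C.
frac = [(-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
y = np.array([utility(*run) for run in frac])
X = np.array(frac, dtype=float)              # main-effects-only model
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # C's coefficient comes out as 0.2 + 0.8 = 1.0, not 0.2
```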


2005 ◽  
Vol 18 (3) ◽  
pp. 505-514
Author(s):  
Dusanka Bundalo ◽  
Branimir Ðordjevic ◽  
Zlatko Bundalo

Principles and possibilities for the synthesis and design of quaternary multiple-valued regenerative CMOS logic circuits with a high-impedance output state are proposed and described in the paper. Two principles for the synthesis and implementation of such circuits are presented: simple circuits with a smaller number of transistors, and buffer/driver circuits with reduced propagation delay. Schematics of these logic circuits are given and analyzed by computer simulation, and simulation results confirming the descriptions and conclusions are also provided.
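
As a rough behavioral illustration only (not a transistor-level model from the paper), a quaternary buffer with a high-impedance output state can be sketched as follows: when enabled, it regenerates its input to the nearest of the four logic levels; when disabled, its output floats.

```python
Z = "Z"  # high-impedance output state

def quaternary_buffer(vin, enable):
    """Behavioral model of a regenerative radix-4 buffer: snap the input
    to the nearest logic level in {0, 1, 2, 3} when enabled, otherwise
    present high impedance."""
    if not enable:
        return Z
    return min(3, max(0, round(vin)))

for vin, en in [(0.2, True), (2.7, True), (1.4, False)]:
    print(vin, en, "->", quaternary_buffer(vin, en))
```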


2006 ◽  
Vol 18 (11) ◽  
pp. 2854-2877 ◽  
Author(s):  
Yingfeng Wang ◽  
Xiaoqin Zeng ◽  
Daniel So Yeung ◽  
Zhihang Peng

The sensitivity of a neural network's output to its input and weight perturbations is an important measure for evaluating the network's performance. In this letter, we propose an approach to quantify the sensitivity of Madalines. The sensitivity is defined as the probability of output deviation due to input and weight perturbations with respect to overall input patterns. Based on the structural characteristics of Madalines, a bottom-up strategy is followed, along which the sensitivity of single neurons, that is, Adalines, is considered first, and then the sensitivity of the entire Madaline network. By means of probability theory, an analytical formula is derived for the calculation of Adalines' sensitivity, and an algorithm is designed for the computation of Madalines' sensitivity. Computer simulations are run to verify the effectiveness of the formula and algorithm. The simulation results are in good agreement with the theoretical results.
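
The letter's analytical formula is not reproduced here, but the sensitivity definition lends itself to a simple Monte Carlo check: sample uniform ±1 input patterns, apply input flips and weight perturbations, and count how often the Adaline's output deviates. The perturbation model and all parameter values below are assumptions for illustration.

```python
import random

def adaline_output(x, w):
    """Adaline: sign of the weighted sum of inputs."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

def adaline_sensitivity(w, flip_p, weight_sd, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that input/weight
    perturbations flip an Adaline's output over uniform +/-1 inputs."""
    rng = random.Random(seed)
    deviations = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in w]
        xp = [-xi if rng.random() < flip_p else xi for xi in x]  # input flips
        wp = [wi + rng.gauss(0.0, weight_sd) for wi in w]        # weight noise
        if adaline_output(x, w) != adaline_output(xp, wp):
            deviations += 1
    return deviations / trials

print(adaline_sensitivity([0.5, -1.2, 0.8, 0.3], flip_p=0.05, weight_sd=0.1))
```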

