The Mechanisms by Which Adaptive One-factor-at-a-time Experimentation Leads to Improvement

2005 ◽  
Vol 128 (5) ◽  
pp. 1050-1060 ◽  
Author(s):  
Daniel D. Frey ◽  
Rajesh Jugulum

This paper examines mechanisms underlying the phenomenon that, under some conditions, adaptive one-factor-at-a-time experiments outperform fractional factorial experiments in improving the performance of mechanical engineering systems. Five case studies are presented, each based on data from previously published full factorial physical experiments at two levels. Computer simulations of adaptive one-factor-at-a-time and fractional factorial experiments were carried out with varying degrees of pseudo-random error. For each of the five case studies, the average outcomes of both approaches are plotted as a function of the strength of the pseudo-random error. The main effects and interactions of the experimental factors in each system are presented and analyzed to illustrate how the observed simulation results arise. The case studies show that, for certain arrangements of main effects and interactions, adaptive one-factor-at-a-time experiments exploit interactions with high probability even though these designs lack the resolution to estimate interactions. Generalizing from the case studies, four mechanisms are described and the conditions under which they act are stipulated.
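
To make the adaptive one-factor-at-a-time procedure concrete, the sketch below simulates one adaptive OFAT pass on a hypothetical two-level system with pseudo-random error added to each observation. The response function true_response and all of its coefficients are illustrative assumptions, not data from the case studies.

```python
import itertools
import random

def adaptive_ofat(response, n_factors, noise_sd, rng):
    """One adaptive one-factor-at-a-time pass: start from a random two-level
    baseline, toggle each factor in turn, and keep a change only if the noisy
    observation improves on the best response seen so far."""
    setting = [rng.choice([-1, 1]) for _ in range(n_factors)]
    best = response(setting) + rng.gauss(0, noise_sd)
    for i in range(n_factors):
        trial = setting.copy()
        trial[i] = -trial[i]
        y = response(trial) + rng.gauss(0, noise_sd)
        if y > best:                      # larger-is-better response
            setting, best = trial, y
    return setting

# Hypothetical system: two main effects, one two-factor interaction, one small effect.
def true_response(x):
    a, b, c = x
    return 2.0 * a + 1.0 * b + 1.5 * a * b + 0.5 * c

rng = random.Random(0)
for noise_sd in (0.0, 1.0, 2.0):
    gains = [true_response(adaptive_ofat(true_response, 3, noise_sd, rng))
             for _ in range(2000)]
    best_possible = max(true_response(x) for x in itertools.product([-1, 1], repeat=3))
    print(f"noise sd {noise_sd}: mean fraction of best achieved = "
          f"{sum(gains) / len(gains) / best_possible:.2f}")
```

Because each factor change is retained only when the noisy observation improves on the best seen so far, the final setting tends to land on the favorable side of the A×B interaction even though the plan never estimates that interaction.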

Author(s):  
Daniel D. Frey ◽  
Rajesh Jugulum

This paper attempts to explain the empirically demonstrated phenomenon that, under some conditions, one-at-a-time experiments outperform orthogonal arrays (on average) in the parameter design of engineering systems. Five case studies are presented, each based on data from previously published full factorial experiments on actual engineering systems. Computer simulations of adaptive one-at-a-time plans and orthogonal arrays were carried out with varying degrees of pseudo-random error added to the data. The average outcomes of both approaches to optimization are plotted. For each of the five case studies, the main effects and interactions of the experimental factors are presented and analyzed to explain the observed simulation results. It is shown that, for some types of engineering systems, one-at-a-time designs consistently exploit interactions despite lacking the resolution to estimate them. It is also confirmed that orthogonal arrays are adversely affected by the confounding of main effects with interactions.
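
The confounding mechanism can be shown with a minimal sketch: in a 2^(3-1) resolution III fraction with generator C = AB, the contrast used to estimate the main effect of C also carries the AB interaction. The model coefficients below are hypothetical, chosen only to make the aliasing visible.

```python
import numpy as np

# 2^(3-1) resolution III fraction with generator C = A*B (defining relation I = ABC).
A = np.array([-1,  1, -1, 1])
B = np.array([-1, -1,  1, 1])
C = A * B                          # C's column is identical to the AB interaction column

# Hypothetical true model: y = 1.0*A + 0.5*B + 0.0*C + 2.0*(A*B), no noise.
y = 1.0 * A + 0.5 * B + 0.0 * C + 2.0 * (A * B)

# Regression-style coefficient estimates: contrast divided by the number of runs.
for name, col in (("A", A), ("B", B), ("C", C)):
    print(name, (col @ y) / 4)     # A -> 1.0, B -> 0.5, C -> 2.0 (the AB interaction)
```

The estimate for C comes out equal to the AB interaction coefficient: precisely the confounding of main effects with interactions that degrades orthogonal-array outcomes.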


1978 ◽  
Vol 22 (1) ◽  
pp. 598-598
Author(s):  
Steven M. Sidik ◽  
Arthur G. Holms

In many cases in practice an experimenter has some prior knowledge of indefinite validity concerning the main effects and interactions which would be estimable from a two-level full factorial experiment. Such information should be incorporated into the design of the experiment.


2015 ◽  
Vol 137 (9) ◽  
Author(s):  
Brian Sylcott ◽  
Jeremy J. Michalek ◽  
Jonathan Cagan

In conjoint analysis, interaction effects characterize how preference for the level of one product attribute depends on the level of another attribute. When interaction effects are negligible, a main-effects fractional factorial experimental design can be used to reduce data requirements and survey cost. This is particularly important when the presence of many parameters or levels makes full factorial designs intractable. However, if interaction effects are relevant, a main-effects design can produce biased estimates and lead to erroneous conclusions. This work investigates consumer preference interactions in the nontraditional context of visual choice-based conjoint analysis, where the conjoint attributes are parameters that define a product's shape. Although many conjoint studies assume interaction effects to be negligible, these effects may play a larger role for shape parameters. The role of interaction effects is explored in two visual conjoint case studies. The results suggest that interactions can be either negligible or dominant in visual conjoint, depending on consumer preferences. Generally, we suggest using randomized designs to avoid any bias resulting from the presence of interaction effects.
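
As one way to picture the case for randomized designs, the sketch below simulates rating-style preference data (a simplification of choice-based conjoint) from a hypothetical utility that contains a strong x1*x2 interaction. Because the attribute levels are randomized independently, the interaction term is estimable rather than aliased with a main effect. All coefficients and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical utility over three two-level shape parameters with a strong interaction.
def utility(x1, x2, x3):
    return 1.0 * x1 + 0.5 * x2 + 0.2 * x3 + 1.5 * x1 * x2

# Randomized design: attribute levels drawn independently for every profile,
# so no interaction column coincides with a main-effect column.
n = 200
X = rng.choice([-1.0, 1.0], size=(n, 3))
y = utility(X[:, 0], X[:, 1], X[:, 2]) + rng.normal(0, 0.5, n)

# Fit a model that includes the x1*x2 interaction term.
design = np.column_stack([np.ones(n), X, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(coef, 2))   # approximately [0, 1.0, 0.5, 0.2, 1.5]: interaction recovered
```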


1978 ◽  
Vol 22 (1) ◽  
pp. 599-599
Author(s):  
Joseph J. Pignatiello

It is assumed that, in a 2^k factorial experiment, there are different costs per observation at each of the factor combinations. As the number of factors, k, increases, the total number of observations in the full factorial increases rapidly, as does the cost of obtaining them all. If the experimenter can assume that certain classes of higher-order interactions are negligible, an orthogonal fractional factorial can be used instead. For any 1/2^p fraction of the full factorial, i.e., a 2^(k-p) experiment, there are 2^p feasible orthogonal fractions that could be selected. This paper develops an efficient algorithm for generating the minimum-cost such fraction. The problem is formulated as a mathematical programming problem subject to a resolution III constraint (main effects unconfounded). Computational experience is presented.
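
The paper formulates the selection as a mathematical program; the sketch below instead brute-forces the same idea for a small case, enumerating the 2^p sign choices on the generators of a 2^(5-2) resolution III family and picking the fraction with the lowest total observation cost. The per-run costs are randomly generated placeholders, not data from the paper.

```python
import itertools
import random

# Hypothetical per-run costs for a 2^5 full factorial, indexed by factor levels
# (A, B, C, D, E) in {-1, +1}.
random.seed(2)
full = list(itertools.product([-1, 1], repeat=5))
cost = {run: random.randint(1, 10) for run in full}

# 2^(5-2) resolution III family with generators D = +/-AB and E = +/-AC;
# the 2^p = 4 sign choices give the 4 feasible orthogonal fractions (8 runs each).
best = None
for s1, s2 in itertools.product([-1, 1], repeat=2):
    runs = [r for r in full if r[3] == s1 * r[0] * r[1] and r[4] == s2 * r[0] * r[2]]
    total = sum(cost[r] for r in runs)
    if best is None or total < best[0]:
        best = (total, (s1, s2), runs)

print("minimum-cost fraction:", best[0], "generator signs:", best[1])
```

Enumeration is fine when p is small; the paper's mathematical-programming formulation is aimed at finding the minimum-cost fraction efficiently in general.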


2019 ◽  
Vol 9 (4) ◽  
pp. 609-618 ◽  
Author(s):  
Ilham Kuncahyo ◽  
Syaiful Choiri ◽  
Achmad Fudholi ◽  
Ronny Martien ◽  
Abdul Rohman

Purpose: Self-nanoemulsifying drug delivery systems (SNEDDS) have recently shown great promise for enhancing drug bioavailability. Selecting appropriate compositions is the fundamental step toward a successful SNEDDS formulation. This study evaluated the effectiveness of fractional factorial design (FFD) for the selection and screening of a SNEDDS composition, and applied the most efficient FFD approach to the selection of SNEDDS components.
Methods: The types of oil, surfactant, and co-surfactant, and their concentrations, were selected as factors. A 2^6 full factorial design (FD) (64 runs), a 2^(6-1) FFD (32 runs), a 2^(6-2) FFD (16 runs), and a 2^(6-3) FFD (8 runs) were compared in terms of the main-effect contributions in each design. Ca-pitavastatin (Ca-PVT) was used as the model drug. Screening parameters such as transmittance, emulsification time, and drug load were selected as responses, followed by particle size and zeta potential for the optimized formulation.
Results: The patterns of main effects and interactions from the 2^6 full FD and the 2^(6-1) FFD were similar. The 2^(6-3) FFD lacked adequate precision for screening owing to its limited number of design points. Capryol, Tween 80, and Transcutol P were selected for development of the SNEDDS formulation, which had a particle size of 69.7 ± 5.3 nm and a zeta potential of 33.4 ± 2.1 mV.
Conclusion: The 2^(6-2) FFD was chosen as the most efficient yet adequate design for the selection and screening of the SNEDDS composition. The optimized formulation fulfilled the requirements of the quality target profile of a nanoemulsion.
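
A rough sense of why the smallest fraction loses screening precision can be had from a simulation sketch: estimate the six main effects of a hypothetical response from the full 2^6 design, a 16-run 2^(6-2) fraction, and an 8-run 2^(6-3) fraction, and compare. The effect sizes, noise level, and generators are assumptions for illustration, not the formulation factors from the study.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical screening response: six two-level factors, a few active main effects.
true_effects = np.array([3.0, 0.2, -2.0, 0.1, 1.5, 0.0])

def observe(X):
    return X @ true_effects + rng.normal(0, 0.5, len(X))

def main_effects(X):
    # Regression-style estimate: contrast divided by the number of runs.
    return np.round(X.T @ observe(X) / len(X), 2)

full = np.array(list(itertools.product([-1.0, 1.0], repeat=6)))
A, B, C, D, E, F = (full[:, i] for i in range(6))

frac_16 = full[(E == A * B * C) & (F == B * C * D)]         # 2^(6-2), resolution IV
frac_8  = full[(D == A * B) & (E == A * C) & (F == B * C)]  # 2^(6-3), resolution III

print("2^6 full (64 runs):", main_effects(full))
print("2^(6-2)  (16 runs):", main_effects(frac_16))
print("2^(6-3)  ( 8 runs):", main_effects(frac_8))
```

With only 8 runs the estimates are visibly noisier, and in the resolution III fraction any two-factor interactions would additionally bias them, which is consistent with the study's finding that the 2^(6-3) FFD lacked adequate precision.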


2017 ◽  
Vol 2017 ◽  
pp. 1-14
Author(s):  
Ye Cheng ◽  
Jianhao Hu

In conventional stochastic computation, all the input streams are Bernoulli sequences (BSs), which may result in large random error. To reduce random error and improve computational accuracy, other sequences have been proposed as alternatives to BSs. However, these sequences apply only to specific stochastic circuits, are difficult to generate in hardware, or have length constraints. New sequences without these disadvantages are therefore needed. This paper proposes a random error analysis method for stochastic computation based on the autocorrelation sequence (AS), which is more general than the conventional analysis based on BSs. The analysis shows that proper ASs can be used as input streams of stochastic circuits to reduce random error. On the basis of that conclusion, we propose a random error reduction scheme based on the maximal concentrated autocorrelation sequence (MCAS) and the BS, both of which are ASs. MCAS and BS are applicable to any combinational stochastic circuit, are easily generated in hardware, and have no length constraints, avoiding the disadvantages of the sequences used in previous work. Moreover, we apply the proposed random error reduction scheme to several typical stochastic circuits as case studies. The simulation results confirm the effectiveness of the proposed scheme.
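
For background on the baseline the paper improves upon, here is a minimal sketch of conventional stochastic computation: an AND gate multiplies two values encoded as independent unipolar Bernoulli bit streams, and the random error of the decoded product shrinks as the stream gets longer. The MCAS construction itself is the paper's contribution and is not reproduced here; the stream lengths and encoded values below are arbitrary.

```python
import random

def bernoulli_stream(p, n, rng):
    """Unipolar stochastic number: a bit stream whose 1-density encodes p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p_a, p_b, n, rng):
    """AND two independent Bernoulli streams; the output 1-density estimates p_a * p_b."""
    a = bernoulli_stream(p_a, n, rng)
    b = bernoulli_stream(p_b, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

rng = random.Random(4)
exact = 0.6 * 0.7
for n in (64, 1024, 16384):
    trials = [stochastic_multiply(0.6, 0.7, n, rng) for _ in range(200)]
    mse = sum((t - exact) ** 2 for t in trials) / len(trials)
    print(f"stream length {n:6d}: mean squared random error = {mse:.2e}")
```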


Author(s):  
N. Rajalakshmi ◽  
G. Velayutham ◽  
K. S. Dhathathreyan

This paper describes the application of statistical analysis, through experimental design methodology, to the operation of a 2.5 kW proton exchange membrane fuel cell stack, whereby robust design conditions were identified for fuel cell stack operation. The function is defined as the relationship between fuel cell power and the operating pressure and stoichiometry of the reactants. Four control factors, namely the pressures of the fuel and oxidant and the flow rates of the fuel and oxidant, are considered to select the optimized conditions for fuel cell operation. All four factors have two levels, so the full factorial design, 2^4, requires 16 experiments, while the fractional factorial design, 2^(4-1), requires 8 experiments. The experimental data were analyzed by statistical sensitivity analysis, checking the effect of each parameter on the others. Interactions between the factors were also considered along with the main effects to explain the model developed using the design of experiments. The dominant contributors to maximum fuel cell performance were found to be the air flow rate and the interaction between air pressure and air flow rate, compared to all other factors and their interactions. These fractional factorial experiments, applied here to fuel cell systems, can be extended to other ranges and to factors with more levels, with the goal of minimizing the variation caused by the various factors that influence fuel cell performance while requiring fewer trials than full factorial experiments.
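
A 2^(4-1) half fraction of the kind used here can be written down from the generator D = ABC (defining relation I = ABCD); the sketch below prints its alias structure, showing that main effects are clear of two-factor interactions while the two-factor interactions are aliased in pairs. The labels A-D stand generically for the four control factors (pressures and flow rates of fuel and oxidant); the assignment order is an arbitrary assumption.

```python
from itertools import combinations

# 2^(4-1) half fraction with generator D = ABC (defining relation I = ABCD).
factors = "ABCD"
defining_word = set(factors)

def alias(effect):
    # Multiplying an effect by the defining word (mod squares) gives its alias,
    # i.e., the symmetric difference of the letter sets.
    return "".join(sorted(set(effect) ^ defining_word))

effects = ["".join(c) for r in (1, 2) for c in combinations(factors, r)]
for e in effects:
    print(f"{e} is aliased with {alias(e)}")
```

Running this lists A with BCD, B with ACD, and so on, and pairs AB with CD, AC with BD, and AD with BC, which is why the 8-run design can separate main effects but must interpret two-factor interactions jointly.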


2000 ◽  
Vol 1719 (1) ◽  
pp. 165-174 ◽  
Author(s):  
Peter R. Stopher ◽  
David A. Hensher

Transportation planners increasingly include a stated choice (SC) experiment as part of the armory of empirical sources of information on how individuals respond to current and potential travel contexts. The accumulated experience with SC data has been heavily conditioned on analyst prejudices about the acceptable complexity of the data collection instrument, especially the number of profiles (or treatments) given to each sampled individual (and the number of attributes and alternatives to be processed). It is not uncommon for transport demand modelers to impose stringent limitations on the complexity of an SC experiment. A review of the marketing and transport literature suggests that little is known about the basis for rejecting complex designs or accepting simple designs. Although more complex designs provide the analyst with increasing degrees of freedom in the estimation of models, facilitating nonlinearity in main effects and independent two-way interactions, it is not clear what the overall behavioral gains are in increasing the number of treatments. A complex design is developed as the basis for a stated choice study, producing a fractional factorial of 32 rows. The fraction is then truncated by administering 4, 8, 16, 24, and 32 profiles to a sample of 166 individuals (producing 1,016 treatments) in Australia and New Zealand faced with the decision to fly (or not to fly) between Australia and New Zealand by either Qantas or Ansett under alternative fare regimes. Statistical comparisons of elasticities (an appropriate behavioral basis for comparisons) suggest that the empirical gains within the context of a linear specification of the utility expression associated with each alternative in a discrete choice model may be quite marginal.
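
Since the comparison is made on elasticities from a linear-in-parameters utility, a minimal binary logit sketch may help fix ideas: with hypothetical fare and time coefficients (every number below is an illustrative assumption, not an estimate from the study), the direct point elasticity of a choice probability with respect to fare is beta_fare * fare * (1 - P).

```python
import math

# Hypothetical linear utility: V = asc + beta_fare * fare + beta_time * time.
beta_fare, beta_time = -0.012, -0.03

def choice_prob(v_own, v_other):
    # Binary logit probability of choosing the "own" alternative.
    return 1.0 / (1.0 + math.exp(v_other - v_own))

def fare_elasticity(fare, p):
    # Direct point elasticity of a logit choice probability with respect to fare.
    return beta_fare * fare * (1.0 - p)

v_qantas = 0.5 + beta_fare * 400 + beta_time * 180
v_ansett = 0.0 + beta_fare * 380 + beta_time * 190
p = choice_prob(v_qantas, v_ansett)
print(f"P(choose first airline) = {p:.2f}, fare elasticity = {fare_elasticity(400, p):.2f}")
```

Comparing such elasticities across models estimated from 4, 8, 16, 24, and 32 profiles is the kind of behavioral comparison the paper reports as yielding only marginal gains from the more complex designs.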

