Being Uncertain in Chromatographic Calibration—Some Unobvious Details in Experimental Design

Molecules, 2021, Vol 26 (22), pp. 7035
Author(s): Łukasz Komsta, Katarzyna Wicha-Komsta, Tomasz Kocki

This is an introductory tutorial and review of the uncertainty problem in chromatographic calibration. It emphasizes some unobvious but important details influencing errors in calibration curve estimation and uncertainty in prediction, as well as the connections and dependencies between them, all from various perspectives of uncertainty measurement. Nonuniform D-optimal designs derived from the Fedorov theorem are computed and presented. As an example, all possible designs of 24 calibration samples (3–8, 4–6, 6–4, 8–3 and 12–2 concentration levels and replicates, both uniform and D-optimal) are compared in the context of many optimality criteria. It can be concluded that there are only two independent (orthogonal, but slightly complex) trends in the optimality of these designs. The conclusions are important, as uniform designs with many concentration levels are not the best choice, contrary to some intuitive perception. Nonuniform designs are a visibly better alternative in most calibration cases.
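To make the underlying design comparison concrete (this is our illustration, not code from the paper): for a straight-line calibration model, classical theory places the D-optimal design at the two ends of the concentration range, and the D-criterion det(X'X) lets uniform and two-point 24-sample designs be compared directly. A minimal Python sketch, with a hypothetical 0.5–10 concentration range:

```python
import numpy as np

def d_criterion(concentrations):
    """det(X'X) for the straight-line calibration model y = b0 + b1*x."""
    X = np.column_stack([np.ones(len(concentrations)), concentrations])
    return np.linalg.det(X.T @ X)

c_max = 10.0  # hypothetical upper end of the calibration range
designs = {
    "uniform, 6 levels x 4 replicates":  np.repeat(np.linspace(0.5, c_max, 6), 4),
    "uniform, 12 levels x 2 replicates": np.repeat(np.linspace(0.5, c_max, 12), 2),
    "two-point D-optimal, 12 + 12":      np.repeat([0.5, c_max], 12),
}
for name, c in designs.items():
    print(f"{name:34s} det(X'X) = {d_criterion(c):10.1f}")
```

Running this shows the two-point design maximizing the determinant, echoing the paper's point that spreading samples over many levels is not automatically the better choice.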

2021, Vol 1 (1), pp. 49-58
Author(s): Mårten Schultzberg, Per Johansson

Recently a computation-based experimental design strategy called rerandomization has been proposed as an alternative or complement to traditional blocked designs. The idea of rerandomization is to remove from consideration those allocations with large imbalances in observed covariates according to a balance criterion, and then to randomize within the set of acceptable allocations. Based on the Mahalanobis distance criterion for balancing the covariates, we show that asymptotic inference to the population from which the units in the sample are randomly drawn is possible using only the set of best, or 'optimal', allocations. Finally, we show that for the optimal and near-optimal designs, the rather complex asymptotic sampling distribution derived by Li et al. (2018) is well approximated by a normal distribution.
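A minimal sketch of the rerandomization loop with the Mahalanobis balance criterion may help fix the idea. The covariate data and the acceptance threshold below are purely illustrative; the paper's 'optimal' allocations correspond to making this threshold very tight:

```python
import numpy as np

rng = np.random.default_rng(0)

def mahalanobis_imbalance(X, assign):
    """Mahalanobis distance between treated and control covariate means."""
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    cov = np.cov(X, rowvar=False)
    n1, n0 = assign.sum(), (assign == 0).sum()
    # covariance of the mean difference under complete randomization
    return diff @ np.linalg.solve(cov * (1 / n1 + 1 / n0), diff)

def rerandomize(X, n_treated, threshold, max_tries=10_000):
    """Redraw allocations until the imbalance falls below the threshold."""
    n = len(X)
    for _ in range(max_tries):
        assign = np.zeros(n, dtype=int)
        assign[rng.choice(n, n_treated, replace=False)] = 1
        if mahalanobis_imbalance(X, assign) <= threshold:
            return assign
    raise RuntimeError("no acceptable allocation found; relax the threshold")

X = rng.normal(size=(40, 3))  # 40 units, 3 observed covariates (simulated)
allocation = rerandomize(X, n_treated=20, threshold=1.0)
```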


2019, Vol 5 (344), pp. 17-27
Author(s): Małgorzata Graczyk, Bronisław Ceranka

The problem of determining unknown measurements of objects in the model of spring balance weighing designs is presented. These designs are considered under the assumption that the experimental errors are uncorrelated and have equal variances. The relations between the parameters of weighing designs are examined from the point of view of optimality criteria. The paper studies designs in which the generalized variance of the estimators, i.e. the determinant of their covariance matrix, is as small as possible: D-optimal designs. Highly D-efficient designs are determined for classes in which a D-optimal design does not exist. Necessary and sufficient conditions under which a highly efficient design exists, together with methods of its construction and relevant examples, are given.
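To make the D-criterion concrete for spring balance designs, where each design matrix entry is 1 if an object is placed on the pan in a given weighing and 0 otherwise, the following sketch compares det(X'X) for two small candidate designs. The example designs are ours, not taken from the paper:

```python
import numpy as np

def d_value(X):
    """det(X'X); larger is better under the D-criterion."""
    X = X.astype(float)
    return np.linalg.det(X.T @ X)

# Two candidate spring balance designs for p = 3 objects in n = 4 weighings
# (1 = object on the pan, 0 = left off).
X1 = np.array([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 1],
               [1, 1, 1]])  # each object alone, then all together
X2 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1],
               [1, 1, 1]])  # objects in pairs, then all together

for name, X in (("singles + all", X1), ("pairs + all", X2)):
    print(f"{name:14s} det(X'X) = {d_value(X):.0f}")
```

Here the pairwise design gives det(X'X) = 7 against 4 for the one-at-a-time design, a small instance of why weighing several objects together is more informative.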


Author(s): Stephen L. Canfield, Daniel L. Chlarson, Alexander Shibakov, Patrick V. Hull

Researchers in the field of optimal synthesis of compliant mechanisms have been working to develop tools that yield distributed compliant devices to perform specific tasks. However, it has been demonstrated in the literature that much of this work has resulted in mechanisms that localize compliance rather than distribute it as desired. In fact, Yin and Ananthasuresh (2003) [1] demonstrate that, under the current formulation of the optimality criteria and analysis via the finite element (FE) technique, a lumped compliant device will always exist as the minimizing solution to the objective function. Adding constraints on allowable strain simply moves the solution away from this objective. Therefore, the standard optimality criteria need to be modified. Yin and Ananthasuresh [1] proposed and compared several approaches that include distributivity-based measures within the optimality criteria, and demonstrated the effectiveness of this approach. In this paper, the authors propose to build on that work. In a similar manner, a general approach to the topology synthesis problem will be suggested to yield mechanisms in which the compliance is distributed throughout the device. This work will be based on the idea of including compliance distribution directly within the objective functions, while addressing some of the potential limiting factors in past approaches. The technique will be generalized to allow simple addition of criteria in the future, and to deliver optimal designs through to manufacture. The work will first revisit and propose several quantitative definitions for distributed compliant devices. Then, a multi-objective formulation based on a non-dominated sort and Pareto set method will be incorporated that will provide information on the nature of the problem and the compatibility of the employed objective functions.
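The multi-objective machinery the authors invoke rests on non-dominated (Pareto) sorting. A minimal sketch of extracting the Pareto front for minimization, with hypothetical objective values standing in for the compliance-distribution and task objectives:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F, minimizing all objectives.

    F is an (n_designs, n_objectives) array, e.g. one column per
    (hypothetical) objective such as distributivity and strain energy.
    """
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if F[j] <= F[i] everywhere and < in at least one place
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_front(F))  # -> [0 1 2]; design [3, 3] is dominated by [2, 2]
```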


2015, Vol 43 (1), pp. 30-56
Author(s): Linwei Hu, Min Yang, John Stufken

2021, Vol 15 (4)
Author(s): Kirsten Schorning, Holger Dette

We consider the problem of designing experiments for the comparison of two regression curves describing the relation between a predictor and a response in two groups, where the data may be dependent both within and between the groups. In order to derive efficient designs we use results from stochastic analysis to identify the best linear unbiased estimator (BLUE) in a corresponding continuous model. It is demonstrated that, in general, simultaneous estimation using the data from both groups yields more precise results than estimating the parameters separately in the two groups. Using the BLUE from simultaneous estimation, we then construct an efficient linear estimator for finite sample size by minimizing the mean squared error between the optimal solution in the continuous model and its discrete approximation with respect to the weights of the linear estimator. Finally, the optimal design points are determined by minimizing the maximal width of a simultaneous confidence band for the difference of the two regression functions. The advantages of the new approach are illustrated by means of a simulation study, which shows that the optimal designs yield substantially narrower confidence bands than uniform designs.
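The design question can be made concrete in a much cruder setting than the paper's dependent-data model: two independent straight-line fits with homoscedastic errors, comparing the maximal pointwise band half-width for the difference of the curves under a uniform and an endpoint design. A simplified sketch under those assumptions:

```python
import numpy as np

def max_band_halfwidth(x_design, sigma=1.0, z=1.96,
                       grid=np.linspace(0, 1, 101)):
    """Max pointwise half-width of the band for the difference of two
    straight lines fitted independently in two groups, both groups
    using the same design points (a strong simplification)."""
    X = np.column_stack([np.ones(len(x_design)), x_design])
    XtX_inv = np.linalg.inv(X.T @ X)
    G = np.column_stack([np.ones(len(grid)), grid])
    # Var(yhat1(x) - yhat2(x)) doubles when both groups reuse the design
    var = 2 * sigma**2 * np.einsum('ij,jk,ik->i', G, XtX_inv, G)
    return z * np.sqrt(var.max())

uniform = np.linspace(0, 1, 10)
endpoint = np.repeat([0.0, 1.0], 5)  # two-point, D-optimal-style design
print(max_band_halfwidth(uniform), max_band_halfwidth(endpoint))
```

The endpoint design yields the narrower maximal band here, in line with the paper's broader finding for its far more general setting.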


1989, Vol 38 (3-4), pp. 187-194
Author(s): Mike Jacroux, Rita Saha Ray

In this paper we consider the determination of optimal designs in experimental situations requiring the use of a block design having v treatments assigned to experimental units arranged in b blocks of size k. Using majorization arguments, it is shown that a design d* having an incidence matrix of the form [Formula: see text] is optimal under a wide class of optimality criteria.
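The optimality criteria at issue are functions of a block design's information matrix C = diag(r) - N diag(1/k) N', where N is the treatment-block incidence matrix. A short sketch computing the A-, D- and E-values for a small balanced incomplete block design (our example, not the design d* of the paper):

```python
import numpy as np

def c_matrix(N):
    """Information matrix C = diag(r) - N diag(1/k) N' of a block design,
    where N is the v x b treatment-block incidence matrix."""
    r = N.sum(axis=1)  # treatment replications
    k = N.sum(axis=0)  # block sizes
    return np.diag(r.astype(float)) - (N / k) @ N.T

# BIBD example: v = 3 treatments in b = 3 blocks of size k = 2
N = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
ev = np.sort(np.linalg.eigvalsh(c_matrix(N)))[1:]  # drop the zero eigenvalue
print("A-value:", np.sum(1 / ev))  # minimize for A-optimality
print("D-value:", np.prod(ev))     # maximize for D-optimality
print("E-value:", ev.min())        # maximize for E-optimality
```

Criteria of this eigenvalue-based type are exactly the kind of "wide class" over which majorization arguments can establish optimality simultaneously.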


Author(s): Ruichen Jin, Wei Chen, Agus Sudjianto

The metamodeling approach has been widely used in engineering design because of the high computational cost of high-fidelity simulations. The accuracy of metamodels is directly related to the experimental designs used. Optimal experimental designs have been shown to have good "space filling" and projective properties. However, the high cost of constructing them limits their use. In this paper, a new algorithm for constructing optimal experimental designs is developed. The work involves two major developments. One is an efficient global optimization search algorithm, named the enhanced stochastic evolutionary (ESE) algorithm. The other is a set of efficient algorithms for evaluating optimality criteria. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time, the number of exchanges needed to generate new designs, and the optimality criteria achieved. The algorithm is also flexible enough to construct various classes of optimal designs while retaining certain structural properties.
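The element-exchange move underlying such search algorithms is easy to sketch. The following is plain hill climbing on a maximin-distance Latin hypercube, not the full ESE acceptance scheme of the paper (which adds stochastic, threshold-based control of worsening moves); the run sizes are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def random_lhd(n, m):
    """Random n-run, m-factor Latin hypercube design (levels 0..n-1)."""
    return np.column_stack([rng.permutation(n) for _ in range(m)])

def maximin(D):
    """Minimum pairwise distance; larger means better space filling."""
    return pdist(D).min()

def exchange_search(D, n_iter=2000):
    """Hill climb with random within-column element swaps; a within-column
    swap preserves the Latin hypercube structure."""
    best = maximin(D)
    for _ in range(n_iter):
        col = rng.integers(D.shape[1])
        i, j = rng.choice(D.shape[0], 2, replace=False)
        D[i, col], D[j, col] = D[j, col], D[i, col]
        score = maximin(D)
        if score >= best:
            best = score
        else:  # revert a worsening swap
            D[i, col], D[j, col] = D[j, col], D[i, col]
    return D, best

D, score = exchange_search(random_lhd(12, 3))
print("maximin distance after search:", score)
```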


Author(s): K. M. Abdelbasit

Experimental designs for nonlinear problems have to a large extent relied on optimality criteria originally proposed for linear models. Optimal designs obtained for nonlinear models are functions of the unknown model parameters. They cannot, therefore, be directly implemented without some knowledge of the very parameters whose estimation is sought. The natural way out is to adopt a sequential or Bayesian approach. Another is to utilize available estimates or guesses. In this article we provide a brief historical account of the subject and discuss the optimality criteria commonly used for nonlinear models, the problems associated with them, and ways of overcoming those problems. We also discuss the robustness of locally optimal designs. A brief review of sequential and Bayesian procedures is given. Finally, we discuss the alternative design criteria of constant information and minimum bias, and pose some problems for future work.
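The parameter dependence of locally optimal designs is easy to demonstrate on a one-parameter exponential model eta(x, theta) = exp(-theta * x), our illustration rather than one from the article: the D-optimal single support point maximizes the squared parameter sensitivity and lands at x = 1/theta, so it moves with the initial guess:

```python
import numpy as np

def sensitivity_sq(x, theta):
    """Squared sensitivity d(eta)/d(theta) for eta(x) = exp(-theta * x)."""
    return (x * np.exp(-theta * x)) ** 2

grid = np.linspace(0.01, 10, 2000)
for theta_guess in (0.5, 1.0, 2.0):
    x_star = grid[np.argmax(sensitivity_sq(grid, theta_guess))]
    print(f"theta guess = {theta_guess}: optimal point ~ {x_star:.2f} (= 1/theta)")
```

A poor guess therefore shifts the whole design, which is exactly why the sequential, Bayesian, and robustness considerations reviewed in the article matter.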


2008, Vol 18 (1), pp. 63-74
Author(s): Marija Kuzmanovic

Conjoint analysis is a research technique for measuring consumer preferences, and a method for simulating consumers' likely reactions to changes in current products or to new products introduced into an existing competitive market. One of the most critical steps in applying conjoint analysis is the construction of the experimental design. The purpose of an experimental design is to give a rough overall idea of the shape of the experimental response surface while requiring only a relatively small number of runs. Ideally, these designs are orthogonal and balanced. In practice, though, it is hard to construct optimal designs, so near-optimal, efficient designs are constructed instead. There are several ways to quantify the relative efficiency of experimental designs. The choice of measure determines which types of experimental designs are favored, as well as the algorithms for choosing efficient designs. In this paper an algorithm is proposed that combines one standard and one non-standard optimality criterion. Computational experiments were carried out, and the results of a comparison with the algorithm implemented in the commercial package SPSS confirm the efficiency of the proposed algorithm.
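One widely used way to quantify relative efficiency, plausibly among the measures the paper has in mind, is the D-efficiency of an effects-coded design matrix, which equals 100% for an orthogonal balanced design. A minimal sketch with a hypothetical two-attribute design:

```python
import numpy as np

def d_efficiency(X):
    """Relative D-efficiency (in %) of a design matrix with effects-coded
    (+1/-1) columns; 100% corresponds to an orthogonal balanced design."""
    n, p = X.shape
    return 100 * np.linalg.det(X.T @ X) ** (1 / p) / n

# Hypothetical 2-attribute, 2-level full factorial, effects-coded,
# with the two-way interaction as a third column:
X = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])
print(d_efficiency(X))  # -> 100.0 for this orthogonal design
```

Dropping runs from or unbalancing such a design pushes the value below 100, which is what search algorithms of the kind proposed here try to minimize losing.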


1998, Vol 20 (1), pp. 9-15
Author(s): Maria Fernanda Pimentel, Benício de Barros Neto, Teresa Cristina B. Saldanha, Mário César Ugulino Araújo

A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, the concentration levels, and the concentration ranges suitable for achieving an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data.
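The core quantity behind such confidence interval plots can be sketched directly: the half-width of the confidence interval for the fitted calibration line at each concentration, given a candidate design and a residual standard deviation. The designs, range, and sigma below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def ci_halfwidth(x_design, x_eval, sigma=1.0, alpha=0.05):
    """Half-width of the confidence interval for the fitted line
    y = b0 + b1*x at each point of x_eval, for a given design."""
    X = np.column_stack([np.ones(len(x_design)), x_design])
    XtX_inv = np.linalg.inv(X.T @ X)
    t = stats.t.ppf(1 - alpha / 2, df=len(x_design) - 2)
    G = np.column_stack([np.ones(len(x_eval)), x_eval])
    se = sigma * np.sqrt(np.einsum('ij,jk,ik->i', G, XtX_inv, G))
    return t * se

grid = np.linspace(0, 10, 11)
print(ci_halfwidth(np.repeat(np.linspace(0, 10, 6), 2), grid))  # uniform
print(ci_halfwidth(np.repeat([0.0, 10.0], 6), grid))            # D-optimized
```

Plotting these half-widths against concentration reproduces, in miniature, the kind of design comparison the program automates.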

