Additive Dose Response Models: Explicit Formulations and the Loewe Additivity Consistency Condition

2017 ◽  
Author(s):  
Simone Lederer ◽  
Tjeerd M. H. Dijkstra ◽  
Tom Heskes

High-throughput techniques allow for massive screening of drug combinations. To find combinations that exhibit an interaction effect, one filters for promising compound combinations by comparing them to a response without interaction. A common principle for no interaction is Loewe Additivity, which rests on the assumptions that no compound interacts with itself and that both compounds' doses for a given effect are equivalent. For the model to be consistent, the doses of both compounds have to be proportional. We call this restriction the Loewe Additivity Consistency Condition (LACC). We derive explicit and implicit null reference models from the Loewe Additivity principle that are equivalent when the LACC holds. Of these two formulations, the implicit one is the known General Isobole Equation [1], whereas the explicit one is the novel contribution. The LACC is violated in a significant number of cases, and in that scenario the two models make different predictions. We analyze two non-interactive drug-screening data sets [2, 3] and show that the LACC is mostly violated, so that Loewe Additivity is not well defined. Further, we compare the measurements of the non-interactive cases of both data sets to the theoretical null reference models in terms of bias and mean squared error. We demonstrate that the explicit formulation of the null reference model leads to smaller mean squared errors than the implicit one and is much faster to compute.
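A minimal sketch of the implicit null model, the General Isobole Equation: given monotherapy dose–response curves, the predicted no-interaction effect e of a dose pair (d1, d2) solves d1/D1(e) + d2/D2(e) = 1, where Di(e) is the dose of compound i alone that produces effect e. The Hill parameterisation and the bisection solver below are illustrative choices, not the paper's implementation:

```python
def hill(d, emax, ec50, n):
    """Hill dose-response curve: effect of dose d given alone."""
    return emax * d**n / (ec50**n + d**n)

def inv_hill(e, emax, ec50, n):
    """Inverse Hill curve: dose producing effect e (0 < e < emax)."""
    return ec50 * (e / (emax - e)) ** (1.0 / n)

def loewe_effect(d1, d2, p1, p2, tol=1e-10):
    """Solve the General Isobole Equation d1/D1(e) + d2/D2(e) = 1
    for the predicted effect e by bisection. p1, p2 are the Hill
    parameters (emax, ec50, n) of each compound alone."""
    lo, hi = 0.0, min(p1[0], p2[0]) * (1.0 - 1e-9)
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        index = d1 / inv_hill(e, *p1) + d2 / inv_hill(e, *p2)
        if index > 1.0:      # equivalent doses too small: true effect is larger
            lo = e
        else:
            hi = e
    return 0.5 * (lo + hi)
```

A sham combination (a compound combined with itself) is a quick sanity check: with identical Hill curves, the predicted effect of (d/2, d/2) equals the monotherapy effect of d, as Loewe Additivity demands.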

2022 ◽  
Vol 6 (1) ◽  
pp. 29
Author(s):  
Zulqurnain Sabir ◽  
Muhammad Asif Zahoor Raja ◽  
Thongchai Botmart ◽  
Wajaree Weera

In this study, a novel design of a second kind of nonlinear Lane–Emden prediction differential singular model (NLE-PDSM) is presented. The numerical solutions of this model were investigated via a neuro-evolutionary computing intelligent solver using artificial neural networks (ANNs) optimized by a global search with genetic algorithms (GAs) and a local refinement with the active-set method (ASM), i.e., ANN-GAASM. The novel NLE-PDSM was derived from the standard LE equation and the PDSM, along with the details of singular points, prediction terms and shape factors. The modeling strength of the ANN was used to create a merit function for the second kind of NLE-PDSM based on the mean squared error, and optimization was performed through the GAASM. The corroboration, validation and excellence of the ANN-GAASM were established for three distinct problems through comparative studies against exact solutions, on the basis of stability, convergence and robustness. Furthermore, statistical investigations confirmed the worth of the proposed scheme.
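The merit-function idea — score a candidate solution by the mean squared residual of the differential equation plus the squared error in the initial conditions — can be sketched with finite differences. The standard Lane–Emden form y'' + (2/x)y' + y^m = 0, y(0) = 1, y'(0) = 0 is used here purely for illustration; the paper's prediction-model variant adds further prediction and shape-factor terms, and its trial solution is an ANN rather than a grid vector:

```python
def merit(y, h, m=0):
    """MSE-type fitness of a discretised trial solution y (uniform grid
    of spacing h) for the Lane-Emden equation y'' + (2/x) y' + y^m = 0
    with y(0) = 1, y'(0) = 0. Lower is better; the exact solution
    scores (near) zero."""
    res = []
    for i in range(1, len(y) - 1):
        x = i * h
        d1 = (y[i + 1] - y[i - 1]) / (2.0 * h)        # central y'
        d2 = (y[i + 1] - 2.0 * y[i] + y[i - 1]) / h**2  # central y''
        res.append((d2 + (2.0 / x) * d1 + y[i] ** m) ** 2)
    e_ode = sum(res) / len(res)                 # mean squared ODE residual
    e_ic = (y[0] - 1.0) ** 2 + ((y[1] - y[0]) / h) ** 2  # initial conditions
    return e_ode + e_ic
```

For m = 0 the exact solution is y = 1 − x²/6, and central differences reproduce a quadratic exactly, so the merit collapses to the tiny forward-difference error in y'(0); a GA/ASM loop would minimise this value over the trial-solution parameters.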


2019 ◽  
Author(s):  
Shanaz A. Ghandhi ◽  
Igor Shuryak ◽  
Shad R. Morton ◽  
Sally A. Amundson ◽  
David J. Brenner

In the event of a nuclear attack or radiation event, there would be an urgent need to assess and reconstruct the dose to which hundreds or thousands of individuals were exposed. These measurements would require a rapid assay to facilitate triage and medical management based on dose. Our approaches to developing rapid assays for dose reconstruction using transcriptomics have led to the identification of gene sets with potential for use in the field, but they need further testing. This was a proof-of-principle study of new methods using radiation-responsive genes to generate quantitative, rather than categorical, radiation-dose reconstructions based on a blood sample. We used a new normalization method to reduce the effects of variability of gene signals in unirradiated samples across studies; developed a quantitative dose-reconstruction method that is generally under-utilized compared to categorical methods; and combined these to determine a gene set as a reconstructor. Our dose-reconstruction biomarker was trained on two data sets and tested on two independent ones. It predicted dose up to 4.5 Gy with a root mean squared error (RMSE) of ± 0.35 Gy on test data sets from the same platform, and up to 6.0 Gy with an RMSE of 1.74 Gy on a data set from a different platform.
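The two ingredients here — aligning baseline gene signals across studies, and scoring reconstructions by RMSE — might look like the following generic sketch. Subtracting each study's unirradiated-control mean is an illustrative stand-in for the paper's normalization method, not its exact procedure, and the gene name in the test is an arbitrary example:

```python
import math

def normalize_to_controls(signals, control_idx):
    """Per-gene baseline alignment: subtract the mean signal of the
    unirradiated control samples so that studies/platforms share a
    common zero point. signals: {gene: [log-expression per sample]};
    control_idx: positions of the unirradiated samples."""
    out = {}
    for gene, vals in signals.items():
        base = sum(vals[i] for i in control_idx) / len(control_idx)
        out[gene] = [v - base for v in vals]
    return out

def rmse(predicted, actual):
    """Root mean squared error of reconstructed vs. delivered dose (Gy)."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))
```

With baselines aligned, a regression trained on one platform can be applied to another, and the RMSE in Gy is the headline figure of merit, as in the abstract above.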


2016 ◽  
Vol 8 (3) ◽  
pp. 321-339
Author(s):  
R. Pandey ◽  
K. Yadav ◽  
N. S. Thakur

The present paper provides alternative improved Factor-Type (F-T) estimators of the population mean in the presence of item non-response. The proposed estimators are shown to be more efficient than four existing estimators, which in turn are more efficient than the usual ratio and mean estimators. Optimum conditions for minimum mean squared error are obtained for the new estimators. Empirical comparisons based on three different data sets establish that the proposed estimators record the least mean squared error, and hence a substantial gain in Percentage Relative Efficiency (P.R.E.), over these five contemporary estimators.
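For reference, the efficiency yardstick used in such comparisons — Percentage Relative Efficiency — and the classical ratio estimator that serves as a baseline can be stated in a few lines (this is the textbook ratio estimator, not the paper's F-T family):

```python
def ratio_estimate(y, x, x_pop_mean):
    """Classical ratio estimator of the population mean of y, using a
    correlated auxiliary variable x whose population mean is known:
    ybar * (Xbar / xbar)."""
    y_bar = sum(y) / len(y)
    x_bar = sum(x) / len(x)
    return y_bar * x_pop_mean / x_bar

def pre(mse_candidate, mse_reference):
    """Percentage Relative Efficiency of a candidate estimator over a
    reference: 100 * MSE_ref / MSE_candidate. Values above 100 mean
    the candidate has the smaller mean squared error."""
    return 100.0 * mse_reference / mse_candidate
```

An empirical comparison of the kind reported above amounts to computing each estimator's MSE over repeated samples and tabulating `pre` against the chosen reference.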


Author(s):  
Sofi Mudasir Ahad ◽  
Sheikh Parvaiz Ahmad ◽  
Sheikh Aasimeh Rehman

In this paper, Bayesian and non-Bayesian methods are used for parameter estimation of the weighted Rayleigh (WR) distribution. Posterior distributions are derived under the assumption of informative and non-informative priors. The Bayes estimators and associated risks are obtained under different symmetric and asymmetric loss functions. Results are compared on the basis of posterior risk and mean squared error using simulated and real-life data sets. The study shows that, for estimating the scale parameter of the weighted Rayleigh distribution, the entropy loss function under the Gumbel type II prior is preferable. Also, the Bayesian method of estimation, having the least values of mean squared error, gives better results than the maximum likelihood method of estimation.
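How the loss function changes the Bayes estimator can be made concrete: given posterior draws of the scale parameter (however obtained), the estimator under squared error loss is the posterior mean, while under the entropy loss L(t, θ) ∝ t/θ − log(t/θ) − 1 it is the reciprocal of the posterior mean of 1/θ (set the derivative of the posterior expected loss to zero). This is a sketch of the generic decision-theoretic step, not the paper's closed-form WR estimators:

```python
def bayes_sel(draws):
    """Bayes estimator under squared error loss: the posterior mean."""
    return sum(draws) / len(draws)

def bayes_entropy(draws):
    """Bayes estimator under entropy loss t/theta - log(t/theta) - 1:
    the reciprocal of the posterior mean of 1/theta, i.e. the
    harmonic mean of the posterior draws."""
    return len(draws) / sum(1.0 / d for d in draws)
```

Since the harmonic mean never exceeds the arithmetic mean, the entropy-loss estimator shrinks the scale estimate downward relative to the squared-error one, which is the kind of asymmetry the loss-function comparison above is probing.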


2018 ◽  
Vol 44 (1) ◽  
pp. 25-44
Author(s):  
Sandip Sinharay

The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. According to the method, a subscore has added value if the reliability of the subscore is larger than a quantity referred to as the proportional reduction in mean squared error of the total score. This article shows how well-known statistical tests can be used to determine the added value of subscores and augmented subscores. The usefulness of the suggested tests is demonstrated using two operational data sets.
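The decision rule itself is a one-liner once the two quantities are in hand. The PRMSE computation below uses the standard classical-test-theory decomposition in which the subscore's measurement error is part of the total score's error; it is a sketch of Haberman's quantity under that setup, assembled from observed variances, the observed subscore–total covariance, and the subscore reliability:

```python
def prmse_total(cov_sx, var_s, rel_s, var_x):
    """PRMSE of the observed total score X as an estimate of the true
    subscore: rho^2(S_true, X) = Cov(S_true, X)^2 / (Var(S_true) Var(X)).
    Under CTT with the subscore's error contained in the total's error,
    Cov(S_true, X) = Cov(S, X) - (1 - rel_s) * Var(S) and
    Var(S_true) = rel_s * Var(S)."""
    true_cov = cov_sx - (1.0 - rel_s) * var_s
    return true_cov ** 2 / (rel_s * var_s * var_x)

def has_added_value(rel_s, prmse):
    """Haberman's rule: report the subscore only if its reliability
    exceeds the total score's PRMSE."""
    return rel_s > prmse
```

The statistical tests discussed in the article put a sampling distribution around exactly this comparison, rather than treating the point estimates of reliability and PRMSE as exact.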


2018 ◽  
Vol 3 (1) ◽  
pp. 24-32
Author(s):  
Muhammad Ali ◽  
Muhammad Khalil ◽  
Muhammad Hanif ◽  
Nasir Jamal ◽  
Usman Shahzad

In this research study, modified family of estimators is proposed to estimate the population variance of the study variable when the population variance, quartiles, median and the coefficient of correlation of auxiliary variable are known. The expression of bias and mean squared error (MSE) of the proposed estimator are derived. Comparisons of the proposed estimator with the other existing are conducted estimators. The results obtained were illustrated numerically by using primary data sets. Theoretical and numerical justification of the proposed estimator was done to show its dominance.


Author(s):  
HENRIK BOSTRÖM

Probability estimation trees (PETs) generalize classification trees in that they assign class probability distributions instead of class labels to examples that are to be classified. This property has been demonstrated to allow PETs to outperform classification trees with respect to ranking performance, as measured by the area under the ROC curve (AUC). It has further been shown that the use of probability correction improves the performance of PETs. This has led to the use of probability correction also in forests of PETs. However, it was recently observed that probability correction may in fact deteriorate the performance of forests of PETs. A more detailed study of the phenomenon is presented and the reasons behind this observation are analyzed. An empirical investigation is presented, comparing forests of classification trees to forests of both corrected and uncorrected PETs on 34 data sets from the UCI repository. The experiment shows that a small forest (10 trees) of probability-corrected PETs gives a higher AUC than a similar-sized forest of classification trees, hence providing evidence in favor of using forests of probability-corrected PETs. However, the picture changes when the forest size is increased, as the AUC is no longer improved by probability correction. For accuracy and the squared error of predicted class probabilities (Brier score), probability correction even has a negative effect. An analysis of the mean squared error of the trees in the forests and of their variance shows that although probability correction results in trees that are more correct on average, the variance is reduced at the same time, leading to an overall loss of performance for larger forests. The main conclusions are that probability correction should only be employed in small forests of PETs, and that for larger forests, classification trees and PETs are equally good alternatives.
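The two quantities at play are simple to state. Assuming the probability correction in question is the usual Laplace correction at the leaves (a common choice for PETs), a sketch — note how the correction shrinks leaf estimates toward the uniform distribution, which is exactly the variance-reducing effect analysed above:

```python
def laplace_probs(class_counts):
    """Laplace-corrected class probabilities at a tree leaf:
    (n_c + 1) / (N + k) for k classes, shrinking the raw relative
    frequencies toward the uniform distribution 1/k."""
    k = len(class_counts)
    n = sum(class_counts)
    return [(c + 1.0) / (n + k) for c in class_counts]

def brier_score(prob_lists, true_idx):
    """Squared error of predicted class probabilities (Brier score),
    averaged over examples; 0 is perfect."""
    total = 0.0
    for probs, t in zip(prob_lists, true_idx):
        total += sum((p - (1.0 if j == t else 0.0)) ** 2
                     for j, p in enumerate(probs))
    return total / len(prob_lists)
```

In a forest, the ensemble prediction averages the per-tree distributions, so shrinking every tree toward uniform reduces the between-tree variance that large ensembles would otherwise average away on their own.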


Chemosensors ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 23
Author(s):  
Julia Ashina ◽  
Vasily Babain ◽  
Dmitry Kirsanov ◽  
Andrey Legin

This work discusses the quantification of rare earth metals in a complex mixture using a novel multi-ionophore approach based on potentiometric sensor arrays. Three compounds previously tested as extracting agents in the reprocessing of spent nuclear fuel were applied as ionophores in polyvinyl chloride (PVC)-plasticized membranes of potentiometric sensors. Seven types of sensors containing these ionophores were prepared and assembled into a sensor array. The multi-ionophore array's performance was evaluated in the analysis of Ln3+ mixtures and compared to that of conventional mono-ionophore sensors. It was demonstrated that a multi-ionophore array can yield RMSEP (root mean squared error of prediction) values not exceeding 0.15 logC for the quantification of individual lanthanides in binary mixtures over a concentration range of 5 to 3 pLn3+.


Author(s):  
S. K. Yadav ◽  
Dinesh Sharma ◽  
Julius Alade

Introduction: Variation is an inherent phenomenon, whether in natural or man-made things, so it is important to estimate it. Various authors have worked on improved estimation of the population variance utilizing known auxiliary parameters for better policy making. Methods: In this article, a new Searls ratio-type class of estimators is suggested for improved estimation of the population variance of the study variable. As the suggested estimator is biased, its bias and mean squared error (MSE) are derived up to a first-order approximation. The optimum values of the Searls characterizing scalars are obtained, and the minimum MSE of the introduced estimator is attained at these optima. A theoretical comparison between the suggested estimator and the competing estimators is made through their mean squared errors, and the efficiency conditions of the suggested estimator over the competitors are obtained. These theoretical conditions are verified using natural data sets: R code computing the biases and MSEs of the suggested and competing estimators is developed and applied to the three natural populations in Naz et al. (2019), and the empirical study is carried out in R. Results: The MSEs of the suggested and the competing estimators are obtained for the three natural populations, and the estimator with the least MSE is recommended for practical applications. Discussion: The aim of finding the most efficient estimator is fulfilled through proper use of the auxiliary parameters obtained from the known auxiliary variable. The suggested estimator may be used for improved estimation of the population variance. Conclusion: The introduced estimator has the least MSE compared to the competing estimators of the population variance for all three natural populations.
Thus, it may be recommended for application in various fields.
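For concreteness, the best-known member of the Searls shrinkage idea for variance: scale the sample variance by an MSE-optimal constant. Under normality that constant is (n − 1)/(n + 1); in general it depends on the population kurtosis, and the class proposed above generalises this further with auxiliary information. The sketch assumes normal data:

```python
def searls_variance(sample):
    """Searls-type shrunken variance estimator: lambda * s^2 with the
    MSE-optimal scalar lambda = (n - 1) / (n + 1) under normality.
    Biased, but with smaller MSE than the unbiased s^2."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((v - mean) ** 2 for v in sample) / (n - 1)
    return (n - 1.0) / (n + 1.0) * s2
```

The trade made here is the Methods section's theme in miniature: accept a small bias (downward, since λ < 1) in exchange for a variance reduction large enough to lower the overall mean squared error.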

