true variance
Recently Published Documents


TOTAL DOCUMENTS

28
(FIVE YEARS 10)

H-INDEX

6
(FIVE YEARS 1)

2021 ◽  
Author(s):  
Elise Anne Victoire Crompvoets ◽  
Anton A. Béguin ◽  
Klaas Sijtsma

Comparative judgment is a method that measures a competence by comparing items with one another. In educational measurement, where comparative judgment is becoming an increasingly popular assessment method, the items are mostly students' responses to an assignment or an examination. For assessments using comparative judgment, the Scale Separation Reliability (SSR) is used to estimate the reliability of the measurement. Previous research has shown that the SSR may overestimate reliability when the pairs to be compared are selected with certain adaptive algorithms, when raters use different underlying models or truths, or when the true variance of the item parameters is below one. This research investigated the bias and stability of the components of the SSR in relation to the number of comparisons per item, to increase understanding of the SSR. We showed that many comparisons are required to obtain an accurate estimate of the item variance, but that the SSR can be useful even when the item variance is overestimated. Lastly, we recommend adjusting the general guideline for the required number of comparisons to 41 comparisons per item. This recommendation partly depends on the number of items and the true variance used in our simulation study, and needs further investigation.
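As a concrete illustration of how the SSR combines observed and error variance, the sketch below uses the common form SSR = (observed variance − mean squared standard error) / observed variance, computed from estimated item parameters. The function and the example values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def scale_separation_reliability(theta_hat, se):
    """SSR from estimated item parameters and their standard errors.

    Uses the common form (observed variance - mean error variance) / observed variance,
    where the numerator estimates the true variance of the item parameters.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    se = np.asarray(se, dtype=float)
    observed_var = np.var(theta_hat, ddof=1)   # includes estimation error
    error_var = np.mean(se ** 2)               # average squared standard error
    true_var = max(observed_var - error_var, 0.0)
    return true_var / observed_var

# Hypothetical estimates for five items
print(scale_separation_reliability([-1.2, -0.3, 0.1, 0.6, 1.4],
                                   [0.40, 0.35, 0.30, 0.38, 0.45]))
```

With few comparisons per item the standard errors stay large, so the estimated true variance (and hence the SSR) is unstable, which is the behaviour the study quantifies.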


2021 ◽  
Vol 1 ◽  
pp. 2701-2710
Author(s):  
Julie Krogh Agergaard ◽  
Kristoffer Vandrup Sigsgaard ◽  
Niels Henrik Mortensen ◽  
Jingrui Ge ◽  
Kasper Barslund Hansen ◽  
...  

Maintenance decision making is an important part of managing the costs, effectiveness, and risk of maintenance. One way to improve maintenance efficiency without affecting the risk picture is to group maintenance jobs. The literature includes many examples of algorithms for grouping maintenance activities. However, the required data are not always available, and with increasing plant complexity come increasingly complex decision requirements, making it difficult to leave the decision making to algorithms. This paper suggests a framework for the standardisation of maintenance data as an aid for maintenance experts making decisions on maintenance grouping. The standardisation improves the basis for decisions by giving an overview of the true variance within the available data. The goal of the framework is to make it simpler to apply tacit knowledge and make the right decisions. Applying the framework in a case study showed that, once maintenance jobs are standardised, groups can be identified and reconfigured and potential savings easily estimated. In the case study, an estimated 7-9% of the hours spent on the investigated jobs could be saved.
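A minimal sketch of the kind of grouping that standardised job data makes possible: jobs sharing equipment type and maintenance interval are grouped, and a shared set-up overhead is used to estimate hours saved. All field names, records, and the set-up figure are hypothetical, not taken from the case study.

```python
import pandas as pd

# Hypothetical standardised maintenance records; field names are illustrative only.
jobs = pd.DataFrame({
    "job_id":         [101, 102, 103, 104, 105],
    "equipment":      ["pump", "pump", "valve", "pump", "valve"],
    "task_type":      ["inspect", "lubricate", "inspect", "inspect", "inspect"],
    "interval_weeks": [4, 4, 8, 4, 8],
    "hours":          [2.0, 1.5, 3.0, 2.5, 1.0],
})

# Group jobs that share equipment type and interval so they can be executed together.
SETUP_HOURS = 1.0  # assumed set-up overhead shared within a grouped visit
for key, group in jobs.groupby(["equipment", "interval_weeks"]):
    saved = SETUP_HOURS * (len(group) - 1)
    print(key, list(group["job_id"]), f"estimated hours saved: {saved}")
```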


2021 ◽  
Author(s):  
Zhiqian Zhai ◽  
Yu Leo Lei ◽  
Rongrong Wang ◽  
Yuying Xie

The rapid development of scRNA-seq technologies enables us to explore the transcriptome at the cell level on a large scale. Recently, various computational methods have been developed to analyze scRNA-seq data, such as clustering and visualization. However, current visualization methods, including t-SNE and UMAP, have limited accuracy in rendering the geometric relationships among populations with distinct functional states. Most visualization methods are unsupervised, leaving out information from clustering results or given labels, which leads to an inaccurate depiction of the distances between bona fide functional states. In particular, UMAP and t-SNE are not optimal for preserving the global geometric structure: they may place clusters close together in the embedding even though they are far apart in the original dimensions. Moreover, UMAP and t-SNE cannot track the variance of clusters; in their embeddings, the variance of a cluster reflects not only its true variance but also its sample size. We present supCPM, a robust supervised visualization method that separates different clusters, preserves the global structure, and tracks the cluster variance. Compared with six visualization methods on synthetic and real datasets, supCPM shows improved performance in preserving the global geometric structure and the data variance. Overall, supCPM provides an enhanced visualization pipeline that assists the interpretation of functional transitions and accurately depicts population segregation.
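The variance-distortion point can be illustrated with a small experiment: two synthetic clusters with identical true variance but different sample sizes are embedded with t-SNE, and the per-cluster variance is compared before and after embedding. This is an illustrative sketch using scikit-learn, not part of the supCPM pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Two synthetic clusters with the same true variance but different sample sizes.
small = rng.normal(loc=0.0,  scale=1.0, size=(50, 20))
large = rng.normal(loc=10.0, scale=1.0, size=(500, 20))
X = np.vstack([small, large])
labels = np.array([0] * 50 + [1] * 500)

emb = TSNE(n_components=2, random_state=0).fit_transform(X)

for k in (0, 1):
    orig_var = X[labels == k].var(axis=0).mean()
    emb_var = emb[labels == k].var(axis=0).mean()
    print(f"cluster {k}: original variance {orig_var:.2f}, embedded variance {emb_var:.2f}")
```

Under this setup the embedded variance of the larger cluster is typically inflated relative to the smaller one, even though both clusters have the same variance in the original space.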


Author(s):  
Rodríguez Sandoval, Marco Tulio, et al.

This text reports the validation of an instrument designed to evaluate the quality of an intervention proposal aimed at promoting the development of critical thinking in students of the competence assessment diploma of the degree programs of the Caribbean University Corporation CECAR. The design was subjected to a process that began with validation by expert judges, and the resulting data were analysed with the protocol of Lawshe (1975) as modified by Tristán (2008), which yielded a content validity index of 0.829 (82.9%), confirming the seven proposed dimensions. Some recommendations from the judges regarding the wording of certain items were also accepted. The instrument was then subjected to statistical validation through a pilot test with 20 products. Cronbach's alpha was 0.912, indicating high internal consistency of the construct items. Subsequently, the Guttman and Spearman-Brown reliability coefficients were determined, with the two-halves coefficient of the construct reaching 0.998, indicating high reliability. Finally, a goodness-of-fit test of the model was carried out, estimating the error variance, common variance, true variance, estimated common inter-item correlation, estimated reliability, and unbiased reliability estimate, obtaining values that demonstrate high reliability and internal consistency.
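The internal-consistency and split-half coefficients mentioned above can be computed as in the sketch below: Cronbach's alpha from item variances and the total-score variance, and an odd/even split-half correlation stepped up with the Spearman-Brown formula. The pilot data generated here are hypothetical stand-ins for the 20 rated products.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def spearman_brown(scores):
    """Split-half reliability (odd vs. even items) stepped up with Spearman-Brown."""
    scores = np.asarray(scores, dtype=float)
    half1 = scores[:, ::2].sum(axis=1)
    half2 = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical pilot data: 20 products rated on 8 items (1-5 scale).
rng = np.random.default_rng(1)
quality = rng.normal(size=(20, 1))
ratings = np.clip(np.round(3 + quality + 0.5 * rng.normal(size=(20, 8))), 1, 5)
print(cronbach_alpha(ratings), spearman_brown(ratings))
```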


2021 ◽  
Vol 6 (2) ◽  
pp. 49-59
Author(s):  
Min Shirley Liu

Kerlinger and Lee (2000) define reliability as "the proportion of the 'true' variance to the total obtained variance of the data yielded by a measuring instrument" and content validity as "representativeness or sampling adequacy of the content—the substance, the matter, the topic of measuring instrument". The goal of this research is to provide an empirical method to quantify the reliability and validity of the residual income model in predicting the value of equity (the stock price), by proposing a comparison of all active U.S. firms traded on the NYSE and the AMEX from 1981 to 2005 (the time period and listed stocks are subject to change based upon the availability of data from different sources). JEL Classification Codes: G10, G17, M41, Z10.
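In standard classical test theory notation (not taken from the paper), the cited definition of reliability can be written as:

```latex
% Reliability as the proportion of true-score variance in the total observed variance
r_{xx} = \frac{\sigma^{2}_{\text{true}}}{\sigma^{2}_{\text{total}}}
       = \frac{\sigma^{2}_{\text{true}}}{\sigma^{2}_{\text{true}} + \sigma^{2}_{\text{error}}}
```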


Author(s):  
Н.Е. ПОБОРЧАЯ

We analyze the performance and computational complexity of a regularizing algorithm and a nonlinear filtering procedure under imprecise knowledge of the additive-noise variance. Using the regularizing algorithms in the presence of additive and phase noise, the following parameters of a quadrature amplitude modulation (QAM) signal were estimated: the frequency offset, the DC components of the signal quadratures, the amplitude and phase imbalance, and the amplitude and phase of the signal. It is shown that their complexity is lower than that of the well-known joint estimation procedure, and that the regularizing algorithm is more robust than the nonlinear filtering procedure to deviations of the assumed additive-noise variance from its true value.
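One way to see how an assumed noise variance enters a regularized estimator is that, in a generic ridge/MAP-style formulation, the regularization weight scales with the assumed noise variance, so a mis-specified variance shifts the parameter estimates. The sketch below is only such a generic formulation, not the authors' algorithm, and all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: noisy observations of a complex QAM-like signal through a known matrix A.
n, p = 200, 4
A = rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))
x_true = np.array([1.0 + 0.2j, -0.5j, 0.3, 0.8 - 0.1j])   # illustrative parameters
sigma_true = 0.5                                            # true additive-noise std
y = A @ x_true + sigma_true * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

def ridge_estimate(assumed_sigma, prior_var=1.0):
    """Regularized (MAP-style) estimate; the weight depends on the assumed noise variance."""
    lam = assumed_sigma ** 2 / prior_var
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(p), A.conj().T @ y)

for assumed in (0.1, 0.5, 2.0):   # under-estimated, correct, and over-estimated noise std
    err = np.linalg.norm(ridge_estimate(assumed) - x_true)
    print(f"assumed sigma = {assumed}: parameter error = {err:.4f}")
```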


2021 ◽  
Vol 9 (1) ◽  
pp. 83-108
Author(s):  
Jonathan Levy ◽  
Mark van der Laan ◽  
Alan Hubbard ◽  
Romain Pirracchio

The stratum-specific treatment effect function is a random variable giving the average treatment effect (ATE) for a randomly drawn stratum of the potential confounders a clinician may use to assign treatment. In addition to the ATE, the variance of the stratum-specific treatment effect function is fundamental in determining the heterogeneity of treatment effect values. We offer non-parametric plug-in estimators, the targeted maximum likelihood estimator (TMLE) and the cross-validated TMLE (CV-TMLE), to simultaneously estimate both the average and the variance of the stratum-specific treatment effect function. The CV-TMLE is preferable because it guarantees asymptotic efficiency under two conditions without needing entropy conditions on the initial fits of the outcome model and treatment mechanism, as required by TMLE. In particular, in circumstances where data-adaptive fitting methods are very important for eliminating bias but hold no guarantee of satisfying the entropy condition, we show that the CV-TMLE sampling distributions maintain normality with a lower mean squared error than TMLE. In addition to verifying the theoretical properties of TMLE and CV-TMLE through simulations, we highlight some of the challenges in estimating the variance of the treatment effect, whose estimators lack double robustness and may be biased if the true variance is small and the sample size insufficient.
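For intuition about the target quantity, the naive plug-in sketch below (not the TMLE/CV-TMLE estimators of the paper) fits arm-specific outcome regressions, forms the stratum-specific effect, and takes its sample mean and variance. The data-generating process and model choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Simulated data: confounders W, treatment A, outcome Y with a heterogeneous effect.
n = 2000
W = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))   # treatment probability depends on W
tau = 1.0 + 0.5 * W[:, 1]                          # stratum-specific treatment effect
Y = W[:, 0] + A * tau + rng.normal(size=n)

# Naive plug-in: outcome regressions per treatment arm, then contrast them.
q1 = GradientBoostingRegressor().fit(W[A == 1], Y[A == 1])
q0 = GradientBoostingRegressor().fit(W[A == 0], Y[A == 0])
tau_hat = q1.predict(W) - q0.predict(W)

print("estimated ATE:", tau_hat.mean())
print("estimated variance of stratum-specific effect:", tau_hat.var(ddof=1))
print("true values:", tau.mean(), tau.var(ddof=1))
```

A plain plug-in like this inherits the bias of the outcome regressions; targeting (TMLE) and cross-validated targeting (CV-TMLE) are the paper's remedies for that bias.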


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 923-924
Author(s):  
Glen Pridham ◽  
Andrew Rutenberg

The frailty index (FI) is a summary measure of health during aging, defined as the average number of 'things wrong', i.e., health deficits, across a variety of lab, clinical, and questionnaire measurements. Missing data are ubiquitous in aging studies. Although the FI appears to have robust predictive power even when missing data are ignored, there has not been a systematic study of the consequences of imputation when the FI is used in the principal investigation. We investigated the standard imputation methodology, multiple imputation using chained equations (MICE), and other missing-data methods, in terms of prediction of mortality and statistical power, using the 2003/04 and 2005/06 NHANES datasets. When we masked known data completely at random, we observed that available-case analysis incorrectly estimated the true variance of the FI, leading to potential problems in hypothesis testing, whereas imputation helped mitigate this effect. We also observed that the default imputation methods from MICE showed a significant increase in the FI relative to the ground truth, together with a decrease in predictive power; hence we suggest other options when performing imputation with NHANES. The underlying missing-data mechanism in NHANES is not random and appears to be important; for example, survival curve analysis showed that the half of patients with the most missing data died significantly younger than the other half.
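A minimal sketch of the masking experiment described above: a binary deficit matrix is masked completely at random, and the FI from available-case averaging is compared with the FI after a MICE-style imputation. Scikit-learn's IterativeImputer is used here only as a stand-in for MICE, and the deficit data are simulated, not NHANES.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)

# Simulated deficit matrix: 1 = deficit present, 0 = absent (500 people, 30 deficits).
deficits = (rng.random((500, 30)) < 0.2).astype(float)
fi_truth = deficits.mean(axis=1)

# Mask 20% of entries completely at random.
mask = rng.random(deficits.shape) < 0.2
observed = deficits.copy()
observed[mask] = np.nan

# Available-case FI: average over each person's non-missing deficits.
fi_available = np.nanmean(observed, axis=1)

# MICE-style imputation, then recompute the FI from the completed matrix.
imputed = IterativeImputer(random_state=0).fit_transform(observed)
fi_imputed = np.clip(imputed, 0, 1).mean(axis=1)

for name, fi in [("truth", fi_truth), ("available-case", fi_available), ("imputed", fi_imputed)]:
    print(f"{name}: mean FI = {fi.mean():.3f}, variance = {fi.var(ddof=1):.5f}")
```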


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5178 ◽  
Author(s):  
Mostafa Osman ◽  
Ahmed Hussein ◽  
Abdulla Al-Kaff ◽  
Fernando García ◽  
Dongpu Cao

Localization is a fundamental problem for intelligent vehicles: for a vehicle to operate autonomously, it first needs to locate itself in the environment. Many different odometries (visual, inertial, wheel encoders) have been introduced over the past few years for autonomous vehicle localization. However, such odometries suffer from drift due to their reliance on the integration of sensor measurements. In this paper, the drift error in an odometry is modeled and a Drift Covariance Estimation (DCE) algorithm is introduced. The DCE algorithm estimates the covariance of an odometry using the readings of another on-board sensor that does not suffer from drift. To validate the proposed algorithm, several real-world experiments in different conditions, as well as sequences from the Oxford RobotCar Dataset and the EU long-term driving dataset, are used. The effect of the covariance estimation on three different fusion-based localization algorithms (EKF, UKF, and EH-infinity) is studied in comparison with the use of constant covariances, which were calculated from the true variance of the sensors being used. The obtained results show the efficacy of the estimation algorithm compared to constant covariances in terms of improving localization accuracy.
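As a highly simplified illustration of the idea (not the paper's DCE algorithm), the sketch below estimates a per-step drift covariance for a 1D odometry by differencing it against a drift-free position sensor over a sliding window. All signals, biases, and noise levels are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated 1D trajectory: the odometry integrates noisy, biased increments,
# while an absolute (drift-free) sensor observes position with small bounded noise.
steps = 500
true_increments = 0.1 * np.ones(steps)
odom_increments = true_increments + 0.002 + 0.05 * rng.normal(size=steps)  # bias + noise
odom_pos = np.cumsum(odom_increments)
true_pos = np.cumsum(true_increments)
abs_pos = true_pos + 0.01 * rng.normal(size=steps)   # drift-free measurement

# Windowed estimate of the odometry's per-step drift covariance from residual
# increments against the drift-free sensor.
window = 50
residual_increments = np.diff(odom_pos[-window:]) - np.diff(abs_pos[-window:])
drift_cov = np.var(residual_increments, ddof=1)
print("estimated per-step drift covariance:", drift_cov)
```

In a fusion filter such as an EKF, this estimate would replace a hand-tuned constant covariance for the odometry measurement, which is the comparison the paper studies.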


2019 ◽  
Vol 485 (3) ◽  
pp. 4343-4358
Author(s):  
Germán Chaparro-Molano ◽  
Juan Carlos Cuervo ◽  
Oscar Alberto Restrepo Gaitán ◽  
Sergio Torres Arzayús

We propose the use of robust, Bayesian methods for estimating extragalactic distance errors in multi-measurement catalogues. We seek to improve upon the more commonly used frequentist propagation-of-error methods, as they fail to account for both the scatter between different measurements and the effects of skewness in the metric distance probability distribution. For individual galaxies, the most transparent way to assess the variance of redshift-independent distances is to directly sample the posterior probability distribution obtained from the mixture of reported measurements. However, sampling the posterior can be cumbersome for catalogue-wide precision cosmology applications. We compare the performance of frequentist methods against our proposed measures for estimating the true variance of the metric distance probability distribution. We provide pre-computed distance error data tables for galaxies in three catalogues: NED-D, HyperLEDA, and Cosmicflows-3. Additionally, we develop a Bayesian model that considers systematic and random effects in the estimation of errors for Tully–Fisher (TF) relation derived distances in NED-D. We validate this model with a Bayesian p-value computed using the Freeman–Tukey discrepancy measure as a posterior predictive check. We are then able to predict distance errors for 884 galaxies in the NED-D catalogue and 203 galaxies in the HyperLEDA catalogue that do not report TF distance modulus errors. Our goal is that our estimated and predicted errors be used in catalogue-wide applications that require acknowledging the true variance of extragalactic distance measurements.
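The skewness issue can be made concrete with a small sketch: several reported distance moduli for one galaxy are combined into a Gaussian posterior (flat prior, precision weighting), and posterior samples are propagated to metric distance, which is log-normal and therefore skewed. The measurement values are illustrative, not from any of the catalogues above.

```python
import numpy as np

rng = np.random.default_rng(6)

# Reported distance moduli (mag) and errors for one galaxy; values are illustrative.
mu_obs = np.array([31.2, 31.5, 31.35])
mu_err = np.array([0.15, 0.20, 0.10])

# Gaussian posterior for the distance modulus under a flat prior:
# precision-weighted mean and combined variance.
w = 1.0 / mu_err ** 2
mu_post_mean = np.sum(w * mu_obs) / np.sum(w)
mu_post_std = np.sqrt(1.0 / np.sum(w))

# Sample the posterior and propagate to metric distance D = 10**(mu/5 - 5) Mpc.
mu_samples = rng.normal(mu_post_mean, mu_post_std, size=100_000)
d_samples = 10 ** (mu_samples / 5 - 5)

print("posterior mean distance (Mpc):", d_samples.mean())
print("posterior std of metric distance (Mpc):", d_samples.std(ddof=1))
```

Because the transformation from modulus to distance is exponential, the propagated distance distribution is asymmetric, which is exactly what simple frequentist error propagation misses.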

