Aggregation and Measurement Errors in Performance Evaluation

2004 ◽  
Vol 16 (1) ◽  
pp. 93-105 ◽  
Author(s):  
Anil Arya ◽  
John C. Fellingham ◽  
Douglas A. Schroeder

In this paper, we present a sequential production setting in which employing aggregate measures for performance evaluation proves superior to employing measures constructed specifically to capture individual activity. In our setting, unverifiable inputs translate into verifiable measures via two types of shocks: production errors, which cause outputs to deviate from inputs, and measurement errors, which cause the outputs themselves to be stated imprecisely. Agents are evaluated using either individual or aggregate measures: the former measure the incremental output added by each link, while the latter measure the cumulative output produced at the end of each stage. Aggregate measures can be preferred to individual measures because they increase the sample size available for inferring upstream agents' unobservable acts and because they provide an avenue for measurement errors to cancel.
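The cancellation argument can be made concrete with a small Monte Carlo sketch. The simulation below is a hypothetical illustration of the intuition in the abstract, not the paper's actual model: an individual (incremental) measure formed as the difference of two cumulative measurements picks up two measurement errors, whereas the aggregate (cumulative) measure carries only one, because intermediate measurement errors cancel.

```python
import numpy as np

# Hedged, hypothetical sketch (not the paper's model): compare the noise
# in stage-by-stage individual measures against the aggregate measure.

rng = np.random.default_rng(0)
n_stages, n_sims = 5, 100_000
sigma_prod, sigma_meas = 0.5, 1.0        # production and measurement error s.d.

prod_shocks = rng.normal(0, sigma_prod, (n_sims, n_stages))
meas_errors = rng.normal(0, sigma_meas, (n_sims, n_stages))

# Cumulative true output after each stage (inputs normalized to zero).
cum_output = prod_shocks.cumsum(axis=1)
measured = cum_output + meas_errors      # aggregate measure at each stage

# Individual measure for a stage: increment of the measured cumulative output,
# which contains the measurement errors of two adjacent stages.
individual = np.diff(measured, axis=1, prepend=0.0)

print("Variance of an individual (incremental) measure:",
      individual[:, 2].var())            # ~ sigma_prod**2 + 2*sigma_meas**2
print("Measurement noise in the final aggregate measure:",
      (measured[:, -1] - cum_output[:, -1]).var())   # ~ sigma_meas**2
```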

1992 ◽  
Vol 19 (2) ◽  
pp. 121-126 ◽  
Author(s):  
F. E. Dowell

Abstract Multiple samples of two sizes from 40 trailers of farmers' stock peanuts were inspected to determine the effect of sample size on the measurement of grade factors and dollar value. Grade factors and dollar value were measured using the current sample size (1X) and a sample double the current size (2X). The 2X sample variances for determining sound mature kernels, sound splits, other kernels, damaged kernels, foreign material, loose shelled kernels, and load value were significantly lower than the 1X sample variances in only 8 or fewer of the 40 trailers. Average dollar values indicate that measurement errors caused by equipment and human error when cleaning samples, determining kernel size, and determining damaged kernels may increase as sample size increases. At least 24% of the total error can be attributed to equipment and human error. The grade factors with the smallest percentage of total error attributable to equipment and human error will benefit most from increasing sample size. Thus, dollar value, sound mature kernel, foreign material, and damaged kernel measurements will benefit most from increasing sample size, whereas loose shelled kernel, sound split, and other kernel measurements will benefit most from improving equipment and procedures.
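The abstract's reasoning follows a standard error decomposition: total error splits into a sampling component, which shrinks as sample size grows, and an equipment/human component, which does not. The numbers below are made up for illustration and are not the study's estimates.

```python
# Hypothetical illustration: only the sampling component of total error
# responds to sample size, so the equipment/human share caps the payoff
# from larger samples. Variances here are invented, not from the study.

def total_variance(n_units, var_sampling, var_equipment_human):
    """Variance of a grade-factor estimate from a sample of n_units."""
    return var_sampling / n_units + var_equipment_human

# A factor where ~24% of total error (at 1X) is equipment/human error:
var_s, var_eh = 0.76, 0.24
for n in (1, 2):                          # 1X vs 2X sample size
    print(f"{n}X sample: total variance = {total_variance(n, var_s, var_eh):.2f}")
# Doubling the sample cuts variance from 1.00 to 0.62, not to 0.50;
# factors with a larger equipment/human share benefit even less.
```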


2019 ◽  
Vol 0 (0) ◽  
pp. 0-0 ◽  
Author(s):  
Zeynab Hassani ◽  
Amirhossein Amiri ◽  
Philippe Castagliola

2017 ◽  
Vol 78 (1) ◽  
pp. 70-79 ◽  
Author(s):  
W. Alan Nicewander

Spearman’s correction for attenuation (measurement error) corrects a correlation coefficient for measurement error in either or both of two variables and follows from the assumptions of classical test theory. Spearman’s equation removes all measurement error from a correlation coefficient, which is equivalent to increasing the reliability of either or both variables to 1.0. In this inquiry, Spearman’s correction is modified to allow partial removal of measurement error from either or both of the two variables being correlated. The practical utility of this partial correction is demonstrated by using it to compare two routes to increasing the power of statistical tests: increasing sample size versus increasing the reliability of the dependent variable in an experiment. Other applied uses are mentioned.
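For reference, the classical full correction and one natural form of a partial correction are sketched below. The full correction is the standard Spearman formula; the partial version shown is a common classical-test-theory identity for rescaling a correlation to hypothetical target reliabilities, and is not necessarily the exact modification proposed in the article.

```python
import math

# Full Spearman disattenuation, plus a hedged sketch of a *partial*
# correction to target reliabilities below 1.0 (an assumption, not
# necessarily the article's exact formula).

def spearman_correction(r_xy, rel_x, rel_y):
    """Full correction: estimated correlation between true scores."""
    return r_xy / math.sqrt(rel_x * rel_y)

def partial_correction(r_xy, rel_x, rel_y, target_rel_x, target_rel_y):
    """Correlation expected if reliabilities rose to the target values."""
    return r_xy * math.sqrt((target_rel_x / rel_x) * (target_rel_y / rel_y))

r, rx, ry = 0.40, 0.70, 0.80
print(spearman_correction(r, rx, ry))             # ~0.535: all error removed
print(partial_correction(r, rx, ry, 0.90, 0.80))  # ~0.454: only X improved
```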


2019 ◽  
Vol 12 (3) ◽  
pp. 509
Author(s):  
Luiz Carlos Marques dos Anjos ◽  
Adhemar Ranciaro Neto ◽  
Edilson Paulo ◽  
Paulo Aguiar do Monte

The objective of this study was to analyze the implicit use of relative performance evaluation (RPE) in companies listed on the BM&FBovespa as a way of setting executive compensation. To define the sample, we identified companies that disclosed information about executive compensation between 2009 and 2012, yielding a sample of 67 companies and 112 observations. The observations were then categorized to capture risk sharing as predicted by the theory of relative performance evaluation. The results indicate a strong asymmetry in the distribution of compensation, driven mainly by long-term compensation, which produced outliers. Given this, and following earlier studies, we tested the model using quantile regression. Even with median regression, we found no statistically significant evidence of relative performance evaluation; that is, there is no evidence that variation in sector-level results reduces the impact of firm-level results on executive compensation.
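Median regression is the q = 0.5 case of quantile regression and is robust to the kind of outliers that heavy long-term compensation creates. The sketch below shows the general shape of such a test using statsmodels; the variable names and data-generating process are hypothetical stand-ins, not the study's actual specification.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch of a median-regression RPE test; variables are
# hypothetical stand-ins for the study's actual measures.

rng = np.random.default_rng(1)
n = 112                                   # observations, as in the abstract
firm_result = rng.normal(0, 1, n)
sector_result = 0.5 * firm_result + rng.normal(0, 1, n)
# Right-skewed compensation with outliers, mimicking long-term pay:
compensation = 1.0 + 0.8 * firm_result + rng.lognormal(0, 1, n)

X = sm.add_constant(np.column_stack([firm_result, sector_result]))
median_fit = sm.QuantReg(compensation, X).fit(q=0.5)
print(median_fit.params)
# RPE predicts a significant *negative* coefficient on sector_result:
# firms filter out common (sector-wide) shocks when rewarding executives.
```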


Author(s):  
Aslı Suner

Abstract A number of specialized clustering methods have been developed for the accurate analysis of single-cell RNA-sequencing (scRNA-seq) expression data, and several reports have documented the performance of these clustering methods under different conditions. To date, however, no study has systematically evaluated the performance of these clustering methods while taking into account the sample size and cell composition of a given scRNA-seq dataset. Herein, a comprehensive performance evaluation of 11 selected scRNA-seq clustering methods was performed using synthetic datasets with known sample sizes and numbers of subpopulations, as well as varying levels of transcriptome complexity. The results indicate that the overall performance of the clustering methods under study is highly dependent on the sample size and complexity of the scRNA-seq dataset. In most cases, clustering performance improved as the number of cells in a given expression dataset increased. The findings also highlight the importance of sample size for the successful detection of rare cell subpopulations with an appropriate clustering tool.
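The evaluation design described here (synthetic data with known subpopulations, clustered at several sample sizes and scored against ground truth) can be sketched generically. In the sketch below, make_blobs and KMeans stand in for the scRNA-seq-specific simulators and the 11 methods actually tested, and the adjusted Rand index (ARI) is one common external validity score.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Hedged sketch of the benchmarking loop: synthetic data with known
# subpopulations, clustered at increasing sample sizes, scored by ARI.
# Generic stand-ins, not the study's simulators or methods.

for n_cells in (100, 500, 2000):
    X, true_labels = make_blobs(n_samples=n_cells, n_features=50,
                                centers=5, cluster_std=10.0, random_state=0)
    pred = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    print(f"n_cells={n_cells}: ARI = {adjusted_rand_score(true_labels, pred):.3f}")
# ARI tends to improve or stabilize as the number of cells grows,
# echoing the abstract's finding on sample size.
```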

