How a Bayesian Might Estimate the Distribution of Cronbach’s Alpha From Ordinal-Dynamic Scaled Data

Methodology ◽  
2010 ◽  
Vol 6 (2) ◽  
pp. 71-82 ◽  
Author(s):  
Byron J. Gajewski ◽  
Diane K. Boyle ◽  
Sarah Thompson

We demonstrate the utility of a Bayesian approach for calculating intervals of Cronbach's alpha from a psychological instrument having ordinal responses with a dynamic scale. A small number of response options on an instrument biases traditional interval estimates. Ordinal-based solutions are problematic because there is no clear mechanism for handling the dynamic scale. A Bayesian approach remedies this bias and allows theoretically simple calculation of Cronbach's alpha and its intervals. We demonstrate the Bayesian calculations and compare them with more traditional methods using both credible (or confidence) intervals and mean squared error. Practical advice is offered.
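The Bayesian computation the abstract alludes to can be sketched generically: under a normal model with a Jeffreys-type prior, the posterior of the item covariance matrix is inverse-Wishart, and each posterior covariance draw yields a draw of alpha. This is a minimal normal-theory sketch, not the authors' ordinal dynamic-scale model; all function names are ours.

```python
import numpy as np
from scipy.stats import invwishart

def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix."""
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

def bayesian_alpha_interval(X, n_draws=5000, level=0.95, seed=0):
    """Posterior mean and credible interval for alpha: with a Jeffreys-type
    prior, the covariance posterior is inverse-Wishart(n - 1, scatter)."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    scatter = (n - 1) * np.cov(X, rowvar=False)  # sum of squared deviations
    draws = invwishart.rvs(df=n - 1, scale=scatter, size=n_draws,
                           random_state=rng)
    alphas = np.array([cronbach_alpha(c) for c in draws])
    lo, hi = np.quantile(alphas, [(1 - level) / 2, (1 + level) / 2])
    return alphas.mean(), (lo, hi)

# Toy data: 200 respondents, 5 ordinal items scored 1-4
X = np.random.default_rng(1).integers(1, 5, size=(200, 5)).astype(float)
est, ci = bayesian_alpha_interval(X)
```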


Soil Research ◽  
2015 ◽  
Vol 53 (8) ◽  
pp. 907 ◽  
Author(s):  
David Clifford ◽  
Yi Guo

Given the wide variety of ways one can measure and record soil properties, it is not uncommon to have multiple overlapping predictive maps for a particular soil property. One is then faced with the challenge of choosing the best prediction at a particular point, either by selecting one of the maps or by combining them in some optimal manner. This question was recently examined in detail when Malone et al. (2014) compared four different methods for combining a digital soil mapping product with a disaggregation product based on legacy data. These authors also examined how to compute confidence intervals for the resulting map based on the confidence intervals associated with the original input products. In this paper, we propose a new method to combine models called adaptive gating, inspired by the use of gating functions in mixture of experts, a machine learning approach to forming hierarchical classifiers. We compare it with two standard approaches: inverse-variance weighting and a regression-based approach. One benefit of the adaptive gating approach is that it allows weights to vary with covariate information or across geographic space; as such, it explicitly takes full advantage of the spatial nature of the maps we are trying to blend. We also suggest a conservative method for combining confidence intervals. We show that the root mean squared error of predictions from the adaptive gating approach is similar to that of the other standard approaches under cross-validation. However, under independent validation the adaptive gating approach outperforms the alternatives, and as such it warrants further study in other areas of application and further development to reduce its computational complexity.
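Of the baselines mentioned, inverse-variance weighting is the simplest to state. Below is a minimal sketch, valid under the assumption that the input maps are independent and unbiased; the paper's adaptive-gating weights and conservative interval combination are not reproduced here, and all names are ours.

```python
import numpy as np

def inverse_variance_blend(preds, variances):
    """Blend co-located predictions from several maps using
    inverse-variance weights (one of the baselines discussed above)."""
    preds = np.asarray(preds, dtype=float)
    prec = 1.0 / np.asarray(variances, dtype=float)  # precisions
    w = prec / prec.sum(axis=0)                      # per-location weights
    blended = (w * preds).sum(axis=0)
    blended_var = 1.0 / prec.sum(axis=0)  # assumes independent, unbiased maps
    return blended, blended_var

# Two maps predicting, say, clay content (%) at three locations
map_a, var_a = np.array([22.0, 35.0, 18.0]), np.array([4.0, 9.0, 1.0])
map_b, var_b = np.array([25.0, 31.0, 20.0]), np.array([1.0, 4.0, 4.0])
pred, var = inverse_variance_blend([map_a, map_b], [var_a, var_b])
```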


2021 ◽  
Vol 8 (4) ◽  
pp. 309-332
Author(s):  
Efosa Michael Ogbeide ◽  
Joseph Erunmwosa Osemwenkhae

Density estimation is an important aspect of statistics, since statistical inference often requires knowledge of the density of the observed data. A common method is kernel density estimation (KDE), a nonparametric approach that requires a kernel function and a window size (smoothing parameter H) and that supports both density estimation and pattern recognition. This work focuses on a modified intersection of confidence intervals (MICIH) approach to density estimation. Nigerian crime rate data reported to the Police, as published by the National Bureau of Statistics, are used to demonstrate the new approach. In multivariate kernel density estimation the approach is data-driven, and since the main route to improved density estimation is a reduced mean squared error (MSE), the errors of this approach were evaluated and some improvements were observed. The aim is adaptive kernel density estimation, achieved here through a sufficiently smooth, bandwidth-selection-based technique. When applied, the MICIH estimates showed improvements over existing methods, with reduced mean squared error and a relatively faster rate of convergence than some other approaches. MICIH also reduced the points of discontinuity in the graphical densities of the datasets, helping to correct discontinuities and display an adaptive density. Keywords: approach, bandwidth, estimate, error, kernel density
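The abstract does not spell out the MICIH construction, so as a reference point here is the plain fixed-bandwidth Gaussian KDE that adaptive ICI-type methods refine, with Silverman's rule as the default bandwidth. A sketch only; names are ours.

```python
import numpy as np

def gaussian_kde_1d(data, grid, bandwidth=None):
    """Fixed-bandwidth Gaussian KDE, the baseline that adaptive
    (ICI/MICIH-style) methods refine; Silverman's rule by default."""
    data = np.asarray(data, dtype=float)
    n = data.size
    if bandwidth is None:
        bandwidth = 1.06 * data.std(ddof=1) * n ** (-1 / 5)
    z = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=500)
grid = np.linspace(-4, 4, 200)
density = gaussian_kde_1d(sample, grid)  # estimated density on the grid
```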


2010 ◽  
Vol 1 (4) ◽  
pp. 17-45
Author(s):  
Antons Rebguns ◽  
Diana F. Spears ◽  
Richard Anderson-Sprecher ◽  
Aleksey Kletsov

This paper presents a novel theoretical framework for swarms of agents. Before deploying a swarm for a task, it is advantageous to predict whether a desired percentage of the swarm will succeed. The authors present a framework that uses a small group of expendable "scout" agents to predict the success probability of the entire swarm, thereby preventing many agent losses. The scouts apply one of two predictive formulas: the standard Bernoulli trials formula or the new Bayesian formula. For experimental evaluation, the framework is applied to simulated agents navigating around obstacles to reach a goal location. Extensive experiments compare the mean squared error of both formulas' predictions against ground truth under varying circumstances. Results indicate the accuracy and robustness of the Bayesian approach. The framework also yields an intriguing result: both formulas usually predict better in the presence of (Lennard-Jones) inter-agent forces than when their independence assumptions hold.
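As a concrete illustration of the two prediction rules described above, here is a minimal sketch using standard stand-ins: the plug-in Bernoulli estimate versus a Beta-Binomial posterior predictive under a uniform prior. The paper's exact Bayesian formula is not reproduced; function names are ours.

```python
from scipy.stats import binom, betabinom

def p_success_bernoulli(k, n, N, m):
    """Plug-in rule: estimate p = k/n from the scouts, then
    P(at least m of N swarm agents succeed) under Binomial(N, p)."""
    return binom.sf(m - 1, N, k / n)

def p_success_bayes(k, n, N, m):
    """Bayesian stand-in: uniform Beta(1, 1) prior on p gives posterior
    Beta(k + 1, n - k + 1); the predictive count is Beta-Binomial."""
    return betabinom.sf(m - 1, N, k + 1, n - k + 1)

# 4 of 5 scouts reach the goal; will at least 80 of 100 agents succeed?
print(p_success_bernoulli(4, 5, 100, 80), p_success_bayes(4, 5, 100, 80))
```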


2017 ◽  
Vol 11 (22) ◽  
Author(s):  
Juan Rositas Martínez

Keywords: confidence intervals, Cronbach's alpha, effect size, factor analysis, hypothesis testing, sample size, structural equation modeling

Abstract. The purpose of this paper is to contribute to the objectives of social sciences research, namely the proper estimation, explanation, prediction, and control of levels of social-reality variables and their interrelationships, especially when dealing with quantitative variables. It is shown that the sample size, the number of observations to be collected and analyzed, is decisive both for the adequacy of the selected method of statistical inference and for the degree of impact achieved in its results, especially for complying with the reporting guidelines issued by the American Psychological Association. Methods and formulations were investigated for determining sample sizes that yield good levels of estimation when establishing confidence intervals of reasonable width and with relevant, significant effect magnitudes. Practical rules suggested by several researchers for determining sample sizes were tested, and the results were integrated into a guide covering dichotomous, continuous, discrete, and Likert variables, correlation and regression methods, factor analysis, Cronbach's alpha, and structural equation models. Readers are encouraged to build scenarios with this guide and to appreciate the implications and relevance of sample size, in both scientific research and decision making, for meeting the aforementioned objectives.
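A flavor of the kind of calculation such a guide collects: the textbook normal-approximation sample sizes for hitting a target confidence-interval width, for a proportion and for a mean. These are standard formulas, not the paper's full guide; names are ours.

```python
import math
from scipy.stats import norm

def n_for_proportion(width, level=0.95, p=0.5):
    """Smallest n so a `level` CI for a proportion has total width <= width
    (worst case p = 0.5)."""
    z = norm.ppf(1 - (1 - level) / 2)
    return math.ceil((2 * z) ** 2 * p * (1 - p) / width ** 2)

def n_for_mean(width, sigma, level=0.95):
    """Smallest n so a `level` CI for a mean has total width <= width,
    given a planning value sigma for the standard deviation."""
    z = norm.ppf(1 - (1 - level) / 2)
    return math.ceil((2 * z * sigma / width) ** 2)

print(n_for_proportion(0.06))      # +/- 3 points at 95% -> 1068
print(n_for_mean(0.5, sigma=1.2))  # Likert-type item with sd around 1.2
```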


2016 ◽  
Vol 2 (11) ◽  
Author(s):  
William Stewart

For modern linkage studies involving many small families, Stewart et al. (2009) [1] introduced an efficient estimator of disease gene location that averages location estimates from random subsamples of the dense SNP data. Their estimator has lower mean squared error than competing estimators and yields narrower confidence intervals (CIs) as well. However, when the number of families is small and the pedigree structure is large (possibly extended), the computational feasibility and statistical properties of the estimator are not known. We use simulation and real data to show that (1) for this extremely important but often overlooked study design, CIs based on the subsample-averaging estimator are narrower than CIs based on a single subsample, and (2) the reduction in CI length is proportional to the square root of the expected Monte Carlo error. As a proof of principle, we applied the estimator to the dense SNP data of four large, extended, specific language impairment (SLI) pedigrees and reduced the single-subsample CI by 18%. In summary, confidence intervals based on this estimator should minimize re-sequencing costs beneath linkage peaks and reduce the number of candidate genes to investigate.
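The averaging idea is easy to sketch generically. In the snippet below, `estimate_fn` is a hypothetical stand-in for the linkage location estimator (np.median is purely a toy placeholder); the point is that averaging over B subsamples shrinks the Monte Carlo component of the error by roughly 1/sqrt(B).

```python
import numpy as np

def subsample_average_estimate(markers, estimate_fn, n_subsamples=50,
                               frac=0.5, seed=0):
    """Average an estimator over random subsamples of the marker data.
    `estimate_fn` is a hypothetical stand-in for the linkage location
    estimator; averaging over B subsamples shrinks the Monte Carlo part
    of the error by roughly 1/sqrt(B)."""
    rng = np.random.default_rng(seed)
    m = len(markers)
    ests = np.array([estimate_fn(markers[rng.choice(m, int(frac * m),
                                                    replace=False)])
                     for _ in range(n_subsamples)])
    return ests.mean(), ests.std(ddof=1) / np.sqrt(n_subsamples)

# Toy usage: np.median is purely a placeholder estimator
positions = np.sort(np.random.default_rng(1).uniform(0, 100, size=400))
est, mc_se = subsample_average_estimate(positions, np.median)
```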


2006 ◽  
Vol 105 (5) ◽  
pp. 877-884 ◽  
Author(s):  
J Bryan Sexton ◽  
Martin A. Makary ◽  
Anthony R. Tersigni ◽  
David Pryor ◽  
Ann Hendrich ◽  
...  

Background The Joint Commission on Accreditation of Healthcare Organizations is proposing that hospitals measure culture beginning in 2007. However, a reliable and widely used measurement tool for the operating room (OR) setting does not currently exist. Methods OR personnel in 60 US hospitals were surveyed using the Safety Attitudes Questionnaire. The teamwork climate domain of the survey uses six items about difficulty speaking up, conflict resolution, physician-nurse collaboration, feeling supported by others, asking questions, and heeding nurse input. To justify aggregating individual-level responses to a single score at each hospital OR level, the authors used a multilevel confirmatory factor analysis, intraclass correlations, within-group interrater reliability, and Cronbach's alpha. To detect differences at the hospital OR level and by caregiver type, the authors used multivariate analysis of variance (items) and analysis of variance (scale). Results The response rate was 77.1%. There was robust evidence for aggregating individual-level responses to the hospital OR level across the diverse set of statistical tests, e.g., Comparative Fit Index = 0.99, root mean squared error of approximation = 0.05, and acceptable intraclass correlations, within-group interrater reliability values, and Cronbach's alpha = 0.79. Teamwork climate differed significantly by hospital (F(59, 1911) = 4.06, P < 0.001) and OR caregiver type (F(4, 1911) = 9.96, P < 0.001). Conclusions Rigorous assessment of teamwork climate is possible using this psychometrically sound teamwork climate scale. This tool and initial benchmarks allow others to compare their teamwork climate to national means, in an effort to focus more on what excellent surgical teams do well.
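Among the aggregation statistics listed, the intraclass correlation is the easiest to illustrate. Here is a minimal ICC(1) sketch from one-way ANOVA mean squares, assuming roughly equal group sizes; names and data are ours, not the study's.

```python
import numpy as np

def icc1(groups):
    """ICC(1) from one-way ANOVA mean squares: the share of response
    variance lying between groups (here, hospital ORs) rather than within.
    Assumes roughly equal group sizes."""
    k = np.mean([len(g) for g in groups])                 # average group size
    msb = k * np.var([g.mean() for g in groups], ddof=1)  # between mean square
    msw = np.mean([np.var(g, ddof=1) for g in groups])    # within mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
hospitals = [rng.normal(mu, 1.0, size=30) for mu in rng.normal(0, 0.5, size=12)]
print(icc1(hospitals))
```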


2021 ◽  
Author(s):  
So Yeon Paek ◽  
Lonnie Roy ◽  
Mark J DeHaven ◽  
Elyse Carson ◽  
Sarah E Barlow ◽  
...  

The 10-item Behavior Assessment Questionnaire (BAQ) was developed to assess parent report of child screen time, physical activity, and food consumption during the past 3 months in children with obesity. Response options were on a 5-point scale, converted to 0-100, with higher scores indicating healthier behavior. For evaluation, two convenience samples of parents of children aged 5-18 years completed the questionnaire: a cohort presenting to an obesity program (n=83) and a cohort of community event attendees and hospital employee parents (n=147). Scores had a normal distribution without floor or ceiling effects. Cronbach's alpha for the 10-item scale was .71. Factor analysis yielded three component factors with Cronbach's alpha of .66, .75, and .59 for the Screen Time, Physical Activity, and Food Consumption dimensions, respectively. Scores of the obesity group (49.02 [SD 14.52]) were lower than scores of the community group (55.44 [SD 13.55]), p=.001. The BAQ demonstrated reliability and validity for use as an index of lifestyle behaviors.
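The scoring described (5-point responses rescaled to 0-100) can be sketched as follows. The exact anchoring is an assumption on our part: 1 maps to 0 and 5 to 100, in steps of 25.

```python
def baq_item_score(response):
    """Rescale a 5-point response (1-5) to 0-100. The exact anchoring is
    an assumption here: 1 -> 0, ..., 5 -> 100 in steps of 25."""
    return (response - 1) * 25

def baq_scale_score(responses):
    """Scale score as the mean of rescaled items; higher = healthier."""
    return sum(baq_item_score(r) for r in responses) / len(responses)

print(baq_scale_score([4, 3, 5, 2, 4, 3, 4, 5, 3, 4]))  # -> 67.5
```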


2021 ◽  
Vol 1 (2) ◽  
pp. 139-151
Author(s):  
Emilio Díaz

The research aimed to analyze treasury management strategies in small and medium-sized companies in the petrochemical sector. It was supported by a descriptive methodology under a non-experimental, cross-sectional field design. The population comprised six companies active and operating within the petrochemical complex of the Miranda municipality. To collect the information, a questionnaire-type instrument of thirteen items with five response options was used; its validity was established through expert judgment, and its reliability via Cronbach's alpha, which yielded 0.96. The data were tabulated and analyzed quantitatively using absolute and relative frequencies. The indicators collection flow, payment flow, value-date position, treasury forecasts, and treasurer functions showed very high application; only fixed-term deposits showed merely high application.


2021 ◽  
pp. 096228022110342
Author(s):  
Denis Talbot ◽  
Awa Diop ◽  
Mathilde Lavigne-Robichaud ◽  
Chantal Brisson

Background The change in estimate is a popular approach for selecting confounders in epidemiology. It is recommended in epidemiologic textbooks and articles over significance tests of coefficients, but concerns have been raised about its validity. Few simulation studies have investigated its performance. Methods An extensive simulation study was conducted to compare different implementations of the change-in-estimate method. The implementations were also compared when estimating the association of body mass index with diastolic blood pressure in the PROspective Québec Study on Work and Health. Results All methods were susceptible to introducing substantial bias and to producing confidence intervals that included the true effect much less often than expected in at least some scenarios. Overall, mixed results were obtained regarding the accuracy of the estimators, as measured by mean squared error. No implementation adequately differentiated confounders from non-confounders. In the real data analysis, none of the implementations decreased the estimated standard error. Conclusion Based on these results, it is questionable whether change-in-estimate methods are beneficial in general, considering their low ability to improve the precision of estimates without introducing bias and their inability to yield valid confidence intervals or to identify true confounders.
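For concreteness, here is one common implementation of the change-in-estimate screen: a 10% relative-change rule with OLS. This is in the family the paper evaluates, but it is a generic sketch, not any specific implementation from the study; names are ours.

```python
import numpy as np

def exposure_coef(y, X):
    """OLS coefficient of the exposure (first column of X, after intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

def change_in_estimate(y, exposure, candidates, threshold=0.10):
    """Keep a candidate covariate if adjusting for it moves the exposure
    coefficient by more than `threshold` in relative terms."""
    base = exposure_coef(y, exposure[:, None])
    keep = []
    for j in range(candidates.shape[1]):
        adj = exposure_coef(y, np.column_stack([exposure, candidates[:, j]]))
        if abs(adj - base) / abs(base) > threshold:
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
c = rng.normal(size=(500, 3))             # candidate covariates
x = 0.8 * c[:, 0] + rng.normal(size=500)  # exposure, confounded by c[:, 0]
y = 0.5 * x + 1.0 * c[:, 0] + rng.normal(size=500)
print(change_in_estimate(y, x, c))        # expect [0]
```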

