reliability estimate
Recently Published Documents


TOTAL DOCUMENTS: 70 (FIVE YEARS: 12)
H-INDEX: 8 (FIVE YEARS: 0)

2021
Author(s): Zoe M Boundy-Singer, Corey M Ziemba, Robbe LT Goris

Decisions vary in difficulty. Humans know this and typically report more confidence in easy than in difficult decisions. However, confidence reports do not perfectly track decision accuracy, but also reflect response biases and difficulty misjudgments. To isolate the quality of confidence reports, we developed a model of the decision-making process underlying choice-confidence data. In this model, confidence reflects a subject's estimate of the reliability of their decision. The quality of this estimate is limited by the subject's uncertainty about the uncertainty of the variable that informs their decision ("meta-uncertainty"). This model provides an accurate account of choice-confidence data across a broad range of perceptual and cognitive tasks, revealing that meta-uncertainty varies across subjects, is stable over time, generalizes across some domains, and can be manipulated experimentally. The model offers a parsimonious explanation for the computational processes that underlie and constrain the sense of confidence.
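The core idea, confidence as a noisy estimate of one's own decision reliability, can be illustrated with a toy simulation. This is an illustrative sketch only, not the authors' model: the log-normal jitter on the observer's estimate of its sensory noise level stands in for "meta-uncertainty", and all parameter values are made up.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_trial(signal, sigma, meta_uncertainty, rng):
    """One trial of a toy sign-discrimination task. The observer sees a
    noisy measurement m of the signal, chooses sign(m), and reports
    confidence = its estimated probability of being correct. With nonzero
    meta-uncertainty, the observer's estimate of sigma is itself noisy
    (log-normal), which degrades the quality of the confidence report."""
    m = rng.gauss(signal, sigma)
    choice = 1 if m >= 0 else -1
    correct = choice == (1 if signal >= 0 else -1)
    sigma_hat = sigma * math.exp(rng.gauss(0.0, meta_uncertainty))
    confidence = phi(abs(m) / sigma_hat)  # subjective P(correct)
    return correct, confidence

rng = random.Random(1)
trials = [simulate_trial(rng.choice([-1, 1]) * 0.5, 1.0, 0.5, rng)
          for _ in range(1000)]
# Reported confidence never drops below 0.5: the observer never bets
# against its own choice, however poor its estimate of sigma is.
assert all(0.5 <= conf <= 1.0 for _, conf in trials)
```

With `meta_uncertainty = 0` the reported confidence tracks the true probability of being correct exactly; increasing it leaves choices unchanged but decouples confidence from accuracy, which is the dissociation the model exploits.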


2021, Vol 2021, pp. 1-12
Author(s): Amer Ibrahim Al-Omari, Amal S. Hassan, Naif Alotaibi, Mansour Shrahili, Heba F. Nagy

In survival analysis, the two-parameter inverse Lomax distribution is an important lifetime distribution. In this study, the estimation of R = P(Y < X) is investigated when the stress and strength random variables follow independent inverse Lomax distributions. Using the maximum likelihood approach, we obtain the R estimator via the simple random sample (SRS), ranked set sampling (RSS), and extreme ranked set sampling (ERSS) methods. Four different estimators are developed under the ERSS framework: two for the case in which the strength and stress populations have the same set size, and two for the case in which their set sizes differ. Through a simulation experiment, the suggested estimates are compared to the corresponding estimates under SRS, and the reliability estimates via the ERSS method are compared to those under the RSS scheme. It is found that the reliability estimates based on the RSS and ERSS schemes are more efficient than the equivalent estimates under SRS for the same number of measured units. The reliability estimates based on the RSS scheme are the most appropriate in most situations, although for small even set sizes, and in a few other cases, the estimates via the ERSS scheme are more efficient or more accurate than those under RSS and SRS.
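The stress-strength quantity R = P(Y < X) is easy to check numerically. The sketch below is not one of the paper's estimators; it is a Monte Carlo illustration under the assumption that X and Y share the scale parameter beta, in which case R has the closed form alpha_x / (alpha_x + alpha_y).

```python
import random

def sample_inverse_lomax(alpha, beta, rng):
    """One draw via inverse-transform sampling from the inverse Lomax
    CDF F(x) = (1 + beta/x)**(-alpha), x > 0."""
    u = max(rng.random(), 1e-12)  # guard against u == 0
    return beta / (u ** (-1.0 / alpha) - 1.0)

def estimate_R(alpha_x, alpha_y, beta, n, seed=0):
    """Monte Carlo estimate of R = P(Y < X) for independent inverse
    Lomax variables X (strength) and Y (stress) with common scale beta."""
    rng = random.Random(seed)
    hits = sum(
        sample_inverse_lomax(alpha_y, beta, rng)
        < sample_inverse_lomax(alpha_x, beta, rng)
        for _ in range(n)
    )
    return hits / n

# With a common scale, R = alpha_x / (alpha_x + alpha_y); the Monte
# Carlo estimate should approach that value.
alpha_x, alpha_y = 3.0, 1.5
r_hat = estimate_R(alpha_x, alpha_y, beta=2.0, n=200_000)
assert abs(r_hat - alpha_x / (alpha_x + alpha_y)) < 0.01
```

Such a closed-form check is a convenient sanity test before comparing maximum likelihood estimators across SRS, RSS, and ERSS designs in simulation.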


2021, Vol 2061 (1), pp. 012085
Author(s): E K Ablyazov, G V Deruzhinsky, K A Ablyazov

Abstract When calculating the reliability indicators of transshipment machines and mechanisms, the required values cannot always be obtained explicitly because of systematic difficulties (complex distribution laws) or limited initial data. The optimal stock of spare parts for transshipment machines and mechanisms depends on their reliability indicators and on the nature and intensity of their use. Downtime caused by the failure of transshipment machines and mechanisms used for ship handling leads to greater losses than downtime of the same machines in the warehouse. Failures of transshipment machines and mechanisms can be caused by the failure of non-repairable components, quickly repairable components, or components with a long repair time. To calculate the main reliability indicators of quickly repairable components, the most common laws of distribution of random variables were employed. The paper considers the methodological aspects of the probabilistic reliability estimation of quickly repairable components of transshipment machines and mechanisms.
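For quickly repairable components, a standard textbook result (not taken from this paper) applies when both time-to-failure and time-to-repair follow exponential laws: the steady-state availability is A = MTBF / (MTBF + MTTR). The sketch below uses hypothetical numbers for a crane drive.

```python
def steady_state_availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a quickly repairable component when
    both time-to-failure and time-to-repair are exponentially
    distributed: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def expected_downtime(mtbf_hours, mttr_hours, horizon_hours):
    """Expected cumulative downtime over a planning horizon."""
    return (1.0 - steady_state_availability(mtbf_hours, mttr_hours)) * horizon_hours

# Hypothetical crane drive: MTBF 450 h, MTTR 50 h -> 90% availability,
# i.e. about 100 h of downtime per 1000 h of planned operation.
assert steady_state_availability(450.0, 50.0) == 0.9
assert abs(expected_downtime(450.0, 50.0, 1000.0) - 100.0) < 1e-9
```

Estimates like this feed directly into the spare-parts stocking decision the abstract mentions: higher expected downtime for ship-handling machines justifies a larger stock than the same figure would for warehouse machines.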


2021, pp. 109442812110115
Author(s): Ze Zhu, Alan J. Tomassetti, Reeshad S. Dalal, Shannon W. Schrader, Kevin Loo, ...

Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.


Assessment, 2021, pp. 107319112199416
Author(s): Desirée Blázquez-Rincón, Juan I. Durán, Juan Botella

A reliability generalization meta-analysis was carried out to estimate the average reliability of the seven-item, 5-point Likert-type Fear of COVID-19 Scale (FCV-19S), one of the most widespread scales developed around the COVID-19 pandemic. Different reliability coefficients from classical test theory and the Rasch Measurement Model were meta-analyzed, heterogeneity among the most frequently reported reliability estimates was examined by searching for moderators, and a predictive model for estimating the expected reliability was proposed. At least one reliability estimate was available for a total of 44 independent samples from 42 studies, with Cronbach's alpha being the most frequently reported coefficient. The coefficients exhibited pooled estimates ranging from .85 to .90. The moderator analyses led to a predictive model in which the standard deviation of scores explained 36.7% of the total variability among alpha coefficients. The FCV-19S has been shown to be consistently reliable regardless of the moderator variables examined.
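A common way to pool alpha coefficients in reliability generalization work is the Hakstian-Whalen cube-root transformation of (1 - alpha), averaged with study weights and back-transformed. The sketch below uses sample-size weights and hypothetical alphas invented for illustration, not the FCV-19S data or this article's exact weighting scheme.

```python
def pooled_alpha(alphas, sample_sizes):
    """Pool Cronbach's alpha across studies: apply the cube-root
    transformation t = (1 - alpha)**(1/3) (Hakstian-Whalen), take a
    sample-size-weighted mean, and back-transform."""
    ts = [(1.0 - a) ** (1.0 / 3.0) for a in alphas]
    total_n = sum(sample_sizes)
    t_bar = sum(t * n for t, n in zip(ts, sample_sizes)) / total_n
    return 1.0 - t_bar ** 3

# Hypothetical per-study alphas and sample sizes.
alphas = [0.85, 0.88, 0.90, 0.86]
ns = [300, 450, 250, 500]
pa = pooled_alpha(alphas, ns)
# The weighted pooled value must lie between the smallest and largest alpha.
assert 0.85 <= pa <= 0.90
```

The transformation normalizes the skewed sampling distribution of alpha before averaging, which is why meta-analyses pool on the transformed scale rather than averaging raw coefficients.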


2020, Vol 2020 (8), pp. 39-46
Author(s): Elena Shischenko, Anton Alekseev, Vera Novikova

The purpose of the investigation is to develop a method for assessing the reliability of DC drive motors of tram cars whose service life is approaching completion, which will make it possible to correct the maintenance and repair system so as to reduce the number of sudden failures and, consequently, the number of unscheduled repairs. According to the available repair statistics for tram rolling stock, a considerable share of unscheduled repair work is caused by failures of DC drive motors. One of the problems arising from this purpose is the development of a simulation model that allows accurate determination of the quantitative reliability characteristics during operation. Because the operational reliability of the DC drive motors of tram cars is considerably affected by the interaction of their design elements, combined structural-functional patterns are used in building a compositional simulation model for reliability estimation. The results show that accounting for the structural-functional ties between the design elements of the motors when forming the reliability model yields more accurate data for determining maintenance and repair intervals. The compositional reliability model thus provides more exact estimates of the probability of failure-free operation, making it possible to correct the schedule of preventive repairs and thereby reduce the number of sudden failures, which, as practice shows, become more frequent as a tram car approaches the end of its service life, as well as the number of unscheduled repairs. The proposed reliability model is relevant for correcting the maintenance and repair schedule of tram-car DC drive motors whose service life is nearing completion, and also for motors that have exhausted their service resource.
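Structural reliability models of this kind are typically built from series and parallel compositions of element reliabilities. The sketch below is a generic textbook composition, not the authors' compositional model; the motor structure and all probabilities are hypothetical.

```python
def series_reliability(ps):
    """Failure-free probability of a series structure: every element
    must work, so probabilities multiply."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def parallel_reliability(ps):
    """Failure-free probability of a parallel (redundant) structure:
    the structure fails only if every element fails."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical motor structure: armature and commutator in series,
# backed by two redundant cooling paths in parallel.
p_armature, p_commutator = 0.95, 0.97
p_cooling = parallel_reliability([0.9, 0.9])   # redundancy lifts 0.9 to 0.99
p_motor = series_reliability([p_armature, p_commutator, p_cooling])
assert 0.91 < p_motor < 0.92
```

Encoding which elements are in series and which are redundant is the "structural-functional ties" step; getting it wrong changes the predicted failure-free probability and hence the computed repair intervals.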


2020
Author(s): Sam Parsons

Analytic flexibility is known to influence the results of statistical tests, e.g., effect sizes and p-values. Yet the degree to which flexibility in data-processing decisions influences the reliability of our measures is unknown. In this paper I attempt to address this question using a series of reliability multiverse analyses. The methods section incorporates a brief tutorial for readers interested in implementing the multiverse analyses reported in this manuscript; all functions are contained in the R package splithalf. I report six multiverse analyses of data-processing specifications, including accuracy and response time cutoffs, using data from a Stroop task and a Flanker task at two time points. This allowed for an internal consistency reliability multiverse at times 1 and 2, and a test-retest reliability multiverse between times 1 and 2. Largely arbitrary data-processing decisions led to differences of at least 0.2 between the highest and lowest reliability estimates. Importantly, there was no consistent pattern, across time points or tasks, in which data-processing specifications led to greater reliability. Together, these results show that data-processing decisions have a large, and largely unpredictable, influence on measure reliability. I discuss actions researchers could take to mitigate some of this reliability heterogeneity, including adopting hierarchical modelling approaches. Yet no approach can completely save us from measurement error. Measurement matters, and I call on readers to help us move from what could be a measurement crisis towards a measurement revolution.
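The core computation in a reliability multiverse can be sketched in a few lines: the same data are processed under several specifications (here, response-time cutoffs), and a split-half reliability with the Spearman-Brown correction is computed in each "universe". This is an illustrative re-implementation in Python, not the splithalf R package, and the simulated reaction times are invented.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def split_half(trials_per_person, rt_cutoff):
    """Odd/even split-half reliability with the Spearman-Brown
    correction, after discarding reaction times above a cutoff --
    one 'universe' of the data-processing multiverse."""
    odd_means, even_means = [], []
    for trials in trials_per_person:
        kept = [rt for rt in trials if rt <= rt_cutoff]
        odd_means.append(sum(kept[1::2]) / len(kept[1::2]))
        even_means.append(sum(kept[0::2]) / len(kept[0::2]))
    r = pearson(odd_means, even_means)
    return 2.0 * r / (1.0 + r)  # Spearman-Brown step-up

# Simulated reaction times: each person has a stable mean plus noise.
rng = random.Random(7)
people = [[rng.gauss(mu, 80.0) for _ in range(60)]
          for mu in (rng.uniform(450.0, 650.0) for _ in range(40))]

# The same data under four cutoff specifications yield four (generally
# different) reliability estimates -- a one-dimensional multiverse.
estimates = [split_half(people, cutoff) for cutoff in (700, 800, 900, 1000)]
assert all(0.0 < e <= 1.0 for e in estimates)
```

A full multiverse crosses several such decisions (accuracy cutoffs, outlier rules, transformations), and the spread of the resulting estimates is the quantity the paper reports.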


2020
Author(s): Dan Tamul, Malte Elson, James D. Ivory, Jessica C Hotter, Madison Lanier, ...

Moral Foundations Theory (MFT) has drawn significant interest from the popular press and academics alike. One of its primary constituent methodologies is the Moral Foundations Questionnaire (MFQ), a self-report instrument designed to assess the use of five sets of moral intuitions when making moral judgments. The proposed universality and modularity of both MFT and the MFQ have been challenged conceptually and empirically. We examine the scale's development and offer a systematic content analysis of 539 scholarly works using the MFQ to estimate its overall reliability. Altogether, 61% of the studies reported Cronbach's alpha as an indicator of reliability (38% reported no reliability estimate at all). Mean Cronbach's alpha scores for four of the five subscales were below .70. We discuss implications for the measurement of moral foundations and the evidentiary basis of Moral Foundations Theory.
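Cronbach's alpha, the coefficient whose subscale means the authors tabulate, is computed from the item variances and the variance of the total score. A minimal sketch with hypothetical item responses (not MFQ data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.
    `items` holds k lists, one per item, each with the same respondents'
    scores in the same order:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses of five respondents to a 3-item subscale.
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 2, 4, 2],
]
alpha = cronbach_alpha(items)
assert 0.9 < alpha < 0.95
```

Values below the conventional .70 threshold, as reported for four of the five MFQ subscales, indicate that the items of a subscale do not covary strongly enough to be treated as interchangeable indicators of one construct.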

