Applying Social Cognition in Assessing Reliability with a Small Sample

1994 ◽  
Vol 75 (1) ◽  
pp. 236-238
Author(s):  
Scott W. Brown ◽  
Mary M. Brown

An issue for researchers studying special or selected populations is how to obtain reliability estimates for their instruments. In our 1993 study assessing the attitudes of intubated patients using a social cognition technique, the internal reliability estimate was 0.83. The present study replicated and extended that finding, adding items that asked subjects to estimate their ability to imagine the simulated intubation process. For 130 undergraduates, alpha was again 0.83, and most subjects reported that they were able to imagine the situation and respond to the items accordingly. These results confirm the earlier reliability estimate and may have implications for estimating the reliability of other scales that cannot easily be tested on large samples. While the present estimates cannot be generalized, the procedure may provide useful feedback to researchers.

1993 ◽  
Vol 73 (2) ◽  
pp. 499-505 ◽  
Author(s):  
Scott W. Brown ◽  
Mary M. Brown

One of the issues facing researchers studying very select populations is how to obtain reliability estimates for their instruments. When the populations, and the resulting samples, are very small and select, gathering data for typical reliability estimates becomes very difficult. As a result, many researchers set aside concerns about the reliability of their instrumentation and forge ahead collecting data. In response to this concern, Bandura's model of social cognition and Wolpe's model of systematic desensitization were combined and applied to a group of 90 undergraduates completing a Communication Satisfaction Scale designed to assess the attitudes of intubated patients in a hospital Intensive Care Unit. Stimuli (textual, auditory, and visual) were provided to sensitize the subjects to the intubation procedure and to enable them to imagine what it is like to be an intubated patient. The subjects responded to 10 Likert-format items focusing on the communication issues of intubated patients. Internal reliability (Cronbach's alpha) was 0.83 for the entire scale. The results are discussed within both a social cognition and a measurement framework. While the resulting reliabilities cannot be directly applied to the intubated sample, the procedure may provide critical feedback to researchers and instrument developers prior to the actual administration of the instrument in research.
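
As a rough illustration of the internal-consistency statistic reported above (a generic sketch, not the authors' analysis), Cronbach's alpha for a subjects-by-items score matrix can be computed as:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

With perfectly redundant items the statistic reaches 1.0; noisier item sets fall below it.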


Assessment ◽  
2021 ◽  
pp. 107319112199416
Author(s):  
Desirée Blázquez-Rincón ◽  
Juan I. Durán ◽  
Juan Botella

A reliability generalization meta-analysis was carried out to estimate the average reliability of the seven-item, 5-point Likert-type Fear of COVID-19 Scale (FCV-19S), one of the most widespread scales developed around the COVID-19 pandemic. Different reliability coefficients from classical test theory and the Rasch Measurement Model were meta-analyzed, heterogeneity among the most commonly reported reliability estimates was examined by searching for moderators, and a predictive model for the expected reliability was proposed. At least one reliability estimate was available for a total of 44 independent samples from 42 studies, with Cronbach's alpha the most frequently reported coefficient. The coefficients exhibited pooled estimates ranging from .85 to .90. The moderator analyses led to a predictive model in which the standard deviation of scores explained 36.7% of the total variability among alpha coefficients. The FCV-19S has been shown to be consistently reliable regardless of the moderator variables examined.
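
One common way to pool alpha coefficients in a reliability generalization meta-analysis is Bonett's ln(1 − α) transformation with inverse-variance weights. The sketch below assumes that approach and a fixed-effect model; it is an illustration, not the study's actual analysis:

```python
import math

def pool_alpha_bonett(alphas, sample_sizes, n_items):
    """Fixed-effect pooling of Cronbach's alphas via Bonett's
    T = ln(1 - alpha) transformation (an assumed, simplified sketch)."""
    k = n_items
    ts, ws = [], []
    for a, n in zip(alphas, sample_sizes):
        t = math.log(1.0 - a)
        # approximate sampling variance of T (Bonett, 2002)
        var_t = (2.0 * k) / ((k - 1) * (n - 2))
        ts.append(t)
        ws.append(1.0 / var_t)
    t_bar = sum(w * t for w, t in zip(ws, ts)) / sum(ws)
    return 1.0 - math.exp(t_bar)    # back-transform to the alpha metric
```

For the FCV-19S, `n_items` would be 7; larger samples receive proportionally more weight.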


2021 ◽  
pp. 109442812110115
Author(s):  
Ze Zhu ◽  
Alan J. Tomassetti ◽  
Reeshad S. Dalal ◽  
Shannon W. Schrader ◽  
Kevin Loo ◽  
...  

Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.


1984 ◽  
Vol 106 (4) ◽  
pp. 495-500
Author(s):  
T. S. Sankar ◽  
G. D. Xistris

In direct methods for evaluating the reliability of industrial machinery, performance indicators are ascertained from statistical analyses of the randomly fluctuating response variable using pre-established threshold levels of acceptable performance. The accuracy of the results depends directly on the rigor of the probabilistic procedure employed. In this paper, a new approach is presented for calculating the probability measures of the randomly enveloped areas of the response excursions about any given threshold level. These area excursions contain important information on both the time durations of the excursions and their intensity levels above the critical value, and thus give a direct reliability estimate of system performance. The mathematical development of the method, derivation of reliability indices, sample calculations, and life estimates are included.
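
The notion of an area excursion above a threshold can be illustrated with a small sketch (a discrete-time simplification, not the paper's probabilistic derivation): each excursion's area accumulates both its duration and its intensity above the threshold:

```python
import numpy as np

def excursion_areas(signal, threshold, dt=1.0):
    """Areas (duration x exceedance) of excursions above a threshold,
    for a uniformly sampled response signal."""
    exceed = np.clip(np.asarray(signal, dtype=float) - threshold, 0.0, None)
    areas, current = [], 0.0
    for v in exceed:
        if v > 0:
            current += v * dt        # accumulate area while above threshold
        elif current > 0:
            areas.append(current)    # excursion ended; record its area
            current = 0.0
    if current > 0:
        areas.append(current)        # excursion still open at end of record
    return areas
```

A short, intense excursion and a long, mild one can yield the same area, which is exactly the joint duration-and-intensity information the abstract describes.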


2009 ◽  
Vol 4 (2) ◽  
pp. 66-69
Author(s):  
W. David Carr ◽  
Bruce B. Frey ◽  
Elizabeth Swann

Objective: To establish the validity and reliability of items for an online assessment instrument developed to track educational outcomes over time. Design and Setting: A descriptive study of the validation arguments and reliability testing of the assessment items. The instrument is available to graduating students enrolled in entry-level Athletic Training Education Programs (ATEPs). Methods: Validity was established through a national advisory board of Athletic Training educators. Construct validity was established with a test blueprint that guided the development of items for the knowledge exam. Internal reliability estimates were calculated for each domain, and a single-scale reliability analysis was conducted using all items. An item analysis was conducted by calculating difficulty and discrimination indices for each item. Results: The internal reliability estimates ranged from .23 to .44, suggesting that individual domain scores for this draft of the instrument were not reliable. The single-scale total score, however, produced an alpha = .84, suggesting a high level of reliability. Difficulty index scores ranged from .03 to .99 (mean = .74 ± .25). Discrimination index scores ranged from −.01 to .41 (mean = .21 ± .09). Conclusions: While the individual domain reliability was low, the overall single-scale score is acceptable. The difficulty and discrimination index scores guided the removal and revision of items to increase the overall reliability of the test bank.
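
The item statistics reported above can be illustrated with a minimal sketch (assuming a 0/1-scored item matrix and a corrected item-total correlation as the discrimination index; the study's exact formulas are not given in the abstract):

```python
import numpy as np

def item_analysis(responses):
    """Difficulty and discrimination indices for a 0/1-scored
    (n_examinees, n_items) response matrix."""
    r = np.asarray(responses, dtype=float)
    difficulty = r.mean(axis=0)          # proportion answering each item correctly
    totals = r.sum(axis=1)
    disc = []
    for j in range(r.shape[1]):
        rest = totals - r[:, j]          # total score excluding this item
        disc.append(np.corrcoef(r[:, j], rest)[0, 1])
    return difficulty, np.array(disc)
```

Items with difficulty near .03 or .99 (almost nobody or almost everybody correct) and discrimination near zero or negative, as reported above, are the natural candidates for removal or revision.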


Author(s):  
Elena Hoicka ◽  
Burcu Soy-Telli ◽  
Eloise Prouten ◽  
George Leckie ◽  
William J. Browne ◽  
...  

Social cognition refers to a broad range of cognitive processes and skills that allow individuals to interact with and understand others, including a variety of skills from infancy through preschool and beyond, e.g., joint attention, imitation, and belief understanding. However, no measures examine socio-cognitive development from birth through preschool: current test batteries and parent-report measures focus either on infancy or on toddlerhood through preschool (and beyond). We report six studies in which we developed and tested a new 21-item parent-report measure of social cognition targeting 0–47 months: the Early Social Cognition Inventory (ESCI). Study 1 (N = 295) revealed the ESCI has excellent internal reliability and a two-factor structure capturing social cognition and age. Study 2 (N = 605) also showed excellent internal reliability and confirmed the two-factor structure. Study 3 (N = 84) found a medium correlation between the ESCI and a researcher-administered social cognition task battery. Study 4 (N = 46) found strong 1-month test–retest reliability. Study 5 found longitudinal stability (6 months: N = 140; 12 months: N = 39); inter-observer reliability between parents (N = 36) was good; and children's scores increased significantly over 6 and 12 months. Study 6 showed the ESCI was internally reliable within countries (Australia, Canada, United Kingdom, United States, Trinidad and Tobago), parent ethnicity, parent education, and age groups from 4–39 months. ESCI scores positively correlated with household income (UK); children with siblings had higher scores; and Australian parents reported lower scores than American, British, and Canadian parents.


2009 ◽  
Vol 31 (4) ◽  
pp. 500-506 ◽  
Author(s):  
Robert Slavin ◽  
Dewi Smith

Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of the Best Evidence Encyclopedia. As predicted, there was a significant negative correlation between sample size and effect size. The differences in effect sizes between small and large experiments were much greater than those between randomized and matched experiments. Explanations for the effects of sample size on effect size are discussed.


2017 ◽  
Vol 41 (S1) ◽  
pp. S40-S40 ◽  
Author(s):  
A. Mucci ◽  
S. Galderisi ◽  
P. Rocca ◽  
A. Rossi ◽  
A. Bertolino ◽  
...  

Introduction: Social cognition is a complex construct that refers to the functions required to understand other people's mental states and behavior. In people with schizophrenia, social cognition deficits account for a proportion of variance in functional outcome, independent of symptomatology. However, the relationships among social cognition, neurocognitive functioning, and functional outcome are still unclear. Previous investigations had several limitations, including small sample sizes and heterogeneous, limited measures of social cognition and neurocognitive functions.

Aims: Within the study of the Italian Network for Research on Psychoses, we investigated factors influencing outcome in patients with schizophrenia and their unaffected relatives. Psychopathology (including depression), neurocognition, social cognition, and outcome were assessed using instruments designed to overcome some of the previous limitations.

Methods: Structural equation modeling was used to test direct and indirect effects of neurocognition, social cognition, and functional capacity on vocational and interpersonal functioning. Tests of facial emotion recognition, emotional intelligence, and theory of mind were included to assess social cognition. The MATRICS Consensus Cognitive Battery (MCCB) was used to investigate neurocognition.

Results: In both subjects with schizophrenia and their first-degree relatives, social cognition was found to be independent of negative symptoms and to have a direct impact on outcome. Neurocognition was a predictor of functional capacity and social cognition, which both mediated its impact on outcome. Social cognition was independent of functional capacity and negative symptoms.

Conclusions: Better understanding of how neurocognitive dysfunction and social cognition deficits relate to one another may guide efforts toward targeted treatment approaches.

Disclosure of interest: AM received honoraria or advisory board/consulting fees from Janssen Pharmaceuticals, Otsuka, Pfizer, and Pierre Fabre. SG received honoraria or advisory board/consulting fees from Lundbeck, Janssen Pharmaceuticals, Hoffmann-La Roche, Angelini-Acraf, Otsuka, Pierre Fabre, and Gedeon-Richter. All other authors declare no potential conflict of interest.


Author(s):  
Jose E. Ramirez-Marquez ◽  
David W. Coit ◽  
Tongdan Jin

A new methodology is presented to allocate testing units to the different components within a system when the system configuration is fixed and there are budgetary constraints limiting the amount of testing. The objective is to allocate additional testing units so that the variance of the system reliability estimate, at the conclusion of testing, will be minimized. Testing at the component-level decreases the variance of the component reliability estimate, which then decreases the system reliability estimate variance. The difficulty is to decide which components to test given the system-level implications of component reliability estimation. The results are enlightening because the components that most directly affect the system reliability estimation variance are often not those components with the highest initial uncertainty. The approach presented here can be applied to any system structure that can be decomposed into a series-parallel or parallel-series system with independent component reliability estimates. It is demonstrated using a series-parallel system as an example. The planned testing is to be allocated and conducted iteratively in distinct sequential testing runs so that the component and system reliability estimates improve as the overall testing progresses. For each run, a nonlinear programming problem must be solved based on the results of all previous runs. The testing allocation process is demonstrated on two examples.
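
A hypothetical simplification of the allocation idea can be sketched as follows (the paper solves a nonlinear program; the sketch below substitutes a greedy delta-method approximation) for a series system of parallel blocks, where each component carries an estimated reliability `p` from `n` tests:

```python
import copy

def system_reliability(blocks):
    """Series system of parallel blocks; each component is (p, n),
    with p the estimated reliability and n the tests so far."""
    R = 1.0
    for block in blocks:
        q = 1.0
        for p, n in block:
            q *= (1.0 - p)      # probability the whole parallel block fails
        R *= (1.0 - q)
    return R

def estimate_variance(blocks, eps=1e-6):
    """Delta-method variance of the system reliability estimate,
    using binomial component variances p(1 - p)/n and numerical partials."""
    var = 0.0
    for bi, block in enumerate(blocks):
        for ci, (p, n) in enumerate(block):
            hi = copy.deepcopy(blocks)
            lo = copy.deepcopy(blocks)
            hi[bi][ci] = (p + eps, n)
            lo[bi][ci] = (p - eps, n)
            dR = (system_reliability(hi) - system_reliability(lo)) / (2 * eps)
            var += dR ** 2 * p * (1.0 - p) / n
    return var

def greedy_allocate(blocks, budget):
    """Assign `budget` extra test units one at a time, each to the
    component whose extra unit most reduces the estimated variance."""
    blocks = copy.deepcopy(blocks)
    for _ in range(budget):
        best = None
        for bi, block in enumerate(blocks):
            for ci, (p, n) in enumerate(block):
                trial = copy.deepcopy(blocks)
                trial[bi][ci] = (p, n + 1)
                v = estimate_variance(trial)
                if best is None or v < best[1]:
                    best = ((bi, ci), v)
        (bi, ci), _ = best
        p, n = blocks[bi][ci]
        blocks[bi][ci] = (p, n + 1)
    return blocks
```

Even this toy version exhibits the abstract's key point: the next test unit does not necessarily go to the component with the highest uncertainty, but to the one whose uncertainty most affects the system-level estimate.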

