Team Process Measurement and Implications for Training

1992 ◽  
Vol 36 (17) ◽  
pp. 1351-1355 ◽  
Author(s):  
Ashley Prince ◽  
Michael T. Brannick ◽  
Carolyn Prince ◽  
Eduardo Salas

The purpose of this research was to establish the construct validity of a behaviorally anchored rating scale developed to measure team process behaviors. The scale covers six skills (leadership, assertiveness, decision making/mission analysis, situation awareness, communication, and adaptability/flexibility) that were identified through a prior needs analysis with training specialists and subject matter experts. Student and instructor pilots (104 individuals, 51 teams) participated in two simulated aviation team tasks designed to elicit the team process behaviors identified for the rating scale, and were rated on their behaviors. A multitrait-multimethod analysis (Campbell & Fiske, 1959) was conducted on the resulting ratings. Evidence of convergent and discriminant validity, as well as some method bias, was found when the method investigated was the team task. Implications for the use of the team process scale in training are discussed.
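The Campbell and Fiske criteria applied above can be illustrated with a small numeric sketch. The data below are simulated (not the study's ratings): three traits rated under two methods, where convergent validity appears as high same-trait/different-method correlations and discriminant validity as those correlations exceeding the different-trait ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 50 teams, 3 traits rated under 2 methods (tasks).
# A shared trait score induces convergence across methods; per-method
# noise plays the role of method-specific error.
n, traits, methods = 50, 3, 2
true_traits = rng.normal(size=(n, traits))
ratings = np.hstack([true_traits + 0.5 * rng.normal(size=(n, traits))
                     for _ in range(methods)])  # cols: T1M1,T2M1,T3M1,T1M2,...
R = np.corrcoef(ratings, rowvar=False)

# Monotrait-heteromethod ("validity diagonal"): same trait, different method.
validity_diag = [R[t, t + traits] for t in range(traits)]
# Heterotrait-heteromethod: different trait, different method.
htm = [R[i, j + traits] for i in range(traits) for j in range(traits) if i != j]

print("mean convergent r:", np.mean(validity_diag))
print("mean heterotrait-heteromethod r:", np.mean(htm))
# Campbell & Fiske: the validity diagonal should clearly exceed the
# heterotrait-heteromethod correlations.
```

With independent simulated traits, the validity diagonal comes out high while the heterotrait correlations hover near zero, which is the pattern the criteria look for.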

2012 ◽  
Author(s):  
Elysse B. Arnold ◽  
Jeffrey J. Wood ◽  
Jill Ehrenreich May ◽  
Anna M. Jones ◽  
Jennifer M. Park ◽  
...  

2015 ◽  
Vol 11 (4) ◽  
pp. 1-10 ◽  
Author(s):  
Ned Kock

The author discusses common method bias in the context of structural equation modeling employing the partial least squares method (PLS-SEM). Two datasets were created through a Monte Carlo simulation to illustrate the discussion: one contaminated by common method bias and the other not. A practical approach is presented for the identification of common method bias based on variance inflation factors generated via a full collinearity test. The discussion builds on an illustrative model in the field of e-collaboration, with outputs generated by the software WarpPLS. The author demonstrates that the full collinearity test successfully identifies common method bias in a model that nevertheless passes standard convergent and discriminant validity assessment criteria based on a confirmatory factor analysis.
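The core of the full collinearity test is computing a variance inflation factor for every variable in turn, regressing each on all the others. The sketch below is a simplified NumPy illustration on simulated data, not WarpPLS output; the 3.3 cutoff is the rule of thumb associated with this approach.

```python
import numpy as np

def full_collinearity_vifs(X):
    """VIF for each column of X, regressing it on all remaining columns.

    Rule of thumb from the full collinearity test: a VIF above 3.3
    flags possible common method bias in the model.
    """
    X = (X - X.mean(0)) / X.std(0)              # standardize columns
    n, p = X.shape
    vifs = []
    for j in range(p):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - np.sum((y - Z @ beta) ** 2) / np.sum(y ** 2)
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(1)
common = rng.normal(size=(200, 1))              # shared "method" factor
biased = common + 0.3 * rng.normal(size=(200, 4))
clean = rng.normal(size=(200, 4))
print(full_collinearity_vifs(biased))   # inflated VIFs: bias suspected
print(full_collinearity_vifs(clean))    # VIFs near 1: no bias indicated
```

In the contaminated dataset every indicator loads on the shared factor, so all VIFs exceed the 3.3 threshold, while the uncontaminated dataset stays near 1.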


2021 ◽  
Vol 10 (2) ◽  
pp. 116 ◽  
Author(s):  
Konstantinos Lavidas ◽  
Dionysios Manesis ◽  
Vasilios Gialamas

The purpose of this study was to adapt the Statistics Anxiety Rating Scale (STARS) for a Greek student population. The STARS was administered to 890 tertiary education students at two Greek universities. A cross-validation study was performed to examine the factorial structure and psychometric properties through a series of confirmatory factor analyses. Results revealed that a correlated six first-order factor model provided the best fit to the data, compared with a six-factor model with one superordinate factor. All six factors of the Greek version of the STARS showed convergent and discriminant validity and were internally consistent. Implications and limitations are discussed.


1982 ◽  
Vol 55 (3) ◽  
pp. 927-933 ◽  
Author(s):  
L. K. Waters ◽  
Maryelien Reardon ◽  
Jack E. Edwards

A multitrait-multimethod analysis was performed on instructors' ratings obtained from three formats (behaviorally anchored rating scales, a graphic rating scale, and a mixed-standard scale) for two samples of 100 undergraduate students each. The two samples were distinguished by whether the statements on the mixed-standard scale were behaviorally specific or more generic descriptions of the dimensions. The more specific mixed-standard scale yielded a greater proportion of inconsistent ratings than the less specific one. Convergent and discriminant validity were also smaller, and method variance and unexplained error greater, for the more specific mixed-standard scale. However, a more detailed examination of these effects in terms of selected average correlations indicated that some of these results were not necessarily due to the format. Relative levels of convergent validity were higher, and relative levels of discriminant validity lower, than those found by Dickinson and Zellinger in 1980 for faculty ratings in a professional school. Overall, the mixed-standard scale yielded as much convergent and discriminant validity as the other two rating formats.


2014 ◽  
Author(s):  
Rezvan Ameli ◽  
David A Luckenbaugh ◽  
Neda F Gould ◽  
M. Kathleen Holmes ◽  
Níall Lally ◽  
...  

Anhedonia, a diminished ability or inability to experience and anticipate pleasure, represents a core psychiatric symptom in depression. Current clinician assessment of anhedonia is generally limited to one or two all-purpose questions, and most well-known psychometric scales of anhedonia are relatively long, self-administered, typically not state-sensitive, and unsuitable for use in clinical settings. A user-friendly tool for a more in-depth clinician assessment of hedonic capacity is needed. The present study assessed the validity and reliability of a clinician-administered version of the Snaith-Hamilton Pleasure Scale, the SHAPS-C, in 34 depressed subjects. We compared total and specific item scores on the SHAPS-C, the SHAPS (self-report version), the Montgomery-Åsberg Depression Rating Scale (MADRS), and the Inventory of Depressive Symptomatology-Self Rating version (IDS-SR). We also examined construct, content, concurrent, convergent, and discriminant validity, internal consistency, and split-half reliability of the SHAPS-C. The SHAPS-C was found to be valid and reliable. The SHAPS and the SHAPS-C were positively correlated with one another, with levels of depression severity as measured by the MADRS and IDS-SR total scores, and with the specific MADRS and IDS-SR items sensitive to hedonic capacity. Our investigation indicates that the SHAPS-C is a user-friendly, reliable, and valid tool for clinician assessment of hedonic capacity in depressed bipolar and unipolar patients.
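The split-half reliability check reported here can be sketched in a few lines: correlate odd-item and even-item half scores, then apply the standard Spearman-Brown step-up to full test length. The item data below are simulated, not the study's SHAPS-C responses.

```python
import numpy as np

def split_half_reliability(items):
    """Spearman-Brown corrected split-half reliability.

    Splits the item matrix into odd and even items, correlates the
    two half scores r, and steps up to full length: r_sb = 2r / (1 + r).
    """
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(2)
# Simulated 14-item scale for 34 respondents driven by one latent factor.
latent = rng.normal(size=(34, 1))
responses = latent + 0.8 * rng.normal(size=(34, 14))
print(round(split_half_reliability(responses), 2))
```

The odd/even split is one convention; any random split of the items works with the same step-up formula.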


2018 ◽  
Vol 23 (1) ◽  
pp. 181-207 ◽  
Author(s):  
Adam W. Meade ◽  
Gabriel Pappalardo ◽  
Phillip W. Braddy ◽  
John W. Fleenor

While rating-scale-based assessments have been shown to be useful for measuring a variety of workplace-relevant constructs, assessment length and response distortion place practical limits on their use. We describe a new measurement method, termed rapid response measurement (RRM), in which stimuli are presented on a computer screen one at a time in rapid succession and respondents are asked to quickly provide a dichotomous response. Two personality assessments using RRM were developed, and reliability and validity evidence was evaluated across four independent samples. Both RRM assessments showed adequate reliability, even at short test lengths, with acceptable levels of convergent and discriminant validity against traditional survey-based measures. Analyses based on a within-participants design indicated that the RRM was significantly more difficult to fake when instructed than was a survey-based measure of personality. The second RRM was related to several aspects of job performance. While initial results show promise, further research is needed to establish the validity and viability of the RRM for organizational and psychological measurement.


2006 ◽  
Author(s):  
Nicole C. Spanakis ◽  
Gargi Roysircar-Sodowsky ◽  
Michael Brodeur ◽  
Josefina Irigoyen ◽  
Mary Quinn ◽  
...  

2014 ◽  
Vol 19 (3) ◽  
pp. 141-148 ◽  
Author(s):  
Danielle Ruskin ◽  
Chitra Lalloo ◽  
Khushnuma Amaria ◽  
Jennifer N Stinson ◽  
Erika Kewley ◽  
...  

BACKGROUND: In clinical practice, children are often asked to rate their pain intensity on a simple 0 to 10 numerical rating scale (NRS). Although the NRS is a well-established measure for adults, no study has yet evaluated its validity for children with chronic pain.
OBJECTIVES: To examine the convergent and discriminant validity of the NRS as it is used within regular clinical practice to document pain intensity for children with chronic pain. Interchangeability between the NRS and an analogue pain measure was also assessed.
METHODS: A cohort of 143 children (mean [± SD] age 14.1±2.4 years; 72% female) rated their pain intensity (current, usual, lowest and strongest levels) on a verbally administered 0 to 10 NRS during their first appointment at a specialized pain clinic. In a separate session that occurred either immediately before or after their appointment, children also rated their pain using the validated 0 to 10 coloured analogue scale (CAS).
RESULTS: NRS ratings met a priori criteria for convergent validity (r>0.3 to 0.5), correlating with CAS ratings at all four pain levels (r=0.58 to 0.68; all P<0.001). As hypothesized, the NRS rating for usual pain intensity differed significantly from an affective pain rating (Z=2.84; P=0.005), demonstrating discriminant validity. The absolute differences between NRS and CAS pain scores were small (range 0.98±1.4 to 1.75±1.9); however, the two scales were not interchangeable.
CONCLUSIONS: The present study provides preliminary evidence that the NRS is a valid measure for assessing pain intensity in children with chronic pain.


2014 ◽  
Vol 115 (2) ◽  
pp. 415-418 ◽  
Author(s):  
Subhra Chakrabarty

In a recent study, Dussault, Frenette, and Fernet (2013) developed a 21-item self-report instrument to measure leadership based on Bass's (1985) transformational/transactional leadership paradigm. The final specification included a third-order dimension (leadership), two second-order dimensions (transactional leadership and transformational leadership), and a first-order dimension (laissez-faire leadership). This note focuses on the need for assessing convergent and discriminant validity of the scale, and on ruling out the potential for common method bias.


2011 ◽  
Vol 16 (6) ◽  
pp. 510-516 ◽  
Author(s):  
Kimberley D. Lakes ◽  
James M. Swanson ◽  
Matt Riggs

Objective: To evaluate the reliability and validity of the English and Spanish versions of the Strengths and Weaknesses of ADHD-symptom and Normal-behavior (SWAN) rating scale. Method: Parents of preschoolers completed both a SWAN and the well-established Strengths and Difficulties Questionnaire (SDQ) on two separate occasions over a span of 3 months; instruments were in the primary language of the family (English or Spanish). Results: Psychometric properties for the English and Spanish versions of the SWAN were adequate, with high internal consistency and moderate test–retest reliability. Skewness and kurtosis statistics for the SWAN were within the range expected for a normally distributed population. The SWAN also demonstrated adequate convergent and discriminant validity in correlations with the various subscales of the SDQ. Conclusion: Psychometric properties of both the English and Spanish versions of the SWAN indicate that it is a reliable and valid instrument for measuring child attention and hyperactivity. The stability of ratings over time in this preschool sample was moderate, which may reflect the relative instability of these characteristics in preschool children.

