A new instrument for measuring optimism and pessimism: Test-retest reliability and relations with happiness and religious commitment

1989 ◽  
Vol 27 (4) ◽  
pp. 365-366 ◽  
Author(s):  
William N. Dember ◽  
Judith Brooks


2021 ◽  
Vol 21 (2) ◽  
pp. 163-190
Author(s):  
Pelin Bintaș-Zörer ◽  
Orçun Yorulmaz

The main purpose of the present study was to adapt the Emotion Regulation Interview (ERI) into Turkish and to examine its psychometric properties, while also revising it to extend its scope in terms of emotions, emotion regulation (ER) strategies, and the efficacy measures related to those strategies. To this end, various adjustments were made to the original interview form, resulting in the Emotion Regulation Interview-Revised Form (ERI-RF). The ERI-RF evaluates the regulation of emotions (i.e., anxiety, sadness, anger) experienced in romantic relationships, recognizing that emotions and ER mostly emerge in close relationships. A total of 138 participants in romantic relationships were interviewed using the ERI-RF, and second interviews were conducted with 31 of the participants to assess test-retest reliability. Results showed that the ERI-RF had good validity, and the use of some ER strategies in response to certain emotions demonstrated sufficient test-retest reliability. It was concluded that the ERI-RF, as a tool for evaluating a wide range of ER strategies based on the most frequently experienced emotions, has sufficient psychometric properties, and that its use in different samples in future studies may yield useful results.


1999 ◽  
Vol 29 (4) ◽  
pp. 891-902 ◽  
Author(s):  
L. KROLL ◽  
A. WOODHAM ◽  
J. ROTHWELL ◽  
S. BAILEY ◽  
C. TOBIAS ◽  
...  

Background. For adolescents, there is no specific needs assessment instrument that assesses significant problems that can benefit from specified interventions. A new instrument (S.NASA) was developed by incorporating and adapting three well-established adult needs assessment instruments. The S.NASA covers 21 areas of functioning, including social, psychiatric, educational and life skills. Method. Client and carer interviews were conducted by different researchers. A week later the interviews were repeated using a crossover design. Significant (cardinal) problems were generated from the clinical interviews using a pre-defined algorithm. Final need status (three categories) was determined by clinicians assessing the cardinal problems against defined interventions. The interventions were generated from discussions with clinicians and a survey of appropriate professionals working with adolescents. Results. Pre-piloting led to the final version being administered to 40 adolescents from secure units, forensic psychiatric and adolescent psychiatric services. There were 25 males and 15 females, mean age 15·5 years. Overall there were moderate to good inter-rater and test–retest reliability coefficients; the test–retest reliability coefficients for the total scores on the needs assessment interviews ranged from 0·73 to 0·85. Consensual and face validity were good, with the adolescents and staff finding the instrument useful and helpful. Conclusions. This new needs assessment instrument shows acceptable psychometric properties. It should be of use in research projects assessing the needs and the provision of services for adolescents with complex and chronic problems.


2009 ◽  
Vol 24 (S1) ◽  
pp. 1-1 ◽  
Author(s):  
L.M. Giglio ◽  
P.V.d.S. Magalhães ◽  
A. Andreazza ◽  
J. Walz ◽  
L. Jakobsen ◽  
...  

Introduction: As several lines of evidence point to irregular biological rhythms in bipolar disorder, and their disruption may lead to new illness episodes, an instrument that measures biological rhythms is critical. This report describes the validation of a new instrument, the Biological Rhythms Interview of Assessment in Neuropsychiatry (BRIAN), designed to assess biological rhythms in the clinical setting. Methods: Eighty-one outpatients with a diagnosis of bipolar disorder and 79 control subjects matched for type of health service used, sex, age and educational level were consecutively recruited. After a pilot study, 18 items evaluating sleep, activities, social rhythm and eating pattern were probed for discriminant, content and construct validity, concurrent validity with the Pittsburgh Sleep Quality Index (PSQI), internal consistency and test-retest reliability. Results: A three-factor solution, termed the sleep/social rhythm factor, activity factor and feeding factor, provided the best theoretical and most parsimonious account of the data; items essentially loaded on factors as theoretically intended, with the exception of the sleep and social scales, which formed a single factor. Test-retest reliability and internal consistency were excellent. Highly significant differences between the two groups were found for the whole scale and for each BRIAN factor. Total BRIAN scores were highly correlated with the global PSQI score. Discussion: The BRIAN scale presents a consistent profile of validity and reliability. Its use may help clinicians to better assess their patients and researchers to improve the evaluation of the impact of novel therapies targeting biological rhythm pathways.


2019 ◽  
Author(s):  
Brooke Linden ◽  
Heather Stuart

Abstract Background: Previous research has linked excessive stress among post-secondary students to poor academic performance and poor mental health. Despite attempts to ameliorate mental health challenges at post-secondary institutions, there exists a gap in the evaluation of the specific sources of stress for students within the post-secondary setting. Methods: The goal of this study was to develop a new instrument to better assess the sources of post-secondary student stress. Over the course of two years, the Post-Secondary Student Stressors Index (PSSI) was created in collaboration with post-secondary students as co-developers and subject matter experts. In this study, we used a combination of individual cognitive interviews (n = 11), an online consensus survey modeled after a traditional Delphi method (n = 65), and an online pre- (n = 535) and post-test (n = 350) survey to psychometrically evaluate the PSSI using samples of students from Ontario, Canada. We collected four types of evidence for validity: content evidence, response processes evidence, internal structure evidence, and relations to other variables. The test-retest reliability of the instrument was also evaluated. Results: The PSSI demonstrated strong psychometric properties. Content and response processes validity evidence were derived from active student involvement throughout the development and refinement of the tool. Exploratory factor analysis suggested that the structure of the PSSI reflects the internal structure of an index, rather than a scale, as expected. Test-retest reliability of the instrument was comparable to that of existing, established instruments. Finally, the PSSI demonstrated good relationships, in the hypothesized directions, with similar measures of stress, distress, and resilience. Conclusions: The PSSI is a 46-item inventory that will allow post-secondary institutions to pinpoint the most severe and frequently occurring stressors on their campus. This knowledge will facilitate appropriate targeting of priority areas and help institutions better align their mental health promotion and mental illness prevention programming with the needs of their campus.


1985 ◽  
Vol 45 (2) ◽  
pp. 401-405 ◽  
Author(s):  
Steven L. Wise

This study describes the development and validation of a new instrument entitled Attitudes Toward Statistics (ATS) to be used in the measurement of attitude change in introductory statistics students. Two ATS subscales are identified: Attitude Toward Course and Attitude Toward the Field, respectively. These subscales were demonstrated to have both high internal consistency and test-retest reliability. It is further shown that each ATS subscale provides distinctly different information about the attitudes of introductory statistics students.
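The abstract reports high internal consistency for each ATS subscale without stating how it was quantified; internal consistency is conventionally measured with Cronbach's alpha. A minimal sketch of that convention, using invented Likert-type responses (none of the numbers below come from the study):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    x = np.asarray(item_scores, float)
    k = x.shape[1]                         # number of items in the subscale
    item_vars = x.var(axis=0, ddof=1)      # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented responses: 4 students x 3 items on a 5-point scale
responses = [[3, 4, 3],
             [4, 5, 4],
             [2, 2, 3],
             [5, 5, 5]]
print(round(cronbach_alpha(responses), 2))  # → 0.94
```

Alpha rises when items co-vary strongly relative to their individual variances, which is why it is read as evidence that a subscale's items tap one attitude.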


1995 ◽  
Vol 167 (5) ◽  
pp. 589-595 ◽  
Author(s):  
Michael Phelan ◽  
Mike Slade ◽  
Graham Thornicroft ◽  
Graham Dunn ◽  
Frank Holloway ◽  
...  

Background. People with severe mental illness often have a complex mixture of clinical and social needs. The Camberwell Assessment of Need (CAN) is a new instrument which has been designed to provide a comprehensive assessment of these needs. There are two versions of the instrument: the clinical version has been designed to be used by staff to plan patients' care, whereas the research version is primarily a mental health service evaluation tool. The CAN has been designed to assist local authorities to fulfil their statutory obligations under the National Health Service and Community Care Act 1990 to assess needs for community services. Method. A draft version of the instrument was designed by the authors. Modifications were made following comments from mental health experts and a patient survey. Patients (n = 49) and staff (n = 60) were then interviewed, using the amended version, to assess the inter-rater and test-retest reliability of the instrument. Results. The mean number of needs identified per patient ranged from 7.55 to 8.64. Correlations of the inter-rater and test-retest reliability of the total number of needs identified by staff were 0.99 and 0.78 respectively. The percentage of complete agreement on individual items ranged from 100–81.6% (inter-rater) and 100–58.1% (test-retest). Conclusions. The study suggests that the CAN is a valid and reliable instrument for assessing the needs of people with severe mental illness. It is easily learnt by staff from a range of professional backgrounds, and a complete assessment took, on average, around 25 minutes.
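The two statistics the CAN abstract reports are straightforward to reproduce: the reliability figures are Pearson correlations between two sets of total need counts, and "complete agreement" is the percentage of cases where two categorical item ratings match exactly. A minimal sketch with invented ratings (none of the values below come from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two sets of ratings, e.g. two raters'
    total need counts, or one rater's counts at two time points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def percent_complete_agreement(ratings_a, ratings_b):
    """Share of cases where two categorical item ratings agree exactly."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    return 100.0 * float(np.mean(a == b))

# Invented data: total needs counted by two raters for five patients
rater1 = [8, 7, 9, 6, 8]
rater2 = [8, 7, 9, 7, 8]
print(round(pearson_r(rater1, rater2), 2))  # → 0.94

# Invented data: one item rated twice (0 = no need, 1 = met, 2 = unmet)
t1 = [0, 1, 2, 1, 0, 2]
t2 = [0, 1, 2, 1, 1, 2]
print(round(percent_complete_agreement(t1, t2), 1))  # → 83.3
```

Percent agreement is the more conservative report for individual items because a single disagreement lowers it directly, which matches the wider test-retest range (100–58.1%) the abstract describes.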


1989 ◽  
Vol 64 (3) ◽  
pp. 991-995 ◽  
Author(s):  
George Atkinson

Kolb revised the Learning Style Inventory to improve psychometric properties such as test-retest reliability. The data from this study suggest the new instrument has no better stability coefficients than its predecessor.


Author(s):  
J Baker ◽  
J Racosta ◽  
K Kimpinski

Background: Orthostatic symptoms including dizziness, light-headedness and syncope can be major causes of disability in patients with dysautonomia. Currently there is no validated tool capable of discriminating orthostatic from non-orthostatic constitutional symptoms. Therefore, we developed the Orthostatic Discriminant and Severity Scale (ODSS) to help make this distinction. Objective: Demonstrate the validity and reliability of the ODSS. Methods: Convergent and clinical validity were assessed by correlating orthostatic scores with previously validated tools: the Autonomic Symptom Profile (ASP), composite scores of the Orthostatic Hypotension Questionnaire, and the total Composite Autonomic Severity Score (tCASS), respectively. Test-retest reliability was calculated using an intra-class correlation coefficient. Results: Orthostatic scores from 23 controls and 5 patients were highly correlated with both the Orthostatic Intolerance index of the ASP (r=0.724; p<0.01) and the composite OHDAS and OHSAS (r=0.552; p<0.01 and r=0.753; p<0.01, respectively), indicating good convergent validity. Orthostatic scores were significantly correlated with the tCASS (r=0.568; p<0.01) and with the systolic blood pressure change during head-up tilt (r=-0.472; p=0.013). In addition, patients with neurogenic orthostatic hypotension had significantly higher orthostatic scores than controls (p<0.01), indicating good clinical validity. Test-retest reliability was strong (r=0.954; p<0.01), with an internal consistency of 0.978. Conclusions: Our results, though preliminary, provide empirical evidence that the ODSS is capable of producing a valid and reliable orthostatic score.
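The ODSS abstract specifies an intra-class correlation coefficient for test-retest reliability without naming the ICC form. A common choice for single-rating consistency across occasions is ICC(3,1), computed from a two-way ANOVA decomposition; the sketch below assumes that form, and all scores are invented for illustration:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): consistency of single ratings for a
    (subjects x occasions) score matrix, via two-way ANOVA mean squares."""
    x = np.asarray(data, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between occasions
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Invented scores for four respondents at two time points
scores = [[10, 11], [14, 15], [18, 17], [22, 23]]
print(round(icc_3_1(scores), 2))  # → 0.98
```

The coefficient approaches 1 when between-subject differences dwarf the occasion-to-occasion noise, which is the pattern a strong test-retest result like the reported 0.954 reflects.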


Author(s):  
Matthew L. Hall ◽  
Stephanie De Anda

Purpose The purposes of this study were (a) to introduce “language access profiles” as a viable alternative construct to “communication mode” for describing experience with language input during early childhood for deaf and hard-of-hearing (DHH) children; (b) to describe the development of a new tool for measuring DHH children's language access profiles during infancy and toddlerhood; and (c) to evaluate the novelty, reliability, and validity of this tool. Method We adapted an existing retrospective parent report measure of early language experience (the Language Exposure Assessment Tool) to make it suitable for use with DHH populations. We administered the adapted instrument (DHH Language Exposure Assessment Tool [D-LEAT]) to the caregivers of 105 DHH children aged 12 years and younger. To measure convergent validity, we also administered another novel instrument: the Language Access Profile Tool. To measure test–retest reliability, half of the participants were interviewed again after 1 month. We identified groups of children with similar language access profiles by using hierarchical cluster analysis. Results The D-LEAT revealed DHH children's diverse experiences with access to language during infancy and toddlerhood. Cluster analysis groupings were markedly different from those derived from more traditional grouping rules (e.g., communication modes). Test–retest reliability was good, especially for the same-interviewer condition. Content, convergent, and face validity were strong. Conclusions To optimize DHH children's developmental potential, stakeholders who work at the individual and population levels would benefit from replacing communication mode with language access profiles. The D-LEAT is the first tool that aims to measure this novel construct. Despite limitations that future work aims to address, the present results demonstrate that the D-LEAT represents progress over the status quo.

