O8.2. VALIDATION OF A COMPUTERIZED ADAPTIVE TESTING TOOL FOR PSYCHOSIS: THE CAT-PSYCHOSIS BATTERY

2020
Vol 46 (Supplement_1)
pp. S19-S19
Author(s):
Daniel Guinart
Renato de Filippis
Stella Rosson
Lara Prizgint
Bhagyashree Patil
...

Abstract Background Time constraints limit the use of measurement-based approaches in the routine clinical management of schizophrenia. Computerized Adaptive Testing (CAT) uses computational algorithms based on item response theory (IRT) to match individual subjects with only the most relevant questions for them, reducing administration time and increasing measurement efficiency and scalability. This study aimed to test the psychometric properties of the newly developed CAT-Psychosis battery, in both its self-administered and rater-administered versions. Methods Patients rated themselves with the self-administered CAT-Psychosis, which yields a current psychotic severity score. The CAT-Psychosis is based on a multidimensional extension of traditional IRT-based CAT that is suitable for complex traits and disorders such as psychosis. Two different raters independently conducted the rater-administered CAT-Psychosis to test inter-rater reliability (IRR). The Brief Psychiatric Rating Scale (BPRS) was administered to test convergent validity. Subjects were re-tested within 7 days to assess test-retest reliability. Generalized linear mixed models and Pearson product-moment correlation coefficients were used to test for correlations between the BPRS and, respectively, individual ratings and average CAT-Psychosis severity scores. Intraclass correlation coefficients (ICCs) were used to test for reliability. Generalized linear and non-linear (logistic) mixed models were used to estimate diagnostic discrimination capacity (lifetime ratings) and to estimate diagnostic sensitivity, specificity and area under the ROC curve with 10-fold cross-validation. Results 135 subjects with psychosis and 25 healthy controls were included in the study. Mean age of the sample was 33.1 years (standard deviation (SD) = 12.2 years); 62% were male. No significant differences were detected between groups (p=0.9064 and p=0.2684, respectively).
Mean length of assessment was 7 minutes and 9 seconds (SD: 5:04 min) for the clinician-administered version and 1 minute and 49 seconds (SD: 1:35 min) for the self-administered version, averaging 11.4 and 12.6 questions, respectively. Convergent validity against the BPRS was moderate for both the rater-administered (r=0.65 (0.55–0.73); Marginal Maximum Likelihood Estimation (MMLE)=0.052, Standard Error (SE)=0.005, p<0.00001) and self-administered (r=0.66; MMLE=0.057, SE=0.005, p<0.00001) versions. The clinician version's IRR was strong (ICC=0.67; Confidence Interval (CI): 0.51–0.80), and test-retest reliability was strong for both the self-report (ICC=0.83; CI: 0.76–0.87) and clinician (ICC=0.87; CI: 0.75–0.94) versions. The CAT-Psychosis clinician version was able to discriminate subjects with psychosis from healthy controls (Area Under the ROC Curve (AUC)=0.96; CI: 0.90–0.97). The CAT-Psychosis self-report yielded similar results (AUC=0.85; CI: 0.77–0.88). Discussion CAT-Psychosis provides valid severity ratings that mirror BPRS total scores, even as a self-report, yielding a dramatic reduction in administration time while maintaining reliable psychometric properties. Furthermore, both the clinician and self-report versions of CAT-Psychosis are able to reliably discriminate patients with a lifetime psychosis diagnosis from healthy controls after a brief assessment of current symptomatology.
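The adaptive item-selection idea described in this abstract (matching each respondent with only the most informative questions via IRT) can be illustrated in a few lines. This is not the CAT-Psychosis implementation, which uses a multidimensional IRT extension; it is a minimal unidimensional sketch using a two-parameter logistic (2PL) model with made-up item parameters, selecting whichever unasked item carries maximum Fisher information at the current ability estimate.

```python
import math

def p_2pl(theta, a, b):
    """Probability of endorsing an item under the 2PL IRT model.
    theta: latent severity; a: discrimination; b: difficulty/location."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at severity level theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Pick the not-yet-administered item with maximum information.
    items: list of (a, b) tuples; administered: set of item indices."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))
```

In a real CAT, theta would be re-estimated after every response and the loop would stop once the standard error of measurement falls below a threshold, which is how administration shrinks to roughly a dozen items here.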

Author(s):
Daniel Guinart
Renato de Filippis
Stella Rosson
Bhagyashree Patil
Lara Prizgint
...

Abstract Objective Time constraints limit the use of measurement-based approaches in research and routine clinical management of psychosis. Computerized adaptive testing (CAT) can reduce administration time, thus increasing measurement efficiency. This study aimed to develop and test the capacity of the CAT-Psychosis battery, both self-administered and rater-administered, to measure the severity of psychotic symptoms and discriminate psychosis from healthy controls. Methods An item bank was developed and calibrated. Two raters administered CAT-Psychosis for inter-rater reliability (IRR). Subjects rated themselves and were retested within 7 days for test-retest reliability. The Brief Psychiatric Rating Scale (BPRS) was administered for convergent validity and chart diagnosis, and the Structured Clinical Interview (SCID) was used to test psychosis discriminant validity. Results The development and calibration study included 649 psychotic patients. Simulations revealed a correlation of r = 0.92 with the total 73-item bank score, using an average of 12 items. The validation study included 160 additional patients and 40 healthy controls. CAT-Psychosis showed convergent validity (clinician: r = 0.690; 95% confidence interval [95% CI]: 0.610–0.757; self-report: r = 0.690; 95% CI: 0.609–0.756), IRR (intraclass correlation coefficient [ICC] = 0.733; 95% CI: 0.611–0.828), and test-retest reliability (clinician: ICC = 0.862; 95% CI: 0.767–0.922; self-report: ICC = 0.815; 95% CI: 0.741–0.871). CAT-Psychosis could discriminate psychosis from healthy controls (clinician: area under the receiver operating characteristic curve [AUC] = 0.965, 95% CI: 0.945–0.984; self-report: AUC = 0.850, 95% CI: 0.807–0.894). The median length of assessment was 5 minutes (interquartile range [IQR]: 3:23–8:29 min) for the clinician-administered version and 1 minute, 20 seconds (IQR: 0:57–2:09 min) for the self-report.
Conclusion CAT-Psychosis can quickly and reliably assess the severity of psychosis and discriminate psychotic patients from healthy controls, creating an opportunity for frequent remote assessment and patient/population-level follow-up.
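The discrimination statistic reported in both CAT-Psychosis abstracts, the area under the ROC curve, has a simple rank interpretation: the probability that a randomly chosen patient scores higher than a randomly chosen control (ties counting half). A minimal sketch of that pairwise computation, using hypothetical severity scores rather than study data:

```python
def auc(patient_scores, control_scores):
    """Rank-based AUC: P(random patient score > random control score),
    with ties counted as 0.5. Equivalent to the Mann-Whitney U statistic
    divided by the number of patient/control pairs."""
    wins = 0.0
    for p in patient_scores:
        for c in control_scores:
            if p > c:
                wins += 1.0
            elif p == c:
                wins += 0.5
    return wins / (len(patient_scores) * len(control_scores))
```

An AUC of 0.965, as reported for the clinician version, therefore means that in roughly 96.5% of patient/control pairs the patient receives the higher severity score.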


2005
Vol 11 (3)
pp. 338-342
Author(s):
Ruth Ann Marrie
Gary Cutter
Tuula Tyry
Olympia Hadjimichael
Timothy Vollmer

The North American Research Committee on Multiple Sclerosis (NARCOMS) Registry is a multiple sclerosis (MS) self-report registry with more than 24 000 participants. Participants report disability status upon enrolment, and semi-annually thereafter, using Performance Scales (PS), Patient Determined Disease Steps (PDDS) and a pain question. In November 2000 and 2001, we also collected the Pain Effects Scale (PES). Our aim was to validate the NARCOMS pain question using the PES as our criterion measure. We measured correlations between the pain question and age, disease duration, various PS subscales and PDDS to assess construct validity. We correlated pain question responses in participants who reported no change in PDDS or the PS subscales between questionnaires to determine test-retest reliability. We measured responsiveness in participants who reported a substantial change in the sensory and spasticity PS subscales. The correlation between the pain question and PES was r=0.61 in November 2000, and r=0.64 in November 2001 (both P<0.0001). Correlations between the pain question and age and disease duration were low, indicating divergent validity. Correlations between the pain question and the spasticity and sensory PS subscales and PDDS were moderate, indicating convergent validity. Test-retest reliability was r=0.84 (P<0.0001). Responsiveness was 70.7%. The pain question is a valid self-report measure of pain in MS.
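The validity coefficients above (e.g., r=0.61 between the pain question and the PES) are Pearson product-moment correlations, the same statistic used for convergent validity throughout these studies. As a quick reference, a self-contained sketch of the coefficient, shown here with made-up numbers rather than registry data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship; the moderate positive values reported here are what supports convergent validity against the criterion measure.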


2021
Vol 12
Author(s):
Wei Xia
William Ho Cheung Li
Tingna Liang
Yuanhui Luo
Laurie Long Kwan Ho
...  

Objectives: This study conducted a linguistic and psychometric evaluation of the Chinese Counseling Competencies Scale-Revised (CCS-R). Methods: The Chinese CCS-R was created from the original English version using a standard forward-backward translation process. The psychometric properties of the Chinese CCS-R were examined in a cohort of 208 counselors-in-training by two independent raters. Fifty-three counselors-in-training were asked to undergo another counseling performance evaluation for the test-retest. Confirmatory factor analysis (CFA) was conducted for the Chinese CCS-R, followed by assessment of internal consistency, test-retest reliability, inter-rater reliability, convergent validity, and concurrent validity. Results: The results of the CFA supported the factorial validity of the Chinese CCS-R, with adequate construct replicability. The scale had a McDonald's omega of 0.876, and intraclass correlation coefficients of 0.63 and 0.90 for test-retest reliability and inter-rater reliability, respectively. Significantly positive correlations were observed between the Chinese CCS-R score and scores on a performance checklist (Pearson's r = 0.781), indicating large convergent validity, and on knowledge of drug abuse (Pearson's r = 0.833), indicating moderate concurrent validity. Conclusion: The results support the Chinese CCS-R as a valid and reliable measure of counseling competencies. Practice implication: The CCS-R provides trainers with a reliable tool to evaluate counseling students' competencies and to facilitate discussions with trainees about their areas for growth.


Assessment
1994
Vol 1 (4)
pp. 407-413
Author(s):
Mark A. Blais
Kenneth B. Benedict
Dennis K. Norman

The Millon Clinical Multiaxial Inventory-II (MCMI-II), a frequently used self-report measure of psychopathology, contains nine scales designed to assess Axis I psychopathology (the clinical syndrome and severe syndrome scales). This study explored the relationships among these nine MCMI-II clinical syndrome scales and the clinical scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2). A sample of 108 psychiatric inpatients was administered both the MCMI-II and the MMPI-2 within 7 days of admission. Pearson correlation coefficients and principal component factors were obtained for the MCMI-II and MMPI-2 scales. The results provided support for the convergent validity of all the MCMI-II Axis I scales. However, the majority of the MCMI-II scales failed to demonstrate adequate discriminant validity in relation to the MMPI-2 scales. The principal component analysis revealed that method variance was the principal influence in determining factor loadings for the majority of test scales. This finding suggests that these two popular self-report tests differ substantially in how they measure psychopathology.


2005
Vol 20 (2)
pp. 145-151
Author(s):
Philippe Birmes
Alain Brunet
Maryse Benoit
Sabine Defer
Leah Hatton
...  

Abstract Background Peritraumatic dissociation is a risk factor for developing PTSD. The Peritraumatic Dissociative Experiences Questionnaire (PDEQ) is a self-report inventory used to assess dissociation that occurred at the time of a trauma. The aim of this study was to validate the PDEQ in French. Methods Ninety French-speaking traumatized victims presenting to the emergency department were recruited. They were administered the PDEQ shortly after exposure and other trauma-related measures 2 weeks and 1 month posttrauma. Results Principal components factor analyses suggested a single-factor solution for the PDEQ. Significant correlations between the PDEQ and acute and posttraumatic stress symptoms indicated moderate to strong convergent validity. The PDEQ also showed satisfactory test-retest reliability and internal consistency. Conclusions This study is the first to report such detailed psychometric findings on the PDEQ. This confirms the unity of the concept of peritraumatic dissociation and the value of the PDEQ-French Version for assessing it.


Assessment
2016
Vol 25 (1)
pp. 3-13
Author(s):
David F. Tolin
Christina Gilliam
Bethany M. Wootton
William Bowe
Laura B. Bragdon
...  

Three hundred sixty-two adult patients were administered the Diagnostic Interview for Anxiety, Mood, and OCD and Related Neuropsychiatric Disorders (DIAMOND). Of these, 121 provided interrater reliability data, and 115 provided test–retest reliability data. Participants also completed a battery of self-report measures that assess symptoms of anxiety, mood, and obsessive-compulsive and related disorders. Interrater reliability of DIAMOND anxiety, mood, and obsessive-compulsive and related diagnoses ranged from very good to excellent. Test–retest reliability of DIAMOND diagnoses ranged from good to excellent. Convergent validity was established by significant between-group comparisons on applicable self-report measures for nearly all diagnoses. The results of the present study indicate that the DIAMOND is a promising semistructured diagnostic interview for DSM-5 disorders.


2012
Vol 25 (3)
pp. 420-430
Author(s):
Marie Eckerström
Johanna Skoogh
Sindre Rolstad
Mattias Göthlin
Gunnar Steineck
...  

ABSTRACT Background: Subjective cognitive impairment (SCI) is a potential early marker of actual cognitive decline. The cognitive manifestation of the SCI stage is, however, largely unknown. Self-report instruments developed specifically for use in the SCI population are lacking, and many SCI studies have not excluded mild cognitive impairment and dementia. We developed and tested a patient-based questionnaire on everyday cognitive function aiming to discriminate between patients with subjective, but not objective, cognitive impairment and healthy controls. Methods: Individuals experiencing cognitive impairment were interviewed to generate a pool of items. After condensing the pool to 97 items, we tested the questionnaire in 93 SCI patients seeking care at a memory clinic (age M = 64.5 years, Mini-Mental State Examination (MMSE) M = 29.0) and 50 healthy controls (age M = 69.6 years, MMSE M = 29.3). Further item reduction was conducted to ensure that remaining items would discriminate between SCI patients and controls, using a conservative α level and requiring medium to high effect sizes. Internal consistency reliability and convergent validity were subsequently examined. Results: Forty-five items discriminated between the groups, resulting in the Sahlgrenska Academy Self-reported Cognitive Impairment Questionnaire (SASCI-Q). Internal consistency was high, and correlations with a single question on memory functioning were of medium to large size. Most remaining items were related to the memory domain. Conclusion: The SASCI-Q discriminates between SCI patients and healthy controls and demonstrates satisfactory psychometric properties. The instrument provides a research method for examining SCI and forms a foundation for future examination of which SCI symptoms predict objective cognitive decline. The cognitive manifestation of the SCI stage is mostly related to experiences of memory deficits.


2011
Vol 14 (7)
pp. 1165-1176
Author(s):
Jinan C Banna
Marilyn S Townsend

Abstract Objective To assess convergent validity, factorial validity, test-retest reliability and internal consistency of a diet quality food behaviour checklist (FBC) for low-literate, low-income Spanish speakers. Design Participants (n 90) completed three dietary recalls, the Spanish-language version of the US Department of Agriculture (USDA) Household Food Security Survey Module (HFSSM) and the Spanish-language FBC. Factor structure was examined using principal component analysis. Spearman correlation coefficients between FBC item responses and nutrient intakes from 24 h recalls were used to estimate convergent validity. Correlation coefficients were also calculated between FBC item responses at two time points in another group of participants (n 71) to examine test-retest reliability. Cronbach's α coefficient was determined for items within each sub-scale. Setting Non-profit community agencies serving low-income clients, migrant farm worker camps and low-income housing sites in four California counties. Subjects Spanish-speaking women (n 161) who met income eligibility for SNAP-Ed (Supplemental Nutrition Assistance Program-Education). Results Factor analysis resulted in six sub-scales. Responses to nineteen food behaviour items were significantly correlated with hypothesized 24 h recall data (with a maximum correlation of 0.44 for drinking milk and calcium) or the USDA HFSSM (0.42 with the food security item). Coefficients for test-retest reliability ranged from 0.35 to 0.79. Cronbach's α ranged from 0.49 for the diet quality sub-scale to 0.80 for the fruit and vegetable sub-scale. Conclusions The twenty-two-item FBC and instruction guide will be used to evaluate USDA community nutrition education interventions with low-literate Spanish speakers. This research contributes to the body of knowledge about this at-risk population in California.
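The per-sub-scale internal-consistency statistic used above, Cronbach's α, compares the sum of the individual item variances with the variance of the total score: the more the items covary, the closer α gets to 1. A minimal sketch with hypothetical item-by-respondent scores (not data from the study):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.
    item_scores: one list per item, each holding the same respondents'
    scores in the same order. Uses sample (n-1) variances."""
    k = len(item_scores)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))
```

A value around 0.80, as for the fruit and vegetable sub-scale, is conventionally read as good internal consistency; the 0.49 diet quality value signals weakly related items.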


Author(s):
Vahid Farnia
Mehdi Moradinazar
Nasrin Abdoli
Mostafa Alikhani
Mansour Rezaei
...  

Background: No standard self-report instrument for withdrawal symptoms is available in Iran. Objectives: This study aimed to evaluate the psychometric properties of the Persian version of the 10-item Amphetamine Withdrawal Questionnaire version 2 (AWQV2). Methods: A sample of 388 methamphetamine addicts (215 females and 173 males) was recruited from addiction recovery centers and the psychiatric ward of Farabi Hospital in Kermanshah, using a two-stage random sampling method. The internal consistency and reliability of the AWQV2 items were examined using Cronbach's alpha and test-retest reliability, respectively, and the validity of the AWQV2 was assessed in terms of construct validity and convergent validity. Results: The AWQV2 had a Cronbach's alpha of 0.72. Factor analysis using principal component analysis with a varimax rotation identified three factors: hyperarousal, anxiety, and reversed vegetative symptoms. These factors explained 58% of the total variance. The test-retest reliability coefficient at a 2-week interval was 0.77. The convergent validity of the AWQV2 was examined by simultaneously administering the Advanced Warning of Relapse (AWARE) questionnaire to 40 subjects, with a correlation coefficient of 0.81. Conclusions: Based on the results, the AWQV2 has very good psychometric properties and may be used in research and therapeutic interventions.

