Cross-cultural validation of the Stroke Riskometer using generalizability theory

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Oleg Medvedev ◽  
Quoc Truong ◽  
Alexander Merkin ◽  
Robert Borotkanics ◽  
Rita Krishnamurthi ◽  
...  

Abstract: The Stroke Riskometer mobile application is a novel, validated way to provide personalized stroke risk assessment for individuals and motivate them to reduce their risks. Although this app is being used worldwide, its reliability across different countries has not yet been rigorously investigated using appropriate methodology. Generalizability Theory (G-Theory) is an advanced statistical method suitable for examining the reliability and generalizability of assessment scores across different samples, cultural and other contexts, and for evaluating sources of measurement error. G-Theory was applied to Stroke Riskometer data sampled from 1300 participants in 13 countries using a two-facet nested observational design (person by item nested in country). The Stroke Riskometer demonstrated strong reliability in measuring stroke risk across the countries, with relative and absolute G coefficients of 0.84, 95% CI [0.79; 0.89] and 0.82, 95% CI [0.76; 0.88], respectively. D-study analyses revealed that the Stroke Riskometer has optimal reliability in its current form for measuring stroke risk in each country, and no modifications are required. These results suggest that the Stroke Riskometer’s scores are generalizable across sample populations and countries, permitting cross-cultural comparisons. Further studies investigating the reliability of the Stroke Riskometer over time using a longitudinal design are warranted.
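For readers less familiar with G-Theory, the sketch below illustrates how relative and absolute G coefficients could be derived from variance components in a design of this kind. It assumes a fully random person × (item : country) design; all variance-component values are hypothetical illustrations, not the study's estimates.

```python
# Hypothetical sketch: relative and absolute G coefficients for a
# person x (item : country) design with all facets treated as random.
# Variance components below are illustrative, NOT the study's estimates.

def g_coefficients(var_p, var_c, var_ic, var_pc, var_pic_e, n_c, n_i):
    """Return (relative G, absolute G) for a p x (i:c) random design."""
    # Relative error: person-related interactions, averaged over facet levels
    rel_error = var_pc / n_c + var_pic_e / (n_c * n_i)
    # Absolute error additionally includes the facet main effects
    abs_error = rel_error + var_c / n_c + var_ic / (n_c * n_i)
    g_rel = var_p / (var_p + rel_error)
    g_abs = var_p / (var_p + abs_error)
    return g_rel, g_abs

# Illustrative values only
g_rel, g_abs = g_coefficients(
    var_p=0.50, var_c=0.02, var_ic=0.05, var_pc=0.04, var_pic_e=0.40,
    n_c=13, n_i=20)
print(f"Relative G = {g_rel:.2f}, Absolute G = {g_abs:.2f}")
```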

2021 ◽  
pp. 1-11
Author(s):  
Q. C. Truong ◽  
C. Choo ◽  
K. Numbers ◽  
A. G. Merkin ◽  
H. Brodaty ◽  
...  

Abstract: Objectives: This study aimed to apply generalizability theory (G-theory) to investigate dynamic and enduring patterns of subjective cognitive complaints (SCC) and the reliability of two widely used SCC assessment tools. Design: G-theory was applied to the assessment scales using a longitudinal measurement design with five assessments spanning 10 years of follow-up. Setting: Community-dwelling older adults aged 70–90 years and their informants, living in Sydney, Australia, participated in the longitudinal Sydney Memory and Ageing Study. Participants: The sample included 232 participants aged 70 years and older and 232 associated informants. Participants were predominantly White Europeans (97.8%). The informants included 76 males (32.8%) and 153 females (65.9%), and their ages ranged from 27 to 86 years, with a mean age of 61.3 years (SD = 14.38). Measurements: The Memory Complaint Questionnaire (MAC-Q) and the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Results: The IQCODE demonstrated strong reliability in measuring enduring patterns of SCC with G = 0.86. Marginally acceptable reliability of the 6-item MAC-Q (G = 0.77–0.80) was optimized by removing one item, resulting in G = 0.80–0.81. Most items of both assessments measured enduring SCC, with the exception of one dynamic MAC-Q item. The IQCODE significantly predicted global cognition scores and risk of incident dementia across all occasions, while MAC-Q scores were significant predictors on only some occasions. Conclusions: While both informant (IQCODE) and self-reported (MAC-Q) SCC scores were generalizable across the sample population and occasions, self-reported (MAC-Q) scores may be less accurate in predicting cognitive ability and diagnosis for each individual.
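The enduring-versus-dynamic distinction above rests on separating person (trait-like) variance from person × occasion (state-like) variance. A minimal sketch, assuming a simple person × occasion design and purely illustrative variance components (not the MAC-Q or IQCODE estimates):

```python
# Hypothetical sketch: separating enduring from dynamic variance in a
# person x occasion (p x o) G-study. Values are illustrative only.

def enduring_g(var_p, var_po_e, n_o):
    """Relative G for a person mean taken over n_o occasions.

    var_p    : enduring between-person variance (trait-like component)
    var_po_e : person x occasion interaction + error (dynamic, state-like)
    The occasion main effect would enter only the absolute coefficient.
    """
    return var_p / (var_p + var_po_e / n_o)

print(f"G over 5 occasions = {enduring_g(0.60, 0.45, n_o=5):.2f}")
```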


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ali Khodi

Abstract: The present study attempted to investigate factors that affect EFL writing scores using generalizability theory (G-theory). To this purpose, one hundred and twenty students completed one independent and one integrated writing task. Their performances were then scored by six raters: one self-rating, three peer ratings, and two instructor ratings. The main purpose of the study was to determine the relative and absolute contributions of different facets such as student, rater, task, method of scoring, and background of education to the validity of writing assessment scores. The results indicated three major sources of variance: (a) the student by task by method of scoring (nested in background of education) interaction (STM:B), contributing 31.8% of the total variance; (b) the student by rater by task by method of scoring (nested in background of education) interaction (SRTM:B), contributing 26.5% of the total variance; and (c) the student by rater by method of scoring (nested in background of education) interaction (SRM:B), contributing 17.6%. With regard to the G-coefficients in the G-study (relative G-coefficient ≥ 0.86), it was also found that the results of the assessment were highly valid and reliable. The sources of error variance were the student by rater (nested in background of education) interaction (SR:B) and the rater by background of education interaction, contributing 99.2% and 0.8% of the error variance, respectively. Additionally, ten separate G-studies were conducted to investigate the contribution of different facets across rater, task, and method of scoring as the differentiation facet. These studies suggested that peer rating, the analytical scoring method, and integrated writing tasks were the most reliable and generalizable designs for writing assessment. Finally, five decision studies (D-studies) at the optimization level were conducted, indicating that at least four raters (with G-coefficient = 0.80) are necessary for a valid and reliable assessment. Based on these results, to achieve the greatest gain in generalizability, teachers should have their students take two writing assessments and have their performance rated with at least two scoring methods by at least four raters.
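The "at least four raters" recommendation follows standard D-study logic: increasing the number of raters shrinks the rater-related error term. A rough sketch, simplified to a single student × rater facet rather than the study's nested design, with variance components chosen only to illustrate the pattern:

```python
# Hypothetical D-study sketch: relative G coefficient as a function of the
# number of raters in a simplified student x rater design.
# Variance components are illustrative, not the study's estimates.

var_p = 0.50      # between-student (universe score) variance
var_pr_e = 0.50   # student x rater interaction + residual error

for n_raters in range(1, 7):
    g = var_p / (var_p + var_pr_e / n_raters)
    print(f"{n_raters} rater(s): relative G = {g:.2f}")
```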


2016 ◽  
Vol 24 (6) ◽  
pp. 565-567 ◽  
Author(s):  
Stephane M Shepherd

Objective: Violence risk assessment assumes a critical medico-legal role, addressing offender/patient needs and informing forensic mental health decision making. Yet questions remain over the cross-cultural applicability of such measures. In their current form, violence risk instruments may not reflect the unique life and cultural experiences of Indigenous Australians, rendering them culturally unsafe. Conclusions: To realize equitable forensic assessment, it is necessary to ascertain whether there are cultural differences across risk factors for violence and to ensure that risk instruments are validated as culturally appropriate. Greater cross-cultural rigour in forensic mental health risk assessment, research, and practice is proposed.


Mindfulness ◽  
2020 ◽  
Author(s):  
Oleg N. Medvedev ◽  
Anastasia T. Dailianis ◽  
Yoon-Suk Hwang ◽  
Christian U. Krägeloh ◽  
Nirbhay N. Singh

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Oleg Medvedev ◽  
Quoc Cuong Truong ◽  
Alexander Merkin ◽  
Robert Borotkanics ◽  
Rita Krishnamurthi ◽  
...  

2021 ◽  
Vol 2 (3) ◽  
pp. 50-55
Author(s):  
James Uzomba Okeaba ◽  
Nkechi Patricia-Mary Esomonu

This study estimated measurement error and score dependability in examinations using Generalizability Theory. Scores obtained by students (the objects of measurement) in examinations are affected by multiple sources of error (facets), and these scores are used to make relative and absolute decisions about the students. There is, therefore, a need to estimate measurement error and score dependability to determine the extent of the facets' contributions to error in examination scores. Three research questions and one hypothesis guided the study. The study population comprised 5,085 SS3 students of the 34 government-owned senior secondary schools in Yenagoa LGA of Bayelsa State in the 2019/2020 academic session. Ten schools were selected using a simple random sampling technique, and the 1,525 SS3 students of the selected schools formed the sample. Section A of the 2018 NECO Mathematics main paper and the 2018 NECO Mathematics marking scheme were used to collect the data. EduG version 6.0-e, based on ANOVA and Generalizability Theory, was used to answer the three research questions. A 95% confidence interval was computed using the standard errors of the variance components to test the hypothesis. The findings revealed that some hidden sources of error were at play in the study. The students facet (σ²S) made the highest contribution to measurement error in examination scores, followed by the residual (σ²SIM). Also, the students facet differed significantly (p < 0.05) in its contribution to measurement error, while the other facets and their interactions did not differ significantly in their contributions. Hence, H01 was rejected for the students facet but retained for the other facets. Increasing the number of markers from 1 to 4 with 5 items yielded coefficients of 0.84 to 0.91, respectively; a generalizability coefficient of 0.94, high enough to rank-order students according to their relative abilities in examinations, was obtained with 2 markers when the number of items was increased to 10. An index of dependability of 0.93, high enough to maximize reliability, was obtained with 2 markers and 10 items.
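The kind of D-study reported here tabulates both the generalizability coefficient (relative decisions) and the index of dependability Φ (absolute decisions) over candidate numbers of items and markers. A minimal sketch, assuming a fully crossed students × items × markers design with purely illustrative variance components (not the EduG estimates from the study):

```python
# Hypothetical D-study sketch for a fully crossed students x items x markers
# (p x i x m) design: relative G coefficient and dependability index (Phi)
# for different numbers of items and markers. Values are illustrative only.

vc = dict(p=0.50, i=0.04, m=0.02, pi=0.20, pm=0.10, im=0.01, pim_e=0.60)

def g_and_phi(n_i, n_m, vc):
    rel_err = vc["pi"] / n_i + vc["pm"] / n_m + vc["pim_e"] / (n_i * n_m)
    abs_err = rel_err + vc["i"] / n_i + vc["m"] / n_m + vc["im"] / (n_i * n_m)
    g = vc["p"] / (vc["p"] + rel_err)
    phi = vc["p"] / (vc["p"] + abs_err)
    return g, phi

for n_i, n_m in [(5, 1), (5, 2), (5, 4), (10, 2)]:
    g, phi = g_and_phi(n_i, n_m, vc)
    print(f"items={n_i:2d}, markers={n_m}: G={g:.2f}, Phi={phi:.2f}")
```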


2017 ◽  
Vol 20 (5) ◽  
pp. 599-627 ◽  
Author(s):  
Stephane M Shepherd ◽  
Cynthia Willis-Esqueda

Violence risk instruments are widely employed with at-risk minority clients in correctional and forensic mental health settings. However, the construction and subsequent validation of such instruments rarely, if ever, incorporate the perceptions, worldviews, life experiences, and belief systems of non-white communities. This study utilized a culturally informed qualitative approach to address the cross-cultural disparities in the forensic risk literature. Cultural perspectives on violence risk assessment were elicited from a sample of 30 American Indian and First Nations professionals from the health, legal, and pedagogical sectors following an inspection of the Structured Assessment of Violence Risk in Youth instrument. Generally, participants believed that the Structured Assessment of Violence Risk in Youth instrument was not culturally appropriate for use with American Indian and First Nations youth in its current form. Recurrent themes of concern included the instrument’s negative labeling capacity, lack of cultural contextualization, individualized focus, and absence of cultural norms and practices. Recommendations to improve the cross-cultural applicability of the Structured Assessment of Violence Risk in Youth are discussed within.


2019 ◽  
Vol 26 (1) ◽  
pp. 563-575
Author(s):  
David Lowenstein ◽  
Wu Yi Zheng ◽  
Rosemary Burke ◽  
Eliza Kenny ◽  
Anmol Sandhu ◽  
...  

This study aimed to assess drug–drug interaction alert interfaces and to examine the relationship between compliance with human factors principles and user preferences for alerts. Three reviewers independently evaluated the drug–drug interaction alert interfaces of seven electronic systems using the Instrument for Evaluating Human Factors Principles in Medication-Related Decision-Support Alerts (I-MeDeSA). Fifty-three doctors and pharmacists completed a survey ranking the alert interfaces from best to worst and reporting on features they liked and disliked. Human factors compliance and user preferences were then compared. Statistical analysis revealed no significant association between I-MeDeSA scores and user preferences. However, the strengths and weaknesses of drug–drug interaction alerts from the users’ perspective were in line with the human factors constructs evaluated by the I-MeDeSA. The I-MeDeSA, in its current form, is unable to identify the alerts preferred by users. The design principles assessed by the I-MeDeSA appear sound, but its arbitrary allocation of points to each human factors construct may not reflect the relative importance that end users place on different aspects of alert design.
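The association test described amounts to comparing instrument scores against aggregated user rankings across the seven systems, for which a rank correlation is a natural choice. A minimal sketch with invented numbers (not the study's data):

```python
# Hypothetical sketch: Spearman rank correlation between I-MeDeSA scores and
# aggregated user-preference rankings across seven alert interfaces.
# All numbers are invented for illustration.
from scipy.stats import spearmanr

imedesa_scores = [18, 22, 15, 30, 25, 12, 20]   # one score per system
preference_rank = [4, 2, 6, 1, 3, 7, 5]         # aggregated user ranking

rho, p_value = spearmanr(imedesa_scores, preference_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```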

