Putting the Meaning Into Meaningful Change Research

Author(s):  
Jessica Braid ◽  
Susanne Clinch ◽  
Hannah M Staunton ◽  
Patricia K Corey-Lisle ◽  
Bruno Kovic ◽  
...  

Abstract
Purpose: Methods for deriving clinically meaningful change thresholds have advanced considerably in recent years; however, key questions remain about what the identified change score actually means for an individual patient or group of patients. This is particularly important in the case of clinician-reported outcomes (ClinROs), where the translation from clinically meaningful change to patient relevance in daily living is not clear. This paper provides case studies from an industry perspective in which we have addressed this challenge using varied approaches, exploring meaningful change at both the group and individual level.
Methods: We provide several case studies to illustrate different approaches to understanding and communicating a meaningful outcome on a ClinRO. These include alternative methods for interpreting group-level minimal clinically important differences (MCIDs) and several examples of linking ClinRO items to patient-relevant real-world concepts, e.g., through exit interviews, translation of ClinRO items into patient-friendly concepts, and use of the Rasch model to equate ClinRO items to real-world functional measures.
Results: Each case study provides unique learning opportunities. For example, contextualising group-level differences by converting MCIDs into other metrics, such as numbers needed to treat and responder deltas, supports interpretation of clinical meaning, especially for clinicians. For interpreting individual-level meaningful change, exit interviews and the development of patient-friendly versions of ClinROs provide a means of linking clinician-focused content to real-world functional outcomes in a way that is meaningful for patients. Finally, the Rasch model can help predict the probable item scores on a ClinRO associated with the threshold at which a function is gained or lost.
Conclusion: While methods for deriving meaningful change thresholds have evolved, a significant challenge remains in communicating what observed changes mean to the patient, a challenge further complicated for ClinROs. These case studies showcase novel approaches to addressing this challenge and may provide a useful addition to the COA scientist's toolbox.
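As a hedged illustration of the MCID-to-NNT conversion mentioned above, the sketch below derives a number needed to treat from responder rates defined by an assumed meaningful change threshold. All change scores and the threshold value are invented for illustration; they are not data from any of the case studies.

```python
# Sketch: converting an MCID responder definition into a number needed
# to treat (NNT). Illustrative numbers only, not from the paper.

def responder_rate(change_scores, mcid):
    """Proportion of patients whose change score meets or exceeds the MCID."""
    return sum(c >= mcid for c in change_scores) / len(change_scores)

def number_needed_to_treat(p_treatment, p_control):
    """NNT = 1 / absolute difference in responder rates."""
    diff = p_treatment - p_control
    if diff <= 0:
        raise ValueError("treatment responder rate must exceed control")
    return 1.0 / diff

# Hypothetical change scores on a ClinRO (higher = improvement)
treated = [5, 3, 6, 2, 4, 7, 1, 5]
control = [1, 2, 0, 3, 1, 4, 2, 1]
mcid = 4  # assumed meaningful change threshold

p_t = responder_rate(treated, mcid)           # 5/8 = 0.625
p_c = responder_rate(control, mcid)           # 1/8 = 0.125
nnt = number_needed_to_treat(p_t, p_c)        # 1 / 0.5 = 2.0
```

Here an NNT of 2 would read as "treat two patients for one additional patient to achieve a meaningful improvement", which is often a more intuitive framing for clinicians than the raw change-score threshold.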

GeroPsych ◽  
2020 ◽  
Vol 33 (4) ◽  
pp. 209-222
Author(s):  
Jennifer Christine Hughes ◽  
William Romine ◽  
Tanvi Banerjee ◽  
Garrett Goodman ◽  
Abby Edwards ◽  
...  

Abstract. Dementia caregiving is associated with depression, stress, and sleep disturbance. A daily-use caregiver sleep survey (DUCSS) was developed to evaluate caregiver sleep. The tool was distributed to 24 informal caregivers and validated using the Rasch model, which indicated that the 17-item survey produced sleep-quality measures of sufficient reliability for both group-level and individual-level comparisons (reliability = .87). The sample size was sufficient to provide precise measures of each item's position along the scale (item difficulty; reliability = .85), so outcomes associated with sleep-quality levels could be evaluated. We observed that the structure of the instrument is unidimensional, meaning the wording does not contain systematic biases peripheral to sleep quality. DUCSS is a useful tool for caregiver assessment and monitoring.
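The Rasch validation described above rests on the model's core relation between a person's measure and an item's difficulty on a shared logit scale. A minimal sketch of the dichotomous form follows; it is not the authors' actual analysis (which would use dedicated Rasch software on the 17-item survey), just the underlying probability function.

```python
import math

def rasch_probability(theta, difficulty):
    """Dichotomous Rasch model: probability that a person with measure
    theta endorses an item of the given difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# A person whose measure equals an item's difficulty endorses it with
# probability 0.5; easier items are endorsed more often.
p_match = rasch_probability(0.0, 0.0)    # 0.5
p_easy = rasch_probability(0.0, -2.0)    # ~0.88
```

Item-difficulty reliability (.85 in the abstract) reflects how precisely these difficulty positions are estimated from the sample.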


2021 ◽  
Author(s):  
Philip Griffiths ◽  
Joel Sims ◽  
Abi Williams ◽  
Nicola Williamson ◽  
David Cella ◽  
...  

Abstract
Purpose: Treatment benefit, as assessed using clinical outcome assessments (COAs), is a key endpoint in many clinical trials at both the individual and group level. Anchor-based methods can aid interpretation of COA change scores beyond statistical significance and help derive a meaningful change threshold (MCT). However, evidence-based guidance on the selection of appropriately related anchors is lacking.
Methods: A simulation was conducted that varied sample size, change-score variability, and anchor correlation strength to assess the impact of these variables on recovering the true simulated MCT at both the individual and group level. At the individual level, receiver operating characteristic (ROC) curve and predictive modelling (PM) anchor analyses were conducted. At the group level, the means of the 'not improved' and 'improved' groups were compared.
Results: Sample size, change-score variability, and magnitude of anchor correlation affected the accuracy of the estimated MCT. At the individual level, ROC curves were less accurate than PM methods at recovering the true MCT. For both methods, smaller samples led to higher variability in the returned MCT, with variability higher still for ROC. Anchors with weaker correlations with COA change scores produced greater variability in the estimated MCT. An anchor correlation of 0.50-0.60 identified a true MCT cut-point under certain conditions using ROC, whereas anchor correlations as low as 0.30 were adequate when using PM under certain conditions. At the group level, the MCT was consistently underestimated regardless of the anchor correlation.
Conclusion: Findings show that the chosen method, sample size, and variability in change scores influence the anchor correlation strength necessary to identify a true individual-level MCT; often this needs to be higher than the commonly accepted threshold of 0.30. Stronger correlations than 0.30 are also required at the group level, but a specific recommendation is not provided. These results can be used to assist researchers in selecting and assessing the quality of anchors.
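A minimal sketch of the ROC-style anchor analysis follows, assuming Youden's J as the cut-point criterion (a common choice for anchor-based MCT derivation; the study's exact implementation may differ). Patients are classified as improved or not by the anchor, and the change-score cut-point that best separates the two groups is taken as the MCT. The data are invented for illustration.

```python
def roc_optimal_cutpoint(change_scores, improved_flags):
    """Return the change-score cut-point maximizing Youden's J
    (sensitivity + specificity - 1) against the anchor classification."""
    n_imp = sum(improved_flags)
    n_not = len(improved_flags) - n_imp
    best_cut, best_j = None, float("-inf")
    for cut in sorted(set(change_scores)):
        # Sensitivity: improved patients at or above the cut-point
        tp = sum(1 for c, f in zip(change_scores, improved_flags) if f and c >= cut)
        # Specificity: non-improved patients below the cut-point
        tn = sum(1 for c, f in zip(change_scores, improved_flags) if not f and c < cut)
        j = tp / n_imp + tn / n_not - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

# Illustrative data: the anchor classifies the last five patients as improved.
changes = [0, 1, 1, 2, 2, 5, 5, 6, 6, 7]
improved = [False] * 5 + [True] * 5
mct = roc_optimal_cutpoint(changes, improved)  # 5
```

The simulation's point about anchor quality shows up directly here: the weaker the anchor's correlation with the change scores, the more the improved and not-improved distributions overlap, and the noisier the recovered cut-point becomes.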


2011 ◽  
Vol 8 (2) ◽  
pp. 197-200 ◽  
Author(s):  
Andrew J. King ◽  
Lawrence Cheng ◽  
Sandra D. Starke ◽  
Julia P. Myatt

Diversity of expertise at an individual level can increase intelligence at a collective level—a type of swarm intelligence (SI) popularly known as the 'wisdom of the crowd'. However, this requires independent estimates (rare in the real world owing to the availability of public information) and contradicts people's bias for copying successful individuals. To explain these inconsistencies, 429 people took part in a 'guess the number of sweets' exercise. Guesses made with no public information were diverse, resulting in highly accurate SI. Individuals with access to the previous guess, the mean guess, or a randomly chosen guess tended to overestimate the number of sweets, and this undermined SI. However, when people were provided with the current best guess, this prevented very large (inaccurate) guesses, resulting in convergence of guesses towards the true value and accurate SI across a range of group sizes. Thus, contrary to previous work, we show that social influence need not undermine SI, especially where individual decisions are made sequentially and then aggregated. Furthermore, we offer an explanation for why people have a bias to recruit and follow experts in team settings: copying successful individuals can enable accuracy at both the individual and group level, even at small group sizes.
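The aggregation effect underlying the 'wisdom of the crowd' can be sketched with a toy simulation (invented parameters, not the study's data): individual guesses are highly noisy, yet pooling independent guesses lands close to the true value.

```python
import random
import statistics

random.seed(1)
TRUE_COUNT = 751  # hypothetical number of sweets in the jar

# Independent guessers with multiplicative lognormal error: each person
# is individually noisy but unbiased in the median.
guesses = [TRUE_COUNT * random.lognormvariate(0.0, 0.4) for _ in range(429)]

# Aggregated crowd estimate vs. a typical individual's error.
median_error = abs(statistics.median(guesses) - TRUE_COUNT) / TRUE_COUNT
typical_individual_error = statistics.median(
    abs(g - TRUE_COUNT) / TRUE_COUNT for g in guesses
)
# The crowd median's relative error is a small fraction of the error a
# typical individual makes.
```

The study's manipulation corresponds to breaking the independence assumption in this sketch: once guesses are conditioned on earlier public guesses, the errors correlate and the aggregation benefit shrinks, unless the shared information (the current best guess) itself suppresses extreme errors.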


2018 ◽  
Vol 29 (10) ◽  
pp. 1631-1641 ◽  
Author(s):  
Kristin M. Brethel-Haurwitz ◽  
Elise M. Cardinale ◽  
Kruti M. Vekaria ◽  
Emily L. Robertson ◽  
Brian Walitt ◽  
...  

Shared neural representations during experienced and observed distress are hypothesized to reflect empathic neural simulation, which may support altruism. But the correspondence between real-world altruism and shared neural representations has not been directly tested, and empathy’s role in promoting altruism toward strangers has been questioned. Here, we show that individuals who have performed costly altruism (donating a kidney to a stranger; n = 25) exhibit greater self–other overlap than matched control participants ( n = 27) in neural representations of pain and threat (fearful anticipation) in anterior insula (AI) during an empathic-pain paradigm. Altruists exhibited greater self–other correspondence in pain-related activation in left AI, highlighting that group-level overlap was supported by individual-level associations between empathic pain and firsthand pain. Altruists exhibited enhanced functional coupling of left AI with left midinsula during empathic pain and threat. Results show that heightened neural instantiations of empathy correspond to real-world altruism and highlight limitations of self-report.


2011 ◽  
Author(s):  
Klaus Kubinger ◽  
D. Rasch ◽  
T. Yanagida

2020 ◽  
Author(s):  
Keith Payne ◽  
Heidi A. Vuletich ◽  
Kristjen B. Lundberg

The Bias of Crowds model (Payne, Vuletich, & Lundberg, 2017) argues that implicit bias varies across individuals and across contexts. It is unreliable and weakly associated with behavior at the individual level. But when aggregated to measure context-level effects, the scores become stable and predictive of group-level outcomes. We concluded that the statistical benefits of aggregation are so powerful that researchers should reconceptualize implicit bias as a feature of contexts, and ask new questions about how implicit biases relate to systemic racism. Connor and Evers (2020) critiqued the model, but their critique simply restates the core claims of the model. They agreed that implicit bias varies across individuals and across contexts; that it is unreliable and weakly associated with behavior at the individual level; and that aggregating scores to measure context-level effects makes them more stable and predictive of group-level outcomes. Connor and Evers concluded that implicit bias should be considered to really be a noisily measured individual-level construct because the effects of aggregation are merely statistical. We respond to their specific arguments and then discuss what it means to really be a feature of persons versus situations, and multilevel measurement and theory in psychological science more broadly.
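The statistical point at issue, that aggregation stabilizes a weak individual-level signal into a reliable context-level one, can be sketched with invented numbers (not the model's actual data):

```python
import random
import statistics

random.seed(0)

# Ten contexts with small, stable context-level biases.
context_bias = [0.1 * k for k in range(10)]

def individual_score(bias):
    """One person's score: the context signal plus large person-level
    noise, so any single score is unreliable."""
    return bias + random.gauss(0.0, 1.0)

# Averaging many individuals per context shrinks the noise by
# 1/sqrt(n) and recovers the ordering of the context biases.
context_means = [
    statistics.mean(individual_score(b) for _ in range(2000))
    for b in context_bias
]
```

In this sketch a single score is dominated by noise (standard deviation 1.0 against context differences of 0.1), yet the context means track the true biases closely, which is the aggregation effect both sides of the exchange accept while disagreeing about what it implies for the construct.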

