The Cognitive Walkthrough for Implementation Strategies (CWIS): A Pragmatic Method for Assessing Implementation Strategy Usability
Abstract

Background. Implementation strategies have flourished over the last decade in an effort to increase the integration of research evidence into clinical practice. Most strategies are complex, socially mediated processes. Many are complicated, expensive, and ultimately impractical to deliver in real-world settings. The field lacks methods to assess the extent to which implementation strategies are usable and aligned with the needs and constraints of the individuals and contexts that will deliver or receive them. Drawn from the field of human-centered design, cognitive walkthroughs are an efficient assessment method with the potential to surface aspects of strategies that may inhibit their usability and, ultimately, their effectiveness. This article presents a novel cognitive walkthrough methodology for evaluating strategy usability, as well as an example application to a post-training consultation strategy that supports mental health clinicians in the education sector in adopting measurement-based care.

Method. The Cognitive Walkthrough for Implementation Strategies (CWIS) is a pragmatic, mixed-methods approach for evaluating complex, socially mediated implementation strategies in health. CWIS includes six steps: (1) determine preconditions; (2) hierarchical task analysis; (3) task prioritization; (4) convert tasks to scenarios; (5) pragmatic group testing; and (6) usability issue identification, classification, and prioritization. A facilitator conducted two group testing sessions with clinician users (N = 10), guiding participants through six scenarios and 11 associated subtasks. Clinicians rated their anticipated likelihood of completing each subtask and provided qualitative justifications during group discussion. After the walkthrough sessions, users completed the Implementation Strategy Usability Scale (ISUS), a quantitative assessment of strategy usability.

Results. Average subtask success ratings indicated substantial variability across participants and subtasks. Usability ratings (scale: 0-100) of the consultation protocol averaged 71.3 (SD = 10.6). Twenty-one usability problems were identified via qualitative coding and classified by severity and problem type to explain the ratings. High-severity problems included potential misalignment between consultation and clinical service timelines, as well as digressions during consultation processes.

Conclusions. Ratings indicated that the usability of the consultation protocol was at the low end of the "acceptable" range. Collectively, the 21 usability issues explained the ISUS quantitative usability data and provided specific direction for usability enhancements. The current study provides preliminary evidence for the utility of CWIS to assess strategy usability and generate a blueprint for redesign.