Walkthroughs in Web Usability

Author(s):  
Hokyoung Ryu

Website evaluators need robust, easy-to-use usability inspection methods to help them systematically identify the possible usability problems of the site being analysed. This chapter reviews three usability inspection methods: heuristic walkthrough (HW), cognitive walkthrough (CW), and activity walkthrough (AW). It discusses the relative advantages and weaknesses of each technique and offers suggestions for web evaluation, illustrated with a short website example. Based on these analyses, we suggest some changes to website evaluation that would improve the accuracy and reliability of current walkthrough methods. This chapter is not, however, a comparison of the walkthrough techniques intended to determine which is best at detecting a website's usability problems.

2013, Vol. 2013, pp. 1-17
Author(s):
Lars-Ola Bligård
Anna-Lisa Osvalder

To avoid use errors when handling medical equipment, it is important to develop products with a high degree of usability. This can be achieved by performing usability evaluations in the product development process to detect and mitigate potential usability problems. A commonly used method is cognitive walkthrough (CW), but this method has three weaknesses: a poor high-level perspective, insufficient categorisation of detected usability problems, and difficulty in overviewing the analytical results. This paper presents a further development of CW that aims to overcome these weaknesses. The new method, called enhanced cognitive walkthrough (ECW), is a proactive analytical method for the analysis of potential usability problems. The ECW method has been employed to evaluate user interface designs of medical equipment such as home-care ventilators, infusion pumps, dialysis machines, and insulin pumps. The method has proved capable of identifying several potential use problems in these designs.


10.28945/3001, 2006
Author(s):
Chris Procter

This paper describes the design and use of a simple method for comparative website evaluation that has been used for the purposes of teaching web design to university students. The method can be learnt within two hours by a novice user or typical customer. The method is not dependent upon the environment being used by the tester and can be adjusted according to the tester's subjective preferences. Results are presented from using the method in practice to compare the sites of a number of airlines. These suggest that the method is both sufficiently rigorous to produce reliable results and flexible enough for users to customise. It is an effective tool for teaching the principles of web design.


2016, Vol. 24 (e1), pp. e55-e60
Author(s):
Reza Khajouei
Misagh Zahiri Esfahani
Yunes Jahani

Objective: There are several user-based and expert-based usability evaluation methods that may perform differently according to the context in which they are used. The objective of this study was to compare 2 expert-based methods, heuristic evaluation (HE) and cognitive walkthrough (CW), for evaluating the usability of health care information systems. Materials and methods: Five evaluators independently evaluated a medical office management system using HE and CW. We compared the 2 methods in terms of the number of identified usability problems, their severity, and the coverage of each method. Results: In total, 156 problems were identified using the 2 methods. HE identified a significantly higher number of problems related to the "satisfaction" attribute (P = .002). The number of problems identified using CW concerning the "learnability" attribute was significantly higher than those identified using HE (P = .005). There was no significant difference in the number of problems identified by HE across the different usability attributes (P = .232), whereas the CW results showed a significant difference in the number of problems across usability attributes (P < .0001). The average severity of problems identified using CW was significantly higher than that of HE (P < .0001). Conclusion: This study showed that HE and CW do not differ significantly in the number of usability problems identified, but they do differ in the severity of problems and the coverage of some usability attributes. The results suggest that CW would be the preferred method for evaluating systems intended for novice users, and HE for users who have experience with similar systems. However, more studies are needed to support this finding.


SEMINASTIKA, 2021, Vol. 3 (1), pp. 99-106
Author(s):
Gracella Tambunan
Lit Malem Ginting

Usability is a factor that indicates the success of an interactive product or system, such as a mobile application. The increasing use of smartphones demands more accurate and effective usability evaluation methods for finding usability problems, so that the findings can drive product improvement during development. This study compares the Cognitive Walkthrough method with Heuristic Evaluation in evaluating the usability of the SIRS Del eGov Center mobile application. The evaluation with these two methods was carried out by three evaluators acting as experts. The problems found and the improvements recommended by each method were used to produce a high-fidelity improvement prototype. Each prototype was then tested with ten participants using the Usability Testing method, generating scores via the System Usability Scale (SUS). From the test scores, the Likert-scale percentage and the success rate of each prototype were determined. The results show that, of the two usability evaluation methods, Heuristic Evaluation is the more effective: it finds more usability problems and achieves a higher Likert-scale percentage, 66.5% versus 64.75% for Cognitive Walkthrough.
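The SUS scores mentioned above follow the standard System Usability Scale computation: ten items rated 1-5, with odd (positively worded) items contributing (response − 1) and even (negatively worded) items contributing (5 − response), the sum scaled by 2.5 onto a 0-100 range. A minimal sketch; the example responses are hypothetical, not this study's data:

```python
# Standard System Usability Scale (SUS) scoring.
# Odd items (positive statements) contribute (response - 1);
# even items (negative statements) contribute (5 - response).
# The sum of contributions (0-40) is scaled by 2.5 to a 0-100 score.

def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses from one participant (not the study's data):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

A percentage figure such as the 66.5% reported above is typically the mean score expressed against the scale maximum; the exact Likert-percentage formula used in the study is not given in the abstract.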


2021, Vol. 2 (1)
Author(s):
Aaron R. Lyon
Jessica Coifman
Heather Cook
Erin McRee
Freda F. Liu
…

Background: Implementation strategies have flourished in an effort to increase the integration of research evidence into clinical practice. Most strategies are complex, socially mediated processes. Many are complicated, expensive, and ultimately impractical to deliver in real-world settings. The field lacks methods to assess the extent to which strategies are usable and aligned with the needs and constraints of the individuals and contexts that will deliver or receive them. Drawn from the field of human-centered design, cognitive walkthroughs are an efficient assessment method with the potential to identify aspects of strategies that may inhibit their usability and, ultimately, their effectiveness. This article presents a novel walkthrough methodology for evaluating strategy usability, along with an example application to a post-training consultation strategy supporting school mental health clinicians in adopting measurement-based care.

Method: The Cognitive Walkthrough for Implementation Strategies (CWIS) is a pragmatic, mixed-methods approach for evaluating complex, socially mediated implementation strategies. CWIS includes six steps: (1) determine preconditions; (2) hierarchical task analysis; (3) task prioritization; (4) convert tasks to scenarios; (5) pragmatic group testing; and (6) usability issue identification, classification, and prioritization. A facilitator conducted two group testing sessions with clinician users (N = 10), guiding participants through 6 scenarios and 11 associated subtasks. Clinicians reported their anticipated likelihood of completing each subtask and provided qualitative justifications during group discussion. Following the walkthrough sessions, users completed an adapted quantitative assessment of strategy usability.

Results: Average anticipated success ratings indicated substantial variability across participants and subtasks. Usability ratings (scale 0-100) of the consultation protocol averaged 71.3 (SD = 10.6). Twenty-one usability problems were identified via qualitative content analysis with consensus coding and were classified by severity and problem type. High-severity problems included potential misalignment between consultation and clinical service timelines, as well as digressions during consultation processes.

Conclusions: CWIS quantitative usability ratings indicated that the consultation protocol was at the low end of the "acceptable" range (based on norms from the unadapted scale). Collectively, the 21 resulting usability issues explained the quantitative usability data and provided specific direction for usability enhancements. The current study provides preliminary evidence for the utility of CWIS for assessing strategy usability and generating a blueprint for redesign.
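CWIS step 6 classifies identified issues by severity and problem type and then prioritizes them, alongside summary statistics over per-user usability ratings. An illustrative sketch of that bookkeeping; the issue list, severity labels, and ratings below are hypothetical, not the study's coded data:

```python
# Illustrative sketch of CWIS step 6: classify usability issues by
# severity, then prioritize so the most severe surface first.
# All issues and numbers here are made up for illustration.
import statistics

SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

def prioritize(issues):
    """Sort issues so high-severity problems come first."""
    return sorted(issues, key=lambda i: SEVERITY_ORDER[i["severity"]])

issues = [
    {"problem": "unclear scenario wording", "severity": "low"},
    {"problem": "consultation/service timeline misalignment", "severity": "high"},
    {"problem": "digressions during consultation", "severity": "high"},
]

for issue in prioritize(issues):
    print(issue["severity"], "-", issue["problem"])

# Summarizing per-user usability ratings (0-100 scale), mirroring the
# paper's mean/SD report; these ratings are invented:
ratings = [58, 71, 80, 65, 77]
print(round(statistics.mean(ratings), 1), round(statistics.stdev(ratings), 1))
```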


Author(s):
Terence S. Andre
H. Rex Hartson
Robert C. Williges

Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine if our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness and was consistent with cognitive walkthrough for these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.
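The thoroughness, validity, and effectiveness measures used in this comparison have standard definitions in the usability-inspection literature: thoroughness is the fraction of real problems an inspection finds, validity is the fraction of reported problems that are real, and effectiveness is their product. A minimal sketch under those usual definitions; the counts are hypothetical:

```python
# Standard usability-inspection metrics (counts below are hypothetical):
#   thoroughness  = real problems found / real problems that exist
#   validity      = real problems found / problems reported
#   effectiveness = thoroughness * validity

def thoroughness(real_found, real_total):
    return real_found / real_total

def validity(real_found, reported):
    return real_found / reported

def effectiveness(real_found, real_total, reported):
    return thoroughness(real_found, real_total) * validity(real_found, reported)

# A method that reports 15 problems, 12 of them real, out of 20 real
# problems in the interface:
print(round(effectiveness(real_found=12, real_total=20, reported=15), 2))  # 0.48
```

Framing the comparison this way shows why the two measures trade off: an inspection can inflate thoroughness by reporting everything, but only at the cost of validity.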


Author(s):
Rachel E. Stuck
Amy W. Chong
Tracy L. Mitzner
Wendy A. Rogers

For older adults, managing medications can be a burden and could lead to medication non-adherence. To decrease risks associated with medication non-adherence, healthcare providers may recommend medication reminder apps as an assistive tool. However, these apps are often not designed with consideration of older adults’ needs, capabilities, and limitations. To identify whether available apps are suitable for older adults, we conducted an in-depth cognitive walkthrough and a heuristic evaluation of the most commonly downloaded medication reminder app. Findings revealed three main issues: 1) difficulty in navigation, 2) poor visibility, and 3) a lack of transparency. We also selected the top five downloaded medication reminder apps and categorized user reviews to assess app functionality and usability problems. The results of our analysis provide guidance for app design for older adult users to provide effective tools for managing medications and supporting patient/user health.

