Heuristic Evaluation on M-Learning Applications

Author(s):  
Christofer Ramos ◽  
Flávio Anthero Nunes Vianna dos Santos ◽  
Monique Vandresen

Heuristic evaluation stands out among usability evaluation methods for its benefits in time and cost. Nevertheless, generic heuristic sets require adaptation for specific interfaces, such as those of m-learning applications, which have gained considerable prominence in the current technological context. Given the lack of studies aimed at interfaces of this kind, the authors propose, through a systematic methodology, a comparative study between a heuristic set specific to the evaluation of e-learning interfaces and one specific to mobile interfaces. The identified usability problems were analyzed in terms of coverage, distribution, redundancy, context, and severity, making it possible to assess how effectively each set covers m-learning issues. Among the findings, the e-learning heuristic set detected a larger number of usability problems that the mobile set did not find.
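As an illustration of this kind of comparison, a minimal sketch of how the overlap between two heuristic sets' findings might be quantified. The problem identifiers are hypothetical, not the study's data:

```python
# Hypothetical problem IDs found by each heuristic set (not the study's data).
elearning_set = {"P01", "P02", "P03", "P05", "P08", "P09"}
mobile_set = {"P02", "P04", "P05", "P07"}

all_problems = elearning_set | mobile_set   # union of everything found
overlap = elearning_set & mobile_set        # problems both sets detected

# Coverage: share of all known problems each set detected.
for name, found in [("e-learning", elearning_set), ("mobile", mobile_set)]:
    print(f"{name} coverage: {len(found) / len(all_problems):.0%}")

# Redundancy: problems detected by both sets; uniqueness: found by only one.
print(f"overlap: {sorted(overlap)}")
print(f"unique to e-learning: {sorted(elearning_set - mobile_set)}")
print(f"unique to mobile: {sorted(mobile_set - elearning_set)}")
```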

Author(s):  
Shirish C. Srivastava ◽  
Shalini Chandra ◽  
Hwee Ming Lam

Usability evaluation, which refers to a series of activities designed to measure the effectiveness of a system as a whole, is an important step in determining the acceptance of a system by its users. Usability evaluation is becoming more important as both user groups and tasks grow in size and diversity. Users are increasingly well informed and, consequently, have higher expectations of the systems they use. Moreover, the "system interface" has become a commodity and, hence, user acceptance plays a major role in the success of a system. Various usability evaluation methods are currently in use, such as cognitive walkthrough, think aloud, claims analysis, and heuristic evaluation. For this study, however, we have chosen heuristic evaluation because it is relatively inexpensive, logistically uncomplicated, and often used as a discount usability-engineering tool (Nielsen, 1994). Heuristic evaluation is a method for finding usability problems in a user interface design by having a small set of evaluators examine an interface and judge its compliance with recognized usability principles. The rest of the chapter is organized as follows: we first look at the definition of e-learning, followed by the concepts of usability, LCD, and heuristics. Subsequently, we introduce a methodology for heuristic usability evaluation (Reeves, Benson, Elliot, Grant, Holschuh, Kim, Kim, Lauber, & Loh, 2002), and then use these heuristics to evaluate an existing e-learning system, GETn2. We offer our recommendations for the system and end with a discussion of the contributions of our chapter.
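To make the mechanics concrete, a minimal sketch of how findings from a small set of evaluators might be recorded and merged. The record fields and data are illustrative, not from the chapter:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One usability problem reported by one evaluator."""
    problem: str      # short description of the problem
    heuristic: str    # usability principle it violates
    severity: int     # e.g. Nielsen's scale, 0 (none) .. 4 (catastrophe)

# Illustrative reports from three evaluators.
reports = {
    "evaluator_1": [Finding("login label unclear", "match with real world", 2)],
    "evaluator_2": [Finding("login label unclear", "match with real world", 3),
                    Finding("no undo on delete", "user control and freedom", 4)],
    "evaluator_3": [Finding("no undo on delete", "user control and freedom", 3)],
}

# Merge duplicate reports; one common convention is to keep the mean severity.
merged: dict[tuple[str, str], list[int]] = {}
for findings in reports.values():
    for f in findings:
        merged.setdefault((f.problem, f.heuristic), []).append(f.severity)

for (problem, heuristic), sevs in merged.items():
    print(f"{problem} [{heuristic}]: mean severity {sum(sevs) / len(sevs):.1f}")
```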


2010 ◽  
Vol 45 ◽  
Author(s):  
Samuel Ssemugabi ◽  
Ruth De Villiers

The Internet, World Wide Web (WWW) and e-learning are contributing to new forms of teaching and learning. Such environments should be designed and evaluated in effective ways, considering both usability and pedagogical issues. The selection of usability evaluation methods (UEMs) is influenced by a method's cost and its effectiveness in addressing users' issues. Usability is vital in e-learning, where students cannot begin to learn unless they can first use the application. Heuristic evaluation (HE) remains the most widely used usability evaluation method. This paper describes meta-evaluation research that investigated an HE of a web-based learning (WBL) application. The evaluations were based on a synthesised framework of criteria related to usability and learning within WBL environments. HE was found to be effective in terms of the number and nature of problems identified in the target application by a complementary team of experienced experts. The findings correspond closely with those of a survey among learners.


SEMINASTIKA ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 99-106
Author(s):  
Gracella Tambunan ◽  
Lit Malem Ginting

Usability is a factor that indicates the success of an interactive product or system, such as a mobile application. The increasing use of smartphones demands more accurate and effective usability evaluation methods for finding usability problems, so that the findings can inform product improvement during development. This study compares the Cognitive Walkthrough method with Heuristic Evaluation in evaluating the usability of the SIRS Del eGov Center mobile application. The evaluation with these two methods was carried out by three evaluators acting as experts. The problems found and the improvements recommended by each method informed an improved design, produced as a high-fidelity prototype. Each prototype was then tested with ten participants using the Usability Testing method and scored with the System Usability Scale (SUS). From the test scores, the Likert-scale percentage and the success rate of each prototype were derived. The results show that, of the two usability evaluation methods, Heuristic Evaluation is the more effective: it found more usability problems and achieved a higher Likert-scale percentage, 66.5% versus 64.75% for Cognitive Walkthrough.
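For reference, the standard SUS computation that this kind of testing step relies on: ten items rated 1-5, odd items contribute (score - 1), even items contribute (5 - score), and the sum is scaled by 2.5 to a 0-100 range. The responses below are made up for illustration:

```python
def sus_score(responses):
    """Standard SUS: 10 items rated 1-5; odd items contribute (score - 1),
    even items contribute (5 - score); the sum is scaled by 2.5 to 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative responses from one participant (not the study's data).
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0

# Averaging scores over all participants yields a percentage-like figure
# comparable to the 66.5 vs 64.75 reported for the two prototypes here.
```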


2018 ◽  
Vol 9 (1) ◽  
pp. 62-81 ◽  
Author(s):  
Jehad Alqurni ◽  
Roobaea Alroobaea ◽  
Mohammed Alqahtani

Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by refining its heuristics and procedures. However, few studies have involved the end-user, and to the best of the authors' knowledge, no HE studies involving end-users alongside non-expert evaluators have been reported. Therefore, the aim of this study is to investigate the impact of end-users on the results obtained by non-expert evaluators within the HE process, and through that, to explore the number of usability problems identified and their severity. This article proposes introducing two sessions within the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and usability testing (UT) methods. The findings show that the end-user has a significant impact on non-expert evaluator results in both sessions. The UES-HE method outperformed all other usability evaluation methods (UEMs) in the number of usability problems identified, and it tended to identify more major, minor, and cosmetic problems than the other methods.
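A minimal sketch of the kind of severity breakdown used to compare methods in studies like this one. The counts are illustrative, not the study's results:

```python
from collections import Counter

# Illustrative severity labels per problem found by each method (made up).
findings = {
    "UES-HE": ["major", "major", "minor", "minor", "minor", "cosmetic", "cosmetic"],
    "URS-HE": ["major", "minor", "minor", "cosmetic"],
    "HE":     ["major", "minor", "cosmetic"],
    "UT":     ["major", "major", "minor"],
}

# Tally the problems each method found, broken down by severity category.
for method, labels in findings.items():
    counts = Counter(labels)
    print(f"{method}: total={len(labels)}, "
          f"major={counts['major']}, minor={counts['minor']}, "
          f"cosmetic={counts['cosmetic']}")
```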


2021 ◽  
Author(s):  
Mehrdad Farzandipour ◽  
Ehsan Nabovati ◽  
Hamidreza Tadayon ◽  
Monireh Sadeqi Jabali

Background: There are inconsistencies in the literature regarding the selection of the most appropriate usability evaluation method. The present study compared two expert-based evaluation methods on a nursing module, the most widely used module of a Hospital Information System (HIS).
Methods: The Heuristic Evaluation (HE) and Cognitive Walkthrough (CW) methods were applied by five independent evaluators to the nursing module of the Shafa HIS. The number, severity, and ratio of the problems recognized by the two methods were compared with respect to the usability attributes.
Results: The HE and CW methods identified 104 and 24 unique problems, respectively. The average severity of the recognized problems was 2.32 for HE and 2.77 for CW, and the two methods differed significantly in both the number and the severity of the usability problems they recognized (P < 0.001). Problems associated with effectiveness, satisfaction, and error were better recognized by HE, whereas CW was more successful in recognizing problems of learnability, efficiency, and memorability.
Conclusion: HE recognized more problems with a lower average severity, while CW recognized fewer problems with a higher average severity. Depending on the evaluation goal, HE can be used to improve effectiveness, increase satisfaction, and decrease the number of errors, whereas CW is recommended for improving the learnability, efficiency, and memorability of a system.
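The abstract does not name the statistical test used; as one plausible sketch, severity ratings from the two methods could be compared with a nonparametric test. The ratings below are randomly generated stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative severity ratings for the problems each method found; the real
# study reported 104 HE problems (mean 2.32) and 24 CW problems (mean 2.77).
rng = np.random.default_rng(0)
he_severities = rng.integers(1, 5, size=104)  # ratings 1-4
cw_severities = rng.integers(2, 5, size=24)   # ratings 2-4

print(f"HE: n={he_severities.size}, mean severity={he_severities.mean():.2f}")
print(f"CW: n={cw_severities.size}, mean severity={cw_severities.mean():.2f}")

# Nonparametric comparison of the two severity distributions.
stat, p = mannwhitneyu(he_severities, cw_severities)
print(f"Mann-Whitney U={stat:.1f}, p={p:.4f}")
```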


Author(s):  
Panagiotis Zaharias

The issue of e-learning quality remains prominent on end users' (the learners') agenda. It is no surprise that many unmotivated adult learners abandon their e-learning experiences prematurely. This is attributed to a great extent to the poor design and usability of e-learning applications. This paper proposes a usability framework that addresses the user as a learner and extends current e-learning usability practice by focusing on the affective dimension of learning, an issue frequently neglected in e-learning development. Motivation to learn, a dominant affective factor related to learning effectiveness, has been similarly neglected. Usability and instructional design constructs, as well as Keller's ARCS Model, are employed within the proposed framework, upon which new usability evaluation methods can be based. The framework integrates web usability and instructional design parameters and proposes motivation to learn as a new usability dimension for designing and evaluating e-learning applications.


2015 ◽  
Vol 7 (3) ◽  
pp. 18-39
Author(s):  
Maria Alexandra Rentroia-Bonito ◽  
Daniel Gonçalves ◽  
Joaquim A Jorge

Technological advances during the last decade have opened up huge possibilities for supporting e-learning. However, there are still concerns regarding the return on investment (ROI) of e-learning, its sustainability within organizational boundaries, and its effectiveness across potential learner groups. Much previous research has concentrated on learners' motivation, satisfaction, and retention. This leaves room for further research to identify alternative and innovative ways to center design on students' concerns when learning online. The authors' work focuses on designing workable courseware usability evaluation methods that differentiate among students, in order to improve learning-support frameworks from both pedagogical and system perspectives. The authors' results suggest that students can be grouped into three clusters based on their motivation to e-learn. Instructors could predict which cluster a new student belongs to, making it possible to anticipate the usability issues that most affect results. This also facilitates pedagogical interventions that could help at-risk learners, contributing to the retention rate.
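A minimal sketch of the cluster-then-predict idea. The paper does not specify its algorithm or features; k-means and the motivation scores below are assumptions chosen for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical motivation-to-e-learn scores per student (e.g. intrinsic
# motivation, self-efficacy, expected effort), each on a 1-5 scale.
rng = np.random.default_rng(42)
students = np.vstack([
    rng.normal(loc=[4.5, 4.2, 4.4], scale=0.3, size=(20, 3)),  # highly motivated
    rng.normal(loc=[3.0, 3.1, 2.9], scale=0.3, size=(20, 3)),  # moderately motivated
    rng.normal(loc=[1.8, 2.0, 1.9], scale=0.3, size=(20, 3)),  # at-risk
])

# Three clusters, matching the grouping suggested by the study's results.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(students)

# Predict the cluster of a new student to anticipate likely usability issues.
new_student = np.array([[2.0, 2.1, 1.7]])
print("cluster of new student:", model.predict(new_student)[0])
```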

