Usability Evaluation of E-Learning Systems

Author(s):  
Shirish C. Srivastava ◽  
Shalini Chandra ◽  
Hwee Ming Lam

Usability evaluation, which refers to a series of activities designed to measure the effectiveness of a system as a whole, is an important step in determining the acceptance of a system by its users. Usability evaluation is becoming more important as both user groups and tasks grow in size and diversity. Users are increasingly well informed and, consequently, have higher expectations of systems. Moreover, the "system interface" has become a commodity and, hence, user acceptance plays a major role in the success of a system. Various usability evaluation methods are currently in use, such as cognitive walkthrough, think-aloud, claims analysis, and heuristic evaluation. For this study, we chose heuristic evaluation because it is relatively inexpensive, logistically uncomplicated, and often used as a discount usability-engineering tool (Nielsen, 1994). Heuristic evaluation is a method for finding usability problems in a user interface design by having a small set of evaluators examine the interface and judge its compliance with recognized usability principles. The rest of the chapter is organized as follows: we first look at the definition of e-learning, followed by the concepts of usability, LCD, and heuristics. Subsequently, we introduce a methodology for heuristic usability evaluation (Reeves, Benson, Elliot, Grant, Holschuh, Kim, Kim, Lauber, & Loh, 2002), and then use these heuristics to evaluate an existing e-learning system, GETn2. We offer our recommendations for the system and end with a discussion of the contributions of our chapter.

Author(s):  
Christofer Ramos ◽  
Flávio Anthero Nunes Vianna dos Santos ◽  
Monique Vandresen

Heuristic evaluation stands out among the usability evaluation methods for its benefits in terms of time and cost. Nevertheless, generic heuristic sets require improvement when applied to specific interfaces, such as those of m-learning applications, which have gained considerable prominence in the current technological context. Given the lack of studies aimed at interfaces of this sort, the authors propose, through a systematic methodology, a comparative study between one heuristic set specific to the assessment of e-learning interfaces and another specific to mobile interfaces. The identified usability problems were analyzed in terms of coverage, distribution, redundancy, context, and severity, making it possible to understand how well each set covers m-learning issues. Among the findings, the e-learning heuristic set detected a larger number of usability problems not found by the mobile set.


Author(s):  
Emily Gonzalez-Holland ◽  
Daphne Whitmer ◽  
Larry Moralez ◽  
Mustapha Mouloua

Heuristics are commonly employed throughout various stages of the design process to evaluate the usability of interfaces. Heuristic Evaluation (HE) provides researchers with a cost-effective and practical means of assessing designs. In this article, we outline the development and application of one of the most frequently cited sets of heuristic evaluation tools, Nielsen's (1994) 10 usability heuristics. Nielsen's heuristics have not only been applied to various modalities of interface design but have also been compared to other usability evaluation methods. Moreover, in many cases they have been modified so that they can be applied in an ever-changing socio-technical environment. In reviewing these developments, we propose theoretical and practical implications of these heuristic methods and present an outlook for the future. We argue that, with the rapid expansion and growth of technology in the last 20 years, Nielsen's 10 usability heuristics may need an update to remain consistent with modern usability problems.
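As a concrete illustration of how an evaluator might work with Nielsen's (1994) ten heuristics, the sketch below encodes them as a checklist and files free-text findings under each one. The heuristic names follow Nielsen's published list; the findings and the helper function are invented for illustration and are not part of any of the studies above.

```python
# Nielsen's (1994) ten usability heuristics, encoded as a checklist
# an evaluator can walk through while inspecting an interface.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def violated(findings, heuristic):
    """Return the free-text notes filed under one heuristic."""
    return [note for h, note in findings if h == heuristic]

# Hypothetical findings from one evaluator: (heuristic, note) pairs.
findings = [
    (NIELSEN_HEURISTICS[0], "No progress indicator while a quiz is saved"),
    (NIELSEN_HEURISTICS[4], "Submit button active before required fields are filled"),
    (NIELSEN_HEURISTICS[0], "Upload gives no success/failure feedback"),
]

print(violated(findings, NIELSEN_HEURISTICS[0]))
```

In a real evaluation, each of the small set of evaluators produces such a list independently; the lists are then merged into unique problems and rated for severity.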


2020 ◽  
Vol 4 (3) ◽  
pp. 103
Author(s):  
Siti Vika Ngainul Fitri ◽  
Oktalia Juwita ◽  
Tio Dharmawan

Banyuwangi Regency has a new innovation called "Lahir Procot Pulang Bawa Akta", realized in the form of an online deed (birth certificate) website. Every information technology has an interface that links the user and the technology itself. Interface design is shaped by needs, so different information technologies have different interface designs according to the needs of their users. The user interface aims to make it easier for users to operate information technology, so that they feel comfortable using the application or technology. Heuristic Evaluation is one of the usability evaluation methods that can be used to determine the extent to which users can use a system to achieve specified goals with effectiveness, efficiency, and satisfaction. This research focuses on applying Heuristic Evaluation to the user interface design aspects of the application's usability, using observation, interviews, and questionnaires administered to users.


2010 ◽  
Vol 45 ◽  
Author(s):  
Samuel Ssemugabi ◽  
Ruth De Villiers

The Internet, World Wide Web (WWW), and e-learning are contributing to new forms of teaching and learning. Such environments should be designed and evaluated in effective ways, considering both usability and pedagogical issues. The selection of usability evaluation methods (UEMs) is influenced by the cost of a method and its effectiveness in addressing users' issues. Usability is vital in e-learning, where students cannot begin to learn unless they can first use the application. Heuristic evaluation (HE) remains the most widely used usability evaluation method. This paper describes meta-evaluation research that investigated an HE of a web-based learning (WBL) application. The evaluations were based on a synthesised framework of criteria related to usability and learning within WBL environments. HE was found to be effective in terms of the number and nature of problems identified in the target application by a complementary team of experienced experts. The findings correspond closely with those of a survey among learners.


SEMINASTIKA ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 99-106
Author(s):  
Gracella Tambunan ◽  
Lit Malem Ginting

Usability is a factor that indicates the success of an interactive product or system, such as a mobile application. The increasing use of smartphones demands more accurate and effective usability evaluation methods for finding usability problems, so that they can inform product improvement during the development process. This study compares the Cognitive Walkthrough method with Heuristic Evaluation in evaluating the usability of the SIRS Del eGov Center mobile application. Evaluation with these two methods was carried out by three evaluators acting as experts. The problems found and the improvements recommended by each method were used to produce a high-fidelity improvement prototype. Each prototype was then tested with ten participants using the Usability Testing method, generating scores via the System Usability Scale (SUS). From the test scores, the Likert-scale percentage and the success rate of each prototype were determined. The results show that, of the two usability evaluation methods, Heuristic Evaluation is the more effective: it found more usability problems and achieved a higher Likert-scale percentage, 66.5% versus 64.75% for Cognitive Walkthrough.
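The SUS scoring behind numbers like these is standard and mechanical: each of the ten items is answered on a 1-5 Likert scale, odd-numbered (positively worded) items contribute (score − 1), even-numbered (negatively worded) items contribute (5 − score), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with responses invented for illustration (not data from the study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) are positively worded and contribute
    (score - 1); even-numbered items are negatively worded and contribute
    (5 - score). The sum of contributions is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical responses from one participant:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # → 80.0
```

Per-participant scores are then averaged across the panel, which is how a single percentage-style figure per prototype arises.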


Author(s):  
Terence S. Andre ◽  
H. Rex Hartson ◽  
Robert C. Williges

Despite the increased focus on usability and on the processes and methods used to increase usability, a substantial amount of software is unusable and poorly designed. Much of this is attributable to the lack of cost-effective usability evaluation tools that provide an interaction-based framework for identifying problems. We developed the user action framework and a corresponding evaluation tool, the usability problem inspector (UPI), to help organize usability concepts and issues into a knowledge base. We conducted a comprehensive comparison study to determine if our theory-based framework and tool could be effectively used to find important usability problems in an interface design, relative to two other established inspection methods (heuristic evaluation and cognitive walkthrough). Results showed that the UPI scored higher than heuristic evaluation in terms of thoroughness, validity, and effectiveness and was consistent with cognitive walkthrough for these same measures. We also discuss other potential advantages of the UPI over heuristic evaluation and cognitive walkthrough when applied in practice. Potential applications of this work include a cost-effective alternative or supplement to lab-based formative usability evaluation during any stage of development.


2018 ◽  
Vol 9 (1) ◽  
pp. 62-81 ◽  
Author(s):  
Jehad Alqurni ◽  
Roobaea Alroobaea ◽  
Mohammed Alqahtani

Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by developing its heuristics and procedures. However, few studies have involved the end-user, and to the best of the authors' knowledge, no HE studies involving end-users alongside non-expert evaluators have been reported. Therefore, the aim of this study is to investigate the impact of end-users on the results obtained by non-expert evaluators within the HE process and, through that, to explore the number of usability problems identified and their severity. This article proposes introducing two sessions into the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and usability testing (UT) methods. The findings show that end-users have a significant impact on non-expert evaluators' results in both sessions. The UES-HE method outperformed all other usability evaluation methods (UEMs) in the number of usability problems identified, and it tended to identify more major, minor, and cosmetic problems than the other methods.


Author(s):  
Olawande Daramola ◽  
Olufunke Oladipupo ◽  
Ibukun Afolabi ◽  
Ademola Olopade

Many African academic institutions have adopted e-learning systems, since they enable students to learn at their own pace and time, without being restricted to the classroom. However, evidence of usability evaluation of e-learning systems in Africa is mostly lacking in the literature. This paper reports an experimental heuristic evaluation of the e-learning system of a Nigerian university. The objective is to demonstrate the application of expert-based usability evaluation techniques, such as heuristic evaluation, for assessing the attributes of existing e-learning systems. The study revealed that while the e-learning system has strong credentials in terms of support for Web 2.0 activities, good learning content, and useful e-learning features, improvements are needed in other areas such as interactive learning, assessment and feedback, and quality of learning content. The study adds to the body of extant knowledge in the area of usability evaluation of e-learning systems in African institutions.


Author(s):  
Inas Sofiyah Junus ◽  
Harry B. Santoso ◽  
R. Yugo K. Isal ◽  
Andika Yudha Utomo

<p>Student Centered e-Learning Environment (SCeLE) plays a substantial role in supporting learning activities at the Faculty of Computer Science, Universitas Indonesia (Fasilkom UI). Although it has been used for about 10 years, the usability of SCeLE as an e-learning system has not been evaluated. Therefore, it is not yet known how well SCeLE Fasilkom UI performs as a learning support system or what would make it an ideal one. Motivated by these conditions, the researchers conducted a usability evaluation in order to propose a set of recommendations for improving SCeLE's usability, reflecting the experiences of both students and lecturers as users.</p><p>In the present research, usability testing of SCeLE targeted learning activities undertaken by undergraduate students at Fasilkom UI in a blended online learning mode. Data were collected by distributing a questionnaire to students and interviewing several lecturers and students. The collected data were then analyzed and interpreted to obtain usability problems and solution alternatives. The quantitative data were analyzed using measures of central tendency, while the qualitative data were analyzed using theme-based content analysis. Data interpretation was performed by determining how to handle each kind of data based on its theme and classifying each identified usability problem by its severity rating.</p><p>The recommendations for solving the usability problems were based on solution alternatives derived from the analyzed data and supported by a literature study. The research produced seven main recommendations and one extra recommendation. The main recommendations address the identified usability problems, while the extra recommendation is not directly related to any identified problem but is considered to have potential to improve SCeLE's usability.</p>


2021 ◽  
Author(s):  
Mehrdad Farzandipour ◽  
Ehsan Nabovati ◽  
Hamidreza Tadayon ◽  
Monireh Sadeqi Jabali

Abstract Background There are inconsistencies regarding the selection of the most appropriate usability evaluation method. The present study aimed to compare two expert-based evaluation methods on a nursing module, the most widely used module of a Hospital Information System (HIS). Methods The Heuristic Evaluation (HE) and Cognitive Walkthrough (CW) methods were applied by five independent evaluators to the nursing module of the Shafa HIS. The number, severity, and distribution of the recognized problems across usability attributes were compared between the two methods. Results The HE and CW methods identified 104 and 24 unique problems, respectively. The average severity of the recognized problems was 2.32 for HE and 2.77 for CW, and the difference between the number and severity of the usability problems recognized by the two methods was significant (P < 0.001). Problems associated with effectiveness, satisfaction, and errors were better recognized by HE, whereas CW was more successful in recognizing problems of learnability, efficiency, and memorability. Conclusion The HE method recognized more problems with a lower average severity, while CW recognized fewer problems with a higher average severity. Depending on the evaluation goal, HE can be used to improve effectiveness, increase satisfaction, and decrease the number of errors, whereas CW is recommended for improving the learnability, efficiency, and memorability of a system.
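Comparisons like the one above reduce to simple aggregation over each method's problem list: a count, a mean severity, and a per-attribute breakdown. The sketch below shows that aggregation; the problem lists are invented for illustration and are not the study's data (the study itself reports 104 HE problems at mean severity 2.32 and 24 CW problems at mean severity 2.77).

```python
from collections import Counter
from statistics import mean

# Hypothetical problem lists: (usability attribute, severity 0-4) pairs.
he_problems = [("effectiveness", 2), ("satisfaction", 1), ("error", 3),
               ("learnability", 2), ("effectiveness", 3)]
cw_problems = [("learnability", 3), ("efficiency", 3), ("memorability", 2)]

def summarize(problems):
    """Count, mean severity, and per-attribute tally for one method's list."""
    return {"count": len(problems),
            "mean_severity": round(mean(s for _, s in problems), 2),
            "by_attribute": Counter(a for a, _ in problems)}

print(summarize(he_problems))
print(summarize(cw_problems))
```

The per-attribute tallies are what allow statements such as "effectiveness problems were better recognized by HE, learnability problems by CW" to be made from the raw findings.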
