Usability in the Context of e-Learning

2009 ◽  
Vol 5 (4) ◽  
pp. 37-59 ◽  
Author(s):  
Panagiotis Zaharias

The issue of e-learning quality remains prominent on end users' (the learners') agenda. It is no surprise that many unmotivated adult learners prematurely abandon their e-learning experiences. This is attributed to a great extent to the poor design and usability of e-learning applications. This paper proposes a usability framework that addresses the user as a learner and extends current e-learning usability practice by focusing on the affective dimension of learning, an issue frequently neglected in e-learning development. Motivation to learn, a dominant affective factor related to learning effectiveness, has been similarly neglected. Usability and instructional design constructs, as well as Keller's ARCS Model, are employed within the proposed framework, upon which new usability evaluation methods can be based. The framework integrates web usability and instructional design parameters and proposes motivation to learn as a new usability dimension for designing and evaluating e-learning applications.
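A minimal sketch of how motivation to learn might be operationalized as a measurable usability dimension, assuming a 1-5 Likert questionnaire whose items map onto Keller's four ARCS dimensions (Attention, Relevance, Confidence, Satisfaction); the item names and scoring scheme are illustrative assumptions, not taken from the paper.

```python
from statistics import mean

# Hypothetical mapping of questionnaire items to Keller's ARCS dimensions.
ARCS_ITEMS = {
    "attention":    ["a1", "a2", "a3"],
    "relevance":    ["r1", "r2"],
    "confidence":   ["c1", "c2"],
    "satisfaction": ["s1", "s2"],
}

def motivation_to_learn(responses: dict[str, int]) -> dict[str, float]:
    """Average 1-5 Likert responses per ARCS dimension, plus an overall score."""
    scores = {dim: mean(responses[item] for item in items)
              for dim, items in ARCS_ITEMS.items()}
    scores["overall"] = mean(scores.values())
    return scores

# Example: one learner's responses to the hypothetical items.
print(motivation_to_learn(
    {"a1": 4, "a2": 5, "a3": 3, "r1": 4, "r2": 4, "c1": 2, "c2": 3, "s1": 4, "s2": 5}
))
```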


2015 ◽  
Vol 7 (3) ◽  
pp. 18-39
Author(s):  
Maria Alexandra Rentroia-Bonito ◽  
Daniel Gonçalves ◽  
Joaquim A Jorge

Technological advances during the last decade have opened huge possibilities for supporting e-learning. However, concerns remain regarding the return on investment (ROI) of e-learning, its sustainability within organizational boundaries, and its effectiveness across potential learner groups. Much previous research has concentrated on learners' motivation, satisfaction, and retention. This leaves room for further research to identify alternative and innovative ways to center design on students' concerns when learning online. The authors' work focuses on designing workable courseware usability evaluation methods that differentiate students, in order to improve learning-support frameworks from both pedagogical and system perspectives. Their results suggest that students can be grouped into three clusters based on their motivation to e-learn. Instructors could predict which cluster a new student belongs to, making it possible to anticipate the usability issues that most affect results. This also facilitates pedagogical interventions that could help at-risk learners, contributing to the retention rate.
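A minimal sketch of the cluster-then-predict idea described above, using scikit-learn; the three-cluster k-means, the synthetic data, and the feature semantics are assumptions for illustration, not the authors' actual instrument or model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical motivation-to-e-learn scores per student (columns might be,
# e.g., intrinsic motivation, self-efficacy, perceived value), on a 1-5 scale.
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(60, 3))

# Group students into three clusters, as the authors' results suggest.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Train a classifier on the cluster labels so a *new* student's cluster
# can be predicted from questionnaire scores alone.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, kmeans.labels_)

new_student = [[4.2, 2.1, 3.7]]
print("predicted cluster:", clf.predict(new_student)[0])
```

Once the cluster is known, an instructor could look up the usability issues and interventions previously associated with that cluster.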


Author(s):  
Xin C. Wang ◽  
Borchuluun Yadamsuren ◽  
Anindita Paul ◽  
DeeAnna Adkins ◽  
George Laur ◽  
...  

Online education is a popular paradigm for promoting continuing education for adult learners. However, only a handful of studies have addressed usability issues in the online education environment, and few have integrated multifaceted usability evaluation into the lifecycle of developing such an environment. This paper shows how usability evaluation was integrated into the development process of an online education center. Multifaceted usability evaluation methods were applied at four different stages of the MU Extension web portal's development: heuristic evaluation, focus group interviews and surveys, think-aloud interviewing, and multiple-user simultaneous testing. The results of the usability studies at each stage enhanced the development team's understanding of users' difficulties, needs, and wants, which served to guide the web developers' subsequent decisions.


2010 ◽  
Vol 45 ◽  
Author(s):  
Samuel Ssemugabi ◽  
Ruth De Villiers

The Internet, the World Wide Web (WWW), and e-learning are contributing to new forms of teaching and learning. Such environments should be designed and evaluated in effective ways, considering both usability and pedagogical issues. The selection of usability evaluation methods (UEMs) is influenced by the cost of a method and its effectiveness in addressing users' issues. Usability is vital in e-learning, where students cannot begin to learn unless they can first use the application. Heuristic evaluation (HE) remains the most widely used usability evaluation method. This paper describes meta-evaluation research that investigated an HE of a web-based learning (WBL) application. The evaluations were based on a synthesized framework of criteria related to usability and learning within WBL environments. HE was found to be effective in terms of the number and nature of problems identified in the target application by a complementary team of experienced experts. The findings correspond closely with those of a survey among learners.
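One common way such a meta-evaluation can be quantified (a sketch of the standard thoroughness and validity ratios, not the authors' actual procedure) is to treat learner-reported problems as the reference set and compare the experts' HE findings against it; the problem IDs below are hypothetical.

```python
# Hypothetical problem IDs: experts' HE findings vs. problems learners reported.
expert_problems  = {"P01", "P02", "P03", "P05", "P08", "P09"}
learner_problems = {"P01", "P02", "P04", "P05", "P08"}

real = expert_problems & learner_problems          # HE findings confirmed by learners
thoroughness = len(real) / len(learner_problems)   # share of real problems HE found
validity     = len(real) / len(expert_problems)    # share of HE findings that are real

print(f"thoroughness = {thoroughness:.2f}, validity = {validity:.2f}")
```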


Author(s):  
Shirish C. Srivastava ◽  
Shalini Chandra ◽  
Hwee Ming Lam

Usability evaluation, which refers to a series of activities designed to measure the effectiveness of a system as a whole, is an important step in determining the acceptance of a system by its users. It is becoming more important as both user groups and tasks grow in size and diversity. Users are increasingly well informed and, consequently, have higher expectations of systems. Moreover, the system interface has become a commodity, and hence user acceptance plays a major role in the success of a system. Various usability evaluation methods are currently in use, such as cognitive walkthrough, think-aloud, claims analysis, and heuristic evaluation. For this study, however, we chose heuristic evaluation because it is relatively inexpensive, logistically uncomplicated, and often used as a discount usability-engineering tool (Nielsen, 1994). Heuristic evaluation is a method for finding usability problems in a user interface design by having a small set of evaluators examine an interface and judge its compliance with recognized usability principles. The rest of the chapter is organized as follows: we first look at the definition of e-learning, followed by the concepts of usability, LCD, and heuristics. Subsequently, we introduce a methodology for heuristic usability evaluation (Reeves, Benson, Elliot, Grant, Holschuh, Kim, Kim, Lauber, & Loh, 2002) and use these heuristics to evaluate an existing e-learning system, GETn2. We offer our recommendations for the system and end with a discussion of the contributions of our chapter.
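The "discount" economics of heuristic evaluation rest on Nielsen's well-known model that a few evaluators uncover most problems: the expected proportion found by n independent evaluators is 1 - (1 - λ)^n, where λ is the probability that a single evaluator detects a given problem (roughly 0.31 in Nielsen's published data). A quick sketch of that curve:

```python
def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Expected share of usability problems found by n independent evaluators
    (Nielsen's model; lam is a single evaluator's detection rate)."""
    return 1 - (1 - lam) ** n_evaluators

for n in (1, 3, 5, 10):
    print(f"{n:2d} evaluators -> {proportion_found(n):.0%} of problems")
```

With λ = 0.31, three evaluators already find about two-thirds of the problems and five find about 84%, which is why small evaluator teams are considered cost-effective.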


Author(s):  
Christofer Ramos ◽  
Flávio Anthero Nunes Vianna dos Santos ◽  
Monique Vandresen

Heuristic evaluation stands out among usability evaluation methods for its benefits in time and cost. Nevertheless, generic heuristic sets require adaptation for specific interfaces, such as those of m-learning applications, which have gained considerable prominence in the current technological context. Given the lack of studies aimed at interfaces of this sort, the authors propose, through a systematic methodology, a comparative study between a heuristic set specific to evaluating e-learning interfaces and another specific to mobile interfaces. The identified usability problems were analyzed in terms of coverage, distribution, redundancy, context, and severity, so that it was possible to understand the efficiency of each set in covering m-learning issues. Among the findings, the e-learning heuristic set detected a larger number of usability problems that were not found by the mobile set.
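A sketch of how the coverage and redundancy comparison between the two heuristic sets might be computed with simple set operations; the problem labels are hypothetical placeholders, not the study's data.

```python
# Hypothetical m-learning usability problems flagged under each heuristic set.
elearning_set = {"nav-depth", "feedback", "font-size", "offline", "progress"}
mobile_set    = {"font-size", "touch-target", "offline"}

only_elearning = elearning_set - mobile_set   # unique to the e-learning set
only_mobile    = mobile_set - elearning_set   # unique to the mobile set
redundant      = elearning_set & mobile_set   # flagged by both sets

all_problems = elearning_set | mobile_set
print("e-learning coverage:", len(elearning_set) / len(all_problems))
print("mobile coverage:    ", len(mobile_set) / len(all_problems))
print("found only by e-learning set:", sorted(only_elearning))
```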


Work ◽  
2012 ◽  
Vol 41 ◽  
pp. 1038-1044 ◽  
Author(s):  
Luciana Lopes Freire ◽  
Pedro Miguel Arezes ◽  
José Creissac Campos

2018 ◽  
Vol 9 (1) ◽  
pp. 62-81 ◽  
Author(s):  
Jehad Alqurni ◽  
Roobaea Alroobaea ◽  
Mohammed Alqahtani

Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by refining its heuristics and procedures. However, few studies have involved the end-user, and to the best of the authors' knowledge, no HE studies involving end-users alongside non-expert evaluators have been reported. The aim of this study is therefore to investigate the impact of end-users on the results obtained by non-expert evaluators within the HE process, and through that to explore the number of usability problems identified and their severity. This article proposes introducing two sessions within the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and usability testing (UT) methods. The findings show that the end-user has a significant impact on non-expert evaluators' results in both sessions. The UES-HE method outperformed all other usability evaluation methods (UEMs) in the number of usability problems identified, and it tended to identify more major, minor, and cosmetic problems than the other methods.
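A sketch of how such a per-method comparison might be tabulated, counting identified problems by severity for each UEM; the records below are invented placeholders, not the study's findings.

```python
from collections import Counter

# Hypothetical (method, severity) records from the four evaluations compared.
findings = [
    ("UES-HE", "major"), ("UES-HE", "minor"), ("UES-HE", "minor"),
    ("UES-HE", "cosmetic"), ("HE", "major"), ("HE", "cosmetic"),
    ("URS-HE", "minor"), ("UT", "major"), ("UT", "minor"),
]

by_method = Counter(method for method, _ in findings)   # total problems per UEM
by_cell = Counter(findings)                             # per (method, severity)

for method in ("HE", "UES-HE", "URS-HE", "UT"):
    row = {sev: by_cell[(method, sev)] for sev in ("major", "minor", "cosmetic")}
    print(f"{method:7s} total={by_method[method]} {row}")
```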


Author(s):  
Merle Conyer

Usability evaluation is the analysis of the design of a product or system in order to evaluate the match between users and a product or system within a particular context. Usability evaluation is a dynamic process throughout the life cycle of a product or system. Conducting evaluation both with and without end-users significantly improves the chances of success. Six usability evaluation methods and six data collection techniques are discussed, including advantages and limitations of each. Recommendations are made regarding the selection of particular evaluation methods and recording techniques to evaluate different elements of usability.

