Iterative Usability Evaluation for an Online Educational Web Portal

Author(s):  
Xin C. Wang ◽  
Borchuluun Yadamsuren ◽  
Anindita Paul ◽  
DeeAnna Adkins ◽  
George Laur ◽  
...  

Online education is a popular paradigm for promoting continuing education for adult learners. However, only a handful of studies have addressed usability issues in the online education environment. In particular, few studies have integrated multifaceted usability evaluation into the lifecycle of developing such an environment. This paper shows how usability evaluation was integrated into the development process of an online education center. Multifaceted usability evaluation methods were applied at four stages of the MU Extension web portal’s development: heuristic evaluation, focus group interviews and a survey, think-aloud interviewing, and multiple-user simultaneous testing. The results of the usability studies at each stage deepened the development team’s understanding of users’ difficulties, needs, and wants, which guided the web developers’ subsequent decisions.


2018 ◽  
Vol 7 (2.28) ◽  
pp. 10 ◽  
Author(s):  
Freddy Paz ◽  
Freddy A. Paz ◽  
José Antonio Pow-Sang ◽  
César Collazos

Heuristic evaluation is one of the most widely used techniques for assessing the usability of a software product. In this research, we performed a comprehensive analysis of recent studies that report the use of this method in the context of a software development process. The purpose was to identify the specific way in which each author performs this usability evaluation method, in order to propose a formal protocol. After an in-depth examination of these studies, we determined that there are several differences in the way this technique is conducted in the literature. There is no agreement about the number of inspectors that should participate, the usability principles that should be used, the profile of the specialists who should be part of the assessment team, or the evaluation process that should be followed. This work highlights the available settings and proposes a detailed procedure for performing a heuristic evaluation in the domain of software products.
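
The disagreement over how many inspectors to involve can be reasoned about with Nielsen and Landauer's well-known problem-discovery model, in which the expected share of problems found by i evaluators is 1 - (1 - λ)^i. The sketch below is illustrative only; the detection rate λ = 0.31 is the commonly cited average from Nielsen's work, not a figure drawn from the studies surveyed here.

```python
# Illustrative sketch of Nielsen & Landauer's problem-discovery model:
# the expected share of usability problems found by i independent evaluators
# is 1 - (1 - lambda)^i, where lambda is the per-evaluator detection rate.
# lambda = 0.31 is the often-cited average, not a value from this survey.

def proportion_found(evaluators: int, detection_rate: float = 0.31) -> float:
    """Expected share of existing usability problems uncovered."""
    return 1.0 - (1.0 - detection_rate) ** evaluators

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} evaluator(s): {proportion_found(n):.0%} of problems expected")
```

Under this model, five evaluators are expected to uncover about 84% of the problems, which is one common rationale for small inspection teams.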


2009 ◽  
Vol 5 (4) ◽  
pp. 37-59 ◽  
Author(s):  
Panagiotis Zaharias

The issue of e-learning quality remains prominent on end users’ (the learners’) agenda. It is no surprise that many unmotivated adult learners abandon their e-learning experiences prematurely. This is attributed to a great extent to the poor design and usability of e-learning applications. This paper proposes a usability framework that addresses the user as a learner and extends current e-learning usability practice by focusing on the affective dimension of learning, an issue frequently neglected in e-learning development. Motivation to learn, a dominant affective factor related to learning effectiveness, has been similarly neglected. Usability and instructional design constructs, as well as Keller’s ARCS Model, are employed within the framework proposed in this work, upon which new usability evaluation methods can be based. The framework integrates web usability and instructional design parameters and proposes motivation to learn as a new type of usability dimension for designing and evaluating e-learning applications.
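
As a purely illustrative reading of the framework, motivation to learn could be instrumented alongside conventional usability criteria by grouping Likert-type questionnaire responses under Keller's four ARCS categories (Attention, Relevance, Confidence, Satisfaction). The sketch below assumes a 1-5 scale and hypothetical response data; it is not the evaluation instrument proposed in the paper.

```python
# Hypothetical sketch: treating "motivation to learn" as an extra evaluation
# dimension by averaging 1-5 Likert responses grouped under Keller's ARCS
# categories. The categories are Keller's; the scoring scheme and sample data
# are assumptions, not the questionnaire proposed in the paper.
from statistics import mean

def score_motivation(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the Likert responses for each ARCS category present."""
    return {category: mean(values) for category, values in responses.items()}

if __name__ == "__main__":
    sample = {
        "Attention": [4, 5, 4],
        "Relevance": [3, 4, 4],
        "Confidence": [4, 4, 5],
        "Satisfaction": [5, 4, 4],
    }
    print(score_motivation(sample))
```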


2020 ◽  
Vol 4 (3) ◽  
pp. 103
Author(s):  
Siti Vika Ngainul Fitri ◽  
Oktalia Juwita ◽  
Tio Dharmawan

Banyuwangi Regency has a new innovation called "Lahir Procot Pulang Bawa Akta" (roughly, "a newborn goes home with its birth certificate"), realized in the form of an online deed website. Every information technology has an interface that links the user and the technology itself. Interface design is shaped by needs, so different information technologies have different interface designs according to the needs of their users. A user interface aims to make the technology easier to operate so that users feel comfortable using the application. Heuristic evaluation is one of the usability evaluation methods that can be used to determine the extent to which users of a system can achieve specified goals with effectiveness, efficiency, and satisfaction. This research focuses on the use of heuristic evaluation of the user interface design aspects of the application's usability, through observation, interviews, and questionnaires administered to users.


2010 ◽  
Vol 45 ◽  
Author(s):  
Samuel Ssemugabi ◽  
Ruth De Villiers

The Internet, World Wide Web (WWW), and e-learning are contributing to new forms of teaching and learning. Such environments should be designed and evaluated in effective ways, considering both usability and pedagogical issues. The selection of usability evaluation methods (UEMs) is influenced by the cost of a method and its effectiveness in addressing users’ issues. Usability is vital in e-learning, where students cannot begin to learn unless they can first use the application. Heuristic evaluation (HE) remains the most widely used usability evaluation method. This paper describes meta-evaluation research that investigated an HE of a web-based learning (WBL) application. The evaluations were based on a synthesised framework of criteria related to usability and learning within WBL environments. HE was found to be effective in terms of the number and nature of problems identified in the target application by a complementary team of experienced experts. The findings correspond closely with those of a survey among learners.


SEMINASTIKA ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 99-106
Author(s):  
Gracella Tambunan ◽  
Lit Malem Ginting

Usability is a factor that indicates the success of an interactive product or system, such as a mobile application. The increasing use of smartphones demands more accurate and effective usability evaluation methods for finding usability problems, so that the results can be used for product improvement during the development process. This study compares the Cognitive Walkthrough method with Heuristic Evaluation in evaluating the usability of the SIRS Del eGov Center mobile application. Evaluation with these two methods was carried out by three evaluators acting as experts. The problems found and the improvements recommended by each method were used to produce a high-fidelity improvement prototype. Each prototype was then tested with ten participants using the Usability Testing method, generating scores through the System Usability Scale (SUS). From the test scores, the Likert-scale percentage and the success rate of each prototype were determined. The results show that, of the two usability evaluation methods, Heuristic Evaluation is the more effective: it found more usability problems and achieved a higher Likert-scale percentage, 66.5%, compared with 64.75% for Cognitive Walkthrough.
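
For reference, the standard System Usability Scale score is computed from ten responses on a 1-5 scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. The sketch below shows only that standard computation; the Likert-scale percentages reported in the study are the authors' own derivation, and the sample responses are hypothetical.

```python
# Standard System Usability Scale (SUS) scoring: ten 1-5 responses,
# odd-numbered items contribute (response - 1), even-numbered items
# contribute (5 - response), and the sum is scaled by 2.5 to 0-100.

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, 4, ... are items 1, 3, 5, ...
        for i, r in enumerate(responses)
    )
    return total * 2.5

if __name__ == "__main__":
    # Hypothetical participant; these are example responses, not study data.
    print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```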


Author(s):  
Shirish C. Srivastava ◽  
Shalini Chandra ◽  
Hwee Ming Lam

Usability evaluation, which refers to a series of activities designed to measure the effectiveness of a system as a whole, is an important step in determining users’ acceptance of a system. Usability evaluation is becoming more important as both user groups and tasks increase in size and diversity. Users are increasingly well informed and, consequently, have higher expectations of the systems they use. Moreover, the “system interface” has become a commodity and, hence, user acceptance plays a major role in the success of a system. Currently, various usability evaluation methods are in vogue, such as cognitive walkthrough, think-aloud, claims analysis, and heuristic evaluation. For this study, however, we chose heuristic evaluation because it is relatively inexpensive, logistically uncomplicated, and often used as a discount usability-engineering tool (Nielsen, 1994). Heuristic evaluation is a method for finding usability problems in a user interface design by having a small set of evaluators examine the interface and judge its compliance with recognized usability principles. The rest of the chapter is organized as follows: we first look at the definition of e-learning, followed by the concepts of usability, LCD, and heuristics. Subsequently, we introduce a methodology for heuristic usability evaluation (Reeves, Benson, Elliot, Grant, Holschuh, Kim, Kim, Lauber, & Loh, 2002) and then use these heuristics to evaluate an existing e-learning system, GETn2. We offer our recommendations for the system and end with a discussion of the contributions of our chapter.
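
To illustrate the mechanics just described (and not the Reeves et al. heuristic set applied in the chapter, nor any finding about GETn2), a heuristic evaluation typically yields, per evaluator, a list of problems tagged with the violated principle and a severity rating. The sketch below assumes Nielsen's 0-4 severity scale and hypothetical problem descriptions, and shows one minimal way to merge findings across evaluators.

```python
# Minimal sketch of merging heuristic-evaluation findings from several
# evaluators. Severity uses Nielsen's 0-4 scale; the heuristic names and
# problem descriptions are hypothetical examples, not findings from GETn2.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    problem: str        # short description of the usability problem
    heuristic: str      # which usability principle it violates
    severity: int       # 0 = not a problem ... 4 = usability catastrophe

def merge_findings(per_evaluator: list[list[Finding]]) -> dict[str, dict]:
    """Group duplicate problems and keep the highest severity assigned."""
    merged: dict[str, dict] = defaultdict(
        lambda: {"severity": 0, "count": 0, "heuristic": ""}
    )
    for findings in per_evaluator:
        for f in findings:
            entry = merged[f.problem]
            entry["severity"] = max(entry["severity"], f.severity)
            entry["count"] += 1
            entry["heuristic"] = f.heuristic
    return dict(merged)

if __name__ == "__main__":
    evaluator_a = [Finding("No feedback after submitting a quiz",
                           "Visibility of system status", 3)]
    evaluator_b = [Finding("No feedback after submitting a quiz",
                           "Visibility of system status", 4),
                   Finding("Course navigation labels are inconsistent",
                           "Consistency and standards", 2)]
    for problem, info in merge_findings([evaluator_a, evaluator_b]).items():
        print(f"severity {info['severity']} ({info['count']} evaluator(s)): {problem}")
```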


Author(s):  
Christofer Ramos ◽  
Flávio Anthero Nunes Vianna dos Santos ◽  
Monique Vandresen

Heuristic evaluation stands out among usability evaluation methods for its benefits in terms of time and cost. Nevertheless, generic heuristic sets require adaptation for specific interfaces, such as those of m-learning applications, which have gained considerable prominence in the current technological context. Given the lack of studies aimed at interfaces of this kind, the authors propose, through a systematic methodology, a comparative study between a heuristic set specific to the assessment of e-learning interfaces and another specific to mobile interfaces. The identified usability problems were analysed in terms of coverage, distribution, redundancy, context, and severity, making it possible to understand how effectively each set covers m-learning issues. Among the findings, the e-learning heuristic set detected a larger number of usability problems not found by the mobile set.
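
As a rough sketch of the kind of comparison described, assume each heuristic set yields a set of problem identifiers after evaluation; coverage can then be read as the number of problems a set finds and redundancy as the problems found by both sets. The identifiers below are hypothetical, and the paper's additional aspects (distribution, context, severity) are omitted.

```python
# Simplified sketch of comparing the problems found with two heuristic sets.
# Problem identifiers are hypothetical; the paper's full comparison also
# weighs distribution, context, and severity, which this sketch omits.

def compare_sets(e_learning: set[str], mobile: set[str]) -> dict[str, int]:
    return {
        "found_by_e_learning": len(e_learning),
        "found_by_mobile": len(mobile),
        "found_by_both (redundancy)": len(e_learning & mobile),
        "unique_to_e_learning": len(e_learning - mobile),
        "unique_to_mobile": len(mobile - e_learning),
    }

if __name__ == "__main__":
    e_learning_problems = {"P01", "P02", "P03", "P05"}
    mobile_problems = {"P02", "P04"}
    for metric, value in compare_sets(e_learning_problems, mobile_problems).items():
        print(f"{metric}: {value}")
```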

