User and usability testing - how should it be undertaken?

Author(s):  
Merle Conyer

Usability evaluation is the analysis of the design of a product or system in order to assess the match between users and that product or system within a particular context. It is a dynamic process throughout the life cycle of a product or system. Conducting evaluation both with and without end users significantly improves the chances of success. Six usability evaluation methods and six data collection techniques are discussed, including the advantages and limitations of each. Recommendations are made regarding the selection of particular evaluation methods and recording techniques to evaluate different elements of usability.

Author(s):  
Regina Bernhaupt
Kristijan Mihalic
Marianna Obrist

Evaluating mobile applications and devices is particularly challenging given the variability of users, uses, and environments involved. This chapter introduces usability evaluation methods (UEMs) for mobile applications. Over the past decades, various usability evaluation methods have been developed and implemented to improve and assure easy-to-use user interfaces and systems. Because most of the so-called ‘classical’ methods have shown shortcomings when applied to mobile applications, they have been broadened, varied, and changed to meet the demands of testing usability for mobile applications. This chapter presents a selection of these ‘classical’ methods and introduces some methodological variations for testing usability in the area of mobile devices and applications. It argues for a combination of field evaluation methods and traditional laboratory testing to cover different phases of the user-centered design and development process.


Author(s):  
Judith Symonds

Usability Evaluation Methods (UEMs) are plentiful in the literature. However, there appears to be renewed interest in usability testing from the viewpoint of the industry practitioner, and renewed effort to reflect usability design principles throughout the software development process. In this chapter we examine one such example of usability testing from the viewpoint of the industry practitioner, reflect upon how usability evaluation methods are perceived by the software developers of a content-driven system, and discuss some benefits that can be derived from bringing together usability theory and the usability evaluation method protocols used by practitioners. In particular, we use the simulated prototyping method and the “Talk Aloud” protocol to assist a small software development company in undertaking usability testing. We identify some issues that arise from usability testing from the perspectives of the researchers and practitioners and discuss our understanding of the knowledge transfer that occurs between the two.


Author(s):  
Regina Bernhaupt

In order to develop easy-to-use multimodal interfaces for mobile applications, effective usability evaluation methods (UEMs) are an essential component of the development process. Over the past decades, various usability evaluation methods have been developed and implemented to improve and assure easy-to-use user interfaces and systems. However, most of the so-called ‘classical’ methods exhibit shortcomings when used in the field of mobile applications, especially when addressing multimodal interaction (MMI). Hence, several ‘classical’ methods were broadened, varied, and changed to meet the demands of testing usability for multimodal interfaces and mobile applications. This chapter presents a selection of these ‘classical’ methods, and introduces some newly developed methods for testing usability in the area of multimodal interfaces. The chapter concludes with a summary of currently available methods for usability evaluation of multimodal interfaces for mobile devices.


Author(s):  
Niels Ebbe Jacobsen
Morten Hertzum
Bonnie E. John

Usability studies are commonly used in industry and applied in research as a yardstick for other usability evaluation methods. Though usability studies have been studied extensively, one potential threat to their reliability has been left virtually untouched: the evaluator effect. In this study, four evaluators individually analyzed four videotaped usability test sessions. Only 20% of the 93 detected problems were detected by all evaluators, and 46% were detected by only a single evaluator. From the total set of 93 problems the evaluators individually selected the ten problems they considered most severe. None of the selected severe problems appeared on all four evaluators' top-10 lists, and 4 of the 11 problems that were considered severe by more than one evaluator were only detected by one or two evaluators. Thus, both detection of usability problems and selection of the most severe problems are subject to considerable individual variability.
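As a rough illustration of how such evaluator-overlap figures can be computed, the sketch below (Python, using purely invented example data rather than the study's) derives the "detected by all evaluators" and "detected by only a single evaluator" percentages from a mapping of problems to the evaluators who detected them.

```python
# Minimal sketch: evaluator-effect overlap statistics from a
# problem -> detecting-evaluators mapping. The data below is
# illustrative only; it is NOT the data from the study above.

detections = {
    "P1": {"E1", "E2", "E3", "E4"},
    "P2": {"E1"},
    "P3": {"E2", "E3"},
    "P4": {"E4"},
    "P5": {"E1", "E2", "E3", "E4"},
}
evaluators = {"E1", "E2", "E3", "E4"}

total = len(detections)
# Problems every evaluator found independently.
by_all = sum(1 for found_by in detections.values() if found_by == evaluators)
# Problems only one evaluator found.
by_one = sum(1 for found_by in detections.values() if len(found_by) == 1)

print(f"{total} problems in total")
print(f"detected by all evaluators: {by_all / total:.0%}")
print(f"detected by a single evaluator: {by_one / total:.0%}")
```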


2016, Vol. 2016, pp. 1-16
Author(s):  
Andrés Solano
César A. Collazos
Cristian Rusu
Habib M. Fardoun

Usability is a fundamental quality characteristic for the success of an interactive system. It is a concept that includes a set of metrics and methods for obtaining easy-to-learn and easy-to-use systems. Usability Evaluation Methods (UEM) are quite diverse; their application depends on variables such as cost, time availability, and human resources. A large number of UEM can be employed to assess interactive software systems, but questions arise when deciding which method and/or combination of methods gives more (relevant) information. We propose Collaborative Usability Evaluation Methods (CUEM), following the principles defined by Collaboration Engineering. This paper analyzes a set of CUEM conducted on different interactive software systems. It proposes combinations of CUEM that provide more complete and comprehensive information about the usability of interactive software systems than those evaluation methods conducted independently.


Author(s):  
Muhammad Nazrul Islam
Franck Tétard

User interfaces of computer applications encompass a number of objects such as navigation links, buttons, icons, and thumbnails. In this chapter, these are called interface signs. The content and functions of a computer application are generally directed by interface signs to convey the system’s logic to the end users. The interface signs of a usable application need to be intuitive to end users and are therefore a necessary part of usability evaluation. Assessing sign intuitiveness can be achieved through a semiotic analysis. This study demonstrates how a semiotic assessment of interface signs’ intuitiveness yielded a number of benefits. For instance, (i) it provides an overall picture of how intuitively end users can interpret the meaning of interface signs, (ii) it assists in finding usability problems and (iii) in recommending possible solutions, (iv) it provides background for introducing guidelines to design user-intuitive interface signs, (v) it helps in constructing a heuristic checklist from a semiotics perspective to evaluate an application, and (vi) it requires no additional resources or extra budget. This study also presents a list of methodological guidelines for practitioners to obtain the perceived benefits of integrating semiotic perception into usability testing.


Author(s):  
Panagiotis Zaharias

The issue of e-learning quality remains prominent on end users’ (the learners’) agenda. It is no surprise that many unmotivated adult learners prematurely abandon their e-learning experiences. This is attributed to a great extent to the poor design and usability of e-learning applications. This paper proposes a usability framework that addresses the user as a learner and extends current e-learning usability practice by focusing on the affective dimension of learning, a frequently neglected issue in e-learning development. Motivation to learn, a dominant affective factor related to learning effectiveness, has been similarly neglected. Usability and instructional design constructs, as well as Keller’s ARCS Model, are employed within the framework proposed in this work, upon which new usability evaluation methods can be based. This framework integrates web usability and instructional design parameters and proposes motivation to learn as a new usability dimension for designing and evaluating e-learning applications.

