Integrating Semiotics Perception in Usability Testing to Improve Usability Evaluation

Author(s):  
Muhammad Nazrul Islam ◽  
Franck Tétard

User interfaces of computer applications encompass a number of objects such as navigation links, buttons, icons, and thumbnails. In this chapter, these are called interface signs. The content and functions of a computer application are generally conveyed through interface signs, which present the system’s logic to end users. The interface signs of a usable application therefore need to be intuitive to end users, and assessing them is a necessary part of usability evaluation. Sign intuitiveness can be assessed through semiotic analysis. This study demonstrates how a semiotic assessment of interface signs’ intuitiveness yielded a number of benefits: (i) it provides an overall picture of how intuitively end users can interpret the meaning of interface signs; (ii) it assists in finding usability problems and (iii) in recommending possible solutions; (iv) it provides a basis for guidelines on designing user-intuitive interface signs; (v) it helps in constructing a heuristic checklist, from a semiotic perspective, for evaluating an application; and (vi) it requires no additional resources or extra budget. This study also presents a list of methodological guidelines to help practitioners obtain these benefits when integrating semiotic perception into usability testing.

Author(s):  
Xian Wu ◽  
Jenay M. Beer

Telepresence has the potential to help older adults stay socially connected and access telehealth. Telepresence was initially created for office use, so its usability for older adults remains unknown and there is a lack of design recommendations, particularly ones that emphasize users’ age-related needs and limitations. To bridge this gap, this study assessed two telepresence user interfaces (UIs). One UI was designed to mimic common features found in commercially available telepresence systems. The other UI was designed following design guidelines for older adults. Each UI was integrated into a virtual driving environment created in Unity. To assess the usability of both UIs, thirty older adults participated in usability testing. Questionnaires and semi-structured interviews were administered after each UI test session. The results provide insight into which usability features are critical for the aging population to use telepresence, such as high color contrast, automated controls, and consistent icons.
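One of the recommendations above, high color contrast, can be checked programmatically. The sketch below is a minimal illustration only, not part of the study; it computes the WCAG 2.x contrast ratio between two sRGB colors, and the example foreground/background values are hypothetical.

# Minimal sketch: WCAG 2.x contrast ratio between two sRGB colors.
# The example colors are hypothetical, not taken from the study's UIs.

def channel_to_linear(c8):
    """Convert an 8-bit sRGB channel to linear light."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (channel_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

if __name__ == "__main__":
    white, dark_gray = (255, 255, 255), (68, 68, 68)
    ratio = contrast_ratio(dark_gray, white)
    print(f"contrast ratio: {ratio:.2f}:1")  # WCAG AA asks for >= 4.5:1 for body text

A designer could run such a check over a UI's text/background color pairs to flag combinations that fall below the accessibility threshold.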


2021 ◽  
pp. 366-383
Author(s):  
Engracia Santos

The primary objective of this study is to determine the usability of the Newspaper-on-DVD project of the Rizal Library, Ateneo de Manila University, and to make recommendations that will help improve the system and expand its usage. In 2000, the library started converting preserved newspapers from microfilm to digital images. The library then provided an easier mode of access through a search tool that links the index to the images and to the printers. A descriptive evaluative research method based on usability testing was used in this study. Ten representative students were asked to complete a series of tasks using NP-DVD. Based on the test, the researcher was able to identify usability problems and recommend future actions to enhance the system. These problems characterize the difficulties users face when using library search tools, available not only in libraries but also on the internet.


Author(s):  
Romaric Marcilly ◽  
Jessica Schiro ◽  
Louise Heyndels ◽  
Sandra Guerlinger ◽  
Annick Pigot ◽  
...  

It is necessary for hospitals to be able to compare the usability of electronic health records (EHRs) before acquisition. Adding usability as a critical element of the procurement process is therefore crucial. During the competitive usability evaluation of several EHRs, the usability walkthrough method has the potential to make end users more active in the procurement process than demonstrations do. This case study presents the first results of a comparison of three EHRs performed by nine representative end users. All users uncovered usability problems while performing their scenarios. The results show that none of the EHRs evaluated is free of major usability problems, problems that have been well known to human factors researchers for a long time.


2018 ◽  
Vol 9 (1) ◽  
pp. 62-81 ◽  
Author(s):  
Jehad Alqurni ◽  
Roobaea Alroobaea ◽  
Mohammed Alqahtani

Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by developing its heuristics and procedures. However, few studies have involved the end user, and to the best of the authors' knowledge, no HE studies involving end users with non-expert evaluators have been reported. The aim of this study is therefore to investigate the impact of end users on the results obtained by a non-expert evaluator within the HE process and, through that, to explore the number of usability problems found and their severity. This article proposes introducing two sessions within the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and usability testing (UT) methods. The findings show that the end user has a significant impact on non-expert evaluator results in both sessions. The UES-HE method outperformed all other usability evaluation methods (UEMs) in terms of the usability problems identified, and it tended to identify more major, minor, and cosmetic problems than the other methods.
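As a rough illustration of how problem counts and severity levels might be tallied when comparing evaluation methods such as those above, the sketch below uses invented findings; the method names mirror the article (UES-HE, URS-HE, HE, UT), but the data is hypothetical.

# Hypothetical sketch: tally usability problems by severity for each method.
# Counts are invented for illustration only.
from collections import Counter

findings = {
    "UES-HE": ["major", "major", "minor", "minor", "cosmetic", "cosmetic"],
    "URS-HE": ["major", "minor", "minor", "cosmetic"],
    "HE":     ["major", "minor", "cosmetic"],
    "UT":     ["major", "major", "minor"],
}

for method, severities in findings.items():
    counts = Counter(severities)
    total = sum(counts.values())
    print(f"{method}: {total} problems "
          f"(major={counts['major']}, minor={counts['minor']}, cosmetic={counts['cosmetic']})")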


Author(s):  
France Jackson ◽  
Lara Cheng

Introduction
Heuristic Evaluation is a usability method that requires usability experts to review and offer feedback on user interfaces based on a list of heuristics or guidelines. Heuristic Evaluations allow designers to get feedback early and quickly in the design process, before a full usability test is done. Unlike many usability evaluation methods, Heuristic Evaluations are performed by usability experts rather than target users. That is one reason it makes a great challenge activity for the UX Day Challenge session. Heuristic Evaluation is often used in conjunction with usability testing. During the evaluation, usability experts evaluate an interface based on a list of heuristics or guidelines (Nielsen and Molich, 1990). There are several sets of guidelines, and they are used to evaluate a myriad of interfaces, from gaming (Pinelle, Wong & Stach, 2008) and virtual reality (Sutcliffe & Gault, 2004) to online shopping (Chen & Macredie, 2005). Some of the most common heuristic guidelines to choose from were created by Nielsen (Nielsen and Molich, 1990; Nielsen, 1994), Norman (Norman, 2013), Tognazzini (Tognazzini, 1998), and Shneiderman (Shneiderman, Plaisant, Cohen and Elmqvist, 2016). Choosing the best set of guidelines and the most appropriate number of usability professionals is important. Nielsen and Molich’s research found that individual evaluators find only 20-51% of the usability problems when evaluating alone. However, when the feedback of three to five evaluators is aggregated, more usability problems can be uncovered (Nielsen and Molich, 1990); a small simulation of this effect is sketched after this description. This method can be advantageous because designers can get quick feedback early for iteration, before a full round of usability testing is performed. The goal of this session is to introduce this method to some and give others a refresher on how to apply it in the real world.

The Challenge
For several years, UX Day has offered an alternative session. The most intriguing sessions were interactive and offered hands-on training. For this UX Day Challenge session, teams of at most five participants will perform a Heuristic Evaluation of a sponsor’s website or product. During the session, participants will be introduced to Heuristic Evaluations. Topics such as how to perform one, who should perform one, and when it is appropriate to perform one will be covered. Additionally, the pros and cons of using this method will be discussed. Following the introduction to Heuristic Evaluation, teams will use the updated set of Nielsen heuristics (Nielsen, 1994) for the evaluation exercise. Although there are several sets of heuristics, Nielsen’s is one of the best known and most widely accepted. The following updated Nielsen heuristics will be used:
• Visibility of system status
• Match between system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation
Following the evaluation period, teams will be asked to report their findings and recommendations to the judges and audience. The judges will deliberate and announce the winner.

Conclusion
This alternative session will be an opportunity to expose participants to a methodology they may not use often. It will also be a hands-on learning experience for students who have not formally used this methodology in the real world. Most importantly, this session continues the goal of bringing new, interesting, and disruptive sessions to the traditional “conference” format and attracting UX practitioners.
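The point above about aggregating three to five evaluators can be illustrated with a small simulation. This is only a sketch with assumed, illustrative detection probabilities, not data from the session or from Nielsen and Molich's studies.

# Sketch: why aggregating evaluators helps. Assumes each evaluator independently
# detects each of N problems with probability p (values are illustrative only).
import random

random.seed(42)
N_PROBLEMS = 40          # hypothetical number of real problems in the interface
DETECTION_P = 0.35       # hypothetical per-evaluator detection rate (cf. the 20-51% range)

def evaluate():
    """Return the set of problem ids one evaluator finds."""
    return {i for i in range(N_PROBLEMS) if random.random() < DETECTION_P}

for team_size in (1, 3, 5):
    found = set()
    for _ in range(team_size):
        found |= evaluate()
    print(f"{team_size} evaluator(s): {len(found)}/{N_PROBLEMS} problems found")

Under these assumptions, a single evaluator covers only a fraction of the problem set, while the union of three to five evaluators' findings covers substantially more, which is the rationale for team-based evaluation in the challenge.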


2010 ◽  
Vol 5 ◽  
pp. 57-65
Author(s):  
Petr Voldán

This study presents usability testing as a method that can be used to improve the controls of web map sites. The study reviews the basic principles of the method and describes usability tests of particular mapping sites. The paper identifies potential usability problems of the websites Amapy.cz, Google Maps, and Mapy.cz. The usability testing focused on problems related to the user interfaces, address searching, and route planning of the map sites.


Author(s):  
Merle Conyer

Usability evaluation is the analysis of the design of a product or system in order to evaluate the match between users and a product or system within a particular context. Usability evaluation is a dynamic process throughout the life cycle of a product or system. Conducting evaluation both with and without end users significantly improves the chances of success. Six usability evaluation methods and six data collection techniques are discussed, including the advantages and limitations of each. Recommendations are made regarding the selection of particular evaluation methods and recording techniques to evaluate different elements of usability.


Author(s):  
Naouel Moha ◽  
Ashraf Gaffar ◽  
Gabriel Michel

Usability testing is a process that employs a sample of future users to evaluate software against specific usability criteria. With the unprecedented growth and reach of the Internet, it is hard to reach representative users of websites across the world, and remote usability testing has emerged as an alternative. While it is prohibitively expensive to conduct conventional usability testing with a global range of users, it is technically possible, and more feasible, to remotely collect the necessary information about usability problems and to analyze it the same way we analyze local tests. In this chapter, we present systematic methods and tools to support remote usability testing and evaluation of Web interfaces.
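As an illustration of how remotely collected task logs might be summarized, the sketch below is a hedged example; the log structure and field names (task, completed, seconds, errors) are assumptions for illustration, not the chapter's actual tooling.

# Sketch: summarize remotely collected usability logs per task.
# The log structure is an assumption made for this example.
from statistics import mean

remote_logs = [
    {"task": "find product", "completed": True,  "seconds": 42.0,  "errors": 1},
    {"task": "find product", "completed": True,  "seconds": 55.5,  "errors": 0},
    {"task": "checkout",     "completed": False, "seconds": 120.0, "errors": 3},
    {"task": "checkout",     "completed": True,  "seconds": 98.0,  "errors": 2},
]

tasks = {entry["task"] for entry in remote_logs}
for task in sorted(tasks):
    runs = [e for e in remote_logs if e["task"] == task]
    completion_rate = sum(e["completed"] for e in runs) / len(runs)
    print(f"{task}: completion {completion_rate:.0%}, "
          f"mean time {mean(e['seconds'] for e in runs):.1f}s, "
          f"total errors {sum(e['errors'] for e in runs)}")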


2012 ◽  
Vol 3 (4) ◽  
pp. 1-19
Author(s):  
Ting Zhang ◽  
Pei-Luen Patrick Rau ◽  
Gavriel Salvendy ◽  
Jia Zhou

This study compared usability testing results obtained with low- and high-fidelity prototypes of mobile phones. The main objective is to obtain a deep understanding of the usability problems found with different prototyping methods. Three mobile phones from different manufacturers were selected for the experiment. The usability of the mobile phones was evaluated by participants who completed a questionnaire consisting of 13 usability factors. Taking the task-based complexity of the three mobile phones into account, significant differences in the usability evaluation were found for each individual factor. Suggestions on usability testing with prototyping techniques for mobile phones are proposed. This study aims to provide new evidence to the field of mobile phone usability research and to develop a feasible way to quantitatively evaluate prototype usability with novices. The comparison of paper-based and fully functional prototypes shows how significantly the unique characteristics of different prototypes affect the usability evaluation. The experiment took product complexity into account and makes suggestions on choosing a proper prototyping technique for testing particular aspects of mobile phone usability.
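One common way to test for a per-factor difference between two prototype fidelities is an independent-samples t-test on the questionnaire ratings. The sketch below uses invented Likert-style ratings and does not reproduce the article's 13 factors or its actual analysis.

# Sketch: compare questionnaire ratings for one usability factor between a
# low-fidelity and a high-fidelity prototype. Ratings are invented for illustration.
from scipy import stats

low_fidelity_ratings  = [3, 4, 3, 5, 4, 3, 4, 2, 3, 4]   # e.g. 1-7 Likert scores
high_fidelity_ratings = [5, 6, 5, 6, 4, 5, 6, 5, 5, 6]

t_stat, p_value = stats.ttest_ind(low_fidelity_ratings, high_fidelity_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would suggest a fidelity effect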


2019 ◽  
Vol 29 (2) ◽  
pp. 325-333 ◽  
Author(s):  
Olalekan Lee Aiyegbusi

Introduction
Recent advances in information technology and improved access to the internet have led to a rapid increase in the adoption and ownership of electronic devices such as touch-screen smartphones and tablet computers. This has also led to renewed interest in the field of digital health, also referred to as telehealth or electronic health (eHealth). There is now a drive to collect patient-reported outcomes (PROs) electronically using ePRO systems.

Method
The user interfaces of ePRO systems need to be adequately assessed to ensure they are not only fit for purpose but also acceptable to patients, who are the end users. Usability testing is a technique that involves testing systems, products, or websites with participants drawn from the target population. Usability testing can assist ePRO developers in the evaluation of ePRO user interfaces. The complexity of ePRO systems; the stage of development; the metrics to measure; and the use of scenarios, moderators, and appropriate sample sizes are key methodological issues to consider when planning usability tests.

Conclusion
The findings from usability testing may facilitate the improvement of ePRO systems, making them more usable and acceptable to end users. This may in turn improve the adoption of ePRO systems post-implementation. This article highlights the key methodological issues to consider and address when planning usability testing of ePRO systems.
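On the sample-size question raised above, one rule of thumb commonly cited in the usability literature (not necessarily the approach taken in this article) is the problem-discovery model, in which the expected proportion of problems found with n participants is 1 - (1 - p)^n for a per-participant detection probability p. A minimal sketch follows; the value p = 0.31 is the figure often quoted in the literature and is used purely for illustration.

# Sketch of the problem-discovery model often cited for usability sample sizes:
# expected share of problems found = 1 - (1 - p)^n. p = 0.31 is illustrative only.
p = 0.31
for n in (1, 3, 5, 8, 10, 15):
    discovered = 1 - (1 - p) ** n
    print(f"n = {n:2d} participants -> ~{discovered:.0%} of problems expected to be found")

Such a calculation can help planners weigh the diminishing returns of adding participants against the cost of each usability session.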

