Evaluating complex digital resources

2003, Vol 11 (1)
Author(s): Canan Tosunoglu Blake, Clare Davies, Ann Jones, Erica Morris, Eileen Scanlon

Squires (1999) discussed the gap between the HCI (Human-Computer Interaction) and educational computing communities, reflected in their very different approaches to evaluating educational software. This paper revisits that issue in the context of evaluating digital resources, focusing on two approaches to evaluation: an HCI and an educational perspective. Squires and Preece's HCI evaluation model is a predictive model, helping teachers decide whether or not to use educational software, whilst our own concern is with evaluating the use of learning technologies. It is suggested that the different approaches of the two communities relate in part to the different focus each takes: in HCI the focus is typically on development and hence usability, whilst in education the concern is with learner and teacher use.

DOI: 10.1080/0968776030110102

Author(s): Gicele Vieira Prebianca, Vital Pereira dos Santos Junior, Kyria Rebeca Finardi

The aim of the study is to analyze software for teaching English as a foreign language, reporting (i) the interaction between the software and the learner; (ii) the cognitive/mental operations required to perform the tasks in the software; and (iii) the pedagogical strategies implemented by the software. Human-Computer Interaction (HCI) aspects of the software were also analyzed so as to evaluate its degree of interactivity and usability (Ergolist, 2011). Results of the study suggest that the software is content-oriented, and the ergonomic analysis revealed that the didactic resources applied by the software meet most usability criteria, requiring few modifications.
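As a rough illustration of the kind of checklist-based ergonomic analysis described above, the Python sketch below tallies how many usability criteria a piece of software satisfies. The criterion names and pass/fail outcomes are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch only: the criterion names and outcomes below are
# hypothetical, not results from the Ergolist analysis in the study.

def usability_compliance(results: dict[str, bool]) -> float:
    """Return the fraction of usability criteria the software satisfies."""
    return sum(results.values()) / len(results)

# Hypothetical checklist outcomes for an evaluated piece of software.
ergonomic_results = {
    "prompting": True,
    "grouping_by_location": True,
    "immediate_feedback": True,
    "legibility": True,
    "error_protection": False,  # an example "few modifications" item
    "user_control": False,
}

print(f"Criteria met: {usability_compliance(ergonomic_results):.0%}")
# Criteria met: 67%
```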


Author(s): B Chimbo, J H Gelderblom, M R De Villiers

The learnability principle relates to improving the usability of software, as well as users' performance and productivity. A gap has been identified: the current definition of the principle does not distinguish between users of different ages. To determine the extent of this gap, this article compares the ways in which two user groups, adults and children, learn how to use an unfamiliar software application. In doing this, we bring together the research areas of human-computer interaction (HCI), adult and child learning, learning theories and strategies, usability evaluation, and interaction design. A literature survey on learnability and learning processes considered the meaning of learnability of software applications across generations. In an empirical investigation, users aged 9 to 12 and 35 to 50 were observed in a usability laboratory while learning to use educational software applications; eye-tracking data was also recorded. Insights that emerged from data analysis showed different tactics and approaches that children and adults use when learning unfamiliar software. Findings indicated that a subtle re-interpretation of the learnability principle and its associated sub-principles was required. An additional sub-principle, namely engageability, was proposed to incorporate aspects of learnability that are not covered by the existing sub-principles. Our re-interpretation of the learnability principle and the resulting design recommendations should help designers fulfill the varying needs of different-aged users and improve the learnability of their designs.

Keywords: Child-computer interaction, Design principles, Eye tracking, Generational differences, Human-computer interaction, Learning theories, Learnability, Engageability, Software applications, Usability

Disciplines: Human-Computer Interaction (HCI) Studies, Computer Science, Observational Studies
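One simple way to quantify learnability when comparing age groups in a study like this is the relative speed-up in task-completion time over repeated trials. The sketch below is a hypothetical illustration; the trial times are invented, not findings from the article.

```python
# Illustrative sketch only: trial times are invented, not data from the study.
from statistics import mean

def learnability_gain(trial_times: list[float]) -> float:
    """Relative speed-up from the first trial to the last (0 = none, 1 = max)."""
    return (trial_times[0] - trial_times[-1]) / trial_times[0]

# Hypothetical task-completion times (seconds) across four trials per group.
groups = {
    "children (9-12)": [310.0, 240.0, 180.0, 150.0],
    "adults (35-50)": [280.0, 230.0, 200.0, 190.0],
}

for name, times in groups.items():
    print(f"{name}: mean {mean(times):.0f}s, gain {learnability_gain(times):.0%}")
```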


Author(s): Karin Hedstrom, Stefan Cronholm

In this chapter, we discuss an evaluation of a computerized information system in an elderly care unit. The evaluation is based on the concept of actability, which combines theories from Human-Computer Interaction and the Language Action Perspective. The reason for uniting different theories is to obtain a more holistic evaluation model. The findings show that the evaluated system has a low degree of actability, yet the users had a positive attitude towards the system. One explanation could be that we, as evaluators, reviewed both structure and content, whereas the users regarded only the content of the information system (i.e., its functions) as the most important aspect.
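To make the structure-versus-content distinction concrete, the sketch below scores the two views separately and averages them. Actability is assessed qualitatively in the chapter; the dimensions and 1-5 ratings here are purely hypothetical, not the chapter's instrument.

```python
# Illustrative sketch only: dimensions and ratings are hypothetical, not the
# chapter's actability instrument.

def actability_score(structure: dict[str, int],
                     content: dict[str, int]) -> dict[str, float]:
    """Average 1-5 ratings for the structure view, the content view, and overall."""
    s = sum(structure.values()) / len(structure)
    c = sum(content.values()) / len(content)
    return {"structure": s, "content": c, "overall": (s + c) / 2}

# Hypothetical ratings (1 = poor, 5 = good): evaluators saw structural problems,
# while users rated the content (its functions) favourably.
structure = {"action_transparency": 2, "communication_support": 2}
content = {"relevant_functions": 4, "information_quality": 4}

print(actability_score(structure, content))
# {'structure': 2.0, 'content': 4.0, 'overall': 3.0}
```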

