Usability Metrics
Recently Published Documents

Total documents: 62 (last five years: 20)
H-index: 11 (last five years: 2)

The current study developed a proposed mobile app for tourism companies in Egypt and tested its usability. A survey of 53 respondents was conducted based on the mobile app features, which were identified by tourists. The proposed mobile app was then tested using a usability measurement framework to evaluate the usability of the app interface and to ensure that the app meets user requirements. Three main usability metrics were employed in this study: effectiveness, efficiency, and satisfaction. This study contributes to the mobile tourism and mobile apps literature and offers useful information for the ministry of tourism, software companies, mobile application developers, and mobile device users, as well as entrepreneurs, policy makers, practitioners, researchers, and educators, by providing a clearer view and a deeper understanding of the issues related to the adoption of new tourism-related mobile phone applications in Egypt.
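The three metrics named above are commonly operationalised as simple ratios. A minimal sketch, with illustrative function names and invented data (not figures from the study), might look like this:

```python
# Sketch of the three usability metrics (effectiveness, efficiency,
# satisfaction) as simple ratios; all names and numbers are illustrative.

def effectiveness(completed_tasks, attempted_tasks):
    """Completion rate: share of tasks users finished successfully."""
    return completed_tasks / attempted_tasks

def efficiency(completed_tasks, total_time_seconds):
    """Time-based efficiency: successful tasks per minute of effort."""
    return completed_tasks / (total_time_seconds / 60)

def satisfaction(likert_scores, scale_max=5):
    """Mean satisfaction rating normalised to the 0..1 range."""
    return sum(likert_scores) / (len(likert_scores) * scale_max)

# Example: 8 of 10 tasks completed in 12 minutes, ratings on a 1-5 scale
print(effectiveness(8, 10))                      # 0.8
print(efficiency(8, 720))                        # tasks per minute
print(round(satisfaction([4, 5, 3, 4], 5), 2))   # 0.8
```

Real studies typically aggregate these per task and per participant before averaging, but the ratios themselves take this form.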


Author(s):  
CJ Montalbano ◽  
Julian Abich ◽  
Eric Sikorski

Researchers took a user-centered approach to evaluate pilots’ preferences and perceptions of training with an innovative VR-based immersive training device (ITD). Over the course of one week, usability and user experience data were gathered from U.S. Air Force instructor pilots (IPs), unqualified instructor pilots (UIs), and student pilots (SPs). Coming from various squadrons, these pilots provided feedback on their interactions with the ITDs. A think-aloud protocol, observations, and surveys were used to capture participants’ perceptions of the different hardware variants using the following usability metrics: fit and feel, function, and sustained and future use. At this stage of development, various configurations of the ITDs were evaluated to determine which technological components should be included in the final design. The data presented here focus on one of those components, the aircraft control or center stick. The results for the stick component are discussed as a use case, as it illustrates the user-centered approach and the data analysis strategy that captured and identified noteworthy differences in perceived training value.


2021 ◽  
Vol 12 (3) ◽  
pp. 1048-1053
Author(s):  
Muhammad Modi Lakulu et al.

Much recent research has emphasized the potential of mobile educational applications for enhancing early reading among kindergarteners. Based on the Framework of Mobile Application for Kindergarten Early Reading, the researchers developed a novel and engaging mobile educational application for kindergarten early reading, Adik Jom Baca. This study explored the usability of the mobile application using the Usability Metrics for Mobile Learning User Interface for Children. This metric evaluates the usability of an application along three dimensions: effectiveness, efficiency, and satisfaction. The usability evaluation involved 30 kindergarten teachers and used the Nominal Group Technique (NGT). Results indicated a high level of usability for the Adik Jom Baca application.


2020 ◽  
Vol 2 (2) ◽  
pp. 140-157
Author(s):  
Antonius Rachmat Chrismanto ◽  
Joko Purwadi ◽  
Argo Wibowo ◽  
Halim Budi Santoso ◽  
Rosa Delima ◽  
...  

The Agricultural Land Mapping System (SPLP) is indispensable in an agricultural country where a large share of the population consists of farmers. The system has been developed by the research team since 2019 and has resulted in web- and mobile-based versions. The Dutatani SPLP system was developed using the Rapid Application Development (RAD) method. Before the system is deployed more widely in the community, it needs to be tested in terms of functionality and usability. This article compares the functionality and usability testing of the web- and mobile-based SPLP. Testing was carried out using the ISO/IEC 9126-4 usability metrics, which focus on effectiveness and efficiency, and involved farmers and farmer groups from Gilang Harjo Village, Bantul, Yogyakarta. The results show that, overall, respondents could complete all of the assigned tasks but took a long time to do so. This was influenced by an internal factor: the respondents' limited experience using mobile phones for activities other than calls and short messages, which meant they needed extra time during testing to adapt to the system. Nevertheless, based on time on task, the mobile-based SPLP was faster than the web-based version.
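The time-on-task comparison described above can be illustrated with a small sketch. The geometric mean is a common choice for summarising task times, since completion-time data tend to be right-skewed; all numbers here are invented rather than taken from the study:

```python
# Hedged sketch: comparing time on task between two platform variants
# using the geometric mean of per-task completion times (seconds).
import math

def geometric_mean(times):
    """Geometric mean, robust to the right skew typical of task times."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

web_times = [95, 120, 150, 110]     # illustrative seconds per task (web)
mobile_times = [80, 100, 130, 90]   # illustrative seconds per task (mobile)

# The variant with the lower geometric mean is faster on average.
print(geometric_mean(mobile_times) < geometric_mean(web_times))  # True
```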


Author(s):  
Daniela Chanci ◽  
Naveen Madapana ◽  
Glebys Gonzalez ◽  
Juan Wachs

The choice of the best gestures and commands for touchless interfaces is a critical step that determines user satisfaction and the overall efficiency of surgeon-computer interaction. In this regard, usability metrics such as task completion time, error rate, and memorability have long been used to determine the best gesture vocabulary. In addition, some previous work on this problem has used qualitative measures to identify the best gesture. In this work, we hypothesize that there is a correlation between the qualitative properties of gestures (v) and their usability metrics (u). We therefore conducted an experiment with linguists to quantify the properties of the gestures. Next, a user study was conducted with surgeons, and the usability metrics were measured. Lastly, linear and non-linear regression techniques were used to find the correlations between u and v. Results show that usability metrics are correlated with the gestures’ qualitative properties (R² = 0.4).
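The regression step can be sketched in miniature: fit a least-squares line from a single qualitative property v to a usability metric u and report the coefficient of determination R². The data points below are invented, not the study's data:

```python
# Minimal pure-Python simple linear regression with R^2 reporting.

def fit_r2(v, u):
    """Fit u ~ slope*v + intercept by least squares; return R^2."""
    n = len(v)
    mv, mu = sum(v) / n, sum(u) / n
    sxy = sum((a - mv) * (b - mu) for a, b in zip(v, u))
    sxx = sum((a - mv) ** 2 for a in v)
    slope = sxy / sxx
    intercept = mu - slope * mv
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(v, u))
    ss_tot = sum((b - mu) ** 2 for b in u)
    return 1 - ss_res / ss_tot

v = [1.0, 2.0, 3.0, 4.0, 5.0]        # qualitative property scores (invented)
u = [10.2, 12.1, 13.8, 17.0, 17.9]   # e.g. task completion times (invented)
print(round(fit_r2(v, u), 3))
```

An R² of 0.4, as reported above, means the fitted qualitative properties explain about 40% of the variance in the usability metrics.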


2020 ◽  
Author(s):  
Aleeha Iftikhar ◽  
Raymond R. Bond ◽  
Victoria McGilligan ◽  
Stephen J. Leslie ◽  
Khaled Rjoob ◽  
...  

BACKGROUND Even in the era of digital technology, a number of hospitals still rely on paper-based forms for data entry for patient admission, triage, drug prescriptions, and procedures. Paper-based forms can be efficient to complete, but often at the expense of data quality, completeness, sustainability, and automated data analytics, to name but a few limitations. As an additional benefit, digital forms could also assist with decision making when deciding on the appropriate response to certain data inputs (e.g. when classifying symptoms). OBJECTIVE Nevertheless, there is a lack of empirical best practices and guidelines for the interaction design of digital health forms. In this study, we assess the usability of three different interactive forms: 1) a single-page digital form (where all data input is required on one web page), 2) a multi-page digital form, and 3) a conversational digital form (a chatbot). METHODS These three digital forms were developed as candidates to replace a current paper-based form that is used to record patient referrals to an interventional cardiology department (Cath Lab) at Altnagelvin Hospital. We recorded three different usability metrics from data collected in a counterbalanced usability test (60 usability tests: 20 subjects × 3 forms). RESULTS The usability metrics included the SUS questionnaire, the UEQ, and a final customised questionnaire. We found that the single-page form outperformed the other two digital form styles in almost all of the metrics. The mean SUS score for the single-page form was 76±15.8 (p<0.05), and it achieved the shortest task completion time of the three form styles. CONCLUSIONS In conclusion, the digital single-page form outperformed the other two forms in almost all the usability metrics, with a mean SUS score of 76±15.8 and the shortest task completion time.
Moreover, upon answering the open-ended question, the single-page form was the preferred choice. However, this preference might change over time as multi-page and conversational forms become more common.
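The SUS scores reported above come from the standard 10-item System Usability Scale scoring procedure, which can be sketched as follows (the responses shown are illustrative, not data from the study):

```python
# Standard SUS scoring: 10 Likert items answered 1-5; odd-numbered items
# are positively worded, even-numbered items negatively worded.

def sus_score(responses):
    """Return the 0-100 SUS score for one participant's 10 responses."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Positive items contribute (r - 1); negative items (5 - r).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 raw sum to 0-100

# Best possible pattern: agree with odd items, disagree with even items
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
print(sus_score([3] * 10))                         # 50.0 (all neutral)
```

A study-level SUS result such as 76±15.8 is then the mean and standard deviation of these per-participant scores.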


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 139-151
Author(s):  
Thomas Schmidt ◽  
Miriam Schlindwein ◽  
Katharina Lichtner ◽  
Christian Wolff

Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely applied in usability engineering (UE) to measure the emotional state of participants. We investigate whether applying sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for three modalities: text, speech, and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design using two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users’ voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between two websites of varying usability, and regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
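The correlation analyses described above come down to computing Pearson's r between a per-participant affect score and that participant's SUS score. A minimal sketch with invented numbers:

```python
# Hedged sketch: Pearson correlation between a per-participant sentiment
# score (e.g. mean polarity of think-aloud text) and the SUS score.
# All numbers are invented, not data from the study.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sentiment = [0.1, 0.3, -0.2, 0.4, 0.0]   # mean text sentiment per user
sus = [68, 75, 55, 80, 62]               # SUS score per user
print(round(pearson_r(sentiment, sus), 2))
```

In practice the r value would also be tested for significance (e.g. via a t-test on r with n-2 degrees of freedom), which is how "weak but significant" correlations like those above are established.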


10.2196/18301 ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. e18301 ◽  
Author(s):  
Alaa Abd-Alrazaq ◽  
Zeineb Safi ◽  
Mohannad Alajlani ◽  
Jim Warren ◽  
Mowafa Househ ◽  
...  

Background Dialog agents (chatbots) have a long history of application in health care, where they have been used for tasks such as supporting patient self-management and providing counseling. Their use is expected to grow with increasing demands on health systems and improving artificial intelligence (AI) capability. Approaches to the evaluation of health care chatbots, however, appear to be diverse and haphazard, resulting in a potential barrier to the advancement of the field. Objective This study aims to identify the technical (nonclinical) metrics used by previous studies to evaluate health care chatbots. Methods Studies were identified by searching 7 bibliographic databases (eg, MEDLINE and PsycINFO) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. The studies were independently selected by two reviewers who then extracted data from the included studies. Extracted data were synthesized narratively by grouping the identified metrics into categories based on the aspect of chatbots that the metrics evaluated. Results Of the 1498 citations retrieved, 65 studies were included in this review. Chatbots were evaluated using 27 technical metrics, which were related to chatbots as a whole (eg, usability, classifier performance, speed), response generation (eg, comprehensibility, realism, repetitiveness), response understanding (eg, chatbot understanding as assessed by users, word error rate, concept error rate), and esthetics (eg, appearance of the virtual agent, background color, and content). Conclusions The technical metrics of health chatbot studies were diverse, with survey designs and global usability metrics dominating. The lack of standardization and paucity of objective measures make it difficult to compare the performance of health chatbots and could inhibit advancement of the field. We suggest that researchers more frequently include metrics computed from conversation logs. 
In addition, we recommend the development of a framework of technical metrics with recommendations for specific circumstances for their inclusion in chatbot studies.
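Word error rate, one of the response-understanding metrics listed above, is typically computed as the word-level edit distance between a reference and a hypothesis transcript divided by the reference length. A small sketch with an invented example:

```python
# Word error rate (WER): Levenshtein distance over word sequences,
# normalised by the reference length. Example strings are invented.

def wer(reference, hypothesis):
    """Return (substitutions + insertions + deletions) / len(reference)."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level edit distance.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(r)][len(h)] / len(r)

print(wer("book an appointment today", "book appointment for today"))  # 0.5
```

Concept error rate is defined analogously but over extracted semantic concepts rather than surface words.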



