Quantifying usability: an evaluation of a diabetes mHealth system on effectiveness, efficiency, and satisfaction metrics with associated user characteristics

2015 ◽  
Vol 23 (1) ◽  
pp. 5-11 ◽  
Author(s):  
Mattias Georgsson ◽  
Nancy Staggers

Abstract Objective Mobile health (mHealth) systems are becoming more common for chronic disease management, but usability studies are still needed on patients’ perspectives and mHealth interaction performance. This deficiency is addressed by our quantitative usability study of an mHealth diabetes system evaluating patients’ task performance, satisfaction, and the relationship of these measures to user characteristics. Materials and Methods We used metrics in the International Organization for Standardization (ISO) 9241-11 standard. After standardized training, 10 patients performed representative tasks and were assessed on individual task success, errors, efficiency (time on task), satisfaction (System Usability Scale [SUS]), and user characteristics. Results Tasks of exporting and correcting values proved the most difficult, had the most errors, the lowest task success rates, and consumed the longest times on task. The average SUS satisfaction score was 80.5, indicating good but not excellent system usability. Data trends showed males were more successful in task completion, and younger participants had higher performance scores. Educational level did not influence performance, but a more recent diabetes diagnosis did. Patients with more experience in information technology (IT) also had higher performance rates. Discussion Difficult task performance indicated areas for redesign. Our methods can assist others in identifying areas in need of improvement. Data about user background and IT skills also showed how user characteristics influence performance and can provide future considerations for targeted mHealth designs. Conclusion Using the ISO 9241-11 usability standard, the SUS instrument for satisfaction, and measures of user characteristics provided objective measures of patients’ experienced usability. These could serve as an exemplar for standardized, quantitative methods for usability studies on mHealth systems.
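The SUS satisfaction score reported in several of these abstracts comes from Brooke's standard ten-item scoring procedure, not from any study-specific formula; a minimal sketch of that standard scoring:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (item score = response - 1);
    even-numbered items are negatively worded (item score = 5 - response).
    The sum of item scores is scaled to 0-100 by multiplying by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, a respondent who strongly agrees with every positive item and strongly disagrees with every negative item scores 100; all-neutral responses (3 on every item) score 50.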

2017 ◽  
Author(s):  
Satheesh Kumar Chandran ◽  
James Forbes ◽  
Carrie Bittick ◽  
Kathleen Allanson ◽  
Santosh Erupaka ◽  
...  

2018 ◽  
Vol 25 (8) ◽  
pp. 1064-1068 ◽  
Author(s):  
Adam Wright ◽  
Pamela M Neri ◽  
Skye Aaron ◽  
Thu-Trang T Hickman ◽  
Francine L Maloney ◽  
...  

Abstract Background Microbiology laboratory results are complex and cumbersome to review. We sought to develop a new review tool to improve the ease and accuracy of microbiology results review. Methods We observed and informally interviewed clinicians to determine areas in which existing microbiology review tools were lacking. We developed a new tool that reorganizes microbiology results by time and organism. We conducted a scenario-based usability evaluation to compare the new tool to existing legacy tools, using a balanced block design. Results The average time on task decreased from 45.3 min for the legacy tools to 27.1 min for the new tool (P < .0001). Total errors decreased from 41 with the legacy tools to 19 with the new tool (P = .0068). The average Single Ease Question score was 5.65 (out of 7) for the new tool, compared to 3.78 for the legacy tools (P < .0001). The new tool scored 88 (“Excellent”) on the System Usability Scale. Conclusions The new tool substantially improved efficiency, accuracy, and usability. It was subsequently integrated into the electronic health record and rolled out system-wide. This project provides an example of how clinical and informatics teams can innovate alongside a commercial electronic health record (EHR).


2015 ◽  
Author(s):  
Martina A. Clarke

Background: EHRs with poor usability present steep learning curves for new resident physicians, who are already overwhelmed learning a new specialty. This may lead to error-prone use of the EHR in medical practice by new resident physicians. The goal of this study is to identify usability-related and performance-related differences that arise between primary care physicians of different expertise levels when using an EHR. Methods: We compared usability measures after three rounds of usability tests. Lab-based usability tests using video analyses were conducted to analyze learnability gaps between novice and expert physicians. Physicians completed nineteen tasks based on an artificial but typical patient visit note. We used a mixed methods approach including quantitative performance measures (percent task success, time on task, mouse activities), a survey instrument (the System Usability Scale [SUS]), qualitative narrative feedback during the debriefing session, subtask analysis, and a debriefing session with physicians. Results: Geometric mean values of percent task success rates, time on task, and mouse activities were compared between the two physician groups across three rounds. Our findings show mixed changes in performance measures, with expert physicians more proficient than novice physicians on some measures. Thirty-one common and four unique usability issues were identified between the two physician groups across three rounds. Five themes emerged during analysis: six issues related to inconsistencies, nine to the user interface, six to structured data, seven to ambiguous terminology, and six to workarounds. Discussion and Conclusion: This study found differences in novice and expert physicians' performance, demonstrating that physicians' proficiency did increase with EHR experience.
Future directions include identifying usability issues physicians face when using the EHR through a more granular task analysis, to recognize subtle usability issues that would otherwise go unnoticed. Associations between performance measures and usability issues will also be explored. Training physicians to use the EHR may reduce the difficulty of completing tasks in it, and improved training may reduce the workarounds that can lead to workflow problems. These results highlight the areas of difficulty that resident physicians with different experience levels currently face, which may help improve EHR training programs and increase physicians' performance when using an EHR.


Author(s):  
Philip Kortum ◽  
Claudia Ziegler Acemyan

Objective metrics, such as effectiveness and efficiency, are often considered the best website usability measurements. User performance metrics that can be collected remotely, such as mouse clicks and the distance the mouse has traveled, show particular promise. However, no studies have demonstrated a direct relationship between subjective usability measures and these two objective user performance metrics. In this paper, thirty participants completed five tasks of varying difficulty on a commercial website. Mouse clicks, the distance the mouse moved, success rates, and System Usability Scale (SUS) scores were collected for each task. Results showed that participants made fewer mouse clicks on tasks at which they were successful than on tasks they failed, and moved the mouse over twice as far on failed tasks as on successful ones. The correlations between SUS scores and the two mouse-based measurements were remarkably strong.


Author(s):  
Alexis L Beatty ◽  
John C Fortney ◽  
P M Ho ◽  
George G Sayre ◽  
Mary A Whooley

Introduction: Cardiac rehabilitation improves outcomes for patients with ischemic heart disease or heart failure but is underused. New strategies to improve delivery of cardiac rehabilitation are needed. We developed a mobile application for technology-facilitated home cardiac rehabilitation and sought to determine its usability. Methods: We recruited patients eligible for cardiac rehabilitation who had access to a smartphone, tablet, or computer with internet access to participate in usability testing of the mobile application. The mobile application includes physical activity goal setting, logs for physical activity and health measures, health education, reminders, and feedback (Figure). Participants were introduced to the mobile application and then observed while completing pre-specified tasks with it. Participants completed the System Usability Scale (0-100), rated their likelihood of using the mobile application (0-100), and participated in a semi-structured interview. Based on participant feedback, we made iterative revisions to the mobile application. Results: We conducted usability testing with 13 participants. The first version of the mobile application was used by the first 5 participants, and revised versions were used by the final 8 participants. From the first version to the revised versions, the task completion success rate improved from 44% to 78% (p=0.05), the System Usability Scale score improved from 54 to 76 (p=0.04), and rated likelihood of using the mobile application remained high at 76% and 87% (p=0.30). Interview responses revealed a need for introductory training (“Initially, training with a technical person, instead of me relying on myself”) and on-demand help (“If I had problems I’d try to find out how to fix it on this or call you”). Additionally, many participants were interested in sharing data with providers (“I can show my doctor what I’ve been working on”).
Conclusions: With participant feedback and iterative revisions, we significantly improved the usability of a mobile application for cardiac rehabilitation. Patient expectations for using a mobile application for cardiac rehabilitation include introductory training, on-demand help, and sharing data with providers. Iterative mixed-method evaluation may be useful for improving the usability of health technology.


2020 ◽  
Vol 12 (1) ◽  
pp. 1-13
Author(s):  
Agustinus Rendi Walewowan ◽  
Willy Sudiarto Raharjo ◽  
Gloria Virginia

The head of a study program (kaprodi) has many duties and responsibilities in academic operations. One of these duties is reporting on all activities carried out in the program, so a system is needed to monitor day-to-day operations and produce reports. A dashboard is an information panel used within an organization to evaluate a problem, making it easier for someone to make decisions. This study aims to design a scholarship-and-loan dashboard using the prototyping method. Prototyping is a software development method that produces a working model of the system, serving as an early version of it [1]. The average task-success result across both iterations was 96.66, so the system can be considered reasonably effective at presenting scholarship and loan information, and easy to learn. The interface design was evaluated with the System Usability Scale (SUS), administered to 5 respondents in each iteration, yielding SUS scores of 75.2 in iteration I and 76.6 in iteration II, for an average of 75.9 across both iterations. The system interface is therefore rated good, with a grade scale of C, an adjective rating of Good, and an acceptability range of acceptable.
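The grade scale, adjective rating, and acceptability range cited here follow the commonly used Bangor et al. interpretation of SUS scores. A sketch with approximate cut-offs (exact thresholds vary slightly across sources, so treat these as an assumption):

```python
def sus_interpretation(score):
    """Map a 0-100 SUS score to (grade, adjective rating, acceptability range),
    using approximate Bangor-style thresholds."""
    if score >= 90:
        grade = "A"
    elif score >= 80:
        grade = "B"
    elif score >= 70:
        grade = "C"
    elif score >= 60:
        grade = "D"
    else:
        grade = "F"
    if score >= 85:
        adjective = "Excellent"
    elif score >= 71:
        adjective = "Good"
    elif score >= 51:
        adjective = "OK"
    else:
        adjective = "Poor"
    if score >= 70:
        acceptability = "acceptable"
    elif score >= 50:
        acceptability = "marginal"
    else:
        acceptability = "not acceptable"
    return grade, adjective, acceptability
```

Under these cut-offs, the average score of 75.9 reported above maps to grade C, adjective Good, acceptable, matching the study's interpretation.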


2020 ◽  
Vol 6 (1) ◽  
pp. 34-43
Author(s):  
Fitri Purwaningtias ◽  
Usman Ependi

Websites are now used by many kinds of institutions, including educational institutions such as the Qodratullah Islamic Boarding School. The Qodratullah Islamic Boarding School website is currently the backbone of disseminating information about the school to students' guardians, alumni, prospective students, and the wider community. Given the website's importance to the school, it is necessary to evaluate whether the information provided and the existing website are genuinely useful to users. For this reason, this study evaluated the website from the users' perspective. The evaluation was carried out with the System Usability Scale, using its ten instrument items as evaluation statements. The results show that the Qodratullah Islamic Boarding School website received a final score of 88, which corresponds to an adjective rating of Excellent, a grade scale of B, and an acceptability level of acceptable.


BMJ Open ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. e050448
Author(s):  
Romaric Marcilly ◽  
Wu Yi Zheng ◽  
Regis Beuscart ◽  
Melissa T Baysari

Introduction Research has shown that improvements to the usability of medication alert systems are needed. For designers and decision-makers to assess the usability of their alert systems, two paper-based tools are currently available: the instrument for evaluating human-factors principles in medication-related decision support alerts (I-MeDeSA) and the tool for evaluating medication alerting systems (TEMAS). This study aims to compare the validity, usability and usefulness of both tools to identify their strengths and limitations and assist designers and decision-makers in making an informed decision about which tool is most suitable for assessing their current or prospective system. Methods and analysis First, TEMAS and I-MeDeSA will be translated into French. This translation will be validated by three experts in human factors. Then, in 12 French hospitals with a medication alert system in place, staff with expertise in the system will evaluate their alert system using the two tools successively. After the use of each tool, participants will be asked to fill in the System Usability Scale (SUS) and complete a survey on the understandability and perceived usefulness of each tool. Following the completion of both assessments, participants will be asked to nominate their preferred tool and relay their opinions on the tools. The design philosophy of TEMAS and I-MeDeSA differs on the calculation of a score, affecting how the two tools can be compared. Convergent validity will be evaluated by matching the items of the two tools with respect to the usability dimensions they assess. SUS scores and survey answers will be statistically compared between I-MeDeSA and TEMAS to identify differences. Free-text survey responses will be analysed using an inductive approach. Ethics and dissemination Ethical approval is not required in France for a study of this nature. The results will be published in a peer-reviewed journal.


Author(s):  
Dahlia Alharoon ◽  
Douglas J. Gillan

Aesthetics and usability both play critical roles in product design. But how might measurement of these two conceptually different features of products interfere with one another? The current study examines the effect of differences in aesthetics on perceived usability. Participants completed three tasks on a simulated website with a low-usability interface. One group of participants used an interface with high aesthetics, whereas a second group interacted with an interface with poor aesthetics. Both groups rated the usability and aesthetics of the interface after completing the tasks. The aesthetics manipulation was effective in that the high-aesthetics group provided higher ratings on two aesthetics scales than did the low-aesthetics group; however, differences in aesthetics had no significant effect on usability as measured by the System Usability Scale (SUS). These findings support the idea that users make independent judgments of usability and aesthetics.


Author(s):  
Joohwan Kim ◽  
Pyarelal Knowles ◽  
Josef Spjut ◽  
Ben Boudaoud ◽  
Morgan McGuire

End-to-end latency in remote-rendering systems can reduce user task performance. This notably includes aiming tasks on game streaming services, which are presently below the standards of competitive first-person desktop gaming. We evaluate the latency-induced penalty on task completion time in a controlled environment and show that it can be significantly mitigated by adopting and modifying image and simulation-warping techniques from virtual reality, eliminating up to 80% of the penalty from 80 ms of added latency. This has potential to enable remote rendering for esports and increase the effectiveness of remote-rendered content creation and robotic teleoperation. We provide full experimental methodology, analysis, implementation details, and source code.

