HEURISTIC EVALUATION FOR MOBILE APPLICATION (CASE STUDY: DEPO AUTO 2000 TANJUNG API-API PALEMBANG APPLICATION)

2017 ◽  
Vol 8 (2) ◽  
pp. 563 ◽  
Author(s):  
Usman Ependi

Heuristic evaluation is a form of software usability testing in which the software is assessed by evaluators. In a heuristic evaluation, the assessment instrument consists of ten (10) statements with five answer choices on a severity-rating scale. In this study, a heuristic evaluation of the Depo Auto 2000 Tanjung Api-Api Palembang application was carried out by 4 evaluators. The results were grouped per instrument item: visibility of system status scored 0.75; match between system and the real world, 0.25; user control and freedom, 0.25; consistency and standards, 0.75; error prevention, 1; recognition rather than recall, 1.25; flexibility and efficiency of use, 0.25; aesthetic and minimalist design, 0.25; help users recognize, diagnose, and recover from errors, 1; and help and documentation, 0. These results show that the evaluators generally gave the Depo Auto 2000 Tanjung Api-Api Palembang application severity ratings of 0 and 1, indicating that the application has no real usability problems and only cosmetic ones, so it can be declared fit for distribution to end users.
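As a rough illustration of the scoring described above, the sketch below averages per-heuristic severity ratings across evaluators. The 0-4 severity scale follows Nielsen's convention, and the four evaluators' ratings are hypothetical values chosen so the averages reproduce the figures reported in the abstract; they are not the study's actual data.

```python
# Minimal sketch (assumption): averaging severity ratings (Nielsen's 0-4 scale)
# from several evaluators per heuristic. Evaluator names and ratings are
# illustrative, not the study's actual data.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def mean_severity(ratings_by_evaluator: dict[str, list[int]]) -> dict[str, float]:
    """Average each heuristic's severity rating across all evaluators."""
    n = len(ratings_by_evaluator)
    totals = [0] * len(NIELSEN_HEURISTICS)
    for ratings in ratings_by_evaluator.values():
        for i, rating in enumerate(ratings):
            totals[i] += rating
    return {h: totals[i] / n for i, h in enumerate(NIELSEN_HEURISTICS)}

if __name__ == "__main__":
    # Four hypothetical evaluators, one rating (0-4) per heuristic each.
    example = {
        "E1": [1, 0, 0, 1, 1, 2, 0, 0, 1, 0],
        "E2": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
        "E3": [0, 0, 0, 0, 1, 1, 0, 0, 1, 0],
        "E4": [1, 0, 0, 1, 1, 1, 0, 0, 1, 0],
    }
    for heuristic, score in mean_severity(example).items():
        print(f"{heuristic}: {score:.2f}")
```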

2021 ◽  
Vol 4 (1) ◽  
pp. 45-52
Author(s):  
Febria Sri Handayani

An end-user questionnaire is one instrument that can be used as a survey tool for application usability testing. The usability attributes in the questionnaire can be identified and designed based on a software quality standard or quality model, one of which is Nielsen's usability heuristics. The questionnaire design as a survey instrument for application usability testing produced 22 usability attributes and questions, formulated by matching the 10 characteristics of Nielsen's usability heuristics with the characteristics and basic components of the application whose usability is to be tested. The 10 characteristics of Nielsen's usability heuristics are (1) Visibility of System Status, (2) Match Between the System and the Real World, (3) User Control and Freedom, (4) Consistency and Standards, (5) Error Prevention, (6) Recognition vs. Recall in User Interfaces, (7) Flexibility and Efficiency of Use, (8) Aesthetic and Minimalist Design, (9) Help Users Recognize, Diagnose, and Recover from Errors, and (10) Help and Documentation. This instrument design can be developed further and applied as a guideline for designing usability-testing measurement tools for both web-based and Android applications.
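A minimal sketch of how such a questionnaire could be represented in code, assuming one or more 5-point Likert items per Nielsen heuristic. The item texts are hypothetical placeholders, since the abstract does not list the 22 actual items.

```python
# Minimal sketch (assumption): a usability questionnaire whose items are grouped
# under Nielsen's ten heuristics. The item texts are hypothetical placeholders.

from dataclasses import dataclass, field

LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

@dataclass
class QuestionnaireItem:
    heuristic: str  # one of Nielsen's ten heuristics
    text: str       # the statement shown to the respondent
    scale: list = field(default_factory=lambda: LIKERT_5)

items = [
    QuestionnaireItem("Visibility of System Status",
                      "The application always shows what it is currently doing."),
    QuestionnaireItem("Match Between the System and the Real World",
                      "The terms used in the application are familiar to me."),
    QuestionnaireItem("Help and Documentation",
                      "The help pages answer my questions about using the application."),
    # ... remaining items, up to 22 in total, one or more per heuristic
]

# Group items by heuristic so each heuristic can be scored and reported separately.
by_heuristic: dict[str, list[QuestionnaireItem]] = {}
for item in items:
    by_heuristic.setdefault(item.heuristic, []).append(item)

for heuristic, group in by_heuristic.items():
    print(f"{heuristic}: {len(group)} item(s)")
```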


Author(s):  
France Jackson ◽  
Lara Cheng

Introduction

Heuristic Evaluation is a usability method in which usability experts review a user interface and offer feedback based on a list of heuristics or guidelines (Nielsen and Molich, 1990). It allows designers to get feedback early and quickly in the design process, before a full usability test is done, and it is often used in conjunction with usability testing. Unlike many usability evaluation methods, Heuristic Evaluations are performed by usability experts rather than target users, which is one reason it makes a great challenge activity for the UX Day Challenge session. There are several sets of guidelines, and they have been used to evaluate a wide range of interfaces, from gaming (Pinelle, Wong & Stach, 2008) and virtual reality (Sutcliffe & Gault, 2004) to online shopping (Chen & Macredie, 2005). Some of the most commonly used heuristic guidelines were created by Nielsen (Nielsen and Molich, 1990; Nielsen, 1994), Norman (Norman, 2013), Tognazzini (Tognazzini, 1998), and Shneiderman (Shneiderman, Plaisant, Cohen and Elmqvist, 2016). Choosing the best set of guidelines and an appropriate number of usability professionals is important: Nielsen and Molich's research found that individual evaluators find only 20-51% of the usability problems when evaluating alone, but when the feedback of three to five evaluators is aggregated, more usability problems can be uncovered (Nielsen and Molich, 1990). This method is advantageous because designers can get quick feedback for iteration before a full round of usability testing is performed. The goal of this session is to introduce the method to some participants and give others a refresher on how to apply it in the real world.

The Challenge

For several years, UX Day has offered an alternative session, and the most intriguing sessions have been interactive and offered hands-on training. For this UX Day Challenge session, teams of at most five participants will perform a Heuristic Evaluation of a sponsor's website or product. During the session, participants will be introduced to Heuristic Evaluations: how to perform one, who should perform one, and when it is appropriate to perform one, along with the pros and cons of using the method. Following the introduction, teams will use the updated set of Nielsen heuristics (Nielsen, 1994) for the evaluation exercise. Although there are several sets of heuristics, Nielsen's is one of the best known and most widely accepted. The following updated Nielsen heuristics will be used:

• Visibility of system status
• Match between system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation

Following the evaluation period, teams will report their findings and recommendations to the judges and audience. The judges will then deliberate and announce the winner.

Conclusion

This alternative session is an opportunity to expose participants to a methodology they may not use often and to offer a hands-on learning experience for students who have not formally used it in the real world. Most importantly, this session continues the goal of bringing new, interesting, and disruptive sessions to the traditional "conference" format and attracting UX practitioners.
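A short sketch of why aggregating three to five evaluators helps, assuming each evaluator independently finds a given problem with probability p (the abstract cites 20-51% per evaluator). The expected coverage 1 - (1 - p)^k has the same form as Nielsen and Landauer's problem-discovery curve.

```python
# Minimal sketch: why aggregating evaluators helps. Assuming each evaluator
# independently finds a usability problem with probability p, the expected
# fraction found by k evaluators is 1 - (1 - p)**k.

def fraction_found(p: float, k: int) -> float:
    """Expected fraction of problems uncovered by k independent evaluators."""
    return 1 - (1 - p) ** k

for p in (0.20, 0.35, 0.51):
    row = ", ".join(f"k={k}: {fraction_found(p, k):.0%}" for k in (1, 3, 5))
    print(f"p={p:.2f} -> {row}")
```

With a mid-range per-evaluator rate of 0.35, five evaluators are expected to uncover roughly 88% of the problems, which is consistent with the recommendation to aggregate three to five evaluators.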


Author(s):  
Rafika Akhsani ◽  
Adimas Ketut Nalendra ◽  
M Mujiono ◽  
Ismanto Ismanto

Scouting (Pramuka) is one of the extracurricular activities at schools. Phasbara is an Android-based application built to help scout members study the SAKA Bayangkara material. To find out whether the application meets user needs, the application itself must be evaluated. This study aims to evaluate the usability of the Phasbara application and propose improvements to make it easier to use. The study uses the Heuristic Evaluation method. Respondents were SAKA Bayangkara members in Blitar, and data were collected through questionnaires. The results show that the heuristic evaluation rules used in this study, namely Visibility of system status, Match between system and the real world, User control and freedom, Consistency and standards, Recognition rather than recall, and Flexibility and efficiency of use, obtained agreement percentages at the "agree" rating level, which means the interface design of the Phasbara application is already good. However, the Help and documentation rule only reached the "agree" level as well, which means the application's help features still need further refinement.
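A minimal sketch of one common way to compute an agreement percentage per heuristic rule from 5-point Likert responses (total score over the maximum possible score). The response data and the interpretation band are illustrative assumptions, not the study's actual questionnaire results.

```python
# Minimal sketch (assumption): agreement percentage per heuristic rule from
# 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree),
# computed as total score over maximum possible score. Responses are hypothetical.

def agreement_percentage(responses: list[int]) -> float:
    """Percentage of the maximum possible Likert score (scale 1-5)."""
    return 100.0 * sum(responses) / (5 * len(responses))

example = {
    "Visibility of system status": [4, 4, 5, 4, 3],
    "Help and documentation": [3, 4, 3, 4, 3],
}

for rule, responses in example.items():
    pct = agreement_percentage(responses)
    # A commonly used interpretation band reads roughly 61-80% as "agree".
    print(f"{rule}: {pct:.0f}%")
```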


1986 ◽  
Vol 17 (4) ◽  
pp. 212-215 ◽  
Author(s):  
C. Mills ◽  
K. F. Bury ◽  
T. Roberts ◽  
Bruce Tognazzini ◽  
A. Wichansky ◽  
...  

1987 ◽  
Vol 18 (3) ◽  
pp. 67-70 ◽  
Author(s):  
Carol Bergfeld Mills

Kursor ◽  
2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Satrio Agung Wicaksono ◽  
Retno Indah Rokhmawati ◽  
Mochamad Chandra Saputra ◽  
Raden Arief Setiawan

An Academic Information System is essential for carrying out academic activities. Universitas Brawijaya uses an Academic Information System named SIAKAD-UB, an information system that handles all kinds of student details and academic-related reports. SIAKAD-UB has encountered many problems as its data has grown. This study aims to discover the factors behind the interface usability problems found in the existing SIAKAD-UB using the Heuristic Evaluation and Think Aloud methods. The study involved 3 experts and 3 operators so that the evaluation received input from both experts and users. The experiment found 3 heuristic problems with a score of 0, meaning no usability problem; 7 with a score of 1, meaning medium-priority refinement; 7 with a score of 2, meaning low-priority refinement; and 7 with a score of 3, meaning high-priority refinement. Heuristic evaluation and think aloud identified 7 aspects needing refinement: Visibility of system status, Match between system and the real world, User control and freedom, Consistency and standards, Recognition rather than recall, Flexibility and efficiency of use, and Help and documentation.
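A small sketch of tallying findings by severity score and mapping scores to the refinement priorities stated in the abstract; the individual findings listed here are hypothetical.

```python
# Minimal sketch (assumption): tallying evaluation findings by severity score and
# mapping scores to the refinement priorities reported in the abstract.
# The individual findings below are hypothetical.

from collections import Counter

# Priority labels as stated in the abstract (0 = no usability problem).
PRIORITY = {0: "no usability problem", 1: "medium priority",
            2: "low priority", 3: "high priority"}

findings = [
    ("Visibility of system status", 2),
    ("Help and documentation", 3),
    ("Consistency and standards", 1),
    # ... one entry per problem reported by the experts and operators
]

tally = Counter(score for _, score in findings)
for score in sorted(PRIORITY):
    print(f"score {score} ({PRIORITY[score]}): {tally.get(score, 0)} problem(s)")
```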


Author(s):  
Sunny Verma ◽  
Chen Wang ◽  
Liming Zhu ◽  
Wei Liu

Growing awareness of the ethical use of machine learning (ML) models has created a surge in the development of fair models. Existing work in this area assumes the presence of sensitive attributes in the data and hence can build classifiers whose decisions remain agnostic to such attributes. However, in real-world settings, the end user of an ML model is unaware of the training data; besides, building custom models is not always feasible. Moreover, a pre-trained model with high accuracy on a certain dataset cannot be assumed to be fair. Unknown biases in the training data are the true culprit behind unfair models (i.e., disparate performance for groups in the dataset). In this preliminary research, we propose a different lens for building fair models by giving the user tools to discover blind spots and biases in a pre-trained model and to augment it with corrective measures.
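A minimal sketch of one way an end user could probe a black-box, pre-trained model for disparate performance without access to its training data: compare accuracy across groups observable at evaluation time. The model, data, and group attribute below are hypothetical stand-ins, not the tooling proposed in the paper.

```python
# Minimal sketch (assumption): probing a pre-trained classifier for disparate
# performance by comparing accuracy across groups defined by an attribute the
# end user can observe at evaluation time. `predict` is a stand-in for any
# black-box prediction function; the data here is hypothetical.

from collections import defaultdict

def accuracy_by_group(predict, examples):
    """examples: iterable of (features, true_label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        correct[group] += int(predict(features) == label)
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Toy black-box model and toy evaluation data, for illustration only.
    predict = lambda x: int(x["score"] > 0.5)
    data = [
        ({"score": 0.9}, 1, "group_a"), ({"score": 0.2}, 0, "group_a"),
        ({"score": 0.6}, 0, "group_b"), ({"score": 0.4}, 1, "group_b"),
    ]
    per_group = accuracy_by_group(predict, data)
    print(per_group)                                   # e.g. {'group_a': 1.0, 'group_b': 0.0}
    gap = max(per_group.values()) - min(per_group.values())
    print(f"accuracy gap across groups: {gap:.2f}")    # a large gap flags a potential blind spot
```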


Author(s):  
Alfira Febriyanthi, et al.

A webpage portal is a means for companies to introduce their company profile, job vacancies, and products to the public or their users. Heuristic evaluation (Nielsen, 1990) is the method the researchers use to test the usability of a webpage portal application. The heuristic evaluation method uses Nielsen's ten principles, namely visibility of system status, match between the system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, help users recognize, diagnose, and recover from errors, and help and documentation. The results of the analysis show that only 5 of the independent variables have a significance of less than 0.05.
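The abstract's reference to variables with significance below 0.05 suggests a regression-style analysis. The sketch below, under that assumption, fits an ordinary least squares model of an overall usability rating on ten heuristic variables and lists the predictors with p < 0.05; the data is synthetic and purely illustrative, not the study's analysis.

```python
# Minimal sketch (assumption): a regression of an overall usability rating on the
# ten heuristic variables, reporting which coefficients reach p < 0.05.
# The data is synthetic and purely illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
X = rng.integers(1, 6, size=(n, 10)).astype(float)        # ten heuristic scores (1-5)
y = X[:, 0] * 0.6 + X[:, 3] * 0.4 + rng.normal(0, 1, n)   # synthetic overall rating

results = sm.OLS(y, sm.add_constant(X)).fit()
significant = [i for i, p in enumerate(results.pvalues[1:]) if p < 0.05]
print("heuristic variables with p < 0.05:", significant)
```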


1987 ◽  
Vol 19 (1) ◽  
pp. 43-46 ◽  
Author(s):  
Carol Bergfeld Mills
