Methodological Challenges in Eye-Tracking based Usability Testing of 3-Dimensional Software – Presented via Experiences of Usability Tests of Four 3D Applications

Author(s):  
Mária Babicsné-Horváth ◽  
Károly Hercegfi

Eye-tracking-based usability testing and User Experience (UX) research are widespread in the development processes of various types of software; however, specific difficulties arise during usability tests of three-dimensional (3D) software. When analysing screen recordings with gaze plots, heatmaps of fixations, and statistics of Areas of Interest (AOI), methodological problems occur whenever the participant rotates, zooms, or moves the 3D space. The data regarding the menu bar remain largely interpretable; the data regarding the 3D environment, however, are barely interpretable, or not at all. Our research tested four software applications with this problem in mind: the ViveLab and Jack Digital Human Modelling (DHM) tools, and the ArchiCAD and CATIA Computer-Aided Design (CAD) tools. Our goal was twofold. First, with these usability tests, we aimed to identify issues in the software. Second, we tested the utility of a new methodology incorporated into the tests. This paper summarizes the methodological results based on individual experiments with the different applications. One of the main ideas behind the adopted methodology is to instruct participants, during certain subtasks, not to move the 3D space while they perform the given tasks. During the experiments, we used a Tobii eye-tracking device, and after task completion each participant was interviewed. Based on these experiences, the methodology appears to be both useful and applicable, and its visualisation techniques for one or more participants are interpretable.
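To make the AOI analysis concrete, here is a minimal sketch of how screen-space fixations can be classified into Areas of Interest and aggregated into dwell statistics; the data layout and AOI rectangles are hypothetical, not taken from the study. It also illustrates why a fixed screen region such as the menu bar stays interpretable while a freely rotated 3D viewport does not.

```python
# Illustrative sketch of AOI analysis: classify screen-space fixations
# into rectangular Areas of Interest and aggregate dwell statistics.
# Field names and AOI layout are hypothetical, not from the study.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen x coordinate in pixels
    y: float           # screen y coordinate in pixels
    duration_ms: float

# Hypothetical AOIs: the menu bar is a stable screen region, so its
# statistics stay valid even when the participant rotates the 3D
# scene; fixations inside the viewport AOI lose their mapping to
# scene content as soon as the camera moves.
AOIS = {
    "menu_bar": (0, 0, 1920, 60),    # (left, top, right, bottom)
    "viewport": (0, 60, 1920, 1080),
}

def aoi_dwell_stats(fixations):
    """Total dwell time and fixation count per AOI."""
    stats = {name: {"count": 0, "dwell_ms": 0.0} for name in AOIS}
    for f in fixations:
        for name, (l, t, r, b) in AOIS.items():
            if l <= f.x < r and t <= f.y < b:
                stats[name]["count"] += 1
                stats[name]["dwell_ms"] += f.duration_ms
                break
    return stats

print(aoi_dwell_stats([Fixation(500, 30, 180), Fixation(900, 400, 240)]))
```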

2017 ◽  
Vol 5 (4) ◽  
pp. 449-457 ◽  
Author(s):  
Ryo Takahashi ◽  
Hiromasa Suzuki ◽  
Jouh Yeong Chew ◽  
Yutaka Ohtake ◽  
Yukie Nagai ◽  
...  

Abstract Eye tracking has quickly become a commonplace tool for evaluating package and webpage design. In such design processes, static two-dimensional images are shown on a computer screen while the subject's gaze is measured with an eye-tracking device. The collected gaze fixation data are then visualized and analyzed via gaze plots and heat maps. Such evaluations using two-dimensional images are often too limited for analyzing gaze on three-dimensional physical objects such as products, because users look at them not from a single point of view but from various angles. Therefore, in this study we propose methods for collecting gaze fixation data for a three-dimensional model of a given product and for visualizing the corresponding gaze plots and heat maps, also in three dimensions. To achieve this, we used a wearable eye-tracking device, i.e., eye-tracking glasses, and implemented a prototype system to demonstrate its advantages over two-dimensional gaze fixation methods.
Highlights
- Proposes a method for collecting gaze fixation data for a three-dimensional model of a given product.
- Proposes two visualization methods for three-dimensional gaze data: gaze plots and heat maps.
- The proposed system was applied to two practical examples: a hair dryer and a car interior.
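As a rough illustration of how fixations from eye-tracking glasses can be mapped onto a 3D model, the following sketch casts gaze rays against a triangle mesh and accumulates per-triangle hit counts for a heat map. It assumes the gaze origin and direction are already registered into model coordinates (the paper's registration step is not reproduced), and all names are illustrative.

```python
# Sketch: map gaze rays onto a triangle mesh and count hits per
# triangle as the raw data for a 3D heat map. Gaze origin/direction
# are assumed to be in model coordinates already.

import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None          # ray parallel to triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def accumulate_heatmap(triangles, gaze_rays):
    """Count gaze-ray hits per triangle (nearest hit wins per ray)."""
    counts = np.zeros(len(triangles), dtype=int)
    for origin, direction in gaze_rays:
        best, best_t = None, np.inf
        for i, (v0, v1, v2) in enumerate(triangles):
            t = ray_triangle_hit(origin, direction, v0, v1, v2)
            if t is not None and t < best_t:
                best, best_t = i, t
        if best is not None:
            counts[best] += 1
    return counts

tri = [(np.array([-1.0, -1, 3]), np.array([1.0, -1, 3]), np.array([0.0, 1, 3]))]
rays = [(np.zeros(3), np.array([0.0, 0.0, 1.0]))]
print(accumulate_heatmap(tri, rays))  # -> [1]
```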


Author(s):  
Eliab Z. Opiyo ◽  
Imre Horváth

Standard two-dimensional (2D) computer displays are traditionally used in engineering design to display the three-dimensional (3D) images generated by computer-aided design and engineering (CAD/CAE) systems. These displays serve primarily as passive visualization tools: interaction with the displayed images is only possible through archaic 2D peripheral input devices such as keyboards and mice, via Windows, Icons, Menus and Pointing (WIMP) style graphical user interfaces. It is widely acknowledged in the design community that such visualization and interaction methods do not match the way designers think and work. Emerging volumetric 3D displays are widely seen as the obvious replacement for flat displays in the future. This paper explores the possibility of stepping beyond present 2D desktop monitors and investigates the practicalities of using emerging volumetric 3D displays, coupled with non-encumbering natural interaction means such as gestures, hand motions, and haptics, for designing in 3D space. We first explore the need for spatial visualization and interaction in design and outline how volumetric 3D imaging devices could be used in design. We then review existing volumetric 3D display configurations and investigate how they would assist designing in 3D space. Next, we present a study conducted to seek designers' views on which volumetric 3D display configuration would most likely match their needs. We finally highlight the consequences and benefits of using volumetric 3D displays instead of canonical flat-screen displays and 2D input devices in design. The designers who participated as subjects in this preliminary field study felt that dome-shaped and aerial volumetric 3D imaging devices, which allow for both visualization of and interaction with virtual objects, are the imaging options that would not only better suit their visualization and interaction needs but would also satisfy most of the usability requirements. However, apart from closing the remaining basic technological gaps, the challenge also lies in combining the prevailing, proven CAD/CAE technologies and the emerging interaction technologies with the emerging volumetric 3D imaging technologies. Turning to volumetric 3D imaging devices also raises the challenge of establishing a formal methodology for designing in 3D space with these devices.


2017 ◽  
Author(s):  
Martin Leroux ◽  
Sofiane Achiche ◽  
Maxime Raison

Over the last decade, eye-tracking systems have been developed and used in many fields, mostly to identify targets on a screen, i.e., a plane. For novel applications such as controlling robotic devices with the user's vision, there is great interest in developing eye-tracking-based methods to identify target points in free three-dimensional environments. The objective of this paper is to characterise the accuracy of a recently designed combination of eye tracking and computer vision that overcomes many limitations of eye tracking in 3D space. We propose a characterization protocol to assess how the system's accuracy behaves over the workspace of a robotic manipulator assistant. Applying this protocol to 33 subjects, we estimated the behavior of the system's error relative to the target position on a cylindrical workspace and to the acquisition time. Over our workspace, targets are located on average at 0.84 m, and our method shows an accuracy 12.65 times better than the direct calculation of the 3D point of gaze. With the current accuracy, many potential applications become possible, such as visually controlled robotic assistants in the field of rehabilitation and adaptation engineering.
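For context, the baseline "3D point of gaze" that the authors compare against can be computed by triangulating the two eyes' gaze rays. The sketch below shows this least-squares triangulation under simplified assumptions; it is not the paper's computer-vision-aided method, and the geometry values are illustrative.

```python
# Sketch of the baseline 3D point of gaze: the point minimizing the
# summed squared distance to both eyes' gaze rays. Illustrative only.

import numpy as np

def point_of_gaze(o_left, d_left, o_right, d_right):
    """Least-squares intersection of two (generally skew) gaze rays.

    o_*: ray origins (eye positions), d_*: gaze directions.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in ((o_left, d_left), (o_right, d_right)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Example: eyes 6.5 cm apart, both verging on a target ~0.84 m ahead,
# matching the average target distance quoted above.
target = np.array([0.0, 0.0, 0.84])
oL, oR = np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 0.0])
print(point_of_gaze(oL, target - oL, oR, target - oR))  # ~[0, 0, 0.84]
```

In practice, small angular noise in the gaze directions makes this depth estimate degrade quickly with distance, which is why an accuracy gain of 12.65x over it is significant.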


2019 ◽  
Vol 9 (15) ◽  
pp. 3078 ◽  
Author(s):  
Hyocheol Ro ◽  
Jung-Hyun Byun ◽  
Yoon Jung Park ◽  
Nam Kyu Lee ◽  
Tack-Don Han

In this paper, we propose AR Pointer, a new augmented reality (AR) interface that allows users to manipulate three-dimensional (3D) virtual objects in an AR environment. AR Pointer uses the built-in 6-degrees-of-freedom (DoF) inertial measurement unit (IMU) sensor of an off-the-shelf mobile device to cast a virtual ray that is used to accurately select objects. It also employs simple touch gestures, commonly used on smartphones, for 3D object manipulation, so users can easily manipulate 3D virtual objects with AR Pointer without a long training period. To demonstrate the usefulness of AR Pointer, we introduce two use cases, constructing an AR furniture layout and AR education. We then conducted two experiments, performance tests and usability tests, to demonstrate the merits of the interaction methods designed around AR Pointer. We found that AR Pointer is more efficient than other interfaces, achieving a 39.4% faster task completion time in object manipulation. In addition, participants rated AR Pointer an average of 8.61 points (13.4%) better in the usability test conducted with the System Usability Scale (SUS) questionnaire and 8.51 points (15.1%) better in the fatigue test conducted with the NASA Task Load Index (NASA-TLX) questionnaire. Previous AR applications have been implemented in a passive AR environment where users simply view AR objects that are prepared in advance. If AR Pointer is used for AR object manipulation, however, it is possible to provide an immersive AR environment for users who wish to actively interact with the AR objects.
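The core of this interaction is an orientation-driven ray cast. The following sketch shows one plausible way to implement it, assuming the IMU yields a device-orientation quaternion and objects are approximated by bounding spheres; the names and conventions are illustrative, not the authors' implementation.

```python
# Sketch: turn a device orientation quaternion into a pointing ray and
# select the nearest object the ray hits (bounding-sphere test).
# Hypothetical scene data; not the AR Pointer authors' code.

import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2 * np.cross(u, np.cross(u, v) + w * v)

def pick_object(device_pos, device_quat, objects):
    """Cast a ray along the device's forward axis; return nearest hit.

    objects: list of (name, center, radius) bounding spheres.
    """
    ray = quat_rotate(device_quat, np.array([0.0, 0.0, -1.0]))  # forward axis
    best, best_t = None, np.inf
    for name, center, radius in objects:
        oc = device_pos - center
        b = oc.dot(ray)
        c = oc.dot(oc) - radius ** 2
        disc = b * b - c              # ray-sphere discriminant
        if disc < 0:
            continue                  # ray misses this sphere
        t = -b - np.sqrt(disc)        # nearest intersection distance
        if 0 < t < best_t:
            best, best_t = name, t
    return best

# Usage: identity orientation points straight ahead at a virtual chair.
chairs = [("chair", np.array([0.0, 0.0, -2.0]), 0.5)]
print(pick_object(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), chairs))
```

Once an object is selected this way, the smartphone touch gestures mentioned above (drag, pinch, rotate) can be bound to translation, scaling, and rotation of the hit object.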


Author(s):  
Seok Lee ◽  
Juyong Park ◽  
Dongkyung Nam

In this article, the authors present an image-processing method to reduce three-dimensional (3D) crosstalk for eye-tracking-based 3D displays. Specifically, they considered 3D pixel crosstalk and offset crosstalk and applied different approaches based on their characteristics. For 3D pixel crosstalk, which depends on the viewer's relative location, they proposed an output-pixel-value weighting scheme based on the viewer's eye position; for offset crosstalk, they subtracted the luminance of the crosstalk components according to the display crosstalk level measured in advance. Through simulations and experiments with 3D display prototypes, the authors evaluated the effectiveness of the proposed method.
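Below is a minimal numerical sketch of the two corrections, assuming a normalized luminance scale: a view-dependent weighting for 3D pixel crosstalk and subtraction of a pre-measured offset-crosstalk level. The weighting model and the crosstalk constant are placeholders, not the authors' measured display characteristics.

```python
# Sketch of the two-part crosstalk compensation described above.
# The leakage model and offset level are hypothetical placeholders.

import numpy as np

OFFSET_CROSSTALK = 0.03   # hypothetical pre-measured offset level (0..1)

def leakage_weight(eye_x, pixel_view_x):
    """Placeholder 3D-pixel-crosstalk model: attenuation grows with the
    offset between the tracked eye position and the pixel's intended
    viewing position (both in normalized viewing-zone coordinates)."""
    return np.clip(1.0 - 4.0 * abs(eye_x - pixel_view_x), 0.0, 1.0)

def compensate(image, eye_x, pixel_view_x):
    """Weight output pixel values by eye position, then subtract the
    pre-measured offset-crosstalk luminance."""
    out = image * leakage_weight(eye_x, pixel_view_x)
    out = out - OFFSET_CROSSTALK
    return np.clip(out, 0.0, 1.0)

print(compensate(np.array([0.2, 0.5, 0.9]), eye_x=0.1, pixel_view_x=0.05))
```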


2020 ◽  
Vol 64 (5) ◽  
pp. 50405-1-50405-5
Author(s):  
Young-Woo Park ◽  
Myounggyu Noh

Abstract Recently, the three-dimensional (3D) printing technique has attracted much attention for creating objects of arbitrary shape in manufacturing. In this work, we present for the first time the fabrication of an inkjet-printed, low-cost 3D temperature sensor on a 3D-shaped thermoplastic substrate suitable for packaging, flexible electronics, and other printed applications. The design, fabrication, and testing of the 3D-printed temperature sensor are presented. The sensor pattern is designed using a computer-aided design program and fabricated by drop-on-demand inkjet printing with a magnetostrictive inkjet printhead at room temperature, using commercially available conductive silver nanoparticle ink. A moving speed of 90 mm/min is chosen to print the sensor pattern. The inkjet-printed temperature sensor is demonstrated and characterized, exhibiting good electrical properties with good sensitivity and linearity. The results indicate that 3D inkjet printing technology may have great potential for applications in sensor fabrication.
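Since a printed silver trace behaves as a resistive temperature sensor, the natural calibration is the first-order model R(T) = R0(1 + α(T − T0)). The sketch below fits this line to hypothetical calibration points; the paper's actual measurements are not reproduced.

```python
# Sketch: calibrate a resistive (printed silver) temperature sensor
# with the linear model R(T) = R0 * (1 + alpha * (T - T0)).
# Calibration data here are hypothetical.

import numpy as np

# Hypothetical calibration points: temperature (C) vs. resistance (ohm)
T = np.array([25.0, 40.0, 55.0, 70.0, 85.0])
R = np.array([100.0, 102.9, 105.8, 108.7, 111.6])

slope, intercept = np.polyfit(T, R, 1)   # linear fit R = slope*T + intercept
R0 = slope * 25.0 + intercept            # resistance at T0 = 25 C
alpha = slope / R0                       # temperature coefficient, 1/C

print(f"sensitivity = {slope:.3f} ohm/C, alpha = {alpha:.5f} /C")

def temperature_from_resistance(r):
    """Invert the linear model to read temperature from resistance."""
    return 25.0 + (r - R0) / (alpha * R0)

print(temperature_from_resistance(105.8))  # ~55 C
```

The fitted slope is the sensitivity, and the residuals of the fit quantify the linearity the abstract refers to.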


Micromachines ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 444
Author(s):  
Guoning Si ◽  
Liangying Sun ◽  
Zhuo Zhang ◽  
Xuping Zhang

This paper presents the design, fabrication, and testing of a novel three-dimensional (3D) three-fingered electrothermal microgripper with multiple degrees of freedom (multi-DOF). Each finger of the microgripper is composed of a V-shaped electrothermal actuator providing one DOF and a 3D U-shaped electrothermal actuator offering two DOFs in the plane perpendicular to the movement of the V-shaped actuator. As a result, each finger possesses 3D mobility with three DOFs. Each beam of the actuators is heated externally with a polyimide film, whose durability is tested under different voltages. The static and dynamic properties of the finger are also tested. Experiments show that the microgripper can not only pick and place micro-objects, such as microballs and even highly deformable zebrafish embryos, but can also rotate them in 3D space.
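For intuition about the V-shaped (chevron) actuator, a first-order rigid-link estimate relates heating to tip displacement. The worked figures below use generic material values, not the paper's, and ignore beam bending stiffness.

```latex
% First-order kinematic estimate for a V-shaped (chevron) electrothermal
% actuator; symbols and numbers are generic, not taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Heating raises the mean beam temperature by $\Delta T$, so each inclined
beam of length $L$ and thermal expansion coefficient $\alpha$ elongates by
\[
  \Delta L = \alpha \, L \, \Delta T .
\]
With the beams pre-inclined at a small angle $\theta$ to the direction
perpendicular to the motion, geometry amplifies this elongation into an
apex displacement of approximately
\[
  d \approx \frac{\Delta L}{\sin\theta}
  \qquad \text{(rigid-link estimate; beam bending reduces this in practice).}
\]
For example, $\alpha = 2.3\times10^{-5}\,\mathrm{K^{-1}}$,
$L = 1\,\mathrm{mm}$, $\Delta T = 100\,\mathrm{K}$, and $\theta = 2^\circ$
give $\Delta L \approx 2.3\,\mu\mathrm{m}$ and
$d \approx 66\,\mu\mathrm{m}$.
\end{document}
```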

