Virtual reference for video collections: System infrastructure, user interface and pilot user study

Author(s):  
Xiangming Mu ◽  
Lili Luo
2015 ◽  
Vol 78 (2-2) ◽  
Author(s):  
Nuraini Hidayah Sulaiman ◽  
Masitah Ghazali

Guidelines for designing and developing a learning prototype compatible with the limited capabilities of children with Cerebral Palsy (CP) have been established in the form of a model, the Learning Software User Interface Design Model (LSUIDM), to ensure that children with CP are able to grasp the concepts of a learning software application. In this paper, the LSUIDM is applied in developing a learning software application for children with CP. We present a user study evaluating a children's educational game for children with CP at Pemulihan dalam Komuniti in Johor Bahru. The findings from the user study show that the game, which was built based on the LSUIDM, can be applied in the learning process for children with CP and, most notably, that the children were engaged and excited while using the software. This paper highlights the lessons learned from the user study, which are especially significant for improving the application. The results of the study show that the application was interactive, useful and efficient in use.


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
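The abstract reports rotation accuracy in degrees and position accuracy normalized by the height of the virtual kidney. Below is a minimal sketch of how such metrics can be computed; the function names, the quaternion representation, and the example values are illustrative assumptions, not code from the RUI.

```python
# Minimal sketch of the two accuracy metrics described above (position error
# normalized by reference-organ height, rotation error in degrees). Names are
# illustrative, not taken from the RUI source code.
import numpy as np

def position_error_mm(placed_center, target_center):
    """Euclidean distance between the placed and target block centers (mm)."""
    return float(np.linalg.norm(np.asarray(placed_center) - np.asarray(target_center)))

def normalized_position_error(placed_center, target_center, organ_height_mm):
    """Position error as a fraction of the reference organ's height, so results
    from differently sized virtual organs can be compared across setups."""
    return position_error_mm(placed_center, target_center) / organ_height_mm

def rotation_error_deg(q_placed, q_target):
    """Angle (degrees) of the relative rotation between two unit quaternions."""
    q1 = np.asarray(q_placed, dtype=float) / np.linalg.norm(q_placed)
    q2 = np.asarray(q_target, dtype=float) / np.linalg.norm(q_target)
    # |dot| handles the double cover (q and -q encode the same rotation).
    dot = np.clip(abs(np.dot(q1, q2)), -1.0, 1.0)
    return float(np.degrees(2.0 * np.arccos(dot)))

# Example with values from the abstract: a 113-mm-tall kidney and a 1.32 mm
# position error gives a normalized error of about 0.012.
print(normalized_position_error([0, 0, 1.32], [0, 0, 0], 113.0))
```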


Author(s):  
Margaret F. Rox ◽  
Richard J. Hendrick ◽  
S. Duke Herrell ◽  
Robert J. Webster

There is a trend towards miniaturization in surgical robotics with the objective of making surgeries less invasive [1]. There has also been increasing recent interest in hand-held robots because of their ability to maintain the current surgical workflow [2, 3]. We have previously presented a system that integrates small-diameter concentric tube robots [4, 5] into a hand-held robotic device [3], as shown in Figure 1. This robot was designed for transurethral laser surgery in the prostate. It provides the surgeon with two dexterous manipulators through a 5 mm port in a traditional transurethral endoscope. This system enables the surgeon to retract tissue and aim a fiber optic laser simultaneously to resect prostate tissue. The robot provides the surgeon with a total of ten degrees of freedom (DOF) that must be simultaneously coordinated, including endoscope orientation (3 DOF), endoscope insertion (1 DOF), and the tip position of each concentric tube manipulator (3 DOF per manipulator). In [3], a simple user interface was employed that involved thumb joysticks (which also had pushbutton capability) and a unidirectional index finger trigger, as shown in Figure 2 (Left). The thumb joysticks were mapped to manipulator tip motion in the plane of the endoscope image, and the trigger was used for motion perpendicular to the plane. Whether the finger trigger extended or retracted the tip of the concentric tube manipulator was toggled via the pushbutton capability of the thumb joystick. While surgeons could learn this mapping with some effort, and were able to use it to accomplish a cadaver study, the experiments made clear that further work was needed in creating an intuitive user interface, particularly with respect to how motion perpendicular to the image plane is controlled. This paper describes a first step toward improving the user interface: we integrate a bidirectional dial input in place of the unidirectional index finger trigger, so that extension and retraction perpendicular to the image plane can be controlled without the need for a pushbutton toggle. In this paper we describe the design of this dial input and present the results of a user study comparing it to the interface in [3].
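As a rough illustration of the control mapping described above (thumb joystick for in-plane tip motion, bidirectional dial for motion along the image-plane normal), the sketch below maps raw inputs to a 3-DOF tip velocity. All names, gains, and conventions are hypothetical; this is not the authors' controller.

```python
# Illustrative sketch: a 2-axis thumb joystick drives tip velocity in the
# endoscope image plane, and a bidirectional dial drives extension/retraction
# perpendicular to that plane, removing the need for a pushbutton toggle.
import numpy as np

def tip_velocity(joystick_xy, dial_value, gain_plane=1.0, gain_axial=1.0):
    """Map raw inputs in [-1, 1] to a 3-DOF tip velocity in the image frame.

    joystick_xy: (x, y) thumb-joystick deflection -> in-plane velocity.
    dial_value:  signed dial rate -> extension (+) or retraction (-) along the
                 axis perpendicular to the image plane.
    """
    jx, jy = joystick_xy
    return np.array([gain_plane * jx, gain_plane * jy, gain_axial * dial_value])

# With the earlier unidirectional trigger, the sign of the third component had
# to be flipped via a pushbutton toggle; the bidirectional dial encodes the
# sign directly in dial_value.
print(tip_velocity((0.2, -0.5), 0.3))
```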


Author(s):  
Federico Maria Cau ◽  
Angelo Mereu ◽  
Lucio Davide Spano

In this paper, we present intelligent support for End-User Developers (EUDevs) in creating plot lines for Point and Click games on the web. We introduce a story generator and the associated user interface, which help the EUDev define the game plot starting from the images providing the game setting. In particular, we detail a pipeline for creating such game plots starting from 360-degree images. We identify salient objects in equirectangular images, and we combine the output with two other neural networks for generation: one generating captions for 2D images and one generating the plot text. The EUDev can further develop the provided suggestions by modifying the generated text and saving the result. The interface supports the control of different parameters of the story generator using a user-friendly vocabulary. The results of a user study show good effectiveness and usability of the proposed interface.
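A hedged sketch of the three-stage pipeline the abstract outlines (salient-object detection on the equirectangular image, captioning of 2D crops, plot text generation) is shown below. Every function is a placeholder stand-in for the corresponding neural network; none of the names come from the paper.

```python
# Placeholder pipeline: detect salient objects in a 360-degree equirectangular
# image, caption 2D crops around them, then generate a plot draft from the
# captions. All functions are illustrative stand-ins, not the authors' code.
from typing import List

def detect_salient_objects(equirect_image) -> List[dict]:
    # Stand-in for the salient-object network; returns regions of interest.
    return [{"crop": equirect_image, "score": 1.0}]

def caption_region(crop) -> str:
    # Stand-in for the 2D image-captioning network.
    return "a wooden door next to a bookshelf"

def generate_plot(captions: List[str], temperature: float = 0.8) -> str:
    # Stand-in for the plot-generation language model; `temperature` represents
    # one of the user-adjustable generation parameters mentioned in the abstract.
    return "The hero searches the room for the key hidden behind " + captions[0] + "."

def suggest_plot(equirect_image) -> str:
    regions = detect_salient_objects(equirect_image)
    captions = [caption_region(r["crop"]) for r in regions]
    # The draft is shown in the UI, where the EUDev can edit and save it.
    return generate_plot(captions)
```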


SINERGI ◽  
2018 ◽  
Vol 22 (2) ◽  
pp. 91
Author(s):  
Zico Pratama Putera ◽  
Mila Desi Anasanti ◽  
Bagus Priambodo

Gesture is one of the most natural and expressive communication methods for the hearing impaired. Most research, however, focuses on static gestures, postures, or a small set of dynamic gestures because of the complexity of recognizing dynamic gestures. We propose the Kinect Translation Tool to recognize the user's gestures, so that it can be used for bilateral communication with the deaf community. Since a large number of dynamic gestures must be detected in real time, efficient algorithms and models are required. The dynamic time warping algorithm is used here to detect and translate gestures. Kinect Sign Language should translate sign language into written and spoken words; conversely, people can reply directly with their spoken words, which are converted into literal text together with animated 3D sign language gestures. A user study covering several prototypes of the user interface was carried out by observing ten participants who gestured and spelled phrases in American Sign Language (ASL). The speech recognition tests for simple phrases showed good results, and the system recognized the participants' gestures well during the test. The study suggests that a natural user interface based on Microsoft Kinect can serve as a sign language translator for the hearing impaired.
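Since the abstract names dynamic time warping (DTW) as the matching algorithm, the following is a minimal DTW sketch that aligns a live sequence of Kinect skeleton features against stored gesture templates and picks the closest one. The feature representation and function names are simplifications for illustration, not the tool's implementation.

```python
# Minimal dynamic time warping (DTW) sketch for gesture matching. Each sequence
# is a list of per-frame feature vectors (e.g. skeleton joint coordinates).
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            # Extend the cheapest of the three admissible alignment steps.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognize(live_sequence, templates):
    """Return the label of the stored template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(live_sequence, templates[label]))

# Usage (hypothetical data):
# templates = {"hello": [...frames...], "thank_you": [...frames...]}
# print(recognize(captured_frames, templates))
```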


Author(s):  
Marc Hesenius ◽  
Markus Kleffmann ◽  
Volker Gruhn

To gain a common understanding of an application's layouts, dialogs and interaction flows, development teams often sketch user interfaces (UIs). Nowadays, they must also define multi-touch gestures, but tools for sketching UIs often lack support for custom gestures and typically integrate only a basic predefined gesture set, which might not suffice to tailor the interaction to the desired use cases. Furthermore, sketching can be enhanced with digital means, but it remains unclear whether digital sketching is actually beneficial when designing gesture-based applications. We extended the AugIR, a digital sketching environment, with GestureCards, a hybrid gesture notation, to allow software engineers to define custom gestures when sketching UIs. We evaluated our approach in a user study contrasting digital and analog sketching of gesture-based UIs.


2018 ◽  
Vol 2 (4) ◽  
pp. 71 ◽  
Author(s):  
Patrick Lindemann ◽  
Tae-Young Lee ◽  
Gerhard Rigoll

Broad access to automated cars (ACs) that can reliably and unconditionally drive in all environments is still some years away. Urban areas pose a particular challenge to ACs, since even perfectly reliable systems may be forced to execute sudden reactive driving maneuvers in hard-to-predict hazardous situations. This may negatively surprise the driver, possibly causing discomfort, anxiety or loss of trust, which might be a risk for the acceptance of the technology in general. To counter this, we suggest an explanatory windshield display interface with augmented reality (AR) elements to support driver situation awareness (SA). It provides the driver with information about the car’s perceptive capabilities and driving decisions. We created a prototype in a human-centered approach and implemented the interface in a mixed-reality driving simulation. We conducted a user study to assess its influence on driver SA. We collected objective SA scores and self-ratings, both of which yielded a significant improvement with our interface in good (medium effect) and in bad (large effect) visibility conditions. We conclude that explanatory AR interfaces could be a viable measure against unwarranted driver discomfort and loss of trust in critical urban situations by elevating SA.


2008 ◽  
Vol 2008 ◽  
pp. 1-14 ◽  
Author(s):  
Tomi Heimonen

Designing an effective mobile search user interface is challenging, as interacting with the results is often complicated by the lack of available screen space and limited interaction methods. We present Mobile Findex, a mobile search user interface that uses automatically computed result clusters to provide the user with an overview of the result set. In addition, it utilizes a focus-plus-context result list presentation combined with an intuitive browsing method to aid the user in the evaluation of results. A user study with 16 participants was carried out to evaluate Mobile Findex. Subjective evaluations show that Mobile Findex was clearly preferred by the participants over the traditional ranked result list in terms of ease of finding relevant results, suitability to tasks, and perceived efficiency. While the use of categories resulted in a lower rate of nonrelevant result selections and better precision in some tasks, an overall significant difference in search performance was not observed.
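As a rough illustration of the idea behind automatically computed result clusters, the sketch below groups results under the most frequent content words in their titles and snippets to give an overview of the result set. This is a simplified assumption of how such clustering could work, not the actual Findex algorithm.

```python
# Simplified keyword-based clustering of search results for an overview view.
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on", "with"}

def cluster_results(results, max_clusters=10):
    """results: list of dicts with 'title' and 'snippet' keys.
    Returns {keyword: [result, ...]} for the most frequent content words."""
    counts = Counter()
    words_per_result = []
    for r in results:
        text = (r["title"] + " " + r["snippet"]).lower()
        words = {w.strip(".,;:!?") for w in text.split()} - STOPWORDS
        words.discard("")
        words_per_result.append(words)
        counts.update(words)
    top = {w for w, _ in counts.most_common(max_clusters)}
    clusters = defaultdict(list)
    for r, words in zip(results, words_per_result):
        for w in words & top:
            clusters[w].append(r)
    return dict(clusters)

# Usage (hypothetical data):
# groups = cluster_results([{"title": "Mobile UI design", "snippet": "..."}])
```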

