Map Evaluation under COVID-19 restrictions: A new visual approach based on think aloud interviews

2021 ◽  
Vol 4 ◽  
pp. 1-6
Author(s):  
Martin Knura ◽  
Jochen Schiewe

With the beginning of the COVID-19 pandemic, the execution of eye-tracking user studies in indoor environments was no longer possible, and remote and contactless substitutes were needed. In this paper, we introduce an alternative method to eye tracking that is fully feasible under COVID-19 restrictions. Our main technique is the think-aloud interview, in which participants constantly verbalize their thoughts as they move through a test. We record the screen and the mouse movements during the interviews and analyse both the statements and the mouse positions afterwards. With this information, we can encode the approximate map position of the user’s attention for each second of the interview. This allows us to use the same visual methods as for eye-tracking studies, such as attention maps or trajectory maps. We applied our method in a user study with 21 participants to identify user behaviour while solving high-level interpretation tasks, and the results of this study show that our new method provides a useful substitute for eye-tracking user studies.
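To illustrate the encoding step described above, here is a minimal sketch that aggregates per-second mouse positions into a smoothed attention map, analogous to the heatmaps produced from eye-tracking data. The sample format, screen resolution, and smoothing kernel are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: turn per-second mouse positions into an attention heatmap.
# The (x, y) sample list, 1920x1080 screen, and Gaussian kernel width are
# assumptions, not the authors' actual coding scheme.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

def attention_map(samples, width=1920, height=1080, sigma=40):
    """Accumulate (x, y) samples into a grid and smooth it, eye-tracking style."""
    grid = np.zeros((height, width))
    for x, y in samples:
        if 0 <= x < width and 0 <= y < height:
            grid[int(y), int(x)] += 1
    return gaussian_filter(grid, sigma=sigma)

# one mouse position per interview second
heat = attention_map([(960, 540), (970, 548), (400, 300)])
plt.imshow(heat, cmap="hot")
plt.title("Attention map from mouse positions")
plt.savefig("attention_map.png")
```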

2017 ◽  
Vol 10 (5) ◽  
Author(s):  
Thorsten Roth ◽  
Martin Weier ◽  
André Hinkenjann ◽  
Yongmin Li ◽  
Philipp Slusallek

This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user’s gaze, exploiting the human visual system’s limitations to increase rendering performance. Foveated rendering has especially great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants’ accuracy in focusing the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users’ quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
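As a rough illustration of how such precision and accuracy figures can be computed, the sketch below converts the pixel offset between a gaze sample and a fixation target into degrees of visual angle; the pixel pitch and eye-to-display distance are illustrative HMD values, not the study's actual geometry.

```python
# Sketch of fixation accuracy/precision from gaze samples, under an assumed
# display geometry (pixel pitch and eye-to-display distance are illustrative).
import numpy as np

def angular_error_deg(gaze_px, target_px, px_pitch_mm=0.06, eye_dist_mm=40.0):
    """Visual angle between a gaze sample and the target, both in pixels."""
    offset_mm = np.linalg.norm(np.asarray(gaze_px) - np.asarray(target_px)) * px_pitch_mm
    return np.degrees(np.arctan2(offset_mm, eye_dist_mm))

samples = [(100.0, 120.0), (102.0, 118.0), (99.0, 122.0)]
target = (98.0, 121.0)
errors = [angular_error_deg(s, target) for s in samples]
print(f"accuracy (mean error): {np.mean(errors):.2f} deg, "
      f"precision (std): {np.std(errors):.2f} deg")
```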


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users usually use a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces decrease the level of immersion because the manipulation method does not resemble actual actions in reality, which often makes the traditional interface method inappropriate for the navigation of 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in HMD-based virtual environments.
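A minimal rule-based sketch of the kind of real-time recognition such a system might use follows; the skeleton joints, gesture set, and thresholds are hypothetical, and a real implementation would read skeleton frames from the Kinect SDK rather than hand-built values.

```python
# Hypothetical rule-based gesture classifier over Kinect skeleton joints.
# Joint names, thresholds, and the command set are illustrative only.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, camera space (x: right, y: up, z: depth)
    y: float
    z: float

def classify_gesture(hand: Joint, shoulder: Joint, head: Joint) -> str:
    """Map one skeleton frame to a 3D-map navigation command."""
    if hand.y > head.y + 0.10:
        return "ascend"            # hand raised above the head
    if shoulder.z - hand.z > 0.35:
        return "move_forward"      # hand pushed toward the camera
    if hand.x - shoulder.x > 0.30:
        return "turn_right"        # hand extended sideways
    return "idle"

print(classify_gesture(Joint(0.5, 0.1, 1.2), Joint(0.1, 0.2, 1.7), Joint(0.1, 0.5, 1.7)))
# -> "move_forward"
```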


Author(s):  
Hyunmin Cheong ◽  
Wei Li ◽  
Francesco Iorio

This paper presents a novel application of gamification for collecting high-level design descriptions of objects. High-level design descriptions entail not only the superficial characteristics of an object, but also its function, behavior, and requirement information. Such information is difficult to obtain with traditional data mining techniques. To acquire high-level design information, we investigated a multiplayer game, “Who is the Pretender?”, in an offline context. Through a user study, we demonstrate that the game offers a more fun, enjoyable, and engaging experience for providing descriptions of objects than simply asking people to list them. We also show that the game elicits more high-level, problem-oriented requirement descriptions and fewer low-level, solution-oriented structure descriptions, owing to the unique game mechanics that encourage players to describe objects at an abstract level. Finally, we present how crowdsourcing can be used to generate game content that facilitates gameplay. Our work contributes towards acquiring the high-level design knowledge that is essential for developing knowledge-based CAD systems.


2020 ◽  
Vol 69 ◽  
pp. 471-500
Author(s):  
Shih-Yun Lo ◽  
Shiqi Zhang ◽  
Peter Stone

Intelligent mobile robots have recently become able to operate autonomously in large-scale indoor environments for extended periods of time. In this process, mobile robots need the capabilities of both task and motion planning. Task planning in such environments involves sequencing the robot’s high-level goals and subgoals, and typically requires reasoning about the locations of people, rooms, and objects in the environment, and their interactions, to achieve a goal. One of the prerequisites for optimal task planning that is often overlooked is having an accurate estimate of the actual distance (or time) a robot needs to navigate from one location to another. State-of-the-art motion planning algorithms, though often computationally complex, are designed exactly for this purpose of finding routes through constrained spaces. In this article, we focus on integrating task and motion planning (TMP) to achieve task-level-optimal planning for robot navigation while maintaining manageable computational efficiency. To this end, we introduce the TMP algorithm PETLON (Planning Efficiently for Task-Level-Optimal Navigation), including two configurations with different trade-offs in computational expense between task and motion planning, for everyday service tasks using a mobile robot. Experiments have been conducted both in simulation and on a mobile robot using object delivery tasks in an indoor office environment. The key observation from the results is that PETLON is more efficient than a baseline approach that pre-computes the motion costs of all possible navigation actions, while still producing plans that are optimal at the task level. We provide results with two different task planning paradigms in the implementation of PETLON, and offer TMP practitioners guidelines for the selection of task planners from an engineering perspective.
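A toy sketch of the lazy-evaluation idea behind such task-level-optimal TMP follows: start from optimistic (e.g., Euclidean) motion-cost estimates and invoke the expensive motion planner only for navigation actions that appear in candidate task plans, terminating once the best plan is fully evaluated. The planner stubs here are hypothetical stand-ins, not PETLON's implementation; with lower-bound estimates, the returned plan is still optimal at the task level.

```python
# Lazy task-motion planning sketch: refine motion costs only where needed.
# plan_task() and motion_cost() are hypothetical stand-ins for a real task
# planner and motion planner.
import math

locations = {"start": (0, 0), "shelf": (4, 0), "office": (4, 3)}

def estimate(a, b):
    """Optimistic lower bound on navigation cost (straight-line distance)."""
    (x1, y1), (x2, y2) = locations[a], locations[b]
    return math.hypot(x2 - x1, y2 - y1)

def motion_cost(a, b):
    """Stand-in for an expensive motion-planner query (adds a 30% detour)."""
    return 1.3 * estimate(a, b)

def plan_task(costs):
    """Stand-in task planner: pick the cheapest candidate route."""
    routes = [["start", "shelf", "office"], ["start", "office"]]
    return min(routes, key=lambda r: sum(costs[e] for e in zip(r, r[1:])))

costs = {(a, b): estimate(a, b) for a in locations for b in locations if a != b}
evaluated = set()
while True:
    plan = plan_task(costs)
    pending = [e for e in zip(plan, plan[1:]) if e not in evaluated]
    if not pending:           # optimal plan uses only true motion costs: done
        break
    for e in pending:         # refine only the edges this plan actually uses
        costs[e] = motion_cost(*e)
        evaluated.add(e)
print("task-level-optimal plan:", plan)
```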


2018 ◽  
Author(s):  
D. Kuhner ◽  
L.D.J. Fiederer ◽  
J. Aldinger ◽  
F. Burget ◽  
M. Völker ◽  
...  

As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated as the complexity of the robotic tasks and the environment increases. Traditional control modalities such as touch, speech or gesture commands are not necessarily suited for all users. While non-expert users can make the effort to familiarize themselves with a robotic system, paralyzed users may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: non-invasive neuronal signal recording and co-adaptive deep learning, which form the brain-computer interface (BCI); high-level task planning based on referring expressions; navigation and manipulation planning; and environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface, and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level planning using referring expressions and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interactions.
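To make the closed-loop structure concrete, here is a highly simplified skeleton (decode, resolve the referring expression, plan and act, feed back); every function and name is a hypothetical placeholder rather than the authors' implementation.

```python
# Hypothetical skeleton of a BCI-driven fetch-and-carry loop; all components
# are placeholders for the decoder, goal formulation, and planners.
def decode_intent(eeg_window):
    """Stand-in for the co-adaptive deep-learning BCI decoder."""
    return eeg_window["decoded_word"]

def resolve_referring_expression(word, scene):
    """Stand-in for goal formulation from a referring expression."""
    return next((obj for obj in scene if word in obj["name"]), None)

def fetch(obj):
    """Stand-in for navigation and manipulation planning plus execution."""
    print(f"fetching '{obj['name']}' at {obj['pose']}")
    return True

scene = [{"name": "red cup", "pose": (1.0, 2.0)},
         {"name": "book", "pose": (0.5, 3.0)}]
# simulated stream of decoded commands closing the loop with the user
for window in [{"decoded_word": "cup"}, {"decoded_word": "book"}]:
    goal = resolve_referring_expression(decode_intent(window), scene)
    if goal and fetch(goal):
        print("task done; user receives feedback and issues the next command")
```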


Author(s):  
Renato Ricardo Abreu ◽  
Thyago Oliveira ◽  
Leydson Silva ◽  
Tiago Nascimento ◽  
Alisson Brito

Operations with Unmanned Aerial Vehicles (UAVs) require reliability to execute missions. With the correct diagnostics, it is possible to predict vehicle failure during or before the flight. The objective of this work is to present a testing tool that analyzes and evaluates drones during flight in indoor environments. For this purpose, the Ptolemy II framework was extended to communicate with real drones using the High-Level Architecture (HLA) for data exchange and synchronization. The presented testing environment is extendable to other testing routines and is ready for integration with other simulation and analysis tools. In this paper, two failure detection experiments were performed, each with a total of 20 flights in which one of the propellers had an anomaly; 80% of the flights were used to train a decision tree algorithm and the remaining 20% to test it. The detection rate was 70% for the first experiment and 90% for the second one.
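This evaluation setup maps naturally onto a standard supervised learning pipeline; the sketch below reproduces the 80/20 split with a decision tree on toy per-flight features. The feature names and values are invented, and the real system streams flight data through HLA rather than reading arrays.

```python
# Toy decision-tree failure detection with an 80/20 train/test split.
# Features per flight (vibration, current, altitude error) are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_ok = rng.normal([0.2, 1.0, 0.05], 0.05, size=(16, 3))   # nominal flights
X_bad = rng.normal([0.5, 1.4, 0.20], 0.05, size=(4, 3))   # damaged propeller
X = np.vstack([X_ok, X_bad])
y = np.array([0] * 16 + [1] * 4)                          # 1 = failure

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```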


Author(s):  
Victoria Rautenbach ◽  
Serena Coetzee ◽  
Melissa Hankel

This paper presents the results of an exploratory user study using 2D maps to observe and analyse the effect of street name changes on prospective route planning. The study is part of a larger research initiative to understand the effect of street name changes on wayfinding. The common perception is that street name changes affect our ability to navigate an environment, but this has not yet been tested with an empirical user study. A combination of a survey, the thinking-aloud method and eye tracking was used with a group of 20 participants, mainly geoinformatics students. A within-subject participant assignment was used. Independent variables were the street network (regular and irregular) and orientation cues (street names and landmarks) portrayed on a 2D map. Dependent variables recorded were the performance (was the participant able to plan a route between the origin and destination?); the accuracy (was the shortest path identified?); the time taken to complete a task; and fixation points with eye tracking. Overall, the results of this exploratory study suggest that street name changes impact the prospective route planning performance and process that individuals use with 2D maps. The results contribute to understanding how route planning changes when street names are changed on 2D maps. They also contribute to the design of future user studies. To generalise the findings, the study needs to be repeated with a larger group of participants.
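The accuracy measure (was the shortest path identified?) can be scored programmatically; a small sketch follows, with an illustrative street graph and participant route rather than the study's actual stimuli.

```python
# Score a participant's planned route against the true shortest path.
# The graph, edge weights, and route are illustrative only.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("origin", "A", 1.0), ("A", "destination", 1.0),
    ("origin", "B", 1.5), ("B", "destination", 1.0),
])
participant_route = ["origin", "B", "destination"]

shortest = nx.shortest_path(G, "origin", "destination", weight="weight")
print(f"shortest: {shortest}, participant accurate: {participant_route == shortest}")
```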


Author(s):  
Jianxi Luo ◽  
Binyang Song ◽  
Lucienne Blessing ◽  
Kristin Wood

Traditionally, design opportunities and directions are conceived based on expertise, intuition, or time-consuming user studies and marketing research at the fuzzy front end of the design process. Herein, we propose the use of the total technology space map (TSM) as a visual ideation aid for rapidly conceiving high-level design opportunities. The map comprises various technology domains positioned according to knowledge proximity, which is measured from a large quantity of patent data. It provides a systematic picture of the total technology space to enable stimulated ideation beyond the designer's own knowledge. Designers can browse the map and navigate various technologies to conceive new design opportunities that relate different technologies across the space. We demonstrate the process of using TSM as a rapid ideation aid and then analyze its applications in two experiments to show its effectiveness and limitations. Furthermore, we have developed InnoGPS, a cloud-based system for computer-aided ideation, to integrate interactive map browsing for conceiving high-level design opportunities with domain-specific patent retrieval for stimulating concrete technical concepts, and to potentially embed machine learning and artificial intelligence in the map-aided ideation process.
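One common way to operationalize knowledge proximity from patent data is the cosine similarity of domains' cross-citation vectors; the sketch below uses that measure on invented counts, as an assumption about the flavour of computation rather than the paper's exact metric. A 2D map like TSM could then be laid out from the resulting proximity matrix, for example with multidimensional scaling.

```python
# Toy knowledge-proximity computation: cosine similarity between domains'
# cross-citation count vectors. All counts are invented for illustration.
import numpy as np

domains = ["robotics", "optics", "biotech"]
# rows: citing domain, cols: cited domain (hypothetical patent citation counts)
citations = np.array([
    [120.0,  60.0,   5.0],
    [ 55.0, 200.0,  10.0],
    [  4.0,  12.0, 300.0],
])

def proximity(i, j):
    """Cosine similarity between two domains' citation profiles."""
    a, b = citations[i], citations[j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for i in range(len(domains)):
    for j in range(i + 1, len(domains)):
        print(f"{domains[i]} - {domains[j]}: {proximity(i, j):.2f}")
```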


Author(s):  
Dhavalkumar Thakker ◽  
Fan Yang-Turner ◽  
Dimoklis Despotakis

It is becoming increasingly popular to expose government and citywide sensor data as linked data. Linked data appears to offer great potential for exploratory search in supporting smart city goals of helping users learn about and make sense of complex, heterogeneous data. However, there are no systematic user studies to provide insight into how browsing through linked data can support exploratory search. This paper presents a user study that draws on methodological and empirical underpinnings from relevant exploratory search studies. The authors have developed a linked data browser that provides an interface for browsing several datasets linked via domain ontologies. In a systematic study that is qualitative and exploratory in nature, they gained insight into central issues related to exploratory search and browsing through linked data. The study identifies obstacles and challenges related to exploratory search using linked data and draws heuristics for future improvements. The authors also report the main problems experienced by users while conducting exploratory search tasks, from which requirements for algorithmic support to address the observed issues are elicited. The approach and lessons learnt can facilitate future work on browsing linked data, and point to further issues that have to be addressed.
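The browsing primitive behind such a linked data browser is essentially following ontology links out from a chosen resource; a minimal sketch using SPARQLWrapper is shown below, where the endpoint URL and resource URI are hypothetical.

```python
# List the properties and neighbours of one linked-data resource.
# The SPARQL endpoint and resource URI are hypothetical examples.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/city/sparql")
sparql.setQuery("""
    SELECT ?property ?neighbour WHERE {
        <http://example.org/city/sensor/42> ?property ?neighbour .
    } LIMIT 25
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["property"]["value"], "->", row["neighbour"]["value"])
```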

