Embodied lenses for collaborative visual queries on tabletop displays

2012 ◽  
Vol 11 (4) ◽  
pp. 319-338 ◽  
Author(s):  
KyungTae Kim ◽  
Niklas Elmqvist

We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordances of these embodied lenses allow them to be overlapped, causing composition in the underlying virtual space. We perform a formative evaluation to study users’ conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are as efficient as purely virtual lenses, and additionally support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays.
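The key idea above — overlapping two physical lenses composes their queries in the virtual space underneath — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Lens` footprint, the axis-aligned overlap test, and the `grayscale`/`invert` filters are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    x: float; y: float; w: float; h: float      # tracked footprint on the tabletop
    filter: Callable                            # per-pixel transform applied inside

def overlap(a: Lens, b: Lens):
    """Axis-aligned intersection of two lens footprints, or None if disjoint."""
    x1, y1 = max(a.x, b.x), max(a.y, b.y)
    x2, y2 = min(a.x + a.w, b.x + b.w), min(a.y + a.h, b.y + b.h)
    if x1 < x2 and y1 < y2:
        return (x1, y1, x2 - x1, y2 - y1)
    return None

def compose(a: Lens, b: Lens):
    """Inside the overlap region, the composed query applies both filters in sequence."""
    return lambda px: b.filter(a.filter(px))

grayscale = lambda px: (sum(px) // 3,) * 3
invert    = lambda px: tuple(255 - c for c in px)

a = Lens(0, 0, 100, 100, grayscale)
b = Lens(50, 50, 100, 100, invert)
region = overlap(a, b)            # (50, 50, 50, 50): the shared footprint
both = compose(a, b)              # pixels here see invert(grayscale(px))
print(region, both((30, 60, 90)))
```

In the real system the footprints come from fiducial-marker tracking rather than hard-coded rectangles, but the composition rule is the same: the overlap region renders with the chained filters.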

2002 ◽  
Vol 34 (2) ◽  
pp. 158-162 ◽  
Author(s):  
Matthew J. Pastizzo ◽  
Robert F. Erbacher ◽  
Laurie B. Feldman

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk that is replicated in virtual space, and VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study of these three setups in which 42 human subjects completed 14 increasingly difficult and then 30 identical tasks in sequence, recording position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups.
We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
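The accuracy measures reported above can be computed with standard formulas: rotation accuracy as the angular distance between the submitted and target orientations, and position accuracy as Euclidean distance, optionally normalized by the reference organ's height (the paper uses a 113-mm-tall kidney). A hedged sketch, assuming quaternion orientations in (w, x, y, z) order — the function names are illustrative, not from the RUI codebase:

```python
import math

def rotation_error_deg(q_user, q_target):
    """Angular distance in degrees between two unit quaternions (w, x, y, z)."""
    dot = abs(sum(a * b for a, b in zip(q_user, q_target)))
    dot = min(1.0, dot)                      # clamp numerical noise before acos
    return math.degrees(2.0 * math.acos(dot))

def position_error_norm(p_user, p_target, organ_height_mm):
    """Euclidean position error normalized by the reference organ's height."""
    return math.dist(p_user, p_target) / organ_height_mm

# A 90-degree rotation about the z axis, compared against the identity:
q90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotation_error_deg(q90z, (1.0, 0.0, 0.0, 0.0)))        # 90 degrees
print(position_error_norm((1, 1, 0), (0, 0, 0), 113.0))      # mm error / 113 mm
```

Normalizing by organ height is what makes position accuracy comparable across setups whose virtual kidneys differ in on-screen size.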


2018 ◽  
Vol 224 ◽  
pp. 02071
Author(s):  
Dmitrii Voronin ◽  
Victoria Shevchenko ◽  
Olga Chengar

Scientific problems related to the classification, assessment, visualization, and management of risks in cloud environments are considered. State-of-the-art methods proposed for solving these problems are analyzed, taking into account the specifics of cloud infrastructures oriented toward large-scale task processing in distributed production environments. Unfortunately, little objective scientific research has focused on developing effective approaches to cloud risk visualization that provide the information needed to support decision-making in distributed production infrastructures. To fill this research gap, this study proposes a risk visualization technique based on radar charts for multidimensional data visualization.
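The radar-chart idea is to give each risk dimension its own axis, spaced evenly around a circle, and plot the normalized score along that axis. A minimal sketch of the geometry — the risk dimensions and scores below are invented for illustration, and the shoelace area is just one possible aggregate indicator, not a method from the paper:

```python
import math

# Illustrative cloud-risk dimensions with normalized scores in [0, 1].
risks = {
    "Availability": 0.7,
    "Data loss":    0.4,
    "Compliance":   0.9,
    "Latency":      0.3,
    "Cost overrun": 0.6,
}

def radar_vertices(scores):
    """Place each score on its own evenly spaced axis of a radar (spider) chart."""
    n = len(scores)
    pts = []
    for i, r in enumerate(scores.values()):
        theta = 2 * math.pi * i / n          # axis i at angle 2*pi*i/n
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def polygon_area(pts):
    """Shoelace area of the risk polygon -- larger area, larger overall exposure."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

verts = radar_vertices(risks)
print(len(verts), round(polygon_area(verts), 3))
```

Passing `verts` to any plotting library as a closed polygon yields the familiar radar chart; two portfolios of risks can then be compared by overlaying their polygons.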


2013 ◽  
Vol 64 (4) ◽  
pp. 689-700 ◽  
Author(s):  
Ahmet Aker ◽  
Laura Plaza ◽  
Elena Lloret ◽  
Robert Gaizauskas
Author(s):  
Gary M. Stump ◽  
Simon W. Miller ◽  
Michael A. Yukish ◽  
Christopher M. Farrell

A potential source of uncertainty within multi-objective design problems can be the exact value of the underlying design constraints. This uncertainty will affect the resulting performance of the selected system commensurate with the level of risk that decision-makers are willing to accept. This research focuses on developing visualization tools that allow decision-makers to specify uncertainty distributions on design constraints and to visualize their effects in the performance space using multidimensional data visualization methods to solve problems with high orders of computational complexity. These visual tools will be demonstrated using an example portfolio design scenario in which the goal of the design problem is to maximize the performance of a portfolio with an uncertain budget constraint.
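One way to see what an uncertain budget constraint does to a chosen portfolio is to sample the constraint and measure how often the portfolio remains feasible. This is a hedged Monte Carlo sketch under an assumed Gaussian budget, not the paper's tool; the costs, mean, and standard deviation are made up:

```python
import random

def feasibility(costs, budget_mean, budget_sd, n=100_000, seed=0):
    """Fraction of sampled budgets that cover the portfolio's total cost."""
    rng = random.Random(seed)                # seeded for reproducibility
    total = sum(costs)
    hits = sum(1 for _ in range(n)
               if rng.gauss(budget_mean, budget_sd) >= total)
    return hits / n

portfolio = [30.0, 25.0, 20.0]               # item costs; total = 75
p = feasibility(portfolio, budget_mean=80.0, budget_sd=10.0)
print(round(p, 2))                           # approx. P(N(80, 10) >= 75) ~ 0.69
```

Sweeping the standard deviation and re-plotting feasibility against portfolio performance gives exactly the kind of risk-versus-reward view the visualization tools above are meant to support.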

