Interactive 3D Visualization with Dual Leap Motions

2018 ◽  
Vol 30 (7) ◽  
pp. 1268 ◽  
Author(s):  
Guodao Sun ◽  
Puyong Huang ◽  
Yipeng Liu ◽  
Ronghua Liang
2018 ◽  
Vol 477 (2) ◽  
pp. 1495-1507 ◽  
Author(s):  
T Dykes ◽  
A Hassan ◽  
C Gheller ◽  
D Croton ◽  
M Krokos

Author(s):  
Dawei Xu ◽  
Lin Wang ◽  
Xin Wang ◽  
Dianquan Li ◽  
Jianpeng Duan ◽  
...  

2018 ◽  
pp. 31-63 ◽  
Author(s):  
Lukáš Herman ◽  
Tomáš Řezník ◽  
Zdeněk Stachoň ◽  
Jan Russnák

Various widely available applications such as Google Earth have made interactive 3D visualizations of spatial data popular. While several studies have focused on how users perform when interacting with these 3D visualizations, it has not been common to record their virtual movements in 3D environments or their interactions with 3D maps. We therefore created and tested a new web-based research tool: a 3D Movement and Interaction Recorder (3DmoveR). Its design incorporates findings from the latest 3D visualization research and is built upon an iterative requirements analysis. It is implemented using open web technologies such as PHP, JavaScript, and the X3DOM library. The main goal of the tool is to record camera position and orientation during a user’s movement within a virtual 3D scene, together with other aspects of their interaction. After building the tool, we performed an experiment to demonstrate its capabilities. This experiment revealed differences between laypersons and experts (cartographers) when working with interactive 3D maps. For example, experts achieved higher numbers of correct answers in some tasks, had shorter response times, followed shorter virtual trajectories, and moved through the environment more smoothly. Interaction-based clustering, as well as other ways of visualizing and qualitatively analyzing user interaction, was explored.
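The trajectory-length and smoothness comparisons mentioned above can be derived from the camera positions such a recorder logs. Below is a minimal sketch, assuming the log is a simple list of (x, y, z) samples; the function names and metrics are illustrative, not 3DmoveR's actual implementation.

```python
import math

def path_length(points):
    """Total Euclidean length of a polyline of (x, y, z) camera positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def mean_turning_angle(points):
    """Mean angle (radians) between consecutive movement segments;
    lower values indicate smoother virtual movement."""
    angles = []
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = tuple(q - p for p, q in zip(a, b))
        v2 = tuple(q - p for p, q in zip(b, c))
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue  # skip zero-length segments (camera paused)
        cos_t = sum(x * y for x, y in zip(v1, v2)) / (n1 * n2)
        angles.append(math.acos(max(-1.0, min(1.0, cos_t))))
    return sum(angles) / len(angles) if angles else 0.0

# A straight flight has zero mean turning angle; a right-angle turn does not.
straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
bent = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
```

Under this sketch, an expert's log would show a smaller `path_length` and `mean_turning_angle` than a layperson's on the same task.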


Author(s):  
Matthias Wieczorek ◽  
André Aichert ◽  
Pascal Fallavollita ◽  
Oliver Kutter ◽  
Ahmad Ahmadi ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2982
Author(s):  
Bruno Mataloto ◽  
João C. Ferreira ◽  
Ricardo Resende ◽  
Rita Moura ◽  
Sílvia Luís

In this research work, we present an IoT solution for monitoring environmental variables that uses LoRa transmission technology to give real-time information to users in a Things2People process and to achieve savings by promoting behavior changes in a People2People process. The data are stored and later processed to identify patterns and are integrated with visualization tools, which allow users to develop an environmental awareness while using the system. In this project, we implemented a different approach based on the development of a 3D visualization tool that presents the system’s collected data, warnings, and other users’ perceptions in an interactive 3D model of the building. This data representation introduces a new People2People interaction approach to achieve savings in shared spaces such as public buildings by combining sensor data with the users’ individual and collective perception. This approach was validated at the ISCTE-IUL University Campus, where the 3D IoT data representation was presented on mobile devices and, from there, influenced user behavior toward meeting campus sustainability goals.
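As a concrete illustration of the Things2People path described above, the sketch below decodes a compact sensor payload and derives the warnings shown in the 3D building model. The byte layout, field scaling, and thresholds are assumptions for illustration; the abstract does not specify the system's payload format.

```python
import struct

# Hypothetical 8-byte big-endian payload: uint16 temperature (hundredths
# of a degree C), uint16 relative humidity (hundredths of %), uint32 CO2 (ppb).
def decode_payload(raw: bytes) -> dict:
    temp, rh, co2 = struct.unpack(">HHI", raw)
    return {"temp_c": temp / 100, "rh_pct": rh / 100, "co2_ppm": co2 / 1000}

def warnings_for(reading: dict, co2_limit_ppm: float = 1000.0) -> list:
    """Derive the warnings displayed next to a room in the 3D model."""
    out = []
    if reading["co2_ppm"] > co2_limit_ppm:
        out.append("ventilate")
    if reading["temp_c"] > 26.0:
        out.append("overheating")
    return out
```

A decoded reading plus its warnings is what a visualization layer would attach to the corresponding room in the interactive model.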


2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Vojtěch Juřík ◽  
Lukáš Herman ◽  
Čeněk Šašinka ◽  
Zdeněk Stachoň ◽  
Jiří Chmelík

This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to judge altitude information in non-interactive and interactive 3D geovisualizations. A two-phase experiment was carried out to compare the performance of two groups of participants, one using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands, and the amount of motor activity the participants performed during interaction with the geovisualization. The interface was created using a Motion Capture system, a Wii Remote Controller, widescreen projection, and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
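The accuracy variable at the heart of the experiment reduces to a proportion-correct score per group. A minimal sketch with invented responses (the study's actual data are not reproduced here):

```python
def accuracy(responses, answer_key):
    """Proportion of altitude-identification tasks answered correctly."""
    correct = sum(r == k for r, k in zip(responses, answer_key))
    return correct / len(answer_key)

# Hypothetical altitude answers (metres) for four landscape tasks.
answer_key = [120, 340, 250, 180]
real_3d_group = [120, 340, 250, 200]    # three of four correct
pseudo_3d_group = [120, 300, 250, 200]  # two of four correct
```

Comparing such scores between the real 3D and pseudo 3D groups, per phase, is the kind of analysis the study reports.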


2014 ◽  
Vol 7 (1) ◽  
Author(s):  
Claudio Pensieri ◽  
Maddalena Pennacchini

Background: Virtual Reality (VR) was defined as a collection of technological devices: “a computer capable of interactive 3D visualization, a head-mounted display and data gloves equipped with one or more position trackers”. Today, many scientists define VR as a simulation of the real world based on computer graphics: a three-dimensional world in which communities of real people interact, create content, items, and services, producing real economic value through e-Commerce.
Objective: To report the results of a systematic review of articles and reviews published on the theme “Virtual Reality in Medicine”.
Methods: We used the search query strings “Virtual Reality”, “Metaverse”, “Second Life”, “Virtual World”, and “Virtual Life” in order to find out how many articles had been written about these themes. For the “meta-review” we used only “Virtual Reality” AND “Review”. We searched the following databases: PsycINFO, the Journal of Medical Internet Research, and Isiknowledge until September 2011, and PubMed until February 2012. We included any source published either in print or on the Internet, available in all languages, and containing texts that define or attempt to define VR in explicit terms.
Results: We retrieved 3,443 articles on PubMed in 2012 and 8,237 on Isiknowledge in 2011. This large number of articles covered a wide range of themes but showed no clear consensus about VR. We identified 4 general uses of VR in Medicine and searched for the existing reviews about them. We found 364 reviews in 2011, although only 197 were pertinent to our aims: 1. Communication Interface (11 reviews); 2. Medical Education (49 reviews); 3. Surgical Simulation (49 reviews); and 4. Psychotherapy (88 reviews).
Conclusion: We found a large number of articles but no clear consensus about the meaning of the term VR in Medicine. We found numerous articles published on these topics, and many of them have been reviewed. We decided to group these reviews into 4 areas in order to provide a systematic overview of the subject matter and to enable those interested to learn more about these particular topics.
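The per-area counts reported in the Results can be tallied to confirm the total of pertinent reviews:

```python
# Review counts per area, as reported in the abstract.
reviews = {
    "Communication Interface": 11,
    "Medical Education": 49,
    "Surgical Simulation": 49,
    "Psychotherapy": 88,
}
pertinent = sum(reviews.values())  # 197 of the 364 reviews found in 2011
```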

