Multi-touch interaction techniques to control 3D objects on a smartphone screen

Author(s):  
Filip Hlavacek ◽  
Alena Kovarova


Author(s):  
Sean T. Hayes ◽  
Julie A. Adams

Smartphones pose new design challenges for precise interactions, prompting the development of indirect interaction techniques that improve performance by reducing the occlusion caused by touch input. Direct touch interaction (e.g., tap to select) is imprecise due to occlusion and the size of the finger's contact area. Many cursor-based interaction techniques address this issue; however, these techniques do not dynamically adjust the control-to-display movement ratio (CD ratio) to improve accuracy and interaction times. This paper analyzes the performance benefits of applying adaptive CD-ratio enhancements to smartphone interaction for target-selection tasks. Existing desktop computer enhancements and a new enhancement method, Magnetic Targets, are compared. Magnetic Targets resulted in significantly shorter target selection times than the existing enhancements. Further, a simple method that combined enhancements to derive the CD ratio from a broader interaction context also demonstrated performance improvements.
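Because the abstract does not report the enhancement's parameters, the following is only a minimal Python sketch of how an adaptive CD ratio combined with a magnetic-target bias might be computed; the function name, gain constants, and attraction radius are illustrative assumptions, not the authors' implementation.

```python
import math

# Illustrative constants; the abstract does not report the actual gain values.
BASE_GAIN = 1.0            # neutral control-to-display (CD) ratio
MIN_GAIN, MAX_GAIN = 0.3, 2.5
MAGNET_RADIUS = 40.0       # px: distance at which a target starts to "attract" the cursor

def adaptive_cd_gain(finger_speed_px_s, cursor_xy, target_centers):
    """Return a CD gain: slow finger motion or a nearby target lowers the gain,
    so small finger movements map to even smaller cursor movements (precision),
    while fast motion raises the gain so the cursor covers distance quickly."""
    # Speed-dependent term, in the spirit of desktop pointer-acceleration curves.
    speed_term = min(max(finger_speed_px_s / 500.0, MIN_GAIN), MAX_GAIN)

    # Magnetic-target term: the nearest target within MAGNET_RADIUS damps the gain.
    nearest = min((math.dist(cursor_xy, t) for t in target_centers), default=float("inf"))
    magnet_term = 1.0 if nearest > MAGNET_RADIUS else max(nearest / MAGNET_RADIUS, 0.25)

    return BASE_GAIN * speed_term * magnet_term

# Usage: cursor_delta = finger_delta * adaptive_cd_gain(speed, cursor_pos, targets)
```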


2016 ◽  
Vol 78 (12-3) ◽  
Author(s):  
Arief Hydayat ◽  
Haslina Arshad ◽  
Nazlena Mohamad Ali ◽  
Lam Meng Chun

In a 3D user interface, interaction plays an important role in helping users manipulate 3D objects in virtual environments. 3D devices such as data gloves and motion trackers can give users the opportunity to manipulate 3D objects in virtual reality environments, for example checking, zooming, translating, rotating, merging, and splitting them, in a more natural and easier manner through hand gestures. Hand gestures are often applied in 3D interaction techniques for switching the manipulation mode. This paper discusses an interaction technique for virtual environments that combines Push and Pull navigation with a rotation technique. The unimanual use of these 3D interaction techniques can improve the effectiveness of users' interaction with and manipulation of 3D objects. This study enhanced the capability of the unimanual 3D interaction technique in terms of 3D interaction feedback in virtual environments.
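As an illustration only, the sketch below shows one way a unimanual hand gesture could be mapped either to Push and Pull translation along the view axis or to object rotation; the gesture fields, mode flag, and scaling factors are assumptions, not the technique described in the paper.

```python
import numpy as np

def apply_hand_gesture(object_pos, object_yaw_deg, gesture):
    """Map a single hand's gesture either to push/pull translation or to rotation.

    `object_pos` is a length-3 numpy array; `gesture` is assumed to carry a mode
    flag (e.g. set by a pinch posture), the hand's displacement along the view
    axis, and its wrist-roll change."""
    if gesture["mode"] == "push_pull":
        # Push and Pull: translate the object along the camera's forward vector.
        forward = np.asarray(gesture["view_forward"], dtype=float)  # unit vector
        object_pos = object_pos + forward * gesture["hand_dz"] * 2.0
    elif gesture["mode"] == "rotate":
        # Rotation: wrist roll maps to object yaw.
        object_yaw_deg += gesture["wrist_roll_deg"] * 1.5
    return object_pos, object_yaw_deg
```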


2011 ◽  
Vol 10 (4) ◽  
pp. 1-10 ◽  
Author(s):  
Paulo Gallotti Rodrigues ◽  
Alberto Barbosa Raposo ◽  
Luciano Pereira Soares

Traditional interaction devices such as computer mice and keyboards do not adapt very well to immersive environments, since they were not necessarily designed for users who may be standing or in movement. Moreover, in the current interaction model for immersive environments, based on wands and 3D mice, a change of context is necessary in order to execute non-immersive tasks. These constant context changes from immersive to 2D desktops introduce a rupture in user interaction with the application. The objective of this work is to study how to adapt interaction techniques from touch-surface-based systems to 3D virtual environments in order to reduce this physical rupture between the fully immersive mode and the desktop paradigm. To do this, a wireless glove (v-Glove) that maps to a touch interface in a virtual reality immersive environment was developed, enabling interaction in 3D applications. The glove has two main functionalities: tracking the position of the user's index finger and vibrating the fingertip when it reaches an area mapped in the interaction space, to simulate a touch feeling. Quantitative and qualitative analyses were performed with users to evaluate the v-Glove, comparing it with a gyroscopic 3D mouse.
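The abstract describes the two v-Glove functions (index-finger tracking and a vibrotactile pulse when the finger reaches the mapped interaction area); a minimal sketch of that touch-plane check is given below, with the plane depth, tolerance, and `vibrate` callback as assumed placeholders rather than the actual v-Glove interface.

```python
TOUCH_PLANE_Z = 0.0       # depth of the virtual "touch surface" in tracker units
TOUCH_TOLERANCE = 0.01    # how close the fingertip must be to register a touch

def update_vglove(fingertip_xyz, vibrate):
    """Fire a vibrotactile pulse when the tracked index fingertip crosses the
    virtual touch plane, emulating the contact feeling of a touch screen.

    `vibrate` is a callback driving the fingertip actuator; its existence and
    signature are assumptions made for this sketch, not the v-Glove API."""
    x, y, z = fingertip_xyz
    if abs(z - TOUCH_PLANE_Z) <= TOUCH_TOLERANCE:
        vibrate(duration_ms=30)
        return (x, y)         # 2D touch point handed to the application
    return None               # no touch this frame
```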


Author(s):  
Ronak R. Mohanty ◽  
Vinayak R. Krishnamurthy

In this article, we report on our investigation of kinesthetic feedback as a means to provide precision, accuracy, and mitigation of arm fatigue in spatial manipulation tasks. Most works on spatial manipulation discuss the use of haptics (kinesthetic/force and tactile) primarily as a means to offer physical realism in spatial user interfaces (SUIs). Our work offers a new perspective in terms of how force-feedback can promote precise manipulations in spatial interactions to aid manual labor, controllability, and precision. To demonstrate this, we develop, implement, and evaluate three new haptics-enabled interaction techniques (kinesthetic metaphors) for precise rotation of 3D objects. The quantitative and qualitative analyses of experiments reveal that the addition of force-feedback improves precision for each of the rotation techniques. Self-reported user feedback further exposes a novel aspect of kinesthetic manipulation in its ability to mitigate arm fatigue for close-range spatial manipulation tasks.
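The abstract does not detail the three kinesthetic metaphors, so the snippet below only illustrates one plausible metaphor of this kind: a viscous resistance that opposes fast rotations so the user must slow down for fine adjustments. The function and its constants are assumptions, not the authors' techniques.

```python
def damping_torque(angular_velocity_deg_s, stiffness=0.02, max_torque=3.0):
    """Resistive torque opposing the current rotation speed, clamped to what a
    desktop haptic device can render; slow, deliberate rotations feel nearly
    free, while fast rotations are damped, encouraging precise adjustment."""
    torque = -stiffness * angular_velocity_deg_s
    return max(-max_torque, min(max_torque, torque))

# Usage each haptic frame (hypothetical device handle):
#   device.set_torque(damping_torque(current_angular_velocity))
```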


2021 ◽  
Vol 15 (2) ◽  
pp. 49-55
Author(s):  
Dino Caesaron ◽  
Rio Prasetyo Lukodono ◽  
Yunita Nugrahaini Safrudin

User performance when interacting with three-dimensional (3D) objects has become an important issue in the recent development of virtual reality applications. Additionally, the basic premise of current Virtual Reality (VR) supports the development of a viable interface between humans and machines. The research focuses on users' interaction techniques, considering two approaches (direct and indirect interaction) while users interact with three-dimensional objects. Numerous possible applications can benefit from virtual reality, yet a few fundamental visual and cognitive activities in the Virtual Environment (VE), such as how users interpret space under direct and indirect interaction, are not well established. The experiment is performed in a stereoscopic environment using a reciprocal tapping task. Participants use a direct pointing technique as well as an indirect cursor technique to select a stereoscopic spherical target. The results show that, with the direct interaction technique, users' perception of an object's position tends to converge toward the center of the simulated area; this convergence is not observed in the indirect cursor condition. Users' pointing estimation is more accurate with the indirect interaction approach. The findings provide an understanding of the interaction characteristics exhibited by users in the stereoscopic environment. Importantly, developers of virtual environments may use these results when designing effective user interfaces around specific interaction techniques.
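To make the "convergence toward the center" finding concrete, one simple way to quantify it is sketched below: compare each selection's distance from the display center with the target's distance from the center. The metric and names are illustrative assumptions, not the study's reported measure.

```python
import numpy as np

def center_bias(target_xy, selection_xy, center_xy=(0.0, 0.0)):
    """Ratio of the selection's distance from the display center to the target's
    distance from the center: values below 1.0 indicate the selection drifted
    ("converged") toward the center, values above 1.0 indicate outward error."""
    c = np.asarray(center_xy, dtype=float)
    d_target = np.linalg.norm(np.asarray(target_xy, dtype=float) - c)
    d_select = np.linalg.norm(np.asarray(selection_xy, dtype=float) - c)
    return d_select / d_target if d_target > 0 else float("nan")

# Example: a target 100 px from the center selected at 85 px -> bias 0.85 (centerward)
```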


Author(s):  
Sean T. Hayes ◽  
Julie A. Adams

Linear changes in position are difficult to measure using only a mobile device's onboard sensors. Prior research has relied on external sensors or known environmental references to develop mobile phone interaction techniques. The unique head-tracking capabilities of the Amazon Fire Phone® were leveraged and evaluated for navigating large application spaces using device-motion gestures. Although touch interaction is shown to outperform device motion, this research demonstrates the feasibility of effective device-motion gestures that rely on changes in the device's position and orientation. Design guidance for future device-motion interaction capabilities is provided.
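As a sketch of the kind of device-motion gesture this implies, the function below detects a simple "pull toward the user" movement from a short history of estimated device positions; the window, threshold, and the gesture itself are illustrative assumptions, not the Fire Phone implementation.

```python
def detect_pull_gesture(z_positions_m, window=10, threshold_m=0.05):
    """Return True when the device has moved toward the user (positive z) by more
    than `threshold_m` over the last `window` pose samples, e.g. positions derived
    from head-tracking-based pose estimates rather than raw accelerometer data."""
    if len(z_positions_m) < window:
        return False
    recent = z_positions_m[-window:]
    return (recent[-1] - recent[0]) > threshold_m
```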


2012 ◽  
Vol 433-440 ◽  
pp. 4584-4589
Author(s):  
Yi Lin ◽  
Yue Liu

This paper analyzes the virtual engine platform Virtools and the theory of multi-touch technology, explores how to merge Virtools' behavior interaction module with a multi-touch control mechanism, and proposes a solution that integrates virtual scene rendering with touch interaction. By creating a virtual scene of the Dunhuang Mogao Grotto and combining it with Virtools' interactive strengths in manipulating 3D objects, the proposed system creates an immersive sensation of entering the real environment. During the tour, the system stimulates visitors' strong interest in the culture of the historical site. Implementation of the proposed system shows that the research on virtual scene rendering and the integration of interaction technologies has achieved the desired effect.


2019 ◽  
Vol 9 (21) ◽  
pp. 4652 ◽  
Author(s):  
Chiuhsiang Joe Lin ◽  
Dino Caesaron ◽  
Bereket Haile Woldegiorgis

Recent developments in virtual environment applications allow users to interact with three-dimensional (3D) objects in virtual environments. As interaction with 3D objects in virtual environments becomes more established, it is important to investigate user performance with such interaction techniques within a specific task. This study investigated two interaction modes, direct and indirect, depending on how the users interacted with the 3D objects, by measuring the accuracy of egocentric distance estimation in a stereoscopic environment. Fourteen participants were recruited to perform an acquisition task with both direct pointing and indirect cursor techniques at three egocentric distances and three task difficulty levels. The accuracy of the egocentric distance estimation, throughput, and task completion time were analyzed for each interaction technique. The indirect cursor technique was found to be more accurate than the direct pointing one. On the other hand, a higher throughput was observed with the direct pointing technique than with the indirect cursor technique. However, there were no significant differences in task completion time between the two interaction techniques. The results also showed accuracy to be higher at the greatest distance (150 cm from the participant) than at the closer distances of 90 cm and 120 cm. Furthermore, the difficulty of the task also significantly affected the accuracy, with accuracy lower in the highest difficulty condition than in the medium and low difficulty conditions. The findings of this study contribute to the understanding of user-interaction techniques in a stereoscopic environment. Furthermore, developers of virtual environments may refer to these findings in designing effective user interactions, especially those in which performance relies on accuracy.
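Throughput in such target-acquisition studies is conventionally computed from Fitts' index of difficulty and the movement time; the Shannon formulation below is the standard one, though the abstract does not state which exact formulation the study used.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput in bits per second: ID divided by the movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 150 cm egocentric reach to a 6 cm target acquired in 1.2 s
# gives ID = log2(150/6 + 1) ≈ 4.70 bits and throughput ≈ 3.92 bits/s.
```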


1989 ◽  
Vol 136 (2) ◽  
pp. 124
Author(s):  
Ming-Hong Chan ◽  
Hung-Tat Tsui
