Improving freehand placement for grasping virtual objects via dual view visual feedback in mixed reality

Author(s):  
Maadh Al-Kalbani ◽  
Ian Williams ◽  
Maite Frutos-Pascual
2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Numerous human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create mixed-reality environments in which users can train themselves. Although most of these systems rely strongly on interactivity with users, taking users' states into account, they still do not consider users' preferences when providing help. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to these actions, in order to provide accurate and useful assistance. Before the recognition and instruction processes, it takes users' cooking preferences and, by collaborative filtering, suggests one or more recipes that are likely to satisfy those preferences. When the cooking process starts, ASSIST recognizes users' hand movements using a similarity measure algorithm called AMSS. When a recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, ASSIST analyzes the cause of the failure and provides the user with support information tailored to that cause, helping the user correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
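The abstract above mentions suggesting recipes by collaborative filtering. As a rough illustration of that general idea only (not the authors' implementation; the function, data, and parameters below are all hypothetical), a minimal user-based collaborative filter might rank the recipes liked by the most similar users:

```python
import numpy as np

def suggest_recipes(ratings, user_idx, k=2, top_n=2):
    """User-based collaborative filtering sketch: score unseen recipes by
    the cosine-similarity-weighted ratings of the k most similar users.

    ratings: (n_users, n_recipes) array; 0 means "not yet rated".
    """
    target = ratings[user_idx].astype(float)
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = ratings @ target / np.where(norms == 0, 1e-12, norms)
    sims[user_idx] = -1.0                      # exclude the user themself
    neighbours = np.argsort(sims)[-k:]         # k most similar users
    scores = sims[neighbours] @ ratings[neighbours].astype(float)
    scores[target > 0] = -np.inf               # skip already-rated recipes
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]

# Three users, four recipes; user 0 has not tried recipes 2 and 3.
ratings = np.array([[5, 4, 0, 0],
                    [4, 5, 3, 0],
                    [0, 4, 5, 4]])
print(suggest_recipes(ratings, user_idx=0))    # → [2, 3]
```

Real systems would add rating normalization and handle cold-start users, but the neighbour-weighted scoring step is the core of the user-based approach.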


2020 ◽  
Vol 10 (16) ◽  
pp. 5436 ◽  
Author(s):  
Dong-Hyun Kim ◽  
Yong-Guk Go ◽  
Soo-Mi Choi

A drone must be able to fly without colliding with obstacles, both to protect its surroundings and for its own safety. It should also offer numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in conveying the depth of obstacles, and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to create virtual objects in real space effectively and naturally. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
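The depth perception this abstract describes rests on standard stereo geometry: with a parallel two-camera rig, depth falls out of the pixel disparity between the two views. A minimal sketch of that relation (the numbers below are illustrative, not taken from the paper's rig):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo with parallel optical axes: depth Z = f * B / d,
    where f is the focal length (px), B the baseline (m), and d the
    disparity (px) of the same feature between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature offset 40 px between the two views, with a 700 px focal length
# and a 6 cm baseline (close to a human inter-pupillary distance):
print(depth_from_disparity(700, 0.06, 40))   # → 1.05 (metres)
```

Larger disparities mean nearer objects, which is why a baseline matched to human eye spacing gives depth cues that feel natural in a first-person view.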


2005 ◽  
Vol 14 (6) ◽  
pp. 677-696 ◽  
Author(s):  
Christoph W. Borst ◽  
Richard A. Volz

We present a haptic feedback technique that combines feedback from a portable force-feedback glove with feedback from direct contact with rigid passive objects. This approach is a haptic analogue of visual mixed reality, since it can be used to haptically combine real and virtual elements in a single display. We discuss device limitations that motivated this combined approach and summarize technological challenges encountered. We present three experiments to evaluate the approach for interactions with buttons and sliders on a virtual control panel. In our first experiment, this approach resulted in better task performance and better subjective ratings than the use of only a force-feedback glove. In our second experiment, visual feedback was degraded and the combined approach resulted in better performance than the glove-only approach and in better ratings of slider interactions than both glove-only and passive-only approaches. A third experiment allowed subjective comparison of approaches and provided additional evidence that the combined approach provides the best experience.


2017 ◽  
Vol 16 (1) ◽  
pp. 185
Author(s):  
Mikkel Thøgersen ◽  
John Hansen ◽  
Herta Flor ◽  
Lars Arendt-Nielsen ◽  
Laura Petrini

Abstract
Aims: Visual feedback is hypothesized to play an important role in the phantom limb condition. In this study we create an illusory experimental model of a phantom limb by removing the visual input from the upper limb in a group of intact participants. The aim of the study is to investigate the role of visual feedback in somatosensation, nociception and bodily-self perception.
Methods: Using a novel mixed reality (MR) system, visual feedback of the left hand is removed in order to visually simulate a left-hand amputation in 30 healthy participants (15 females). Using a within-subject design, three conditions are created: a visual amputation condition (MR with no visual input of the hand); a visual condition (MR with normal vision); and a baseline condition (no MR). Thermal detection and nociceptive thresholds are measured using the method of limits. Proprioception of the visually amputated hand is investigated by probing the felt hand position along a proximal-distal axis from the body. A questionnaire is used to assess the effects of the missing visual feedback on the bodily self.
Results: There was a clear proximal drift in proprioception of the left hand between the control and visual amputation conditions (p < 0.001). A significant decrease in cold detection was also found between the control and visual amputation conditions (p < 0.001). Finally, questions on perceptual experiences indicated that the observed proprioceptive retraction of the visually amputated hand was also felt by the participants.
Conclusions: Missing visual feedback strongly influences the perception of the visually amputated arm, underlining the importance of visual feedback. The observed proprioceptive retraction of the hand resembles the telescoping perceptions often reported by phantom limb patients. The novel method developed for this study is a new tool to investigate the influence of visual feedback on the relationship between the bodily self and chronic pain.
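The thresholds in this study are measured with the method of limits, a classical psychophysical procedure: the stimulus intensity is swept up or down and the threshold is estimated from the values at which the participant's report flips. A minimal sketch of that estimate (the data below are invented, not from the study):

```python
def limits_threshold(ascending_flips, descending_flips):
    """Method-of-limits estimate: take the mean of the stimulus values at
    which the report changed, averaged over ascending and descending
    series (which tend to bias the flip point in opposite directions)."""
    flips = list(ascending_flips) + list(descending_flips)
    return sum(flips) / len(flips)

# Hypothetical cold-detection series (deg C at which the change was reported):
print(limits_threshold([30.5, 30.1, 30.3], [31.0, 30.8]))
```

Averaging over both sweep directions is what makes the estimate robust to anticipation and habituation errors.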


2020 ◽  
Vol 10 (3) ◽  
pp. 1135 ◽  
Author(s):  
Mulun Wu ◽  
Shi-Lu Dai ◽  
Chenguang Yang

This paper proposes a novel control system for the path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in a completely real environment or a completely virtual environment. However, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the mobile robot's motion and virtual objects, which can be added to the real environment. First, an interactive interface is presented in the mixed reality device HoloLens. The interface can display the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize real-time interaction between the mobile robot and the virtual objects. Then, the original path planning algorithm, vector field histogram* (VFH*), is modified in the aspects of the threshold, candidate direction selection, and cost function, to make it more suitable for scenes with virtual objects, reduce the number of calculations required, and improve safety. Experimental results demonstrated that the proposed method can generate a motion path for the mobile robot according to the specific requirements of the operator, and achieves good obstacle avoidance performance.

