Haptic Interaction with Video Streams Containing Depth Data

Author(s):  
Shahzad Rasool ◽  
Alexei Sourin
2016 ◽  
Vol 32 (10) ◽  
pp. 1311-1321 ◽  

2013 ◽  
Vol 22 (3) ◽  
pp. 255-270 ◽  
Author(s):  
Yuki Ban ◽  
Takuji Narumi ◽  
Tomohiro Tanikawa ◽  
Michitaka Hirose

In this study, we aim to construct a perception-based shape display system that gives users the sensation of touching virtual objects of varying shapes using only a simple mechanism. Thus far, we have shown that the perceived curvature of a surface or the angle of an edge can be modified by displacing the visual representation of the user's hand. However, this method cannot emulate multi-finger touch because of spatial nonconformity. To solve this problem, we focus on modifying the identification of shapes touched with two fingers by deforming the visual representation of the user's hand. We devised a video see-through system that visually changes the perceived shape of the object a user is touching: the visual representation of the user's hand is deformed as if the user were handling the visual object, while the user is actually handling an object of a different shape. Using this system, we conducted two experiments to investigate the effects of visuo-haptic interaction and evaluate its effectiveness. The first examined the modification of size perception when the fingers did not stroke the shape but only touched it statically. The second examined the modification of shape perception when the fingers dynamically stroked the surface of the shape. The results show that the perceived size of an object handled with the thumb and other finger(s) could be modified as long as the difference between the sizes of the physical and visual stimuli lay within the −40% to 35% range. In addition, we found that the algorithm can modify shape perception when users stroke the shape with multiple fingers.
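The geometry behind the effect described above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' implementation: all function names are hypothetical, and it only shows the two basic computations the abstract implies, checking whether a physical/visual size discrepancy falls inside the −40% to 35% range reported as effective, and symmetrically displacing the rendered thumb and finger so their visual separation matches the visual object's width.

```python
def size_discrepancy(w_phys: float, w_vis: float) -> float:
    """Relative difference between the visual and physical object widths."""
    return (w_vis - w_phys) / w_phys

def within_effective_range(w_phys: float, w_vis: float,
                           lo: float = -0.40, hi: float = 0.35) -> bool:
    """True if the discrepancy lies in the range the experiments found
    effective for modifying perceived size."""
    return lo <= size_discrepancy(w_phys, w_vis) <= hi

def displaced_fingertips(thumb_x: float, finger_x: float, w_vis: float):
    """Displace the rendered thumb/finger symmetrically about the grasp
    midpoint so that their visual separation equals w_vis."""
    mid = (thumb_x + finger_x) / 2.0
    half = w_vis / 2.0
    return (mid - half, mid + half)
```

For example, grasping a 50 mm object while rendering a 60 mm one is a +20% discrepancy, inside the effective range, whereas rendering a 25 mm one (−50%) is outside it.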


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a task-assistant model based on a deep-learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car-engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it useful as an augmented-reality aid for professionals learning to work with new equipment. The architecture of an object-recognition system using augmented-reality glasses is also designed.
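A pipeline like the one described typically ends with a post-processing step over the detector's raw output. The sketch below is a hedged illustration, not the paper's code: the part labels, confidence threshold, and detection tuples are assumptions (the paper does not list its eight class names). It shows the common pattern of filtering YOLOv5-style detections, each a `(x1, y1, x2, y2, confidence, class_id)` box, by confidence and mapping class indices to human-readable labels for an AR overlay.

```python
# Hypothetical labels for the eight annotated car parts (illustrative only).
PART_NAMES = ["alternator", "battery", "brake_fluid_cap", "coolant_cap",
              "dipstick", "fuse_box", "oil_cap", "radiator"]

def label_detections(detections, conf_threshold=0.5):
    """Filter raw detections by confidence and attach part names.

    detections: iterable of (x1, y1, x2, y2, confidence, class_id),
    the per-box layout YOLOv5 produces after non-maximum suppression.
    Returns a list of (part_name, confidence, (x1, y1, x2, y2)) tuples.
    """
    labelled = []
    for x1, y1, x2, y2, conf, cls in detections:
        if conf >= conf_threshold:
            labelled.append((PART_NAMES[int(cls)], conf, (x1, y1, x2, y2)))
    return labelled
```

In practice the detections would come from running a trained model on each video frame, e.g. loading pretrained weights via PyTorch Hub (`torch.hub.load('ultralytics/yolov5', 'yolov5s')`) and fine-tuning on the custom part dataset; the filtering and labelling step stays the same.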


2020 ◽  
Vol 53 (2) ◽  
pp. 5542-5549
Author(s):  
Alexandre Martins ◽  
Mikael Lindberg ◽  
Martina Maggio ◽  
Karl-Erik Årzén
