Virtual Object Categorisation Methods: Towards a Richer Understanding of Object Grasping for Virtual Reality

2021
Author(s): Andreea Dalia Blaga, Maite Frutos-Pascual, Chris Creed, Ian Williams

Electronics, 2021, Vol 10 (9), pp. 1069
Author(s): Deyby Huamanchahua, Adriana Vargas-Martinez, Ricardo Ramirez-Mendoza

An exoskeleton is an external structural mechanism with joints and links that works in tandem with the user to increase, reinforce, or restore human performance. Virtual Reality can be used to produce environments in which the intensity of practice and the feedback on performance can be manipulated to provide tailored motor training. Can the two technologies be combined and synchronized to achieve better performance? This paper presents a kinematic analysis of the position and orientation synchronization between the pose of an n-DoF upper-limb exoskeleton and a projected object in an immersive virtual reality environment viewed through a VR headset. To achieve this goal, the exoskeleton mechanism is analyzed using Euler angles and the Pieper technique to obtain the equations for its orientation, forward, and inverse kinematic models. This paper extends the authors' previous work by using an early-stage upper-limb exoskeleton prototype for the synchronization process.
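As a rough illustration of the forward-kinematics step mentioned above (the paper itself derives its models with Euler angles and the Pieper technique), the following Python sketch chains standard Denavit-Hartenberg transforms for a hypothetical serial chain; the DH parameters shown are placeholders, not the exoskeleton's actual values.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Compose the per-joint transforms to get the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 pose: 3x3 rotation block plus position vector

# Placeholder DH table (d, a, alpha) for an illustrative 3-DoF chain.
dh_params = [(0.0, 0.30, 0.0), (0.0, 0.25, 0.0), (0.0, 0.10, 0.0)]
pose = forward_kinematics([0.1, 0.4, -0.2], dh_params)
position = pose[:3, 3]  # this pose would be synchronized with the projected VR object
```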


Author(s): Mario Covarrubias, Michele Antolini, Monica Bordegoni, Umberto Cugini

This paper describes a multimodal system whose aim is to replicate, in a virtual reality environment, some typical operations performed by professional designers with real splines laid over the surface of a physical prototype of an aesthetic product, in order to better evaluate the characteristics of the shape they are creating. The system is able not only to haptically render continuous contact along a curve by means of a servo-controlled haptic strip, but also to allow the user to modify the shape by applying force directly to the haptic device. The haptic strip can bend and twist to better approximate the portion of the virtual object's surface over which it is lying. The device is 600 mm long and is controlled by 11 digital servos that shape the strip (6 for bending and 5 for twisting), plus two MOOG-FCS HapticMaster devices and two additional digital servos for 6-DOF positioning. We have also developed additional input devices integrated with the haptic strip: two force-sensitive handles positioned at the extremities of the strip, a capacitive linear touch sensor placed along the surface of the strip, and four buttons. These devices are used to interact with the system, to select menu options, and to apply deformations to the virtual object. The paper describes the interaction modalities and the developed user interface, the applied methodologies, the results achieved, and the conclusions drawn from the user tests.
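The abstract does not disclose how the bending servos are driven; one generic way to approximate a target curve with a discrete strip is to sample the curve and convert the angle between consecutive segments into per-servo bend commands. The sketch below illustrates only that generic idea; the sampling, gains, and mapping to the strip's 6 bending servos are assumptions.

```python
import numpy as np

def bend_angles(points):
    """Discrete bending angle at each interior sample of a 3D curve."""
    p = np.asarray(points, dtype=float)
    v = np.diff(p, axis=0)                              # segment vectors
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cosines = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    return np.arccos(cosines)                           # radians, one per interior point

# Hypothetical mapping: 8 samples of the spline -> 7 segments -> 6 interior
# bend angles, one per bending servo of the strip.
curve_samples = np.array([[x, 0.05 * x**2, 0.0] for x in np.linspace(0.0, 0.6, 8)])
servo_rad = bend_angles(curve_samples)
servo_deg = np.degrees(servo_rad)                       # commands sent to the servos
```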


Author(s): Raajan N. R., Nandhini Kesavan

Augmented Reality (AR) plays a vital role in the field of visual computing. AR is often confused with Virtual Reality (VR), but the two are distinct: while VR creates a whole new world, AR builds an environment in real time by overlaying virtual components on real ones. For this reason, AR falls under the category of 'mixed reality'. AR can be viewed on smart electronic devices such as mobile phones, laptops, projectors, and tablets. AR systems can be broadly classified as marker-based or markerless: a marker-based system relies on a predefined pattern, whereas a markerless system does not need one. In the marker-based case, showing the pattern to a webcam allows the system to recognise it and superimpose the virtual object on the marker. We introduce a new, efficient solution for integrating a virtual object into the real world, which can be very useful in tourism and advertising for showcasing objects. The ultimate goal is to augment 3D video onto the real world in a way that increases the viewer's conceptual understanding of the subject.
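The marker-based pipeline sketched above (detect a pattern in the webcam image, then anchor a virtual object to it) is commonly implemented with OpenCV's ArUco module. The sketch below is a minimal example of that generic approach, not the authors' actual system; it uses the classic (pre-4.7) cv2.aruco API, and the camera intrinsics are placeholders.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- replace with values from a real camera calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)  # classic (pre-4.7) API
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # Pose of each detected marker: where the virtual object would be anchored.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.05, camera_matrix, dist_coeffs)  # 5 cm marker side length
        for rvec, tvec in zip(rvecs, tvecs):
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
    cv2.imshow("marker AR", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```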


2016, Vol 12 (9), pp. 144
Author(s): Qin Liu, Ru-liang Zhang

Virtual practice is a new form of human practice and an important change from earlier forms of human practice. It is composed of the virtual subject, the virtual object, and the virtual intermediary. With the development of virtual practice, virtual cognition has radically changed epistemology. Virtual practice is a hot issue in the philosophy of virtual reality, and the discussion mainly focuses on virtual practice itself, its composition, and virtual cognition.


2009, Vol 18 (1), pp. 39-53
Author(s): Anatole Lécuyer

This paper presents a survey of the main results obtained in the field of "pseudo-haptic feedback": a technique meant to simulate haptic sensations in virtual environments using visual feedback and properties of human visuo-haptic perception. Pseudo-haptic feedback uses vision to distort haptic perception and verges on haptic illusion. It has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. This paper describes several experiments in which these haptic properties were simulated, and assesses the definition and the properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training in milling machine operation, or a medical simulator for training in regional anesthesia procedures.
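Pseudo-haptic stiffness is typically produced by manipulating the control/display (C/D) ratio, i.e. how far the on-screen proxy moves per unit of real input motion. The short sketch below illustrates that general idea with made-up parameter values; it is not a reproduction of the specific setups surveyed in the paper.

```python
def pseudo_haptic_displacement(input_displacement_mm, stiffness):
    """Scale the visual displacement of a virtual spring by its stiffness.

    A stiffer virtual spring -> smaller control/display ratio -> the on-screen
    proxy appears to move less for the same physical input, which users tend
    to perceive as greater resistance (an illusory haptic cue).
    """
    base_cd_ratio = 1.0                  # 1:1 mapping for the reference stiffness
    cd_ratio = base_cd_ratio / stiffness
    return input_displacement_mm * cd_ratio

# The same 10 mm of physical motion applied to two virtual springs:
soft = pseudo_haptic_displacement(10.0, stiffness=0.5)   # displayed as 20 mm
stiff = pseudo_haptic_displacement(10.0, stiffness=2.0)  # displayed as 5 mm
```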


2007, Vol 16 (3), pp. 293-306
Author(s): Gregorij Kurillo, Matjaž Mihelj, Marko Munih, Tadej Bajd

In this article we present a new isometric input device for multi-fingered grasping in virtual environments. The device was designed to simultaneously measure the forces applied by the thumb, index, and middle finger. A mathematical model of grasping, adopted from the analysis of multi-fingered robot hands, was applied to achieve multi-fingered interaction with virtual objects. We used the concept of visual haptic feedback, in which the user is presented with visual cues to acquire haptic information from the virtual environment. The virtual object responded dynamically to the forces and torques applied by the three fingers. The application of the isometric finger device for multi-fingered interaction is demonstrated in four tasks aimed at rehabilitating hand function in stroke patients: opening the combination lock on a safe, filling and pouring water from a glass, muscle strength training with an elastic torus, and a force tracking task. The training tasks were designed to train patients' grip force coordination and increase muscle strength through repetitive exercises. The virtual reality system was evaluated in a group of healthy subjects and two post-stroke patients (one early post-stroke and one chronic) to obtain overall performance results. The healthy subjects demonstrated consistent performance with the finger device after the first few trials. The two post-stroke patients completed all four tasks, although with much lower performance scores than the healthy subjects. The results of this preliminary assessment suggest that the patients could further improve their performance through virtual reality training.
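In the multi-fingered grasping models borrowed from robot-hand analysis, the measured fingertip forces are mapped to a net force and torque (wrench) acting on the virtual object. The sketch below shows that standard construction for three point contacts; the contact positions and forces are illustrative values, not the device's geometry or the paper's model parameters.

```python
import numpy as np

def object_wrench(contact_points, finger_forces):
    """Net force and torque on the object from point-contact fingertip forces.

    contact_points: (3, 3) fingertip positions relative to the object centre.
    finger_forces:  (3, 3) force vectors applied by thumb, index, middle finger.
    """
    points = np.asarray(contact_points, dtype=float)
    forces = np.asarray(finger_forces, dtype=float)
    net_force = forces.sum(axis=0)
    net_torque = np.cross(points, forces).sum(axis=0)   # r x f summed over contacts
    return net_force, net_torque

# Illustrative symmetric three-finger grasp of a small object (metres, newtons).
contacts = [[0.02, 0.0, 0.0], [-0.01, 0.017, 0.0], [-0.01, -0.017, 0.0]]
forces = [[-1.0, 0.0, 0.2], [0.5, -0.87, 0.2], [0.5, 0.87, 0.2]]
f, tau = object_wrench(contacts, forces)   # would drive the virtual object's dynamics
```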


2019, Vol 9 (2)
Author(s): Muhammad Nur Affendy Nor'a, Ajune Wanis Ismail

Applications that adopt a collaborative system allow multiple users to interact with one another in the same virtual space, whether in Virtual Reality (VR) or Augmented Reality (AR). This paper aims to integrate the VR and AR spaces in a collaborative user interface that enables users on different types of interfaces to cooperate in a single shared space. A gesture interaction technique is proposed as the interaction tool in both virtual spaces, as it provides more natural interaction with virtual objects. The integration of the VR and AR spaces provides cross-discipline shared data interchange through the network protocol of a client-server architecture.
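A minimal way to realise the client-server data interchange mentioned above is to broadcast object transforms as small JSON messages over a socket. The message schema, host name, and port in the sketch below are illustrative assumptions, not the authors' protocol.

```python
import json
import socket

def send_transform(sock, object_id, position, rotation_quat):
    """Send one shared-object update to the collaboration server (assumed schema)."""
    msg = {
        "id": object_id,
        "pos": list(position),        # x, y, z in the shared space
        "rot": list(rotation_quat),   # quaternion x, y, z, w
    }
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

# Hypothetical server endpoint shared by the VR and AR clients.
sock = socket.create_connection(("collab-server.local", 9000))
send_transform(sock, "virtual_object_1", (0.1, 1.2, -0.5), (0.0, 0.0, 0.0, 1.0))
sock.close()
```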


Sensors, 2021, Vol 21 (24), pp. 8329
Author(s): Vratislav Cmiel, Larisa Chmelikova, Inna Zumberg, Martin Kralik

With the development of light microscopy, it is becoming increasingly easy to obtain detailed multicolor fluorescence volumetric data, and their appropriate visualization has become an integral part of fluorescence imaging. Virtual reality (VR) technology provides a new way of visualizing multidimensional image data or models so that the entire 3D structure can be observed intuitively, together with different object features or details on or within the object. As the volumetric data being imaged become more advanced, the demands on controlling virtual object properties increase, especially for the multicolor objects obtained by fluorescence microscopy. Existing solutions based on universal VR controllers, or software controllers that require sufficient free space for the user to manipulate data in VR, are not usable in many practical applications. We therefore developed a custom gesture-based VR control system with a custom controller connected to the FluoRender visualization environment, using a multitouch sensor disk. Our control system may be a good choice for easier and more comfortable manipulation of virtual objects and their properties, especially with confocal microscopy, currently the most widely used technique for acquiring volumetric fluorescence data.
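The abstract does not describe the controller-to-FluoRender bridge at code level; purely to illustrate the general idea of mapping multitouch-disk gestures to object properties, the sketch below converts a circular swipe on the disk into a clamped intensity adjustment. The gesture representation, gain, and value range are assumptions.

```python
import math

def swipe_angle(prev_xy, curr_xy):
    """Signed angular change (radians) of a touch point around the disk centre."""
    a0 = math.atan2(prev_xy[1], prev_xy[0])
    a1 = math.atan2(curr_xy[1], curr_xy[0])
    return math.atan2(math.sin(a1 - a0), math.cos(a1 - a0))  # wrapped to [-pi, pi]

def adjust_intensity(current, prev_xy, curr_xy, gain=0.1):
    """Map a circular swipe on the sensor disk to a clamped channel intensity."""
    delta = gain * swipe_angle(prev_xy, curr_xy)
    return min(1.0, max(0.0, current + delta))

# One counter-clockwise step of the finger around the disk nudges intensity up.
intensity = adjust_intensity(0.5, prev_xy=(1.0, 0.0), curr_xy=(0.95, 0.31))
```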


2021, Vol 2
Author(s): Mariusz P. Furmanek, Madhur Mangalam, Kyle Lockwood, Andrea Smith, Mathew Yarossi, ...

Technological advancements and increased access have prompted the adoption of head-mounted display based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (the size of the spherical pointers representing the fingertips) influence reach-to-grasp kinematics. In Study 1, visual, auditory, or combined feedback was compared as a sensory substitute to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test whether virtual collider size impacts reach-to-grasp. Our data indicate that collider size, but not sensory feedback modality, significantly affected the kinematics of grasping: larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object's size, and hence on the motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately to the changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
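As a minimal sketch of the two quantities this study manipulates and measures (whether a spherical fingertip collider overlaps the virtual object, and the grip aperture between thumb and index colliders), the code below uses a sphere-sphere overlap test as a stand-in for the object; all sizes are placeholder values, not the study's parameters.

```python
import numpy as np

def collides(fingertip_pos, collider_radius, object_pos, object_radius):
    """Sphere-sphere overlap test between a fingertip collider and the object."""
    gap = np.linalg.norm(np.asarray(fingertip_pos) - np.asarray(object_pos))
    return gap <= collider_radius + object_radius

def grip_aperture(thumb_pos, index_pos):
    """Euclidean distance between thumb and index fingertip colliders."""
    return np.linalg.norm(np.asarray(thumb_pos) - np.asarray(index_pos))

# Placeholder values: 1 cm colliders around a 3 cm virtual sphere.
thumb, index = [0.02, 0.0, 0.0], [-0.03, 0.01, 0.0]
obj = [0.0, 0.0, 0.0]
grasped = collides(thumb, 0.01, obj, 0.03) and collides(index, 0.01, obj, 0.03)
aperture = grip_aperture(thumb, index)   # the peak of this trace is the peak aperture
```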


2020
Author(s): Madhur Mangalam, Mathew Yarossi, Mariusz P. Furmanek, Eugene Tunik

Virtual reality (VR) has garnered much interest as a training environment for motor skill acquisition, including for neurological rehabilitation of the upper extremities. While the focus has been on gross upper limb motion, VR applications that involve reaching for, and interacting with, virtual objects are growing. The absence of true haptics in VR when it comes to hand-object interactions raises a fundamentally important question: can haptic-free immersive virtual environments (hf-VEs) support naturalistic coordination of reach-to-grasp movements? This issue has been grossly understudied, and yet it is of significant importance for the development and application of VR across a number of sectors. In a previous study (Furmanek et al. 2019), we reported that reach-to-grasp movements are similarly coordinated in the physical environment (PE) and in hf-VE. The most noteworthy difference was that the closure phase, which begins at maximum aperture and lasts through the end of the movement, was longer in hf-VE than in PE, suggesting that different control laws might govern the initiation of closure in the two environments. To test this, we reanalyzed data from Furmanek et al. (2019), in which participants reached to grasp three differently sized physical objects, and matching 3D virtual object renderings, placed at three different locations. Our analysis revealed two key findings pertaining to the initiation of closure in PE and hf-VE. First, the respective control laws governing the initiation of aperture closure in PE and hf-VE both included state estimates of transport velocity and acceleration, supporting a general unified control scheme for implementing reach-to-grasp across physical and virtual environments. Second, aperture was less informative to the control law in hf-VE. We suggest that this was likely because transport velocity at closure onset and aperture at closure onset were less independent in hf-VE than in PE, ultimately resulting in aperture at closure onset having a weaker influence on the initiation of closure. In this way, the excess time and muscular effort needed to actively bring the fingers to a stop at the interface of a virtual object was factored into the control law governing the initiation of closure in hf-VE. Crucially, this control law remained applicable, albeit with different weights in hf-VE, despite the absence of terminal haptic feedback and potential perceptual differences.
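One way to read a "control law for the initiation of closure" is as a state-based predictor in which closure onset depends on a weighted combination of transport velocity, acceleration, and aperture. The sketch below fits such a linear model to hypothetical trial data and is only an illustration of that style of analysis, not the paper's actual model, data, or coefficients.

```python
import numpy as np

# Hypothetical per-trial states sampled at closure onset:
# columns = transport velocity (m/s), transport acceleration (m/s^2), aperture (m)
X = np.array([[0.62, -1.1, 0.091],
              [0.55, -0.9, 0.085],
              [0.70, -1.4, 0.102],
              [0.48, -0.7, 0.080]])
y = np.array([0.41, 0.45, 0.38, 0.49])   # e.g. normalised time of closure onset

# Ordinary least squares with an intercept: the fitted weights indicate how
# informative each state variable is to the (assumed linear) control law.
A = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
w_velocity, w_acceleration, w_aperture, intercept = weights
```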

