Directional Force Feedback: Mechanical Force Concentration for Immersive Experience in Virtual Reality

2019 ◽  
Vol 9 (18) ◽  
pp. 3692 ◽  
Author(s):  
Seonghoon Ban ◽  
Kyung Hoon Hyun

In recent years, consumer-level virtual-reality (VR) devices and content have become widely available. Establishing a sense of presence is a key objective of VR, and immersive interfaces with haptic feedback for VR applications have long been in development. Despite state-of-the-art force-feedback research, no study of directional feedback based on force concentration has yet been reported. We therefore developed directional force feedback (DFF), a device that generates directional sensations for VR applications via mechanical force concentration. DFF uses the rotation of motors to concentrate force and deliver directional sensations to the user. To achieve this, we developed a novel method of force concentration for directional sensation: by considering both rotational rebound and gravity, we identified the optimum motor speeds and rotation angles. We also validated the impact of DFF in a virtual environment, showing that users’ presence and immersion within VR were higher with DFF than without it. The user studies demonstrated that the device significantly improves the immersiveness of virtual applications.
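The core actuation idea — spinning a motor so that its reaction torque is felt as a directional cue, corrected for gravity — can be sketched roughly as follows. All function names, parameter names, and values here are illustrative assumptions, not the authors' published design:

```python
import math

def torque_for_direction(target_deg, inertia=2e-4, accel_rad_s2=300.0,
                         mass=0.05, arm=0.03):
    """Hypothetical sketch: reaction torque felt for a desired cue direction.

    Spinning up a flywheel of moment of inertia `inertia` at angular
    acceleration `accel_rad_s2` produces a reaction torque tau = I * alpha.
    A simple gravity term (m * g * arm) opposes upward-pointing cues, so
    upward directions yield a smaller net felt torque at the same command.
    Returns (rotation_angle_deg, net_felt_torque_Nm).
    """
    g = 9.81
    tau_motor = inertia * accel_rad_s2                      # reaction torque from spin-up
    gravity_bias = mass * g * arm * math.sin(math.radians(target_deg))
    tau_felt = tau_motor - gravity_bias                     # net torque along the cue axis
    return target_deg, tau_felt
```

Under these assumed numbers, a horizontal cue (0°) is felt at the full 0.06 N·m, while a vertical cue (90°) loses roughly a quarter of that to gravity — which is why the abstract's joint optimization of speed and angle matters.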

2001 ◽  
Vol 1 (2) ◽  
pp. 123-128 ◽  
Author(s):  
Sergei Volkov ◽  
Judy M. Vance

Virtual reality techniques provide a unique new way to interact with three-dimensional digital objects. Virtual prototyping refers to the use of virtual reality to obtain evaluations of designs while they are still in digital form before physical prototypes are built. While the state-of-the-art in virtual reality relies mainly on the use of stereo viewing and auditory feedback, commercial haptic devices have recently become available that can be integrated into the virtual environment to provide force feedback to the user. This paper outlines a study that was performed to determine whether the addition of force feedback to the virtual prototyping task improved the ability of the participants to make design decisions. Seventy-six people participated in the study. The specific task involved comparing the location and movement of two virtual parking brakes located in the virtual cockpit of an automobile. The results indicate that the addition of force feedback to the virtual environment did not increase the accuracy of the participants’ answers, but it did allow them to complete the task in a shorter time. This paper describes the purpose, methods, and results of the study.


2018 ◽  
Vol 35 (2) ◽  
pp. 149-160 ◽  
Author(s):  
Mustufa H. Abidi ◽  
Abdulrahman M. Al-Ahmari ◽  
Ali Ahmad ◽  
Saber Darmoul ◽  
Wadea Ameen

Abstract The design and verification of assembly operations is essential for planning product production. Virtual prototyping has recently witnessed tremendous progress, reaching a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using virtual reality (VR) models for assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a VR design system that provides stereoscopic visuals, surround sound, and intuitive interaction with the developed models. A dedicated software architecture is proposed to describe the assembly parts and assembly sequence in VR, and a collision detection mechanism provides visual feedback on interference between components. The system was tested on virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in its VR feedback mechanisms, which include visual, auditory, tactile, and force feedback, and that it is effective and efficient for validating assembly design, part design, and operations planning.
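The interference check behind such visual feedback is commonly approximated with an axis-aligned bounding-box (AABB) test before any finer-grained geometry check. This is a generic sketch of that standard technique, not the paper's actual collision-detection implementation:

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Return True if two axis-aligned boxes intersect.

    Each box is given by its minimum and maximum corners as (x, y, z)
    tuples. Boxes intersect only if their intervals overlap on all
    three axes; touching faces count as an overlap here.
    """
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def flag_interferences(parts):
    """Return index pairs of interfering parts, e.g. to highlight them in red.

    `parts` is a list of (min_corner, max_corner) boxes for each assembly part.
    """
    hits = []
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            if aabb_overlap(parts[i][0], parts[i][1], parts[j][0], parts[j][1]):
                hits.append((i, j))
    return hits
```

A production system would follow the broad-phase AABB pass with exact mesh-level tests, but the pass above already suffices to drive the kind of visual interference warnings the abstract describes.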


2019 ◽  
Vol 121 (4) ◽  
pp. 1398-1409 ◽  
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception. NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and find out their relative weightings. 
Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
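The relative weighting of haptics and vision described here is commonly modeled as reliability-weighted (maximum-likelihood) cue combination, in which each cue's weight is inversely proportional to its variance. A minimal sketch of that standard model — not necessarily the authors' exact formulation:

```python
def combine_cues(haptic_est, haptic_var, visual_est, visual_var):
    """Reliability-weighted average of a haptic and a visual estimate.

    Each cue is weighted by its inverse variance (its reliability), and
    the combined estimate has lower variance than either cue alone.
    Returns (combined_estimate, combined_variance).
    """
    w_h = (1.0 / haptic_var) / (1.0 / haptic_var + 1.0 / visual_var)
    combined = w_h * haptic_est + (1.0 - w_h) * visual_est
    combined_var = 1.0 / (1.0 / haptic_var + 1.0 / visual_var)
    return combined, combined_var
```

With equal variances the two cues are simply averaged; a delayed or noisier visual signal would receive a correspondingly smaller weight, which is the quantity the study's force-profile manipulation is designed to expose.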


2016 ◽  
Vol 8 (2) ◽  
pp. 60-68 ◽  
Author(s):  
Igor D.D. Curcio ◽  
Anna Dipace ◽  
Anita Norlund

Abstract The purpose of this article is to highlight the state of the art of virtual reality, augmented reality, and mixed reality technologies and their applications in formal education. We also present a selected list of case studies that demonstrate the utility of these technologies in the context of formal education. Furthermore, as a byproduct, these case studies also show that, although the industry is able to develop very advanced virtual-environment technologies, their pedagogical implications depend strongly on a well-designed theoretical framework.


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Ashraf Ayoub ◽  
Yeshwanth Pulijala

Abstract Background Virtual reality is the science of creating a virtual environment for the assessment of various anatomical regions of the body and for diagnosis, planning, and surgical training. Augmented reality is the superimposition of patient-specific 3D virtual content onto the real surgical field using semi-transparent glasses. The aim of this study is to provide an overview of the literature on the application of virtual and augmented reality in oral and maxillofacial surgery. Methods We reviewed the literature and the existing databases using Ovid MEDLINE, the Cochrane Library, and PubMed. All studies in the English literature from the last 10 years, 2009 to 2019, were included. Results We identified 101 articles related to the broad application of virtual reality in oral and maxillofacial surgery: 8 systematic reviews, 4 expert reviews, 9 case reports, 5 retrospective surveys, 2 historical perspectives, 13 manuscripts on virtual education and training, 5 on haptic technology, 4 on augmented reality, 10 on image fusion, and 41 on prediction planning for orthognathic surgery and maxillofacial reconstruction. Dental implantology and orthognathic surgery are the most frequent applications of virtual and augmented reality. Virtual planning improved the accuracy of inserting dental implants using either static guidance or dynamic navigation. In orthognathic surgery, prediction planning and intraoperative navigation are the main applications of virtual reality. Virtual reality has been used to improve the delivery of education and the quality of training in oral and maxillofacial surgery by creating a virtual environment of the surgical procedure. Haptic feedback provided additional immersion to improve manual dexterity and clinical training.
Conclusion Virtual and augmented reality have contributed to the planning of maxillofacial procedures and to surgical training. Few articles highlighted the importance of this technology in improving the quality of patient care, and there are few prospective randomized studies comparing the impact of virtual reality with standard methods of delivering oral surgery education.


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1814 ◽  
Author(s):  
Yuzhao Liu ◽  
Yuhan Liu ◽  
Shihui Xu ◽  
Kelvin Cheng ◽  
Soh Masuko ◽  
...  

Despite the convenience offered by e-commerce, online apparel shopping carries various product-related risks, as consumers can neither physically see products nor try them on. Augmented reality (AR) and virtual reality (VR) technologies have been used to improve the online shopping experience. We therefore propose an AR- and VR-based try-on system that offers users a novel shopping experience in which they can view garments fitted onto a personalized virtual body. Recorded personalized motions allow users to dynamically interact with their dressed virtual body in AR. We conducted two user studies to compare the roles of VR- and AR-based try-on and to validate the impact of personalized motions on the virtual try-on experience. In the first study, a mobile application with AR- and VR-based try-on was compared to a traditional e-commerce interface. In the second study, personalized avatars with pre-defined motion and with personalized motion were compared to a personalized no-motion avatar with AR-based try-on. The results show that AR- and VR-based try-on can positively influence the shopping experience compared with a traditional e-commerce interface. Overall, AR-based try-on provides better, more realistic garment visualization than VR-based try-on. In addition, we found that personalized motions do not directly affect the user’s shopping experience.


Author(s):  
Adam J. Faeth ◽  
Chris Harding

This research describes a theoretical framework for designing multimodal feedback for 3D buttons in a virtual environment. Virtual button implementations often suffer from inadequate feedback compared to their mechanical, real-world counterparts. This lack of feedback can lead to accidental button actuations and reduce the user’s ability to discover how to interact with the virtual button. We propose a framework for more expressive virtual button feedback that communicates visual, audio, and haptic cues to the user. We apply the theoretical framework by implementing a prototype software library that supports multimodal feedback from virtual buttons in a 3D virtual-reality workspace.
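The framework's central idea — one button event fanned out to visual, audio, and haptic channels so the modalities stay synchronized — can be sketched as a small dispatcher. Class and event names here are illustrative, not the paper's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualButton:
    """Dispatches each button state change to every registered feedback channel."""
    handlers: Dict[str, List[Callable[[str], None]]] = field(default_factory=dict)

    def on(self, event: str, handler: Callable[[str], None]) -> None:
        # Register a visual, audio, or haptic callback for an event
        # such as "hover", "press", or "release".
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event: str) -> None:
        # Notify all channels at once, so e.g. the click sound, the
        # visual depression, and the haptic pulse land together.
        for handler in self.handlers.get(event, []):
            handler(event)
```

Keeping one dispatch point per event is what makes it easy to add or drop a modality — the discoverability problem the abstract describes comes from buttons that render only one channel.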


Robotica ◽  
2009 ◽  
Vol 28 (1) ◽  
pp. 47-56 ◽  
Author(s):  
M. Karkoub ◽  
M.-G. Her ◽  
J.-M. Chen

SUMMARY In this paper, an interactive virtual-reality motion simulator is designed and analyzed. The main components of the system are a bilateral control interface, networking, a virtual environment, and a motion simulator. The virtual-reality entertainment system uses a virtual environment that lets the operator feel actual feedback through a haptic interface, as well as the distorted motion from the virtual environment, just as s/he would in the real environment. The control scheme for the simulator uses the change in velocity and acceleration that the operator imposes on the joystick, the environmental changes imposed on the motion simulator, and the haptic feedback to the operator to maneuver the simulator in the real environment. The stability of the closed-loop system is analyzed using the Nyquist stability criterion. The proposed simulator design is shown to work well, and the theoretical findings are validated experimentally.
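For a concrete sense of a Nyquist-based stability check, consider a generic third-order open-loop transfer function (an assumed example, not the paper's actual plant model): closed-loop stability of the unity-feedback loop can be read off from the gain margin at the phase-crossover frequency.

```python
import math

def open_loop(s, K=1.0):
    """Illustrative open-loop transfer function L(s) = K / (s (s+1) (s+2))."""
    return K / (s * (s + 1.0) * (s + 2.0))

def gain_margin_db(K=1.0):
    """Gain margin of the unity-feedback loop, evaluated at phase crossover.

    For this L(s) the phase reaches -180 degrees at w = sqrt(2) rad/s;
    the Nyquist criterion then requires |L(jw)| < 1 there (i.e. a
    positive gain margin) for closed-loop stability.
    """
    w = math.sqrt(2.0)
    mag = abs(open_loop(complex(0.0, w), K))
    return 20.0 * math.log10(1.0 / mag)
```

For K = 1 the margin is 20·log10(6) ≈ 15.6 dB, so this example loop is stable; raising K to 6 drives the Nyquist plot through the -1 point and the margin to 0 dB, the stability boundary.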


2021 ◽  
Author(s):  
Belén Agulló ◽  
Anna Matamala
Virtual reality has attracted the attention of industry and researchers. Its applications for entertainment and audiovisual content creation are endless. Filmmakers are experimenting with different techniques to create immersive stories, and subtitle creators and researchers are finding new ways to implement (sub)titles in this new medium. In this article, the state of the art of cinematic virtual-reality content is presented, and the current challenges filmmakers face in this medium, together with the impact of immersive content on subtitling practices, are discussed. Moreover, the studies on subtitles in 360° videos carried out so far and their results are reviewed. Finally, the results of a corpus analysis are presented to illustrate current subtitle practices at The New York Times and the BBC. The results shed light on issues such as position, innovative graphic strategies, and different subtitle functions, challenging current standard subtitling practices for 2D content.

