Multi-Mode Haptic Display of Image Based on Force and Vibration Tactile Feedback Integration

Author(s):  
Lei Tian ◽  
Aiguo Song ◽  
Dapeng Chen

To enhance the realism of image-based haptic display, it is widely expected that the various characteristics of the objects in an image should be expressed using different kinds of haptic feedback. To this end, this paper proposes a multi-mode haptic display method for images, comprising multi-feature extraction from the image and expression of the image through several types of haptic rendering. First, a device structure integrating force and vibrotactile feedback was designed for multi-mode haptic display. Meanwhile, the three-dimensional geometric shape, detail texture, and outline of the object in the image were extracted using various image processing algorithms. Then, a rendering method for the object in the image was proposed based on psychophysical experiments with the piezoelectric ceramic actuator. The 3D geometric shape, detail texture, and outline of the object were rendered by force and vibrotactile feedback, respectively. Finally, these three features of the image were expressed haptically and simultaneously by the integrated device. Results of haptic perception experiments show that the multi-mode haptic display method can effectively improve the authenticity of haptic perception.
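As an illustration of the multi-feature extraction step, the short Python sketch below shows one way the outline, detail texture, and an approximate height map for the 3D geometric shape could be pulled out of an image as separate channels for haptic rendering; the specific operators (Gaussian-smoothed intensity, Laplacian filtering, Canny edges) are assumptions made for the sketch, not the algorithms used in the paper.

    # Minimal sketch (not the authors' algorithms): extract three image features
    # that could drive separate haptic channels.
    import cv2
    import numpy as np

    def extract_haptic_features(image_path):
        """Return (height_map, texture_map, outline_map), each scaled to [0, 1]."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(image_path)

        # Approximate 3D geometric shape: smoothed intensity as a pseudo height map
        # (assumption: brighter pixels are treated as higher surface points).
        height_map = cv2.GaussianBlur(gray, (21, 21), 0).astype(np.float32) / 255.0

        # Detail texture: high-frequency content from a Laplacian filter.
        texture_map = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))
        texture_map /= texture_map.max() + 1e-6

        # Outline: Canny edge detection.
        outline_map = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0

        return height_map, texture_map, outline_map

    # Force feedback could then follow height_map, while vibrotactile amplitude
    # tracks texture_map and a distinct vibration cue marks outline_map.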

Author(s):  
Thomas A. Furness III ◽  
Woodrow Barfield

We understand from the anthropologists that almost from the beginning of our species we have been tool builders. Most of these tools have been associated with the manipulation of matter. With these tools we have learned to organize or reorganize and arrange the elements for our comfort, safety, and entertainment. More recently, the advent of the computer has given us a new kind of tool. Instead of manipulating matter, the computer allows us to manipulate symbols. Typically, these symbols represent language or other abstractions such as mathematics, physics, or graphical images. These symbols allow us to operate at a different conscious level, providing a mechanism to communicate ideas as well as to organize and plan the manipulation of matter that will be accomplished by other tools. However, a problem with the current technology that we use to manipulate symbols is the interface between the human and the computer, that is, the means by which we interact with the computer and receive feedback that our actions, thoughts, and desires are recognized and acted upon. Another problem with current computing systems is the format with which they display information. Typically, the computer, via a display monitor, only allows a limited two-dimensional view of the three-dimensional world we live in. For example, when using a computer to design a three-dimensional building, what we see and interact with is often only a two-dimensional representation of the building, or at most a so-called 2½D perspective view. Furthermore, unlike the sounds in the real world which stimulate us from all directions and distances, the sounds emanating from a computer originate from a stationary speaker, and when it comes to touch, with the exception of a touch screen or the tactile feedback provided by pressing a key or mouse button (limited haptic feedback to be sure), the tools we use to manipulate symbols are primitive at best. This book is about a new and better way to interact with and manipulate symbols. These are the technologies associated with virtual environments and what we term advanced interfaces. In fact, the development of virtual environment technologies for interacting with and manipulating symbols may represent the next step in the evolution of tools.


2014 ◽  
Vol 112 (12) ◽  
pp. 3189-3196 ◽  
Author(s):  
Chiara Bozzacchi ◽  
Robert Volcic ◽  
Fulvio Domini

Perceptual estimates of three-dimensional (3D) properties, such as the distance and depth of an object, are often inaccurate. Given the accuracy and ease with which we pick up objects, it may be expected that perceptual distortions do not affect how the brain processes 3D information for reach-to-grasp movements. Nonetheless, empirical results show that grasping accuracy is reduced when visual feedback of the hand is removed. Here we studied whether specific types of training could correct grasping behavior so that it remains accurate even when any form of feedback is absent. Using a block design paradigm, we recorded the movement kinematics of subjects grasping virtual objects located at different distances in the absence of visual feedback of the hand and haptic feedback of the object, before and after training blocks with different feedback combinations (vision of the thumb and vision of the thumb and index finger, with and without tactile feedback of the object). In the Pretraining block, we found systematic biases of the terminal hand position, the final grip aperture, and the maximum grip aperture similar to those reported in perceptual tasks. Importantly, the distance at which the object was presented modulated all these biases. In the Posttraining blocks, only the hand position was partially adjusted, but final and maximum grip apertures remained unchanged. These findings show that, when visual and haptic feedback are absent, systematic distortions of 3D estimates affect reach-to-grasp movements in the same way as they affect perceptual estimates. Most importantly, accuracy cannot be learned, even after extensive training with feedback.


Actuators ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 60
Author(s):  
Eun-Hyuk Lee ◽  
Sang-Hoon Kim ◽  
Kwang-Seok Yun

Haptic displays have been developed to provide operators with rich tactile information using simple structures. In this study, a three-axis tactile actuator capable of thermal display was developed to deliver tactile sensations more realistically and intuitively. The proposed haptic display uses pneumatic pressure to provide shear and normal tactile pressure through the inflation of balloons built into the device. The device provides a lateral displacement of ±1.5 mm for shear haptic feedback and a vertical balloon inflation of up to 3.7 mm for normal haptic feedback. In addition to mechanical haptic feedback, it delivers thermal feedback to the operator through a heater attached to the finger stage of the device. A custom-designed control module generates the appropriate haptic feedback by computing signals from sensors or control computers. This control module has a manual gain control function to compensate for the force exerted on the device by the user's fingers. Experimental results showed that this gain compensation improved the positional accuracy and linearity of the device and minimized hysteresis. The temperature of the device could be controlled by a pulse-width modulation signal from room temperature to 90 °C. Psychophysical experiments showed that cognitive accuracy was affected by the gain, whereas it was not significantly affected by temperature.
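As a rough sketch of how the pulse-width modulation thermal control described above might work, the following minimal Python example maps a target temperature to a duty cycle with a simple proportional rule; the controller structure and the gain value are assumptions, not the authors' control module.

    # Minimal sketch (assumed, generic logic; not the authors' control module):
    # map a target temperature to a PWM duty cycle with proportional control.
    def heater_duty_cycle(target_c, measured_c, k_p=0.05, max_temp_c=90.0):
        """Return a PWM duty cycle in [0, 1] for the thermal display heater."""
        target_c = min(target_c, max_temp_c)   # device range: room temperature to 90 C
        error = target_c - measured_c          # positive error -> more heating needed
        return max(0.0, min(1.0, k_p * error))

    print(heater_duty_cycle(60.0, 25.0))  # 1.0: far below target, duty cycle saturates
    print(heater_duty_cycle(60.0, 55.0))  # 0.25: near target, duty cycle tapers off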


Author(s):  
Maria E. Currie ◽  
Ana Luisa Trejos ◽  
Reiza Rayman ◽  
Michael W.A. Chu ◽  
Rajni Patel ◽  
...  

Objective The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. Methods A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. Results The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Conclusions Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.


2021 ◽  
Vol 33 (5) ◽  
pp. 1104-1116
Author(s):  
Yoshihiro Tanaka ◽  
Shogo Shiraki ◽  
Kazuki Katayama ◽  
Kouta Minamizawa ◽  
Domenico Prattichizzo ◽  
...  

Tactile sensations are crucial for achieving precise operations. A haptic connection between a human operator and a robot has the potential to promote smooth human-robot collaboration (HRC). In this study, we assemble a bilaterally shared haptic system for grasping operations, analogous to the coordinated use of a human's two hands, using a bottle cap-opening task. A robot arm controls its grasping force according to tactile information from the human, who opens the cap while wearing a finger-attached acceleration sensor. The grasping force of the robot arm is then fed back to the human through a wearable squeezing display. Three experiments are conducted: a measurement of the just-noticeable difference of the tactile display; a collaborative task with different bottles under two conditions, with and without tactile feedback, including psychological evaluations using a questionnaire; and a collaborative task under an explicit strategy. The results showed that the tactile feedback gave operators confidence that the cooperative robot was adjusting its action, and that it improved the stability of the task under the explicit strategy. These results indicate the effectiveness of the tactile feedback and the need for an explicit operator strategy, providing insight into the design of HRC with bilaterally shared haptic perception.
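The bilateral loop described above (finger acceleration in, robot grip force out, grip force squeezed back to the operator) can be sketched in a few lines of Python; the update rule, gains, and force limits below are illustrative assumptions rather than the controller used in the study.

    # Illustrative sketch of a bilaterally shared haptic loop (assumed logic,
    # not the controller described in the paper).
    import random  # stands in for real sensor and actuator I/O

    def read_finger_acceleration():
        """Placeholder for the finger-attached acceleration sensor (m/s^2)."""
        return random.uniform(0.0, 5.0)

    def loop_step(grip_force, accel, gain=0.4, f_min=2.0, f_max=20.0):
        """One cycle: raise the robot's grip force when the human's finger
        acceleration (cap-turning effort) increases, then return that force
        to the operator's wearable squeezing display."""
        target = f_min + gain * accel * (f_max - f_min) / 5.0
        grip_force += 0.2 * (target - grip_force)    # smooth toward the target force
        squeeze_command = grip_force / f_max         # normalized display output
        return grip_force, squeeze_command

    force = 2.0
    for _ in range(5):
        force, squeeze = loop_step(force, read_finger_acceleration())
        print(f"grip {force:.1f} N -> squeeze display {squeeze:.2f}")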


Author(s):  
Nikolaos Kaklanis ◽  
Konstantinos Moustakas ◽  
Dimitrios Tsovaras

This chapter describes an interaction technique wherein web pages are parsed so as to automatically generate a corresponding 3D virtual environment with haptic feedback. The automatically created 3D scene is composed of "hapgets" (haptically-enhanced widgets): three-dimensional widgets whose behavior is analogous to that of the original HTML components but which are also enhanced with haptic feedback. Moreover, for each 2D map included in a web page, a corresponding multimodal (haptic-aural) map is automatically generated. The proposed interaction technique enables haptic navigation through the internet as well as haptic exploration of conventional 2D maps for visually impaired users. A web page rendering engine developed according to the proposed interaction technique is also presented.
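A minimal sketch of the core idea, parsing HTML components and mapping them to hapget descriptors for a 3D haptic scene, is shown below in Python; the tag-to-hapget mapping and class names are hypothetical and are not taken from the authors' rendering engine.

    # Hypothetical sketch: parse a web page and map HTML components to "hapget"
    # descriptors for a 3D haptic scene (the tag-to-hapget mapping is an assumption).
    from html.parser import HTMLParser

    TAG_TO_HAPGET = {
        "a": "HapticLink3D",
        "button": "HapticButton3D",
        "input": "HapticButton3D",
        "img": "HapticMap3D",        # 2D images/maps become multimodal haptic maps
        "p": "HapticTextPanel3D",
    }

    class HapgetBuilder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hapgets = []

        def handle_starttag(self, tag, attrs):
            hapget = TAG_TO_HAPGET.get(tag)
            if hapget:
                self.hapgets.append({"hapget": hapget, "source_tag": tag,
                                     "attrs": dict(attrs)})

    builder = HapgetBuilder()
    builder.feed('<p>Menu</p><a href="/news">News</a><img src="map.png">')
    for h in builder.hapgets:
        print(h)  # each entry would become a 3D widget with haptic behavior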


Author(s):  
Christopher D. Wickens ◽  
Polly Baker

Virtual reality involves the creation of multisensory experience of an environment (its space and events) through artificial, electronic means; but that environment incorporates a sufficient number of features of the non-artificial world that it is experienced as "reality." The cognitive issues of virtual reality are those that are involved in knowing and understanding about the virtual environment (cognitive: to perceive and to know). The knowledge we are concerned with in this chapter is both short term (Where am I in the environment? What do I see? Where do I go and how do I get there?), and long term (What can and do I learn about the environment as I see and explore it?). Given the recent interest in virtual reality as a concept (Rheingold, 1991; Wexelblat, 1993; Durlach and Mavor, 1994), it is important to consider that virtual reality is not, in fact, a unified thing, but can be broken down into a set of five features, any one of which can be present or absent to create a greater sense of reality. These features consist of the following five points.

1. Three-dimensional (perspective and/or stereoscopic) viewing vs. two-dimensional planar viewing (Sedgwick, 1986; Wickens et al., 1989). Thus, the geography student who views a 3D representation of the environment has a more realistic view than one who views a 2D contour map.

2. Dynamic vs. static display. A video or movie is more real than a series of static images of the same material.

3. Closed-loop (interactive or learner-centered) vs. open-loop interaction. A more realistic closed-loop mode is one in which the learner has control over what aspect of the learning "world" is viewed or visited. That is, the learner is an active navigator as well as an observer.

4. Inside-out (ego-referenced) vs. outside-in (world-referenced) frame-of-reference. The more realistic inside-out frame-of-reference is one in which the image of the world on the display is viewed from the perspective of the point of ego-reference of the user (that point which is being manipulated by the control). This is often characterized as the property of "immersion." Thus, the explorer of a virtual undersea environment will view that world from a perspective akin to that of a camera placed on the explorer's head;


2019 ◽  
Vol 30 (17) ◽  
pp. 2521-2533 ◽  
Author(s):  
Alex Mazursky ◽  
Jeong-Hoi Koo ◽  
Tae-Heon Yang

Realistic haptic feedback is needed to provide information to users of numerous technologies, such as virtual reality, mobile devices, and robotics. For a device to convey realistic haptic feedback, two touch sensations must be present: tactile feedback and kinesthetic feedback. Although many devices today convey tactile feedback through vibrations, most neglect to incorporate kinesthetic feedback. To address this issue, this study investigates a haptic device aimed at conveying both kinesthetic and vibrotactile information to users. A prototype based on electrorheological fluids was designed and fabricated. By controlling the electrorheological fluid flow via applied electric fields, the device can generate a range of haptic sensations. The design centers on an elastic membrane that acts as the actuator's contact surface. Moreover, the control electronics and structural components were integrated into a compact printed circuit board, resulting in a slim device suitable for mobile applications. The device was tested using a dynamic mechanical analyzer to evaluate its performance, and the measured results agreed with a supporting mathematical model of the design. According to a just-noticeable difference analysis, the measured output range is sufficient to transmit distinct kinesthetic and vibrotactile sensations to users, indicating that the electrorheological fluid-based actuator is capable of conveying haptic feedback.
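To make the just-noticeable difference argument concrete, the small sketch below counts how many distinguishable force levels fit in a device's output range under a Weber-fraction style JND; the numbers used are placeholders, not measurements from this study.

    # Illustrative sketch (placeholder numbers, not measurements from this study):
    # count distinguishable force levels in a device's output range given a
    # Weber-fraction style just-noticeable difference.
    def distinguishable_levels(f_min, f_max, weber_fraction):
        """Each step grows the reference force by its JND (weber_fraction * force)."""
        levels, force = 1, f_min
        while force * (1.0 + weber_fraction) <= f_max:
            force *= 1.0 + weber_fraction
            levels += 1
        return levels

    print(distinguishable_levels(f_min=0.1, f_max=1.0, weber_fraction=0.1))  # 25 levels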


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Andrew J. Hughes ◽  
Cathal DeBuitleir ◽  
Philip Soden ◽  
Brian O’Donnchadha ◽  
Anthony Tansey ◽  
...  

Revision hip arthroplasty requires a comprehensive appreciation of abnormal bony anatomy. Advances in radiology and manufacturing technology have made three-dimensional (3D) representations of osseous anatomy obtainable, providing both visual and tactile feedback. Such life-size 3D models were manufactured from computed tomography scans of three hip joints in two patients. The first patient had undergone multiple previous hip arthroplasties for bilateral hip infections, resulting in right-sided pelvic discontinuity and a severe left-sided posterosuperior acetabular deficiency. The second patient had a first-stage revision for infection and recurrent dislocations. Specific metal artefact reduction protocols were used to reduce imaging artefacts. The images were imported into Materialise MIMICS 14.12®. The models were manufactured using selective laser sintering. Accurate templating was performed preoperatively. Acetabular cup, augment, buttress, and cage sizes were trialled using the models before being adjusted and resterilised, enhancing the preoperative decision-making process. Screw trajectory simulation was carried out, reducing the risk of neurovascular injury. With 3D printing technology, complex pelvic deformities were better evaluated and treated with improved precision. Life-size models allowed accurate surgical simulation, thus improving anatomical appreciation and preoperative planning. The accuracy and cost-effectiveness of the technique should prove invaluable as a tool to aid clinical practice.

