haptic information
Recently Published Documents


TOTAL DOCUMENTS: 201 (five years: 43)

H-INDEX: 21 (five years: 2)

Author(s): Magdalena Szubielska, Marta Szewczyk, Wenke Möhring

Abstract: The present study examined differences in adults' spatial-scaling abilities across three perceptual conditions: (1) visual, (2) haptic, and (3) visual and haptic. Participants were instructed to encode the position of a convex target presented in a simple map without a time limit. Immediately after encoding the map, participants were presented with a referent space and asked to place a disc at the same location from memory. All spaces were designed as tactile graphics. Positions of targets varied along the horizontal dimension. The referent space was constant in size while sizes of maps were systematically varied, resulting in three scaling-factor conditions: 1:4, 1:2, 1:1. Sixty adults participated in the study (M = 21.18; SD = 1.05). One-third of them were blindfolded throughout the entire experiment (haptic condition). The second group of participants was allowed to see the graphics (visual condition); the third group was instructed to see and touch the graphics (bimodal condition). An analysis of participants' absolute errors showed that participants produced larger errors in the haptic condition than in the visual and bimodal conditions. There was also a significant interaction effect between scaling factor and perceptual condition. In the visual and bimodal conditions, errors increased linearly with higher scaling factors (which may suggest that adults adopted mental transformation strategies during the spatial-scaling process), whereas in the haptic condition this relation was quadratic. Findings imply that adults' spatial-scaling performance decreases when visual information is not available.


Author(s): Felix Heinrich, Jonas Kaste, Sevsel Gamze Kabil, Michael Sanne, Ferit Küçükay, ...

Abstract: Unlike electromechanical steering systems, steer-by-wire systems have no mechanical coupling between the wheels and the steering wheel. A synthetic steering feel therefore has to be generated to supply the driver with the necessary haptic information. In this paper, the authors analyze two approaches to creating a realistic steering feel. One is a modular approach that uses several measured and estimated input signals to model a steering-wheel torque via mathematical functions. The other is based on an artificial neural network trained on steering and vehicle measurements. Both concepts are optimized and trained, respectively, to best fit a reference steering feel obtained from vehicle measurements. For the analysis, the two approaches are evaluated using a simulation model consisting of a vehicle, a rack actuator, and a steering-wheel actuator. The research shows that both concepts can adequately model a desired steering feel.
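The modular concept described above can be illustrated with a toy torque model. The function below is a hypothetical sketch, not the authors' model: the choice of input signals, the individual terms, and all gains are assumptions made for illustration. It sums a lateral-acceleration feedback term, a spring-like centering term, a damping term, and a smoothed friction term into a synthetic steering-wheel torque.

```python
import math

def steering_wheel_torque(lat_acc, steer_angle, steer_rate,
                          k_acc=1.2, k_center=0.05, k_damp=0.3, k_fric=0.4):
    """Toy modular steering-feel model (illustrative terms and gains).

    lat_acc     -- lateral acceleration [m/s^2]
    steer_angle -- steering-wheel angle [rad]
    steer_rate  -- steering-wheel angular velocity [rad/s]
    """
    feedback = k_acc * lat_acc                        # road/rack feedback term
    centering = k_center * steer_angle                # spring-like self-centering
    damping = k_damp * steer_rate                     # viscous damping
    friction = k_fric * math.tanh(steer_rate / 0.1)   # smoothed Coulomb friction
    return feedback + centering + damping + friction
```

In the paper's modular concept, terms like these would be tuned against a reference steering feel measured in a real vehicle; the neural-network concept would instead learn the same mapping from measurements end to end.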


2021
Author(s): Marta Russo, Jongwoo Lee, Neville Hogan, Dagmar Sternad

Abstract
Background: Numerous studies have shown that postural balance improves through light touch on a stable surface, highlighting the importance of haptic information and seemingly downplaying the mechanical contributions of the support. The present study examined the mechanical effects of canes for assisting balance in healthy individuals challenged by standing on a beam.
Methods: Sixteen participants supported themselves with two canes, one in each hand, and applied minimal, preferred, or maximum force onto the canes. They positioned the canes either in the frontal plane or in a tripod configuration.
Results: Canes significantly reduced the variability of the center of pressure (CoP) and center of mass, down to the level observed when standing on the ground. In the preferred condition, participants exploited the altered mechanics by resting their arms on the canes and, in the tripod configuration, by allowing larger CoP motions in the task-irrelevant dimension. Increasing the exerted force beyond the preferred level yielded no further benefits and in fact destabilized the canes: the displacement of the hand on the cane handle increased with the force.
Conclusions: Despite the challenge of a statically unstable support, these results show that canes provide mechanical benefits as well as augmented perceptual information. Participants minimized effort by channeling noise into task-irrelevant dimensions and by resting their arms on the canes while avoiding large forces with destabilizing effects. If maximal force is applied to the canes, however, the instability of the support must be counteracted, possibly by arm and shoulder stiffness.


2021, Vol 7 (2), pp. 472-475
Author(s): Maximilian Neidhardt, Stefan Gerlach, Max-Heinrich Laves, Sarah Latus, Carolin Stapper, ...

Abstract: Needles are key tools for minimally invasive interventions. Physicians commonly rely on subjectively perceived insertion forces at the distal end of the needle when advancing the needle tip to the desired target. However, detecting tissue transitions at the distal end of the needle is difficult, since the sensed forces are dominated by shaft forces. Disentangling insertion forces has the potential to substantially improve needle placement accuracy. We propose a collaborative system for robotic needle insertion that relays haptic information sensed directly at the needle tip to the physician through a lightweight robot. We integrate optical fibers into medical needles and use optical coherence tomography (OCT) to image a moving surface at the tip of the needle. Using a convolutional neural network, we estimate the forces acting on the needle tip from the OCT data and feed them back for real-time haptic feedback and robot control. When inserting the needle at constant velocity, the force change estimated at the tip when penetrating deep tissue layers is up to 94%, compared to a force change of 2.36% at the needle handle. Collaborative needle insertion yields a more clearly perceptible force change at tissue transitions with haptic feedback from the tip (49.79 ± 25.51%) than with conventional shaft feedback (15.17 ± 15.92%). Tissue transitions are more prominent when using forces estimated at the needle tip rather than at the needle shaft, indicating that a more informed advancement of the needle is possible with our system.
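The force-regression step can be pictured as a minimal convolution-plus-linear readout over one OCT depth scan. This is a schematic stand-in with random placeholder weights and invented sizes, not the authors' trained network:

```python
import numpy as np

def estimate_tip_force(ascan, kernel, w, b):
    """Toy CNN-style regressor: 1-D conv + ReLU + linear readout -> scalar force."""
    feat = np.maximum(np.convolve(ascan, kernel, mode="valid"), 0.0)  # conv + ReLU
    return float(feat @ w + b)

rng = np.random.default_rng(0)
ascan = rng.standard_normal(64)   # one OCT depth profile (hypothetical length)
kernel = rng.standard_normal(5)   # learned filter (here: random placeholder)
w = rng.standard_normal(60)       # 64 - 5 + 1 = 60 valid conv outputs
b = 0.0
force = estimate_tip_force(ascan, kernel, w, b)
```

In the described system, the scalar output would drive both the haptic display at the robot handle and the robot controller in real time.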


2021, Vol 11 (18), pp. 8772
Author(s): Laura Raya, Sara A. Boga, Marcos Garcia-Lorenzo, Sofia Bayona

Technological advances enable the capture and management of complex data sets that need to be correctly understood. Visualisation techniques can help in complex data analysis and exploration, but sometimes the visual channel is not enough, or is not available at all. Some authors propose using the haptic channel to reinforce or substitute for the visual sense, but the limited human haptic short-term memory still poses a challenge. We present the haptic tuning fork, a reference signal displayed before the haptic information to increase the discriminability of haptic icons. With this reference, the user no longer depends only on short-term memory. We evaluate the usefulness of the haptic tuning fork in impedance-type kinesthetic devices, as these are the most common. Furthermore, since the renderable signal ranges are device-dependent, we introduce a methodology to select a discriminable set of signals, called the haptic scale. In experiments with haptic stimuli varying in frequency, both the haptic tuning fork and the haptic scale proved useful.


PLoS ONE, 2021, Vol 16 (9), pp. e0248084
Author(s): Vonne van Polanen

When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. This study investigated whether such corrections influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated in two experiments. In experiment 1, vision was restored during the reach, and both visual and haptic information was available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement, and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared these in size. Results showed that participants adjusted their grasp to a change in object size from preview to grasped object in both experiments. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small, but this bias was much smaller than the difference that could be discriminated and cannot be considered meaningful. It can therefore be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.


2021, Vol 15 (3), pp. 237-249
Author(s): Eliane Mauerberg-deCastro, Gabriella A. Figueiredo, Thayna P. Iasi, Debra F. Campbell, Renato Moraes

BACKGROUND: When a person walks a dog, information from variables of their own postural control is integrated with haptic information from the dog's movements (e.g., direction, speed of movement, pulling forces). AIM: We examined how haptic information provided through contact with a moving endpoint (here, the leash of a dog walking on a treadmill) influenced an individual's postural control during a quiet tandem-standing task, with and without restricted vision and under various elevations of the support surface (increasing task difficulty). METHOD: Adults performed a 30-second quiet tandem-stance task on a force platform while holding a leash attached to a dog walking on a treadmill parallel to the force platform. Conditions included: haptic contact (dog and no-dog), vision constraint (eyes open, EO, or eyes closed, EC), and support surface (4 heights). RESULTS: An interaction between haptic condition and vision showed that contact with the dog's leash reduced root-mean-square sway (RMS) and mean sway speed (MSS). RMS showed that the highest surface had the greatest rate of sway reduction during haptic contact with EC, and an increase with EO. CONCLUSION: The dog's movements were used as a haptic reference to aid balance when the eyes were closed. In this condition, contact with the dog's leash reduced the extent of sway variability on the higher surfaces.


2021
Author(s): David Miralles, Guillem Garrofé, Calota Parés, Alejandro González, Gerard Serra, ...

Abstract: The cognitive connection between the senses of touch and vision is probably the best-known case of cross-modality. Recent discoveries suggest that the mapping between both senses is learned rather than innate. This evidence opens the door to a dynamic cross-modality that allows individuals to adaptively develop within their environment. Mimicking this aspect of human learning, we propose a new cross-modal mechanism that allows artificial cognitive systems (ACS) to adapt quickly to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, such advances have not occurred in the haptic modality, mainly due to the lack of two-handed dexterous datasets that allow learning systems to process the tactile information of human object exploration. This data imbalance limits the creation of synchronized multimodal datasets that would enable the development of cross-modality in ACS during object exploration. In this work, we use a multimodal dataset recently generated from tactile sensors placed on a collection of objects, which captures haptic data from human manipulation together with the corresponding visual counterpart. Using these data, we create a cross-modal learning-transfer mechanism capable of detecting both sudden and permanent anomalies in the visual channel while maintaining visual object-recognition performance, by retraining the visual modality for a few minutes using haptic information. We show the importance of cross-modality in perceptual awareness and its ecological capability to self-adapt to different environments.


2021, Vol 8
Author(s): Yu Xia, Alireza Mohammadi, Ying Tan, Bernard Chen, Peter Choong, ...

Haptic perception is one of the key modalities for obtaining physical information about objects and for object identification. Most existing literature has focused on improving the accuracy of identification algorithms, with less attention paid to efficiency. This work investigates the efficiency of haptic object identification, aiming to reduce the number of grasps required to correctly identify an object out of a given object set. In a case where multiple grasps are required to characterise an object, the proposed algorithm determines where the next grasp should be placed on the object to obtain the most distinguishing information. To this end, the paper proposes an object description that preserves the association between the spatial information and the haptic information on the object. A clustering technique is employed both to construct each object's description in the data set and to perform identification. An information-gain (IG) based method is then used to determine which pose would yield the most distinguishing information among the remaining candidates in the object set, improving the efficiency of the identification process. The proposed algorithm is validated experimentally. A Reflex TakkTile robotic hand with integrated joint-displacement and tactile sensors is used both for data collection and for the object-identification procedure. The proposed IG approach required significantly fewer grasps to identify the objects than a baseline approach in which grasps were chosen at random.
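The core of an information-gain pose selection like the one described can be sketched in a few lines. The snippet below is a simplified illustration under assumed conditions (a deterministic toy observation model mapping each candidate object to the haptic cluster it would produce at a pose), not the paper's implementation: it scores a pose by the expected reduction in entropy of the belief over remaining candidates, and picks the highest-scoring pose.

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy (bits) of a {label: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(prior, cluster_of):
    """Expected entropy reduction over object candidates for one grasp pose.

    prior      -- {object: probability} belief over remaining candidates
    cluster_of -- {object: cluster} haptic cluster each object would produce
                  at this pose (deterministic toy observation model)
    """
    mass = defaultdict(float)
    for obj, p in prior.items():
        mass[cluster_of[obj]] += p           # probability of observing each cluster
    h_post = 0.0
    for c, pc in mass.items():
        posterior = {o: prior[o] / pc for o in prior if cluster_of[o] == c}
        h_post += pc * entropy(posterior)    # expected posterior entropy
    return entropy(prior) - h_post

# Pose A splits four equally likely objects into two clusters -> 1 bit gained;
# pose B maps every object to the same cluster -> uninformative, 0 bits.
prior = {"o1": 0.25, "o2": 0.25, "o3": 0.25, "o4": 0.25}
pose_a = {"o1": "flat", "o2": "flat", "o3": "round", "o4": "round"}
pose_b = {o: "same" for o in prior}
best = max([pose_a, pose_b], key=lambda m: expected_info_gain(prior, m))
```

Selecting the pose with maximal expected gain at each step is what lets the method terminate in fewer grasps than random pose selection.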

