Keeping in Touch With Our Hidden Side

2021 ◽  
Author(s):  
Benjamin Mathieu ◽  
Antonin Abillama ◽  
Malvina Martinez ◽  
Laurence Mouchnino ◽  
Jean Blouin

Previous studies have shown that the sensory modality used to identify the position of proprioceptive targets hidden from sight, but frequently viewed, influences the type of body representation employed for reaching them with the finger. The question then arises as to whether this observation also applies to proprioceptive targets that are hidden from sight and rarely, if ever, viewed. We used an established technique for pinpointing the type of body representation used for the spatial encoding of targets, which consisted of assessing the effect of peripheral gaze fixation on pointing accuracy. More precisely, an exteroceptive, visually dependent body representation is thought to be used if gaze deviation induces a deviation of the pointing movement. Three light-emitting diodes (LEDs) were positioned at the participants' eye level at -25 deg, 0 deg, and +25 deg with respect to the cyclopean eye. Without moving the head, the participant fixated the lit LED before the experimenter indicated one of three target head positions: the topmost point of the head (vertex) and two other points located at the front and back of the head. These targets were cued either verbally or by touch. The goal of the subjects (n = 27) was to reach the target with their index finger. We analysed the accuracy of movements directed to the topmost point of the head, a well-defined yet out-of-view anatomical point. Based on the brain's capacity to create visual representations of body areas that remain out of view, we hypothesized that the position of the vertex is encoded using an exteroceptive body representation, whether verbally or tactilely cued. Results revealed that pointing errors were biased in the direction opposite to gaze fixation for both verbally cued and tactile-cued targets, suggesting the use of a vision-dependent exteroceptive body representation. The enhancement of visual body representations by sensorimotor processes was suggested by the greater pointing accuracy when the vertex was identified by tactile stimulation rather than by verbal instruction. Moreover, in a control condition, participants were more accurate in indicating the position of their own vertex than the vertex of other people. This result supports the idea that sensorimotor experiences increase the spatial resolution of the exteroceptive body representation. Together, our results suggest that the positions of rarely viewed body parts are spatially encoded by an exteroceptive body representation and that non-visual sensorimotor processes are involved in constructing this representation.
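As an illustration of the logic described in this abstract (not the authors' code), a minimal Python sketch could test whether lateral pointing errors are biased opposite to the fixated LED; the data layout, variable names, and values below are hypothetical.

```python
# Minimal sketch: mean signed lateral pointing error per gaze-fixation condition.
# Hypothetical data; positive errors point toward the +25 deg side.
import numpy as np

gaze_deg = np.array([-25, -25, 0, 0, 25, 25])                    # fixated LED per trial
lateral_error_cm = np.array([1.8, 2.1, 0.2, -0.1, -1.6, -2.0])   # finger endpoint error

for g in (-25, 0, 25):
    mean_err = lateral_error_cm[gaze_deg == g].mean()
    print(f"gaze {g:+d} deg -> mean lateral error {mean_err:+.2f} cm")

# Positive errors under -25 deg fixation and negative errors under +25 deg fixation
# (a bias opposite to gaze) would be the signature of a vision-dependent,
# exteroceptive body representation described in the abstract.
```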

Perception ◽  
10.1068/p5853 ◽  
2007 ◽  
Vol 36 (10) ◽  
pp. 1547-1554 ◽  
Author(s):  
Francesco Pavani ◽  
Massimiliano Zampini

When a hand (either real or fake) is stimulated in synchrony with our own hand concealed from view, the felt position of our own hand can be biased toward the location of the seen hand. This intriguing phenomenon relies on the brain's ability to detect statistical correlations in the multisensory inputs (ie visual, tactile, and proprioceptive), but it is also modulated by the pre-existing representation of one's own body. Nonetheless, researchers appear to have accepted the assumption that the size of the seen hand does not matter for this illusion to occur. Here we used a real-time video image of the participant's own hand to elicit the illusion, but we varied the hand size in the video image so that the seen hand was either reduced, veridical, or enlarged in comparison to the participant's own hand. The results showed that visible-hand size modulated the illusion, which was present for veridical and enlarged images of the hand, but absent when the visible hand was reduced. These findings indicate that very specific aspects of our own body image (ie hand size) can constrain the multisensory modulation of the body schema highlighted by the fake-hand illusion paradigm. In addition, they suggest an asymmetric tendency to acknowledge enlarged (but not reduced) images of body parts within our body representation.


2021 ◽  
pp. 133-151 ◽  
Author(s):  
Noriaki Kanayama ◽  
Kentaro Hiromitsu

Is the body reducible to neural representation in the brain? Neuroimaging, neurophysiological, and lesion studies provide some evidence that the brain contributes to the functioning of the body. The well-known dyadic taxonomy of the body schema and the body image (hereafter BSBI) is based primarily on evidence from brain-damaged patients. Although there is a growing consensus that the BSBI exists, there is little agreement on the dyadic taxonomy because it is not a concrete concept shared across research fields. This chapter investigates body representation in the cortex and nervous system in terms of sensory modality and psychological function, using two different approaches. The first approach is to review the neurological evidence and the cortical areas related to body representation, regardless of the BSBI, and then to reconsider how we postulate the BSBI in the brain. On this view, body representation may be constructed by the neural system as a whole, including the cortex and peripheral nerves. The second approach is to revisit the BSBI conception from the viewpoint of recent neuropsychology and to propose three types of body representation: body schema, body structural description, and body semantics. This triadic taxonomy is consistent with cortical networks implicated by evidence of bodily disorders due to brain lesions. Together, these two approaches allow the BSBI to be reconsidered more carefully and deeply, and suggest that body representation may be underpinned by networks in the brain.


2004 ◽  
Vol 92 (4) ◽  
pp. 2380-2393 ◽  
Author(s):  
M. A. Admiraal ◽  
N.L.W. Keijsers ◽  
C.C.A.M. Gielen

We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and vision of a well-defined environment with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
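To make the correction described above concrete, here is a hedged sketch (not the original analysis) of how one can remove the direct contribution of the step from both fixation and pointing signals and then correlate the residuals; the arrays and their values are hypothetical single-axis data.

```python
# Sketch of a step-corrected co-variance analysis using hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
step = rng.normal(0.0, 5.0, 100)                                      # step displacement (cm)
fixation = 0.6 * step + rng.normal(0.0, 2.0, 100)                     # fixation at pointing time
pointing = 0.5 * step + 0.4 * fixation + rng.normal(0.0, 2.0, 100)    # pointing endpoint

def residualize(y, x):
    """Remove the linear contribution of x from y (least squares)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

fix_res = residualize(fixation, step)
point_res = residualize(pointing, step)

# Correlation of the residuals: co-variation not explained by the step itself.
r = np.corrcoef(fix_res, point_res)[0, 1]
print(f"correlation after removing the step: r = {r:.2f}")
```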


2012 ◽  
Vol 25 (0) ◽  
pp. 197
Author(s):  
Adria E. N. Hoover ◽  
Laurence R. Harris

We have previously shown that people are more sensitive in detecting asynchrony between a self-generated movement and delayed visual feedback when the perspective of the movement matches the ‘natural view’, suggesting an internal, visual, canonical body representation (Hoover and Harris, 2011). Is there a similar variation in sensitivity for parts of the body that cannot be seen from a first-person perspective? To test this, participants made movements with their hands and head (viewing their face or the back of their head) under four viewing conditions: (1) the natural (or direct) view, (2) mirror-reversed, (3) inverted, and (4) inverted and mirror-reversed. Participants indicated which of two periods (one with a minimum delay, the other with an added delay of 33–264 ms) was delayed, and their sensitivity to delay was calculated. A significant linear trend was found when comparing sensitivity to detect cross-modal asynchrony in the ‘natural’ or ‘direct’ view condition across body parts: sensitivity was greatest when viewing body parts seen most often (the hands), intermediate for body parts seen only indirectly (moving the head while viewing the face), and lowest for body parts that are never seen at all (moving the head while viewing the back of the head). Further, dependence on viewpoint was most evident for body parts that are seen most often or indirectly, but not for body parts that are never seen. Results are discussed in terms of a visual representation of the body.
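One common way to quantify sensitivity to added delay is to fit a psychometric function to proportion correct as a function of delay and take a threshold from the fit; the sketch below illustrates that idea only, and is not necessarily the procedure used in the study. The delays span the 33–264 ms range mentioned above, but the proportions are hypothetical.

```python
# Sketch: cumulative-Gaussian fit to two-interval delay-detection performance.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

delays_ms = np.array([33, 66, 99, 132, 165, 198, 231, 264])
p_correct = np.array([0.52, 0.55, 0.63, 0.72, 0.80, 0.88, 0.93, 0.96])  # hypothetical

def psychometric(d, mu, sigma):
    # Performance rises from chance (0.5) to 1.0 as the added delay grows.
    return 0.5 + 0.5 * norm.cdf(d, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delays_ms, p_correct, p0=[130.0, 50.0])

# At d = mu the predicted performance is 75% correct, so mu serves as a threshold;
# lower thresholds correspond to higher sensitivity to delay.
print(f"estimated 75%-correct delay threshold ≈ {mu:.0f} ms")
```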


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Mariella Pazzaglia ◽  
Marta Zantedeschi

Knowledge of the body is filtered by perceptual information, recalibrated through predominantly innate stored information, and neurally mediated by direct sensorimotor information. Despite these multiple sources, the immediate prediction, construction, and evaluation of one’s body are distorted. The origins of such distortions are unclear. In this review, we consider three possible sources of awareness that inform body distortion. First, the precision of the body metric may be based on sight and on the position sense of a particular body segment. This view provides information on the dual nature of body representation, the reliability of a conscious body image, and implicit alterations in the metrics and positional correspondence of body parts. Second, body awareness may reflect an innate organizational experience of unity and continuity in the brain, with no strong isomorphism to body morphology. Third, body awareness may be based on efferent/afferent neural signals, suggesting that major body distortions may result from changes in neural sensorimotor experiences. All these views can be supported empirically, suggesting that body awareness is synthesized from multimodal integration and the temporal constancy of multiple body representations. For each of these views, we briefly discuss abnormalities and therapeutic strategies for correcting bodily distortions in various clinical disorders.


2021 ◽  
Vol 11 (3) ◽  
pp. 284
Author(s):  
Grazia Fernanda Spitoni ◽  
Giorgio Pireddu ◽  
Valerio Zanellati ◽  
Beatrice Dionisi ◽  
Gaspare Galati ◽  
...  

Several studies have found the sense of touch to be a good sensory modality for studying body representation. Here, we address the “metric component of body representation”, a specific function developed to process the discrimination of tactile distances on the body. The literature suggests the involvement of the right angular gyrus (rAG) in processing tactile distances on the body. The question addressed in this study is whether the rAG is also responsible for the visual metric component of body representation. We used tDCS (anodal and sham) in 20 subjects who were administered an on-body distance discrimination task with both tactile and visual stimuli. They were also asked to perform the same task in a near-body condition. The results confirm the role of the rAG in the estimation of tactile distances. Further, we showed that the rAG might be involved in the discrimination of distances on the body not only in the tactile but also in the visual modality. Finally, based on the significant effects of anodal stimulation even in a near-body visual discrimination task, we propose a higher-order function of the AG as a supramodal comparator of quantities.


2016 ◽  
Vol 6 (1) ◽  
Author(s):  
Silvio Ionta ◽  
Michael Villiger ◽  
Catherine R Jutzeler ◽  
Patrick Freund ◽  
Armin Curt ◽  
...  

The brain integrates multiple sensory inputs, including somatosensory and visual inputs, to produce a representation of the body. Spinal cord injury (SCI) interrupts the communication between brain and body, and the effects of this deafferentation on body representation are poorly understood. We investigated whether the relative weight of somatosensory and visual frames of reference for body representation is altered in individuals with incomplete or complete SCI (affecting the lower limbs’ somatosensation) with respect to controls. To study the influence of afferent somatosensory information on body representation, participants verbally judged the laterality of rotated images of feet, hands, and whole bodies (mental rotation task) in two different postures (participants’ body parts were hidden from view). We found that (i) complete SCI disrupts the influence of postural changes on the representation of the deafferented body parts (feet, but not hands) and (ii) regardless of posture, whole-body representation deteriorates progressively in proportion to SCI completeness. These results demonstrate that the cortical representation of the body is dynamic, responsive, and adaptable to contingent conditions, in that the role of somatosensation is altered and partially compensated by a change in the relative weight of somatosensory versus visual bodily representations.


Author(s):  
Minoru Asada

Proprioception is our ability to sense the position of our own limbs and other body parts in space, and the body schema is a body representation that allows both biological and artificial agents to execute their actions based on proprioception. The proprioceptive information used by current artificial agents (robots) is mainly related to posture (and its change) and consists of joint angles (and joint velocities) defined over a linked structure. The counterpart in biological agents (humans and other animals), however, includes more complicated components, with ongoing controversies concerning the relationship between the body schema and the body image. A new wave of constructive approaches has been tackling this topic using computational models and robots. This chapter provides an overview of the biology of proprioception and body representation, summarizes the classical use of the body schema in robotics, and describes a series of constructive approaches that address some of the mysteries of body representation.
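In the robotic sense referred to above, a body schema built from joint angles over a known linked structure can be as simple as forward kinematics; the sketch below assumes an illustrative planar two-link arm with made-up link lengths, not any specific robot discussed in the chapter.

```python
# Minimal "body schema" for a robot: map joint angles (proprioception) to a
# fingertip position via forward kinematics of a planar two-link arm.
import numpy as np

def fingertip_position(q1, q2, l1=0.3, l2=0.25):
    """Forward kinematics: joint angles in radians, link lengths in metres."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

# Example: where is the fingertip when the joints are at 30 and 45 degrees?
print(fingertip_position(np.deg2rad(30), np.deg2rad(45)))
```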


2009 ◽  
Vol 21 (10) ◽  
pp. 2027-2045 ◽  
Author(s):  
Ryo Kitada ◽  
Ingrid S. Johnsrude ◽  
Takanori Kochiyama ◽  
Susan J. Lederman

Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
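The notion of a region "sensitive to both haptically and visually presented faces" can be illustrated with a simple conjunction of independently thresholded statistical maps; this is only a sketch of that logic with hypothetical arrays, not the authors' fMRI pipeline.

```python
# Sketch: conjunction (logical AND) of thresholded voxelwise statistics
# for visual faces and haptic faces, using hypothetical data.
import numpy as np

rng = np.random.default_rng(1)
t_visual_faces = rng.normal(0.0, 1.5, size=(4, 4, 4))   # voxelwise t-values, vision
t_haptic_faces = rng.normal(0.0, 1.5, size=(4, 4, 4))   # voxelwise t-values, haptics
t_threshold = 3.0

# A voxel counts as modality-independent face-sensitive if it exceeds the
# threshold in both modalities.
conjunction = (t_visual_faces > t_threshold) & (t_haptic_faces > t_threshold)
print("modality-independent voxels:", np.argwhere(conjunction))
```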


2019 ◽  
Author(s):  
Michele Scandola ◽  
Gaetano Tieri ◽  
Renato Avesani ◽  
Massimo Brambilla ◽  
...  

Despite the many links between body representation, acting, and perceiving the environment, no research has to date explored whether tool embodiment under conditions of sensorimotor deprivation influences extrapersonal space perception. We tested 20 spinal cord injured (SCI) individuals to investigate whether wheelchair embodiment interacts with extrapersonal space representation. As a measure of wheelchair embodiment, we used a Body View Enhancement Task in which participants (sitting either in their own wheelchair or in one they had never used before) were asked to respond promptly to flashing lights presented on their above- and below-lesion body parts and on the wheelchair. Reaction times (RTs) to stimuli on the wheelchair that are similar to, or slower than, RTs to stimuli on the body indicate, respectively, the presence or absence of tool embodiment. The RTs showed that the participants embodied their own wheelchair but not the unfamiliar one. Moreover, they coded their deprived lower limbs as external objects and, when not in their own wheelchair, also showed disownership of their intact upper limbs. To measure extrapersonal space perception, we used a novel, ad hoc paradigm in which participants observed a 3D scenario in immersive virtual reality and estimated the distance of a flag positioned on a ramp. In healthy subjects, estimation errors increased as the distance increased, suggesting that they mentally represent the physical distance. The same occurred with the SCI participants, but only when they were in their own wheelchair. The results demonstrate for the first time that tool embodiment modifies extrapersonal space estimations.
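The embodiment criterion described above (similar versus slower RTs for wheelchair stimuli) can be summarised with a paired comparison; the sketch below uses hypothetical RT values and is not the study's analysis code.

```python
# Sketch: paired comparison of RTs to flashes on the body vs on the own wheelchair.
import numpy as np
from scipy.stats import ttest_rel

rt_body_ms = np.array([412, 430, 398, 445, 420, 405, 438, 416])        # per participant
rt_wheelchair_ms = np.array([418, 427, 402, 450, 423, 409, 441, 420])  # own wheelchair

t_stat, p_value = ttest_rel(rt_wheelchair_ms, rt_body_ms)
diff_ms = np.mean(rt_wheelchair_ms - rt_body_ms)
print(f"wheelchair - body RT difference: {diff_ms:.1f} ms (t = {t_stat:.2f}, p = {p_value:.3f})")

# A negligible, non-significant difference would be consistent with embodiment of
# the wheelchair; a reliably slower wheelchair RT would suggest its absence.
```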

