How visual experience and task context modulate the use of internal and external spatial coordinates for perception and action

2018 ◽  
Author(s):  
Virginie Crollen ◽  
Tiffany Spruyt ◽  
Pierre Mahau ◽  
Roberto Bottini ◽  
Olivier Collignon

Recent studies have proposed that the use of internal and external coordinate systems may be more flexible in congenitally blind than in sighted individuals. To investigate this hypothesis further, we asked congenitally blind and sighted people to perform, with the hands uncrossed and crossed over the body midline, a tactile temporal order judgment (TOJ) task and an auditory Simon task. Crucially, both tasks were carried out under task instructions favoring the use of either an internal (left vs. right hand) or an external (left vs. right hemispace) frame of reference. In the internal condition of the TOJ task, our results replicated previous findings (Röder et al., 2004) showing that hand crossing impaired only sighted participants' performance, suggesting that blind people did not activate by default a (conflicting) external frame of reference. Under external instructions, however, a decrease in performance was observed in both groups, suggesting that even blind people activated an external coordinate system in this condition. In the Simon task, and in contrast with a previous study (Röder et al., 2007), both groups responded more efficiently when the sound was presented from the same side as the response (the "Simon effect"), independently of hand position. This was true under both the internal and external conditions, suggesting that blind and sighted participants alike activated an external coordinate system by default in this task. Altogether, these data comprehensively demonstrate how visual experience shapes the default weight attributed to internal and external coordinate systems for action and perception, depending on task demands.

Perception ◽  
10.1068/p3340 ◽  
2002 ◽  
Vol 31 (10) ◽  
pp. 1263-1274 ◽  
Author(s):  
Morton A Heller ◽  
Deneen D Brackett ◽  
Kathy Wilson ◽  
Keiko Yoneyama ◽  
Amanda Boyer ◽  
...  

We examined the effect of visual experience on the haptic Müller-Lyer illusion. Subjects made size estimates of raised lines by using a sliding haptic ruler. Independent groups of blindfolded sighted, late-blind, congenitally blind, and low-vision subjects judged the sizes of wings-in and wings-out stimuli, plain lines, and lines with short vertical ends. An illusion was found: the wings-in stimuli were judged as shorter than the wings-out patterns and all of the other stimuli. Subjects generally underestimated the lengths of lines. In a second experiment we found a nonsignificant difference between length judgments of raised lines as opposed to smooth wooden dowels. The strength of the haptic illusion depends upon the angles of the wings, with a much stronger illusion for more acute angles. The effect of visual status was nonsignificant, suggesting that spatial distortion in the haptic Müller-Lyer illusion does not depend upon visual imagery or visual experience.


2001 ◽  
Vol 17 (2) ◽  
pp. 173-180 ◽  
Author(s):  
Adrienne E. Hunt ◽  
Richard M. Smith

Three-dimensional ankle joint moments were calculated in two separate coordinate systems from 18 healthy men during the stance phase of walking and were then compared. The objective was to determine the extent of differences in the calculated moments between these two commonly used systems and their impact on interpretation. Video motion data were obtained using skin surface markers, and ground reaction force data were recorded from a force platform. Moments acting on the foot were calculated about three orthogonal axes, in a global coordinate system (GCS) and also in a segmental coordinate system (SCS). No differences were found for the sagittal moments. However, compared to the SCS, the GCS significantly (p < .001) overestimated the predominant invertor moment at midstance and until after heel rise. It also significantly (p < .05) underestimated the late stance evertor moment. This frontal plane discrepancy was attributed to the sensitivity of the GCS to the degree of abduction of the foot. For the transverse plane, the abductor moment peaked earlier (p < .01) and was relatively smaller (p < .01) in the GCS. Variability in the transverse plane was greater for the SCS and was attributed to its sensitivity to the degree of rearfoot inversion. We conclude that the two coordinate systems result in different calculations of nonsagittal moments at the ankle joint during walking. We propose that the body-based SCS provides a more meaningful interpretation of function than the GCS and would be the preferred method in clinical research, for example, where there is marked abduction of the foot.
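
The frame dependence described above comes down to how a moment vector is resolved along different sets of axes. The Python sketch below is an illustrative example, not the authors' processing pipeline: it re-expresses a moment computed in a global (laboratory) coordinate system along segment-fixed axes via a rotation matrix, which is where frontal- and transverse-plane values diverge once the foot is abducted. The Euler-angle sequence, angles, and moment values are hypothetical.

```python
import numpy as np

def rotation_from_euler_zyx(yaw, pitch, roll):
    """Rotation matrix of a segment frame expressed in the global (lab) frame,
    built from an illustrative Z-Y-X Euler angle sequence (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def moment_in_segment_frame(M_global, R_seg_in_global):
    """Re-express a moment vector from the global coordinate system (GCS)
    into the segmental coordinate system (SCS)."""
    return R_seg_in_global.T @ M_global

# Hypothetical moment components about the global X, Y, Z axes (Nm),
# with the foot abducted about 15 degrees and slightly inverted.
M_gcs = np.array([5.0, 60.0, -8.0])
R = rotation_from_euler_zyx(np.deg2rad(15), 0.0, np.deg2rad(5))
M_scs = moment_in_segment_frame(M_gcs, R)
print(M_scs)  # the same physical moment, resolved along segment-fixed axes
```

With zero abduction the two resolutions coincide; as the abduction angle grows, the components attributed to the frontal and transverse planes increasingly depend on which frame is chosen, consistent with the discrepancy reported above.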


1997 ◽  
Vol 91 (2) ◽  
pp. 151-162
Author(s):  
J. Bullington ◽  
G. Karlsson

The aim of this qualitative-interpretive, phenomenological-psychological study was to discover the essential dimensions (distinctive features) of the body experiences of congenitally blind people. The information was obtained through semi-structured interviews consisting of open-ended questions, to which the subjects could reply freely and at length. Of the various forms of body experience mentioned in the interviews, three are discussed in this article: the functional body, the objectified body, and the identity-creating body.


Author(s):  
Claudio Mellace ◽  
Antonio Gugliotta ◽  
Tariq Sinokrot ◽  
Ahmed A. Shabana

One of the important issues associated with the use of trajectory coordinates in railroad vehicle simulations is the ability of such coordinates to deal with braking and traction scenarios. In existing specialized railroad computer algorithms, the trajectory coordinates, rather than the absolute Cartesian coordinates, are often used. In these algorithms, track coordinate systems that travel with constant speeds are employed to define the configuration of the components in railroad vehicle systems. As a result of using a prescribed motion for these track coordinate systems, the simulation of braking and/or traction scenarios becomes difficult or even impossible, as reported in recent investigations [2]. The assumption of the prescribed motion of the track coordinate systems can be relaxed, thereby allowing the trajectory coordinate systems to be used effectively in modeling braking and traction scenarios. It is the objective of this investigation to demonstrate that, by using track coordinate systems that can have an arbitrary motion, the trajectory coordinates can serve as the basis for developing computer algorithms for modeling braking and traction scenarios. To this end, a set of six generalized trajectory coordinates is used to define the configuration of each rigid body in the railroad vehicle system. This set consists of one absolute coordinate, an arc length that represents the distance traveled by the body, and five relative coordinates. The arc length parameter defines the location of the origin and the orientation of a track coordinate system that follows the motion of the body. The five relative coordinates are two translations that define the position of the origin of the body coordinate system with respect to the track coordinate system in directions lateral and normal to the track, and three Euler angles that define the orientation of the body coordinate system with respect to its track coordinate system. The independent state equations of motion associated with the trajectory coordinates are identified and integrated forward in time in order to determine the trajectory coordinates and velocities. The results obtained in this study show that, when the track coordinate systems are allowed to have an arbitrary motion, the resulting set of trajectory coordinates can be used effectively in the study of braking and traction conditions. The numerical examples presented in this paper include two different vehicle models subjected to several braking conditions. The results are compared with those obtained using formulations based on absolute Cartesian coordinates, which allow modeling braking and traction scenarios.
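
As a rough illustration of the coordinate set described above (offered as a sketch, not the authors' implementation), the Python example below stores the six generalized trajectory coordinates and maps them to an absolute Cartesian position via a track coordinate system. The circular track geometry, the frame construction, and all numerical values are assumptions made for the example.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TrajectoryCoordinates:
    """Illustrative container for the six generalized trajectory coordinates:
    one arc length plus five coordinates relative to the track frame."""
    s: float      # distance traveled along the track (arc length)
    y: float      # lateral offset of the body frame from the track frame
    z: float      # normal offset of the body frame from the track frame
    psi: float    # yaw of the body frame relative to the track frame
    phi: float    # roll relative to the track frame
    theta: float  # pitch relative to the track frame

def track_frame(s, radius=500.0):
    """Hypothetical planar circular track of the given radius: returns the
    origin and rotation matrix of the track coordinate system at arc length s."""
    angle = s / radius
    origin = np.array([radius * np.sin(angle), radius * (1 - np.cos(angle)), 0.0])
    t = np.array([np.cos(angle), np.sin(angle), 0.0])   # tangent (forward) axis
    n = np.array([-np.sin(angle), np.cos(angle), 0.0])  # lateral axis
    b = np.array([0.0, 0.0, 1.0])                       # normal (up) axis
    return origin, np.column_stack((t, n, b))

def body_position_in_global(q: TrajectoryCoordinates):
    """Map trajectory coordinates to the absolute Cartesian position of the
    body frame origin: track-frame pose at s, plus the lateral/normal offsets.
    The three Euler angles would analogously define the body orientation
    relative to the track frame; they are not needed for the position alone."""
    origin, R_track = track_frame(q.s)
    return origin + R_track @ np.array([0.0, q.y, q.z])

q = TrajectoryCoordinates(s=120.0, y=0.05, z=1.1, psi=0.0, phi=0.0, theta=0.0)
print(body_position_in_global(q))
```

In a braking or traction scenario the arc length s (and hence the track frame itself) evolves with the body's own motion rather than at a prescribed constant speed, which is the relaxation the paragraph above describes.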


2017 ◽  
Author(s):  
Virginie Crollen ◽  
Latifa Lazzouni ◽  
Mohamed Rezk ◽  
Antoine Bellemare ◽  
Franco Lepore ◽  
...  

Localizing touch relies on the activation of skin-based and externally defined spatial frames of reference. Psychophysical studies have demonstrated that early visual deprivation prevents the automatic remapping of touch into external space. We used fMRI to characterize how visual experience impacts the brain circuits dedicated to the spatial processing of touch. Sighted and congenitally blind humans (male and female) performed a tactile temporal order judgment (TOJ) task, either with the hands uncrossed or crossed over the body midline. Behavioral data confirmed that crossing the hands has a detrimental effect on TOJ judgments in sighted but not in blind participants. Crucially, the crossed-hand posture elicited more activity in a fronto-parietal network in the sighted group only. Psychophysiological interaction analysis revealed that the congenitally blind showed enhanced functional connectivity between parietal and frontal regions in the crossed versus uncrossed hand postures. Our results demonstrate that visual experience scaffolds the neural implementation of touch perception.

Significance statement: Although we seamlessly localize tactile events in daily life, this is not a trivial operation, because the hands move constantly within the peripersonal space. To process touch correctly, the brain therefore has to take the current position of the limbs into account and remap tactile events to their location in the external world. In sighted individuals, parietal and premotor areas support this process. However, while visual experience has been suggested to support the implementation of the automatic external remapping of touch, no study so far has investigated how early visual deprivation alters the brain network supporting touch localization. Examining this question is therefore crucial to conclusively determine the intrinsic role vision plays in scaffolding the neural implementation of touch perception.


1983 ◽  
Vol 77 (4) ◽  
pp. 161-166 ◽  
Author(s):  
James F. Herman ◽  
Steven P. Chatman ◽  
Steven F. Roth

This study examines the spatial ability of sighted, blindfolded sighted, and congenitally blind subjects. Participants walked through an unfamiliar, large-scale space in which the target locations could not be seen simultaneously; they were then taken to each target location and asked to indicate the positions of the other locations. The results indicate that past visual experience helps individuals acquire spatial information from large-scale environments.


1976 ◽  
Vol 70 (5) ◽  
pp. 188-194
Author(s):  
Dean Rosencranz ◽  
Richard Suslick

Congenitally blind, adventitiously blind, and sighted subjects' informal verbal responses to questions about furniture arrangements verbally described by the experimenter were analyzed. From their accuracy, reaction times, and verbal protocols, it was concluded that prior visual experience is important, though not essential, to the development of a "frame of reference," i.e., a two-dimensional symbol structure for spatial representation. This conclusion supports the belief that visual-modality perceptual symbol structures are useful in the development of a frame of reference, and it also casts some doubt on Juurmaa's (1973) conclusion that the congenitally blind develop the same type of spatial representations as the sighted.


2021 ◽  
Author(s):  
Jeroen van Paridon ◽  
Qiawen Liu ◽  
Gary Lupyan

Certain colors are strongly associated with certain adjectives (e.g. red is hot, blue is cold). Some of these associations are grounded in visual experiences like seeing hot embers glow red. Surprisingly, many congenitally blind people show similar color associations, despite lacking all visual experience of color. Presumably, they learn these associations via language. Can we detect these associations in the statistics of language? And if so, what form do they take? We apply a projection method to word embeddings trained on corpora of spoken and written text to identify color-adjective associations as they are represented in language. We show that these projections are predictive of color-adjective ratings collected from blind and sighted people, and that the effect size depends on the training corpus. Finally, we examine how color-adjective associations might be represented in language by training word embeddings on corpora from which various sources of color-semantic information are removed.
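
One simple way to realize such a projection method, offered here as a hedged sketch rather than the authors' exact procedure, is to score each adjective against a colour term by cosine similarity, or to project colour words onto an axis spanned by two antonymous adjectives. In the snippet below, wv stands for any pretrained word-embedding lookup table and is an assumption of the example.

```python
import numpy as np

def normalize(v):
    """Return the unit-length version of a vector."""
    return v / np.linalg.norm(v)

def association_score(adjective_vec, color_vec):
    """Cosine similarity between an adjective and a colour term: one simple
    way to read a colour-adjective association out of an embedding space."""
    return float(normalize(adjective_vec) @ normalize(color_vec))

def project_onto_axis(word_vec, pole_a, pole_b):
    """Project a word onto the axis running from pole_a to pole_b (e.g. from
    'cold' to 'hot'); larger values place the word closer to pole_b."""
    axis = normalize(pole_b - pole_a)
    return float(word_vec @ axis)

# Usage with any pretrained embedding table wv (e.g. a word -> vector mapping):
# score = association_score(wv["hot"], wv["red"])
# heat  = project_onto_axis(wv["red"], wv["cold"], wv["hot"])
```

Such projection scores can then be correlated with the colour-adjective ratings collected from blind and sighted participants, and recomputed on embeddings trained on corpora with different sources of colour-semantic information removed.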


2019 ◽  
Author(s):  
Ceren Battal ◽  
Valeria Occelli ◽  
Giorgia Bertonati ◽  
Federica Falagiarda ◽  
Olivier Collignon

Vision is thought to scaffold the development of spatial abilities in the other senses. How, then, does spatial hearing develop in people lacking visual experience? We addressed this question comprehensively by investigating auditory localization abilities in 17 congenitally blind and 17 sighted individuals, using a psychophysical minimum-audible-angle task free of sensorimotor confounds. Participants were asked to compare the relative positions of two sound sources located in central and peripheral, horizontal and vertical, and frontal and rear spaces. We observed an unequivocal enhancement of spatial hearing abilities in congenitally blind people, irrespective of the field of space assessed. Our results conclusively demonstrate that visual experience is not a mandatory prerequisite for developing optimal spatial hearing abilities and that, in striking contrast, the lack of vision leads to a ubiquitous enhancement of auditory spatial skills.


2020 ◽  
Author(s):  
Irene Togoli ◽  
Virginie Crollen ◽  
Roberto Arrighi ◽  
Olivier Collignon

Humans share with other animals a number sense, a system allowing a rapid and approximate estimate of the number of items in a scene. Recently, it has been shown that numerosity is shared between action and perception: the number of repetitions of self-produced actions affects the perceived numerosity of subsequent visual stimuli presented around the area where the actions occurred. Here we investigate whether this interplay between action and perception for numerosity depends on visual input and visual experience. We measured the effects of adaptation to motor routines (finger tapping) on numerical estimates of auditory sequences in sighted and congenitally blind people. In both groups, our results show a consistent adaptation effect, with relative under- or over-estimation of perceived auditory numerosity following rapid or slow tapping adaptation, respectively. Moreover, adaptation occurred around the tapping area irrespective of hand posture (crossed or uncrossed hands), indicating that motor adaptation was coded in external (not hand-centred) coordinates in both groups. Overall, these results support the existence of a generalized interaction between action and perception for numerosity that occurs in external space and manifests independently of visual input or even visual experience.

