Task-Specific Sensorimotor Adaptation to Reversing Prisms

2005 ◽  
Vol 93 (2) ◽  
pp. 1104-1110 ◽  
Author(s):  
Jonathan J. Marotta ◽  
Gerald P. Keith ◽  
J. Douglas Crawford

We tested among three levels of visuospatial adaptation (global map, parallel feature modules, and parallel sensorimotor transformations) by training subjects to reach and grasp virtual objects viewed through a left-right reversing prism, with either visual location or orientation feedback. Even though spatial information about the global left-right reversal was present in every training session, subjects trained with location feedback reached to the correct location but with the wrong (reversed) grasp orientation. Subjects trained with orientation feedback showed the opposite pattern. These errors were task-specific and not feature-specific; subjects trained to correctly grasp visually reversed-oriented bars failed to show knowledge of the reversal when asked to point to the end locations of these bars. These results show that adaptation to visuospatial distortion, even global reversals, is implemented through learning rules that operate on parallel sensorimotor transformations (e.g., reach vs. grasp).

Author(s):  
Mauricio Carlos Henrich ◽  
Ken Steffen Frahm ◽  
Ole K. Andersen

Spatial information from nociceptive stimuli applied to the skin of healthy humans is integrated in the spinal cord to determine the appropriate withdrawal reflex response. Double simultaneous stimuli applied to different skin sites are integrated, eliciting a larger reflex response. The temporal characteristics of the stimuli also modulate the reflex, e.g., by temporal summation. The primary aim of this study was to investigate how the combined tempo-spatial aspects of two stimuli are integrated in the nociceptive system. This was investigated by delivering single and double simultaneous stimulation, as well as sequential stimulation with different inter-stimulus intervals (ISIs ranging from 30 to 500 ms), to the sole of the foot of fifteen healthy subjects. The primary outcome measure was the size of the nociceptive withdrawal reflex (NWR) recorded from the tibialis anterior (TA) and biceps femoris (BF) muscles. Pain intensity was measured using a numerical rating scale (NRS). Results showed spatial summation in both TA and BF for simultaneous stimulation. Simultaneous stimulation provoked larger reflexes than sequential stimulation in TA, but not in BF. Larger ISIs elicited significantly larger reflexes in TA, while the opposite pattern occurred in BF. This differential modulation between proximal and distal muscles suggests the presence of spinal circuits eliciting a functional reflex response based on the specific tempo-spatial characteristics of a noxious stimulus. No modulation was observed in pain intensity ratings across ISIs; this absence of modulation argues for an integrative mechanism located within the spinal cord, governed by the need for efficient withdrawal from a potentially harmful stimulus.


2021 ◽  
Author(s):  
Vladislav Ayzenberg ◽  
Samoni Nag ◽  
Amy Krivoshik ◽  
Stella F. Lourenco

To represent an object accurately, the visual system must individuate it from surrounding objects and then assign it to the appropriate category or identity. To this end, adults flexibly weight different visual cues when perceiving objects. However, less is known about whether, and how, the weighting of visual object information changes over development. The current study examined how children use different types of information, spatial (e.g., left/right location) and featural (e.g., color), in different object tasks. In Experiment 1, we tested whether infants and preschoolers extract both the spatial and featural properties of objects and, importantly, how these cues are weighted when pitted against each other. We found that infants relied primarily on spatial cues and neglected featural cues. By contrast, preschoolers showed the opposite pattern of weighting, placing greater weight on featural information. In Experiment 2, we tested the hypothesis that the developmental shift from spatial to featural weighting reflects a shift in priority from object individuation (how many objects) in infancy to object classification (what the objects are) at preschool age. Here, we found that preschoolers weighted spatial information more than features when the task required individuating objects without identifying them, consistent with a specific role for spatial information in object individuation. We discuss the relevance of spatial-featural weighting in relation to developmental changes in children's object representations.


Author(s):  
Mariacarla Memeo ◽  
Marco Jacono ◽  
Giulio Sandini ◽  
Luca Brayda

Abstract Background In this work, we present a novel sensory substitution system that enables three-dimensional digital information to be learned via touch when vision is unavailable. The system is based on a mouse-shaped device, designed to let a single finger jointly perceive local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll, and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues. Methods The efficacy of the system was measured with a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind group and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. The virtual objects were explored in three conditions, i.e., with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects in these tactile conditions. Results In both groups, elevation and inclination cues were each sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation cues decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and were able to predict it. Conclusions The proposed technology could facilitate the learning of science, engineering, and mathematics in the absence of vision, and it is also a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
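The tangent-plane idea above (local height plus roll/pitch cues over a scalar field) can be sketched numerically. The snippet below is an illustrative assumption, not the authors' implementation: it estimates the field's gradient at the contact point with central finite differences and converts it into the tilt angles a three-degree-of-freedom actuator would render.

```python
import math

# Sketch (assumed details, not the article's implementation): a tactile
# mouse over a scalar field f(x, y) renders the plane tangent to the
# field at the contact point. The local cues are the height f(x, y) and
# the pitch/roll angles derived from the local gradient.

def field(x, y):
    return math.sin(x) * math.cos(y)      # example scalar field

def local_cues(x, y, step=1e-5):
    height = field(x, y)
    # central finite-difference estimate of the gradient
    dfdx = (field(x + step, y) - field(x - step, y)) / (2 * step)
    dfdy = (field(x, y + step) - field(x, y - step)) / (2 * step)
    pitch = math.atan(dfdx)               # tilt along the mouse x-axis
    roll = math.atan(dfdy)                # tilt along the mouse y-axis
    return height, pitch, roll

# As the mouse moves (global, proprioceptive cue), the actuator updates
# the local height and tilt (local, tactile cues).
z0, pitch0, roll0 = local_cues(0.3, 0.5)
```

Proprioception supplies where the hand is; the three returned values are what the actuator would present under the fingertip at that position.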


Author(s):  
Frank Dickmann ◽  
Julian Keil ◽  
Paula L. Dickmann ◽  
Dennis Edler

Abstract Augmented reality (AR) is playing an increasingly important role in a variety of everyday application scenarios. Users are not completely disconnected from the current sensory input of reality; they are merely confronted with additional virtual objects projected into reality. This allows users to obtain additional spatial information, which makes the technology interesting for cartographic applications (e.g., navigation). The dynamic positioning of the superimposed image within the viewed scene is crucial for generating AR elements that are displayed correctly in terms of perspective. Understanding these technical basics is an important prerequisite for the cartographic use of augmented reality. The different techniques influence the visualization and perception of AR elements in 3D space. This article highlights important visualization properties of current augmented reality techniques.


1988 ◽  
Vol 66 (4) ◽  
pp. 429-429
Author(s):  
John Kalaska ◽  
Allan Smith ◽  
Yves Lamarre

Each year, the Centre de recherche en sciences neurologiques of the Université de Montréal organizes a symposium on a topic in the neurosciences. For the IXth International Symposium, the theme chosen was "Spatial Representations and Sensorimotor Transformations."

Many of the diverse functions performed by the central nervous system have an important spatial component in common. For instance, there are neural mechanisms for the analysis and perception of the three-dimensional structure of visual space, such as the location, form, and movement of objects in the visual environment. There exist processes to determine the spatial location of auditory stimuli. One can also regard the body as an "internal" space for which mechanisms have evolved for the kinesthetic perception of the position and movement of body parts relative to one another, and for the position and orientation of the body within its immediate external environment. Motor control also requires spatial information, since many movements of the eyes, head, and limbs follow specific paths or are aimed at the specific spatial location of an object as signalled by sensory processes.

One can argue, therefore, that a major aspect of the functioning of the brain involves the generation of many different spatial representations, and the exchange of information among them. Each of these neural representations provides a spatial coordinate framework whose coordinate axes are based on certain types of information. For instance, movement of the limb toward an object can be described equally well in several different coordinate systems, such as those based on its spatial path, its dynamics (the direction and level of forces, torques, and external loads), or the muscle activity by which it is achieved. A better understanding of the coordinates in which the CNS codes these various types of information will provide a better appreciation of the neural mechanisms generating the spatial representations. It will also provide a clearer understanding of the transformations that must occur to relay information between sensory and motor representations, which permit an animal to interact successfully with its environment.

The participants at this Symposium were invited to examine some of these issues as they pertain to the somatic and visual systems.


2020 ◽  
Vol 33 (4-5) ◽  
pp. 417-431 ◽  
Author(s):  
Chiara Martolini ◽  
Giulia Cappagli ◽  
Claudio Campus ◽  
Monica Gori

Abstract Recent studies have demonstrated that audition used to complement or substitute for visual feedback is effective in conveying spatial information; e.g., sighted individuals can understand the curvature of a shape when solely auditory input is provided. Recently, we also demonstrated that, in the absence of vision, auditory feedback of body movements can enhance spatial perception in visually impaired adults and children. In the present study, we assessed whether sighted adults can also improve their spatial abilities related to shape recognition with an audio-motor training, based on the idea that coupling auditory and motor information can further refine the representation of space when vision is missing. Auditory shape recognition was assessed in 22 blindfolded sighted adults with an auditory task requiring participants to identify four shapes by means of the sound conveyed through a set of consecutive loudspeakers embedded in a fixed two-dimensional vertical array. We divided participants into two groups of 11 adults each, performing a training session in two different modalities: active audio-motor training (experimental group) and passive auditory training (control group). The audio-motor training consisted of reproducing specific arm movements by relying on the sound produced by an auditory source positioned on the participants' wrist. Results showed that sighted individuals improved the recognition of auditory shapes only after active training, suggesting that audio-motor feedback can be an effective tool to enhance spatial representation when visual information is lacking.


2021 ◽  
Author(s):  
Alex Miklashevsky

Previous research demonstrated a close bidirectional relationship between spatial attention and the manual motor system. However, it is unclear whether an explicit hand movement is necessary for this relationship to appear. A novel method with high temporal resolution – bimanual grip force registration – sheds light on this issue. Participants held two grip force sensors while being presented with lateralized stimuli (exogenous attentional shifts, Experiment 1), left- or right-pointing central arrows (endogenous attentional shifts, Experiment 2), or the words "left" or "right" (endogenous attentional shifts, Experiment 3). There was an early interaction between the presentation side or arrow direction and grip force: lateralized objects and central arrows led to an increase of the ipsilateral force and a decrease of the contralateral force. Surprisingly, words led to the opposite pattern: increased force in the contralateral hand and decreased force in the ipsilateral hand. The effect was stronger and appeared earlier for lateralized objects (60 ms after stimulus presentation) than for arrows (100 ms) or words (250 ms). Thus, processing visuospatial information automatically activates the manual motor system, but the timing and direction of this effect vary depending on the type of stimulus.


1992 ◽  
Vol 15 (3) ◽  
pp. 154-162 ◽  
Author(s):  
Thomas E. Scruggs ◽  
Margo A. Mastropieri ◽  
Frederick J. Brigham ◽  
G. Sharon Sullivan

Thirty-nine seventh- and eighth-grade students with learning disabilities received verbal and spatial information about eighteenth-century North American battles under two conditions. In the control condition, learners were provided a map depicting locations of battles, accompanied by descriptive/decorative pictures. Mnemonic condition learners received the same map with the exception that pictures accompanying place names represented reconstructed keywords of those names. In both conditions, pictures were colored red if they represented British victories, blue if they represented American victories. After a training session and a 90-second filler activity, students were asked to locate each battle on an unlabeled map and indicate which side had won the battle. Analysis of results indicated that mnemonic condition students significantly outperformed controls on measures of spatial relocation and correct matching of place name with victor. Effects were especially pronounced on the measure of spatial relocation, in which an effect size of over two standard deviations was obtained. Implications for research and practice are discussed.
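The "effect size of over two standard deviations" reported above corresponds to a standardized mean difference (Cohen's d): the difference between condition means divided by the pooled standard deviation. As a minimal illustration, with invented scores that are not the study's data:

```python
import statistics

# Hypothetical illustration of the effect-size statistic (Cohen's d).
# The scores below are invented, not the study's data.
mnemonic = [16, 17, 15, 18, 16, 17]   # invented relocation scores
control = [10, 11, 9, 12, 10, 11]

def cohens_d(a, b):
    na, nb = len(a), len(b)
    # pooled variance, weighted by degrees of freedom
    pooled = ((na - 1) * statistics.variance(a) +
              (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled ** 0.5

d = cohens_d(mnemonic, control)   # d > 2 would match the reported magnitude
```

A d above 2 means the average mnemonic-condition score sits more than two pooled standard deviations above the average control score, which is a very large effect by conventional benchmarks.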


Author(s):  
T. A. Welton

Various authors have emphasized the spatial information resident in an electron micrograph taken with adequately coherent radiation. In view of the completion of at least one such instrument, this opportunity is taken to summarize the state of the art of processing such micrographs. We use the usual symbols for the aberration coefficients, and supplement these with ℓ and δ for the transverse coherence length and the fractional energy spread, respectively. We also assume a weak, biologically interesting sample, with principal interest lying in the molecular skeleton remaining after obvious hydrogen loss and other radiation damage have occurred.


Author(s):  
Vijay Krishnamurthi ◽  
Brent Bailey ◽  
Frederick Lanni

Excitation field synthesis (EFS) refers to the use of an interference optical system in a direct-imaging microscope to improve 3D resolution by axially-selective excitation of fluorescence within a specimen. The excitation field can be thought of as a weighting factor for the point-spread function (PSF) of the microscope, so that the optical transfer function (OTF) gets expanded by convolution with the Fourier transform of the field intensity. The simplest EFS system is the standing-wave fluorescence microscope, in which an axially-periodic excitation field is set up through the specimen by interference of a pair of collimated, coherent, s-polarized beams that enter the specimen from opposite sides at matching angles. In this case, spatial information about the object is recovered in the central OTF passband, plus two symmetric, axially-shifted sidebands. Gaps between these bands represent "lost" information about the 3D structure of the object. Because the sideband shift is equal to the spatial frequency of the standing-wave (SW) field, more complete recovery of information is possible by superposition of fields having different periods. When all of the fields have an antinode at a common plane (set to be coincident with the in-focus plane), the "synthesized" field is peaked in a narrow in-focus zone.
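The passband structure described above (a central band plus two sidebands shifted by the SW spatial frequency) can be illustrated with a short numerical sketch. The field period and sampling below are arbitrary assumptions, chosen only to make the sideband location visible in the spectrum:

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the article):
# a standing-wave excitation field I(z) = 1 + cos(2*pi*z/L) has a
# Fourier spectrum with a central (DC) peak plus two symmetric
# sidebands at +/- 1/L, the SW spatial frequency.

n = 1024                     # axial samples
dz = 0.01                    # sampling step (arbitrary units)
z = np.arange(n) * dz
L = 0.25                     # assumed SW period

field = 1.0 + np.cos(2 * np.pi * z / L)   # axially periodic intensity

spectrum = np.abs(np.fft.rfft(field))
freqs = np.fft.rfftfreq(n, d=dz)

# strongest non-DC component sits at the SW spatial frequency ~ 1/L
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]
```

Superposing fields with different periods `L`, as the abstract describes, would place sidebands at several spatial frequencies and fill in more of the gaps between the passbands.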

