Neuroplastic Reorganization Induced by Sensory Augmentation for Self-Localization During Locomotion

2021 · Vol 2
Author(s): Hiroyuki Sakai, Sayako Ueda, Kenichi Ueno, Takatsune Kumada

Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.

Author(s): Adam F. Werner, Jamie C. Gorman

Objective: This study examines visual, auditory, and combined (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs. Background: Parking a plane or landing a helicopter on a mountain top requires one person to provide motor inputs while another person provides perceptual inputs. Perceptual inputs are communicated either visually, auditorily, or through both cues. Methods: One participant drove a remote-controlled car around an obstacle and through a target, while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated using target size. Performance (trial time, path variability), cue rate, and spatial ability were measured. Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher ability performed best with a lower rate. Conclusion: Visual cues result in better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing. In particular, drivers must be able to transform auditory cues into spatial actions. Application: Spotters should be trained to provide a cue rate matched to the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.


1976 · Vol 28 (2) · pp. 193-202
Author(s): Philip Merikle

Report of single letters from centrally fixated, seven-letter target rows was probed by either auditory or visual cues. The target rows were presented for 100 ms, and the report cues were single digits indicating the spatial location of a letter. In three separate experiments, report was always better with the auditory cues. The advantage for the auditory cues was maintained both when the target rows were masked by a patterned stimulus and when the auditory cues were presented 500 ms later than comparable visual cues. The results indicate that visual cues produce modality-specific interference operating at a level of processing beyond iconic representation.


Author(s): Nada Zwayyid Almutairi, Eman Salah Ibrahim Rizk

This study explores the effectiveness of interactive e-book (Ie-book) cues and Information Processing Levels (IPL) on Learning Retention (LR) and External Cognitive Load (ECL). 117 middle school pupils (MSP) were divided into six experimental groups based on their IPL and cue type during the second term of the 2019–2020 academic year. The Visual Cues (VC)/Audiovisual Cues (AVC) and Auditory Cues (AC)/AVC groups differed statistically on the LR test and the ECL scale in the Ie-book, as did the average scores on the Science LR test for MSP, owing to the difference in IPL between deep-level (DL) and surface-level (SL) processing. There was a statistically significant interaction effect of cue type in the Ie-book with IPL on the ECL scale for MSP, at its highest peak for AVC with DL, followed by the interaction of VC with DL and then AC with SL. The interaction of cues in the Ie-book with IPL also strongly affected the LR test for MSP, again peaking for AVC with DL. The interactions between (DL–SL) and (AC–VC) appear to influence the ECL equally.


2021 · Vol 11
Author(s): Christopher R. Madan, Anthony Singhal

Learning to play a musical instrument involves mapping visual and auditory cues to motor movements and anticipating transitions. Inspired by the serial reaction time task and artificial grammar learning, we investigated explicit and implicit knowledge of statistical learning in a sensorimotor task. Using a between-subjects design with four groups, one group of participants was provided with visual cues and followed along by tapping the corresponding fingertip to their thumb while wearing a computer glove. Another group additionally received accompanying auditory tones; the final two groups received sensory (visual or visual + auditory) cues but did not provide a motor response, altogether following a 2 × 2 design. Implicit knowledge was measured by response time, whereas explicit knowledge was assessed using probe tests. Findings indicate that explicit knowledge was best with only a single modality, whereas implicit knowledge was best when all three modalities were involved.
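As a rough illustration of the statistical structure such paradigms embed, the sketch below generates a cue sequence over four fingers from a first-order transition matrix, so some transitions are far more probable than others. The matrix values and the four-finger coding are assumptions for illustration only, not the authors' actual design.

```python
import numpy as np

# Hypothetical first-order transition matrix over four finger cues
# (rows: current cue, columns: next cue). The high-probability
# transitions are what participants can come to anticipate implicitly.
TRANSITIONS = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.10, 0.10, 0.10, 0.70],
    [0.70, 0.10, 0.10, 0.10],
])

def generate_cue_sequence(n_trials, rng):
    """Sample a cue sequence from the transition matrix."""
    seq = [rng.integers(0, 4)]
    for _ in range(n_trials - 1):
        seq.append(rng.choice(4, p=TRANSITIONS[seq[-1]]))
    return np.array(seq)

rng = np.random.default_rng(42)
cues = generate_cue_sequence(200, rng)
# Implicit learning would show up as faster response times on
# high-probability transitions than on low-probability ones.
```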


2004 · Vol 91 (5) · pp. 2172-2184
Author(s): Andrew H. Bell, Jillian H. Fecteau, Douglas P. Munoz

Reflexively orienting toward a peripheral cue can influence subsequent responses to a target, depending on when and where the cue and target appear relative to each other. At short delays between the cue and target [cue-target onset asynchrony (CTOA)], subjects are faster to respond when they appear at the same location, an effect referred to as reflexive attentional capture. At longer CTOAs, subjects are slower to respond when the two appear at the same location, an effect referred to as inhibition of return (IOR). Recent evidence suggests that these phenomena originate from sensory interactions between the cue- and target-related responses. The capture of attention originates from a strong target-related response, derived from the overlap of the cue- and target-related activities, whereas IOR corresponds to a weaker target-aligned response. If such interactions are responsible, then modifying their nature should impact the neuronal and behavioral outcome. Monkeys performed a cue-target saccade task featuring visual and auditory cues while neural activity was recorded from the superior colliculus (SC). Compared with visual responses, auditory responses are weaker and occur earlier, thereby decreasing the likelihood of interactions between these signals. Similar to previous studies, visual stimuli evoked reflexive attentional capture at a short CTOA (60 ms) and IOR at longer CTOAs (160 and 610 ms) with corresponding changes in the target-aligned activity in the SC. Auditory cues used in this study failed to elicit either a behavioral effect or modification of SC activity at any CTOA, supporting the hypothesis that reflexive orienting is mediated by sensory interactions between the cue and target stimuli.


2010 · Vol 21 (05) · pp. 567-581
Author(s): Irene Crisologo, Rene Batac, Anthony Longjas, Erika Fille Legara, Christopher Monterola

Humans are deemed ineffective at generating a seemingly random number sequence, primarily because of inherent biases and fatigue. Here, we establish statistically that human-generated number sequences produced in the presence of visual cues show a considerably reduced tendency to be fixated on a certain group of numbers, allowing the number distribution to become statistically uniform. We also show that a stitching procedure utilizing auditory cues significantly minimizes humans' intrinsic biases towards doublets and sequential ordering of numbers. The article provides extensive experimentation and comprehensive pattern analysis of the sequences formed when humans are tasked to generate a random series using the numbers "0" to "9." In the process, we develop a statistical framework for analyzing the apparent randomness of finite discrete sequences via numerical measurements.
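The "statistical framework for analyzing the apparent randomness of finite discrete sequences" suggests measurements of the following kind: a chi-square test of digit uniformity and a doublet (immediate-repeat) rate. The sketch below is a minimal illustration of such measurements under that assumption, not the authors' actual code.

```python
import numpy as np
from scipy import stats

def uniformity_and_doublets(seq, n_symbols=10):
    """Chi-square test of digit uniformity and doublet rate
    for a finite discrete sequence of digits 0-9."""
    seq = np.asarray(seq)
    counts = np.bincount(seq, minlength=n_symbols)

    # Goodness-of-fit against a uniform digit distribution.
    chi2, p_uniform = stats.chisquare(counts)

    # Doublet rate: fraction of adjacent pairs repeating a digit.
    # Under true randomness this should be close to 1/n_symbols.
    doublet_rate = np.mean(seq[1:] == seq[:-1])

    return {"chi2": chi2, "p_uniform": p_uniform,
            "doublet_rate": doublet_rate,
            "expected_doublet_rate": 1.0 / n_symbols}

# Example: a pseudorandom benchmark sequence of 1000 digits.
rng = np.random.default_rng(0)
print(uniformity_and_doublets(rng.integers(0, 10, size=1000)))
```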


2016 · Vol 2016 · pp. 1-12
Author(s): Junpeng Zhang, Yuan Cui, Lihua Deng, Ling He, Junran Zhang, ...

This paper proposes a prewhitening invariance of noise space (PW-INN) method as a new magnetoencephalography (MEG) source analysis technique, particularly suitable for localizing closely spaced and highly correlated cortical sources under real MEG noise. Conventional source localization methods, such as sLORETA and beamformer, cannot distinguish closely spaced cortical sources, especially under strong intersource correlation. Our previous work proposed an invariance of noise space (INN) method to resolve closely spaced sources, but its performance is seriously degraded under correlated noise between MEG sensors. The proposed PW-INN method largely mitigates the adverse influence of correlated MEG noise by projecting MEG data onto a new space defined by the orthogonal complement of the dominant eigenvectors of the correlated MEG noise. Simulation results showed that PW-INN is superior to INN, sLORETA, and beamformer in terms of localization accuracy for closely spaced and highly correlated sources. Lastly, source connectivity between closely spaced sources can be satisfactorily constructed from the source time courses estimated by PW-INN, but not from the results of other conventional methods. The proposed PW-INN method is therefore a promising MEG source analysis technique, providing a high spatiotemporal characterization of cortical activity and connectivity, which is crucial for basic and clinical research on neural plasticity.
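The core projection step described here — moving MEG data into the orthogonal complement of the dominant noise eigenvectors — might look like the following minimal sketch. The variable names, the noise-covariance estimate, and the choice of how many noise dimensions to remove are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def project_out_noise_subspace(data, noise_cov, n_noise_dims):
    """Project MEG sensor data onto the orthogonal complement of the
    dominant eigenvectors of the correlated-noise covariance.

    data         : (n_sensors, n_samples) sensor time series
    noise_cov    : (n_sensors, n_sensors) noise covariance, e.g.
                   estimated from empty-room recordings (assumption)
    n_noise_dims : number of dominant noise eigenvectors to remove
    """
    # Eigendecomposition of the symmetric noise covariance;
    # eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(noise_cov)
    dominant = eigvecs[:, -n_noise_dims:]  # strongest noise directions

    # Orthogonal projector onto the complement of the noise subspace.
    n_sensors = data.shape[0]
    projector = np.eye(n_sensors) - dominant @ dominant.T
    return projector @ data

# Example with synthetic numbers: 100 sensors, 500 samples.
rng = np.random.default_rng(1)
noise = rng.standard_normal((100, 2000))
noise_cov = noise @ noise.T / noise.shape[1]
cleaned = project_out_noise_subspace(
    rng.standard_normal((100, 500)), noise_cov, n_noise_dims=5)
```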


Behaviour · 1992 · Vol 123 (1-2) · pp. 121-143
Author(s): Carlos R. Ruiz-Miranda

Although it is known that the young play an active role in the formation of mother-young attachment in ruminants, there is scant knowledge of how neonates identify their mothers. This research investigated the use of visual cues, particularly pelage pigmentation, in maternal recognition by domestic goat kids. Observations on the use of auditory cues were carried out secondarily. The findings of this study were: (1) The analysis of error patterns revealed that goat kids performed phenotype matching on the basis of pelage pigmentation when seeking their mothers in two- and six-choice tests, at a distance of 10 m. Presenting the kids with a choice between two females of the same colour resulted in more vacillation, and fewer kids were able to go to their mother directly than when the adults were of different colours. The phenomenon was not evident when the kids were 3 days old. Because it occurred at all other ages, regardless of whether the mother was absent, covered, or fully visible, colour-matching seems to be an important aspect of maternal recognition. (2) Visual cues were important for recognition, as evidenced by the performance of kids when maternal cues were limited (i.e. the mother was covered). (3) The efficiency measures did not correlate strongly with maternal vocalizations when visual cues from the mother were not completely present or when pelage pigmentation was not a good cue for discrimination. On the contrary, kids unexpectedly vocalized more in the conditions in which they could discriminate on the basis of visual cues, that is, when the mother was bare rather than covered, and when she was paired with a doe of a different colour category rather than one of the same colour category. (4) Five-day-old domestic goat kids recognized their mothers efficiently, even within a group, and at a distance of at least 10 m. Most 3-day-old kids were not able to find their mothers efficiently in the six-choice test. Errors were made at all ages. The observed performance is consistent with the abilities required of kids under natural conditions.


2009 · Vol 3 (2)
Author(s): Donald C. Brien, Brian D. Corneil, Jillian H. Fecteau, Andrew H. Bell, Douglas P. Munoz

Systematic modulations of microsaccades have been observed in humans during covert orienting. We show here that monkeys are a suitable model for studying the neurophysiology governing these modulations of microsaccades. Using various cue-target saccade tasks, we observed the effects of visual and auditory cues on microsaccades in monkeys. As in human studies, following visual cues there was an early bias towards cue-congruent microsaccades followed by a later bias towards cue-incongruent microsaccades. Following auditory cues, there was a cue-incongruent bias for left cues only. In a separate experiment, we observed that brainstem omnipause neurons, which gate all saccades, also paused during microsaccade generation. Thus, we provide evidence that at least part of the same neurocircuitry governs both large saccades and microsaccades.


2020
Author(s): Adam Qureshi, Rebecca Monk, Charlotte Rebecca Pennington, Jennifer Rose Oulton

Introduction: To represent a more immersive testing environment, the current study exposed individuals to both alcohol-related visual and auditory cues to assess their respective impact on alcohol-related inhibitory control. It further examined whether individual variation in alcohol consumption and trait effortful control may predict inhibitory control performance. Method: Twenty-five U.K. university students (mean age = 23.08 years, SD = 8.26) completed an anti-saccade eye-tracking task and were instructed to look towards (pro) or directly away from (anti) alcohol-related and neutral visual stimuli. Short alcohol-related sound cues (bar audio) were played on 50% of trials, and responses were compared with trials on which no sounds were played. Results: Participants launched more incorrect saccades towards alcohol-related visual stimuli on anti-saccade trials and responded more quickly to alcohol-related stimuli on pro-saccade trials. Alcohol-related audio cues reduced latencies on both pro- and anti-saccade trials and reduced anti-saccade error rates to alcohol-related visual stimuli. Controlling for trait effortful control and problem alcohol consumption removed these effects. Conclusion: These findings suggest that alcohol-related visual cues may be associated with reduced inhibitory control, evidenced by increased errors and faster response latencies. The presentation of alcohol-related auditory cues, however, appears to enhance performance accuracy. It is postulated that auditory cues may re-contextualise visual stimuli into a more familiar setting, reducing their saliency and lessening their attentional pull.

