Learning to balance on one leg: motor strategy and sensory weighting

2015 ◽  
Vol 114 (5) ◽  
pp. 2967-2982 ◽  
Author(s):  
Jaap H. van Dieën ◽  
Marloes van Leeuwen ◽  
Gert S. Faber

We investigated motor and sensory changes underlying learning of a balance task. Fourteen participants practiced balancing on one leg on a board that could freely rotate in the frontal plane. They performed six 16-s trials standing on one leg on a stable surface (two trials without manipulation, two with vestibular stimulation, and two with visual stimulation) and six trials on the balance board, before and after a 30-min training session. Center of mass (COM) movement, segment and total angular momenta, and board angles were determined. Trials on the stable surface were compared with trials after training to assess effects of surface conditions. Pretraining and posttraining trials were compared to assess rapid (between trials pretraining) and slower (before vs. after training) learning, and sensory manipulation trials were compared with unperturbed trials to assess sensory weighting. COM excursions were larger on the unstable surface but decreased with practice, with the largest improvement over the pretraining trials. Changes in angular momentum contributed more to COM acceleration on the balance board, but this contribution decreased with practice. Visual stimulation increased sway similarly in both surface conditions, while vestibular stimulation increased sway less on the balance board. With practice, the effects of visual and vestibular stimulation increased rapidly. Initially, oscillations of the balance board occurred at 3.5 Hz, which decreased with practice. The initial decrease in sway with practice was associated with upweighting of visual information, while later changes were associated with suppression of oscillations that we suggest are due to excessively high proprioceptive feedback gains.
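In standard biomechanics practice, the total angular momentum reported in studies like this one is the sum over body segments of a local (spin) term and a transfer term taken about the whole-body COM. A minimal planar (frontal-plane) sketch with hypothetical segment data, not the authors' actual processing pipeline:

```python
import numpy as np

def total_angular_momentum(segments, com_pos, com_vel):
    """Planar whole-body angular momentum about the body COM.

    Each segment is a dict with mass "m" (kg), moment of inertia "I" about
    the segment's own COM (kg*m^2), angular velocity "omega" (rad/s), and
    2-D COM position "pos" (m) and velocity "vel" (m/s).
    """
    L = 0.0
    for s in segments:
        r = s["pos"] - com_pos   # segment COM relative to body COM
        v = s["vel"] - com_vel   # relative linear velocity
        L += s["I"] * s["omega"]                   # local (spin) term
        L += s["m"] * (r[0] * v[1] - r[1] * v[0])  # transfer term (2-D cross product)
    return L
```

In practice, segment masses and moments of inertia would come from anthropometric tables scaled to each participant.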

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Eslam Mounier ◽  
Bassem Abdullah ◽  
Hani Mahdi ◽  
Seif Eldawlatly

Abstract The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it is much less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus, in addition to LGN neuronal firing history, to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN in 12 anesthetized rats, with a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and the predicted firing rates of 0.57 and 0.7 were achieved for the 10-ms and the 50-ms firing rate windows, respectively. Results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM). The results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
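The reported metric, the correlation between actual and predicted firing rates computed in fixed windows (10 ms and 50 ms), can be sketched as follows. The binning and preprocessing details here are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def binned_rates(spike_times, window_s, duration_s):
    """Histogram spike times (s) into fixed windows and convert counts to rates (Hz)."""
    edges = np.arange(0.0, duration_s + window_s, window_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / window_s

def rate_correlation(actual, predicted):
    """Pearson correlation between actual and predicted firing-rate vectors."""
    return np.corrcoef(actual, predicted)[0, 1]
```

Larger windows smooth the rate vectors, which is consistent with the higher correlation reported for the 50-ms window than for the 10-ms window.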


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Canhuang Luo ◽  
Rufin VanRullen ◽  
Andrea Alamia

Abstract Alpha rhythms (∼10 Hz) in the human brain are classically associated with idling activities, being predominantly observed during quiet restfulness with closed eyes. However, recent studies demonstrated that alpha rhythms can directly relate to visual stimulation, resulting in oscillations that can last for as long as one second. This alpha reverberation, dubbed perceptual echoes (PE), suggests that the visual system actively samples and processes visual information within the alpha-band frequency. Although PE have been linked to various visual functions, their underlying mechanisms and functional role are not completely understood. In this study, we investigated the relationship between conscious perception and the generation and amplitude of PE. Specifically, we displayed two coloured Gabor patches with different orientations on opposite sides of the screen and, using a set of dichoptic mirrors, induced binocular rivalry between the two stimuli. We asked participants to continuously report which of the two Gabor patches they consciously perceived while recording their EEG signals. Importantly, the luminance of each patch fluctuated randomly over time, generating random sequences from which we estimated two impulse-response functions (IRFs) reflecting the PE generated by the perceived (dominant) and non-perceived (suppressed) stimulus, respectively. We found that the alpha power of the PE generated by the consciously perceived stimulus was comparable with that of the PE generated during monocular vision (control condition) and higher than that of the PE induced by the suppressed stimulus. Moreover, confirming previous findings, we found that all PEs propagated as a travelling wave from posterior to frontal brain regions, irrespective of conscious perception. All in all, our results demonstrate a correlation between conscious perception and PE, suggesting that the synchronization of neural activity plays an important role in visual sampling and conscious perception.
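IRFs of this kind are commonly estimated by cross-correlating the zero-mean random luminance sequence with the EEG at increasing lags. A minimal sketch under that assumption, using synthetic signals rather than the study's data:

```python
import numpy as np

def estimate_irf(luminance, eeg, n_lags):
    """Estimate an impulse-response function (perceptual echo) as the
    cross-correlation between a random luminance sequence and the EEG,
    at lags 0..n_lags-1 samples (EEG lagging the stimulus)."""
    lum = luminance - luminance.mean()
    sig = eeg - eeg.mean()
    irf = np.empty(n_lags)
    for lag in range(n_lags):
        # average product of the stimulus and the EEG shifted by `lag` samples
        irf[lag] = np.dot(lum[: len(lum) - lag], sig[lag:]) / (len(lum) - lag)
    return irf
```

With white-noise luminance, the cross-correlation at each lag recovers the underlying response kernel at that lag, which is why random luminance sequences are used to probe the echo.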


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Abstract Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenological reports were measured. Our study revealed that, after training, when asked to identify sounds, processes shared with vision were involved, as participants' performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


2021 ◽  
Author(s):  
Shachar Sherman ◽  
Koichi Kawakami ◽  
Herwig Baier

The brain is assembled during development by both innate and experience-dependent mechanisms [1-7], but the relative contribution of these factors is poorly understood. Axons of retinal ganglion cells (RGCs) connect the eye to the brain, forming a bottleneck for the transmission of visual information to central visual areas. RGCs secrete molecules from their axons that control proliferation, differentiation and migration of downstream components [7-9]. Spontaneously generated waves of retinal activity, but also intense visual stimulation, can entrain responses of RGCs [10] and central neurons [11-16]. Here we asked how the cellular composition of central targets is altered in a vertebrate brain that is depleted of retinal input throughout development. For this, we first established a molecular catalog [17] and gene expression atlas [18] of neuronal subpopulations in the retinorecipient areas of larval zebrafish. We then searched for changes in lakritz (atoh7-) mutants, in which RGCs do not form [19]. Although individual forebrain-expressed genes are dysregulated in lakritz mutants, the complete set of 77 putative neuronal cell types in thalamus, pretectum and tectum is present. While neurogenesis and differentiation trajectories are overall unaltered, a greater proportion of cells remains in an uncommitted progenitor stage in the mutant. Optogenetic stimulation of a pretectal area [20,21] evokes a visual behavior in blind mutants indistinguishable from wildtype. Our analysis shows that, in this vertebrate visual system, neurons are produced more slowly but specified and wired up in a proper configuration in the absence of any retinal signals.


2008 ◽  
Vol 99 (1) ◽  
pp. 200-207 ◽  
Author(s):  
Olivia Andrea Masseck ◽  
Klaus-Peter Hoffmann

Single-unit recordings were performed from a retinorecipient pretectal area (corpus geniculatum laterale, Cgl) in Scyliorhinus canicula. The function and homology of this nucleus have not yet been clarified. During visual stimulation with a random dot pattern, 45 (35%) neurons were found to be direction selective, 10 (8%) were axis selective (best neuronal responses to rotations in both directions around one particular stimulus axis), and 75 (58%) were movement sensitive. Direction-selective responses were found to the following stimulus directions (in retinal coordinates): temporonasal and nasotemporal horizontal movements, upward and downward vertical movements, and oblique movements. All directions of motion were represented equally by our sample of pretectal neurons. Additionally, we tested the responses of 58 of the 130 neurons to random dot patterns rotating around the semicircular canal or body axes to investigate whether direction-selective visual information is mapped into vestibular coordinates in pretectal neurons of this chondrichthyan species. Again, all rotational directions were represented equally, which argues against a direct transformation from a retinal to a vestibular reference frame. If a complete transformation had occurred, responses to rotational axes corresponding to the axes of the semicircular canals should have been overrepresented. In conclusion, the recorded direction-selective neurons in the Cgl are plausible detectors for retinal slip created by body rotations in all directions.
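As an illustration of how direction selectivity is commonly quantified (not necessarily the criterion used in this study), a direction selectivity index compares a neuron's response to the preferred direction with its response to the opposite direction:

```python
def direction_selectivity_index(r_pref, r_null):
    """DSI = (R_pref - R_null) / (R_pref + R_null).

    r_pref: firing rate (Hz) for the preferred motion direction.
    r_null: firing rate (Hz) for the opposite direction.
    Returns 1 for a fully direction-selective neuron and 0 for a
    merely movement-sensitive neuron responding equally to both.
    """
    return (r_pref - r_null) / (r_pref + r_null)
```

A threshold on such an index is one way to separate direction-selective from movement-sensitive units, as in the 35% vs. 58% split reported above.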


2015 ◽  
Vol 28 (2) ◽  
pp. 241-249
Author(s):  
Fabiane Maria Klitzke dos Santos ◽  
Franciely Voltolini Mendes ◽  
Simone Suzuki Woellner ◽  
Noé Gomes Borges Júnior ◽  
Antonio Vinicius Soares

Introduction Hemiparetic stroke patients have their daily activities affected by balance impairment, and techniques that use visual information to train balance appear to be effective. Objective To analyze the effects of unstable balance board training and to compare two forms of visual feedback: biomechanical instrumentation and a mirror. Materials and methods Eight chronic hemiparetic stroke patients participated in the study, randomized into two groups. The first group (G1) trained with biomechanical instrumentation, and the second group (G2) trained in front of a mirror. Sixteen training sessions were performed, with feet together and feet apart. The evaluation instruments applied before and after the training period were the Timed Up and Go Test (TUGT), the Berg Balance Scale (BBS), and the Instrumented Balance Board (IBB), which quantified functional mobility, balance, and postural control, respectively. Results The TUGT showed significant results (p < 0.05) favoring G1. Although the BBS results were significant for G2, the intergroup comparison did not reach statistical significance. Both groups showed decreased IBB oscillation levels, which may indicate greater stability, although the results were not statistically significant (p > 0.05). A strong correlation between all applied tests was observed. Conclusion Although the advantages found differed between the groups, training brought benefits to both, with transfer to functional mobility.


2017 ◽  
Vol 30 (1) ◽  
pp. 65-90 ◽  
Author(s):  
Séamas Weech ◽  
Nikolaus F. Troje

Studies of the illusory sense of self-motion elicited by a moving visual surround (‘vection’) have revealed key insights about how sensory information is integrated. Vection usually occurs after a delay of several seconds following visual motion onset, whereas self-motion in the natural environment is perceived immediately. It has been suggested that this latency relates to the sensory mismatch between visual and vestibular signals at motion onset. Here, we tested three techniques with the potential to reduce sensory mismatch and thereby shorten vection onset latency: noisy galvanic vestibular stimulation (GVS) and bone-conducted vibration (BCV) at the mastoid processes, and body vibration applied to the lower back. In Experiment 1, we examined vection latency for wide-field visual rotations about the roll axis and applied a burst of stimulation at the start of visual motion. Both GVS and BCV reduced vection latency by two seconds compared to the control condition, whereas body vibration had no effect on latency. In Experiment 2, the visual stimulus rotated about the pitch, roll, or yaw axis, and we found a similar facilitation of vection by both BCV and GVS in each case. In a control experiment, we confirmed that air-conducted sound administered through headphones was not sufficient to reduce vection onset latency. Together the results suggest that noisy vestibular stimulation facilitates vection, likely due to an upweighting of visual information caused by a reduction in vestibular sensory reliability.


1998 ◽  
Vol 18 (7) ◽  
pp. 724-734 ◽  
Author(s):  
Randy L. Gollub ◽  
Hans C. Breiter ◽  
Howard Kantor ◽  
David Kennedy ◽  
David Gastfriend ◽  
...  

The authors used functional magnetic resonance imaging (fMRI) to determine whether acute intravenous (IV) cocaine use would change global cerebral blood flow (CBF) or visual stimulation-induced functional activation. They used flow-sensitive alternating inversion recovery (FAIR) scan sequences to measure CBF and blood oxygen level-dependent (BOLD) sensitive T2* scan sequences during visual stimulation to measure neuronal activation before and after cocaine and saline infusions. Cocaine (0.6 mg/kg IV over 30 seconds) increased heart rate and mean blood pressure and decreased end tidal carbon dioxide (CO2). All measures returned to baseline by 2 hours, the interinfusion interval, and were unchanged by saline. Flow-sensitive alternating inversion recovery imaging demonstrated that cortical gray matter CBF was unchanged after saline infusion (–2.4 ± 6.5%) but decreased (–14.1 ± 8.5%) after cocaine infusion (n = 8, P < 0.01). No decreases were detected in white matter, nor were changes found comparing BOLD signal intensity in cortical gray matter immediately before cocaine infusion with that measured 10 minutes after infusion. Visual stimulation resulted in comparable BOLD signal increases in visual cortex in all conditions (before and after cocaine and saline infusion). Despite a small (14%) but significant decrease in global cortical gray matter CBF after acute cocaine infusion, specific regional increases in BOLD imaging, mediated by neurons, can be measured reliably.


Author(s):  
Jorge Martín-Gutiérrez ◽  
Cristina Roca González ◽  
Melchor García Domínguez

This paper presents the results of a study designed to evaluate the effect of attending an intensive remedial course, based on desktop augmented reality exercises, on improving the spatial ability of freshman engineering students. Many of these students have problems managing visual information or creating mental models of objects represented by their orthographic projections. The study reports comparison tests of the spatial skills of engineering students from two Spanish universities before and after specific training to improve these abilities. The training was completed by 66 student participants, with a control group of 25 students from the two universities. Results show that students from both universities improved their spatial ability, and there is no statistically significant difference between students from the two universities, either before or after training, so we may conclude that the training's effect is analogous at both universities.


2017 ◽  
Vol 118 (3) ◽  
pp. 1542-1555 ◽  
Author(s):  
Bastian Schledde ◽  
F. Orlando Galashan ◽  
Magdalena Przybyla ◽  
Andreas K. Kreiter ◽  
Detlef Wegener

Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation on a cellular level is lacking. Therefore, we investigated neuronal activity in macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged already in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain. 
Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of motion-selective monkey area MT neurons irrespective of the attended motion direction but specific to the attended feature dimension.

