Potters Make Shorter Pots Under Conditions of Reduced Sensory Input

Perception ◽  
2018 ◽  
Vol 47 (8) ◽  
pp. 860-872 ◽
Author(s):  
Mounia Ziat ◽  
Min Park ◽  
Brian Kakas ◽  
David A. Rosenbaum

Although people have made clay pots for millennia, little behavioral research has explored how they do so. We were specifically interested in potters’ use of auditory, haptic, and visual feedback. We asked what would happen if one or two of these sources of feedback were removed while potters tried to create pots of a given height, stopping when they thought they had reached that height. We asked students in a pottery class to build simple clay vessels either with full sensory feedback (the control condition for all participants) or with reduced input from one modality (Experiment 1) or two modalities (Experiment 2). Participants were asked to stop building when they thought the vessels were 5 in. high. We found that participants produced shorter vessels when one or more forms of sensory feedback were reduced. The degree of shortening did not depend on the type or number of reduced sensory channels. The results are consistent with a control hypothesis according to which potters have learned to use sensory feedback from these modalities to control their ceramic creations. The results highlight the intimate connection between perception and action.

2016 ◽  
Vol 12 (6) ◽  
pp. 20160196 ◽  
Author(s):  
S. M. Cox ◽  
Gary B. Gillis

Coordinated landing requires anticipating the timing and magnitude of impact, which in turn requires sensory input. To better understand how cane toads, well known for coordinated landing, prioritize visual versus vestibular feedback during hopping, we recorded forelimb joint angle patterns and electromyographic data from five animals hopping under two conditions that were designed to force animals to land with one forelimb well before the other. In one condition, landing asymmetry was due to mid-air rolling, created by an unstable takeoff surface. In this condition, visual, vestibular and proprioceptive information could be used to predict asymmetric landing. In the other, animals took off normally, but landed asymmetrically because of a sloped landing surface. In this condition, sensory feedback provided conflicting information, and only visual feedback could appropriately predict the asymmetrical landing. During the roll treatment, when all sensory feedback could be used to predict an asymmetrical landing, pre-landing forelimb muscle activity and movement began earlier in the limb that landed first. However, no such asymmetries in forelimb preparation were apparent during hops onto sloped landings when only visual information could be used to predict landing asymmetry. These data suggest that toads prioritize vestibular or proprioceptive information over visual feedback to coordinate landing.


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
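
The forward-model-plus-Kalman-filter machinery can be made concrete in a few lines. The sketch below is a minimal scalar illustration of the idea, assuming invented linear dynamics and noise values; it is not Grush's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

A, B = 1.0, 0.1          # plant dynamics x' = A*x + B*u (invented values)
Q, R = 0.01, 0.25        # process and sensory noise variances (invented)

x_true = 0.0             # actual state of the body/environment
x_hat, P = 0.0, 1.0      # emulator's estimate and its uncertainty

for t in range(50):
    u = 1.0              # motor command; its efference copy drives the emulator

    # The body and environment evolve, with process noise.
    x_true = A * x_true + B * u + rng.normal(0.0, np.sqrt(Q))

    # Prediction: the emulator runs in parallel, driven by the efference
    # copy, yielding an expectation of the sensory feedback.
    x_hat = A * x_hat + B * u
    P = A * P * A + Q

    # Noisy sensory feedback arrives; the Kalman gain weights it against
    # the prediction, enhancing the processed sensory information.
    z = x_true + rng.normal(0.0, np.sqrt(R))
    K = P / (P + R)
    x_hat = x_hat + K * (z - x_hat)
    P = (1.0 - K) * P

print(f"true state {x_true:.2f}, emulator estimate {x_hat:.2f}")
```

Running only the prediction step, with the measurement update switched off, corresponds to the off-line mode the abstract identifies with imagery and the evaluation of motor plans.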


1975 ◽  
Vol 63 (1) ◽  
pp. 17-32 ◽
Author(s):  
P. J. Snow

1. The effects of altering sensory input on the motoneuronal activity underlying antennular flicking have been tested. 2. Removal of the short segments of the outer flagellum results in a reduction of the number of spikes/burst in the fast flexor motoneurones A31F and A32F. 3. During a flick the delay between the burst in motoneurone A31F and the burst in motoneurone A32F is insensitive to alteration of sensory input. 4. Sensory feedback from the flexion phase of a flick is necessary for the activation of either extensor motoneurone. Evidence is presented to suggest that this feedback is primarily from joint-movement receptors at the MS-DS and DS-OF joints. 5. The results are incorporated into a model in which the patterns of flexor activity result from some specified properties of three components: a trigger system, a follower system, and the spike initiating zone of the flexor motoneurones. The trigger system determines when a flick will occur. The follower system determines the number of flexor spikes during a flick. Properties of the spike initiating zone determine the spike frequency and the timing between bursts in the flexor motoneurones. Extensor activity in the model is reflexively elicited by feedback from phasic, unidirectional receptors sensitive to joint flexion. 6. The functional significance of reflex control of extensor activity is discussed in relation to the form and proposed function of antennular flicking. It is suggested that this form of control is adapted to the function of antennular flicking because flexion at the MS-DS joint is not always necessary for the fulfilment of the function of a flick.
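
The three-component division of labor in point 5 lends itself to a toy simulation. The sketch below is an illustrative reading, not Snow's formal model; the excitability, spike counts, and delay are invented placeholders.

```python
import random

def trigger_fires(excitability=0.3):
    """Trigger system: determines when a flick occurs."""
    return random.random() < excitability

def follower_spike_count(outer_flagellum_intact=True):
    """Follower system: sets the number of flexor spikes per burst;
    removing the outer flagellum's short segments reduces it (point 2)."""
    return 5 if outer_flagellum_intact else 3

def flick(outer_flagellum_intact=True):
    if not trigger_fires():
        return None  # no flick this cycle
    n = follower_spike_count(outer_flagellum_intact)
    return {
        "A31F_spikes": n,
        "A32F_spikes": n,
        "A31F_to_A32F_delay_ms": 4,  # fixed by the spike-initiating zone (point 3)
        # Extensor activity is purely a reflex to flexion feedback from
        # the MS-DS and DS-OF joint receptors (point 4):
        "extensor_reflex": True,
    }

random.seed(1)
print([flick() for _ in range(4)])
```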


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but are based on allocentric (object-centered) visual information; examples include gesture imitation, drawing, and copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed task than in the allocentric task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results cast doubt on the idea that computational and neural models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior; such models will have to be modified to accommodate allocentric performance.
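
The "statistical optimality" framework the abstract invokes is usually formalized as inverse-variance (minimum-variance) weighting of the available feedback sources. A minimal sketch of that rule, with illustrative numbers:

```python
def combine(x_vis, var_vis, x_prop, var_prop):
    """Inverse-variance-weighted estimate of hand position from
    visual and proprioceptive feedback."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_prop
    var_hat = 1 / (1 / var_vis + 1 / var_prop)
    return x_hat, var_hat

# Visual feedback is typically the more reliable channel (smaller
# variance), so it dominates the combined estimate.
print(combine(x_vis=10.0, var_vis=1.0, x_prop=12.0, var_prop=4.0))
# -> (10.4, 0.8): closer to vision, and less variable than either alone
```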


2016 ◽  
Vol 115 (4) ◽  
pp. 2237-2245 ◽  
Author(s):  
Hannah M. Krüger ◽  
Thérèse Collins ◽  
Bernhard Englitz ◽  
Patrick Cavanagh

Orienting our eyes to a light, a sound, or a touch occurs effortlessly, despite the fact that sound and touch must be converted from head- and body-based coordinates to eye-based coordinates to do so. We asked whether this oculomotor representation is also used to localize sounds even when there is no saccade to the sound source. To address this, we examined whether saccades introduced similar localization errors for visual and auditory stimuli. Sixteen subjects indicated the direction of a visual or auditory apparent motion seen or heard between two targets presented either during fixation or straddling a saccade. Compared with the fixation baseline, saccades introduced errors in direction judgments for both visual and auditory stimuli: in both cases, apparent-motion judgments were biased in the direction of the saccade. These saccade-induced effects across modalities raise the possibility of shared, cross-modal location coding for perception and action.
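
In one dimension, the head-to-eye conversion the first sentence mentions reduces to subtracting eye position, and an uncompensated change in eye position between encoding and readout then shifts the recovered location in the direction of the saccade. A minimal sketch under those simplifying assumptions (the numbers are illustrative):

```python
def to_eye_coords(azimuth_head_deg, eye_pos_deg):
    """Eye-centered direction = head-centered direction - eye position."""
    return azimuth_head_deg - eye_pos_deg

def to_head_coords(azimuth_eye_deg, eye_pos_deg):
    """Inverse mapping back to head-centered coordinates."""
    return azimuth_eye_deg + eye_pos_deg

sound_head = 10.0   # degrees right of the head midline
eye_before = 0.0    # fixation when the first target is heard
eye_after = 8.0     # eye position after an 8-degree rightward saccade

# Encode before the saccade, read out afterward with no compensation:
eye_code = to_eye_coords(sound_head, eye_before)  # 10.0
percept = to_head_coords(eye_code, eye_after)     # 18.0: mislocalized
print(percept - sound_head)                       # 8.0, the saccade size
```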


1968 ◽  
Vol 26 (3) ◽  
pp. 731-743 ◽  
Author(s):  
Raymond S. Karlovich ◽  
James T. Graham

Twenty young adult female Ss tapped on a tapping key to low, mid, and high sensation-level pure-tone auditory pacing stimuli while being exposed to synchronous visual-feedback, delayed visual-feedback, and decreased sensory-feedback conditions. In a preliminary cross-modality matching study, the stroboscopic visual-feedback stimulus was judged to be as bright as the mid-sensation-level auditory stimulus was loud. The dependent variables were tapping error, temporal deviation of the taps from the onset of the pacing stimuli, and tap duration. Few tapping errors occurred under any of the conditions, indicating that the auditory modality is effective in regulating motor performance even when temporally distorted visual feedback accompanies the performance. The tapping-deviation data strongly suggested that the relative perceptual magnitudes of the auditory pacing stimuli and the delayed visual-feedback stimulus are important factors in determining the speed of motor response. Tap durations were greater during decreased sensory-feedback and delayed visual-feedback conditions than during synchronous visual-feedback conditions; it was speculated that these changes reflected an increase in tactual and kinesthetic feedback employed by Ss to counterbalance the distorted and reduced sensory feedback.


1975 ◽  
Vol 19 (2) ◽  
pp. 162-165 ◽  
Author(s):  
Jack A. Adams ◽  
Daniel Gopher ◽  
Gavan Lintern

A self-paced linear positioning task was used to study the effects of visual and proprioceptive feedback on learning and performance. Subjects were trained with knowledge of results (KR) and tested without it. This paper discusses the analysis of the absolute error scores from the no-KR trials. Visual feedback was the more effective source of sensory feedback, but proprioceptive feedback was also effective. The observation that the response did not become independent of sensory feedback as a result of learning was interpreted as supporting Adams' closed-loop theory of motor learning in preference to the motor-program hypothesis. Other data showed that the presence of visual feedback during learning could inhibit the later effectiveness of proprioceptive feedback.


Perception ◽  
2005 ◽  
Vol 34 (9) ◽  
pp. 1153-1155 ◽  
Author(s):  
Eric Lewin Altschuler

I have noticed a striking effect that vision can have on movement: when a person makes circular motions with both hands, clockwise with the left hand and counterclockwise with the right, while watching the reflection of one hand in a parasagittally placed mirror, a vertical excursion of one arm tends to be matched by the other arm; this does not typically happen when the excursing arm is viewed in plain vision. This observation may help in understanding how visual feedback via a mirror can aid the rehabilitation of some patients with movement deficits secondary to certain neurologic conditions, and it illustrates that the traditional division of neural processes into sensory input and motor output is somewhat arbitrary.


2021 ◽  
Author(s):  
Julian R. Day-Cooney ◽  
Jackson J. Cone ◽  
John H.R. Maunsell

During visually guided behaviors, mere hundreds of milliseconds can elapse between a sensory input and its associated behavioral response. How spikes occurring at different times are integrated to drive perception and action remains poorly understood. We delivered random trains of optogenetic stimulation (white noise) to excite inhibitory interneurons in V1 of mice while they performed a visual detection task. We then performed a reverse-correlation analysis on the optogenetic stimuli to generate a neuronal-behavioral kernel: an unbiased, temporally precise estimate of how suppression of V1 spiking at different moments around the onset of a visual stimulus affects detection of that stimulus. Electrophysiological recordings enabled us to capture the effects of optogenetic stimuli on V1 responsivity and revealed that the earliest stimulus-evoked spikes are preferentially weighted in guiding behavior. These data demonstrate that white-noise optogenetic stimulation is a powerful tool for understanding how patterns of spiking in neuronal populations are decoded to generate perception and action.
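
The reverse-correlation step can be sketched in a few lines: average the white-noise optogenetic stimulus separately on missed and detected trials and take the difference. The trial counts, bin size, and synthetic behavioral model below are assumptions for illustration; the abstract only summarizes the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 40   # time bins around visual stimulus onset

# Random binary optogenetic pulse trains: the "white noise".
opto = rng.integers(0, 2, size=(n_trials, n_bins)).astype(float)

# Synthetic behavior: extra V1 suppression early in the stimulus-evoked
# response (bins 20-25 here) lowers detection probability.
weights = np.zeros(n_bins)
weights[20:26] = -0.5
p_detect = 1.0 / (1.0 + np.exp(-(1.0 + opto @ weights)))
detected = rng.random(n_trials) < p_detect

# Neuronal-behavioral kernel: mean pulse train on miss trials minus mean
# on hit trials, i.e., when did suppression most hurt detection?
kernel = opto[~detected].mean(axis=0) - opto[detected].mean(axis=0)
print(np.round(kernel, 3))    # peaks in the bins where suppression mattered
```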

