Responses of Anterior Superior Temporal Polysensory (STPa) Neurons to “Biological Motion” Stimuli

1994, Vol 6 (2), pp. 99-116. Author(s): M. W. Oram, D. I. Perrett

Cells have been found in the superior temporal polysensory area (STPa) of the macaque temporal cortex that are selectively responsive to the sight of particular whole-body movements (e.g., walking) under normal lighting. These cells typically discriminate the direction of walking and the view of the body (e.g., left profile walking left). We investigated the extent to which these cells remain responsive under “biological motion” conditions, where the form of the body is defined only by the movement of light patches attached to the points of limb articulation. One-third of the cells (25/72) selective for the form and motion of walking bodies showed sensitivity to the moving light displays. Seven of these cells showed only partial sensitivity to form from motion, insofar as they responded more to moving light displays than to moving controls but failed to discriminate body view; these seven cells did exhibit directional selectivity. Eighteen cells showed statistically reliable discrimination of both direction of movement and body view under biological motion conditions. Most of these cells showed reduced responses to the impoverished moving light stimuli compared with full lighting conditions. The 18 cells were thus sensitive to detailed form information (body view) carried by the pattern of articulating motion. Cellular processing of the global pattern of articulation was indicated by the observations that none of these cells was sensitive to the movement of individual limbs and that jumbling the pattern of moving limbs reduced response magnitude. A further 10 cells were tested for sensitivity to moving light displays of whole-body actions other than walking. Of these cells, 5/10 showed selectivity for form displayed by biological motion stimuli that paralleled their selectivity under normal lighting conditions. The cell responses thus provide direct evidence for neural mechanisms that compute form from nonrigid motion. The cells were selective for body view, specific direction, and specific type of body motion presented in moving light displays, a selectivity that is not predicted by many current computational approaches to the extraction of form from motion.
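To illustrate the class of stimulus involved (a figure defined only by lights placed at the joints), the following is a minimal sketch of building a point-light display from joint trajectories; the two-joint "walker" and its motion are placeholder assumptions, not the stimuli used in the study.

```python
import numpy as np

def point_light_frames(joint_trajectories):
    """Reduce joint trajectories (dict: joint name -> (T, 2) array of x, y positions)
    to a bare point-light display: per frame, only the dot coordinates remain."""
    names = sorted(joint_trajectories)
    n_frames = len(next(iter(joint_trajectories.values())))
    return np.stack([[joint_trajectories[name][t] for name in names]
                     for t in range(n_frames)])

# Placeholder "walker" with two joints: a hip and an ankle swinging beneath it.
t = np.linspace(0, 2 * np.pi, 60)
hip = np.c_[0.0 * t, np.ones_like(t)]
ankle = hip + np.c_[0.4 * np.sin(t), -np.ones_like(t)]
frames = point_light_frames({"hip": hip, "ankle": ankle})
print(frames.shape)  # (60, 2, 2): 60 frames, 2 dots, x and y per dot
```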

2017, Vol 118 (4), pp. 2499-2506. Author(s): A. Pomante, L. P. J. Selen, W. P. Medendorp

The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical, as a proxy for the tilt percept, during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, and its orientation had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their damping effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion.
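As a quick consistency check on these stimulus parameters, the peak acceleration of a sinusoid follows directly from its amplitude and frequency. The sketch below assumes the reported 80 cm displacement is peak-to-peak (so the amplitude is 0.40 m); that assumption is mine, not stated in the abstract.

```python
import math

# Sinusoidal translation x(t) = A * sin(2*pi*f*t); peak acceleration = A * (2*pi*f)^2.
f = 0.33      # motion frequency in Hz (reported in the abstract)
A = 0.80 / 2  # amplitude in m, assuming the 80 cm displacement is peak-to-peak
peak_acc = A * (2 * math.pi * f) ** 2
print(f"peak acceleration = {peak_acc:.2f} m/s^2")  # ~1.72 m/s^2, close to the reported 1.75 m/s^2
```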


2019, Vol 121 (6), pp. 2392-2400. Author(s): Romy S. Bakker, Luc P. J. Selen, W. Pieter Medendorp

In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute to these cost calculations in their head-based sensory frame or in a transformed, body-centered reference frame. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were longer for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation did not interact with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed. NEW & NOTEWORTHY The brain takes inertial acceleration into account in computing the anticipated biomechanical costs that guide hand selection during whole-body motion. Whereas these costs are defined in a body-centered, muscle-based reference frame, the otoliths detect the inertial acceleration in head-centered coordinates. By systematically manipulating head position relative to the body, we show that the brain transforms otolith signals into body-centered coordinates at an early stage of reach planning, i.e., before the decision of hand choice is computed.
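The coordinate transformation at issue can be pictured as a single rotation of the sensed acceleration by the head-on-body angle. Below is a minimal sketch under assumed conventions (2D horizontal plane, positive angles for leftward head rotation); the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def head_to_body(acc_head, head_on_body_deg):
    """Rotate a head-centered horizontal-plane acceleration (interaural, naso-occipital)
    into body-centered axes, given the head-on-body yaw angle in degrees."""
    theta = np.deg2rad(head_on_body_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return rotation @ np.asarray(acc_head, dtype=float)

# Example: an interaural acceleration of 1.75 m/s^2 sensed with the head turned 18 degrees.
print(head_to_body([1.75, 0.0], head_on_body_deg=18.0))
```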


2016, Vol 113 (17), pp. E2450-E2459. Author(s): Ivo D. Popivanov, Philippe G. Schyns, Rufin Vogels

Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory, but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category).
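For readers unfamiliar with the method, here is a minimal sketch of the Bubbles logic: the stimulus is revealed only through randomly placed Gaussian apertures, and over many trials the per-pixel mask values are related to the response to estimate which fragments drive it. The parameters and the black (rather than gray) background are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=8.0, rng=None):
    """Build a Bubbles mask: a sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Reveal an image through the mask; across many trials, correlating each pixel's mask
# value with the neuron's response highlights the diagnostic image fragments.
image = np.random.rand(128, 128)  # placeholder stimulus
trial_stimulus = image * bubbles_mask(image.shape)
```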


Sensors, 2021, Vol 21 (24), pp. 8357. Author(s): Akito Tohma, Maho Nishikawa, Takuya Hashimoto, Yoichi Yamazaki, Guanghao Sun

Camera-based remote photoplethysmography (rPPG) is a low-cost, convenient, non-contact heart rate measurement method suitable for telemedicine. Although heart rate variability (HRV) is an important indicator for healthcare monitoring, several factors affect the accuracy of heart rate and HRV measurements obtained with rPPG. This study aimed to identify an appropriate setup for precise HRV measurement using rPPG, considering the effects of possible factors including illumination, direction of the light, frame rate of the camera, and body motion. In the lighting-conditions experiment, the smallest mean absolute R–R interval (RRI) error was obtained when light greater than 500 lux was cast from the front (the conditions tested were illuminances of 100, 300, 500, and 700 lux and light directions of front, top, and front plus top). In addition, the RRI and HRV were measured with sufficient accuracy at frame rates above 30 fps. The accuracy of the HRV measurement was greatly reduced when body motion was not constrained; thus, it is necessary to limit body motion, especially head motion, in an actual telemedicine situation. The results of this study can serve as guidelines for setting up the recording environment and camera settings for rPPG use in telemedicine.
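To make the quantity being measured concrete, here is a minimal sketch of extracting R–R intervals and one common HRV index (RMSSD) from a pulse waveform. The sampling rate, peak-detection parameters, and synthetic signal are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def rri_and_rmssd(ppg, fs):
    """Extract peak-to-peak (R-R) intervals from a PPG/rPPG waveform and compute RMSSD."""
    # Require peaks to be at least 0.4 s apart (illustrative refractory period).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    rri = np.diff(peaks) / fs * 1000.0           # intervals in milliseconds
    rmssd = np.sqrt(np.mean(np.diff(rri) ** 2))  # HRV index: RMS of successive differences
    return rri, rmssd

# Example with a synthetic, perfectly regular 60-beats-per-minute pulse sampled at 30 fps.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.0 * t)
print(rri_and_rmssd(ppg, fs)[1])  # close to 0 ms for a perfectly regular signal
```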


2021. Author(s): Omid A Zobeiri, Kathleen E Cullen

The ability to accurately control our posture and perceive spatial orientation during self-motion requires knowledge of the motion of both the head and the body. However, whereas the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the integration of vestibular and neck proprioceptive inputs is necessary to transform vestibular information into the body-centric reference frame required for postural control. The anterior vermis of the cerebellum is thought to play a key role in this transformation, yet how its Purkinje cells integrate these inputs, and what information they dynamically encode during self-motion, remains unknown. Here we recorded the activity of individual anterior vermis Purkinje cells in alert monkeys during passively applied whole-body, body-under-head, and head-on-body rotations. Most neurons dynamically encoded an intermediate representation of self-motion between head and body motion. Notably, these neurons responded to both vestibular and neck proprioceptive stimulation and showed considerable heterogeneity in their response dynamics. Furthermore, their vestibular responses were tuned to changes in head-on-body position. In contrast, a small remaining percentage of neurons, sensitive only to vestibular stimulation, unambiguously encoded head-in-space motion across conditions. Using a simple population model, we establish that combining the responses of 40 Purkinje cells can explain the responses of their target neurons in the deep cerebellar nuclei across all self-motion conditions. We propose that the observed heterogeneity in Purkinje cells underlies the cerebellum's capacity to compute the dynamic representation of body motion required to ensure accurate postural control and perceptual stability in our daily lives.
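One plausible form of such a population readout is a linear weighted sum of Purkinje cell responses fit to the target neuron's response. The sketch below illustrates that idea with simulated data; it is not the authors' model, and all quantities are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firing-rate responses of 40 Purkinje cells pooled across time and conditions
# (rows: samples, columns: cells).
n_samples, n_cells = 500, 40
purkinje = rng.standard_normal((n_samples, n_cells))

# A hypothetical deep-cerebellar-nucleus target response to be explained by the population.
true_weights = rng.standard_normal(n_cells)
target = purkinje @ true_weights + 0.1 * rng.standard_normal(n_samples)

# Fit readout weights by least squares and check how much variance they explain.
weights, *_ = np.linalg.lstsq(purkinje, target, rcond=None)
prediction = purkinje @ weights
r2 = 1 - np.var(target - prediction) / np.var(target)
print(f"variance explained: {r2:.2f}")
```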


Author(s): Jian Wan, Nanxin Wang, Robert Pakko

A common use of captured human body motion in automotive applications is creating swept volumes of body surfaces based on the trajectories of the motion. Recent developments in depth sensors enable fast and natural motion capture without attaching markers to subjects' bodies. Microsoft Kinect is one such widely used depth sensor: it can track whole-body motion and output a skeleton model. A new method is developed to create swept volumes from the motion captured by Kinect using an open-source graphics system. The skeleton motion is recorded in a file format that is flexible enough to retain the skeleton's structure and is accepted by various graphics systems. The motion is then bound to a surface manikin model in the graphics system, where the swept volumes are generated. This method is more flexible and portable than using a commercial digital manikin, and it potentially provides more accurate results.
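A minimal sketch of the swept-volume idea: pose the manikin surface points with each captured frame's transform, pool the posed points, and take their convex hull as a coarse swept volume. The point set and transforms here are placeholders, and the actual method may use a finer volume representation than a convex hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def sweep_points(surface_points, frame_transforms):
    """Pose the surface points with each frame's 4x4 transform and pool the results."""
    homog = np.c_[surface_points, np.ones(len(surface_points))]
    swept = [(T @ homog.T).T[:, :3] for T in frame_transforms]
    return np.vstack(swept)

# Placeholder data: a small point set for one body segment and two skeleton frames.
points = np.random.rand(200, 3)
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.3, 0.0, 0.1]  # the segment translated in the second frame
cloud = sweep_points(points, [T0, T1])
hull = ConvexHull(cloud)                      # coarse swept volume of the motion
print(f"swept volume = {hull.volume:.3f} m^3")
```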


2019. Author(s): Meghan E. Huber, Enrico Chiovetto, Martin Giese, Dagmar Sternad

Maintaining balance while walking on a narrow beam is a challenging motor task, presumably because the foot's ability to exert torque on the support surface is limited by the beam width. Still, the feet serve as a critical interface between the body and the external environment, and it is unclear how the mechanical properties of the feet affect balance. Here we examined how restricting the degrees of freedom of the feet influenced balance behavior during beam walking. We recorded whole-body joint kinematics of subjects with varying skill levels as they walked on a narrow beam with and without flat, rigid soles attached to their feet. We computed changes in whole-body motion and angular momentum across these conditions. Results showed that wearing rigid soles improved balance in the beam-walking task, but that practice with rigid soles did not affect, or transfer to, task performance with bare feet. The absence of any after-effect suggests that the improved balance from constraining the foot was the result of a mechanical effect rather than a change in neural strategy. Although rigid soles can be used to assist balance, they appear to offer limited training or rehabilitation benefits.
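For reference, whole-body angular momentum is typically computed from segment kinematics as the sum of each segment's transfer and local terms, L = sum_i [ m_i (r_i - r_com) x (v_i - v_com) + I_i w_i ]. The sketch below is a generic version of that computation; the segment parameters are placeholder inputs one would obtain from a body model, not the study's data.

```python
import numpy as np

def whole_body_angular_momentum(masses, seg_pos, seg_vel, seg_inertia, seg_omega):
    """Angular momentum about the whole-body center of mass:
    L = sum_i [ m_i (r_i - r_com) x (v_i - v_com) + I_i @ w_i ]."""
    masses = np.asarray(masses, dtype=float)
    r_com = np.average(seg_pos, axis=0, weights=masses)
    v_com = np.average(seg_vel, axis=0, weights=masses)
    L = np.zeros(3)
    for m, r, v, I, w in zip(masses, seg_pos, seg_vel, seg_inertia, seg_omega):
        L += m * np.cross(r - r_com, v - v_com) + I @ w
    return L

# Tiny two-segment example with placeholder values (not from the study):
m = [1.0, 2.0]
r = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.5]])
v = np.array([[0.2, 0.0, 0.0], [-0.1, 0.0, 0.0]])
I = [np.eye(3) * 0.01, np.eye(3) * 0.05]
w = np.array([[0.0, 1.0, 0.0], [0.0, -0.5, 0.0]])
print(whole_body_angular_momentum(m, r, v, I, w))
```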


2003, Vol 15 (7), pp. 991-1001. Author(s): Michael S. Beauchamp, Kathryn E. Lee, James V. Haxby, Alex Martin

We used fMRI to study the organization of brain responses to different types of complex visual motion. In a rapid event-related design, subjects viewed video clips of humans performing different whole-body motions, video clips of man-made manipulable objects (tools) moving with their characteristic natural motion, point-light displays of human whole-body motion, and point-light displays of manipulable objects. The lateral temporal cortex showed strong responses to both moving videos and moving point-light displays, supporting the hypothesis that the lateral temporal cortex is the cortical locus for processing complex visual motion. Within the lateral temporal cortex, we observed segregated responses to different types of motion. The superior temporal sulcus (STS) responded strongly to human videos and human point-light displays, while the middle temporal gyrus (MTG) and the inferior temporal sulcus responded strongly to tool videos and tool point-light displays. In the ventral temporal cortex, the lateral fusiform responded more to human videos than to any other stimulus category, while the medial fusiform preferred tool videos. The relatively weak responses to point-light displays in the ventral temporal cortex suggest that form, color, and texture (present in video but not point-light displays) are the main contributors to ventral temporal activity. In contrast, in the lateral temporal cortex, the MTG responded as strongly to point-light displays as to videos, suggesting that motion is the key determinant of response in the MTG. Whereas the STS responded strongly to point-light displays, it showed an even larger response to video displays, suggesting that the STS integrates form, color, and motion information.


2017, Vol 117 (6), pp. 2250-2261. Author(s): Romy S. Bakker, Roel H. A. Weijer, Robert J. van Beers, Luc P. J. Selen, W. Pieter Medendorp

In everyday life, we frequently have to decide which hand to use for a certain action. It has been suggested that for this decision the brain calculates expected costs based on action values, such as expected biomechanical costs, expected success rate, handedness, and skillfulness. Although these conclusions were based on experiments in stationary subjects, we often act while the body is in motion. We investigated how hand choice is affected by passive body motion, which directly affects the biomechanical costs of the arm movement due to its inertia. With the use of a linear motion platform, 12 right-handed subjects were sinusoidally translated (0.625 and 0.5 Hz). At 8 possible motion phases, they had to reach, using either their left or right hand, to a target presented at 1 of 11 possible locations. We predicted hand choice by calculating the expected biomechanical costs under different assumptions about the acceleration entering these computations: the forthcoming acceleration during the reach, the instantaneous acceleration at target onset, or zero acceleration as if the body were stationary. Although hand choice was generally biased toward the dominant hand, it also modulated sinusoidally with the motion, with the amplitude of the bias depending on the motion's peak acceleration. The phase of the hand-choice modulation was consistent with the cost model that used the instantaneous acceleration signal at target onset. This suggests that the brain relies on bottom-up acceleration signals, and not on predictions about future accelerations, when deciding on hand choice during passive whole-body motion. NEW & NOTEWORTHY Decisions of hand choice are a fundamental aspect of human behavior. Whereas these decisions are typically studied in stationary subjects, this study examines hand choice while subjects are in motion. We show that accelerations of the body, which differentially modulate the biomechanical costs of left- and right-hand movements, are also taken into account when deciding which hand to use for a reach, possibly based on bottom-up processing of the otolith signal.
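To illustrate how the three acceleration assumptions can lead to different predicted choices, here is a toy version of the cost comparison: a quadratic effort cost per hand that includes the body's acceleration, with the cheaper hand selected. The cost function, masses, distances, and acceleration values are all illustrative assumptions, not the study's model.

```python
import numpy as np

def expected_cost(target_x, hand_x, body_acc):
    """Toy biomechanical cost: squared force needed to move an arm of mass m to a
    lateral target within a fixed time while the body accelerates by body_acc."""
    m, duration = 2.0, 0.5              # illustrative arm mass (kg) and reach time (s)
    reach_acc = 2 * (target_x - hand_x) / duration ** 2
    force = m * (reach_acc + body_acc)  # sign convention is illustrative
    return force ** 2

def choose_hand(target_x, body_acc, left_x=-0.2, right_x=0.2):
    """Pick the hand with the lower expected cost under a given acceleration assumption."""
    costs = {"left": expected_cost(target_x, left_x, body_acc),
             "right": expected_cost(target_x, right_x, body_acc)}
    return min(costs, key=costs.get)

# The three acceleration assumptions contrasted in the study, for one target and one
# motion phase (placeholder numbers; the point is that the predictions can differ):
for label, acc in [("forthcoming", -1.5), ("instantaneous", 1.5), ("zero", 0.0)]:
    print(label, choose_hand(target_x=0.05, body_acc=acc))
```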


Author(s): Michelle Tong, Priyanka Mensinkai

The study examines the visual processes underlying the detection of the motion of land animals, or biological motion. The ability to process the motion of other living beings has profound ecological implications, both in the wilderness and in our everyday life. Earlier models suggest that there are two distinct ways to process this information: one uses the shape of the entire figure, and the other uses the motion of a single part of the body. In this experiment, we aim to study whether the local motion of the feet or the configuration of the body is used to determine the direction in which a figure is facing. We do this by training pigeons to discriminate the facing direction of a point-light figure walking in place. Pigeons chose one of two walkers by pecking on a touch screen. Once the task was learned, catch trials with backward walkers were introduced. This kind of display gives the pigeon conflicting information about direction: while the shape of the walker indicates it is walking one way, the feet give the impression that it is moving the other way. Pigeons successfully learned to discriminate direction, and when the catch trials were introduced, most birds used the local motion cue of the feet to determine direction. The results indicate that pigeons seem to be using the feet, rather than the shape of the figure, to process direction of movement. In conjunction with previous literature, this study suggests that there exists an innate "life detector" specialized for filtering the movement of the feet.

