Effect of Viewing Distance on Object Responses in Macaque Areas 45B, F5a and F5p

Author(s):  
Irene Caprara ◽  
Peter Janssen

Abstract To perform tasks like grasping, the brain has to process visual object information so that the grip aperture can be adjusted before touching the object. Previous studies have demonstrated that the posterior subsector of the Anterior Intraparietal area (pAIP) is connected to area 45B, and its anterior counterpart (aAIP) to F5a. However, the role of areas 45B and F5a in visually-guided grasping is poorly understood. Here, we investigated the role of areas 45B, F5a and F5p in object processing during visually-guided grasping in two monkeys. If the presentation of an object activates a motor command related to the preshaping of the hand, as in F5p, such neurons should prefer objects presented within reachable distance. Conversely, neurons encoding a purely visual representation of an object – possibly in areas 45B and F5a – should be less affected by viewing distance. Contrary to our expectations, we found that most neurons in area 45B were object- and viewing distance-selective (mostly Near-preferring). Area F5a showed much weaker object selectivity compared to 45B, with a similar preference for objects presented at the Near position. Finally, F5p neurons were less object selective and frequently Far-preferring. In sum, area 45B – but not F5p – prefers objects presented in peripersonal space.

2020 ◽  
Author(s):  
I Caprara ◽  
P Janssen

Abstract To perform real-world tasks like grasping, the primate brain has to process visual object information so that the grip aperture can be adjusted before contact with the object is made. Previous studies have demonstrated that the posterior subsector of the Anterior Intraparietal area (pAIP) is connected to frontal area 45B, and the anterior subsector of AIP (aAIP) to F5a (Premereur et al., 2015). However, the role of areas 45B and F5a in visually-guided object grasping is poorly understood. Here, we investigated the role of areas 45B, F5a and F5p in visually-guided grasping. If a neuronal response to an object during passive fixation represents the activation of a motor command related to the preshaping of the hand, such neurons should prefer objects presented within reachable distance. Conversely, neurons encoding a pure visual representation of an object should be less affected by viewing distance. Contrary to our expectations, we found that the majority of neurons in area 45B were object- and viewing distance-selective, with a clear preference for the near viewing distance. Area F5a showed much weaker object selectivity compared to 45B, with a similar preference for objects presented at the Near position emerging mainly in the late epoch. Finally, F5p neurons were less object selective and frequently preferred objects presented at the Far position. Therefore, neurons in area 45B – but not F5p neurons – prefer objects presented in peripersonal space.

Significance statement The current experiment provides the first evidence on the neural representation of distance in frontal areas that are active during visually-guided grasping. Area 45B and F5a neurons were object- and distance-selective, and preferred the near viewing distance even for objects with identical retinal size. In area F5p we observed strong visual responses with an unexpected preference for the Far viewing distance, suggesting that the motor-related object representation was still active during the presentation of objects outside reaching distance.
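The selectivity claims above rest on classifying each neuron's spike counts by object identity and viewing distance. As a rough illustration only (the authors' actual analysis pipeline is not described here), the sketch below shows how such a two-factor classification is commonly done with a two-way ANOVA on spike counts; all data and thresholds are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): classify a neuron as
# object- and/or distance-selective with a two-way ANOVA on spike counts.
# The trial table is simulated; real data would replace it.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Simulated trials: 5 objects x 2 viewing distances (Near/Far), 20 trials each.
objects = [f"obj{i}" for i in range(5)]
rows = []
for obj_idx, obj in enumerate(objects):
    for dist in ["Near", "Far"]:
        base_rate = 10 + 2 * obj_idx + (5 if dist == "Near" else 0)  # Near-preferring cell
        for count in rng.poisson(base_rate, size=20):
            rows.append({"spikes": count, "object": obj, "distance": dist})
trials = pd.DataFrame(rows)

# Two-way ANOVA: main effects of object and distance, plus their interaction.
model = smf.ols("spikes ~ C(object) * C(distance)", data=trials).fit()
table = anova_lm(model, typ=2)
print(table)

object_selective = table.loc["C(object)", "PR(>F)"] < 0.05
distance_selective = table.loc["C(distance)", "PR(>F)"] < 0.05
print("object-selective:", object_selective, "| distance-selective:", distance_selective)
```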


Author(s):  
Steven C. Chamberlain

The lateral eye of the horseshoe crab, Limulus polyphemus, is an important model system for studies of visual processes such as phototransduction, lateral inhibition, and light adaptation. It has also been the system of choice for pioneering studies of the role of circadian efferent input from the brain to the eye. For example, light and efferent input interact in controlling the daily shedding of photosensitive membrane and photomechanical movements. Most recently, modeling efforts have begun to relate anatomy, physiology and visually guided behavior using parallel computing. My laboratory has pursued collaborative morphological studies of the compound eye for the past 15 years. Some of this work has consisted of correlated structure/function studies; the rest has addressed basic morphology and morphological processes.


2005 ◽  
Vol 58 (3-4b) ◽  
pp. 361-377 ◽  
Author(s):  
Peter Bright ◽  
Helen E. Moss ◽  
Emmanuel A. Stamatakis ◽  
Lorraine K. Tyler

How objects are represented and processed in the brain remains a key issue in cognitive neuroscience. We have developed a conceptual structure account in which category-specific semantic deficits emerge due to differences in the structure and content of concepts rather than from explicit divisions of conceptual knowledge in separate stores. The primary claim is that concepts associated with particular categories (e.g., animals, tools) differ in the number and type of properties and the extent to which these properties are correlated with each other. In this review, we describe recent neuropsychological and neuroimaging studies in which we have extended our theoretical account by incorporating recent claims about the neuroanatomical basis of feature integration and differentiation that arise from research into hierarchical object processing streams in nonhuman primates and humans. A clear picture has emerged in which the human perirhinal cortex and neighbouring anteromedial temporal structures appear to provide the neural infrastructure for making fine-grained discriminations among objects, suggesting that damage within the perirhinal cortex may underlie the emergence of category-specific semantic deficits in brain-damaged patients.
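The conceptual structure account hinges on quantifiable properties of concepts: how many features they have and how strongly those features correlate across category members. The sketch below is a minimal, hypothetical illustration of how such statistics can be computed from a concept-by-feature matrix; it is not the authors' analysis of real property norms.

```python
# Minimal sketch (hypothetical data, not real property norms): compare mean
# feature counts and mean feature-feature correlation for two concept categories.
import numpy as np

rng = np.random.default_rng(1)

def feature_stats(concept_by_feature):
    """Mean number of features per concept and mean absolute pairwise
    correlation between features across concepts (shared structure)."""
    mean_n_features = concept_by_feature.sum(axis=1).mean()
    corr = np.corrcoef(concept_by_feature, rowvar=False)  # features as variables
    upper = corr[np.triu_indices_from(corr, k=1)]
    return mean_n_features, np.nanmean(np.abs(upper))

# "Animals": features driven by a shared concept-level latent, so they are
# numerous and correlated; "tools": fewer, independent (distinctive) features.
typicality = rng.random(30)  # hypothetical per-concept latent
animals = (rng.random((30, 40)) < 0.3 + 0.6 * typicality[:, None]).astype(float)
tools = (rng.random((30, 40)) < 0.25).astype(float)

for name, mat in [("animals", animals), ("tools", tools)]:
    n_feat, mean_corr = feature_stats(mat)
    print(f"{name}: mean features/concept = {n_feat:.1f}, mean |corr| = {mean_corr:.2f}")
```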


2004 ◽  
Vol 92 (1) ◽  
pp. 10-19 ◽  
Author(s):  
J. D. Crawford ◽  
W. P. Medendorp ◽  
J. J. Marotta

Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye–hand visuomotor system incorporates closed-loop visual feedback, but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
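The reference frame transformation described above (a gaze-centered displacement command rotated by current eye and head orientation) can be made concrete with a small numerical example. The sketch below is illustrative only, uses hypothetical angles and positions, and reduces the full 3-D eye–head–shoulder geometry to a single horizontal rotation axis.

```python
# Minimal sketch (illustrative, not the authors' model): rotate a gaze-centered
# reach error into shoulder-centered coordinates using current eye-in-head and
# head-on-shoulder orientations. All values are hypothetical.
import numpy as np

def rot_z(angle_deg):
    """Rotation about the vertical axis (simplified 1-DOF orientation)."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Gaze-centered representations (hypothetical values, in cm):
target_gaze = np.array([20.0, 5.0, 0.0])    # remembered target, relative to gaze
hand_gaze = np.array([5.0, -10.0, 0.0])     # current hand position, relative to gaze
motor_error_gaze = target_gaze - hand_gaze  # desired displacement, gaze frame

# Current eye and head orientations (hypothetical, horizontal rotations only):
eye_in_head = rot_z(15.0)        # eye rotated 15 deg in the head
head_on_shoulder = rot_z(-10.0)  # head rotated -10 deg on the trunk

# The displacement command must be rotated, not merely copied, into the shoulder
# frame before it can drive the arm; ignoring this rotation produces the
# nonlinear reach errors that behavioral experiments reveal.
motor_error_shoulder = head_on_shoulder @ eye_in_head @ motor_error_gaze
print(motor_error_shoulder)
```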


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Tijl Grootswagers ◽  
Ivy Zhou ◽  
Amanda K. Robinson ◽  
Martin N. Hebart ◽  
Thomas A. Carlson

Abstract The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
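For readers planning to analyse the dataset, the sketch below shows one common way to load and epoch BIDS-formatted EEG with MNE-Python. The root path, subject label, task name, filter settings and epoch window are placeholders chosen for illustration, not values documented by THINGS-EEG; consult the dataset's own documentation before use.

```python
# Minimal sketch: load and epoch one subject's recording from a BIDS-formatted
# EEG dataset with MNE-Python. The path, task label and analysis parameters
# below are assumptions for illustration, not the dataset's documented values.
import mne
from mne_bids import BIDSPath, read_raw_bids

bids_root = "/path/to/THINGS-EEG"  # placeholder location of the downloaded dataset
bids_path = BIDSPath(subject="01", task="rsvp",  # "rsvp" is an assumed task label
                     datatype="eeg", root=bids_root)

raw = read_raw_bids(bids_path)
raw.load_data()
raw.filter(l_freq=0.1, h_freq=100.0)  # broad band-pass; an analysis choice, not prescribed

# Build epochs around stimulus-onset events; event coding is dataset-specific.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=0.6, baseline=(None, 0), preload=True)
print(epochs)
```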


Author(s):  
I Caprara ◽  
P Janssen

Abstract Efficient object grasping requires the continuous control of arm and hand movements based on visual information. Previous studies have identified a network of parietal and frontal areas that is crucial for the visual control of prehension movements. Electrical microstimulation of 3D shape-selective clusters in AIP during fMRI activates areas F5a and 45B, suggesting that these frontal areas may represent important downstream areas for object processing during grasping, but the role of areas F5a and 45B in grasping is unknown. To assess their causal role in the frontal grasping network, we reversibly inactivated 45B, F5a and F5p during visually-guided grasping in macaque monkeys. First, we recorded single neuron activity in 45B, F5a and F5p to identify sites with object responses during grasping. Then, we injected muscimol or saline to measure the grasping deficit induced by the temporary disruption of each of these three nodes in the grasping network. The inactivation of all three areas resulted in a significant increase in grasping time in both animals, with the strongest effect observed in area F5p. These results not only confirm a clear involvement of F5p, but also indicate causal contributions of areas F5a and 45B in visually-guided object grasping.
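The reported inactivation effect is a comparison of grasping times between muscimol and saline sessions. As a hedged illustration only (the study's actual statistical tests are not specified here), the sketch below compares two simulated distributions of grasping times with a one-sided rank-sum test.

```python
# Minimal sketch (simulated data, not the study's statistics): test whether
# grasping times are longer after muscimol than after saline injection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-trial grasping times (ms) for one inactivation site.
saline = rng.normal(loc=450, scale=60, size=120)
muscimol = rng.normal(loc=520, scale=80, size=120)  # slower grasping after inactivation

# One-sided rank-sum test: is the muscimol distribution shifted toward longer times?
stat, p = stats.mannwhitneyu(muscimol, saline, alternative="greater")
median_increase = np.median(muscimol) - np.median(saline)
print(f"median increase = {median_increase:.0f} ms, Mann-Whitney U p = {p:.2g}")
```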


Author(s):  
J.E. Johnson

Although neuroaxonal dystrophy (NAD) has been examined by light and electron microscopy for years, the nature of the components in the dystrophic axons is not well understood. The present report examines nucleus gracilis and cuneatus (the dorsal column nuclei) in the brain stem of aging mice. Mice (C57BL/6J) were sacrificed by aldehyde perfusion at ages ranging from 3 months to 23 months. Several brain areas and parts of other organs were processed for electron microscopy. At 3 months of age, very little evidence of NAD can be discerned by light microscopy. At the EM level, a few axons are found to contain dystrophic material. By 23 months of age, the entire nucleus gracilis is filled with dystrophic axons. Much less NAD is seen in nucleus cuneatus by comparison. The most recurrent pattern of NAD is an enlarged profile, in the center of which is a mass of reticulated material (reticulated portion, or RP).


1969 ◽  
Vol 21 (02) ◽  
pp. 294-303 ◽  
Author(s):  
H Mihara ◽  
T Fujii ◽  
S Okamoto

Summary Blood was injected into the brains of dogs to produce artificial haematomas, and paraffin injected to produce intracerebral paraffin masses. Cerebrospinal fluid (CSF) and peripheral blood samples were withdrawn at regular intervals and their fibrinolytic activities estimated by the fibrin plate method. Trans-form aminomethylcyclohexane-carboxylic acid (t-AMCHA) was administered to some individuals. General relationships were found between changes in CSF fibrinolytic activity, area of tissue damage and survival time. t-AMCHA was clearly beneficial to those animals given a programme of administration. Tissue activator was extracted from the brain tissue after death or sacrifice for haematoma examination. The possible role of tissue activator in relation to haematoma development, and clinical implications of the results, are discussed.

