Neuronal Correlates of Multiple Top–Down Signals during Covert Tracking of Moving Objects in Macaque Prefrontal Cortex

2012 · Vol 24 (10) · pp. 2043-2056
Author(s): Ayano Matsushima, Masaki Tanaka

Resistance to distraction is a key component of executive function and is strongly linked to the prefrontal cortex (PFC). Recent evidence suggests that neural mechanisms exist for the selective suppression of task-irrelevant information. However, neuronal signals related to selective suppression have not yet been identified, whereas nonselective surround suppression, which results from attentional enhancement of relevant stimuli, has been well documented. This study examined single-neuron activity in the lateral PFC while monkeys covertly tracked one of several randomly moving objects. Although many neurons responded to the target, we also found a group of neurons that responded selectively to a distractor that was visually identical to the target. Because most of these neurons were insensitive to an additional distractor that differed explicitly in color from the target, the brain seemed to monitor the distractor only when necessary to maintain internal object segregation. Our results suggest that the lateral PFC might provide at least two top–down signals during covert object tracking: one enhancing visual processing of the target and the other selectively suppressing visual processing of the distractor. Together, these signals may serve to discriminate objects, regulating both the sensitivity and specificity of target choice during covert object tracking.

Author(s): Martin V. Butz, Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, the brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstract encodings to more concrete ones. Bayesian information processing is the key to understanding how such information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point further. Finally, some well-known visual illusions are shown, and the resulting perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
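A minimal sketch of the Bayesian integration the chapter describes, assuming for illustration that both the top-down expectation and the bottom-up observation are Gaussian densities over the same state (the function and parameters below are hypothetical, not taken from the chapter):

```python
import numpy as np

def fuse_gaussian(mu_prior, var_prior, mu_obs, var_obs):
    """Fuse a top-down Gaussian prior with a bottom-up Gaussian observation.

    The posterior is again Gaussian: its precision is the sum of the
    two precisions, and its mean is the precision-weighted average.
    """
    precision = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / precision
    mu_post = var_post * (mu_prior / var_prior + mu_obs / var_obs)
    return mu_post, var_post

# A vague expectation (top-down) meets a sharp sensory cue (bottom-up):
# the posterior is pulled toward the more reliable source.
mu, var = fuse_gaussian(mu_prior=0.0, var_prior=4.0, mu_obs=1.0, var_obs=0.25)
print(mu, var)  # ~0.94, ~0.235: dominated by the precise observation
```

The same precision-weighting applies at every level of a hierarchical generative model, which is why such models integrate top-down and bottom-up information so naturally.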


Author(s): Martin V. Butz, Esther F. Kutter

This chapter addresses primary visual perception, detailing how visual information comes about and, as a consequence, which visual properties provide particularly useful information about the environment. The brain extracts this information systematically, and it separates redundant and complementary aspects of the visual signal to improve the effectiveness of visual processing. Computationally, image smoothing, edge detectors, and motion detectors must be at work. These need to be applied convolutionally over the fixated area, computations that are well suited to the cortical columnar structures of the brain. At the next level, the extracted information needs to be integrated to segment and detect object structures. The brain solves this highly challenging problem by incorporating top-down expectations and by integrating complementary visual cues, such as light reflections, texture, line convergence, shadows, and depth. In conclusion, the need to integrate top-down visual expectations to form complete and stable percepts is made explicit.
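A minimal sketch of the smoothing-plus-edge-detection pipeline described above, using a standard box filter and Sobel kernels in Python (the specific kernels and toy image are illustrative assumptions, not taken from the chapter):

```python
import numpy as np
from scipy.signal import convolve2d

# Toy image: a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Smoothing kernel (3x3 box filter) followed by Sobel edge detectors.
box = np.ones((3, 3)) / 9.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

smoothed = convolve2d(image, box, mode="same")
gx = convolve2d(smoothed, sobel_x, mode="same")  # horizontal gradient
gy = convolve2d(smoothed, sobel_y, mode="same")  # vertical gradient
edges = np.hypot(gx, gy)  # gradient magnitude peaks along the square's border
print(np.round(edges, 2))
```

Each output pixel depends only on a small local neighborhood, which is exactly the kind of repeated, local computation that maps naturally onto cortical columnar structures.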


2005 · Vol 17 (8) · pp. 1341-1352
Author(s): Joseph B. Hopfinger, Anthony J. Ries

Recent studies have generated debate regarding whether reflexive attention mechanisms are triggered in a purely automatic stimulus-driven manner. Behavioral studies have found that a nonpredictive “cue” stimulus will speed manual responses to subsequent targets at the same location, but only if that cue is congruent with actively maintained top-down settings for target detection. When a cue is incongruent with top-down settings, response times are unaffected, and this has been taken as evidence that reflexive attention mechanisms were never engaged in those conditions. However, manual response times may mask effects on earlier stages of processing. Here, we used event-related potentials to investigate the interaction of bottom-up sensory-driven mechanisms and top-down control settings at multiple stages of processing in the brain. Our results dissociate sensory-driven mechanisms that automatically bias early stages of visual processing from later mechanisms that are contingent on top-down control. An early enhancement of target processing in the extrastriate visual cortex (i.e., the P1 component) was triggered by the appearance of a unique bright cue, regardless of top-down settings. The enhancement of visual processing was prolonged, however, when the cue was congruent with top-down settings. Later processing in posterior temporal-parietal regions (i.e., the ipsilateral invalid negativity) was triggered automatically when the cue consisted of the abrupt appearance of a single new object. However, in cases where more than a single object appeared during the cue display, this stage of processing was contingent on top-down control. These findings provide evidence that visual information processing is biased at multiple levels in the brain, and the results distinguish automatically triggered sensory-driven mechanisms from those that are contingent on top-down control settings.


2011 · Vol 14 (5) · pp. 656-661
Author(s): Theodore P. Zanto, Michael T. Rubens, Arul Thangavel, Adam Gazzaley

Detection and tracking of multiple moving objects from a sequence of video frames, and obtaining visual records of those objects, play an important role in video surveillance systems. Transform- and filtering-based techniques designed for video pattern matching and moving-object detection fail to handle large numbers of objects in a video frame and need further optimization. Several existing methods perform detection and tracking of moving objects; however, their performance efficiency needs to be improved to achieve more robust and reliable detection and tracking. To improve pattern-matching accuracy, a Quantized Kalman Filter-based Pattern Matching (QKF-PM) technique is proposed for detecting and tracking moving objects. The approach comprises three stages: a top-down approach, a kernel pattern segment function, and Kalman filtering. First, the top-down approach based on Kalman filtering (KF) detects the chromatic shadows of objects. Next, the kernel pattern segment function creates seed points for detecting moving-object patterns. Finally, object tracking is performed by the proposed quantized Kalman filter, in which the centers of the seed-point affinity feature values are used to track moving objects within a region using a minimum-bounding-box approach. Experimental results reveal that the proposed QKF-PM technique achieves better performance in terms of true detection rate, pattern matching accuracy, pattern matching time, and object tracking accuracy with respect to the number of video frames per second.
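For orientation, here is a minimal sketch of the standard constant-velocity Kalman filter that underlies such trackers; the abstract does not specify the QKF-PM details, so the quantization and seed-point affinity steps are not reproduced, and all noise parameters below are assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter tracking one object's (x, y) center
# across video frames. State: [px, py, vx, vy].

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],   # state transition: position += velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (assumed)
R = np.eye(2) * 1.0            # measurement noise (assumed)

x = np.zeros(4)                # initial state
P = np.eye(4) * 10.0           # initial uncertainty

def kf_step(x, P, z):
    # Predict forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured object center z = [px, py].
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed in noisy detections of an object moving diagonally.
for t in range(10):
    z = np.array([t + np.random.randn() * 0.5, t + np.random.randn() * 0.5])
    x, P = kf_step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```

In a multi-object setting, one such filter runs per object, and the predicted positions are matched against new detections (e.g., via bounding-box overlap) before each update.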


2012 · Vol 107 (3) · pp. 766-771
Author(s): Aymar de Rugy, Welber Marinovic, Guy Wallis

To intercept or avoid moving objects successfully, we must compensate for the sensorimotor delays associated with visual processing and motor movement. Although straightforward in the case of constant-velocity motion, it is unclear how humans compensate for accelerations, as our visual system is relatively poor at detecting changes in velocity. Work on free-falling objects suggests that we are able to predict the effects of gravity, but this represents the simplest, limiting case, in which acceleration is constant and motion linear. Here, we show that an internal model also predicts the effects of complex, varying accelerations when they result from lawful interactions with the environment. Participants timed their responses to coincide with the arrival of a ball rolling within tubes of various shapes. The pattern of errors indicates that participants were able to compensate for most of the effects of ball acceleration (∼85%) within relatively short practice (∼300 trials). Errors on catch trials, in which the ball velocity was unexpectedly held constant, further confirmed that participants were anticipating the acceleration induced by the shape of the tube. A similar effect was obtained when the visual scene was projected upside down, indicating that the underlying predictive mechanism is flexible and not confined to ecologically valid interactions. These findings demonstrate that the brain is able to predict motion on the basis of prior experience of complex interactions between an object and its environment.
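A minimal numeric sketch of the prediction problem participants implicitly solved: given a tube's height profile, the arrival time follows from energy conservation. The tube shapes and parameters below are illustrative assumptions, not the study's actual stimuli, and friction losses are ignored:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def arrival_time(h, x_end=1.0, v0=1.0, n=10_000):
    """Time for a solid sphere rolling without slipping to traverse a
    tube with height profile h(x), by integrating dt = ds / v."""
    x = np.linspace(0.0, x_end, n)
    dh = np.gradient(h(x), x)
    ds = np.sqrt(1.0 + dh**2) * (x[1] - x[0])   # arc-length elements
    # Rolling sphere: KE = (7/10) m v^2, so v^2 gains (10/7) g * drop.
    v2 = v0**2 + (10.0 / 7.0) * g * (h(0.0) - h(x))
    v2 = np.maximum(v2, 1e-9)                   # guard: ball may stall uphill
    return np.sum(ds / np.sqrt(v2))

flat = lambda x: np.zeros_like(np.asarray(x, dtype=float))
dip = lambda x: -0.2 * np.sin(np.pi * np.asarray(x, dtype=float))

print(f"flat tube:   {arrival_time(flat):.3f} s")  # constant velocity, ~1.000 s
print(f"dipped tube: {arrival_time(dip):.3f} s")   # ball accelerates in the dip
```

The point of the computation is that arrival time depends on the whole profile h(x), not on instantaneous velocity alone, which is what makes an internal model of the object-environment interaction necessary.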

