A neural circuit mechanism of categorical perception: top-down signaling in the primate cortex

Author(s):  
Bin Min ◽  
Daniel P. Bliss ◽  
Arup Sarma ◽  
David J. Freedman ◽  
Xiao-Jing Wang

Abstract: In contrast to the feedforward architectures commonly used in the deep networks at the core of today’s AI revolution, the biological cortex is endowed with an abundance of feedback projections. Feedback signaling is often difficult to identify and dissociate experimentally, and its computational roles remain poorly understood. Here, we investigated a cognitive phenomenon, called categorical perception (CP), that reveals the influence of high-level category learning on low-level feature-based perception, as a putative signature of top-down signaling. By examining behavioral data from a visual motion delayed matching experiment in non-human primates, we found that, after categorization training, motion directions closer to (respectively, farther from) a category center became more (less) difficult to discriminate. This distance-dependent change in discrimination performance along the dimension relevant to the learned categories provides direct evidence for the CP phenomenon. To explain this experimental finding, we developed a neural circuit model that incorporates key neurophysiological findings in visual categorization, working memory and decision making. Our model accounts for the behavioral data indicative of CP, pinpoints its circuit basis, suggests novel experimentally testable predictions and provides a functional explanation for its existence. Our work shows that delayed matching paradigms in non-human primates, combined with biologically based modeling, can serve as a promising model system for elucidating the neural mechanisms of CP as a manifestation of top-down signaling in the cortex.

Significance Statement: Categorical perception is a cognitive phenomenon revealing the influence of high-level category learning on low-level feature-based perception. However, its underlying neural mechanisms are largely unknown. Here, we found behavioral evidence for this phenomenon in a visual motion delayed matching experiment in non-human primates.
We developed a neural circuit model that accounts for these behavioral data, pinpoints the circuit basis of the phenomenon, suggests novel experimentally testable predictions, and provides a functional explanation for its existence. Our work shows that delayed matching paradigms in non-human primates, combined with biologically based modeling, can serve as a promising model system for elucidating the neural mechanisms of categorical perception as a manifestation of top-down signaling in the cortex.
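The qualitative signature described above (poorer discrimination near a category center, better discrimination near the category boundary) can be caricatured with a simple warp of perceptual space. This is a toy illustration with invented parameters, not the paper's circuit model:

```python
import numpy as np

def warped(theta, boundary=180.0, k=60.0, w=30.0):
    """Internal coordinate after category training: a tanh warp expands
    directions near the category boundary and compresses them near the
    category centers (illustrative parameters only)."""
    return boundary + k * np.tanh((theta - boundary) / w)

def internal_distance(t1, t2, trained):
    """Distance between internal representations, as a proxy for how
    discriminable the two motion directions are."""
    if not trained:
        return abs(t1 - t2)                    # untrained: veridical spacing
    return abs(warped(t1) - warped(t2))

straddling = internal_distance(175, 185, trained=True)   # pair straddling the boundary
within = internal_distance(95, 105, trained=True)        # pair near a category center
baseline = internal_distance(175, 185, trained=False)    # pre-training spacing
```

After "training", the pair straddling the boundary is represented farther apart than before, while the within-category pair is compressed, reproducing the distance-dependent discrimination change.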

2015 ◽  
Vol 113 (9) ◽  
pp. 3219-3228 ◽  
Author(s):  
Shizuka Nohara ◽  
Kenji Kawano ◽  
Kenichiro Miura

To understand the mechanisms underlying visual motion analysis for perceptual and oculomotor responses, and their similarities and differences, we analyzed eye movement responses to two-frame animations of dual-grating 3f5f stimuli while subjects performed direction discrimination tasks. The 3f5f stimulus was composed of two sinusoids with a spatial frequency ratio of 3:5 (3f and 5f), creating a pattern with fundamental frequency f. When this stimulus was shifted by 1/4 of the fundamental wavelength, the two components each shifted by 1/4 of their own wavelengths, but in opposite directions: the 5f forward and the 3f backward. By presenting the 3f5f stimulus with various interstimulus intervals (ISIs), two visual-motion-analysis mechanisms, low-level energy-based and high-level feature-based, could be effectively distinguished. This is because the response direction depends on the relative contrast between the components when the energy-based mechanism operates, but not when the feature-based mechanism works. We found that when the 3f5f stimuli were presented with shorter ISIs (<100 ms) and the 3f component had the higher contrast, both perceptual and ocular responses were in the direction of the pattern shift, whereas the responses were reversed when the 5f component had the higher contrast, suggesting operation of the energy-based mechanism. On the other hand, the ocular responses were almost negligible with longer ISIs (>100 ms), whereas perceived directions were biased toward the direction of the pattern shift. These results suggest that the energy-based mechanism is dominant in oculomotor responses throughout the range of ISIs, whereas there is a transition from energy-based to feature-tracking mechanisms when we perceive visual motion.
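The opposite component shifts described above follow from phase wrapping, which can be checked in a few lines (illustrative frequency values, not the study's display parameters):

```python
import numpy as np

# Shifting the compound pattern by a quarter of the fundamental
# wavelength imposes phase steps of 3*pi/2 and 5*pi/2 on the 3f and 5f
# components. Wrapped into (-pi, pi], these become -pi/2 and +pi/2:
# the 3f component appears to step backward, the 5f component forward.

f = 1.0                      # fundamental spatial frequency (cycles/unit)
shift = 0.25 / f             # quarter of the fundamental wavelength

def wrapped_phase_step(harmonic):
    """Apparent phase step of one component after the pattern shift,
    wrapped into (-pi, pi] as a motion detector would register it."""
    step = 2 * np.pi * harmonic * f * shift
    return (step + np.pi) % (2 * np.pi) - np.pi

step_3f = wrapped_phase_step(3)   # -pi/2: backward by 1/4 of its wavelength
step_5f = wrapped_phase_step(5)   # +pi/2: forward by 1/4 of its wavelength
```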


2019 ◽  
Author(s):  
Cheng Qiu ◽  
Long Luu ◽  
Alan A. Stocker

Abstract: Humans tend to commit to a single interpretation of what has caused some observed evidence rather than considering all possible alternatives. This tendency can explain various forms of confirmation and reference biases. However, committing to a single high-level interpretation seems short-sighted and irrational, and it is thus unclear why humans are motivated to pursue such a strategy.

As a first step toward answering this question, we systematically quantified how this strategy affects estimation accuracy at the feature level in the context of two universal hierarchical inference tasks: categorical perception and causal cue combination. Using model simulations, we demonstrate that although estimation is generally impaired when conditioned on only a single high-level interpretation, the impairment is not uniform across the entire feature range. On the contrary, compared to a full inference strategy that considers all high-level interpretations, accuracy is actually better for feature values for which the probability of an incorrect categorical/structural commitment is relatively low. That is to say, if an observer is reasonably certain about the high-level interpretation of the feature, it is advantageous to condition subsequent feature inference only on that particular interpretation. We also show that this benefit of commitment is substantially amplified if late noise corrupts information processing (e.g., during retention in working memory). Our results suggest that top-down inference strategies that rely solely on the most likely high-level interpretation can be favorable and can at least locally outperform a full inference strategy.
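A minimal toy version of the two strategies (with invented category parameters, not the paper's simulations) makes the contrast concrete: a feature x belongs to one of two Gaussian categories, the observer sees a noisy measurement m, and "full inference" averages per-category estimates by category probability while "commitment" conditions on the single most likely category.

```python
import numpy as np

mu = np.array([-1.0, 1.0])    # hypothetical category means
sig_c, sig_m = 1.0, 0.5       # category spread, measurement noise

def category_posterior(m):
    # p(category | m): m ~ N(mu_c, sig_c^2 + sig_m^2) under category c
    var = sig_c**2 + sig_m**2
    lik = np.exp(-(m - mu)**2 / (2 * var))
    return lik / lik.sum()

def posterior_mean_given_c(m):
    # E[x | m, c] for a Gaussian category prior N(mu_c, sig_c^2)
    w = sig_c**2 / (sig_c**2 + sig_m**2)
    return w * m + (1 - w) * mu

def estimate(m, commit):
    p = category_posterior(m)
    means = posterior_mean_given_c(m)
    if commit:
        return means[np.argmax(p)]      # condition on the MAP category
    return p @ means                    # marginalize over categories
```

Far from the category boundary (e.g., m = 2.0) the two strategies nearly agree, because the MAP category is almost certainly correct; near the boundary (e.g., m = 0.3) they diverge.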


Author(s):  
Alan Wee-Chung Liew ◽  
Ngai-Fong Law

With the rapid growth of the Internet and multimedia systems, the use of visual information has increased enormously, such that indexing and retrieval techniques have become important. Historically, images were manually annotated with metadata such as captions or keywords (Chang & Hsu, 1992). Image retrieval was then performed by searching for images with similar keywords. However, the keywords used may differ from one person to another, and many different keywords can describe the same image. Consequently, retrieval results are often inconsistent and unreliable. Due to these limitations, there is growing interest in content-based image retrieval (CBIR). These techniques extract meaningful information or features from an image so that images can be classified and retrieved automatically based on their contents. Existing image retrieval systems such as QBIC and Virage extract so-called low-level features, such as color, texture and shape, from an image in the spatial domain for indexing. Low-level features sometimes fail to represent high-level semantic image features, which are subjective and depend greatly on user preferences. To bridge this gap, a top-down retrieval approach involving high-level knowledge can complement the low-level features. This article deals with various aspects of CBIR, including bottom-up feature-based image retrieval in both the spatial and compressed domains, as well as top-down task-based image retrieval using prior knowledge.
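A minimal sketch of the low-level, feature-based route might index images by a color histogram and rank them by histogram intersection. This is a generic illustration, not QBIC's or Virage's actual feature set:

```python
import numpy as np

def color_histogram(img, bins=8):
    """img: HxWx3 uint8 array -> normalized joint color histogram."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical histograms."""
    return np.minimum(h1, h2).sum()

# Tiny synthetic "database": a red image, a blue image, and a noisy red image
rng = np.random.default_rng(0)
red = np.zeros((16, 16, 3), dtype=np.uint8); red[..., 0] = 200
blue = np.zeros((16, 16, 3), dtype=np.uint8); blue[..., 2] = 200
noisy_red = np.clip(red + rng.integers(0, 20, red.shape), 0, 255).astype(np.uint8)

h_red, h_blue, h_noisy = (color_histogram(im) for im in (red, blue, noisy_red))
# A query for the red image ranks the noisy red image above the blue one.
```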


2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, it is necessary to use compositional techniques even at the highest abstraction layers. When focusing on the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail of those layers makes direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification at these layers. One technique to tackle this challenge is top-down verification, where properties verified on the top layers (representing abstract specifications of a system) are propagated down, by means of simulation, to the lowest layers (which implement the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, so compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of refinement verification. In this article, we present CSim2, a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim2 uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to this expressiveness, CSimpl can model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim2 provides a framework for the verification of rely-guarantee properties to reason compositionally about CSimpl specifications. Focusing on top-down verification, CSim2 provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations.
By using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim2 through a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim2 to show that the property is preserved at lower abstraction layers.
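As a generic illustration of the simulation idea (entirely unrelated to CSim2's Isabelle/HOL formalization), a refinement check between two toy transition systems can be sketched as: every concrete step must map, under an abstraction function, to an abstract step or a stutter.

```python
# Abstract specification: a device alternates between idle and busy
spec_steps = {("idle", "busy"), ("busy", "idle")}

# Concrete implementation: busy work is split into two internal states
impl_steps = {("idle0", "busy1"), ("busy1", "busy2"), ("busy2", "idle0")}

def abstraction(s):
    """Map a concrete state to the abstract state it implements."""
    return "idle" if s.startswith("idle") else "busy"

def refines(impl, spec, alpha):
    """Simulation check: each concrete step either stutters (same
    abstract state) or corresponds to an allowed abstract step."""
    for (s, t) in impl:
        a, b = alpha(s), alpha(t)
        if a != b and (a, b) not in spec:
            return False
    return True

ok = refines(impl_steps, spec_steps, abstraction)
```

Any property of abstract traces (e.g., "busy is always eventually followed by idle") then carries over to the implementation's traces up to stuttering, which is the essence of propagating properties downward.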


Author(s):  
Maarten J. G. M. van Emmerik

Abstract Feature modeling enables the specification of a model with standardized high-level shape aspects that have a functional meaning for design or manufacturing. In this paper an interactive graphical approach to feature-based modeling is presented. The user can represent features as new CSG primitives, specified as a Boolean combination of halfspaces. Constraints between halfspaces specify the geometric characteristics of a feature and control feature validity. Once a new feature is defined and stored in a library, it can be used in other objects and positioned, oriented and dimensioned by direct manipulation with a graphics cursor. Constraints between features prevent feature interference and specify spatial relations between features.
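The halfspace-based feature definition described above can be sketched with point-membership predicates. The slot feature and its parameters here are hypothetical examples for illustration, not the paper's modeller:

```python
def halfspace(n, d):
    """Points p satisfying n . p <= d (n = outward normal, d = offset)."""
    return lambda p: sum(ni * pi for ni, pi in zip(n, p)) <= d

def intersect(*hs):
    """Boolean intersection of halfspace predicates."""
    return lambda p: all(h(p) for h in hs)

def complement(h):
    """Boolean complement, for subtractive features."""
    return lambda p: not h(p)

# A 2D slot feature: the region 1 <= x <= 3 and y >= 2
slot = intersect(halfspace((-1, 0), -1),   # x >= 1
                 halfspace((1, 0), 3),     # x <= 3
                 halfspace((0, -1), -2))   # y >= 2

inside = slot((2, 5))      # point inside the slot
outside = slot((0, 5))     # point left of the slot
```

Constraints between the halfspaces (e.g., keeping the two side walls parallel and a fixed width apart) would then control feature validity, as described in the abstract.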


Author(s):  
Eugene Poh ◽  
Naser Al-Fawakari ◽  
Rachel Tam ◽  
Jordan A. Taylor ◽  
Samuel D. McDougle

Abstract: To generate adaptive movements, we must generalize what we have previously learned to novel situations. The generalization of learned movements has typically been framed as a consequence of neural tuning functions that overlap for similar movement kinematics. However, as is true in many domains of human behavior, situations that require generalization can also be framed as inference problems. Here, we attempt to broaden the scope of theories about motor generalization, hypothesizing that part of the typical motor generalization function can be characterized as a consequence of top-down decisions about different movement contexts. We tested this proposal by having participants make explicit similarity ratings over traditional contextual dimensions (movement directions) and abstract contextual dimensions (target shape), and perform a visuomotor adaptation generalization task in which trials varied over those dimensions. We found support for our predictions across five experiments, which revealed a tight link between subjective similarity and motor generalization. Our findings suggest that the generalization of learned motor behaviors is influenced by both low-level kinematic features and high-level inferences.
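The similarity-scaled view of generalization sketched in the abstract can be caricatured as follows, with all numbers invented for illustration: transfer of learned adaptation to a probe condition is modeled as the learned compensation scaled by both a kinematic similarity term and an abstract contextual term (e.g., a rated target-shape similarity).

```python
import numpy as np

def direction_similarity(delta_deg, width=30.0):
    """Gaussian similarity over angular distance from the trained direction
    (the classic tuning-based generalization term)."""
    return np.exp(-delta_deg**2 / (2 * width**2))

def predicted_transfer(adaptation, delta_deg, context_rating):
    """context_rating in [0, 1], e.g., a normalized explicit similarity
    rating over an abstract contextual dimension such as target shape."""
    return adaptation * direction_similarity(delta_deg) * context_rating

# 15 degrees of learned compensation, probed near vs. far from training
same_context_near = predicted_transfer(15.0, 10, context_rating=1.0)
new_context_far = predicted_transfer(15.0, 90, context_rating=0.4)
```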


2021 ◽  
Author(s):  
Davendu Y. Kulkarni ◽  
Gan Lu ◽  
Feng Wang ◽  
Luca di Mare

Abstract Gas turbine engine design involves multi-disciplinary, multi-fidelity, iterative design-analysis processes. These highly intertwined processes are nowadays incorporated into automated design frameworks to facilitate high-fidelity, fully coupled, large-scale simulations. The most tedious and time-consuming step in such simulations is the construction of a common geometry database that ensures geometry consistency at every step of the design iteration, is accessible to multi-disciplinary solvers and allows system-level analysis. This paper presents a novel design-intent-driven geometry modelling environment based on a top-down, feature-based geometry model generation method. In the proposed object-oriented environment, each feature entity possesses a separate identity, denotes an abstract geometry, and carries a set of characteristics. These geometry features are organised in a turbomachinery feature taxonomy. The engine geometry is represented by a tree-like logical structure of geometry features, wherein abstract features outline the engine architecture, while the detailed geometry is defined by lower-level features. This flexible top-down arrangement of the feature tree preserves the design intent throughout the design process, allows the design to be modified freely, and supports automatic propagation of design-intent variations throughout the geometry. The application of the proposed feature-based geometry modelling environment is demonstrated by generating a whole-engine computational geometry. This geometry modelling environment provides an efficient means of rapidly populating complex turbomachinery assemblies. The generated engine geometry is fully scalable, easily modifiable and reusable for generating the geometry models of new engines or their derivatives. This capability also enables fast multi-fidelity simulation and optimisation of various gas turbine systems.
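The feature-tree idea (abstract parents outlining the architecture, leaf features holding detail, parent edits propagating down) can be sketched generically. The class and parameter names here are hypothetical illustrations, not the authors' framework:

```python
class Feature:
    """A node in a feature tree: named parameters plus child features."""
    def __init__(self, name, **params):
        self.name, self.params, self.children = name, params, []

    def add(self, child):
        self.children.append(child)
        return child

    def resolve(self, inherited=None):
        """Flatten the tree: children inherit (and may override) the
        parameters of their ancestors."""
        merged = {**(inherited or {}), **self.params}
        out = {self.name: merged}
        for c in self.children:
            out.update(c.resolve(merged))
        return out

# Abstract features outline the architecture; leaves hold detail
engine = Feature("engine", scale=1.0)
compressor = engine.add(Feature("compressor", stages=8))
rotor = compressor.add(Feature("rotor1", blade_count=24))

geometry = engine.resolve()
# Editing a high-level parameter propagates to every dependent feature
engine.params["scale"] = 1.2
rescaled = engine.resolve()
```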


Author(s):  
Rajneet Sodhi ◽  
Joshua U. Turner

Abstract This paper describes a strategy for representing tolerance information and assembly information in a feature-based design environment. The concept of designing with features is extended to incorporate the specification of tolerance information. This allows appropriate tolerancing strategies to be provided within the feature definitions themselves. Thus a closer connection is formed between features and the functional intent implicit in their use. The concept of designing with features is also extended to incorporate the specification of assembly information, through the use of assembly features which provide a high-level user interface for the creation and modeling of assemblies, and which handle the identification and creation of mating relations between components. Several examples of component and assembly design using this extended feature-based approach are presented.


2018 ◽  
Author(s):  
Guanghan Ning

The task of human pose estimation in natural scenes is to determine the precise pixel locations of body keypoints. It is very important for many high-level computer vision tasks, including action and activity recognition, human-computer interaction, motion capture, and animation. We cover two different approaches to this task: the top-down approach and the bottom-up approach. In the top-down approach, we propose a human tracking method called ROLO that localizes each person, and then propose a state-of-the-art single-person human pose estimator that predicts the body keypoints of each individual. In the bottom-up approach, we propose an efficient multi-person pose estimator with which we participated in a PoseTrack challenge [11]. On top of these, we propose to employ adversarial training to further boost the performance of the single-person human pose estimator while generating synthetic images. We also propose a novel PoSeg network that jointly estimates multi-person human poses and semantically segments the portraits of these persons at the pixel level. Lastly, we extend some of the proposed methods for human pose estimation and portrait segmentation to the task of human parsing, a more fine-grained computer vision perception of humans.


Mind-Society ◽  
2019 ◽  
pp. 22-47
Author(s):  
Paul Thagard

Psychological explanations based on representations and procedures can be deepened by showing how they emerge from neural mechanisms. Neurons represent aspects of the world by collective patterns of firing. These patterns can be bound into more complicated patterns that can transcend the limitations of sensory inputs. Semantic pointers are a special kind of representation that operates by binding neural patterns encompassing sensory, motor, verbal, and emotional information. The semantic pointer theory applies not only to the ordinary operations of mental representations like concepts and rules but also to the most high-level kinds of human thinking, including language, creativity, and consciousness. Semantic pointers also encompass emotions, construed as bindings that combine cognitive appraisal with physiological perception.
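The binding operation behind semantic pointers is typically implemented, in Eliasmith's Semantic Pointer Architecture, as circular convolution over high-dimensional vectors (a holographic reduced representation, after Plate), with approximate unbinding via an involution of one vector. A minimal numeric sketch, with an illustrative dimensionality:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512   # illustrative vector dimensionality

def vec():
    """A random unit vector standing in for a neural firing pattern."""
    v = rng.normal(0, 1 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, computed via the FFT convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(ab, a):
    """Approximate inverse: convolve with the involution of a."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(ab, a_inv)

concept, emotion = vec(), vec()
bound = bind(concept, emotion)         # a single pattern carrying both
recovered = unbind(bound, concept)     # noisy reconstruction of `emotion`
sim = recovered @ emotion              # high similarity to the bound item
```

The bound vector has the same dimensionality as its parts, which is what lets patterns be recursively composed into the more complicated patterns the abstract describes.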

