computational theory
Recently Published Documents

TOTAL DOCUMENTS: 366 (five years: 65)
H-INDEX: 38 (five years: 3)

2022, pp. 243-266
Author(s): Ashu M. G. Solo, Madan M. Gupta

Fuzzy logic can deal with information arising from perception and cognition that is uncertain, imprecise, vague, partially true, or without sharp boundaries. Teachers, instructors, and professors can use fuzzy logic to assign linguistic grades and to perform decision making and data mining with those grades. Many aspects of fuzzy logic, including fuzzy sets, linguistic variables, fuzzy rules, fuzzy math, fuzzy database queries, the computational theory of perceptions, and computing with words, are useful in uncertainty management of linguistic evaluations for students. After describing the theory of fuzzy logic, this chapter provides many examples of these applications.
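The idea of assigning linguistic grades with fuzzy sets can be sketched as follows. This is a minimal illustration, not the chapter's actual scheme: the grade labels, breakpoints, and triangular membership functions are all assumptions made for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

# Each linguistic grade is a fuzzy set over a 0-100 score scale
# (hypothetical breakpoints chosen purely for illustration).
GRADES = {
    "poor":      lambda s: tri(s, -1, 25, 50),
    "fair":      lambda s: tri(s, 25, 50, 75),
    "good":      lambda s: tri(s, 50, 75, 90),
    "excellent": lambda s: tri(s, 75, 100, 101),
}

def linguistic_grade(score):
    """Return the best-matching grade and all membership degrees."""
    memberships = {g: mu(score) for g, mu in GRADES.items()}
    return max(memberships, key=memberships.get), memberships

grade, mu = linguistic_grade(82)
```

A score of 82 here belongs partially to both "good" and "excellent"; the graded memberships, rather than a hard cutoff, are what make downstream decision making with linguistic labels possible.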


2021
Author(s): Hermann Moisl

Abstract This paper proposes a model for implementing intrinsic natural language sentence meaning in a physical language understanding system, where 'intrinsic' is understood as 'independent of meaning ascription by system-external observers'. The proposal is that intrinsic meaning can be implemented as a point attractor in the state space of a nonlinear dynamical system with feedback, generated by temporally sequenced inputs. It is motivated by John Searle's well-known (1980) critique of the then-standard and currently still influential Computational Theory of Mind (CTM), the essence of which was that CTM representations lack intrinsic meaning because that meaning depends on ascription by an observer. The proposed dynamical model comprises a collection of interacting artificial neural networks and, because it is computationally interpretable as a finite state machine, constitutes a radical simplification of the principle of compositional phrase structure at the heart of the current standard view of sentence semantics.
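The notion of a point attractor in a feedback network can be illustrated with a toy example (this is not Moisl's model; the network size, weights, and contraction factor are assumptions for the sketch). A recurrent update with a contractive weight matrix drives any initial state to a single fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.5 / np.linalg.norm(W, 2)   # force spectral norm 0.5 -> contraction
b = 0.1 * rng.standard_normal(8)  # bias standing in for a fixed input

x = rng.standard_normal(8)        # arbitrary initial state
for _ in range(200):              # iterate the feedback dynamics
    x = np.tanh(W @ x + b)

# The state has converged: applying the update map leaves it unchanged.
residual = np.linalg.norm(np.tanh(W @ x + b) - x)
```

Because the map is a contraction, the fixed point is unique and independent of the starting state; it is this input-determined resting state that the paper proposes as a candidate carrier of intrinsic meaning.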


Author(s): Yu-Ying Chuang, R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strengths between input and output representations. Words' forms and their meanings are both represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors; for production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the 'discriminative lexicon'. The model shows good performance both for production and comprehension accuracy and for predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the 'discriminative lexicon' provides a cognitively motivated statistical modeling approach to lexical processing.


2021
Author(s): Shi Pui Donald Li, Michael F. Bonner

The scene-preferring portion of the human ventral visual stream, known as the parahippocampal place area (PPA), responds to scenes and landmark objects, which tend to be large in real-world size, fixed in location, and inanimate. However, the PPA also exhibits preferences for low-level contour statistics, including rectilinearity and cardinal orientations, that are not directly predicted by theories of scene- and landmark-selectivity. It is unknown whether these divergent findings of both low- and high-level selectivity in the PPA can be explained by a unified computational theory. To address this issue, we fit hierarchical computational models of mid-level tuning to the image-evoked fMRI responses of the PPA, and we performed a series of high-throughput experiments on these models. Our findings show that hierarchical encoding models of the PPA exhibit emergent selectivity across multiple levels of complexity, giving rise to high-level preferences along dimensions of real-world size, fixedness, and naturalness/animacy as well as low-level preferences for rectilinear shapes and cardinal orientations. These results reconcile disparate theories of PPA function in a unified model of mid-level visual representation, and they demonstrate how multifaceted selectivity profiles naturally emerge from the hierarchical computations of visual cortex and the natural statistics of images.
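The general shape of an encoding-model fit of the kind described above can be sketched as follows. This is a generic illustration, not the authors' pipeline: the feature matrix, dimensions, noise level, and ridge penalty are all assumptions, with simulated data standing in for model activations and fMRI responses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_imgs, n_feat, n_vox = 200, 50, 5
X = rng.standard_normal((n_imgs, n_feat))        # model features per image
W = rng.standard_normal((n_feat, n_vox))
Y = X @ W + 0.1 * rng.standard_normal((n_imgs, n_vox))  # simulated voxels

train, test = slice(0, 150), slice(150, 200)
lam = 1.0
# Closed-form ridge solution: (X'X + lam I)^-1 X'Y on training images.
A = X[train].T @ X[train] + lam * np.eye(n_feat)
B = np.linalg.solve(A, X[train].T @ Y[train])

# Predictivity: correlation of predicted vs. observed held-out responses.
pred = X[test] @ B
r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_vox)]
```

Fitting such models with features from different layers of a hierarchy, and inspecting which image properties drive the fitted responses, is the kind of high-throughput analysis that lets one ask whether low- and high-level preferences emerge from a single model.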


2021
Author(s): Sangeet Khemlani, P. N. Johnson-Laird

2021
Author(s): Aran Nayebi, Nathan C. L. Kong, Chengxu Zhuang, Justin L. Gardner, Anthony M. Norcia, ...

Task-optimized deep convolutional neural networks are the most quantitatively accurate models of the primate ventral visual stream. However, such networks are implausible as models of the mouse visual system: mouse visual cortex is known to have a shallower hierarchy, and the supervised objectives these networks are typically trained with are likely ethologically relevant in neither content nor quantity. Here we develop shallow network architectures that are more consistent with anatomical and physiological studies of mouse visual cortex than current models. We demonstrate that hierarchically shallow architectures trained using contrastive objective functions applied to visual-acuity-adapted images achieve neural prediction performance exceeding that of the same architectures trained in a supervised manner, and yield the most quantitatively accurate models of the mouse visual system. Moreover, these models' neural predictivity significantly surpasses that of supervised, deep architectures that are known to correspond well to the primate ventral visual stream. Finally, we derive a novel measure of inter-animal consistency and show that the best models closely match this quantity across visual areas. Taken together, our results suggest that contrastive objectives operating on shallow architectures with ethologically motivated image transformations may be a biologically plausible computational theory of visual coding in mice.
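The contrastive objectives referred to here are of the InfoNCE family: two views of the same image should embed close together, and far from other images. A minimal sketch on toy embeddings (the paper's actual objectives, architectures, and image transformations are not reproduced; batch size, dimensionality, and temperature are assumptions):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for a batch of paired embeddings z1[i] <-> z2[i]."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau        # cosine similarities / temperature
    # Cross-entropy with the matching pair (the diagonal) as the target.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(3)
anchor = rng.standard_normal((8, 16))
positive = anchor + 0.01 * rng.standard_normal((8, 16))  # augmented views
mismatched = np.roll(anchor, 1, axis=0)                  # wrong pairings

aligned_loss = info_nce(anchor, positive)     # low: pairs match
random_loss = info_nce(anchor, mismatched)    # high: pairs do not match
```

No labels are required, which is what makes such objectives a candidate for ethologically plausible learning: the training signal comes entirely from image transformations.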


Stats, 2021, Vol 4 (2), pp. 486-508
Author(s): Kunhui Zhang, Yen-Chi Chen

In this paper, we propose a new clustering method inspired by mode clustering that not only finds clusters but also assigns each cluster an attribute label. Clusters obtained from our method reveal the connectivity of the underlying distribution. We also design a local two-sample test based on the clustering result that has more power than a conventional method. We apply our method to the Astronomy and GvHD data and show that it finds meaningful clusters. We also derive the statistical and computational theory of our method.
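The mode-clustering idea that inspires the method can be sketched with a plain mean-shift procedure (the paper's attribute labelling and local two-sample test are not reproduced; the bandwidth, data, and labelling rule here are illustrative assumptions): each point ascends the kernel density estimate to a mode, and points sharing a mode form a cluster.

```python
import numpy as np

def mean_shift(X, bandwidth=0.5, steps=100):
    """Move each point toward its local density mode via mean-shift."""
    modes = X.copy()
    for _ in range(steps):
        for i, m in enumerate(modes):
            # Gaussian-kernel weights of all data points around position m.
            w = np.exp(-np.sum((X - m) ** 2, axis=1) / (2 * bandwidth**2))
            modes[i] = (w[:, None] * X).sum(axis=0) / w.sum()
    return modes

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.2, (30, 2)),    # cluster near (0, 0)
               rng.normal(3, 0.2, (30, 2))])   # cluster near (3, 3)
modes = mean_shift(X)
# Points whose modes (nearly) coincide share a cluster label.
labels = (np.linalg.norm(modes - modes[0], axis=1) > 1.0).astype(int)
```

Because clusters are defined by shared density modes rather than by centroids, the resulting partition tracks the shape and connectivity of the underlying distribution.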


Author(s):  
Artur Ribeiro

Posthumanist approaches in archaeology have given plenty of attention to things over the last decade. This focus on things is a reaction to the over-anthropocentric view of social life advanced by postprocessual archaeologists. Whereas agency more than ten years ago was about how individuals expressed purpose and identity, agency today is about how humans and non-human objects affect one another in a symmetrical manner. Posthumanism has no doubt contributed greatly to new understandings of social reality, but in the process it has also forced archaeologists to sacrifice many topics of interest, namely those involving consciousness and purpose. But is this sacrifice really necessary? This is one of the central problems of Posthumanism: it disallows a compromise between ideas from more conventional social theory (e.g. norms, purpose, practice) and those of posthumanist theory. This paper revisits John Searle's ‘Chinese Room’ and reiterates what this thought-experiment meant for understanding consciousness and purpose. The thought-experiment highlighted the differences between humans and machines and demonstrated that, even if a machine could replicate human purpose, it would still not be considered human because, unlike mechanical processes, human purpose is based on ethics. It was the first step in debunking the computational theory of mind. In light of this thought-experiment, the paper argues that, in a world where things interact with humans, we should think of agency in terms of ethics and keep the focus on humans.

