associative networks
Recently Published Documents


TOTAL DOCUMENTS: 130 (FIVE YEARS: 17)
H-INDEX: 21 (FIVE YEARS: 2)

2021 ◽ Vol 15 ◽ Author(s): Yan Yufik ◽ Raj Malhotra

Air Force research programs envision developing AI technologies that will ensure battlespace dominance through radical increases in the speed of battlespace understanding and decision-making. Over the last half century, advances in AI have been concentrated in machine learning. Recent experimental findings and insights from systems neuroscience, the biophysics of cognition, and other disciplines provide converging results that set the stage for technologies of machine understanding and machine-augmented Situational Understanding. This paper reviews some of the key ideas and results in the literature and outlines new suggestions. We define situational understanding and the distinction between understanding and awareness, consider examples of how understanding, or the lack of it, manifests in performance, and review hypotheses concerning the underlying neuronal mechanisms. Suggestions for further R&D are motivated by these hypotheses and are centered on the notions of Active Inference and Virtual Associative Networks.


2021 ◽ Vol 12 ◽ Author(s): Jennifer H. Coane ◽ Dawn M. McBride ◽ Mark J. Huff ◽ Kai Chang ◽ Elizabeth M. Marsh ◽ ...

The use of list-learning paradigms to explore false memory has revealed several critical findings about the contributions of similarity and relatedness to memory phenomena more broadly. Characterizing the nature of "similarity and relatedness" can inform researchers about factors contributing to memory distortions and about the underlying associative and semantic networks that support veridical memory. Similarity can be defined in terms of semantic properties (e.g., shared conceptual and taxonomic features), lexical/associative properties (e.g., shared connections in associative networks), or structural properties (e.g., shared orthographic or phonological features). We review the effects of these types of similarity on veridical and false memory across studies that manipulated the type of list and its relationship to a non-studied critical item. All of the forms of similarity reviewed here produce reliable error rates, whereas their effects on veridical memory are variable. The results across a variety of paradigms and tests provide partial support for a number of theoretical explanations of false memory phenomena, but none of the theories readily accounts for all of the results.


2021 ◽ Vol 54 (3) ◽ pp. 1354-1365 ◽ Author(s): Irina Mahmad Rasid ◽ Niels Holten-Andersen ◽ Bradley D. Olsen

2021 ◽ Vol 5 ◽ Author(s): Vsevolod Kapatsinski

Constructionist approaches to language propose that the language system is a network of constructions, defined as bidirectional mappings between a complex form and a meaning. This paper critically evaluates the evidence for and against two possible construals of this proposal as a psycholinguistic theory: that direct, bidirectional form-meaning associations play a central role in language comprehension and production, and the stronger claim that they are the only type of association at play. Bidirectional form-meaning associations are argued to be plausible, despite some apparent evidence against bidirectionality. However, form-meaning associations are insufficient to account for some morphological patterns. In particular, there is convincing evidence for productive paradigmatic mappings that are phonologically arbitrary, which cannot be captured by form-meaning mappings alone, without associations between paradigmatically related forms or constructions. Paradigmatic associations are argued to be unidirectional. In addition, subtraction and backformation at first glance require augmenting the associative networks with conditioned operations (rules). However, it is argued that allowing for negative form-meaning associations accommodates subtraction and backformation within the constructionist approach without introducing any additional mechanisms. The interplay of positive and negative form-meaning associations and paradigmatic mappings is exemplified using a previously undescribed morphological construction in Russian, the bez-Adjective construction.
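How a negative form-meaning association can accommodate subtraction is easy to illustrate with a deliberately simplified sketch. The labels and weights below are hypothetical and not taken from the paper: a meaning linked to a form by a negative weight suppresses that form during production, so the output lacks it, with no separate rule mechanism.

```python
from collections import defaultdict

# Hypothetical signed form-meaning associations (illustrative values only).
associations = {
    ("STEM_MEANING", "stem"): 1.0,
    ("MEANING_A", "-affix"): 1.0,    # meaning A is expressed by adding the affix
    ("MEANING_B", "-affix"): -1.0,   # meaning B is expressed by *removing* the affix
}

def produce(meanings, candidate_forms):
    """Score each candidate form by summing signed meaning-form associations;
    only forms with net positive support are produced."""
    scores = defaultdict(float)
    for form in candidate_forms:
        for meaning in meanings:
            scores[form] += associations.get((meaning, form), 0.0)
    return [form for form, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(produce({"STEM_MEANING", "MEANING_A"}, ["stem", "-affix"]))  # ['stem', '-affix']
print(produce({"STEM_MEANING", "MEANING_B"}, ["stem", "-affix"]))  # ['stem'] -- affix suppressed
```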


2021 ◽ Vol 126 (1) ◽ Author(s): Francesca Schönsberg ◽ Yasser Roudi ◽ Alessandro Treves

Soft Matter ◽ 2021 ◽ Author(s): Irina Mahmad Rasid ◽ Changwoo Do ◽ Niels Holten-Andersen ◽ Bradley D. Olsen

An exploration of the effect of sticker clustering on the dynamics of associative polymer networks showed that the trends in rheological relaxation and diffusion with clustering differ from those for uniformly distributed stickers.


Systems ◽ 2020 ◽ Vol 9 (1) ◽ pp. 1 ◽ Author(s): Ismo T. Koponen

Associative knowledge networks are central in many areas of learning and teaching. One key problem in evaluating and exploring such networks is to identify their key items (nodes) and sub-structures (connected sets of nodes), and to compare the roles of those sub-structures. In this study, we suggest an approach for analyzing associative networks in which the analysis is based on spreading activation and on systemic states that correspond to the state of spreading. The method is based on the construction of diffusion propagators as generalized systemic states of the network, used to explore the connectivity of the network, and, subsequently, on a generalized Jensen–Shannon–Tsallis relative entropy (based on Tsallis entropy) in order to compare the states. It is shown that the constructed systemic states provide a robust way to compare the roles of sub-networks in spreading activation. The viability of the method is demonstrated by applying it to recently published network representations of students' associative knowledge regarding the history of science.
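One plausible reading of this procedure is sketched below; the normalization, the diffusion time t, and the entropic index q are my own choices rather than the paper's exact definitions. A diffusion propagator exp(-tL) is built from the network Laplacian L, normalized by its trace into a density-matrix-like systemic state, and two such states are compared with a Jensen-Shannon-style divergence built from the Tsallis entropy of their spectra.

```python
import numpy as np
from scipy.linalg import expm, eigvalsh

def systemic_state(A, t=1.0):
    """Diffusion propagator exp(-t L) of the graph Laplacian, normalized by its trace."""
    L = np.diag(A.sum(axis=1)) - A
    rho = expm(-t * L)
    return rho / np.trace(rho)

def tsallis_entropy(rho, q=1.5):
    """Tsallis entropy of the spectrum of a systemic state."""
    lam = eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return (1.0 - np.sum(lam ** q)) / (q - 1.0)

def js_tsallis(rho1, rho2, q=1.5):
    """Jensen-Shannon-style divergence built from Tsallis entropy."""
    mix = 0.5 * (rho1 + rho2)
    return tsallis_entropy(mix, q) - 0.5 * (tsallis_entropy(rho1, q) + tsallis_entropy(rho2, q))

# Toy comparison of two small associative networks given as adjacency matrices.
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(js_tsallis(systemic_state(A1), systemic_state(A2)))
```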


2020 ◽ Author(s): Francesca Schönsberg ◽ Yasser Roudi ◽ Alessandro Treves

We show that associative networks of threshold linear units endowed with Hebbian learning can operate closer to the Gardner optimal storage capacity than their binary counterparts and even surpass this bound. This is largely achieved through a sparsification of the retrieved patterns, which we analyze for theoretical and empirical distributions of activity. As reaching the optimal capacity via non-local learning rules like back-propagation requires slow and neurally implausible training procedures, our results indicate that one-shot self-organized Hebbian learning can be just as efficient.
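A minimal sketch of the general setup described here follows; the pattern statistics, the covariance normalization, and the use of a k-winners-take-all threshold as a stand-in for global inhibition are my own choices, not the paper's analysis. Sparse patterns are stored in one shot with a Hebbian rule and retrieved with threshold-linear units.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 200, 10, 0.2              # units, stored patterns, coding sparseness

# Sparse binary patterns: a fraction `a` of units is active in each pattern.
patterns = (rng.random((P, N)) < a).astype(float)

# One-shot Hebbian covariance rule; no self-connections.
J = sum(np.outer(xi - a, xi - a) for xi in patterns) / (N * a * (1 - a))
np.fill_diagonal(J, 0.0)

def retrieve(cue, steps=20, k=int(a * N)):
    """Threshold-linear retrieval; global inhibition is approximated by an
    adaptive threshold that keeps roughly k units active (k-winners-take-all)."""
    v = cue.copy()
    for _ in range(steps):
        h = J @ v
        theta = np.partition(h, -k)[-k]      # k-th largest input
        v = np.maximum(0.0, h - theta)
        if v.sum() > 0:
            v *= k / v.sum()                 # keep total activity fixed
    return v

# Cue with a degraded copy of pattern 0 and measure the retrieval overlap.
cue = patterns[0] * (rng.random(N) < 0.7)
v = retrieve(cue)
overlap = v @ patterns[0] / (np.linalg.norm(v) * np.linalg.norm(patterns[0]) + 1e-12)
print(f"cosine overlap with the cued pattern: {overlap:.2f}")
```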

