Distributed Representations
Recently Published Documents


TOTAL DOCUMENTS: 280 (five years: 69)
H-INDEX: 28 (five years: 4)

2021 · Vol 72 · pp. 215-249
Author(s): Anthony Thomas, Sanjoy Dasgupta, Tajana Rosing

Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data. These representations can be combined with simple, neurally plausible algorithms to effect a variety of information processing tasks. HD computing has recently garnered significant interest from the computer hardware community as an energy-efficient, low-latency, and noise-robust tool for solving learning problems. In this review, we present a unified treatment of the theoretical foundations of HD computing with a focus on the suitability of representations for learning.
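To make the style of representation concrete, here is a minimal sketch of one standard HD construction (random bipolar hypervectors, elementwise multiplication as binding, elementwise majority as bundling); the dimensionality and encoding are illustrative and not necessarily the specific scheme analyzed in the review:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; in high dimension, random pairs are nearly orthogonal

def random_hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding via elementwise multiplication (invertible: bind(bind(a, b), b) == a)."""
    return a * b

def bundle(*hvs):
    """Bundling via elementwise majority; the result stays similar to each input."""
    return np.sign(np.sum(hvs, axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the record {color: red, shape: round} as one hypervector.
color, red, shape, round_ = (random_hv() for _ in range(4))
record = bundle(bind(color, red), bind(shape, round_))

# Unbinding a role recovers a noisy version of its filler.
print(cosine(bind(record, color), red))     # well above chance: red is the color
print(cosine(bind(record, color), round_))  # near zero: round is not the color
```

The noise robustness that interests the hardware community follows from the same geometry: flipping a modest fraction of a hypervector's coordinates barely moves it relative to the nearly orthogonal alternatives.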


2021
Author(s): Sarah Solomon, Anna Schapiro

Concepts contain rich structures that support flexible semantic cognition. These structures can be characterized by patterns of feature covariation: certain clusters of features tend to occur in the same items (e.g., feathers, wings, can fly). Existing computational models demonstrate how this kind of structure can be leveraged to slowly learn the distinctions between categories, on developmental timescales. It is not clear whether and how we leverage feature structure to quickly learn a novel category. We thus investigated how the internal structure of a new category is extracted from experience and what kinds of representations guide this learning. We predicted that humans can leverage feature clusters within an individual category to benefit learning and that this relies on the rapid formation of distributed representations. Novel categories were designed with patterns of feature associations determined by carefully constructed graph structures (Modular, Random, and Lattice). In Experiment 1, a feature inference task using verbal stimuli revealed that Modular categories—containing clusters of reliably covarying features—were more easily learned than non-Modular categories. Experiment 2 replicated this effect using visual categories. In Experiment 3, a temporal statistical learning paradigm revealed that this Modular benefit persisted even when category structure was incidental to the task. We found that a neural network model employing distributed representations was able to account for the effects, whereas prototype and exemplar models could not. The findings constrain theories of category learning and of structure learning more broadly, suggesting that humans quickly form distributed representations that reflect coherent feature structure.
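As a toy illustration of the kind of feature covariation a Modular structure induces (cluster sizes and probabilities here are hypothetical, not the paper's graph constructions), the sketch below samples items whose features come mostly from a single cluster, producing the block-structured covariation the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modular structure: 12 binary features in 3 clusters of 4.
n_clusters, per_cluster = 3, 4
n_features = n_clusters * per_cluster

def sample_item(p_within=0.9, p_between=0.1):
    """Features covary within an item: most come from one cluster."""
    c = rng.integers(n_clusters)
    return np.array([
        rng.random() < (p_within if f // per_cluster == c else p_between)
        for f in range(n_features)
    ], dtype=int)

items = np.array([sample_item() for _ in range(1000)])

# The feature-correlation matrix shows the block structure: high
# within-cluster correlations, low between-cluster correlations.
print(np.round(np.corrcoef(items.T), 1))
```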


2021
Author(s): Wenyuan Zeng, Ming Liang, Renjie Liao, Raquel Urtasun

2021
Author(s): Zhenglong Zhou, Dhairyya Singh, Marlie C Tandoc, Anna C Schapiro

Neural representations can be characterized as falling along a continuum, from distributed representations, in which neurons are responsive to many related features of the environment, to localist representations, where neurons orthogonalize activity patterns despite any input similarity. Distributed representations support powerful learning in neural network models and have been posited to exist throughout the brain, but it is unclear under what conditions humans acquire these representations and what computational advantages they may confer. In a series of behavioral experiments, we present evidence that interleaved exposure to new information facilitates the rapid formation of distributed representations in humans. As in neural network models with distributed representations, interleaved learning supports fast and automatic recognition of item relatedness, affords efficient generalization, and is especially critical for inference when learning requires statistical integration of noisy information over time. We use the data to adjudicate between several existing computational models of human memory and inference. The results demonstrate the power of interleaved learning and implicate the use of distributed representations in human inference.
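A minimal toy sketch of why the schedule matters when weights are shared (distributed); this is not the paper's model, just a single linear layer trained with a delta rule on two hypothetical groups of random associations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups of random input-output associations (hypothetical stimuli).
X1, Y1 = rng.normal(size=(20, 10)), rng.normal(size=(20, 4))
X2, Y2 = rng.normal(size=(20, 10)), rng.normal(size=(20, 4))

def train(trials, lr=0.02):
    W = np.zeros((10, 4))  # shared weights: every item uses the same parameters
    for x, y in trials:
        W += lr * np.outer(x, y - x @ W)  # delta-rule update
    return W

reps = 200
group1, group2 = list(zip(X1, Y1)), list(zip(X2, Y2))

blocked = group1 * reps + group2 * reps   # all of group 1, then all of group 2
interleaved = (group1 + group2) * reps
rng.shuffle(interleaved)                  # both groups mixed throughout

for name, trials in [("blocked", blocked), ("interleaved", interleaved)]:
    W = train(trials)
    err = np.mean((X1 @ W - Y1) ** 2)     # retention of the first group
    print(f"{name:12s} group-1 error: {err:.3f}")
```

Blocked training lets the later group overwrite the shared weights that encoded the earlier one (the classic catastrophic-interference pattern), whereas interleaving lets the weights settle on a joint solution; this is the standard computational rationale for the interleaving advantage the experiments probe.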


2021 · pp. 115418
Author(s): Haoqing Wang, Huiyu Mai, Zhi-hong Deng, Chao Yang, Luxia Zhang, ...

2021 · Vol 11 (8) · pp. 3659
Author(s): Ayako Yagahara, Masahito Uesugi, Hideto Yokoi

The Japanese medical device adverse events terminology, published by the Japan Federation of Medical Devices Associations (JFMDA terminology), contains 89 terminology items, each created independently. It is necessary to establish and verify the consistency of these entries and to map them efficiently and accurately, so developing an automatic synonym-detection tool is an important concern. Tools based on edit distances and on distributed representations have achieved good performance in previous studies. The purpose of this study was to identify synonyms in the JFMDA terminology and to evaluate the accuracy of these algorithms. A total of 125 definition-sentence pairs were created from the terminology as baselines. Edit distances (the Levenshtein and Jaro–Winkler distances) and distributed representations (Word2vec, fastText, and Doc2vec) were employed to calculate similarities. Receiver operating characteristic analysis was carried out to evaluate the accuracy of synonym detection. Comparing the algorithms, the Jaro–Winkler distance had the highest sensitivity, Doc2vec with DM had the highest specificity, and the Levenshtein distance had the highest area under the curve. Edit distances and Doc2vec thus make it possible to predict synonyms in the JFMDA terminology with high accuracy.
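A minimal sketch of the similarity-plus-ROC pipeline on made-up sentence pairs (not the JFMDA data); jellyfish, gensim, and scikit-learn are assumed available, and the Doc2vec hyperparameters are placeholders:

```python
# pip install jellyfish gensim scikit-learn
import numpy as np
import jellyfish
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.metrics import roc_auc_score

# Made-up labeled pairs (1 = synonym); the study used 125 definition-
# sentence pairs drawn from the JFMDA terminology.
pairs = [
    ("device stopped working", "device ceased to function", 1),
    ("device stopped working", "patient fell from the bed", 0),
    ("tube became disconnected", "tubing detached from the port", 1),
    ("tube became disconnected", "display showed an error code", 0),
]
labels = [y for _, _, y in pairs]

# Edit-distance similarities (higher = more similar).
jw = [jellyfish.jaro_winkler_similarity(a, b) for a, b, _ in pairs]
lev = [1 - jellyfish.levenshtein_distance(a, b) / max(len(a), len(b))
       for a, b, _ in pairs]

# Doc2vec (DM) similarities: train on the pooled sentences, then
# compare inferred vectors by cosine similarity.
sents = sorted({s for a, b, _ in pairs for s in (a, b)})
docs = [TaggedDocument(s.split(), [i]) for i, s in enumerate(sents)]
model = Doc2Vec(docs, dm=1, vector_size=50, min_count=1, epochs=100)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

d2v = [cosine(model.infer_vector(a.split()), model.infer_vector(b.split()))
       for a, b, _ in pairs]

# ROC analysis: rank each similarity score against the labels.
for name, scores in (("Jaro-Winkler", jw), ("Levenshtein", lev),
                     ("Doc2vec (DM)", d2v)):
    print(name, "AUC:", roc_auc_score(labels, scores))
```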


Author(s): Mostafa Ali Rushdi, Ali Muhammad Rushdi

We utilize the electromagnetically oriented LTI∅ dimensional basis in the matrix solution of dimensional-analysis (DA) problems involving mainly electromagnetic quantities, whether these quantities are lumped or distributed. Representations in the LTI∅ basis (compared with the standard MLTI basis) are more informative and much simpler. Moreover, matrix DA computations employing the LTI∅ basis are more efficient and much less error-prone. Extensive discussions of two demonstrative examples expose the technical details of a novel DA scheme and clarify many important facets of modern dimensional analysis.
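The LTI∅ computations themselves are developed in the paper; as a generic illustration of the matrix method it builds on, here is the corresponding calculation in the standard MLTI basis for a resistor-capacitor discharge, where the null space of the dimensional matrix yields the single dimensionless product:

```latex
% Matrix DA in the standard MLTI basis: variables R (resistance),
% C (capacitance), t (time); rows are exponents of M, L, T, I.
\[
\begin{array}{c|ccc}
  & R & C & t \\ \hline
M &  1 & -1 & 0 \\
L &  2 & -2 & 0 \\
T & -3 &  4 & 1 \\
I & -2 &  2 & 0
\end{array}
\qquad
A\,\mathbf{k}=\mathbf{0} \;\Rightarrow\; \mathbf{k}=(1,\,1,\,-1)^{\mathsf T}
\;\Rightarrow\; \Pi=\frac{RC}{t}\ \text{(dimensionless)}.
\]
```

A change of basis such as LTI∅ alters the rows assigned to each quantity, not the null-space scheme; the abstract's claim is that those LTI∅ representations are simpler and more informative for electromagnetic quantities.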


2021 · Vol 2 (3)
Author(s): Rocío Cabrera Lozoya, Arnaud Baumann, Antonino Sabetta, Michele Bezzi
