distributed representation
Recently Published Documents


TOTAL DOCUMENTS

230
(FIVE YEARS 60)

H-INDEX

23
(FIVE YEARS 3)

2021 ◽  
Vol 11 (22) ◽  
pp. 10786
Author(s):  
Kyuchang Kang ◽  
Changseok Bae

Recent achievements in CNN (convolutional neural network) and DNN (deep neural network) research have enabled many practical applications in computer vision. However, these approaches require the construction of huge training datasets for the learning process. This paper seeks a way to achieve continual learning that does not require costly prior construction of training data, by imitating a biological memory model. We employ SDR (sparse distributed representation) for information processing and for the semantic memory model; SDR is known as a representation model of neuron firing patterns in the neocortex. This paper proposes a novel memory model that reflects remembrance of the morphological semantics of visual input stimuli. The proposed memory model treats the memory process and the recall process separately. First, the memory process converts input visual stimuli to sparse distributed representations, and in this process the morphological semantics of the input stimuli are preserved. Next, the recall process compares the sparse distributed representation of a new input visual stimulus with the remembered sparse distributed representations. Superposition of sparse distributed representations is used to measure similarities. Experimental results using 10,000 images from the MNIST (Modified National Institute of Standards and Technology) and Fashion-MNIST datasets show that the sparse distributed representation of the proposed model efficiently preserves the morphological semantics of the input visual stimuli.
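The superposition-and-overlap idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the vector size, sparsity, and the use of bitwise OR as superposition are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sdr(size=2048, active=40):
    """Generate a random sparse distributed representation (SDR):
    a binary vector with a small fixed number of active bits."""
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active, replace=False)] = 1
    return sdr

def superpose(sdrs):
    """Union (bitwise OR) of several SDRs -- the superposition
    used here as the stored memory trace."""
    return np.bitwise_or.reduce(sdrs)

def overlap(a, b):
    """Similarity = number of shared active bits."""
    return int(np.sum(a & b))

memory = [random_sdr() for _ in range(3)]
trace = superpose(memory)
# A stored pattern overlaps its own trace on all of its active bits...
print(overlap(memory[0], trace))  # 40
# ...while an unrelated SDR overlaps only by chance.
print(overlap(random_sdr(), trace))
```

Because active bits are a tiny fraction of the vector, the chance overlap between unrelated SDRs stays near zero, which is what makes overlap a usable similarity measure.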


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Hrushikesh Bhosale ◽  
Ashwin Lahorkar ◽  
Divye Singh ◽  
Aamod Sane ◽  
Jayaraman Valadi

Author(s):  
Alexander Demidovskij ◽  
Eduard Babkin

Introduction: The construction of integrated neurosymbolic systems is an urgent and challenging task. Building neurosymbolic decision support systems requires new approaches to representing knowledge about a problem situation and to expressing symbolic reasoning at the subsymbolic level.  Purpose: Development of neural network architectures and methods for effective distributed knowledge representation and subsymbolic reasoning in decision support systems, in terms of algorithms that aggregate fuzzy expert assessments to select among alternative solutions. Methods: Representation of fuzzy and uncertain assessments in a distributed form using tensor representations; construction of a trainable neural network architecture for subsymbolic aggregation of linguistic assessments. Results: The study proposes two new methods for representing linguistic assessments in a distributed form. The first is based on the possibility of converting an arbitrary linguistic assessment into a numerical representation: the number is converted into a bit string, and a matrix is then formed that stores the distributed representation of the whole expression used for aggregating the assessments. The second represents the linguistic assessment as a tree and encodes this tree using the method of tensor representations, avoiding the step of translating the linguistic assessment into numerical form and ensuring a lossless transition between symbolic and subsymbolic representations of linguistic assessments. The structural elements of a linguistic assessment are treated as fillers bound to their respective positional roles.
A new subsymbolic method for aggregating linguistic assessments is also proposed, which consists in a trainable neural network module in the form of a Neural Turing Machine. Practical relevance: The results demonstrate how a symbolic algorithm for aggregating linguistic assessments can be implemented by connectionist (subsymbolic) mechanisms, an essential requirement for building distributed neurosymbolic decision support systems.
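The first method (number-to-bit-string conversion, then stacking into a matrix) can be sketched as follows. The bit width, the function names, and the use of integer scores are hypothetical illustration choices; the paper's actual encoding of linguistic assessments is not reproduced here.

```python
import numpy as np

def number_to_bits(x, width=16):
    """Hypothetical encoding: a non-negative integer assessment score
    becomes a fixed-width bit vector (most significant bit first)."""
    return np.array([(x >> i) & 1 for i in reversed(range(width))],
                    dtype=np.uint8)

def expression_matrix(scores, width=16):
    """Stack the bit vectors of all assessments in an aggregation
    expression into one matrix: a distributed representation of
    the whole expression."""
    return np.stack([number_to_bits(s, width) for s in scores])

# Three assessments, already mapped to numbers, become a 3x16 matrix.
M = expression_matrix([3, 7, 12])
print(M.shape)  # (3, 16)
```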


Author(s):  
Hrushikesh Bhosale ◽  
Vigneshwar Ramakrishnan ◽  
Valadi K. Jayaraman

Bacterial virulence can be attributed to a wide variety of factors, including toxins that harm the host. Pore-forming toxins are one class of toxins that confer virulence on bacteria and are a promising target for therapeutic intervention. In this work, we develop a sequence-based machine learning framework for the prediction of pore-forming toxins. For this, we use a distributed representation of the protein sequence, encoded by reduced alphabet schemes based on conformational similarity and the hydropathy index, as input features to Support Vector Machines (SVMs). The choice of conformational similarity and hydropathy indices is based on the functional mechanism of pore-forming toxins. Our methodology achieves about 81% accuracy, indicating that conformational similarity, an indicator of the flexibility of amino acids, along with the hydropathy index can capture the intrinsic features of pore-forming toxins that distinguish them from other types of transporter proteins. An increased understanding of the mechanisms of pore-forming toxins can further contribute to the use of such “mechanism-informed” features, which may increase prediction accuracy further.
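A reduced-alphabet encoding like the one described can be sketched as a k-mer frequency featurizer. The three-letter hydropathy grouping below is a hypothetical example, not the paper's actual alphabet, and the resulting vector is a simple stand-in for the distributed representation fed to the SVM.

```python
from collections import Counter
from itertools import product

# Hypothetical 3-letter hydropathy alphabet: H = hydrophobic,
# P = polar, C = charged (the paper's actual groupings may differ).
HYDRO = {aa: 'H' for aa in 'AVLIMFWYC'}
HYDRO.update({aa: 'P' for aa in 'STNQGP'})
HYDRO.update({aa: 'C' for aa in 'DEKRH'})

def reduce_sequence(seq):
    """Map each amino acid to its reduced-alphabet class."""
    return ''.join(HYDRO[aa] for aa in seq)

def kmer_features(seq, k=2):
    """Normalised frequency of each reduced-alphabet k-mer."""
    reduced = reduce_sequence(seq)
    counts = Counter(reduced[i:i + k] for i in range(len(reduced) - k + 1))
    vocab = [''.join(p) for p in product('HPC', repeat=k)]
    total = max(sum(counts.values()), 1)
    return [counts[v] / total for v in vocab]

# A toy 10-residue sequence yields a 9-dimensional (3^2) feature vector.
print(len(kmer_features('MKTAYIAKQR')))  # 9
```

Collapsing 20 amino acids to 3 classes keeps the feature space small (3^k dimensions) while preserving the physicochemical signal the classifier needs.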


Author(s):  
Takuma Oda ◽  
Shih-Wei Chiu ◽  
Takuhiro Yamaguchi

Abstract Objective This study aimed to develop a semi-automated process to convert legacy data into Clinical Data Interchange Standards Consortium (CDISC) Study Data Tabulation Model (SDTM) format by combining human verification with three methods: data normalization; feature extraction by distributed representation of dataset names, variable names, and variable labels; and supervised machine learning. Materials and Methods Variable labels, dataset names, variable names, and values of legacy data were used as machine learning features. Because most of these data are strings, they were converted to a distributed representation to make them usable as machine learning features. For this purpose, we used the following methods: Gestalt pattern matching, cosine similarity after vectorization by Doc2vec, and vectorization by Doc2vec. We examined five algorithms, namely decision tree, random forest, gradient boosting, neural network, and an ensemble combining the four, to identify the one that could generate the best prediction model. Results The accuracy rate was highest for the neural network, and the distribution of prediction probabilities also showed a clear split between correct and incorrect predictions. By combining human verification with the three methods, we were able to semi-automatically convert legacy data into the CDISC SDTM format. Conclusion By combining human verification with the three methods, we have successfully developed a semi-automated process for converting legacy data into the CDISC SDTM format; this process is more efficient than the conventional fully manual process.
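Of the three string-similarity features mentioned, Gestalt pattern matching (the Ratcliff/Obershelp algorithm) is directly available in Python's standard library. A minimal sketch of using it to match a legacy variable label against candidate SDTM labels, with illustrative label strings that are not from the study's data:

```python
from difflib import SequenceMatcher

def gestalt_similarity(a, b):
    """Gestalt pattern matching (Ratcliff/Obershelp), as implemented
    by Python's difflib -- one of the string-similarity features
    usable for mapping legacy metadata to SDTM targets."""
    return SequenceMatcher(None, a, b).ratio()

# Match a legacy variable label against candidate SDTM labels.
legacy = "Systolic Blood Pressure"
candidates = ["Systolic Blood Pressure",
              "Diastolic Blood Pressure",
              "Heart Rate"]
scores = {c: gestalt_similarity(legacy.lower(), c.lower())
          for c in candidates}
best = max(scores, key=scores.get)
print(best)  # Systolic Blood Pressure
```

In a pipeline like the one described, such similarity scores would be one feature column alongside Doc2vec vectors, with a classifier ranking candidate mappings for human verification.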


Author(s):  
Rui Xue Tang

Existing models extract entity relations only after the two entity spans have been precisely extracted, which limits the performance of relation extraction. Compared with recognizing full entity spans, boundaries have finer granularity and less ambiguity, so they can be detected precisely and incorporated to learn better representations. Motivated by these strengths of boundaries, we propose a boundary-determined neural (BDN) model, which leverages boundaries as task-related cues to predict relation labels. Our model predicts high-quality relation instances via pairs of boundaries, which alleviates the error propagation problem. Moreover, our model fuses boundary-relevant information into the distributed representation, improving its ability to capture semantic and dependency information and increasing the discriminability of the neural network. Experiments show that our model achieves state-of-the-art performance on the ACE05 corpus.
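The boundary-pair idea can be illustrated with a toy enumeration: detected start and end boundaries are paired into candidate spans, and relation instances are then built over pairs of spans. This is only a sketch of the combinatorial structure, not the BDN architecture itself; the function and its inputs are hypothetical.

```python
def candidate_pairs(starts, ends):
    """Pair detected start/end boundary positions into candidate
    entity spans, then build candidate relation instances over
    pairs of distinct spans."""
    spans = [(s, e) for s in starts for e in ends if s <= e]
    return [(a, b) for i, a in enumerate(spans) for b in spans[i + 1:]]

# Starts detected at tokens 0 and 3, ends at tokens 1 and 4.
print(candidate_pairs([0, 3], [1, 4]))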


2021 ◽  
Vol 22 (11) ◽  
pp. 5521
Author(s):  
Lei Deng ◽  
Hui Wu ◽  
Xuejun Liu ◽  
Hui Liu

Predicting in vivo protein–DNA binding sites is a challenging but pressing task in a variety of fields, including drug design and development. Most promoters contain a number of transcription factor (TF) binding sites, but only a small minority have been identified by biochemical experiments, which are time-consuming and laborious. To tackle this challenge, many computational methods have been proposed to predict TF binding sites from DNA sequence. Although previous methods have achieved remarkable performance in the prediction of protein–DNA interactions, there is still considerable room for improvement. In this paper, we present a hybrid deep learning framework, termed DeepD2V, for transcription factor binding site prediction. First, we construct the input matrix from an original DNA sequence and three variant sequences: its inverse, its complement, and its inverse complement. A sliding window of size k with a specific stride is used to obtain the k-mer representation of the input sequences. Next, we use word2vec to obtain a pre-trained k-mer word distributed representation model. Finally, the probability of protein–DNA binding is predicted by a combined recurrent and convolutional neural network. Experimental results on 50 public ChIP-seq benchmark datasets demonstrate the superior performance and robustness of DeepD2V. Moreover, we verify that DeepD2V performs better with the word2vec-based k-mer distributed representation than with one-hot encoding, and that the integrated framework of a convolutional neural network (CNN) and bidirectional LSTM (bi-LSTM) outperforms either the CNN or the bi-LSTM model used alone. The source code of DeepD2V is available in the GitHub repository.
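The preprocessing steps in the abstract (sequence variants plus sliding-window k-mers) can be sketched directly; the resulting k-mer lists are the "words" a word2vec model would be trained on. Function names and defaults are illustration choices, not DeepD2V's actual code.

```python
def kmers(seq, k=3, stride=1):
    """Slide a window of size k with the given stride over a DNA
    sequence to produce its k-mer 'words' for word2vec training."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

COMP = str.maketrans('ACGT', 'TGCA')

def variants(seq):
    """The four input rows described for DeepD2V: the original
    sequence, its inverse, its complement, and its inverse
    complement."""
    comp = seq.translate(COMP)
    return [seq, seq[::-1], comp, comp[::-1]]

seq = "ACGTAC"
print(kmers(seq, k=3))   # ['ACG', 'CGT', 'GTA', 'TAC']
print(variants(seq)[2])  # TGCATG
```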


2021 ◽  
Author(s):  
Keno Juechems ◽  
Andrew Saxe

From a young age, we can select actions to achieve desired goals, infer the goals of other agents, and learn causal relations in our environment through social interactions. Crucially, these abilities are productive and generative: we can impute desires to others that we have never held ourselves. These abilities are often captured by only partially overlapping models, each requiring substantial changes to fit combinations of abilities. Here, in an attempt to unify previous models, we present a neural network underpinned by the linearly solvable Markov Decision Process (LMDP) framework which permits a distributed representation of tasks. The network contains two pathways: one captures the desirability of states, and another encodes the passive dynamics of state transitions in the absence of control. Interactions between pathways are bound by a principle of rational action, enabling generative inference of actions, goals, and causal relations supported by gradient updates to parts of the network.
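The two-pathway structure (state desirability interacting with passive dynamics) follows the standard LMDP formulation, in which the desirability function z satisfies a linear fixed-point equation. A toy numerical sketch on a 4-state chain, with costs and dynamics chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Toy LMDP on a 4-state chain; state 3 is the absorbing goal.
# One pathway holds the passive dynamics P, the other the state
# costs q that encode desirability; they interact multiplicatively.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
q = np.array([1.0, 1.0, 1.0, 0.0])

# Desirability z solves z = exp(-q) * (P @ z); iterate to the fixed point.
z = np.ones(4)
for _ in range(200):
    z = np.exp(-q) * (P @ z)

# Optimal controlled dynamics tilt the passive ones toward high-z states.
u = P * z                                # unnormalised
u = u / u.sum(axis=1, keepdims=True)     # renormalise each row
print(np.round(z, 3))
```

The key property this illustrates is the one the abstract exploits: because the Bellman equation becomes linear in z, desirability and dynamics can live in separate network pathways and be combined by simple multiplicative interactions.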


Author(s):  
Le Yan ◽  
Zhen Qin ◽  
Rama Kumar Pasumarthi ◽  
Xuanhui Wang ◽  
Michael Bendersky
