Adaptive Network Structures for Data/Text Pattern Recognition (Application)

Author(s):  
Emmanuel Buabin

The objective of this chapter is the implementation of neural-based solutions in a real-world context. In particular, a step-wise approach to constructing, training, validating, and testing selected feed-forward (Multi-Layer Perceptron, Radial Basis Function) and recurrent (Recurrent Neural Network) neural-based classification systems is demonstrated. The pre-processing techniques adopted in extracting information from selected datasets are also discussed. In terms of future practical directions, a catalogue of intelligent systems across selected disciplines is outlined. The main contribution of this book chapter is to provide a basic introductory text with less mathematical rigor for the benefit of students, tutors, lecturers, researchers, and/or professionals who wish to delve into foundational (practical) representations of bio-inspired intelligent systems.
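The construct / train / test workflow described above can be sketched, for the Multi-Layer Perceptron case, as a small worked example. Everything here (the one-hidden-layer architecture, the XOR toy dataset, the learning rate, and the iteration count) is an illustrative assumption and not taken from the chapter:

```python
import numpy as np

# Minimal feed-forward MLP illustrating the construct / train / test steps.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Construct: 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out0 = forward(X)
initial_loss = ((out0 - y) ** 2).mean()

# Train: plain batch gradient descent on the squared error
lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta (backprop)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Test: measure the final loss and threshold the outputs into class labels
_, out = forward(X)
final_loss = ((out - y) ** 2).mean()
pred = (out > 0.5).astype(float)
```

A validation step would follow the same pattern, evaluating the loss on held-out data after each training epoch to decide when to stop.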

Author(s):  
Emmanuel Buabin

The objective of this chapter is the introduction of artificial neural networks in the context of directed graphs. In particular, a linkage from graph theory through signal flow graphs to artificial neural networks is provided. Within the context of pattern recognition, a number of feed-forward neural based approaches are introduced and discussed. Motivation leading to the design of each neural method is also given. The main contribution of this book chapter is the provision of a basic introductory text with less mathematical rigor for the benefit of students, tutors, lecturers, researchers, and/or professionals who wish to delve into the foundational representations, concepts, and theory of bio-inspired intelligent systems.


2020
Author(s):  
Dean Sumner
Jiazhen He
Amol Thakkar
Ola Engkvist
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models over non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. When used for training the baseline models, Levenshtein augmentation increased performance over both non-augmented data and conventional SMILES randomization. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement in the pattern-recognition capabilities of the underlying network with respect to molecular motifs.
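The augmentation procedure itself is defined in the paper; as a point of reference, the Levenshtein (edit) distance that gives the method its name can be sketched as follows. The SMILES strings and the similarity ranking shown are illustrative assumptions, not the paper's pipeline:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Illustrative use: rank candidate reactant SMILES by edit-distance
# similarity to a product SMILES (strings chosen for the example only).
product = "CCO"
reactants = ["CC=O", "c1ccccc1", "CCN"]
ranked = sorted(reactants, key=lambda r: levenshtein(r, product))
```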


2021
Author(s):  
Behnaz Ghoraani

Most real-world signals in nature are non-stationary, i.e., their statistics are time-variant. Extracting the time-varying frequency characteristics of a signal is very important for understanding it better, which can be of immense use in applications such as pattern recognition and automated decision-making systems. Extracting meaningful time-frequency (TF) features requires a joint TF analysis. The proposed work is an attempt to develop a generalized TF analysis methodology that exploits the benefits of the TF distribution (TFD) in pattern classification systems, as related to discriminant feature detection and classification. Our objective is to introduce a unique and efficient way of performing non-stationary signal analysis using adaptive and discriminant TF techniques. To fulfill this objective, we first build a novel TF matrix (TFM) decomposition that increases the effectiveness of segmentation in real-world signals. Instantaneous and unique features are extracted from each segment such that they successfully represent the joint TF structure of the signal. Second, building on this technique, two novel discriminant TF analysis methods are proposed to perform improved, discriminant feature selection for any non-stationary signal. The first approach is a new machine learning method that identifies clusters of discriminant features to detect the presence of a discriminative pattern in a given signal and classify it accordingly. The second approach is a discriminant TFM (DTFM) framework, which combines TFM decomposition with discriminant clustering techniques. The developed DTFM analysis automatically identifies the differences between classes as the distinguishing structure, and uses the identified structure to accurately classify signals and locate the discriminant structure within them.
The theoretical properties of the proposed approaches pertaining to pattern recognition and detection are examined in this dissertation. The extracted TF features provide strong and successful characterization and classification of real and synthetic non-stationary signals. The proposed TF techniques facilitate adapting TF quantification to any feature detection technique, automating the identification of discriminatory TF features, and can find applications in many different fields, including biomedical and multimedia signal processing.
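As a rough illustration of what a time-frequency matrix looks like, the sketch below builds a TFM for a synthetic non-stationary signal via a short-time Fourier transform and then applies a generic low-rank decomposition (SVD) as a stand-in. The dissertation's own TFM decomposition and discriminant methods are not reproduced here, and the window length, hop size, and test signal are illustrative choices:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
# Non-stationary signal: 50 Hz in the first half, 200 Hz in the second
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

# Build the TF matrix: Hann-windowed short-time FFT magnitudes
win, hop = 128, 64
frames = [x[i:i + win] * np.hanning(win)
          for i in range(0, len(x) - win + 1, hop)]
tfm = np.abs(np.fft.rfft(frames, axis=1)).T   # rows: frequency, cols: time

# Generic low-rank decomposition of the TFM (SVD as a stand-in);
# a two-tone signal should concentrate energy in the top components.
U, s, Vt = np.linalg.svd(tfm, full_matrices=False)
energy_top2 = (s[:2] ** 2).sum() / (s ** 2).sum()
```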


2021
Vol 20 (5s)
pp. 1-25
Author(s):  
Meiyi Ma
John Stankovic
Ezio Bartocci
Lu Feng

Predictive monitoring, i.e., making predictions about future states and monitoring whether the predicted states satisfy requirements, offers a promising paradigm for supporting the decision making of Cyber-Physical Systems (CPS). Existing work on predictive monitoring mostly focuses on monitoring individual predictions rather than sequential predictions. We develop a novel approach for monitoring sequential predictions generated from Bayesian Recurrent Neural Networks (RNNs) that can capture the inherent uncertainty in CPS, drawing on insights from our study of real-world CPS datasets. We propose a new logic named Signal Temporal Logic with Uncertainty (STL-U) to monitor a flowpipe containing an infinite set of uncertain sequences predicted by Bayesian RNNs. We define STL-U strong and weak satisfaction semantics based on whether all or some sequences contained in a flowpipe satisfy the requirement. We also develop methods to compute the range of confidence levels under which a flowpipe is guaranteed to strongly (weakly) satisfy an STL-U formula. Furthermore, we develop novel criteria that leverage STL-U monitoring results to calibrate the uncertainty estimation in Bayesian RNNs. Finally, we evaluate the proposed approach via experiments with real-world CPS datasets and a simulated smart city case study, which show very encouraging results: the STL-U based predictive monitoring approach outperforms the baselines.
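The all-versus-some distinction behind strong and weak satisfaction can be illustrated with a toy sketch. Here a flowpipe is simplified to a per-step [lower, upper] interval enclosing the predicted sequences, and the requirement is a bounded "always (value < c)"; the paper's full STL-U semantics (nested temporal operators, confidence-level ranges) is much richer than this:

```python
def strongly_satisfies(flowpipe, c):
    """All sequences in the flowpipe stay below c: the upper envelope must."""
    return all(upper < c for (_, upper) in flowpipe)

def weakly_satisfies(flowpipe, c):
    """Some sequence in the flowpipe can stay below c: the lower envelope must."""
    return all(lower < c for (lower, _) in flowpipe)

# Flowpipe over a 4-step horizon (illustrative numbers)
flowpipe = [(0.1, 0.4), (0.2, 0.6), (0.3, 0.7), (0.2, 0.5)]

strong_at_065 = strongly_satisfies(flowpipe, 0.65)  # False: step 3 upper bound is 0.7
weak_at_065 = weakly_satisfies(flowpipe, 0.65)      # True: lower envelope stays below 0.65
```

A monitor built this way reports a requirement violation conservatively (strong semantics) or optimistically (weak semantics), which is the gap the paper's confidence-level computation characterizes.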


2021
Author(s):  
Daniel B. Ehrlich
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized recurrent neural networks we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.


Author(s):  
Emmanuel Buabin

The objective of this chapter is the introduction of reinforcement learning in the context of graphs. In particular, a linkage between reinforcement learning theory and graph theory is established. Within the context of semi-supervised pattern recognition, reinforcement learning theory is introduced and discussed in basic steps. Motivation leading to the development of learning agents with reinforcement capabilities for massive data pattern learning is also given. The main contribution of this book chapter is the provision of a basic introductory text authored with less mathematical rigor for the benefit of students, tutors, lecturers, researchers, and/or professionals who wish to delve into the foundations, representations, concepts, and theory of graph-based semi-supervised intelligent systems.

