System and Circuit Design for Biologically-Inspired Intelligent Learning
Latest Publications


TOTAL DOCUMENTS: 15 (five years: 0)

H-INDEX: 1 (five years: 0)

Published By IGI Global

ISBN: 9781609600181, 9781609600204

Author(s):  
Jörg Bornschein

An FPGA-based coprocessor has been implemented which simulates the dynamics of a large recurrent neural network composed of binary neurons. The design has been used for unsupervised learning of receptive fields. Since the number of neurons to be simulated (>10^4) exceeds the available FPGA logic capacity for direct implementation, a set of streaming processors has been designed. Given the state and activity vectors of the neurons at time t and a sparse connectivity matrix, these streaming processors calculate the state and activity vectors for time t + 1. The operation implemented by the streaming processors can be understood as a generalized form of a sparse matrix-vector product (SpMxV). The largest dataset, the sparse connectivity matrix, is stored and processed in a compressed format to better utilize the available memory bandwidth.
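The generalized SpMxV update can be sketched in software as follows. The CSR layout, the threshold nonlinearity and all names here are illustrative assumptions, not the coprocessor's actual datapath:

```python
import numpy as np

def next_state(indptr, indices, data, activity, threshold=1.0):
    """One step t -> t+1: generalized sparse matrix-vector product over a
    CSR connectivity matrix, followed by a binary threshold (assumed)."""
    n = len(indptr) - 1
    potential = np.zeros(n)
    for i in range(n):  # one row per streaming pass
        for k in range(indptr[i], indptr[i + 1]):
            potential[i] += data[k] * activity[indices[k]]
    return (potential >= threshold).astype(np.uint8)

# Toy network: 3 binary neurons, sparse weights in CSR form.
indptr = [0, 2, 3, 4]            # row pointers
indices = [1, 2, 0, 1]           # column indices of nonzeros
data = [0.6, 0.6, 1.2, 0.4]      # nonzero weights
activity = np.array([1, 1, 0], dtype=np.uint8)
print(next_state(indptr, indices, data, activity))  # -> [0 1 0]
```

Streaming the matrix row by row in compressed form is what lets the working set exceed on-chip capacity: only the row pointers and nonzeros currently in flight need to be resident.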


Author(s):  
Syed Bokhari,
Behrouz Nowrouzian

This work is concerned with the development of a novel diversity-controlled (DC) genetic algorithm (GA) for the design and rapid optimization of frequency-response masking (FRM) digital filters incorporating bilinear lossless discrete-integrator (LDI) IIR interpolation sub-filters. The FRM approach was selected because it lends itself to the design of practical sharp-transition-band digital filters in terms of gradual-transition-band FIR interpolation sub-filters. The proposed DCGA optimization is carried out over the canonical-signed-digit (CSD) multiplier coefficient space, resulting in FRM digital filters that are capable of direct implementation in digital hardware. A novel CSD look-up table (LUT) scheme is developed so that in every stage of DCGA optimization, the IIR interpolation sub-filters constituent in the intermediate and final FRM digital filters are guaranteed to be automatically BIBO stable. The proposed DCGA optimization permits simultaneous optimization of the magnitude-frequency as well as the group-delay frequency response of the desired FRM digital filters. An example is given to illustrate the application of the resulting DCGA optimization to the design of a lowpass FRM digital filter incorporating a fifth-order bilinear-LDI interpolation sub-filter.
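To illustrate the CSD coefficient space the optimization searches over, here is a minimal signed-digit conversion sketch; the function name and the LSB-first layout are assumptions, and the chapter's LUT scheme is not reproduced:

```python
def to_csd(x):
    """Convert an integer to canonical signed-digit (CSD) form: digits in
    {-1, 0, +1}, least-significant first, with no two adjacent nonzeros."""
    digits = []
    while x != 0:
        if x % 2 == 0:
            d = 0
        else:
            d = 2 - (x % 4)  # +1 if x = 1 (mod 4), -1 if x = 3 (mod 4)
        digits.append(d)
        x = (x - d) // 2
    return digits

print(to_csd(7))  # -> [-1, 0, 0, 1], i.e. 7 = 8 - 1
```

CSD coefficients matter for hardware because each nonzero digit costs one shift-and-add (or subtract), and CSD minimizes the nonzero-digit count, so multipliers reduce to a few adders.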


Author(s):  
Turgay Temel

Since biologically-inspired intelligent systems with learning and decision-making capabilities act largely by comparing inputs, the ability to select those inputs that satisfy certain conditions is of great significance in realizing such systems. Moreover, intelligent systems need to operate concurrently so as to reflect the inherent capability of their biological counterparts, such as humans. Owing to difficulties in programmability, storage, and design complexity, analog implementation has been considerably less favored in most computational information processing systems. In the case of biologically-inspired computation, however, analog neural information processing is regarded as an attractive solution because of its suitability for concurrency, its accuracy, and its capability to mimic the natural behavior of biological signals. Taking full advantage of it requires a comprehensive understanding of the trade-offs that can be established between the available design topologies and theoretical requirements. Fuzzy reasoning, on the other hand, offers rule-based inferential manipulation of inputs, expressing the input-output relationship in terms of clauses. Compared with the nonlinear, experience-based operation carried out by artificial neural networks, the realization of rule-based clauses is much easier. This chapter introduces fundamental notions of fuzzy reasoning and fuzzy-based analog design approaches. Rather than resorting to analytical derivation for the architecture of interest, the main focus is on suitability for use, which is expected to indicate possibilities for developing complex intelligent systems. It should be noted that circuits with a selectivity property, i.e., the ability to decide the maximum and/or minimum of their inputs, are useful in fields much broader than inference, and are therefore of great importance in realizing information processing systems.
The chapter presents a very compact selectivity circuit that acts as a decision maker for the minimum of its inputs. In addition, a considerably simple yet elaborate membership structure is introduced; this circuit simplifies fuzzy controller design. Since decision making is mostly performed on a (dis)similarity measure between inputs, e.g., between the input pattern and the label patterns of the respective categories, it is convenient to express proximity in terms of a metric. The chapter also reviews important designs proposed for assessing similarity via the Euclidean distance.
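The membership, min-selectivity and rule-strength operations described above can be sketched as plain functions; the triangular membership shape and all names are illustrative, not the chapter's circuits:

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def min_select(values):
    """Selectivity: pick the index and value of the minimum input."""
    idx = min(range(len(values)), key=lambda i: values[i])
    return idx, values[idx]

def rule_strength(memberships):
    """Zadeh AND: a rule's firing strength is the minimum membership."""
    return min(memberships)

print(min_select([0.4, 0.1, 0.9]))  # -> (1, 0.1)
print(rule_strength([0.3, 0.8]))    # -> 0.3
```

In an analog realization the min operation is a single current- or voltage-mode selector circuit rather than a loop, which is where the compactness the chapter emphasizes comes from.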


Author(s):  
S. Soltic,
N. Kasabov

The human brain has an amazing ability to recognize hundreds of thousands of different tastes. The question is: can we build artificial systems that achieve this level of complexity? Such systems would be useful in biosecurity, the chemical and food industries, security, home automation, etc. The purpose of this chapter is to explore how spiking neurons could be employed for building biologically plausible and efficient taste recognition systems. It presents an approach based on a novel spiking neural network model, the evolving spiking neural network with population coding (ESNN-PC), which is characterized by: (i) adaptive learning, (ii) knowledge discovery and (iii) accurate classification. ESNN-PC is used on a benchmark taste problem where the effectiveness of the information encoding, the quality of the extracted rules and the model’s adaptive properties are explored. Finally, applications of ESNN-PC in robotics and pervasive computing, two areas of growing interest, are suggested.
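Population coding of an analog input with Gaussian receptive fields, as commonly used in evolving spiking neural networks, can be sketched as follows; the parameter names and the time-to-first-spike mapping are generic assumptions, not the exact ESNN-PC encoding:

```python
import numpy as np

def population_encode(x, n_neurons=5, x_min=0.0, x_max=1.0, beta=1.5, t_max=10.0):
    """Encode scalar x into spike times through Gaussian receptive fields:
    stronger activation -> earlier spike (time-to-first-spike coding)."""
    centers = np.linspace(x_min, x_max, n_neurons)
    width = beta * (x_max - x_min) / (n_neurons - 1)
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
    return t_max * (1.0 - activation)  # activation 1 -> spike at t = 0

times = population_encode(0.5)
print(times.argmin())  # -> 2: the neuron centered on x fires first
```

Spreading one analog value (e.g., a sensor reading from a taste stimulus) across several overlapping fields is what makes the subsequent rank-order learning robust to input noise.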


Author(s):  
Ziqian Liu

This chapter presents a theoretical design showing how global robust control is achieved in a class of noisy recurrent neural networks, a promising method for modeling the behavior of biological motor-sensor systems. The approach is developed using differential minimax games, inverse optimality, Lyapunov techniques, and the Hamilton-Jacobi-Isaacs (HJI) equation. To implement the theory of differential games in neural networks, the vector of external inputs is considered as one player and the vector of internal noises (disturbances or modeling errors) as an opposing player. The proposed design achieves global inverse optimality with respect to a meaningful cost functional, global disturbance attenuation, and global asymptotic stability in the absence of disturbances. Finally, numerical examples demonstrate the effectiveness of the proposed design.
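For orientation, a textbook statement of the zero-sum game condition behind such designs, for input-affine dynamics with control u and disturbance w; the value function V, state cost q and attenuation level γ are generic assumptions, not the chapter's exact formulation:

```latex
% dynamics: \dot{x} = f(x) + g_1(x)\,w + g_2(x)\,u
% HJI saddle-point condition (generic sketch):
\min_{u}\,\max_{w}\Big\{
  \nabla V(x)^{\top}\big(f(x) + g_1(x)w + g_2(x)u\big)
  + q(x) + u^{\top}u - \gamma^{2} w^{\top}w \Big\} = 0,
\qquad
u^{*} = -\tfrac{1}{2}\, g_2(x)^{\top}\nabla V(x), \quad
w^{*} = \tfrac{1}{2\gamma^{2}}\, g_1(x)^{\top}\nabla V(x).
```

Here the disturbance plays the maximizing role and the control the minimizing one, which is exactly the "opposing players" framing of the abstract.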


Author(s):  
Paulo H. da F. Silva,
Rossana M. S. Cruz,
Adaildo G. D’Assunção

This chapter describes new artificial neural network (ANN) neuromodeling techniques and natural optimization algorithms for the electromagnetic modeling and optimization of nonlinear devices and circuits. The neuromodeling techniques presented are based on single-hidden-layer feedforward neural network configurations, which are trained by the resilient back-propagation (Rprop) algorithm to solve the modeling learning tasks associated with the device or circuit under analysis. Modular configurations of these feedforward networks and optimal neural networks are also presented, considering new activation functions for artificial neurons. In addition, some natural optimization algorithms are described, such as the continuous genetic algorithm (GA), a proposed improved GA and particle swarm optimization (PSO). These natural optimization algorithms are blended with multilayer perceptron (MLP) artificial neural network models for the fast and accurate resolution of optimization problems. Examples of applications are presented, including nonlinear RF/microwave devices and circuits such as transistors, filters and antennas.
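A minimal sketch of one resilient back-propagation (Rprop) step, the per-weight adaptive step-size rule mentioned above; this is the Rprop- variant with conventional default hyperparameters, not necessarily the chapter's exact settings:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Rprop- update: per-weight steps grow when successive gradients
    agree in sign and shrink (with the update skipped) when they disagree."""
    agree = grad * prev_grad
    step = np.where(agree > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(agree < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(agree < 0, 0.0, grad)  # Rprop-: no move after a sign flip
    return w - np.sign(grad) * step, grad, step

# Minimize f(w) = w^2 from w = 4 using only gradient signs.
w, prev, step = np.array([4.0]), np.zeros(1), np.array([0.5])
for _ in range(20):
    w, prev, step = rprop_step(w, 2 * w, prev, step)
```

Because Rprop uses only the sign of the gradient, it is insensitive to the badly scaled error surfaces that electromagnetic device models often produce.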


Author(s):  
N. Medrano ◽  
G. Zatorre ◽  
M. T. Sanz ◽  
B. Calvo ◽  
S. Celma

This chapter presents the suitability, development and implementation of programmable analogue artificial neural networks for sensor conditioning in embedded systems. The use of analogue rather than digital electronics, motivated by the size and power constraints of these applications, is discussed. The performance of an ad-hoc analogue architecture is evaluated and its characteristics are analyzed. We verify its low sensitivity to undesired effects, such as component mismatch, owing to the capability of selecting and programming the proper weights for a given task. In addition, a brief discussion is offered on the selection of perturbative algorithms instead of classical error back-propagation techniques for weight tuning. At the end of the chapter, we show the main characteristics of the proposed arithmetic cells implemented in a low-cost CMOS technology.
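A perturbative weight-tuning step can be sketched as a finite-difference update, one flavor of the perturbative algorithms mentioned above; the names and constants are illustrative, and the chapter's exact algorithm may differ:

```python
import numpy as np

def weight_perturbation_step(w, loss_fn, lr=0.1, delta=1e-3):
    """Perturbative tuning: estimate each weight's gradient from the loss
    change under a small perturbation; no error back-propagation needed."""
    base = loss_fn(w)
    grad_est = np.zeros_like(w)
    for i in range(len(w)):
        pert = np.zeros_like(w)
        pert[i] = delta
        grad_est[i] = (loss_fn(w + pert) - base) / delta
    return w - lr * grad_est

# Drive a toy quadratic loss toward its minimum.
w = np.array([1.0, -2.0])
loss = lambda v: float(np.sum(v ** 2))
for _ in range(30):
    w = weight_perturbation_step(w, loss)
```

Perturbative methods suit analog hardware because they need only a measurable scalar loss at the chip output, not an exact backward model of mismatched analog cells.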


Author(s):  
Frank van der Velde

This chapter reviews research into human and animal forms of learning. It concentrates on two forms of learning in particular. The first is conditioning. The study of conditioning constitutes the first example of experimental research on learning. At first, it seemed to corroborate the view that learning consists of establishing associations. This form of learning was proposed by the early empiricists. The notion of associative learning influenced the emergence of behaviorism, which used conditioning to account for all forms of human and animal behavior. More recent research, however, has shown that conditioning is a more complex form of learning, related to propositional learning. This makes conditioning important for the study of the mechanisms of other, more complex, forms of propositional learning, as found in language and reasoning. The second form of learning reviewed here is visual learning. The study of this form of learning is important for understanding visual processing. And it is important for investigating the neural mechanisms of learning, given the availability of animal models of visual processing.


Author(s):  
Damien Coyle ◽  
Girijesh Prasad ◽  
Martin McGinnity

This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN’s computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it has been modified, a record is kept of each neuron’s firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes and to ensure performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed where recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from the data. In contrast to pure feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of a time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed where the recurrent feedback weight is updated online (not iteratively), in proportion to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN’s prediction capacity under certain constraints.
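The online, activity-proportional feedback-weight update described above might be sketched as follows; this scalar version, the gain gamma and the additive update form are all assumptions for illustration, not the chapter's actual rule:

```python
def recurrent_fuzzy_output(firing, prev_output, feedback_w, gamma=0.01):
    """Hypothetical layer-3 recurrent neuron: the output blends the current
    firing strength with the previous output through a feedback weight that
    is updated online, in proportion to the firing activity."""
    output = firing + feedback_w * prev_output
    feedback_w += gamma * firing  # one-shot online update, not iterative
    return output, feedback_w

out, fw = recurrent_fuzzy_output(firing=0.5, prev_output=0.2, feedback_w=0.1)
```

The key property the sketch preserves is that the weight update is a single closed-form step per sample rather than an iterative optimization, which is what keeps the evolving network's training time bounded.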


Author(s):  
Benjamin I. Rapoport,
Rahul Sarpeshkar

Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such architectures decode raw neural data to obtain direct motor control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain–machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We demonstrate that the algorithm is suitable for decoding both local field potentials and mean spike rates. We also provide experimental validation of our system, decoding discrete reaching decisions from neuronal activity in the macaque parietal cortex, and decoding continuous head direction trajectories from cell ensemble activity in the rat thalamus. We further describe a method of mapping the algorithm to a highly parallel circuit architecture capable of continuous learning and real-time operation. Circuit simulations of a subthreshold analog CMOS instantiation of the architecture reveal that its performance is comparable to the predicted performance of our decoding algorithm for a system decoding three control parameters from 100 neural input channels at microwatt levels of power consumption. While the algorithm and decoding architecture are suitable for analog or digital implementation, we indicate how a micropower analog system trades some algorithmic programmability for reductions in power and area consumption that could facilitate implantation of a neural decoder within the brain. We also indicate how our system can compress neural data more than 100,000-fold, greatly reducing the power needed for wireless telemetry of neural data.
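The bank of adaptive linear filters tuned online can be sketched with a plain LMS update; this is a generic stand-in, and the kernel-based synaptic filters and the analog circuit itself are not modeled:

```python
import numpy as np

def lms_decode_step(weights, x, target, mu=0.05):
    """One online LMS update of a linear decoder: neural features x
    (e.g., binned spike rates) -> one control parameter."""
    y = float(weights @ x)
    weights = weights + mu * (target - y) * x
    return y, weights

# Learn to map a fixed feature vector onto a target control value.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])
for _ in range(60):
    y, w = lms_decode_step(w, x, target=1.0)
```

Because the update is local and multiply-accumulate in form, each filter tap maps naturally onto a small analog cell, which is what makes the highly parallel micropower circuit mapping possible.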

