Exploring Neural Network Hidden Layer Activity Using Vector Fields

Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 426
Author(s):  
Gabriel D. Cantareira ◽  
Elham Etemad ◽  
Fernando V. Paulovich

Deep Neural Networks are known for impressive results in a wide range of applications and have driven many technological advances over the past few years. However, debugging and understanding a neural network model's inner workings is a complex task, as several parameters and variables are involved in every decision. Multidimensional projection techniques have been successfully adopted to display neural network hidden layer outputs in an explainable manner, but comparing different outputs often means overlapping projections or viewing them side by side, which makes it hard for users to follow the flow of data. In this paper, we introduce a novel approach for comparing projections obtained from multiple stages of a neural network model and visualizing differences in how the data is perceived. Changes between projections are transformed into trajectories that, in turn, generate vector fields representing the general flow of information. This representation can then be used to create layouts that highlight new information about abstract structures identified by neural networks.
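The core idea of the abstract, turning per-point trajectories between two projections into a vector field, can be sketched as follows. This is a simplified illustration under assumed design choices (straight-line displacements, averaging into a regular grid), not the paper's actual pipeline:

```python
import numpy as np

def trajectory_vector_field(proj_a, proj_b, grid_size=10):
    """Average the displacement of each point between two 2-D
    projections (layer A -> layer B) into a coarse grid, yielding
    a vector field that summarizes the overall flow of points.
    Grid binning and simple averaging are illustrative assumptions."""
    disp = proj_b - proj_a                  # one trajectory per sample
    lo = proj_a.min(axis=0)
    hi = proj_a.max(axis=0)
    # Bin each trajectory's start point into a grid cell.
    idx = ((proj_a - lo) / (hi - lo + 1e-12) * (grid_size - 1)).astype(int)
    field = np.zeros((grid_size, grid_size, 2))
    counts = np.zeros((grid_size, grid_size, 1))
    for (i, j), d in zip(idx, disp):
        field[i, j] += d
        counts[i, j] += 1
    return field / np.maximum(counts, 1)    # mean displacement per cell
```

Cells with no samples keep a zero vector; in practice one would smooth or interpolate the field before rendering it as a flow layout.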

2012 ◽  
Vol 241-244 ◽  
pp. 2055-2058
Author(s):  
Jia Xuan Yang

Over the last decade, neural networks have found application in a wide range of areas, from business and commerce to data mining and service systems. This paper constructs a new model, based on extension theory and neural networks, to forecast ship transportation. The new network combines extension theory with a neural network: it uses an extension distance to measure the similarity between data points and cluster centers and to filter out useless data, and then applies a neural network to forecast. A test example predicting ship transportation verifies the effectiveness and applicability of the novel extension neural network. Compared with other forecasting techniques, particularly other neural networks, the extension neural network adapts to significant new information and offers a simpler structure, shorter learning times, and higher accuracy.
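The extension distance mentioned above measures how far a value lies from a cluster's characteristic interval. A common formulation from the extension neural network literature is sketched below; the paper's exact variant may differ:

```python
def extension_distance(x, a, b):
    """Extension distance between a value x and the interval [a, b]
    with center z = (a + b) / 2. Values inside the interval score
    below 1, the boundary scores exactly 1, and values outside
    score above 1, so thresholding at 1 separates members from
    outliers. One standard formulation; shown as an assumption."""
    z = (a + b) / 2.0
    half = (b - a) / 2.0
    return abs(x - z) / half if half else abs(x - z)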


2020 ◽  
Vol 8 (4) ◽  
pp. 469
Author(s):  
I Gusti Ngurah Alit Indrawan ◽  
I Made Widiartha

Artificial Neural Networks (ANNs) are a branch of artificial intelligence often used to solve problems involving grouping and pattern recognition. This research classifies the Letter Recognition dataset using an Artificial Neural Network whose weights are optimized with the Artificial Bee Colony algorithm. The best classification accuracy obtained in this study was 92.85%, using 4 hidden layers with 10 neurons each.
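For a population-based optimizer such as Artificial Bee Colony to tune an ANN, the network weights must be packed into one flat vector that each candidate solution represents. A minimal sketch of such a forward pass follows; layer sizes here mirror the paper's 16-feature, 26-class setting but are otherwise illustrative:

```python
import numpy as np

def mlp_forward(x, weights, sizes):
    """Forward pass of an MLP whose parameters live in a single flat
    vector -- the representation an optimizer like Artificial Bee
    Colony would evolve. tanh is used at every layer for simplicity;
    the paper's exact activations are not specified here."""
    off = 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = weights[off:off + n_in * n_out].reshape(n_in, n_out)
        off += n_in * n_out
        b = weights[off:off + n_out]
        off += n_out
        x = np.tanh(x @ W + b)
    return x
```

An ABC loop would score each flat vector by classification accuracy and let employed/onlooker/scout bees perturb the best candidates.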


2005 ◽  
Vol 128 (4) ◽  
pp. 773-782 ◽  
Author(s):  
H. S. Tan

The conventional approach to neural network-based aircraft engine fault diagnostics has mainly relied on multilayer feed-forward systems with sigmoidal hidden neurons trained by back propagation, as well as radial basis function networks. In this paper, we explore two novel approaches to the fault-classification problem: (i) Fourier neural networks, which combine the approximation capability of multidimensional Fourier transforms with gradient-descent learning, and (ii) a class of generalized single hidden layer networks (GSLN), which self-structure via Gram-Schmidt orthonormalization. Using a simulation program for the F404 engine, we generate steady-state engine parameters corresponding to a set of combined two-module deficiencies and require the various neural networks to classify the multiple faults. We show that, compared to the conventional network architecture, the Fourier neural network exhibits stronger noise robustness and the GSLNs converge considerably faster.
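The Gram-Schmidt orthonormalization by which the GSLN self-structures is the classical procedure below. This is a generic sketch of the mathematical step, not the paper's network code:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt orthonormalization: subtract each
    vector's projection onto the basis built so far, then normalize.
    Near-zero remainders (linearly dependent vectors) are dropped,
    which is how a self-structuring network would prune redundant
    hidden units."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:
            basis.append(w / norm)
    return np.array(basis)
```

In a GSLN-style setting, the "vectors" would be candidate hidden-unit response vectors over the training set, so orthonormalization selects units that add genuinely new information.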


2008 ◽  
Vol 392-394 ◽  
pp. 891-897
Author(s):  
G.Q. Shang ◽  
C.H. Sun ◽  
X.F. Chen ◽  
J.H. Du

Fused deposition modeling (FDM) has been widely applied in complex part manufacturing, rapid tooling, and related areas. The precision of an FDM prototype is affected by many factors, making the process difficult to capture with a precise mathematical model. This paper proposes a novel approach: a BP neural network model for predicting FDM prototype precision. First, by analyzing the effect of each factor on prototyping precision, key parameters were selected as the feature inputs of the BP neural network. Then, the dimensions of the input layer and the hidden layer were fixed according to practical conditions, determining the model structure. Finally, the network was trained on a large set of experimental data, yielding a BP neural network model for predicting FDM prototype precision. The results show that the prediction error can be controlled within 10%, demonstrating excellent precision-prediction capability.
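The model described is a standard one-hidden-layer BP regression network mapping process parameters to a precision measure. A minimal sketch follows; the hidden size, learning rate, and epoch count are illustrative, not the paper's values:

```python
import numpy as np

def train_bp(X, y, hidden=6, lr=0.1, epochs=2000, seed=0):
    """Minimal one-hidden-layer backpropagation regression network:
    tanh hidden layer, linear output, plain gradient descent on
    mean squared error. A generic sketch of the BP scheme the
    paper applies to FDM process parameters."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden activations
        out = h @ W2 + b2                 # linear output
        err = out - y                     # prediction error
        # Backpropagate the error through both layers.
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2
```

In the paper's setting, X would hold the selected process parameters (layer thickness, speed, temperature, and so on) and y the measured dimensional error.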


2006 ◽  
Vol 71 (11) ◽  
pp. 1207-1218
Author(s):  
Dondeti Satyanarayana ◽  
Kamarajan Kannan ◽  
Rajappan Manavalan

Simultaneous estimation of all drug components in a multicomponent analgesic dosage form using artificial neural network calibration models with UV spectrophotometry is reported as a simple alternative to using separate models for each component. A novel calibration approach using a compound spectral dataset derived from three spectra of each component is described. The spectra of mefenamic acid and paracetamol were recorded at several concentrations within their linear range and used to compute a calibration mixture over the wavelength range 220 to 340 nm. Neural networks trained by the Levenberg-Marquardt algorithm were used to build and optimize the calibration models in the MATLAB® Neural Network Toolbox and were compared with a principal component regression model. The calibration models were thoroughly evaluated at several concentration levels using 104 spectra obtained from 52 synthetic binary mixtures prepared using orthogonal designs. The optimized model showed sufficient robustness even when the calibration sets were constructed from a different set of pure spectra of the components. The simultaneous prediction of both components by a single neural network with the suggested calibration approach was successful. The model could accurately estimate the drugs, with satisfactory precision and accuracy, in tablet dosage forms with no interference from excipients, as indicated by the results of a recovery study.
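The principal component regression baseline the neural models are compared against can be sketched compactly: project the spectra onto their top principal components, then regress concentrations on the scores. A generic implementation, not the paper's code:

```python
import numpy as np

def pcr_fit(X, Y, n_components=3):
    """Principal component regression: center spectra X, take the
    leading right singular vectors as loadings, compute scores, and
    regress (centered) concentrations Y on them by least squares.
    Returns a predictor for new spectra."""
    mu_x = X.mean(axis=0)
    mu_y = Y.mean(axis=0)
    Xc = X - mu_x
    # SVD yields the principal directions of the spectral matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                 # loadings
    T = Xc @ P                              # scores
    B, *_ = np.linalg.lstsq(T, Y - mu_y, rcond=None)
    return lambda Z: (Z - mu_x) @ P @ B + mu_y
```

For a binary mixture obeying Beer's law, two components suffice to recover the concentrations exactly in the noise-free case, which is why PCR is a natural baseline for the two-drug calibration here.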


2008 ◽  
Vol 20 (11) ◽  
pp. 2757-2791 ◽  
Author(s):  
Yoshifusa Ito

We have constructed one-hidden-layer neural networks capable of approximating polynomials and their derivatives simultaneously. Generally, optimizing neural network parameters trained in later steps of BP training is more difficult than optimizing those trained in the first step. Taking this into account, we suppressed the number of parameters of the former type. We measure the degree of approximation both in the uniform norm on compact sets and in the Lp-norm on the whole space with respect to probability measures.
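The flavor of one-hidden-layer polynomial approximation can be illustrated numerically: fix the inner (hard-to-train) parameters at random and solve only for the output weights by least squares. This is a loose numerical illustration, not Ito's constructive proof:

```python
import numpy as np

def fit_one_hidden(f, n_hidden=20, seed=0):
    """Approximate f on [-1, 1] with a one-hidden-layer tanh network
    whose inner weights and biases are fixed at random; only the
    output weights are solved, mirroring the idea that first-step
    parameters are the easy ones to optimize. Feature count and
    scales are illustrative."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1, 1, 200)
    W = rng.normal(0, 2, n_hidden)
    b = rng.normal(0, 1, n_hidden)
    H = np.tanh(np.outer(x, W) + b)          # hidden activations
    c, *_ = np.linalg.lstsq(H, f(x), rcond=None)
    return x, H @ c
```

Even a modest number of fixed tanh units approximates a low-degree polynomial such as x² closely on a compact interval, consistent with the uniform-norm setting of the abstract.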


2021 ◽  
Vol 23 (6) ◽  
pp. 317-326
Author(s):  
E.A. Ryndin ◽  
N.V. Andreeva ◽  
V.V. Luchinin ◽  
K.S. Goncharov ◽  
...  

In the current era, the design and development of artificial neural networks that exploit the architecture of the human brain have evolved rapidly. Artificial neural networks effectively solve a wide range of tasks common to artificial intelligence, involving data classification and recognition, prediction, forecasting, and adaptive control of object behavior. The biologically inspired principles underlying ANN operation offer certain advantages over the conventional von Neumann architecture, including unsupervised learning, architectural flexibility, adaptability to environmental change, and high performance at significantly reduced power consumption thanks to heavily parallel and asynchronous data processing. In this paper, we present the circuit design of the main functional blocks (neurons and synapses) intended for a hardware implementation of a perceptron-based feedforward spiking neural network. As the third generation of artificial neural networks, spiking neural networks process data using spikes, which are discrete events (or functions) occurring at points in time. Neurons in spiking neural networks emit precisely timed spikes and communicate with each other via spikes transmitted through synaptic connections (synapses) with adaptive, scalable weights. One prospective approach to emulating synaptic behavior in hardware-implemented spiking neural networks is to use non-volatile memory devices with analog conduction modulation (memristive structures). Here we propose circuit designs for functional analogues of a memristive structure that mimic synaptic plasticity, together with pre- and postsynaptic neurons, which could be used to develop spiking neural network architectures with different training algorithms, including the spike-timing-dependent plasticity learning rule. Two different electronic synapse circuits were developed.
The first is an analog synapse that uses a photoresistive optocoupler to provide the tunable conductivity needed to emulate synaptic plasticity. The second is a digital synapse, in which the synaptic weight is stored as a digital code that is converted directly into conductivity (without a digital-to-analog converter or photoresistive optocoupler). We present prototyping results for the developed circuits of electronic synapse analogues and pre- and postsynaptic neurons, along with a study of their transient processes. The developed approach could provide a basis for ASIC designs of spiking neural networks based on CMOS (complementary metal oxide semiconductor) technology.
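The spike-timing-dependent plasticity rule such synapse circuits are meant to implement can be stated compactly: pre-before-post spiking strengthens the weight, the reverse order weakens it, with exponential decay in the timing gap. The constants below are illustrative, not taken from the paper's circuits:

```python
import math

def stdp_update(w, dt, a_plus=0.05, a_minus=0.055, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pairwise STDP rule: dt = t_post - t_pre in milliseconds.
    A positive dt (pre fired first) potentiates the weight, a
    negative dt depresses it, and the result is clipped to the
    allowed conductance range -- the role the tunable-conductivity
    synapse plays in hardware."""
    if dt > 0:                                # pre before post
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                              # post before pre
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)
```

In the analog synapse, the clipped weight would correspond to the optocoupler's conductance; in the digital synapse, to the stored code.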


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANNs) are widely used for classification, and the training algorithm most commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term. The objective of this work is to investigate the performance of 12 different BP algorithms under variations in these network parameter values. The algorithms were evaluated with different training and testing samples drawn from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.
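A study of this kind sweeps combinations of the listed network parameters. The enumeration itself is a one-liner with the standard library; the parameter names and values below are illustrative placeholders, not the paper's actual grid:

```python
from itertools import product

def parameter_grid(**options):
    """Yield every combination of the supplied network-parameter
    options as a dict, the way a BP parameter study enumerates
    configurations to train and score."""
    keys = list(options)
    for values in product(*options.values()):
        yield dict(zip(keys, values))

# Hypothetical grid: 2 x 3 x 2 = 12 configurations to evaluate.
grid = list(parameter_grid(hidden_layers=[1, 2],
                           neurons=[5, 10, 20],
                           learning_rate=[0.01, 0.1]))
```

Each resulting dict would parameterize one training run per BP variant and dataset, with accuracy recorded for comparison.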


Author(s):  
Juan R. Rabuñal Dopico ◽  
Daniel Rivero Cebrian ◽  
Julián Dorado de la Calle ◽  
Nieves Pedreira Souto

The world of Data Mining (Cios, Pedrycz & Swiniarrski, 1998) is in constant expansion. New information is obtained from databases thanks to a wide range of techniques, each applicable to a particular set of domains and each with its own advantages and drawbacks. The Artificial Neural Networks (ANNs) technique (Haykin, 1999; McCulloch & Pitts, 1943; Orchad, 1993) allows us to solve complex problems in many disciplines (classification, clustering, regression, etc.) and presents a series of advantages that make it a very powerful technique, easily adapted to any environment. The main drawback of ANNs, however, is that they cannot explain what they learn or what reasoning was followed to obtain their outputs, which means they cannot be used in many environments in which this reasoning is essential.

