General Adaptive Transfer Functions Design for Volume Rendering by Using Neural Networks

Author(s):  
Liansheng Wang ◽  
Xucan Chen ◽  
Sikun Li ◽  
Xun Cai

2019 ◽  
Vol 11 (4) ◽  
pp. 1 ◽  
Author(s):  
Tobias de Taillez ◽  
Florian Denk ◽  
Bojana Mirkovic ◽  
Birger Kollmeier ◽  
Bernd T. Meyer

Different linear models have been proposed to establish a link between an auditory stimulus and the neurophysiological response obtained through electroencephalography (EEG). We investigate whether non-linear mappings can be modeled with deep neural networks trained on continuous speech envelopes and EEG data obtained in an auditory attention two-speaker scenario. An artificial neural network was trained to predict the EEG response related to the attended and unattended speech envelopes. After training, the properties of the DNN-based model are analyzed by measuring the transfer function between input envelopes and predicted EEG signals, using click-like stimuli and frequency sweeps as input patterns. The sweep responses make it possible to separate the linear and nonlinear response components, also with respect to attention. The responses from the model trained on normal speech resemble event-related potentials, even though the DNN was not trained to reproduce such patterns. These responses are modulated by attention: we obtain significantly lower amplitudes at latencies of 110 ms, 170 ms and 300 ms after stimulus presentation for unattended compared to attended processing. The comparison of linear and nonlinear components indicates that the largest contribution arises from linear processing (75%), while the remaining 25% is attributed to nonlinear processes in the model. Furthermore, a spectral analysis showed a stronger 5 Hz component in the modeled EEG for attended than for unattended predictions. The results indicate that the artificial neural network produces responses consistent with recent findings and offers a new approach for quantifying model properties.
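The abstract does not give the network architecture, sampling rate or input window, so the following Python sketch is purely illustrative: a toy feedforward model stands in for the trained envelope-to-EEG DNN, and it is probed with a click-like stimulus and a frequency sweep in the spirit of the paper's analysis. FS, WIN, the layer sizes and the sweep parameters are all assumptions.

```python
# Hypothetical sketch: probe a trained envelope->EEG model with click-like
# and sweep stimuli to estimate its transfer function. Architecture and
# window length are assumptions, not the authors' exact setup.
import numpy as np
import torch
import torch.nn as nn

FS = 64          # assumed sampling rate of envelope/EEG in Hz
WIN = 27         # assumed input context window (~420 ms)

model = nn.Sequential(          # toy stand-in for the trained DNN
    nn.Linear(WIN, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def predict_eeg(envelope: np.ndarray) -> np.ndarray:
    """Slide the context window over the envelope and predict EEG samples."""
    frames = np.lib.stride_tricks.sliding_window_view(envelope, WIN)
    with torch.no_grad():
        out = model(torch.tensor(frames, dtype=torch.float32))
    return out.squeeze(-1).numpy()

# Click-like stimulus: a single unit impulse in an otherwise silent envelope.
# In the paper, the resulting model response resembles an ERP.
click = np.zeros(4 * FS, dtype=np.float32)
click[FS] = 1.0
impulse_response = predict_eeg(click)

# Frequency sweep: a linear system responds only at the instantaneous sweep
# frequency, so sweep responses help separate linear from nonlinear components.
t = np.arange(4 * FS) / FS
sweep = np.sin(2 * np.pi * (1 + 2 * t) * t).astype(np.float32)
sweep_response = predict_eeg(sweep)
```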


2020 ◽  
Vol 48 (1) ◽  
pp. 366-377 ◽  
Author(s):  
Yeşim Benal ÖZTEKİN ◽  
Alper TANER ◽  
Hüseyin DURAN

The present study investigated the possible use of artificial neural networks (ANN) to classify five chestnut (Castanea sativa Mill.) varieties. For chestnut classification, back-propagation neural networks were built on the basis of physical and mechanical parameters. Seven physical and mechanical characteristics of chestnut (geometric mean diameter, sphericity, nut volume, surface area, shell thickness, shearing force and strength) were determined. These characteristics were statistically different across varieties and could therefore be used for classification. The developed ANN model has a 7-(5-6)-1 design: 7 inputs, two hidden layers (with 5 and 6 neurons) and one output. Tansig transfer functions were used in both hidden layers, and a linear transfer function in the output layer. For training, the ANN model achieved an R² of 0.99999 and an RMSE of 0.000083; for testing, R² was 0.99999 and RMSE was 0.00031. The average error between values predicted by the ANN model and the measured values was 0.011%. The ANN results agreed closely with the measured data, indicating that the model can classify chestnut varieties quickly and reliably.
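Only the topology (7-(5-6)-1), the tansig/linear transfer functions and the variety count are given in the abstract, so the sketch below, written in PyTorch with synthetic placeholder data, is a plausible reading rather than the authors' exact model; the feature values and training hyperparameters are assumptions.

```python
# A minimal sketch of the reported 7-(5-6)-1 topology: 7 inputs (the seven
# physical/mechanical features), hidden layers of 5 and 6 neurons with tansig
# (tanh) activations, and one linear output encoding the variety as a numeric
# class code. Data here is synthetic; the real measurements are not given.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(7, 5), nn.Tanh(),   # hidden layer 1, tansig
    nn.Linear(5, 6), nn.Tanh(),   # hidden layer 2, tansig
    nn.Linear(6, 1),              # linear output (purelin)
)

# Synthetic placeholder data: 100 nuts x 7 features, variety codes 1..5.
X = torch.randn(100, 7)
y = torch.randint(1, 6, (100, 1)).float()

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Classify by rounding the continuous output to the nearest variety code.
pred = model(X).round().clamp(1, 5)
```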


2002 ◽  
Vol 8 (3) ◽  
pp. 270-285 ◽  
Author(s):  
J. Kniss ◽  
G. Kindlmann ◽  
C. Hansen

2004 ◽  
Vol 14 (05) ◽  
pp. 1549-1558 ◽  
Author(s):  
FRANK HOPPENSTEADT

Important components of neural networks are input synapses, action potential generators and output synapses. Rather than modeling a whole neuron in terms of a few ionic channels, or as having Hodgkin–Huxley, Morris–Lecar or FitzHugh–Nagumo dynamics, we describe a neuron's action potential generator (APG). An APG may be located at the hillock region at the base of an axon or at another specific region of a cell. We model it using bifurcation theory, based on observations by A. F. Hodgkin about membrane excitability. The result is a simplified model that leads us to view a neural network as comprising input and output synapses (electrical or chemical) that interconnect APGs. These centers of activity are coupled by transfer functions from input synapses to an APG and from an APG to output synapses; the transfer functions account for the time delays and signal attenuation introduced by the input and output structures. While this falls far short of a complete biophysical model of specific neurons in a network, it is consistent with empirical data, easily formulated, and analytically tractable, and computer simulations based on it are straightforward. One outcome is a precise description of the cumulative distribution function (CDF) of action potentials. Since records of cell firing amount to collections of CDFs, the model is formulated in terms of a variable that is accessible to experimental observation. This methodology is applied here to describe bursting neural circuits and embedded loop networks similar to those occurring in the basal ganglia.
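As a hedged illustration of this style of model: near a saddle-node-on-invariant-circle bifurcation (Hodgkin's Class I excitability), the canonical phase model is the theta neuron, and a synaptic transfer function can be approximated by a delay plus low-pass attenuation. The sketch below is an assumption-laden stand-in, not Hoppensteadt's exact formulation; the delay, time constant and stimulus values are invented for illustration.

```python
# Toy bifurcation-based APG: a theta neuron driven through an assumed
# input-synapse transfer function (pure delay + exponential attenuation).
import numpy as np

DT = 0.1          # integration step
DELAY = 2.0       # assumed synaptic transmission delay
TAU = 5.0         # assumed attenuation time constant of the synapse

def synaptic_transfer(u, dt=DT, delay=DELAY, tau=TAU):
    """Delay and low-pass filter an input signal (toy transfer function)."""
    shifted = np.concatenate([np.zeros(int(delay / dt)), u])[: len(u)]
    out = np.zeros_like(shifted)
    for i in range(1, len(shifted)):
        out[i] = out[i - 1] + dt / tau * (shifted[i] - out[i - 1])
    return out

def theta_apg(drive, dt=DT):
    """Theta neuron: dtheta/dt = 1 - cos(theta) + (1 + cos(theta))*I(t);
    an action potential is registered each time theta crosses pi."""
    theta, spikes = -np.pi, []
    for i, I in enumerate(drive):
        theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
        if theta >= np.pi:
            spikes.append(i * dt)
            theta -= 2 * np.pi
    return np.array(spikes)

# Drive the APG through the input synapse and collect spike times; their
# empirical CDF is the experimentally accessible quantity the paper stresses.
t = np.arange(0, 500, DT)
stimulus = 0.3 * (1 + np.sin(2 * np.pi * t / 100))
spike_times = theta_apg(synaptic_transfer(stimulus))
cdf = np.arange(1, len(spike_times) + 1) / len(spike_times)
```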


1994 ◽  
Vol 6 (3) ◽  
pp. 469-490 ◽  
Author(s):  
K. P. Unnikrishnan ◽  
K. P. Venugopal

We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm makes no assumptions about the transfer functions of individual neurons and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations, which allows complete parallelization of the algorithm. The algorithm is stochastic and uses a "temperature" parameter in a manner similar to simulated annealing. A heuristic "annealing schedule" is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those of standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used to solve temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and the advantages of appropriate error measures are illustrated using a variety of problems.
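One common formulation of the Alopex update repeats or flips each weight's previous step based on the local correlation between that step and the resulting change in the global error, with a temperature controlling randomness. The sketch below follows that formulation on a toy quadratic error; the fixed temperature stands in for the paper's heuristic annealing schedule, and alopex_step is a hypothetical helper name.

```python
# A minimal Alopex-style update (one common formulation from the literature):
# each weight takes a fixed-size step whose direction repeats the previous
# move with a probability driven by the correlation between the last weight
# change and the last change in error. No gradients are required.
import numpy as np

rng = np.random.default_rng(0)

def alopex_step(w, dw_prev, dE, delta=0.01, T=0.1):
    """One simultaneous, gradient-free update of all weights in w."""
    corr = dw_prev * dE                        # local weight/error correlation
    p_same = 1.0 / (1.0 + np.exp(corr / T))    # high when the last move helped
    same = rng.random(w.shape) < p_same
    dw = np.where(same, np.sign(dw_prev), -np.sign(dw_prev)) * delta
    return w + dw, dw

# Toy usage: minimize a quadratic "error" over 5 weights. Because the rule
# only needs the scalar change in error, the same loop applies unchanged to
# feedforward or recurrent networks and to any error measure.
w = rng.standard_normal(5)
dw_prev = np.full(5, 0.01)       # bootstrap direction for the first step
E_prev = float(np.sum(w ** 2))
dE = 0.0
for _ in range(2000):
    w, dw_prev = alopex_step(w, dw_prev, dE)
    E = float(np.sum(w ** 2))
    dE, E_prev = E - E_prev, E
```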

