Predicting the Dispersion Relations of One-Dimensional Phononic Crystals by Neural Networks

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Chen-Xu Liu ◽  
Gui-Lan Yu

In this paper, deep back-propagation neural networks (DBP-NNs) and radial basis function neural networks (RBF-NNs) are employed to predict the dispersion relations (DRs) of one-dimensional (1D) phononic crystals (PCs). Data sets generated by the transfer matrix method (TMM) are used to train the NNs and to test their prediction accuracy. In our work, the filling fractions, mass density ratios and shear modulus ratios of the PCs are taken as the input values of the NNs. The results show that both the DBP-NNs and the RBF-NNs perform well in predicting the DRs of PCs. For one-parameter prediction, the RBF-NNs have a shorter training time and remarkable prediction accuracy; for two- and three-parameter prediction, the DBP-NNs have more stable performance. The present work confirms the feasibility of predicting the DRs of PCs by NNs, and provides a useful reference for the application of NNs in the design of PCs and metamaterials.
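For readers unfamiliar with RBF networks, a minimal sketch of the kind of regressor this abstract describes might look as follows. The Gaussian width, the center placement, and the sine-shaped stand-in target are illustrative assumptions only, not the authors' dispersion data or architecture.

```python
import numpy as np

def rbf_design(X, centers, gamma):
    # Gaussian radial basis activations: phi_ij = exp(-gamma * ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf(X, Y, centers, gamma):
    # Single hidden layer, linear output: weights solved by least squares
    Phi = rbf_design(X, centers, gamma)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def predict_rbf(X, centers, gamma, W):
    return rbf_design(X, centers, gamma) @ W

# Toy example: learn a smooth 1-D mapping (a stand-in for, say,
# filling fraction -> band-edge frequency; not real dispersion data)
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(200, 1))
Y = np.sin(2 * np.pi * X)
centers = np.linspace(0.1, 0.9, 15).reshape(-1, 1)
W = train_rbf(X, Y, centers, gamma=50.0)
err = np.abs(predict_rbf(X, centers, gamma=50.0, W=W) - Y).max()
```

The closed-form least-squares solve is what gives RBF-NNs the short training times the abstract reports for one-parameter prediction.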

Author(s):  
DE-SHUANG HUANG

This paper investigates the capabilities of radial basis function networks (RBFNs) and kernel neural networks (KNNs), i.e. a specific type of probabilistic neural network (PNN), and studies their similarities and differences. In order to avoid the huge number of hidden units of the KNNs (or PNNs) and to reduce the training time of the RBFNs, this paper proposes a new feedforward neural network model referred to as the radial basis probabilistic neural network (RBPNN). This new network model inherits the merits of the two older models to a great extent and avoids their defects in some ways. Finally, we apply the new RBPNN to the recognition of one-dimensional cross-images of radar targets (five kinds of aircraft), and the experimental results are given and discussed.
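A minimal Parzen-window PNN classifier makes the "one hidden unit per training pattern" structure concrete, which is exactly the cost the RBPNN is designed to avoid. The toy two-cluster data and the smoothing parameter `sigma` are assumptions for illustration only.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Parzen-window PNN: every training sample is a Gaussian kernel unit;
    a class score is the mean kernel response over that class's units."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores), axis=0)]

# Toy two-class problem: the hidden layer implicitly has 100 units,
# one per training pattern -- the scaling drawback noted above
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.3, size=(50, 2))
b = rng.normal(2.0, 0.3, size=(50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)
pred = pnn_classify(X, y, X)
acc = (pred == y).mean()
```

There is no iterative training at all, which is the PNN's speed advantage; the price is that inference touches every stored pattern.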


Author(s):  
Julia El Zini ◽  
Yara Rizk ◽  
Mariette Awad

Recurrent neural networks (RNNs) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained through back-propagation through time (BPTT), which is prohibitively expensive, especially when the length of the time dependencies and the number of hidden neurons increase. To reduce the training time, extreme learning machines (ELMs) have recently been applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT. In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of GPU shared memory and of parallel QR factorization algorithms to efficiently reach optimal solutions. A theoretical analysis of the proposed algorithm is presented for six RNN architectures, including LSTM and GRU, and its performance is empirically tested on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to a 461× speedup over its sequential counterpart and to require up to 20× less time to train than parallel BPTT. Such high speedups over new-generation CPUs are crucial in real-time applications and IoT environments.
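The non-iterative core of ELM training, including the QR-based solve that Opt-PR-ELM parallelizes on the GPU, can be sketched as follows. This is a plain feedforward ELM on a sliding-window toy series, not the paper's recurrent Opt-PR-ELM algorithm; the hidden-layer size and tanh activation are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    # ELM: input-to-hidden weights are random and never updated;
    # only the output weights are solved, in one shot, by least squares.
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)
    # QR-based solve of min ||H W_out - Y|| (the step parallel QR accelerates)
    Q, R = np.linalg.qr(H)
    W_out = np.linalg.solve(R, Q.T @ Y)
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# Toy time-series task: predict the next value from a window of 8 past values
t = np.linspace(0, 20, 500)
s = np.sin(t)
win = 8
X = np.stack([s[i:i + win] for i in range(len(s) - win)])
Y = s[win:].reshape(-1, 1)
W_in, b, W_out = elm_train(X, Y)
mse = ((elm_predict(X, W_in, b, W_out) - Y) ** 2).mean()
```

Because no gradients flow through time, there is nothing iterative to serialize, which is why ELM parallelizes so much better than BPTT.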


2007 ◽  
Vol 353-358 ◽  
pp. 2325-2328
Author(s):  
Zi Chang Shangguan ◽  
Shou Ju Li ◽  
Mao Tian Luan

The inverse problem of rock damage detection is formulated as an optimization problem, which is then solved using artificial neural networks (ANNs). Convergence measurements of displacements at a few positions are used to determine the location and magnitude of the damaged rock in the excavation-disturbed zones. Unlike classical optimization methods, an ANN is able to converge globally. However, the most frequently used back-propagation neural networks suffer from a set of problems: dependence on initial parameters, long training times, the lack of a problem-independent way to choose an appropriate network topology, and the incomprehensible, black-box nature of ANNs. Identifying the location and magnitude of the damaged rock using an artificial neural network is feasible, and a network well trained by the Levenberg-Marquardt algorithm shows extremely fast convergence and a high degree of accuracy.
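The Levenberg-Marquardt update that the abstract credits with fast convergence blends Gauss-Newton with gradient descent via a damping term. A minimal sketch on a toy nonlinear least-squares fit follows; the exponential model and its parameters are hypothetical stand-ins for the displacement-to-damage inverse mapping.

```python
import numpy as np

def lm_step(residual_fn, jac_fn, w, lam):
    # One Levenberg-Marquardt update: (J^T J + lam*I) dw = -J^T r
    r = residual_fn(w)
    J = jac_fn(w)
    A = J.T @ J + lam * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ r)

# Toy nonlinear fit: y = exp(a*x) + b, with hypothetical parameters (a, b)
x = np.linspace(0, 1, 40)
true_w = np.array([1.5, 0.3])
y = np.exp(true_w[0] * x) + true_w[1]

res = lambda w: np.exp(w[0] * x) + w[1] - y
jac = lambda w: np.column_stack([x * np.exp(w[0] * x), np.ones_like(x)])

w = np.array([0.5, 0.0])
for _ in range(30):
    w = lm_step(res, jac, w, lam=1e-3)
err = np.abs(w - true_w).max()
```

Small damping behaves like Gauss-Newton (fast near the solution); large damping behaves like gradient descent (robust far from it), which is the source of LM's speed advantage over plain BP training.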


2003 ◽  
Vol 56 (2) ◽  
pp. 291-304 ◽  
Author(s):  
Dah-Jing Jwo ◽  
Chien-Cheng Lai

A neural network (NN)-based geometry classification for selecting good or acceptable navigation satellite subsets is presented. The approach is based on classifying the values of the satellite Geometry Dilution of Precision (GDOP) using classification-type NNs. Unlike NNs that approximate a function, such as the back-propagation neural network (BPNN), the NNs here are employed as classifiers. Although a BPNN can also be employed as a classifier, it requires a long training time. Two other methods that feature a fast learning speed are therefore implemented: the Optimal Interpolative (OI) Net and the Probabilistic Neural Network (PNN). Simulation results from these three neural networks are presented, and the classification performance and computational expense of NN-based GDOP classification are explored.
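For context, the quantity being classified is computed from the receiver-to-satellite geometry matrix as GDOP = sqrt(trace((AᵀA)⁻¹)). A sketch with hypothetical satellite positions (in metres) shows how a clustered constellation inflates the value:

```python
import numpy as np

def gdop(sat_positions, receiver):
    # Geometry matrix A: unit line-of-sight vectors plus a clock-bias column
    los = sat_positions - receiver
    u = los / np.linalg.norm(los, axis=1, keepdims=True)
    A = np.hstack([u, np.ones((len(u), 1))])
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)))

# Five hypothetical satellites: a spread geometry vs the same set
# squeezed toward the zenith (bunched overhead)
rx = np.zeros(3)
spread = np.array([[ 20000.,      0., 20000.],
                   [-20000.,      0., 20000.],
                   [     0.,  20000., 20000.],
                   [     0., -20000., 20000.],
                   [     0.,      0., 20000.]])
clustered = spread * np.array([0.1, 0.1, 1.0])
good, bad = gdop(spread, rx), gdop(clustered, rx)
```

The clustered geometry makes the z and clock columns of A nearly collinear, so AᵀA is almost singular and GDOP explodes; a classifier only has to separate "acceptable" from "poor" values of this scalar.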


Author(s):  
Farzad Hosseinali

Artificial intelligence is dominated by artificial neural networks (ANNs). Currently, batch gradient descent (BGD) is the only solution for training ANN weights when dealing with large datasets. In this article, a modification to BGD is proposed which significantly reduces the training time and improves convergence. The modification, called Instance Eliminating Back Propagation (IEBP), eliminates correctly predicted instances from the back-propagation pass. The speedup comes from removing unnecessary matrix multiplication operations from back-propagation. The proposed modification adds no new training hyperparameters and reduces memory consumption during training.
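A sketch of the instance-eliminating idea on a single logistic unit, standing in for a full ANN: instances the model already classifies correctly are masked out before the gradient pass, so the matrix products shrink as training progresses. The data, learning rate, and decision threshold here are illustrative assumptions, not the article's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iebp_epoch(X, y, w, lr=0.5, threshold=0.5):
    """One full-batch epoch of the instance-eliminating idea: only
    still-misclassified instances participate in the gradient pass."""
    p = sigmoid(X @ w)
    wrong = (p >= threshold) != (y == 1)
    if not wrong.any():
        return w, 0                        # nothing left to propagate
    Xw, yw, pw = X[wrong], y[wrong], p[wrong]
    grad = Xw.T @ (pw - yw) / len(yw)      # gradient over survivors only
    return w - lr * grad, int(wrong.sum())

# Toy linearly separable problem with a bias column appended
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.array([0] * 50 + [1] * 50)
w = np.zeros(3)
for _ in range(100):
    w, n_active = iebp_epoch(X, y, w)
acc = ((sigmoid(X @ w) >= 0.5) == (y == 1)).mean()
```

As accuracy rises, the survivor matrix `Xw` shrinks, which is where both the compute and memory savings come from.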


Author(s):  
Kidong Lee ◽  
David Booth ◽  
Pervaiz Alam

The back-propagation (BP) network and the Kohonen self-organizing feature map, selected as representative types of supervised and unsupervised artificial neural networks (ANNs) respectively, are compared in terms of prediction accuracy in the area of bankruptcy prediction. Discriminant analysis and logistic regression are also performed to provide performance benchmarks. The findings suggest that the BP network is the better choice when a target vector is available. Advantages as well as limitations of the studied methods are also identified and discussed.
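To make the supervised/unsupervised contrast concrete, here is a minimal 1-D Kohonen map sketch trained with no target vector at all; the unit count, learning-rate and neighborhood schedules, and the toy two-cluster data are assumptions for illustration, not the study's bankruptcy data.

```python
import numpy as np

def train_som(X, n_units=4, epochs=50, lr0=0.5, seed=3):
    """Minimal 1-D Kohonen self-organizing map: units compete for each
    input, and the winner plus its neighbors move toward that input."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                       # decaying rate
        radius = max(1, int(n_units / 2 * (1 - e / epochs)))
        for x in rng.permutation(X):                      # shuffled inputs
            bmu = np.argmin(((W - x) ** 2).sum(1))        # best-matching unit
            for j in range(n_units):
                if abs(j - bmu) <= radius:
                    W[j] += lr * (x - W[j])
    return W

# Two well-separated clusters; units should settle near the cluster centers
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-3, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
W = train_som(X)
d = np.array([((W - x) ** 2).sum(1).min() for x in X])
```

Note that no label ever enters the update rule; any class meaning must be attached to the units afterwards, which is why BP wins when a target vector is available.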


2020 ◽  
Vol 39 (5) ◽  
pp. 6419-6430
Author(s):  
Dusan Marcek

To forecast time-series data, two methodological frameworks, statistical and computational-intelligence modelling, are considered. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with the Maximum Likelihood (ML) estimation method. As a competitive tool to the statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the Back-Propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are implemented on a large data set. A comparative analysis of the selected learning methods is performed and evaluated. From the experiments performed, we find that the optimal population size is likely to be 20, giving the lowest training time of all the NNs trained by evolutionary algorithms, while the prediction accuracy is lower, but still acceptable to managers.
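A toy genetic-algorithm training loop for a single perceptron conveys the evolutionary alternative to BP that the abstract evaluates. The operators (tournament selection, blend crossover, Gaussian mutation) and all hyperparameters except the population size of 20 are illustrative assumptions.

```python
import numpy as np

def fitness(w, X, y):
    # Negative MSE of a single tanh perceptron with weights w (bias last)
    pred = np.tanh(X @ w[:-1] + w[-1])
    return -((pred - y) ** 2).mean()

def ga_train(X, y, pop_size=20, gens=200, sigma=0.3, seed=5):
    """Toy GA over perceptron weights: tournament selection, blend
    crossover, Gaussian mutation, and elitism. pop_size=20 mirrors the
    population size the abstract reports as near-optimal."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] + 1
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(w, X, y) for w in pop])
        # Tournament selection: the fitter of two random individuals
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Blend crossover + Gaussian mutation; slot 0 keeps the elite
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop_size)]
        children += rng.normal(scale=sigma, size=children.shape)
        children[0] = pop[np.argmax(fit)]
        pop = children
    fit = np.array([fitness(w, X, y) for w in pop])
    return pop[np.argmax(fit)]

# Toy target: y = tanh(0.8*x1 - 0.5*x2 + 0.1)
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 2))
y = np.tanh(X @ np.array([0.8, -0.5]) + 0.1)
w = ga_train(X, y)
mse = -fitness(w, X, y)
```

No gradients are computed anywhere, which is what makes GA training applicable when BP is impractical, at the cost of the lower accuracy the abstract observes.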

