WAVELET NEURAL NETWORK MODELING FOR PREDICTING THE RUPIAH EXCHANGE RATE AGAINST THE US DOLLAR

2020 ◽  
Vol 9 (2) ◽  
pp. 217-226
Author(s):  
Tri Yani Elisabeth Nababan ◽  
Budi Warsito ◽  
Agus Rusgiyono

Each country has its own currency that serves as the legal medium of exchange in transactions. Transactions between countries often encounter payment problems because of differences in the value of each country's currency. The fluctuating movement of foreign exchange rates over time motivates prediction of the rupiah exchange rate against the U.S. dollar. The Wavelet Neural Network (WNN) combines wavelet transforms with neural networks. WNN modeling begins with a wavelet decomposition that produces wavelet coefficients and scale coefficients. Inputs are selected based on PACF plots, and the data are divided into training and testing sets. The final output is determined by computing the MAPE on the testing data. The best WNN architecture for predicting the rupiah exchange rate against the U.S. dollar uses the logistic sigmoid activation function, 2 neurons in the input layer, 10 neurons in the hidden layer, and 1 neuron in the output layer, achieving a MAPE of 0.2221%.
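As a rough sketch (not the authors' code), the reported 2-10-1 architecture with the logistic sigmoid activation and the MAPE criterion could look like this in NumPy; the weights here are random placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    # logistic sigmoid activation used in the reported best model
    return 1.0 / (1.0 + np.exp(-x))

# placeholder weights for a 2-10-1 network (2 inputs, 10 hidden, 1 output)
W1, b1 = rng.normal(size=(10, 2)), rng.normal(size=10)
W2, b2 = rng.normal(size=(1, 10)), rng.normal(size=1)

def predict(x):
    h = logistic(W1 @ x + b1)   # hidden layer, 10 neurons
    return (W2 @ h + b2)[0]     # single output neuron

def mape(actual, predicted):
    # mean absolute percentage error, the criterion applied to the testing data
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

print(mape([14000.0, 14100.0], [14010.0, 14090.0]))
```

In a real run the weights would be fitted on the training set and the MAPE reported on the held-out testing set.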

2011 ◽  
Vol 396-398 ◽  
pp. 711-715
Author(s):  
Jian Xin Chen ◽  
Xu Na Shi ◽  
Shu Chun Pang ◽  
Mei Jing Zhang ◽  
Sheng Yu Li

A Wavelet neural network (WNN) was applied to predict cortisol solubility. The model is a multilayer feedforward hierarchical structure in which information flows from the input to the output layer, using wavelet transforms to achieve faster convergence. By adaptively adjusting the number of training data involved during training, an adaptive robust learning algorithm is derived to improve the efficiency of the network. The network was trained and simulated cortisol solubility with different input and output parameters. Simulation results confirmed that this approach gave more accurate solubility predictions.
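The abstract does not name the wavelet used; as an illustrative assumption, a common choice of wavelet activation is the "Mexican hat" (the second derivative of a Gaussian), which could replace a sigmoid in a hidden neuron:

```python
import numpy as np

def mexican_hat(x):
    # Mexican-hat wavelet: psi(x) = (1 - x^2) * exp(-x^2 / 2);
    # peaks at x = 0, crosses zero at |x| = 1, decays for large |x|
    return (1.0 - x**2) * np.exp(-(x**2) / 2.0)

print(mexican_hat(np.array([0.0, 1.0, 3.0])))
```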


2020 ◽  
Vol 9 (3) ◽  
pp. 273-282
Author(s):  
Isna Wulandari ◽  
Hasbi Yasin ◽  
Tatik Widiharih

The recognition of herbs and spices among the young generation is still low. Research at SMK 9 Bandung showed that 47% of students did not recognize herbs and spices. One way to overcome this problem is automatic digital sorting of herbs and spices using the Convolutional Neural Network (CNN) algorithm. In this study, 300 images of herbs and spices were classified into 3 categories: ginseng, ginger, and galangal. The data in each category were divided into training and testing sets with a ratio of 80%:20%. The CNN model used to classify the digital images has 2 convolutional layers: the first with 10 filters and the second with 20 filters, each filter using a 3x3 kernel matrix. The filter size at the pooling layer is 3x3, and the number of neurons in the hidden layer is 10. The activation function at the convolutional and hidden layers is tanh, and the activation function at the output layer is softmax. With this model, the training accuracy is 0.9875 with a loss of 0.0769, and the testing accuracy is 0.85 with a loss of 0.4773. Testing new data with 3 images per category produces an accuracy of 88.89%. Keywords: image classification, herbs and spices, CNN.
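The reported 88.89% on new data is consistent with 8 of the 9 new images (3 per category, 3 categories) being classified correctly; a quick check:

```python
# 3 new images per category x 3 categories = 9 test images;
# an accuracy of 88.89% implies 8 of the 9 were classified correctly
correct, total = 8, 9
accuracy = correct / total * 100
print(round(accuracy, 2))  # 88.89
```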


2012 ◽  
Vol 225 ◽  
pp. 144-149
Author(s):  
Hadi Samareh Salavati Pour ◽  
Mojtaba Sadighi ◽  
Abdolvahed Kami

The orientation of the fibers in the layers is an important factor that must be determined in order to predict how well the finished composite product will perform under real-world working conditions. In this research, a five-layer glass-epoxy composite truncated-cone structure under buckling load was considered. The structure was simulated with the finite element method, and the simulation was validated against published experimental results. The effect of different fiber orientations on the buckling load was then studied: a computer program was developed to compute the buckling load for different fiber orientations in each layer, with the orientations generated randomly at a resolution of 15 degrees. Finally, neural network and genetic algorithm methods were used to obtain the optimum fiber orientations in each layer, using training data obtained from the finite element simulations. Many parameters, such as the number of hidden layers, the number of neurons in each hidden layer, the training algorithm, and the activation function, must be specified properly when developing a neural network model; here, the number of hidden layers and the number of neurons in each layer were obtained by trial and error. A multilayer back-propagation (BP) neural network with the Levenberg-Marquardt training algorithm (trainlm) was used. The results showed that the truncated cone with optimized layers withstands a considerably higher buckling load.
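A minimal sketch of the sampling step (the function name, angle range, and seed are illustrative assumptions, not the authors' program): random ply orientations at 15-degree resolution for a five-layer laminate.

```python
import random

random.seed(42)

# candidate fiber angles from -90 to +90 degrees in 15-degree steps
ANGLES = list(range(-90, 91, 15))

def random_layup(n_layers=5):
    # one random orientation per layer, at 15-degree resolution
    return [random.choice(ANGLES) for _ in range(n_layers)]

layup = random_layup()
print(layup)
```

Each such layup would then be fed to the finite element model to compute its buckling load, building the training set for the neural network.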


2021 ◽  
pp. 1063293X2110251
Author(s):  
K Vijayakumar ◽  
Vinod J Kadam ◽  
Sudhir Kumar Sharma

A Deep Neural Network (DNN) is a multilayered Neural Network (NN) capable of progressively learning more abstract and composite representations of the raw input features, with no need for feature engineering. DNNs are advanced NNs with repeated hidden layers between the input and the final layer. The working principle of such a standard deep classifier is a hierarchy formed by composing linear functions with a chosen nonlinear Activation Function (AF). It remains unclear exactly why the DNN classifier functions so well, but many studies show that within a DNN, the choice of AF has a notable impact on training kinetics and task success. In the past few years, different AFs have been formulated, and the choice of AF is still an area of active study. Hence, in this study, a novel deep feedforward NN model with four AFs is proposed for breast cancer classification: hidden layer 1: Swish, hidden layer 2: LeakyReLU, hidden layer 3: ReLU, and the final output layer: Sigmoid. The purpose of the study is twofold. First, it is a step toward a more profound understanding of DNNs with layer-wise different AFs. Second, it aims to explore better DNN-based systems for building predictive models of breast cancer data with improved accuracy. The benchmark UCI WDBC dataset was used for validation of the framework, evaluated with a ten-fold CV method and various performance indicators. Multiple simulations and experimental outcomes showed that the proposed solution performs better than the Sigmoid, ReLU, LeakyReLU, and Swish activation DNNs in terms of different parameters. This analysis contributes an expert and precise clinical classification method for breast cancer data. Furthermore, the model also achieved improved performance compared to many established state-of-the-art algorithms/models.
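The four activation functions assigned layer-wise above have simple closed forms; a plain-Python sketch (the layer assignments in the comments follow the abstract, everything else is generic):

```python
import math

def swish(x):
    # Swish: x * sigmoid(x)  (hidden layer 1)
    return x / (1.0 + math.exp(-x))

def leaky_relu(x, alpha=0.01):
    # LeakyReLU: small slope alpha for negative inputs  (hidden layer 2)
    return x if x > 0 else alpha * x

def relu(x):
    # ReLU: max(0, x)  (hidden layer 3)
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid output, suited to a binary decision  (output layer)
    return 1.0 / (1.0 + math.exp(-x))
```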


2019 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Hijratul Aini ◽  
Haviluddin Haviluddin

Crude palm oil (CPO) production data at PT. Perkebunan Nusantara (PTPN) XIII from January 2015 to January 2018 were analyzed. This paper aims to predict CPO production using an intelligent algorithm called the Backpropagation Neural Network (BPNN). Prediction accuracy was measured by the mean square error (MSE). The experiment showed that the best hidden layer architecture (HLA) is 5-10-11-12-13-1 with the trainlm learning function (LF), logsig and purelin activation functions (AF), and a learning rate (LR) of 0.5. This architecture has good accuracy, with an MSE of 0.0643. The results showed that this model can predict CPO production in 2019.
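The names trainlm, logsig, and purelin suggest the MATLAB Neural Network Toolbox; a NumPy sketch of the two activation functions and the MSE criterion (an illustration, not the authors' code):

```python
import numpy as np

def logsig(x):
    # MATLAB-style log-sigmoid activation (hidden layers)
    return 1.0 / (1.0 + np.exp(-x))

def purelin(x):
    # MATLAB-style linear (identity) activation, typical for the output layer
    return x

def mse(actual, predicted):
    # mean square error, the accuracy criterion reported in the paper
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean((actual - predicted) ** 2)

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```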


2016 ◽  
Vol 36 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Liang Chen ◽  
Leitao Cui ◽  
Rong Huang ◽  
Zhengyun Ren

Purpose: This paper aims to present a bio-inspired neural network that improves the information processing capability of existing artificial neural networks. Design/methodology/approach: In the network, the authors introduce a property often found in biological neural systems, hysteresis, as the neuron activation function, and a bionic algorithm, extreme learning machine (ELM), as the learning scheme. The authors give a gradient descent procedure to optimize the parameters of the hysteretic function and develop an algorithm to select ELM parameters online, including the number of hidden-layer nodes and the hidden-layer parameters. The algorithm combines the idea of cross validation with the random assignment of the original ELM. Finally, the authors demonstrate the advantages of the hysteretic ELM neural network by applying it to automatic license plate recognition. Findings: Experiments on automatic license plate recognition show that the bio-inspired learning system has better classification accuracy and generalization capability with due consideration to efficiency. Originality/value: Compared with the conventional sigmoid function, hysteresis as the activation function has two advantages: the neuron's output depends not only on its input but also on derivative information, which provides the neuron with memory; and the hysteretic function can switch between its two segments, helping the neuron avoid local minima and learn more quickly. The improved ELM algorithm to some extent compensates for the performance decline caused by the original ELM's complete randomness, at the cost of being slightly slower than before.
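The abstract does not give the exact hysteretic function; as an illustrative assumption only, a hysteretic activation can be modeled as two shifted sigmoid branches, selected by whether the input is rising or falling:

```python
import math

def hysteretic(x, x_prev, width=0.5):
    # Illustrative hysteretic activation (assumed form, not the authors' exact one):
    # a rising input uses a sigmoid shifted right, a falling input one shifted left,
    # so the output depends on the input AND its direction of change (memory).
    shift = width if x >= x_prev else -width
    return 1.0 / (1.0 + math.exp(-(x - shift)))

rising = hysteretic(0.0, -1.0)   # input increasing: lower branch
falling = hysteretic(0.0, 1.0)   # input decreasing: upper branch
print(rising, falling)
```

The same input value 0.0 yields two different outputs depending on its history, which is the memory property the abstract attributes to hysteresis.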


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

Abstract In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
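A minimal sketch (not the authors' code) of the fold construction underlying a cross-validation-based error estimate: the data indices are split into k folds, and each fold is held out once while the network is trained on the rest.

```python
def kfold_indices(n, k):
    # split indices 0..n-1 into k contiguous folds for cross-validation;
    # each fold serves once as the held-out validation set
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 3)
print(folds)  # three folds of sizes 4, 3, 3 covering all 10 indices
```

Sparse regions of the input space show up as folds whose held-out error is much larger than the training error, which is the diagnostic idea behind the proposed methodology.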


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: the initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms under variations in these network parameter values. The algorithms were evaluated with different training and testing samples taken from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.
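As an illustration only (the grid values below are assumptions, not taken from the paper), enumerating combinations of even a few of these parameters shows how quickly the search space grows for each of the 12 BP algorithms:

```python
from itertools import product

# an illustrative (assumed) grid over a few of the network parameters listed above
hidden_layers = [1, 2]
neurons_per_layer = [5, 10, 20]
learning_rates = [0.01, 0.1, 0.5]

configs = list(product(hidden_layers, neurons_per_layer, learning_rates))
print(len(configs))  # 18 parameter combinations to evaluate per BP algorithm
```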


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jingwei Liu ◽  
Peixuan Li ◽  
Xuehan Tang ◽  
Jiaxin Li ◽  
Jiaming Chen

Abstract: Artificial neural networks (ANN), which include deep learning neural networks (DNN), have problems such as the local minimum problem of the Back propagation neural network (BPNN), the instability problem of the Radial basis function neural network (RBFNN), and the limited maximum precision problem of the Convolutional neural network (CNN). The performance (training speed, precision, etc.) of BPNN, RBFNN, and CNN is expected to be improved. The main works are as follows. Firstly, based on the existing BPNN and RBFNN, a Wavelet neural network (WNN) is implemented to obtain better performance for further improving CNN: WNN adopts the network structure of BPNN for faster training speed, and adopts a wavelet function as the activation function, whose form is similar to the radial basis function of RBFNN, to solve the local minimum problem. Secondly, a WNN-based Convolutional wavelet neural network (CWNN) method is proposed, in which the fully connected layers (FCL) of CNN are replaced by WNN. Thirdly, comparative simulations of BPNN, RBFNN, CNN, and CWNN on the MNIST and CIFAR-10 datasets are implemented and analyzed. Fourthly, a wavelet-based Convolutional Neural Network (WCNN) is proposed, in which the wavelet transformation is adopted as the activation function in the Convolutional Pool Neural Network (CPNN) of CNN. Fifthly, simulations based on CWNN are implemented and analyzed on the MNIST dataset. The effects are as follows. Firstly, WNN can solve the problems of BPNN and RBFNN and has better performance. Secondly, the proposed CWNN reduces the mean square error and the error rate of CNN, which means CWNN has better maximum precision than CNN. Thirdly, the proposed WCNN reduces the mean square error and the error rate of CWNN, which means WCNN has better maximum precision than CWNN.
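The abstract does not specify which wavelet is adopted as the activation; a real-valued Morlet wavelet is a common illustrative choice (an assumption here, not necessarily the authors'):

```python
import numpy as np

def morlet(x):
    # real-valued Morlet wavelet: cos(1.75 x) * exp(-x^2 / 2);
    # an oscillatory, localized alternative to sigmoid/ReLU activations
    return np.cos(1.75 * x) * np.exp(-(x**2) / 2.0)

print(morlet(np.array([0.0, 2.0])))
```

Unlike a sigmoid, this function is localized: it responds strongly near zero and vanishes for large inputs, which is the property the WNN exploits against the local minimum problem.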

