Input data preprocessing method for exchange rate forecasting via neural network

2014 ◽  
Vol 11 (4) ◽  
pp. 597-608
Author(s):  
Dragan Antic ◽  
Miroslav Milovanovic ◽  
Stanisa Peric ◽  
Sasa Nikolic ◽  
Marko Milojkovic

The aim of this paper is to present a method for selecting and preprocessing neural network input parameters. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories comprising 70 economic parameters used in the analysis. The parameters within each category are reduced using principal component analysis. Component interdependencies are established and relations between them are formed; these newly formed relations are used to create the input vectors of a neural network. A multilayer feed-forward neural network is formed and trained in batch mode. Finally, simulation results are presented, and it is concluded that the input data preparation method is an effective way to preprocess neural network data.
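As a rough illustration of this kind of pipeline, the Python sketch below reduces each category of indicators with PCA and feeds the concatenated components to a feed-forward network trained in full-batch mode; the category split, synthetic data, component counts, and network shape are assumptions made for illustration, not the paper's values.

```python
# Per-category PCA reduction of economic indicators, then a feed-forward
# network trained in batch mode (illustrative sketch with synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples = 200
categories = [12, 10, 14, 11, 13, 10]        # six categories, 70 raw parameters in total (assumed split)
raw = [rng.normal(size=(n_samples, k)) for k in categories]
rate = rng.normal(size=n_samples)            # placeholder exchange-rate target

# Reduce each category to its leading principal components.
reduced = [PCA(n_components=2).fit_transform(block) for block in raw]
inputs = np.hstack(reduced)                  # input vectors built from the reduced components

# Multilayer feed-forward network; lbfgs uses the full batch at every step.
model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000)
model.fit(inputs, rate)
print(model.predict(inputs[:5]))
```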

2014 ◽  
Vol 9 (14) ◽  
pp. 645-651
Author(s):  
Alsakran Jamal ◽  
Rodan Ali ◽  
Alhindawi Nouh ◽  
Faris Hossam

1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
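A minimal numerical sketch of a correlation-based rule with a restricted receptive field is shown below, using Oja's rule on a single unit that sees only a slice of the input; the data, field size, and learning rate are placeholder assumptions.

```python
# A single unit with a restricted receptive field trained by Oja's rule,
# a simple correlation-based (Hebbian) learning rule with weight decay.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(5000, 20))      # input data set, 20 dimensions
field = slice(4, 12)                    # restricted receptive field: the unit sees only 8 inputs
w = rng.normal(size=8)
eta = 0.01

for x_full in data:
    x = x_full[field]                   # the unit only accesses its receptive field
    y = w @ x
    w += eta * y * (x - y * w)          # Hebbian term y*x with a decay that keeps |w| bounded

# w approximates the leading principal component of the inputs inside the field
print(w / np.linalg.norm(w))
```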


Water ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 1387 ◽  
Author(s):  
Le ◽  
Ho ◽  
Lee ◽  
Jung

Flood forecasting is an essential requirement in integrated water resource management. This paper suggests a Long Short-Term Memory (LSTM) neural network model for flood forecasting, using daily discharge and rainfall as input data. Characteristics of the data sets that may influence model performance were also of interest. The Da River basin in Vietnam was chosen, and two different combinations of input data sets from before 1985 (when the Hoa Binh dam was built) were used for one-day, two-day, and three-day-ahead flowrate forecasting at Hoa Binh Station. The predictive ability of the model is quite impressive: the Nash–Sutcliffe efficiency (NSE) reached 99%, 95%, and 87% for the three forecasting cases, respectively. The findings of this study suggest a viable option for flood forecasting on the Da River in Vietnam, where the river basin spans several countries and downstream flows (in Vietnam) may fluctuate suddenly due to flood discharge from upstream hydroelectric reservoirs.
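The sketch below shows one plausible way to set up such a model for one-day-ahead discharge forecasting from daily discharge and rainfall, using Keras; the window length, layer size, and synthetic series are assumptions and are not taken from the paper.

```python
# LSTM for one-day-ahead discharge forecasting from daily discharge and rainfall
# (illustrative sketch with synthetic data).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
days, window = 3000, 7
discharge = rng.gamma(2.0, 100.0, size=days)     # synthetic daily discharge
rainfall = rng.gamma(1.5, 5.0, size=days)        # synthetic daily rainfall
series = np.stack([discharge, rainfall], axis=1)

# Build (window, 2) input sequences and the next-day discharge as the target.
X = np.stack([series[i:i + window] for i in range(days - window)])
y = discharge[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 2)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```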


Author(s):  
Jerry Lin ◽  
Rajeev Kumar Pandey ◽  
Paul C.-P. Chao

This study proposes a reduced AI model for accurate measurement of blood pressure (BP). Varied temporal periods of the photoplethysmography (PPG) waveform are used as features for an artificial neural network that estimates blood pressure. A nonlinear principal component analysis (PCA) method is used to remove redundant features and determine a set of dominant features that is highly correlated with blood pressure. The reduced feature set not only helps to minimize the size of the neural network but also improves the measurement accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP). The designed neural network has a 5-node input layer, 2 hidden layers (32 nodes each), and 2 output nodes for SBP and DBP, respectively. The model is trained on PPG data sets acquired from 96 subjects. The testing regression for SBP and DBP estimation is 0.81. The resultant errors for SBP and DBP measurement are 2.00±6.08 mmHg and 1.87±4.09 mmHg, respectively. According to the Association for the Advancement of Medical Instrumentation (AAMI) and British Hypertension Society (BHS) standards, the measured error of ±6.08 mmHg is less than 8 mmHg, which places the device performance in grade "A".
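A hedged sketch of the reported network shape (5 inputs, two 32-node hidden layers, SBP and DBP outputs) after a nonlinear (kernel) PCA reduction of PPG-derived features is given below; the feature values, kernel choice, and training settings are placeholders rather than the authors' configuration.

```python
# Kernel PCA feature reduction followed by a 5-32-32-2 regression network
# for SBP/DBP estimation (illustrative sketch with synthetic data).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
ppg_features = rng.normal(size=(96, 20))         # raw temporal-period features, one row per subject
bp = np.column_stack([rng.normal(120, 10, 96),   # SBP targets
                      rng.normal(80, 8, 96)])    # DBP targets

# Nonlinear PCA: keep 5 dominant components as the network inputs.
inputs = KernelPCA(n_components=5, kernel="rbf").fit_transform(ppg_features)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
model.fit(inputs, bp)                            # multi-output regression: [SBP, DBP]
print(model.predict(inputs[:3]))
```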


Author(s):  
Jae Eun Yoon ◽  
Jong Joon Lee ◽  
Tong Seop Kim ◽  
Jeong Lak Sohn

This study aims to simulate performance deterioration of a microturbine and apply an artificial neural network to its performance diagnosis. As it is hard to obtain test data with degraded component performance, the degraded engine data were acquired through simulation. An artificial neural network is adopted as the diagnosis tool. First, the microturbine was tested to obtain reference operation data, assumed to be degradation free. Then, a simulation program was set up to regenerate the performance test data. Deterioration of each component (compressor, turbine, and recuperator) was modeled by changes in component characteristic parameters such as compressor and turbine efficiency, their flow capacities, and recuperator effectiveness and pressure drop. Single and double faults (deterioration of one or two components) were simulated to generate fault data. The neural network was trained with the majority of the data sets, and the remaining data sets were used to check its predictive ability. Given measurable performance parameters (power, temperatures, pressures) as inputs, the characteristic parameters of each component were predicted as outputs and compared with the original data. The neural network produced sufficiently accurate predictions. Reducing the number of inputs decreased prediction accuracy; however, excluding up to a couple of inputs still produced acceptable accuracy.
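The diagnosis step reduces to learning a mapping from measurable performance parameters to component characteristic parameters; the sketch below illustrates that mapping with a small regression network on simulated data, where the parameter counts and the linear fault-to-measurement model are assumptions made only for the example.

```python
# Learning the inverse map: measured performance parameters -> component
# characteristic (health) parameters, on simulated deterioration data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 1000
# Outputs: deltas in compressor/turbine efficiency and flow capacity,
# recuperator effectiveness and pressure drop (6 health parameters).
health = rng.uniform(-0.05, 0.0, size=(n, 6))
# Inputs: power, temperatures, pressures, here generated from the health
# parameters through an assumed linear influence model plus measurement noise.
mixing = rng.normal(size=(6, 6))
measured = health @ mixing + 0.001 * rng.normal(size=(n, 6))

X_train, X_test, y_train, y_test = train_test_split(measured, health, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X_train, y_train)
print("held-out R^2:", net.score(X_test, y_test))
```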


2015 ◽  
Vol 76 (7) ◽  
Author(s):  
Nor’aini A.J. ◽  
Syahrul Akram Z. A. ◽  
Azilah S.

Iris recognition can be used not only in biometrics but also in medical applications, by identifying the iris region that relates to a particular body part. This paper describes a technique for identifying the vagina and pelvis regions of the iris using an Artificial Neural Network (ANN), specifically a Feed-Forward Neural Network (FFNN), based on the iridology chart. Localization of the iris is carried out using two methods, the Circular Boundary Detector (CBD) and the Circular Hough Transform (CHT). The iris is segmented according to the iridology chart and unwrapped into polar form using Daugman's rubber sheet model. The vagina and pelvis regions are cropped to a size of 40x7 pixels for feature extraction using Principal Component Analysis (PCA) and classified using the FFNN. In the experiments, 15 pelvis and 20 vagina regions are used for classification. The best results give overall correct identification of about 67% for localization with CBD and 81% with CHT. The experiments show that the vagina and pelvis regions can be identified, even though the results are not 100% accurate.
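The sketch below covers only the final classification stage under stated assumptions: flattened 40x7 crops (random placeholders here) are reduced with PCA and classified as pelvis or vagina by a feed-forward network, with localization and rubber-sheet unwrapping assumed to have been done upstream.

```python
# PCA + feed-forward classification of 40x7 iris-region crops
# (illustrative sketch with random placeholder crops).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
crops = rng.random(size=(35, 40 * 7))       # 15 pelvis + 20 vagina crops, flattened
labels = np.array([0] * 15 + [1] * 20)      # 0 = pelvis, 1 = vagina

features = PCA(n_components=10).fit_transform(crops)
ffnn = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000).fit(features, labels)
print("training accuracy:", ffnn.score(features, labels))
```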


2013 ◽  
Vol 321-324 ◽  
pp. 2203-2208
Author(s):  
Liang Liu ◽  
Xiao Hong He ◽  
Hao Sun

This paper describes a dimension reduction method for the input vector to improve the classification efficiency of an LVQ neural network, where a genetic algorithm (GA) is used to decrease the redundancy of the input data. To address the sensitivity of LVQ to its initial weight vector, the GA is also employed to optimize the initial vector. Experimental results on UCI data sets demonstrate that the efficiency and accuracy of the GA-optimized LVQ network are higher than those of the standard LVQ classification algorithm.
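One way to picture the GA-based input selection is the sketch below, which evolves a binary mask over the input features and scores each mask with a nearest-prototype classifier standing in for the LVQ network; the data set, GA settings, and the stand-in classifier are all assumptions for illustration.

```python
# Genetic-algorithm feature selection scored by a nearest-prototype classifier
# (a simple stand-in for LVQ) on a UCI-style data set.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(6)
X, y = load_iris(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(NearestCentroid(), X[:, mask], y, cv=3).mean()

# Tiny generational GA over binary feature masks.
pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for _ in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the better half
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.1   # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```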


2017 ◽  
Vol 10 (13) ◽  
pp. 355 ◽  
Author(s):  
Reshma Remesh ◽  
Pattabiraman. V

Dimensionality reduction techniques are used to reduce the complexity of analyzing high-dimensional data sets. The raw input data set may have many dimensions, and analysis may be time-consuming and lead to wrong predictions if unnecessary attributes are considered. Using dimensionality reduction techniques, one can reduce the dimensions of the input data for accurate prediction at lower cost. In this paper, different machine learning approaches used for dimensionality reduction, such as PCA, SVD, LDA, kernel principal component analysis, and artificial neural networks, are studied.
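For a quick sense of how these techniques are applied in practice, the sketch below runs several of the surveyed reducers on one data set with scikit-learn; the two-component target size is an arbitrary choice, and the neural-network (autoencoder) approach is omitted for brevity.

```python
# Applying several of the surveyed dimensionality reduction techniques
# to one data set and reporting the resulting dimensions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, TruncatedSVD, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)
reducers = {
    "PCA": PCA(n_components=2),
    "SVD": TruncatedSVD(n_components=2),
    "Kernel PCA": KernelPCA(n_components=2, kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(n_components=2),   # supervised: uses the labels
}
for name, reducer in reducers.items():
    Z = reducer.fit_transform(X, y) if name == "LDA" else reducer.fit_transform(X)
    print(f"{name}: {X.shape[1]} -> {Z.shape[1]} dimensions")
```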

