Application of feed forward and cascade forward neural network models for prediction of hourly ambient air temperature based on MERRA-2 reanalysis data in a coastal area of Turkey

Author(s):  
Serdar Gündoğdu ◽  
Tolga Elbir

Author(s):
Tshilidzi Marwala

In this chapter, a classifier technique is proposed that is based on a missing data estimation framework using autoassociative multi-layer perceptron neural networks and genetic algorithms. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey and compared to conventional feed-forward neural networks. The missing data approach based on the proposed autoassociative network model gives an accuracy of 92%, compared to the 84% obtained from the conventional feed-forward neural network models. The area under the receiver operating characteristic curve for the proposed autoassociative network model is 0.86, compared to 0.80 for the conventional feed-forward neural network model. The autoassociative network model proposed in this chapter therefore outperforms the conventional feed-forward neural network models and is an improved classifier. The reasons for this are: (1) the propagation of errors in the autoassociative network model is more distributed, whereas in a conventional feed-forward network it is more concentrated; and (2) there is no direct causality between the demographic properties and HIV status, so the HIV status does not determine the demographic properties and vice versa. It is therefore better to treat the problem as a missing data problem rather than a feed-forward classification problem.
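The imputation idea above — fill in a missing entry by searching, with a genetic algorithm, for the value that the autoassociative network best reconstructs — can be sketched as follows. This is a minimal, hypothetical illustration: a low-rank linear reconstruction stands in for the trained autoassociative MLP (a linear single-hidden-layer autoencoder has the same optimum as PCA), and the data, population size, and mutation schedule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "complete" data with one linearly dependent feature: x3 = x0 + 0.5*x1.
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + 0.5 * X[:, 1]

# Stand-in for the trained autoassociative network: reconstruct each record
# from its projection onto the top principal components of the training data
# (the chapter trains an autoassociative MLP instead).
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:3]                                  # the toy data lie in a 3-D subspace

def reconstruct(x):
    return mu + (x - mu) @ V.T @ V

# A record with a missing last entry; the true value is 1.0 + 0.5*2.0 = 2.0.
x = np.array([1.0, 2.0, -0.5, np.nan])

def recon_error(value):
    trial = x.copy()
    trial[3] = value
    return np.sum((trial - reconstruct(trial)) ** 2)

# Simple genetic algorithm over candidate values for the missing entry:
# elitist selection, averaging crossover, decaying Gaussian mutation.
pop = rng.normal(0.0, 3.0, size=50)
for gen in range(60):
    order = np.argsort([recon_error(v) for v in pop])
    parents = pop[order[:10]]               # keep the 10 fittest candidates
    mates_a = parents[rng.integers(0, 10, size=40)]
    mates_b = parents[rng.integers(0, 10, size=40)]
    children = (mates_a + mates_b) / 2 + rng.normal(0.0, 0.3 * 0.93 ** gen, 40)
    pop = np.concatenate([parents, children])

estimate = pop[np.argmin([recon_error(v) for v in pop])]
print(round(float(estimate), 2))            # converges near the true value 2.0
```

The key design point mirrors the chapter's argument: the network is never asked to map inputs to a label in one direction; instead, every field participates symmetrically in the reconstruction error, so errors are spread across all variables rather than concentrated on one output.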


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
A. P. Kamoutsis ◽  
A. S. Matsoukis ◽  
K. I. Chronopoulos

Air temperature (T) data were estimated for the regions of Nea Smirni, Penteli, and Peristeri in the greater Athens area, Greece, using the T data of a reference station in Penteli. Two artificial neural network approaches were developed: the first, MLP1, used T as the only input parameter, while the second, MLP2, additionally used the time of the corresponding T measurement. One site in Nea Smirni, three sites in Penteli (two of them located on Pentelikon mountain), and one site in Peristeri were selected based on differences in land use and altitude. T data were monitored at each site between December 1, 2009, and November 30, 2010. In this work the two extreme seasons (winter and summer) are presented. The results showed that the MLP2 model was better (lower mean absolute error, MAE) than MLP1 for T estimation in both winter and summer, independently of the examined region. In general, the MLP1 and MLP2 models provided more accurate T estimations in the regions located at a greater distance from the reference station (Nea Smirni and Peristeri) than on the nearby Pentelikon mountain. These greater-distance T estimations were, in most cases, better in winter than in summer.
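The MLP2 setup — reference-station temperature plus the time of the measurement as inputs — can be sketched with a small feed-forward network trained by gradient descent. Everything here is synthetic and hypothetical: the diurnal curves, the site offset, the cyclic hour encoding, the network size, and the learning rate are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly data: a reference station with a diurnal cycle, and a
# target site that differs by a scale, an offset, and a time-dependent term.
hours = rng.uniform(0.0, 24.0, size=500)
t_ref = 15.0 + 8.0 * np.sin(2 * np.pi * (hours - 9.0) / 24.0)
t_ref += rng.normal(0.0, 0.3, size=500)
t_site = 0.85 * t_ref - 1.5 + 1.2 * np.cos(2 * np.pi * hours / 24.0)
t_site += rng.normal(0.0, 0.3, size=500)

# MLP2-style inputs: reference T plus a cyclic encoding of the hour.
X = np.column_stack([t_ref,
                     np.sin(2 * np.pi * hours / 24.0),
                     np.cos(2 * np.pi * hours / 24.0)])
X = (X - X.mean(0)) / X.std(0)
my, sy = t_site.mean(), t_site.std()
y = ((t_site - my) / sy).reshape(-1, 1)

Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

# One-hidden-layer MLP trained with full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(4000):
    H = np.tanh(Xtr @ W1 + b1)
    pred = H @ W2 + b2
    g = 2.0 * (pred - ytr) / len(ytr)       # d(MSE)/d(pred)
    dW2, db2 = H.T @ g, g.sum(0)
    dZ = (g @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
    dW1, db1 = Xtr.T @ dZ, dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Evaluate on held-out hours, back in degrees Celsius.
pred_te = (np.tanh(Xte @ W1 + b1) @ W2 + b2) * sy + my
mae = np.mean(np.abs(pred_te - (yte * sy + my)))
print(f"test MAE: {mae:.2f} degC")
```

Dropping the sin/cos columns from `X` turns this sketch into the MLP1 configuration, which is the comparison the study reports.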


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 36
Author(s):  
Tessfu Geteye Fantaye ◽  
Junqing Yu ◽  
Tulu Tilahun Hailu

Deep neural networks (DNNs) have achieved great success in acoustic modeling for speech recognition tasks. Among these networks, the convolutional neural network (CNN) is effective at representing the local properties of speech formants. However, CNNs are not well suited to modeling the long-term context dependencies between speech signal frames. Recurrent neural networks (RNNs) have recently shown great ability to model such long-term context dependencies. However, the performance of RNNs on low-resource speech recognition tasks is poor, and can even be worse than that of conventional feed-forward neural networks. Moreover, these networks often overfit severely on the training corpus in low-resource speech recognition tasks. This paper presents our contributions toward combining CNNs and conventional RNNs with gate, highway, and residual networks to reduce these problems. The optimal neural network structures and training strategies for the proposed neural network models are explored. Experiments were conducted on the Amharic and Chaha datasets, as well as on the limited language packages (10 h) of the benchmark datasets released under the Intelligence Advanced Research Projects Activity (IARPA) Babel Program. The proposed neural network models achieve 0.1–42.79% relative performance improvements over their corresponding feed-forward DNN, CNN, bidirectional RNN (BRNN), and bidirectional gated recurrent unit (BGRU) baselines across six language collections. These approaches are promising candidates for developing better-performing acoustic models for low-resource speech recognition tasks.
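One of the combinations described above — a gated recurrent unit wrapped with a residual connection — can be sketched in a forward-only form. This is a generic illustration of the mechanism, not the paper's architecture: the dimensions, random parameters, and the plain input-plus-output residual are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16                                       # feature/state dimension

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Random GRU parameters; input and recurrent weights share dimension d so
# that a residual connection can simply add the layer input back.
P = {k: rng.normal(0.0, 0.1, size=(d, d))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

def gru_step(x, h):
    """One GRU step: update gate z, reset gate r, candidate state hh."""
    z = sigmoid(x @ P["Wz"] + h @ P["Uz"])
    r = sigmoid(x @ P["Wr"] + h @ P["Ur"])
    hh = np.tanh(x @ P["Wh"] + (r * h) @ P["Uh"])
    return (1.0 - z) * h + z * hh

def residual_gru_layer(xs):
    """Run the GRU over a sequence, adding the input back at each frame."""
    h = np.zeros(d)
    out = []
    for x in xs:
        h = gru_step(x, h)
        out.append(h + x)                    # residual connection
    return np.stack(out)

xs = rng.normal(size=(10, d))                # a 10-frame toy "utterance"
ys = residual_gru_layer(xs)
print(ys.shape)                              # (10, 16)
```

The gates let the cell decide how much past context to carry forward, while the residual path gives gradients a direct route through deep stacks of such layers — the two properties the paper combines to stabilize training on small corpora.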

