Parallel Approach for Time Series Analysis with General Regression Neural Networks

Author(s):  
J.C. Cuevas-Tello ◽  
R.A. González-Grimaldo ◽  
O. Rodríguez-González ◽  
H.G. Pérez-González ◽  
O. Vital-Ochoa

The accuracy of time delay estimation given pairs of irregularly sampled time series is of great relevance in astrophysics. However, computational time also matters, because large data sets must be studied. Besides introducing a new approach for time delay estimation, this paper presents a parallel approach to obtain a fast algorithm for time delay estimation. The neural network architecture that we use is the General Regression Neural Network (GRNN). For the parallel approach, we use the Message Passing Interface (MPI) on a Beowulf-type cluster and on a Cray supercomputer, and we also use the Compute Unified Device Architecture (CUDA™) language on Graphics Processing Units (GPUs). We demonstrate that, with our approach, fast algorithms can be obtained for time delay estimation on large data sets with the same accuracy as state-of-the-art methods.
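
As a rough illustration of the regression core named in the abstract, the sketch below implements a GRNN as Gaussian-kernel-weighted averaging in plain NumPy. This is a minimal single-process sketch: the smoothing width `sigma` and the toy data are illustrative assumptions, and the paper's actual time-delay search and its MPI/CUDA parallelization are not reproduced here.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """General Regression Neural Network (Specht, 1991): a
    Nadaraya-Watson kernel-weighted average of training targets."""
    # Squared distance between every query point and every training point
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    # Gaussian kernel weights; sigma is the single smoothing parameter
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Kernel-weighted average of the targets for each query point
    return (w @ y_train) / w.sum(axis=1)

# Toy usage: smooth an irregularly sampled, noisy time series
t = np.sort(np.random.uniform(0, 10, 80))   # irregular sampling times
y = np.sin(t) + 0.1 * np.random.randn(80)   # noisy observations
y_hat = grnn_predict(t, y, np.linspace(0, 10, 200), sigma=0.3)
```

Because a GRNN has no iterative training phase, evaluating it over many candidate time delays is naturally data-parallel, which is one reason MPI and GPU ports of such estimators are attractive.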

2018 ◽  
Vol 272 ◽  
pp. 178-188 ◽  
Author(s):  
Xinyi Zhang ◽  
Haoping Wang ◽  
Yang Tian ◽  
Laurent Peyrodie ◽  
Xikun Wang

Fractals ◽  
2003 ◽  
Vol 11 (04) ◽  
pp. 377-390 ◽  
Author(s):  
Darryl Veitch ◽  
Patrice Abry ◽  
Murad S. Taqqu

A method is developed for the automatic detection of the onset of scaling for long-range dependent (LRD) time series and other asymptotically scale-invariant processes. Based on wavelet techniques, it provides the lower cutoff scale for the regression that yields the scaling exponent. The method detects the onset of scaling through the dramatic improvement of a goodness-of-fit statistic taken as a function of this lower cutoff scale. It relies on qualitative features of the goodness-of-fit statistic and on features of the wavelet analysis. The method is easy to implement, appropriate for large data sets, and highly robust. It is tested against 34 time series models and found to perform very well. Examples involving telecommunications data are presented.
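
A rough sketch of the scan described above, assuming the PyWavelets (`pywt`) package: compute the log2 variance of the detail coefficients at each octave, then, for each candidate lower cutoff j1, fit a weighted line over scales j1..J and score it with a chi-square-style goodness-of-fit statistic. The weights, the statistic, and the "pick the best fit" rule below are simplifications, not the paper's exact estimator.

```python
import numpy as np
import pywt
from scipy import stats

def onset_of_scaling(x, wavelet="db3"):
    """Scan the lower cutoff scale j1 and keep the one whose
    log2(variance)-vs-scale regression fits best (a simplified
    goodness-of-fit scan)."""
    details = pywt.wavedec(x, wavelet)[1:][::-1]   # finest scale (j=1) first
    J = len(details)
    yj = np.array([np.log2(np.mean(d ** 2)) for d in details])
    nj = np.array([d.size for d in details], dtype=float)
    j = np.arange(1, J + 1)
    best = None
    for j1 in range(1, J - 1):                     # need >= 3 scales to fit
        sel = slice(j1 - 1, J)
        w = np.sqrt(nj[sel])                       # more coefficients, more weight
        slope, intercept = np.polyfit(j[sel], yj[sel], 1, w=w)
        resid = yj[sel] - (slope * j[sel] + intercept)
        q = float(np.sum((w * resid) ** 2))        # chi-square-style statistic
        pval = stats.chi2.sf(q, df=yj[sel].size - 2)
        if best is None or pval > best[1]:
            best = (j1, pval, slope)               # slope estimates the exponent
    return best
```

On long-range dependent data the returned slope estimates the scaling exponent (for fractional Gaussian noise, roughly H ≈ (slope + 1) / 2), but this should be read as a toy, not a calibrated estimator.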


2013 ◽  
Vol 441 ◽  
pp. 666-669 ◽  
Author(s):  
You Jun Yue ◽  
Ying Dong Yao ◽  
Hui Zhao ◽  
Hong Jun Wang

Because of economic and technological constraints, small and medium-sized converters cannot adopt sublance detection technology to improve the control precision of the endpoint. To address this problem, a method combining pedigree clustering with neural networks is studied: pedigree clustering divides the large data set into several categories so that similarity within each category is relatively high, a neural network model is then trained for each category, and predictions are finally made with the per-category models. Simulation results show that the multi-neural-network model yields better prediction results.
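
To make the cluster-then-train structure concrete, here is a minimal sketch assuming scikit-learn, with hierarchical (agglomerative) clustering standing in for the pedigree clustering and one small MLP per category; the converter endpoint data and the paper's model settings are not reproduced.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neural_network import MLPRegressor

def fit_cluster_models(X, y, n_clusters=3):
    """Split the training set by hierarchical clustering, then
    fit one neural model per cluster (multi-neural-network model)."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
    centroids, models = [], []
    for k in range(n_clusters):
        mask = labels == k
        centroids.append(X[mask].mean(axis=0))
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0)
        models.append(mlp.fit(X[mask], y[mask]))
    return np.array(centroids), models

def predict(X_new, centroids, models):
    """Route each new sample to the model of its nearest centroid."""
    d = ((X_new[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    nearest = d.argmin(axis=1)
    return np.array([models[k].predict(x[None, :])[0]
                     for k, x in zip(nearest, X_new)])

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=120)
centroids, models = fit_cluster_models(X, y)
print(predict(X[:5], centroids, models))
```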


2021 ◽  
Vol 29 (5) ◽  
pp. 7904
Author(s):  
Xiaojing Gao ◽  
Wei Zhu ◽  
Qi Yang ◽  
Deze Zeng ◽  
Lei Deng ◽  
...  

2018 ◽  
Vol 25 (3) ◽  
pp. 655-670 ◽  
Author(s):  
Tsung-Wei Ke ◽  
Aaron S. Brewster ◽  
Stella X. Yu ◽  
Daniela Ushizima ◽  
Chao Yang ◽  
...  

A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.
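
As a generic sketch of the kind of network the abstract describes, the PyTorch model below labels a detector image as "hit" (Bragg spots present) or "miss"; the architecture, input size, and two-class setup are illustrative assumptions, not the paper's trained network.

```python
import torch
import torch.nn as nn

class SpotClassifier(nn.Module):
    """Tiny CNN for hit/miss screening of diffraction images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # global pooling: any input size
        )
        self.head = nn.Linear(32, 2)        # two classes: hit / miss

    def forward(self, x):                   # x: (batch, 1, H, W) patches
        return self.head(self.features(x).flatten(1))

# Toy usage on random "images"
model = SpotClassifier()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)                         # torch.Size([4, 2])
```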

