Network Structure Optimization
Recently Published Documents


TOTAL DOCUMENTS: 21 (five years: 6)
H-INDEX: 3 (five years: 0)

2021 ◽ Author(s): Wei Du, Gang Li, Xiaochen He

Abstract Network structure plays an important role in the natural and social sciences, and optimizing a network's structure to achieve specified goals has been a major research focus. In this paper, we define structural optimization as minimizing the network's average path length (APL) by adding edges, and we propose a memetic algorithm to find the set of added edges that minimizes APL. Experiments show that the proposed algorithm solves this problem efficiently. Further, we find that APL ultimately decreases linearly as edges are added, a behavior affected by the network diameter.
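The paper's memetic algorithm is not spelled out in the abstract; as a hypothetical illustration of the underlying problem, the sketch below uses a much simpler greedy baseline: repeatedly add the single non-edge that lowers APL the most. APL is computed by breadth-first search from every node.

```python
from itertools import combinations
from collections import deque

def average_path_length(adj):
    """APL over all ordered node pairs, via one BFS per source node."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def greedy_add_edges(adj, k):
    """Add k edges; each step picks the non-edge giving the lowest APL."""
    n = len(adj)
    for _ in range(k):
        best, best_apl = None, float("inf")
        for u, v in combinations(range(n), 2):
            if v in adj[u]:
                continue
            adj[u].add(v); adj[v].add(u)      # tentatively add (u, v)
            apl = average_path_length(adj)
            adj[u].remove(v); adj[v].remove(u)
            if apl < best_apl:
                best, best_apl = (u, v), apl
        u, v = best
        adj[u].add(v); adj[v].add(u)
    return adj

# Path graph 0-1-2-3-4 has APL 2.0; one greedy edge (0, 4) closes the
# cycle and drops APL to 1.5.
adj = {i: set() for i in range(5)}
for i in range(4):
    adj[i].add(i + 1); adj[i + 1].add(i)
before = average_path_length(adj)
greedy_add_edges(adj, 1)
after = average_path_length(adj)
```

A memetic algorithm would replace the exhaustive greedy step with a population of candidate edge sets refined by local search, which scales far better on large networks.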


2021 ◽ Vol 13 (9) ◽ pp. 1761 ◽ Author(s): Deshan Feng, Xiangyu Wang, Xun Wang, Siyuan Ding, Hua Zhang

The high-fidelity attenuation of random ground penetrating radar (GPR) noise is important for enhancing the signal-to-noise ratio (SNR). In this paper, a novel network structure for convolutional denoising autoencoders (CDAEs) is proposed to resolve several problems in the noise attenuation process, including overfitting, the size of the local receptive field, and representational bottlenecks and vanishing gradients in deep learning; the approach also significantly improves noise attenuation performance. We describe the noise attenuation process of conventional CDAEs and present the output feature map of each convolutional layer to analyze the role of the convolutional layers and their impact on GPR data. We then focus on the problems of overfitting, the local receptive field size, and the occurrence of representational bottlenecks and vanishing gradients. Subsequently, a network structure optimization strategy combining a dropout regularization layer, an atrous convolution layer, and a residual-connection structure is proposed, namely convolutional denoising autoencoders with network structure optimization (CDAEsNSO). It comprises an intermediate version, atrous-dropout CDAEs (AD-CDAEs), and a final version, residual-connection CDAEs (ResCDAEs), both of which effectively improve on conventional CDAEs. Finally, CDAEsNSO is applied to attenuate noise in the H-beam model, the tunnel lining model, and field pipeline data, confirming that the algorithm adapts well to both synthetic and field data. The experiments verify that CDAEsNSO not only effectively attenuates strong Gaussian noise, Gaussian spike impulse noise, and mixed noise, but also causes less damage to the original waveform data and maintains high-fidelity information.
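The atrous (dilated) convolution layer named above enlarges the local receptive field without adding parameters. As a minimal sketch, not the paper's network, the pure-Python function below applies a 1-D "valid" convolution with a dilated kernel; the signal, kernel, and dilation values are illustrative assumptions.

```python
def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution (cross-correlation) with a dilated kernel.

    Effective receptive field: (len(w) - 1) * dilation + 1 input samples,
    so stacking dilations 1, 2, 4, ... widens coverage with the same
    number of weights -- the role the atrous layer plays in CDAEsNSO.
    """
    span = (len(w) - 1) * dilation + 1
    return [
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(len(x) - span + 1)
    ]

signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0]
smooth = [1 / 3] * 3                              # 3-tap averaging kernel
y1 = dilated_conv1d(signal, smooth, dilation=1)   # spans 3 samples -> 6 outputs
y2 = dilated_conv1d(signal, smooth, dilation=2)   # spans 5 samples -> 4 outputs
```

A residual connection, the other structural fix the paper adds, would simply sum a layer's input back onto its output (`y[i] + x[i]`), which keeps gradients flowing through deep stacks.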


2019 ◽ Vol 4 (1) ◽ pp. 84 ◽ Author(s): TANG Yin, YANG Jin Yu, CHEN Jian

<p><em>During the training of an LSTM, prediction accuracy is affected by a variety of factors, including the selection of training samples, the network structure, the optimization algorithm, and the stock market status. This paper conducts a systematic study of several factors influencing LSTM training in the context of time-series prediction. The experiments use the Shanghai and Shenzhen 300 constituent stocks from 2006 to 2017 as samples. The factors studied include indicator sampling, sample length, network structure, optimization method, and data from bull and bear markets; the experiments also compare the effects of PCA, dropout, and L2 regularization on prediction accuracy and efficiency. Indicator sampling, number of samples, network structure, optimization techniques, and PCA are each found to have their own scope of application. Further, dropout and L2 regularization are found to improve accuracy. The experiments cover most of the relevant factors, though the results still need to be validated against overseas data. This paper is of significance for feature and parameter selection in the LSTM training process.</em></p>
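The abstract reports that dropout and L2 regularization improved accuracy. As a hedged sketch of what those two techniques compute, independent of the paper's LSTM, the snippet below implements inverted dropout and an L2 penalty term in plain Python; the rate, weights, and lambda are illustrative assumptions.

```python
import random

def inverted_dropout(activations, rate, rng):
    """Inverted dropout: zero each unit with probability `rate` and scale
    survivors by 1/(1-rate) so the expected activation is unchanged
    (at inference time the layer is then simply the identity)."""
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

def l2_penalty(weights, lam):
    """L2 regularization term lam * sum(w^2), added to the training loss
    to discourage large weights."""
    return lam * sum(w * w for w in weights)

rng = random.Random(0)
h = [1.0] * 10
dropped = inverted_dropout(h, rate=0.5, rng=rng)   # each entry is 0.0 or 2.0
penalty = l2_penalty([0.5, -0.5], lam=0.01)        # 0.01 * 0.5 = 0.005
```

In a framework such as Keras or PyTorch these correspond to a dropout layer after the LSTM and a weight-decay term in the optimizer, respectively.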

