Applying Hierarchical Bayesian Neural Network in Failure Time Prediction

2012 ◽  
Vol 2012 ◽  
pp. 1-11 ◽  
Author(s):  
Ling-Jing Kao ◽  
Hsin-Fen Chen

With rapid technological development and improvement, product failure time prediction has become an even harder task because only a few failures are recorded in product life tests. Classical statistical models rely on asymptotic theory and cannot guarantee that the estimator has the finite-sample property. To solve this problem, we apply the hierarchical Bayesian neural network (HBNN) approach to predict the failure time and utilize the Gibbs sampler of Markov chain Monte Carlo (MCMC) to estimate the model parameters. In the proposed method, a hierarchical structure is specified to study the heterogeneity among products. Engineers can use the heterogeneity estimates to identify the causes of quality differences and further enhance product quality. To demonstrate the effectiveness of the proposed hierarchical Bayesian neural network model, its prediction performance is evaluated using multiple performance measurement criteria. Sensitivity analysis of the proposed model is also conducted using different numbers of hidden nodes and training sample sizes. The results show that HBNN can provide not only the predictive distribution but also heterogeneous parameter estimates for each path.
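
As a rough illustration of the approach described above, the following minimal sketch fits a one-hidden-layer Bayesian network to a toy degradation path, with a random-walk Metropolis sampler standing in for the paper's Gibbs/MCMC scheme; the data, priors, and network size are placeholders rather than the authors' specification.

```python
# Minimal sketch (not the authors' code): Bayesian inference for a one-hidden-layer
# network on a toy degradation path via random-walk Metropolis, standing in for the
# Gibbs-within-MCMC scheme described in the abstract. Data and sizes are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# toy data: inspection time -> degradation measurement for one product path
t = np.linspace(0.0, 1.0, 30)[:, None]
y = 2.0 * np.tanh(3.0 * t[:, 0] - 1.5) + rng.normal(0.0, 0.1, 30)

H = 5  # hidden nodes

def unpack(theta):
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return W1, b1, W2, b2

def log_post(theta, sigma=0.1, tau=1.0):
    """Gaussian likelihood plus independent N(0, tau^2) priors on all weights."""
    W1, b1, W2, b2 = unpack(theta)
    f = np.tanh(t @ W1 + b1) @ W2 + b2
    log_lik = -0.5 * np.sum((y - f[:, 0]) ** 2) / sigma ** 2
    log_prior = -0.5 * np.sum(theta ** 2) / tau ** 2
    return log_lik + log_prior

# random-walk Metropolis over the stacked weight vector
theta = rng.normal(0.0, 0.1, 3 * H + 1)
lp = log_post(theta)
samples = []
for it in range(10000):
    prop = theta + rng.normal(0.0, 0.02, theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 2000 and it % 10 == 0:
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean of first-layer weights:", samples[:, :H].mean(axis=0))
```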

2020 ◽  
Vol 10 (16) ◽  
pp. 5622
Author(s):  
Zitong Zhou ◽  
Yanyang Zi ◽  
Jingsong Xie ◽  
Jinglong Chen ◽  
Tong An

The escalator is one of the most popular means of transport in public places, so its safe operation is essential. Accurately predicting the escalator failure time can provide scientific guidance for maintenance and help avoid accidents. However, failure data are short, non-uniformly sampled, and subject to random interference, which makes data modeling difficult. Therefore, a strategy that combines data quality enhancement with deep neural networks is proposed for escalator failure time prediction in this paper. First, a comprehensive selection indicator (CSI) that describes the stationarity and complexity of a time series is established to select inherently good failure sequences. According to the CSI, failure sequences with high stationarity and low complexity are selected as reference sequences, and the quality of the other failure sequences is enhanced by dynamic time warping preprocessing against these references. Second, a deep neural network combining the advantages of a convolutional neural network and long short-term memory is built to train on and predict the quality-enhanced failure sequences. Finally, the failure-recall records of six escalators used for six years are analyzed with the proposed method as a case study, and the results show that the proposed method can reduce the average prediction error of failure time to less than one month.
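
The abstract does not give the exact CSI formula, so the sketch below shows one plausible way to combine a stationarity score (based on an augmented Dickey-Fuller p-value) with a complexity score (sample entropy) to rank candidate failure sequences and pick a reference for dynamic time warping; the weighting is an illustrative assumption.

```python
# Hedged sketch: scoring candidate failure sequences by a combined stationarity /
# complexity indicator, in the spirit of the CSI described above. The paper's exact
# CSI definition is not given in the abstract, so this weighting is illustrative only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain O(n^2) sample entropy: lower values indicate lower complexity."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templates)   # matching pairs, excluding self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def csi(x, w=0.5):
    """Higher score = more stationary and less complex (a better reference sequence)."""
    stationarity = 1.0 - adfuller(x)[1]           # 1 minus the ADF test p-value
    complexity = sample_entropy(x)
    return w * stationarity + (1.0 - w) / (1.0 + complexity)

# rank candidate failure sequences and pick the best one as the DTW reference
sequences = [np.cumsum(np.random.default_rng(k).normal(1.0, 0.3, 40)) for k in range(6)]
reference = max(sequences, key=csi)
```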


2018 ◽  
Vol 10 (11) ◽  
pp. 168781401881105
Author(s):  
Shengliang Lu ◽  
Jie Zhang ◽  
Shirong Zhou ◽  
Ancha Xu

Sea reclamation is one of the most efficient ways to alleviate the shortage of land resources caused by population growth, and the axial ultimate bearing capacity of piles has become one of the critical factors for evaluating the performance of soil-layer reclamation work. Many models are used to analyze the testing data. However, these models cannot describe the population mean bearing capacity and the unit-to-unit variation simultaneously, nor can they give the reliability of predicting the axial ultimate bearing capacity of piles; thus, they are rarely used in practice. In this article, we propose a mixed-effects model that overcomes these drawbacks. A hierarchical Bayesian framework is developed to estimate the model parameters using Gibbs sampling. The proposed model is applied to a real pile dataset collected in a silt-rock layer area, and the mean axial bearing capacities are predicted under different reliability levels.
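
The sketch below illustrates the kind of hierarchical model and Gibbs sampler described above, assuming a simplified normal random-intercept specification on log bearing capacity (the paper's actual mixed-effects model may differ): the population mean captures the mean bearing capacity and the random intercepts capture pile-to-pile (unit-to-unit) variation.

```python
# Minimal sketch (assumed simplification of the paper's model): a normal
# random-intercept mixed-effects model fitted with a conjugate Gibbs sampler.
import numpy as np

rng = np.random.default_rng(1)

# toy data: m piles, n load-test measurements each (placeholder values, log scale)
m, n = 8, 5
true_b = rng.normal(0.0, 0.3, m)
y = 7.0 + true_b[:, None] + rng.normal(0.0, 0.2, (m, n))

mu, sigma2, tau2 = 0.0, 1.0, 1.0
b = np.zeros(m)
a0, b0 = 2.0, 1.0                     # weak inverse-gamma priors on both variances
draws = []

for it in range(5000):
    # random intercepts b_i | rest
    prec = n / sigma2 + 1.0 / tau2
    mean = ((y - mu).sum(axis=1) / sigma2) / prec
    b = rng.normal(mean, np.sqrt(1.0 / prec))
    # population mean mu | rest (vague N(0, 100^2) prior)
    prec_mu = y.size / sigma2 + 1.0 / 100.0 ** 2
    mean_mu = ((y - b[:, None]).sum() / sigma2) / prec_mu
    mu = rng.normal(mean_mu, np.sqrt(1.0 / prec_mu))
    # variances | rest (conjugate inverse-gamma updates)
    resid = y - mu - b[:, None]
    sigma2 = 1.0 / rng.gamma(a0 + y.size / 2, 1.0 / (b0 + 0.5 * (resid ** 2).sum()))
    tau2 = 1.0 / rng.gamma(a0 + m / 2, 1.0 / (b0 + 0.5 * (b ** 2).sum()))
    if it > 1000:
        draws.append((mu, sigma2, tau2))

mu_s, sig_s, tau_s = np.array(draws).T
print("posterior mean capacity (log scale):", mu_s.mean())
print("unit-to-unit standard deviation:", np.sqrt(tau_s).mean())
```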


Author(s):  
Duowei Tang ◽  
Peter Kuppens ◽  
Luc Geurts ◽  
Toon van Waterschoot

Amongst the various characteristics of a speech signal, the expression of emotion is one of the characteristics that exhibits the slowest temporal dynamics. Hence, a performant speech emotion recognition (SER) system requires a predictive model that is capable of learning sufficiently long temporal dependencies in the analysed speech signal. Therefore, in this work, we propose a novel end-to-end neural network architecture based on the concept of dilated causal convolution with context stacking. Firstly, the proposed model consists only of parallelisable layers and is hence suitable for parallel processing, while avoiding the inherent lack of parallelisability occurring with recurrent neural network (RNN) layers. Secondly, the design of a dedicated dilated causal convolution block allows the model to have a receptive field as large as the input sequence length, while maintaining a reasonably low computational cost. Thirdly, by introducing a context stacking structure, the proposed model is capable of exploiting long-term temporal dependencies hence providing an alternative to the use of RNN layers. We evaluate the proposed model in SER regression and classification tasks and provide a comparison with a state-of-the-art end-to-end SER model. Experimental results indicate that the proposed model requires only 1/3 of the number of model parameters used in the state-of-the-art model, while also significantly improving SER performance. Further experiments are reported to understand the impact of using various types of input representations (i.e. raw audio samples vs log mel-spectrograms) and to illustrate the benefits of an end-to-end approach over the use of hand-crafted audio features. Moreover, we show that the proposed model can efficiently learn intermediate embeddings preserving speech emotion information.
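
To make the dilated causal convolution idea concrete, here is a small PyTorch sketch of a residual stack of causal convolutions with exponentially growing dilation, whose receptive field covers long input spans at modest cost; the context-stacking structure and the SER output heads from the paper are omitted, and all sizes are placeholders.

```python
# Illustrative sketch (not the authors' released code): a stack of dilated causal
# 1-D convolutions whose receptive field grows exponentially with depth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding only => causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class DilatedCausalBlock(nn.Module):
    """Residual stack of causal convolutions with dilations 1, 2, 4, ..."""
    def __init__(self, channels=64, kernel_size=3, n_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(
            CausalConv1d(channels, channels, kernel_size, dilation=2 ** i)
            for i in range(n_layers)
        )
        # receptive field of the stack: 1 + (k - 1) * (2^n_layers - 1)
        self.receptive_field = 1 + (kernel_size - 1) * (2 ** n_layers - 1)

    def forward(self, x):
        for layer in self.layers:
            x = x + torch.tanh(layer(x))                 # residual connection
        return x

block = DilatedCausalBlock()
feats = torch.randn(2, 64, 4000)                         # e.g. frame-level features
print(block(feats).shape, "receptive field:", block.receptive_field)
```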


2021 ◽  
Vol 50 (4) ◽  
pp. 656-673
Author(s):  
Chhayarani Ram Kinkar ◽  
Yogendra Kumar Jain

This paper proposes a new speech command recognition model for engineering applications with limited resources. The proposed model is built on a Convolutional Recurrent Neural Network (CRNN). Using a CRNN instead of a Convolutional Neural Network (CNN) reduces the number of model parameters and the memory requirement to fit the resource constraints. Furthermore, we insert transmute and curtailment layers between the layers of the CRNN, which further reduces the model parameters and floating-point operations to half of what the CRNN requires. The proposed model is tested on Google's speech command dataset. The results show that the proposed CRNN model requires one-third as many parameters as the CNN model. The number of parameters of the CRNN model is further reduced by 45%, and the floating-point operations by 2% to 12%, across different recognition tasks. The recognition accuracy of the proposed model is 96% on Google's speech command dataset and 89% on laboratory recordings.
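
For readers who want a concrete baseline, the sketch below is a generic small CRNN for keyword classification on log-mel spectrograms; the paper's transmute and curtailment layers are not specified in the abstract and are therefore not reproduced, so this is not the proposed architecture, only a starting point for comparing parameter counts.

```python
# Baseline sketch only: a small convolutional-recurrent network for speech command
# classification on log-mel spectrograms. Layer sizes and the 35-class output are
# placeholders; the paper's transmute and curtailment layers are omitted.
import torch
import torch.nn as nn

class SmallCRNN(nn.Module):
    def __init__(self, n_mels=40, n_classes=35, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                        # shrink the mel axis only
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                                # x: (batch, 1, n_mels, frames)
        z = self.conv(x)                                 # (batch, 32, n_mels//4, frames)
        z = z.permute(0, 3, 1, 2).flatten(2)             # (batch, frames, 32 * n_mels//4)
        _, h = self.gru(z)                               # last hidden state
        return self.fc(h[-1])

model = SmallCRNN()
print(sum(p.numel() for p in model.parameters()), "parameters")
print(model(torch.randn(4, 1, 40, 101)).shape)           # logits: (4, 35)
```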


2004 ◽  
Vol 3 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Yulan Liang ◽  
Arpad G Kelemen

Several important issues must be addressed in the analysis of gene expression temporal patterns: first, the correlation structure of multidimensional temporal data; second, the numerous sources of variation combined with high noise levels; and last, the fact that gene expression mostly involves heterogeneous, multiple dynamic patterns. We propose a Hierarchical Bayesian Neural Network model to account for the input correlations of time-course gene array data. The variations in absolute gene expression levels and the noise can be estimated within the hierarchical Bayesian setting. The network parameters and the hyperparameters are simultaneously optimized with Markov chain Monte Carlo simulation. Results show that the proposed model and algorithm capture the dynamic features of gene expression temporal patterns well despite the high noise levels, the highly correlated inputs, the overwhelming interactions, and other complex features typically present in microarray data. We test and demonstrate the proposed models on yeast cell cycle temporal datasets. The performance of the Hierarchical Bayesian Neural Network is compared to other popular machine learning methods such as Nearest Neighbor, Support Vector Machine, and Self-Organizing Map.
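
For context, one standard (assumed, generic) way to write such a hierarchical Bayesian setting for a neural network is to place group-level hyperpriors over the weight variances and the observation noise, so that MCMC draws the weights and hyperparameters jointly; this is not necessarily the authors' exact prior:

```latex
y_t \mid \mathbf{x}_t, \mathbf{w}, \sigma^2 \sim \mathcal{N}\!\left(f(\mathbf{x}_t;\mathbf{w}),\, \sigma^2\right), \qquad
w_{jk} \mid \tau_g^2 \sim \mathcal{N}(0,\, \tau_g^2), \qquad
\tau_g^2 \sim \mathrm{Inv\text{-}Gamma}(\alpha_\tau, \beta_\tau), \qquad
\sigma^2 \sim \mathrm{Inv\text{-}Gamma}(\alpha_\sigma, \beta_\sigma)
```

where f is the network, g indexes weight groups (for example, input-to-hidden versus hidden-to-output weights), and the inverse-gamma hyperpriors let the noise level and the scale of each weight group be inferred from the data rather than fixed in advance.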


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tahere Yaghoobi

Purpose – The Gompertz curve has been used in industry to estimate the number of remaining software faults. This paper aims to introduce a family of distributions for fitting software failure times that subsumes the Gompertz distribution.
Design/methodology/approach – The mean value function of the corresponding non-homogeneous Poisson process software reliability growth model is presented. Model parameters are estimated by the method of maximum likelihood. The new model is compared with eight models that use the well-known failure time distributions exponential, gamma, Rayleigh, Weibull, Gompertz, half-normal, log-logistic, and lognormal, according to several statistical and informational criteria. Moreover, a Shannon entropy approach is used for ranking and model selection.
Findings – Numerical experiments are implemented on five real software failure datasets ranging from small to large. The results show that the proposed model is promising and, in particular, outperforms the Gompertz model on all considered datasets.
Originality/value – The proposed model provides optimized reliability estimation.
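
As an illustration of the underlying fitting procedure (not the proposed generalised family), the sketch below fits a plain Gompertz NHPP software reliability growth model by maximum likelihood, using the common parameterisation m(t) = a·exp(-b·exp(-c·t)); the failure times are synthetic placeholders, not one of the five real datasets.

```python
# Hedged sketch: maximum-likelihood fitting of an NHPP software reliability growth
# model with a Gompertz mean value function. The intensity is the derivative of m(t),
# and the NHPP log-likelihood is sum(log intensity at failure times) - m(T).
import numpy as np
from scipy.optimize import minimize

t_fail = np.sort(np.random.default_rng(3).uniform(0, 100, 40))   # fake failure times
T = 100.0                                                          # end of observation

def m(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def intensity(t, a, b, c):
    return a * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))

def neg_log_lik(params):
    a, b, c = np.exp(params)                 # positivity via log-parameterisation
    lam = intensity(t_fail, a, b, c)
    if np.any(lam <= 0):
        return np.inf
    return -(np.sum(np.log(lam)) - m(T, a, b, c))

res = minimize(neg_log_lik, x0=np.log([50.0, 5.0, 0.05]), method="Nelder-Mead")
a_hat, b_hat, c_hat = np.exp(res.x)
print("expected total faults a:", a_hat)
print("expected remaining faults:", a_hat - m(T, a_hat, b_hat, c_hat))
```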


Author(s):  
Xin-Yu Wu ◽  
Hai-Lin Zhou

In this paper, we introduce a triple-threshold leverage stochastic volatility (TTLSV) model for financial return time series. The main feature of the model is that it allows asymmetries in the leverage effect as well as in the mean and volatility. The asymmetric effect is modeled by a threshold nonlinear structure in which the two regimes are determined by the sign of past returns. The model parameters are estimated by maximum likelihood (ML) based on the efficient importance sampling (EIS) technique. Monte Carlo simulations are presented to examine the accuracy and finite-sample properties of the proposed methodology, and the EIS-based ML (EIS-ML) method shows good performance in these simulations. The proposed model and methodology are applied to two stock market indices for China. Strong evidence of mean and volatility asymmetries is detected in the Chinese stock market, and asymmetries in the volatility persistence and the leverage effect are also discovered. The log-likelihood and Akaike information criterion (AIC) favor the proposed model, and model diagnostics suggest that it captures the key features of the data relatively well. Finally, we compare models in a Value at Risk (VaR) study; the results show that the proposed model yields more accurate VaR estimates than the alternatives.
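
Since the abstract does not reproduce the TTLSV equations, the following is an assumed two-regime illustration in the same spirit: the mean, volatility persistence, and leverage all switch with the sign of the previous return, and a simple empirical Value at Risk is read off the simulated returns; the EIS-ML estimator is not shown.

```python
# Illustrative simulation only: a two-regime stochastic volatility process with
# regime-dependent mean, persistence, and leverage, where the regime is set by the
# sign of the previous return. This is an assumed form in the spirit of the TTLSV
# model, not the paper's exact specification.
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# regime-specific parameters: [after a non-negative return, after a negative return]
mu    = np.array([0.05, -0.02])     # mean asymmetry
gamma = np.array([-0.10, -0.05])    # log-volatility intercept
phi   = np.array([0.90, 0.97])      # volatility-persistence asymmetry
rho   = np.array([-0.30, -0.70])    # leverage asymmetry (stronger after bad news)
sigma_eta = 0.20

y = np.zeros(n)
h = np.zeros(n)
eps_prev = 0.0
for t in range(1, n):
    s = 0 if y[t - 1] >= 0 else 1                        # threshold on the past return
    # leverage: today's volatility shock is correlated with yesterday's return shock
    eta = rho[s] * eps_prev + np.sqrt(1 - rho[s] ** 2) * rng.standard_normal()
    h[t] = gamma[s] + phi[s] * h[t - 1] + sigma_eta * eta
    eps_prev = rng.standard_normal()
    y[t] = mu[s] + np.exp(h[t] / 2) * eps_prev

# one-day 5% Value at Risk from the simulated return distribution
print("simulated 5% VaR:", np.quantile(y, 0.05))
```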

