Early Prediction of Quality Issues in Automotive Modern Industry

Information ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 354 ◽  
Author(s):  
Reza Khoshkangini ◽  
Peyman Sheikholharam Mashhadi ◽  
Peter Berck ◽  
Saeed Gholami Shahbandi ◽  
Sepideh Pashami ◽  
...  

Many industries today are struggling with the early identification of quality issues, given shortening product design cycles and the desire to decrease production costs, coupled with customer requirements for high uptime. The vehicle industry is no exception, as breakdowns often lead to on-road stops and delays in delivery missions. In this paper we consider quality issues to be an unexpected increase in the failure rate of a particular component; such issues are particularly problematic for original equipment manufacturers (OEMs), since they lead to unplanned costs and can significantly affect brand value. We propose a new approach towards the early detection of quality issues, using machine learning (ML) to forecast the failures of a given component across a large population of units. In this study, we combine the usage information of vehicles with the records of their failures. The former is collected continuously, as usage statistics are transmitted over telematics connections. The latter is based on invoice and warranty information collected in the workshops. We compare two different ML approaches: the first is an auto-regression model of the failure ratios of vehicles based on past information, while the second aggregates individual vehicle failure predictions based on individual usage. We present experimental evaluations on real data captured from heavy-duty trucks, demonstrating that the two formulations have complementary strengths and weaknesses; in particular, they can outperform each other given different volumes of data. The classification approach surpasses the regression model whenever enough data is available, i.e., once the vehicles have been in service for a longer time. On the other hand, the regression shows better predictive performance with a smaller amount of data, i.e., for vehicles that have been deployed recently.
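
To make the two formulations concrete, here is a minimal Python sketch; it is not the authors' implementation, and the column names, toy data, and model choices (linear auto-regression, random forest classifier) are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data: one row per vehicle-month with usage statistics and a failure flag.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "month": rng.integers(0, 24, n),
    "u1": rng.normal(size=n),            # e.g. mileage-derived usage statistic
    "u2": rng.normal(size=n),            # e.g. engine-load-derived usage statistic
})
df["failed"] = (rng.random(n) < 0.05 + 0.02 * (df["u1"] > 1)).astype(int)

# (1) Auto-regression of the population failure ratio on its own past values.
monthly = df.groupby("month")["failed"].mean().sort_index()
k = 3
X_ar = pd.concat([monthly.shift(i) for i in range(1, k + 1)], axis=1).dropna()
y_ar = monthly.loc[X_ar.index]
ratio_model = LinearRegression().fit(X_ar.values, y_ar.values)

# (2) Per-vehicle failure classification from usage, aggregated into a population forecast.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(df[["u1", "u2"]], df["failed"])
forecast_ratio = clf.predict_proba(df[["u1", "u2"]])[:, 1].mean()
```

In this toy setup the first model only needs the population-level ratio history, while the second needs per-vehicle usage features, which mirrors the data-volume trade-off discussed above.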

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. The method combines two techniques, the transformed-transformer and the alpha power transformation approaches, allowing for great flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed, are observed for the associated failure rate function, making the distribution tractable for many modeling applications. Some important mathematical features of the proposed distribution are derived. Estimates of the unknown parameters are obtained using the maximum likelihood method. Furthermore, numerical studies are carried out to evaluate the estimation performance. Three practical datasets are analyzed to illustrate the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
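
For illustration, a minimal sketch of the alpha power transformation applied to a baseline CDF; the two-parameter Weibull used here is only a stand-in, since the paper's exact Weibull-exponential baseline is not reproduced.

```python
import numpy as np

def alpha_power_cdf(G, alpha):
    """Alpha power transformation of a baseline CDF G:
    F(x) = (alpha**G(x) - 1) / (alpha - 1) for alpha > 0, alpha != 1."""
    def F(x):
        g = G(x)
        return g if alpha == 1 else (alpha ** g - 1.0) / (alpha - 1.0)
    return F

# Placeholder baseline: a plain two-parameter Weibull CDF stands in for the
# paper's Weibull-exponential baseline, whose exact form is not reproduced here.
def weibull_cdf(x, shape=1.5, scale=2.0):
    return 1.0 - np.exp(-(np.asarray(x) / scale) ** shape)

F = alpha_power_cdf(weibull_cdf, alpha=3.0)
print(F(np.array([0.5, 1.0, 2.0])))   # increasing values in (0, 1)
```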


2021 ◽  
Vol 11 (15) ◽  
pp. 6998
Author(s):  
Qiuying Li ◽  
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed over the past 40 years to assess software reliability, but most of them model the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failures they cause are detected. In real software development, however, this assumption is rarely realistic: fault removal takes time, the faults causing failures cannot always be removed at once, and detected failures become increasingly difficult to correct as testing progresses. The second is to model the fault correction process through the time delay between fault detection and fault correction, where the delay has been assumed to be a constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to modeling the dual fault detection and correction processes are discussed. The dependence between the fault counts of the two processes is considered instead of a fault correction time delay. A model is proposed that integrates the fault detection and fault correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and of other existing NHPP SRGMs is investigated on three real data sets using four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
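
As a hedged illustration of the least-squares estimation step, the sketch below fits a classic single-process NHPP mean value function (Goel-Okumoto) to hypothetical cumulative fault counts; it is not the paper's dual detection/correction model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative fault counts observed over twelve testing weeks.
t = np.arange(1, 13, dtype=float)
m_obs = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43], dtype=float)

def goel_okumoto(t, a, b):
    # Mean value function m(t) = a * (1 - exp(-b t)) of a classic NHPP SRGM.
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, m_obs, p0=(50.0, 0.1))
sse = np.sum((m_obs - goel_okumoto(t, a_hat, b_hat)) ** 2)
print(a_hat, b_hat, sse)   # expected total faults, detection rate, least-squares error
```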


Biometrika ◽  
2021 ◽  
Author(s):  
Juhyun Park ◽  
Jeongyoun Ahn ◽  
Yongho Jeon

Abstract Functional linear discriminant analysis offers a simple yet efficient method for classification, with the possibility of achieving perfect classification. Several methods proposed in the literature mostly address the dimensionality of the problem. At the same time, there is growing interest in the interpretability of the analysis, which favors a simple and sparse solution. In this work, we propose a new approach that incorporates a type of sparsity that identifies nonzero sub-domains in the functional setting, offering a solution that is easier to interpret without compromising performance. To embed additional constraints in the solution, we reformulate functional linear discriminant analysis as a regularization problem with an appropriate penalty. Inspired by the success of ℓ1-type regularization at inducing zero coefficients for scalar variables, we develop a new regularization method for functional linear discriminant analysis that incorporates an ℓ1-type penalty, ∫ |f|, to induce zero regions. We demonstrate that our formulation has a well-defined solution that contains zero regions, achieving functional sparsity in the sense of domain selection. In addition, the misclassification probability of the regularized solution is shown to converge to the Bayes error if the data are Gaussian. Our method does not presume that the underlying function has zero regions in the domain, but produces a sparse estimator that consistently estimates the true function whether or not the latter is sparse. Numerical comparisons with existing methods demonstrate this property in finite samples with both simulated and real data examples.
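
A rough sketch of the idea of domain selection through an ℓ1-type penalty, under strong simplifying assumptions (two classes, curves discretized on a grid, and the lasso used as a stand-in for the authors' estimator):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Curves are discretized on a grid, so the lasso penalty on grid values times
# the grid spacing approximates the integral penalty  ∫ |f| ; for two classes
# the discriminant direction can be obtained via the LDA-regression connection.
rng = np.random.default_rng(1)
n, p = 100, 50
grid = np.linspace(0, 1, p)
X = rng.normal(size=(n, p)).cumsum(axis=1)       # rough functional observations
y = np.repeat([0, 1], n // 2)
X[y == 1, 30:] += 2.0                            # classes differ only on a sub-domain

dt = grid[1] - grid[0]
lasso = Lasso(alpha=0.05).fit(X * dt, y - y.mean())
f_hat = lasso.coef_                              # discretized discriminant function
zero_region = grid[np.isclose(f_hat, 0.0)]       # sub-domain estimated as irrelevant
print(np.count_nonzero(f_hat), "nonzero grid points out of", p)
```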


2002 ◽  
Vol 2 (6) ◽  
pp. 2133-2150 ◽  
Author(s):  
J.-P. Issartel ◽  
J. Baverel

Abstract. An international monitoring system is being built as a verification tool for the Comprehensive Test Ban Treaty. Forty stations will measure the concentration of radioactive noble gases daily on a worldwide basis. Using preliminary real data, the paper introduces a new backtracking approach for identifying sources after positive measurements. When several measurements are available, the ambiguity about possible sources is reduced significantly. As an interesting side result, it is shown that diffusion in the passive tracer dispersion equation is necessarily a self-adjoint operator.
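
As a hedged sketch of the duality underlying such backtracking (standard adjoint-transport reasoning, not reproduced from the paper itself): a measurement μ obtained by sampling the concentration field c with a sampling function π can equally be written against the unknown source σ,

```latex
\mu \;=\; \int \pi(x,t)\, c(x,t)\, \mathrm{d}x\, \mathrm{d}t
     \;=\; \int r(x,t)\, \sigma(x,t)\, \mathrm{d}x\, \mathrm{d}t ,
```

where r is the retroplume obtained by integrating the adjoint dispersion equation backward in time with π as forcing; the diffusion term ∇·(K∇·) appears unchanged in the adjoint equation because a symmetric diffusivity tensor makes it self-adjoint.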


2020 ◽  
Author(s):  
Fanny Mollandin ◽  
Andrea Rau ◽  
Pascal Croiseau

Abstract Technological advances and decreasing costs have led to increasingly dense genotyping data, making the identification of potential causal markers feasible. Custom genotyping chips, which combine medium-density genotypes with a custom genotype panel, can capitalize on these candidates to potentially yield improved accuracy and interpretability in genomic prediction. A particularly promising model to this end is BayesR, which divides markers into four effect size classes. BayesR has been shown to yield accurate predictions and promise for quantitative trait loci (QTL) mapping in real data applications, but an extensive benchmarking on simulated data is currently lacking. Based on a set of real genotypes, we generated simulated data under a variety of genetic architectures and phenotype heritabilities, and evaluated the impact of excluding or including causal markers among the genotypes. We define several statistical criteria for QTL mapping, including some based on sliding windows to account for linkage disequilibrium, and compare and contrast these statistics and their ability to accurately prioritize known causal markers. Overall, we confirm the strong predictive performance of BayesR for moderately to highly heritable traits, particularly for 50k custom data. In cases of low heritability or weak linkage disequilibrium between the causal marker and the 50k genotypes, QTL mapping is a challenge regardless of the criterion used. BayesR is a promising approach to simultaneously obtain accurate predictions and interpretable classifications of SNPs into effect size classes. We illustrate the performance of BayesR in a variety of simulation scenarios and compare the advantages and limitations of each.
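
For orientation, a minimal sketch of the BayesR prior only (the mixture proportions below are illustrative assumptions, and the full MCMC machinery is omitted):

```python
import numpy as np

# BayesR prior: each marker effect is drawn from a four-class mixture -- zero,
# or normal with variance equal to 1e-4, 1e-3 or 1e-2 of the additive genetic
# variance sigma2_g.  Values below are hypothetical.
rng = np.random.default_rng(42)
m = 10_000                                            # number of markers
sigma2_g = 1.0                                        # hypothetical additive genetic variance
class_probs = np.array([0.95, 0.03, 0.015, 0.005])    # hypothetical mixture proportions
class_vars = np.array([0.0, 1e-4, 1e-3, 1e-2]) * sigma2_g

classes = rng.choice(4, size=m, p=class_probs)
effects = rng.normal(0.0, np.sqrt(class_vars[classes]))
print(np.bincount(classes, minlength=4))              # markers assigned to each effect-size class
```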


2020 ◽  
Author(s):  
Kongying Lin ◽  
Qizhen Huang ◽  
Zongren Ding ◽  
Yongyi Zeng ◽  
Jingfeng Liu

Abstract Objectives This study was conducted to estimate the probability of cancer-specific survival (CSS) in hepatocellular carcinoma (HCC) and to establish a competing risk nomogram for predicting the CSS of HCC using a large population-based cohort. Methods Patients diagnosed with HCC between 2004 and 2015 were identified from the Surveillance, Epidemiology, and End Results (SEER) Program. CSS and overall survival (OS) were the endpoints of the study. A competing risk nomogram for predicting CSS was built with Fine and Gray's competing risk model, and the nomogram for predicting OS was constructed with Cox proportional hazards regression models. The predictive performance of the models was tested in terms of discrimination and calibration. Results A total of 34,957 patients were included in the study and randomly divided into a training set and a validation set at a ratio of 9:1. Multivariate analysis identified age, race, sex, surgical therapy, chemotherapy, radiotherapy, tumour diameter, and tumour staging as independent predictive factors of CSS; marital status was additionally identified as an independent predictive factor of OS. Using these factors, corresponding nomograms were constructed for CSS and OS. In the validation set, the concordance indices of the two nomogram models reached 0.810 and 0.750, respectively. Calibration curves revealed good consistency between model predictions and observed outcomes. Furthermore, cumulative incidence function analysis and Kaplan-Meier analysis divided patients into four distinct risk subgroups, supporting the predictive performance of the models. Conclusions In this population-based analysis, we developed and validated nomograms for the individualized prediction of CSS and OS in patients with HCC.
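
As a hedged illustration of the competing-risks quantity being modelled, the sketch below computes a crude nonparametric cumulative incidence of cancer-specific death on toy data; it is not the Fine and Gray model or the nomogram used in the study.

```python
import numpy as np

# Toy competing-risks data: event 1 = cancer-specific death, event 2 = death
# from other causes, 0 = censored; times in months of follow-up.
rng = np.random.default_rng(7)
n = 500
time = rng.exponential(60, n)
event = rng.choice([0, 1, 2], size=n, p=[0.4, 0.4, 0.2])

order = np.argsort(time)
time, event = time[order], event[order]

surv = 1.0          # all-cause survival just before the current time
running = 0.0       # cumulative incidence of cancer-specific death
cif1 = []
for i, e in enumerate(event):
    at_risk = n - i
    if e == 1:
        running += surv / at_risk          # increment uses survival just before this time
    if e in (1, 2):
        surv *= 1.0 - 1.0 / at_risk        # any event reduces all-cause survival
    cif1.append(running)

print(f"estimated 5-year cancer-specific incidence: {np.interp(60, time, cif1):.3f}")
```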


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Lorentz Jäntschi ◽  
Donatella Bálint ◽  
Sorana D. Bolboacă

Multiple linear regression analysis is widely used to link an outcome with predictors for a better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as a proof of concept using fourteen sets of compounds, investigating the link between activity/property (as outcome) and structural feature information encoded by molecular descriptors (as predictors). The results on real data demonstrate that, in all investigated cases, the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
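
A minimal sketch of the estimation idea, assuming the generalized normal (Gauss-Laplace) family from scipy.stats.gennorm as the error law and toy data in place of molecular descriptors:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gennorm

# Fit y = b0 + b1*x1 + b2*x2 with generalized-normal errors and estimate the
# power parameter jointly by maximum likelihood; toy data stand in for descriptors.
rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.laplace(scale=0.3, size=n)   # heavier tails than normal

def nll(theta):
    b0, b1, b2, log_scale, log_beta = theta
    resid = y - (b0 + b1 * x1 + b2 * x2)
    return -np.sum(gennorm.logpdf(resid, np.exp(log_beta), scale=np.exp(log_scale)))

res = minimize(nll, x0=[0.0, 0.0, 0.0, 0.0, np.log(2.0)], method="Nelder-Mead",
               options={"maxiter": 5000})
print("estimated power of the error:", np.exp(res.x[-1]))   # 2 = normal, 1 = Laplace
```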


Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V223-V232 ◽  
Author(s):  
Zhicheng Geng ◽  
Xinming Wu ◽  
Sergey Fomel ◽  
Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators tailored to seismic images and data is an important issue. We have developed a new formulation of the seislet transform based on the relative time (RT) attribute. This method uses the RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces can be predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and obtains an excellent sparse representation on the test data sets.
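
A simplified sketch of the RT-based prediction idea (scalar, single pair of traces; the actual seislet transform also applies the wavelet-lifting scheme across scales):

```python
import numpy as np

# A distant trace is predicted from a reference trace by matching samples that
# share the same relative-time (RT) value, here via 1-D interpolation.
nz = 200
z = np.arange(nz, dtype=float)

rt_ref = z                                # flat reference: RT equals the depth index
rt_far = z - 10.0 * np.sin(z / 40.0)      # hypothetical RT of a distant trace (dipping event)

trace_ref = np.exp(-((z - 100.0) / 5.0) ** 2)          # a reflection on the reference trace
predicted_far = np.interp(rt_far, rt_ref, trace_ref)   # move the event along constant RT
```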


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Saima K. Khosa ◽  
Ahmed Z. Afify ◽  
Zubair Ahmad ◽  
Mi Zichuan ◽  
Saddam Hussain ◽  
...  

In this article, a new approach is used to introduce an additional parameter to a continuous class of distributions. The new class is referred to as the new extended-F family of distributions. The new extended-Weibull distribution, a special submodel of this family, is discussed. General expressions for some mathematical properties of the proposed family are derived, and maximum likelihood estimators of the model parameters are obtained. Furthermore, a simulation study is provided to evaluate the validity of the maximum likelihood estimators. Finally, the flexibility of the proposed method is illustrated via two applications to real data, with comparisons against the Weibull distribution and some of its well-known extensions, such as the Marshall-Olkin Weibull, alpha power-transformed Weibull, and Kumaraswamy Weibull distributions.
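
Since the exact extended-F form is not reproduced here, the following sketch only illustrates the kind of comparison reported: fitting the Weibull and one well-known extension by maximum likelihood with scipy and ranking them by AIC on stand-in data.

```python
import numpy as np
from scipy import stats

# Stand-in positive lifetime data; real applications would use the paper's data sets.
rng = np.random.default_rng(5)
data = rng.weibull(1.4, 150) * 2.0 + rng.exponential(0.2, 150)

candidates = {
    "Weibull": stats.weibull_min,
    "exponentiated Weibull": stats.exponweib,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                 # ML fit with location fixed at 0
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                             # location was fixed, not estimated
    aic = 2 * k - 2 * loglik
    print(f"{name}: AIC = {aic:.1f}")
```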


Author(s):  
Gregorio Soria ◽  
L. M. Ortega Alvarado ◽  
Francisco R. Feito

Augmented reality (AR) has experienced a breakthrough in many areas of application thanks to cheaper hardware and a strong industry commitment. In the field of urban facility management, this technology allows virtual access to, and interaction with, hidden underground elements. This paper presents a new approach to enabling AR on mobile devices such as Google Tango, which offers specific capabilities for outdoor use. The first objective is to provide full functionality for the life-cycle management of subsoil infrastructure through this technology. This implies not only visualization, interaction, and free navigation, but also editing, deleting, and inserting elements ubiquitously. For this purpose, a topological data model for three-dimensional (3D) data has been designed. Another important contribution of the paper is obtaining an exact location and orientation in only a few minutes, without additional markers or hardware. This accuracy in the initial positioning, together with the device's sensing, avoids the usual errors during the AR navigation process. Similar functionality has also been implemented in a non-ubiquitous way so that it can be supported by any other device through virtual reality (VR). The tests were performed using real data from the city of Jaén (Spain).
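
Purely as a hypothetical illustration of what a topological data model for underground utilities might look like (names and fields are assumptions, not the data model used in the paper):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Geometry lives in the nodes, while segments only reference node identifiers,
# so inserting, editing or deleting an element preserves network connectivity.

@dataclass
class Node:
    node_id: int
    position: Tuple[float, float, float]      # x, y, depth in a local 3D frame

@dataclass
class Segment:
    segment_id: int
    start_node: int
    end_node: int
    kind: str = "pipe"                        # e.g. pipe, cable, duct

@dataclass
class UtilityNetwork:
    nodes: Dict[int, Node] = field(default_factory=dict)
    segments: Dict[int, Segment] = field(default_factory=dict)

    def incident_segments(self, node_id: int) -> List[Segment]:
        return [s for s in self.segments.values()
                if node_id in (s.start_node, s.end_node)]

net = UtilityNetwork()
net.nodes[1] = Node(1, (0.0, 0.0, -1.2))
net.nodes[2] = Node(2, (5.0, 0.0, -1.4))
net.segments[10] = Segment(10, start_node=1, end_node=2)
print(len(net.incident_segments(1)))          # 1
```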

