Forecasting Failure Rates of Electronic Goods by Using Decomposition and Fuzzy Clustering of Empirical Failure Rate Curves

2017 ◽  
Vol 26 (1) ◽  
pp. 10
Author(s):  
Tamás Jónás ◽  
Gábor Árva ◽  
Zsuzsanna Eszter Tóth

In this paper, a novel methodology founded on the joint application of analytic decomposition of empirical failure rate time series and soft computing techniques is introduced to predict bathtub-shaped failure rate curves of consumer electronic goods. Empirical failure rate time series are modeled by a flexible function whose parameters have geometric interpretations, so the model parameters capture the characteristics of bathtub-shaped failure rate curves. The so-called typical standardized failure rate curve models, derived from the model functions through standardization and fuzzy clustering, are applied to predict failure rate curves of consumer electronics in a method that combines analytic curve fitting and soft computing techniques. The forecasting capability of the introduced method was tested on real-life data. Based on the empirical results from practical applications, the introduced method can be considered a new, alternative reliability prediction technique whose application can support electronic repair service providers in long-term resource planning.
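As a loose illustration of the first two steps described above, the following Python sketch fits a flexible bathtub-shaped function to an empirical failure rate series and standardizes the fitted curve so that curves of different scale could later be grouped by fuzzy clustering. The functional form, parameter values, and synthetic data are assumptions for illustration only, not the authors' model:

# A minimal sketch (not the authors' exact model): fit a flexible
# bathtub-shaped function to an empirical failure-rate series and
# standardize it so curves from different products become comparable.
import numpy as np
from scipy.optimize import curve_fit

def bathtub(t, a, b, c, d, e):
    """Hypothetical bathtub-shaped failure-rate model:
    early-failure decay + constant useful-life rate + wear-out growth."""
    return a * np.exp(-b * t) + c + d * np.expm1(e * t)

# synthetic empirical failure-rate time series (weeks vs. failure rate)
t = np.linspace(0, 100, 101)
rng = np.random.default_rng(0)
y = bathtub(t, 0.05, 0.2, 0.01, 1e-4, 0.03) + rng.normal(0, 5e-4, t.size)

# fit the model; p0 keeps the optimizer in a plausible region
params, _ = curve_fit(bathtub, t, y, p0=[0.05, 0.1, 0.01, 1e-4, 0.02],
                      maxfev=20000)

# standardize the fitted curve (zero mean, unit variance) so that curves
# of different scale can be clustered together in a subsequent step
fitted = bathtub(t, *params)
standardized = (fitted - fitted.mean()) / fitted.std()
print(params)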

Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1679
Author(s):  
Jacopo Giacomelli ◽  
Luca Passalacqua

The CreditRisk+ model is one of the industry standards for the valuation of default risk in credit loan portfolios. The calibration of CreditRisk+ requires, inter alia, the specification of the parameters describing the structure of dependence among default events. This work addresses the calibration of these parameters. In particular, we study the dependence of the calibration procedure on the sampling period of the default rate time series, which may differ from the time horizon over which the model is used for forecasting, as is often the case in real-life applications. The case of autocorrelated time series and the role of the statistical error as a function of the time series period are also discussed. The findings of the proposed calibration technique are illustrated with an application to real data.
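For intuition only, the following Python sketch shows a single-sector, moment-matching calibration of the CreditRisk+ factor variance from a default rate time series. It ignores the sampling-period and autocorrelation effects that the paper actually studies, and the data are hypothetical:

# A simplified, single-sector moment-matching sketch of CreditRisk+
# factor-variance calibration (not the authors' full procedure).
# Assumption: yearly default counts D_t over N_t obligors, with the
# latent sector factor gamma having E[gamma] = 1 and Var[gamma] = sigma^2.
import numpy as np

def calibrate_sigma2(defaults, exposures):
    """Moment-based estimate of the CreditRisk+ sector variance.

    Under the mixed-Poisson assumption D_t | gamma ~ Poisson(N_t * p * gamma),
    Var(D_t / N_t) = p / N_t + p^2 * sigma^2, which we invert for sigma^2.
    """
    d = np.asarray(defaults, dtype=float)
    n = np.asarray(exposures, dtype=float)
    rates = d / n
    p_hat = rates.mean()
    sampling_var = (p_hat / n).mean()          # Poisson sampling noise
    excess_var = rates.var(ddof=1) - sampling_var
    return max(excess_var, 0.0) / p_hat**2     # clip at 0 if noise dominates

# toy annual default-rate series (counts, portfolio sizes)
defaults  = [12, 18, 9, 25, 14, 30, 11, 16]
exposures = [1000] * 8
print(calibrate_sigma2(defaults, exposures))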


Signals ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 434-455
Author(s):  
Sujan Kumar Roy ◽  
Kuldip K. Paliwal

Inaccurate estimates of the linear prediction coefficients (LPC) and noise variance introduce bias into the Kalman filter (KF) gain and degrade speech enhancement performance. Existing methods propose a tuning of the biased Kalman gain, particularly in stationary noise conditions. This paper introduces a tuning of the KF gain for speech enhancement in real-life noise conditions. First, we estimate noise from each noisy speech frame using a speech presence probability (SPP) method to compute the noise variance. Then, we construct a whitening filter (with its coefficients computed from the estimated noise) to pre-whiten each noisy speech frame prior to computing the speech LPC parameters. We then construct the KF with the estimated parameters, where a robustness metric offsets the bias in the KF gain during speech absence and a sensitivity metric does so during speech presence, to achieve better noise reduction. The noise variance and the speech model parameters serve as a speech activity detector. The reduced-bias Kalman gain enables the KF to suppress the noise significantly, yielding the enhanced speech. Objective and subjective scores on the NOIZEUS corpus demonstrate that the enhanced speech produced by the proposed method exhibits higher quality and intelligibility than some benchmark methods.
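The pre-whitening step can be pictured with the short Python sketch below, which estimates LPC coefficients from a noise frame via the autocorrelation method and applies the resulting FIR whitening filter to a noisy frame. The frame length, LPC order, and synthetic signals are assumptions, and this is not the authors' implementation:

# A minimal sketch of the pre-whitening step (assumptions: autocorrelation-
# method LPC, a single frame, and a noise estimate already available from
# an SPP-style estimator).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order):
    """LPC coefficients a_1..a_p via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a  # prediction coefficients of A(z) = 1 - sum a_k z^{-k}

def prewhiten(noisy_frame, noise_frame, order=10):
    """Whiten the noisy frame with a filter built from the noise LPC."""
    a_noise = lpc(noise_frame, order)
    whitening = np.concatenate(([1.0], -a_noise))  # FIR filter A_noise(z)
    return lfilter(whitening, [1.0], noisy_frame)

rng = np.random.default_rng(1)
noise = lfilter([1.0], [1.0, -0.7], rng.normal(0, 0.1, 320))  # colored noise
speech = np.sin(2 * np.pi * 0.05 * np.arange(320))            # toy speech frame
whitened = prewhiten(speech + noise, noise, order=10)
print(whitened[:5])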


Author(s):  
Arnaud Dufays ◽  
Elysee Aristide Houndetoungan ◽  
Alain Coën

Abstract Change-point (CP) processes are a flexible approach to modeling long time series. We propose a method to uncover which model parameters truly vary when a CP is detected. Given a set of breakpoints, we use a penalized likelihood approach to select the best set of parameters that change over time, and we prove that the penalty function leads to a consistent selection of the true model. Estimation is carried out via the deterministic annealing expectation-maximization algorithm. Our method accounts for model selection uncertainty and assigns a probability to every possible time-varying parameter specification. Monte Carlo simulations highlight that the method works well for many time series models, including heteroskedastic processes. For a sample of fourteen hedge fund (HF) strategies, using an asset-based style pricing model, we shed light on the promising ability of our method to detect the time-varying dynamics of risk exposures as well as to forecast HF returns.
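A toy Python sketch of the underlying idea, selecting which Gaussian parameters change at a known breakpoint, is given below. It uses a BIC-style penalty and holds shared parameters at full-sample estimates for simplicity, whereas the paper derives its own penalty function and estimates by deterministic annealing EM; everything here is an assumption for illustration:

# Toy selection of the time-varying parameter specification at one breakpoint.
import numpy as np
from scipy.stats import norm

def gauss_loglik(x, mu, sigma):
    return norm.logpdf(x, mu, sigma).sum()

def select_changing_params(x, cp):
    """Compare specifications (nothing changes / mean / variance / both change)
    at breakpoint index cp; shared parameters are fixed at full-sample
    estimates, an approximation used only to keep the sketch short."""
    a, b = x[:cp], x[cp:]
    n = len(x)
    mu, sd = x.mean(), x.std()
    specs = {
        "none": (gauss_loglik(x, mu, sd), 2),
        "mean": (gauss_loglik(a, a.mean(), sd) +
                 gauss_loglik(b, b.mean(), sd), 3),
        "variance": (gauss_loglik(a, mu, a.std()) +
                     gauss_loglik(b, mu, b.std()), 3),
        "both": (gauss_loglik(a, a.mean(), a.std()) +
                 gauss_loglik(b, b.mean(), b.std()), 4),
    }
    # penalized criterion: -2*loglik + k*log(n)   (BIC-like penalty)
    return min(specs, key=lambda s: -2 * specs[s][0] + specs[s][1] * np.log(n))

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 300)])  # variance CP
print(select_changing_params(x, 300))  # expected: "variance"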


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mohammad Ali Beheshtinia ◽  
Narjes Salmabadi ◽  
Somaye Rahimi

Purpose – This paper aims to provide an integrated production-routing model in a three-echelon supply chain containing a two-layer transportation system to minimize the total costs of production, transportation, inventory holding and expired-drug treatment. In the proposed problem, specifications such as multisite manufacturing, simultaneous pickup and delivery and uncertainty in the parameters are considered.

Design/methodology/approach – First, a mathematical model is proposed for the problem. Then, a possibilistic model and a robust possibilistic model equivalent to the initial model are provided, given the uncertain nature of the model parameters and the unavailability of their probability functions. Finally, the performance of the proposed model is evaluated using real data collected from a pharmaceutical production center in Iran. The results reveal the proper performance of the proposed models.

Findings – The results obtained from applying the proposed model to a real-life production center indicate that the number of expired drugs decreased as a result of using this model; system costs were also reduced owing to the integration of simultaneous drug pickup and delivery operations. Moreover, according to the simulation results, the robust possibilistic model had the best performance among the proposed models.

Originality/value – This research considers two-layer vehicle routing in a production-routing problem with inventory planning. Moreover, multisite manufacturing, simultaneous pickup of expired drugs and delivery of drugs to the distribution centers are considered. Providing a robust possibilistic model for tackling the uncertainty in demand, costs, production capacity and drug expiration costs is another notable feature of the proposed model.
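To make the cost trade-off concrete, the following heavily simplified Python sketch (using PuLP, an assumption rather than the authors' implementation) models a single product at a single site with production, holding and expiration costs and a fixed shelf life; routing, multisite manufacturing, pickups and the possibilistic treatment of uncertainty are all omitted, and every parameter value is hypothetical:

# Simplified production-inventory-expiration trade-off, single product/site.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = range(4)                         # planning periods
demand = [80, 120, 60, 100]          # hypothetical drug demand per period
cap, c_prod, c_hold, c_exp = 150, 5.0, 1.0, 8.0
shelf = 2                            # a batch can serve demand for `shelf` periods

m = LpProblem("production_inventory_expiry", LpMinimize)
x = [LpVariable(f"prod_{t}", 0, cap) for t in T]            # production per period
y = {(t, s): LpVariable(f"use_{t}_{s}", 0)                  # batch t used in period s
     for t in T for s in T if t <= s < t + shelf}
e = [LpVariable(f"expired_{t}", 0) for t in T]              # unused leftovers of batch t

# objective: production + holding (proportional to storage time) + expiration costs
m += (lpSum(c_prod * x[t] + c_exp * e[t] for t in T)
      + lpSum(c_hold * (s - t) * y[t, s] for (t, s) in y))
for t in T:                                                 # batch balance
    m += lpSum(y[t, s] for s in T if (t, s) in y) + e[t] == x[t]
for s in T:                                                 # demand satisfaction
    m += lpSum(y[t, s] for t in T if (t, s) in y) == demand[s]

m.solve()
print([v.value() for v in x], [v.value() for v in e])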


2021 ◽  
Author(s):  
Annette Dietmaier ◽  
Thomas Baumann

The European Water Framework Directive (WFD) commits EU member states to achieve a good qualitative and quantitative status of all their water bodies. The WFD provides a list of actions to be taken to achieve the goal of good status. However, this list disregards the specific conditions under which deep (> 400 m b.g.l.) groundwater aquifers form and exist. In particular, deep groundwater fluid composition is influenced by interaction with the rock matrix and other geofluids, and may assume a bad status without anthropogenic influences. Thus, a new concept with directions for monitoring and modelling this specific kind of aquifer is needed. Their status evaluation must be based on the effects induced by their exploitation. Here, we analyze long-term real-life production data series to detect changes in the hydrochemical deep groundwater characteristics which might be triggered by balneological and geothermal exploitation. We aim to use these insights to design a set of criteria with which the status of deep groundwater aquifers can be quantitatively and qualitatively determined. Our analysis is based on a unique long-term hydrochemical data set, taken from 8 balneological and geothermal sites in the molasse basin of Lower Bavaria, Germany, and Upper Austria, and is focused on a predefined set of annual hydrochemical concentration values. The data set dates back to 1937. Our methods include developing threshold corridors, within which a good status can be assumed, as well as cluster, correlation, and Piper diagram analyses. We observed strong fluctuations in the hydrochemical characteristics of the molasse basin deep groundwater during the last decades. Special interest is paid to fluctuations that seem to have a clear start and end date and to be correlated with other exploitation activities in the region. For example, between 1990 and 2020, bicarbonate and sodium values displayed a clear increase, followed by a distinct dip to below-average values and a subsequent return to average values at site F. During the same period, these values showed striking irregularities at site B. Furthermore, we observed fluctuations at several locations which come close to disqualifying quality thresholds commonly used in German balneology. Our preliminary results demonstrate the importance of using long-term (multiple decades) time series analysis to better inform quality and quantity assessments for deep groundwater bodies: most fluctuations would remain undetected within a < 5 year time series window, but become a distinct irregularity when viewed in the context of multiple decades. In the next steps, a quality assessment matrix and threshold corridors will be developed which take into account methods for identifying these fluctuations. This will ultimately aid in assessing the sustainability of deep groundwater exploitation and reservoir management for balneological and geothermal uses.
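The threshold-corridor idea can be sketched as follows in Python/pandas: flag the years in which an annual concentration value leaves a long-term corridor, defined here (as an assumption) by the series median plus or minus a multiple of a robust standard deviation; the sodium series and the excursion are synthetic:

# Flag years whose annual concentration leaves a long-term corridor.
import numpy as np
import pandas as pd

def corridor_flags(series, k=2.0):
    """Return a boolean Series marking values outside the corridor."""
    med = series.median()
    mad = (series - med).abs().median()
    robust_sd = 1.4826 * mad          # MAD-based robust std estimate
    lower, upper = med - k * robust_sd, med + k * robust_sd
    return (series < lower) | (series > upper)

# hypothetical annual sodium concentrations (mg/L) at one site since 1937
years = pd.Index(range(1937, 2021), name="year")
rng = np.random.default_rng(3)
na = pd.Series(480 + rng.normal(0, 8, len(years)), index=years)
na.loc[1995:2004] += 40               # an excursion like the one described

flags = corridor_flags(na)
print(flags[flags].index.tolist())    # years outside the corridor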


2021 ◽  
Author(s):  
Alberto Jose Ramirez ◽  
Jessica Graciela Iriarte

Abstract Breakdown pressure is the peak pressure attained when fluid is injected into a borehole until fracturing occurs. Hydraulic fracturing operations are conducted above the breakdown pressure, at which the rock formation fractures and allows fluids to flow inside. This value is essential for obtaining formation stress measurements. The objective of this study is to automate the selection of breakdown pressure flags on time series fracture data using a novel algorithm in lieu of an artificial neural network. This study is based on high-frequency treatment data collected from cloud-based software. The comma-separated (.csv) files include treating pressure (TP), slurry rate (SR), and bottomhole proppant concentration (BHPC) with defined start and end time flags. Using feature engineering, the model calculates the rate of change of treating pressure (dtp_1st), slurry rate (dsr_1st), and bottomhole proppant concentration (dbhpc_1st). An algorithm isolates the initial area of the treatment plot, before proppant reaches the perforations, where the slurry rate is constant and the pressure increases. The first approach uses a neural network trained with 872 stages to isolate the breakdown pressure area. The expert rule-based approach finds the highest pressure spikes where SR is constant. Then, a refining function finds the maximum treating pressure value and returns its job time as the predicted breakdown pressure flag. Due to the complexity of unconventional reservoirs, the treatment plots may show pressure changes while the slurry rate is constant multiple times during the same stage. The diverse behavior of the breakdown pressure inhibits an artificial neural network's ability to find one "consistent pattern" across the stage. The multiple patterns found throughout the stage make it difficult to select an area in which to find the breakdown pressure value. The complex model worked moderately well in testing, but its computational time was too high for deployment. The automation algorithm, on the other hand, uses rules to find the breakdown pressure value and its location within the stage. The breakdown flag model was validated with 102 stages and tested with 775 stages, returning the location and values corresponding to the highest pressure point. Results show that 86% of the predicted breakdown pressures are within 65 psi of manually picked values. Automating breakdown pressure recognition is important because it saves time and allows engineers to focus on analytical tasks instead of repetitive data-structuring tasks. Automating this process also brings consistency to the data across service providers and basins. In some cases, owing to its ability to zoom in, the algorithm recognized breakdown pressures with higher accuracy than subject matter experts. Comparing the results of the two approaches allowed us to conclude that similar or better results can be achieved with lower running times and without complex algorithms.
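A hedged Python sketch of the rule-based flagging logic described above: restrict attention to the early part of the stage where proppant concentration is still near zero and the slurry rate is steady, keep the samples where pressure is rising, and return the time of the pressure peak. Column names, tolerances and the tiny synthetic stage are assumptions, not the authors' exact rules:

# Rule-based breakdown-pressure flag on a single stage of treatment data.
import pandas as pd

def breakdown_flag(df, sr_tol=0.5, bhpc_tol=0.05):
    """df columns: 'time', 'tp' (treating pressure, psi),
    'sr' (slurry rate, bpm), 'bhpc' (bottomhole proppant conc., ppa)."""
    d = df.copy()
    d["dtp_1st"] = d["tp"].diff()
    d["dsr_1st"] = d["sr"].diff()
    # candidate window: before proppant arrives and while rate is steady
    window = d[(d["bhpc"] <= bhpc_tol) & (d["dsr_1st"].abs() <= sr_tol)]
    rising = window[window["dtp_1st"] > 0]
    peak = rising.loc[rising["tp"].idxmax()]
    return peak["time"], peak["tp"]

# tiny synthetic stage: pressure builds, breaks down, then proppant arrives
df = pd.DataFrame({
    "time": range(10),
    "tp":   [3000, 4200, 5400, 6600, 7400, 6100, 5900, 5950, 6000, 6050],
    "sr":   [10, 10, 10, 10, 10, 10, 10, 10, 10, 10],
    "bhpc": [0, 0, 0, 0, 0, 0, 0, 0.5, 1.0, 1.5],
})
print(breakdown_flag(df))   # expected peak near time 4, ~7400 psi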


2013 ◽  
Vol 20 (6) ◽  
pp. 1071-1078 ◽  
Author(s):  
E. Piegari ◽  
R. Di Maio ◽  
A. Avella

Abstract. Reasonable prediction of landslide occurrences in a given area requires the choice of an appropriate probability distribution of recurrence time intervals. Although landslides are widespread and frequent in many parts of the world, complete databases of landslide occurrences over long periods are missing, and such natural disasters are often treated as processes uncorrelated in time and, therefore, Poisson distributed. In this paper, we examine the recurrence time statistics of landslide events simulated by a cellular automaton model that reproduces well the actual frequency-size statistics of landslide catalogues. The complex time series are analysed by varying both the threshold above which the time between events is recorded and the values of the key model parameters. The synthetic recurrence time probability distribution is shown to be strongly dependent on the rate at which instability is approached, providing a smooth crossover from a power-law regime to a Weibull regime. Moreover, a Fano factor analysis shows a clear indication of different degrees of correlation in landslide time series. Such a finding supports, at least in part, a recent, first-of-its-kind analysis of a historical landslide time series spanning a fifty-year time window.
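For readers unfamiliar with the Fano factor analysis mentioned above, the short Python sketch below counts events in non-overlapping windows of width T and reports the variance-to-mean ratio of the counts; values near 1 are consistent with a Poisson process, while systematic departures indicate temporal correlation. The event-time series here are synthetic:

# Fano factor F(T) = Var(counts) / Mean(counts) for a point process.
import numpy as np

def fano_factor(event_times, window):
    t = np.asarray(event_times, dtype=float)
    n_windows = int(np.floor((t.max() - t.min()) / window))
    edges = t.min() + window * np.arange(n_windows + 1)
    counts, _ = np.histogram(t, bins=edges)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(4)
poisson_times = np.cumsum(rng.exponential(1.0, 5000))       # uncorrelated events
clustered_times = np.cumsum(rng.pareto(1.5, 5000) + 0.1)    # heavy-tailed gaps
for w in (10, 100):
    print(w, fano_factor(poisson_times, w), fano_factor(clustered_times, w))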


1998 ◽  
Vol 120 (2) ◽  
pp. 331-338 ◽  
Author(s):  
Y. Ren ◽  
C. F. Beards

Almost all real-life structures are assembled from components connected by various types of joints. Unlike many other parts, the dynamic properties of a joint are difficult to model analytically. An alternative approach for establishing a theoretical model of a joint is to extract the model parameters from experimental data using joint identification techniques. The accuracy of the identification is significantly affected by the properties of the joints themselves. If a joint is stiff, its properties are often difficult to identify accurately, because the responses at both ends of the joint are linearly dependent. To make matters worse, the existence of a stiff joint can also degrade the accuracy of identification of other effective joints (the term "effective joints" in this paper refers to those joints which can otherwise be identified accurately). This problem is tackled by coupling the stiff joints using a generalized coupling technique and then identifying the properties of the remaining joints using a joint identification technique. The accuracy of the joint identification can usually be improved by this approach. Both numerically simulated and experimental results are presented.
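As a toy illustration of extracting joint parameters from measured frequency response functions (not the generalized coupling technique of the paper), the Python sketch below identifies the stiffness and damping of a joint connecting two grounded masses from a noisy drive-point receptance; all system properties and noise levels are assumed:

# Identify joint stiffness/damping of a 2-DOF assembly from a noisy FRF.
import numpy as np
from scipy.optimize import least_squares

m1, m2, k1, k2 = 1.0, 1.5, 1e4, 2e4      # known substructure properties

def receptance_11(omega, kj, cj):
    """Drive-point receptance H11(omega) of the assembled 2-DOF system."""
    zj = kj + 1j * omega * cj            # joint dynamic stiffness
    z11 = k1 + zj - omega**2 * m1
    z22 = k2 + zj - omega**2 * m2
    det = z11 * z22 - zj**2
    return z22 / det

omega = np.linspace(10, 400, 800)
true_kj, true_cj = 5e4, 20.0
rng = np.random.default_rng(5)
h_meas = receptance_11(omega, true_kj, true_cj)
h_meas = h_meas + rng.normal(0, 1e-7, omega.size) + 1j * rng.normal(0, 1e-7, omega.size)

def residuals(p):
    h = receptance_11(omega, *p)
    return np.concatenate([(h - h_meas).real, (h - h_meas).imag])

fit = least_squares(residuals, x0=[1e4, 1.0], bounds=([0, 0], [np.inf, np.inf]))
print(fit.x)   # should recover approximately [5e4, 20]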

