Industrial Prediction Intervals with Data Uncertainty

Author(s):  
Jun Zhao ◽  
Wei Wang ◽  
Chunyang Sheng

Author(s):  
Guanglu Zhang ◽  
Douglas Allaire ◽  
Daniel A. McAdams ◽  
Venkatesh Shankar

Technology evolution prediction, or technological forecasting, is critical for designers to make important decisions during product development planning, such as R&D investment and outsourcing. In practice, designers want to supplement point forecasts with prediction intervals to assess future uncertainty and make contingency plans. Available technology evolution data form a time series, but generally one with non-uniform spacing. Existing methods associated with typical time series models assume uniformly spaced data, so these methods cannot be used to construct prediction intervals for technology evolution prediction. In this paper, we develop a generic method that uses bootstrapping to generate prediction intervals for technology evolution. The method we develop can be applied to any technology evolution prediction model. We consider parameter uncertainty and data uncertainty and establish their empirical probability distributions. We determine an appropriate significance level α for the prediction intervals through a holdout sample analysis, rather than setting α = 0.05 as is typically done in the literature. We validate our method through a case study of central processing unit (CPU) transistor count evolution. The case study shows that the prediction intervals generated by our method cover every actual data point in a holdout sample test. To apply our method in practice, we outline four steps for designers to generate prediction intervals for technology evolution prediction.


2019 ◽  
Vol 141 (6) ◽  
Author(s):  
Guanglu Zhang ◽  
Douglas Allaire ◽  
Daniel A. McAdams ◽  
Venkatesh Shankar

Technology evolution prediction is critical for designers, business managers, and entrepreneurs to make important decisions during product development planning, such as R&D investment and outsourcing. In practice, designers want to supplement point forecasts with prediction intervals to assess future uncertainty and make contingency plans accordingly. However, prediction interval generation for technology evolution has received scant attention in the literature. In this paper, we develop a generic method that uses bootstrapping to generate prediction intervals for technology evolution. The method we develop can be applied to any model that describes incremental change in technology performance. We consider parameter uncertainty and data uncertainty and establish their empirical probability distributions. We determine an appropriate significance level for the prediction intervals through a holdout sample analysis, rather than fixing it at 0.05 as is typically done in the literature. In addition, our method provides the probability distribution of each parameter in a prediction model. This probability distribution is valuable when parameter values are associated with the impact factors of technology evolution. We validate our method through two case studies of central processing units (CPUs) and passenger airplanes. These case studies show that the prediction intervals generated by our method cover every actual data point in the holdout sample tests. We outline four steps to generate prediction intervals for technology evolution prediction in practice.
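The core idea of residual bootstrapping for prediction intervals can be illustrated with a minimal sketch. The example below is NOT the paper's exact procedure: it assumes a simple log-linear (Moore's-law-style) growth model and a fixed significance level, whereas the paper's generic method accommodates any incremental-change model and tunes the significance level via a holdout analysis. It does, however, capture the two uncertainty sources the abstract names: parameter uncertainty (refitting on residual-resampled data) and data uncertainty (adding a resampled residual to each forecast).

```python
import numpy as np

def bootstrap_prediction_interval(years, log_perf, horizon,
                                  n_boot=2000, alpha=0.05, seed=0):
    """Residual-bootstrap prediction intervals for a log-linear
    technology growth model. Illustrative sketch only; the model
    form, alpha, and n_boot are assumptions, not the paper's choices.
    Non-uniform year spacing is handled naturally by the regression."""
    rng = np.random.default_rng(seed)
    years = np.asarray(years, dtype=float)
    log_perf = np.asarray(log_perf, dtype=float)

    # Fit log(performance) = b0 + b1 * year by least squares.
    X = np.column_stack([np.ones_like(years), years])
    beta, *_ = np.linalg.lstsq(X, log_perf, rcond=None)
    resid = log_perf - X @ beta

    future = years[-1] + np.arange(1, horizon + 1)
    Xf = np.column_stack([np.ones_like(future), future])

    sims = np.empty((n_boot, horizon))
    for b in range(n_boot):
        # Parameter uncertainty: refit on residual-resampled data.
        y_star = X @ beta + rng.choice(resid, size=len(resid), replace=True)
        beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        # Data uncertainty: perturb each forecast with a resampled residual.
        sims[b] = Xf @ beta_star + rng.choice(resid, size=horizon, replace=True)

    point = Xf @ beta
    lower = np.quantile(sims, alpha / 2, axis=0)
    upper = np.quantile(sims, 1 - alpha / 2, axis=0)
    return point, lower, upper
```

The empirical quantiles of the simulated forecasts play the role of the bootstrapped probability distributions described in the abstract; the per-bootstrap `beta_star` values likewise give an empirical distribution for each model parameter.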


Author(s):  
Jarkko P. P. Jääskelä ◽  
Anthony Yates

2021 ◽  
Vol 5 (1) ◽  
pp. 51
Author(s):  
Enriqueta Vercher ◽  
Abel Rubio ◽  
José D. Bermúdez

We present a new forecasting scheme based on the credibility distribution of fuzzy events. This approach allows us to build prediction intervals using the first differences of the time series data. Additionally, the credibility expected value enables us to estimate the k-step-ahead pointwise forecasts. We analyze the coverage of the prediction intervals and the accuracy of the pointwise forecasts using different credibility approaches based on the upper differences. The comparative results were obtained using yearly time series from the M4 Competition. We also report the performance and computational cost of our proposal compared with automatic forecasting procedures.
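The general idea of building k-step-ahead intervals from first differences can be sketched without the fuzzy machinery. The example below is a plain empirical stand-in, not the credibility-distribution method itself: it resamples observed first differences where the paper uses credibility measures, and it uses the sample-mean drift as a crude substitute for the credibility expected value.

```python
import numpy as np

def diff_resample_intervals(series, k, n_sims=2000, alpha=0.2, seed=0):
    """k-step-ahead point forecasts and intervals built from the
    empirical distribution of first differences. A simplified analogue
    of the credibility-based scheme: plain resampling replaces the
    credibility distribution, and the mean drift replaces the
    credibility expected value. All defaults are assumptions."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    d = np.diff(series)

    # Simulate k-step paths by resampling observed first differences.
    steps = rng.choice(d, size=(n_sims, k), replace=True)
    paths = series[-1] + np.cumsum(steps, axis=1)

    point = series[-1] + np.arange(1, k + 1) * d.mean()
    lower = np.quantile(paths, alpha / 2, axis=0)
    upper = np.quantile(paths, 1 - alpha / 2, axis=0)
    return point, lower, upper
```

Working on first differences, as the abstract describes, sidesteps trend non-stationarity in the levels: the resampled quantities are (approximately) exchangeable increments rather than trending observations.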


Author(s):  
Mythili K. ◽  
Manish Narwaria

Quality assessment of audiovisual (AV) signals is important from the perspective of system design, optimization, and management of a modern multimedia communication system. However, automatic prediction of AV quality via computational models remains challenging. In this context, machine learning (ML) appears to be an attractive alternative to traditional approaches, especially when the assessment must be made in a no-reference fashion (i.e., when the original signal is unavailable). While the development of ML-based quality predictors is desirable, we argue that proper assessment and validation of such predictors is also crucial before they can be deployed in practice. To this end, we raise some fundamental questions about the current approach to ML-based model development for AV quality assessment, and for signal processing in multimedia communication in general. We also identify specific limitations of the current validation strategy that have implications for the analysis and comparison of ML-based quality predictors. These include a lack of consideration of: (a) data uncertainty, (b) domain knowledge, (c) the explicit learning ability of the trained model, and (d) the interpretability of the resultant model. The primary goal of this article is therefore to shed light on these factors. Our analysis and recommendations are of particular importance given the significant interest in ML methods for multimedia signal processing (specifically where human-labeled data are used) and the lack of discussion of these issues in the existing literature.

