interval coverage
Recently Published Documents

TOTAL DOCUMENTS: 43 (five years: 22)
H-INDEX: 7 (five years: 2)

2021 ◽  
pp. 107699862110520
Author(s):  
Jin Liu ◽  
Robert A. Perera ◽  
Le Kang ◽  
Roy T. Sabo ◽  
Robert M. Kirkpatrick

This study proposes transformation functions and matrices between coefficients in the original and reparameterized parameter spaces for an existing linear-linear piecewise model to derive the interpretable coefficients directly related to the underlying change pattern. Additionally, the study extends the existing model to allow individual measurement occasions and investigates predictors for individual differences in change patterns. We present the proposed methods with simulation studies and a real-world data analysis. Our simulation study demonstrates that the method can generally provide an unbiased and accurate point estimate and appropriate confidence interval coverage for each parameter. The empirical analysis shows that the model can estimate the growth factor coefficients and path coefficients directly related to the underlying developmental process, thereby providing meaningful interpretation.
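The coverage claim above can be checked in miniature. The sketch below is an illustrative stand-in, not the authors' model or data: it fits a linear-linear piecewise model with a *known* knot by least squares (the third coefficient is the directly interpretable slope change), and estimates 95% confidence interval coverage for that coefficient over repeated simulations. The knot location, coefficients, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_piecewise(t, y, knot):
    # Linear-linear piecewise model with a known knot:
    #   y = b0 + b1*t + b2*max(t - knot, 0) + noise
    # b2 is the interpretable slope change after the knot.
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(t) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

t = np.linspace(0, 10, 50)
knot, true_beta = 4.0, np.array([1.0, 2.0, -1.5])
design = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])

reps, hits = 500, 0
for _ in range(reps):
    y = design @ true_beta + rng.normal(0, 1, t.size)
    beta, se = fit_piecewise(t, y, knot)
    lo, hi = beta[2] - 1.96 * se[2], beta[2] + 1.96 * se[2]
    hits += (lo <= true_beta[2] <= hi)
coverage = hits / reps   # should sit near the nominal 0.95
```

With a correctly specified model the empirical coverage lands close to the nominal 95%, which is the behavior the abstract's simulation study reports for its reparameterized coefficients.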


Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

Deep Neural Networks (DNNs) have shown great success in many fields, and various network architectures have been developed for different applications. Regardless of their complexity, however, DNNs do not provide model uncertainty. Bayesian Neural Networks (BNNs), on the other hand, are able to make probabilistic inferences. Among the various types of BNNs, Dropout as a Bayesian Approximation converts a Neural Network (NN) to a BNN by adding a dropout layer after each weight layer in the NN, providing a simple transformation from an NN to a BNN. For DNNs, however, adding a dropout layer after each weight layer leads to strong regularization because of the deep architecture. Previous research [1, 2, 3] has shown that adding a dropout layer after every weight layer in a DNN is unnecessary, but how to place dropout layers in a ResNet for regression tasks is less explored. In this work, we perform an empirical study of how different dropout placements affect the performance of a Bayesian DNN. We use a regression model modified from ResNet as the DNN and place dropout layers at different positions in the regression ResNet. Our experimental results show that it is not necessary to add a dropout layer after every weight layer in the regression ResNet for it to perform Bayesian inference. Placing dropout layers between the stacked blocks (i.e., Dense+Identity+Identity blocks) gives the best Predictive Interval Coverage Probability (PICP), while placing a dropout layer after each stacked block gives the best Root Mean Square Error (RMSE).
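PICP, the metric used above to compare dropout placements, is straightforward to compute from the stochastic forward passes MC dropout produces: take empirical quantiles across passes as the interval, then count how often targets fall inside. The sketch below uses synthetic numbers standing in for dropout passes; the noise model, pass count, and test-set size are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def picp(y_true, y_samples, alpha=0.05):
    # Empirical (1 - alpha) prediction interval per test point from the
    # rows of y_samples (one row per stochastic forward pass), then the
    # fraction of true targets falling inside their interval.
    lo = np.quantile(y_samples, alpha / 2, axis=0)
    hi = np.quantile(y_samples, 1 - alpha / 2, axis=0)
    return np.mean((y_true >= lo) & (y_true <= hi))

# Stand-in for T=200 MC-dropout passes over 1000 test points: each pass
# perturbs a shared mean prediction, mimicking dropout sampling noise.
mean_pred = rng.normal(0, 1, 1000)
passes = mean_pred + rng.normal(0, 0.5, (200, 1000))
y_true = mean_pred + rng.normal(0, 0.5, 1000)
coverage = picp(y_true, passes)   # near 0.95 when intervals are calibrated
```

A PICP close to the nominal level indicates calibrated uncertainty; a PICP well below it, as with overly strong regularization, indicates intervals that are too narrow.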


2021 ◽  
Vol 73 (10) ◽  
pp. 51-52
Author(s):  
Chris Carpenter

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 202265, “Leap of Faith From Conventional to EM Look-Ahead: A Game-Changing Technology To Improve Well Efficiency,” by Muhamad Yanuar Mahardi, Hendarsyah Hendarsyah, and Kharisma Endarmoyo, PT Pertamina, et al., prepared for the 2020 SPE Asia Pacific Oil and Gas Conference and Exhibition, originally scheduled to be held in Perth, Australia, 20–22 October. The paper has not been peer reviewed.

The structure in the Matindok block in Central Sulawesi operated by Pertamina has proven producible gas reserves in the Minahaki formation. One of the main challenges in this area is the low resolution of seismic data, leading to high depth uncertainty. The complete paper describes a technology developed to meet these challenges, with the capability to map and detect lithology changes ahead of the bit in real time.

Geological Background
A first exploration well, PEP-001, was drilled in 2018. The structure has a Miocene carbonate buildup play, and the target reservoir is the M pinnacle carbonate reef. The PEP-001 well was planned to set the 9⅝-in. casing point above the top of the M formation. Offset wells did not show any clear markers in the thick shale above the M formation that could have been used for log correlation. In previously drilled offset wells, correlation was performed conventionally by examination of cutting samples and drilling breaks. However, when Well PEP-001 was drilled, no apparent drilling break was observed, and by the time cuttings reached the surface, the bit had drilled 20 m into the M formation. Because the casing covered most of the upper carbonate formation, openhole logging and well-testing data were not acquired, so the target formation could not be delineated optimally.

The second exploration well, PEP-002, was planned with the objective of setting 9⅝-in. casing approximately 5 m above the top of M to acquire full-interval coverage of coring, openhole wireline logging, and well testing. This information was critical for optimal reservoir delineation to allow accurate reserves calculation and future development. Conventional correlation methods had proven insufficient for casing-point placement. The presence of limestone stringers close to the top of M in offset wells presented an additional challenge: the stringers could have been misinterpreted as the main carbonate body if interpretation were based solely on cutting samples.

Real-Time Electromagnetic (EM) Look-Ahead Technology


2021 ◽  
Author(s):  
Jo-Anne Bright ◽  
Shan-I Lee ◽  
John Buckleton ◽  
Duncan Alexander Taylor

In previously reported work, a method for applying a lower bound to the variation induced by the Monte Carlo effect was trialled. This is implemented in the widely used probabilistic genotyping system STRmix. The approach did not give the desired 99% coverage. However, the method for assigning the lower bound to the MCMC variability is only one of a number of layers of conservatism applied in a typical application. We tested all but one of these sources of variability collectively and term the result the near-global coverage. The near-global coverage for all tested samples was greater than 99.5% for inclusionary average LRs of known donors. This suggests that, when included in the probability interval method, the other layers of conservatism are more than adequate to compensate for the intermittent underperformance of the MCMC variability component. Running the MCMC for more accepts was also shown to improve precision.
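The kind of coverage check described above can be illustrated with a toy Monte Carlo experiment: assign a one-sided 99% lower bound to a noisy log10(LR) estimate and count how often the bound sits at or below the true value. Everything in the sketch (the true LR, noise level, and run sizes) is an assumption for illustration; this is not STRmix's actual MCMC or bounding method.

```python
import numpy as np

rng = np.random.default_rng(2)

true_log_lr = 6.0          # assumed "true" log10(LR) for a known donor
reps, accepts = 2000, 400  # illustrative run sizes, not STRmix settings

covered = 0
for _ in range(reps):
    # Each run yields noisy per-accept log10(LR) contributions; the run's
    # point estimate is their mean, and a one-sided 99% lower bound comes
    # from the normal approximation to the Monte Carlo standard error.
    draws = true_log_lr + rng.normal(0, 0.8, accepts)
    est = draws.mean()
    se = draws.std(ddof=1) / np.sqrt(accepts)
    lower = est - 2.326 * se     # z for a one-sided 99% bound
    covered += (lower <= true_log_lr)
coverage = covered / reps        # fraction of runs whose bound is valid
```

Here the bound covers the true value about 99% of the time; the abstract's point is that when this component occasionally undercovers, the other layers of conservatism in a typical application lift the near-global coverage above 99.5%.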


2021 ◽  
Vol 8 ◽  
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

Safety is an important issue in human–robot interaction (HRI) applications. Various research works have focused on different levels of safety in HRI. If a human or obstacle is detected, a repulsive action can be taken to avoid a collision. Common repulsive actions include distance methods, potential field methods, and safety field methods. Approaches based on machine learning are less explored for selecting the repulsive action, and few research works consider the uncertainty of data-based approaches or the efficiency of the executing task during collision avoidance. In this study, we describe a system that can avoid collision with human hands while the robot is executing an image-based visual servoing (IBVS) task. We use Monte Carlo dropout (MC dropout) to transform a deep neural network (DNN) into a Bayesian DNN and learn the repulsive position for hand avoidance. The Bayesian DNN allows IBVS to converge faster than using the opposite repulsive pose, and it allows the robot to avoid undesired poses that the DNN cannot. The experimental results show that the Bayesian DNN has adequate accuracy and generalizes well on unseen data. The predictive interval coverage probabilities (PICP) of the predictions along the x, y, and z directions are 0.84, 0.94, and 0.95, respectively. In regions unseen in the training data, the Bayesian DNN is also more robust than a plain DNN. We further implement the system on a UR10 robot and test the robustness of the Bayesian DNN and the IBVS convergence speed. Results show that the Bayesian DNN can avoid poses outside the robot's reach and lets the IBVS task converge faster than the opposite repulsive pose.


2021 ◽  
Author(s):  
Thomas Monks ◽  
Michael Allen

Background: We aimed to select and externally validate a benchmark method for emergency ambulance services to use when forecasting the daily number of calls that result in the dispatch of one or more ambulances. The study was conducted using standard methods known to the UK's NHS to aid implementation in practice.

Methods: We selected our benchmark model from a naive benchmark and 14 standard forecasting methods. Mean absolute scaled error (MASE) and 80% and 95% prediction interval coverage over an 84-day horizon were evaluated using time series cross-validation across eight time series from the South West of England. External validation was conducted by time series cross-validation across 13 time series from London, Yorkshire, and Welsh Ambulance Services.

Results: A model combining a simple average of Facebook's Prophet and regression with ARIMA errors (1, 1, 3)(1, 0, 1, 7) was selected. Benchmark MASE, 80% and 95% prediction interval coverage were 0.68 (95% CI 0.67–0.69), 0.847 (95% CI 0.843–0.851), and 0.965 (95% CI 0.949–0.977), respectively. Performance in the validation set was within expected ranges: MASE 0.73 (95% CI 0.72–0.74), 80% coverage 0.833 (95% CI 0.828–0.838), and 95% coverage 0.965 (95% CI 0.963–0.967).

Conclusions: We provide a robust, externally validated benchmark for future ambulance demand forecasting studies to improve on. Our benchmark forecasting model is high quality and usable by ambulance services. We provide a simple Python framework to aid its implementation in practice.
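The two evaluation metrics above, MASE and prediction interval coverage, can be sketched on a toy daily series with weekly seasonality. The series, noise level, and seasonal-naive forecaster below are illustrative assumptions, not the study's models or data; only the 84-day holdout length mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def mase(y_true, y_pred, y_train, m=7):
    # Mean absolute scaled error: out-of-sample MAE scaled by the
    # in-sample MAE of a seasonal-naive forecast with period m
    # (m=7 for daily data with a weekly cycle).
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / scale

def interval_coverage(y_true, lower, upper):
    return np.mean((y_true >= lower) & (y_true <= upper))

# Toy daily-call series with weekly seasonality.
days = np.arange(400)
series = 200 + 15 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, 400)
train, test = series[:316], series[316:]          # 84-day holdout

# Seasonal-naive forecast: repeat the last observed week across the horizon.
naive = train[-7:][np.tile(np.arange(7), 12)]
score = mase(test, naive, train, m=7)             # ~1.0 for seasonal naive

# Interval width reflects that the forecast error is the difference of
# two independent noise terms (sd = 5 * sqrt(2)).
half = 1.96 * np.sqrt(2) * 5
cover = interval_coverage(test, naive - half, naive + half)
```

A MASE near 1 means the candidate performs like the seasonal-naive scale baseline; the selected Prophet/ARIMA combination's 0.68 is well below that.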


Author(s):  
Heather Kitada Smalley ◽  
Sarah C. Emerson ◽  
Virginia Lesser

In this chapter, we develop theory and methodology to support mode adjustment and hindcasting/forecasting in the presence of different possible mode effect types, including additive effects and odds-multiplicative effects. Mode adjustment is particularly important if the ultimate goal is to report one aggregate estimate of response parameters, and to allow comparison with historical surveys performed with different modes. The effect type has important consequences for inferential validity when the baseline response changes over time (i.e., when there is a time trend or time effect). We present a methodology that provides inference for additive and odds-multiplicative effect types and demonstrate its performance in a simulation study. We also show that if the wrong effect type is assumed, the resulting inference can be invalid: confidence interval coverage is greatly reduced and estimates can be biased.
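The distinction between additive and odds-multiplicative mode effects can be made concrete with a small numeric sketch (the proportions and effect sizes are assumed for illustration): applying the same additive shift at two baseline levels and then adjusting as if the effect were odds-multiplicative leaves a residual bias that varies with the baseline, which is why assuming the wrong effect type invalidates inference when the baseline drifts over time.

```python
def to_odds(p):
    return p / (1 - p)

def from_odds(o):
    return o / (1 + o)

# Illustrative baseline response proportions at two time points.
p_t1, p_t2 = 0.30, 0.50

delta, gamma = 0.10, 1.5   # assumed additive / odds-multiplicative effects

# Observed proportions under the new mode, by effect type:
add_t1, add_t2 = p_t1 + delta, p_t2 + delta            # additive effect
mult_t1 = from_odds(gamma * to_odds(p_t1))             # odds-multiplicative
mult_t2 = from_odds(gamma * to_odds(p_t2))

# Adjust the additively shifted data back assuming the *wrong*
# (odds-multiplicative) effect type:
wrong_t1 = from_odds((1 / gamma) * to_odds(add_t1))
wrong_t2 = from_odds((1 / gamma) * to_odds(add_t2))
bias_t1 = wrong_t1 - p_t1   # residual bias at baseline 0.30
bias_t2 = wrong_t2 - p_t2   # residual bias at baseline 0.50
```

Because the two biases differ, no single correction factor repairs a misspecified effect type once the baseline moves, so intervals built under the wrong assumption undercover.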


2021 ◽  
Vol 11 (6) ◽  
pp. 2538
Author(s):  
Fermín Rodríguez ◽  
Najmeh Bazmohammadi ◽  
Josep M. Guerrero ◽  
Ainhoa Galarza

Very short-term load demand forecasters are essential for power systems' decision makers in real-time dispatching. These tools allow traditional network operators to maintain power system safety and stability and to provide customers with highly reliable energy. Although research has traditionally focused on developing point forecasters, these tools do not provide complete information because they do not estimate the deviation between actual and predicted values. Therefore, the aim of this paper is to develop a very short-term probabilistic prediction interval forecaster that reduces decision makers' uncertainty by computing upper and lower bounds for the predicted value. The proposed forecaster combines an artificial-intelligence-based point forecaster with a probabilistic prediction interval algorithm. First, the point forecaster predicts energy demand in the next 15 min; then the prediction interval algorithm calculates the upper and lower bounds at the user's chosen confidence level. To examine the reliability of the proposed forecaster and the sharpness of the resulting intervals, different error metrics, such as the prediction interval coverage percentage and a skill score, are computed for 95, 90, and 85% confidence intervals. Results show that the prediction interval coverage percentage is higher than the confidence level in each analysis, which means that the proposed model is valid for practical applications.
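The two evaluation metrics mentioned, coverage percentage and a skill score, can be sketched as follows. The Winkler (interval) score is used here as one common interval skill score; the paper does not specify which score it uses, and the demand signal and interval width below are illustrative assumptions rather than the paper's forecaster.

```python
import numpy as np

rng = np.random.default_rng(4)

def winkler(y, lo, hi, alpha):
    # Interval (Winkler) score: interval width plus a penalty proportional
    # to the miss distance when the target falls outside; lower is better,
    # so it rewards intervals that are both sharp and valid.
    width = hi - lo
    below = (lo - y) * (y < lo)
    above = (y - hi) * (y > hi)
    return np.mean(width + (2 / alpha) * (below + above))

# Toy 15-min-ahead load intervals around a noisy demand signal:
# one day of 96 fifteen-minute loads (values are illustrative).
demand = 1000 + rng.normal(0, 20, 96)
half_width = 1.96 * 20                       # assumed 95% interval half-width
lo, hi = demand.mean() - half_width, demand.mean() + half_width

picp95 = np.mean((demand >= lo) & (demand <= hi))   # coverage percentage
score95 = winkler(demand, lo, hi, alpha=0.05)       # sharpness + penalty
```

Coverage alone can be gamed by very wide intervals; pairing it with an interval score, as the paper does, penalizes that.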


2021 ◽  
Author(s):  
David Banda Carrasco ◽  
Violeta Tolorza ◽  
Mauricio Galleguillos

Novel estimations of the consequences of burn severity are relevant to improving the understanding of spatial ecosystem dynamics between soil and vegetation. In this study, we implemented digital soil mapping (DSM) with random forest (RF) and generalized additive model (GAM) as internal statistical models to generate maps for spatial prediction of chemical parameters of post-fire litter (N, P, C, and OM) in the Purapel River basin, Maule region of Chile. The response variables were the chemical characterization of 67 litter samples collected on different hillslopes of the basin during the first post-fire winter. The predictive variables that fed the RF model were spectral, topographic, and vegetation-structure derivations obtained from free and private satellite products (Sentinel-1, Sentinel-2, LiDAR, and TanDEM-X). As a result, we generated maps of the post-fire spatial distribution of N, P, C, and OM with acceptable fit (R² 0.52–0.61, nRMSE 54–72, pbias 0.35–1.20). The uncertainty associated with the predictions of these variables was successfully evaluated with the prediction interval coverage probability (PICP). A clear decrease in the concentration of litter elements is observed with increasing burn severity, and this relationship depends on the cover type and the environmental gradient where the elements are distributed.


Author(s):  
Fei Jin ◽  
Xiaoliang Liu ◽  
Fangfang Xing ◽  
Guoqiang Wen ◽  
Shuangkun Wang ◽  
...  

Background: Day-ahead load forecasting is an essential guideline for power generation and is of considerable significance in power dispatch. Objective: Most existing load probability prediction methods use historical data to predict a single area and rarely exploit spatial and temporal correlations of load to improve prediction accuracy. Methods: This paper presents a method for day-ahead load probability prediction based on space-time correction. First, kernel density estimation (KDE) is employed to model the prediction error of a long short-term memory (LSTM) model, yielding the residual distribution. Then, correlation values are used to correct the model's predictions over parts of the test period along the temporal and spatial dimensions. Results: The experiments used three years of load data from 10 areas of a city in northern China. The MAPE of the two corrected models on their respective test sets is reduced by an average of 10.2% and 6.1% compared with the uncorrected results, and the interval coverage of the probability prediction is increased by an average of 4.2% and 1.8%. Conclusion: The test results show that the proposed correction schemes are feasible.
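The KDE step can be sketched as follows: estimate the residual distribution of a point forecaster with a Gaussian KDE, then attach its quantiles to a new point forecast to form a prediction interval. The forecasts, noise, and bandwidth rule below are illustrative assumptions (a hand-rolled KDE with Silverman's rule, not the paper's LSTM pipeline).

```python
import numpy as np

rng = np.random.default_rng(5)

def kde_quantile(resid, q, grid=2000):
    # Gaussian KDE of the residual distribution (Silverman's bandwidth),
    # then the q-quantile read off the smoothed CDF on a fine grid.
    n = resid.size
    bw = 1.06 * resid.std(ddof=1) * n ** (-1 / 5)
    xs = np.linspace(resid.min() - 3 * bw, resid.max() + 3 * bw, grid)
    pdf = np.exp(-0.5 * ((xs[:, None] - resid[None, :]) / bw) ** 2).sum(axis=1)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return xs[np.searchsorted(cdf, q)]

# Illustrative point forecasts and actual loads for one area (stand-ins
# for an LSTM's outputs); the forecaster is deliberately biased and noisy.
actual = 500 + rng.normal(0, 30, 300)
forecast = actual + rng.normal(5, 25, 300)
resid = actual - forecast

# 90% interval for the next point forecast from the KDE residual quantiles.
lo_q, hi_q = kde_quantile(resid, 0.05), kde_quantile(resid, 0.95)
new_forecast = 510.0
interval = (new_forecast + lo_q, new_forecast + hi_q)
```

Because the KDE captures the residuals' bias and spread without assuming normality, the resulting interval is centered off the point forecast when the forecaster is systematically biased, which is what makes the subsequent coverage correction meaningful.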

