Conditional Expectation in an Uncertainty Space

2019 ◽ Vol 1180 ◽ pp. 012003
Author(s): E Aujero, M Frondoza, E De Lara-Tuprio, R Eden, T R Teng
Author(s): Seyed Kourosh Mahjour, Antonio Alberto Souza Santos, Manuel Gomes Correia, Denis José Schiozer

Abstract: The simulation process under uncertainty requires numerous reservoir models, which can be very time-consuming. Hence, it is necessary to select representative models (RMs) that capture the uncertainty space of the full ensemble. In this work, we compare two scenario reduction techniques: (1) Distance-based Clustering with Simple Matching Coefficient (DCSMC), applied before the simulation process using reservoir static data, and (2) a metaheuristic algorithm (the RMFinder technique), applied after the simulation process using reservoir dynamic data. We use these two methods as samples to investigate the effect of static and dynamic data usage on the accuracy and speed of the scenario reduction process, focusing on field development purposes. We use a synthetic benchmark case named UNISIM-II-D, built with flow-unit modelling. The results show that both scenario reduction methods are reliable for selecting RMs from a specific production strategy. However, the RMs obtained from a given strategy using the DCSMC method can be applied to other strategies while preserving the representativeness of the models, whereas the strategy type plays a substantial role in RM selection with the metaheuristic method, so each strategy has its own set of RMs. Because of the field development workflow in which the metaheuristic algorithm is used, the number of required flow simulation models and the computational time are greater than in the workflow that applies the DCSMC method. Hence, we conclude that using static reservoir data in the scenario reduction process can be more reliable during the field development phase.
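To make the DCSMC idea concrete, here is a minimal sketch of distance-based clustering with the simple matching coefficient on binary static attributes, selecting one representative model per cluster. The ensemble size, attribute count, cluster count, and the use of average-linkage hierarchical clustering are illustrative assumptions, not the authors' exact implementation; for binary vectors the Hamming distance equals one minus the simple matching coefficient.

```python
# Hedged sketch of DCSMC-style scenario reduction on binary static data.
# All sizes and the linkage choice below are assumptions for illustration.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Hypothetical ensemble: 200 reservoir models described by 30 binary
# static attributes (e.g., facies/flow-unit indicators).
models = rng.integers(0, 2, size=(200, 30)).astype(bool)

# Simple matching distance: for binary data, Hamming distance = 1 - SMC.
dist = pdist(models, metric="hamming")

# Cluster the ensemble *before* any flow simulation is run.
k = 9  # desired number of representative models (assumption)
labels = fcluster(linkage(dist, method="average"), t=k, criterion="maxclust")

# One representative model (RM) per cluster: the medoid, i.e. the member
# with the smallest total distance to the rest of its cluster.
D = squareform(dist)
rms = []
for c in range(1, k + 1):
    idx = np.flatnonzero(labels == c)
    rms.append(idx[D[np.ix_(idx, idx)].sum(axis=1).argmin()])
print("Representative models:", sorted(rms))
```

Only the k selected models then need full flow simulation, which is the source of the computational saving the abstract attributes to the static-data workflow.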


2010 ◽ Vol 101 (9) ◽ pp. 2250-2253
Author(s): Christopher S. Withers, Saralees Nadarajah

1998 ◽ Vol 52 (3) ◽ pp. 248
Author(s): Michael A. Proschan, Brett Presnell

Author(s): Juan Luis Fernández-Martínez, Ana Cernea

In this paper, we present a supervised ensemble learning algorithm, called SCAV1, and its application to face recognition. This algorithm exploits the uncertainty space of the ensemble classifiers. Its design includes six different nearest-neighbor (NN) classifiers based on diverse image attributes: histogram, variogram, texture analysis, edges, bidimensional discrete wavelet transform, and Zernike moments. In this approach, each attribute, together with its type of analysis (local or global) and its distance criterion (p-norm), induces a different individual NN classifier. The ensemble classifier SCAV1 depends on a set of parameters: the number of candidate images each individual method uses to perform the final classification, and the weight given to each individual classifier. These parameters are optimized/sampled in a supervised manner via the regressive particle swarm optimization algorithm (RR-PSO). The final classifier exploits the uncertainty space of SCAV1 and uses majority voting (Borda count) as its final decision rule. We show the application of this algorithm to the ORL and PUT image databases, obtaining very high and stable accuracies (100% median accuracy and an almost null interquartile range). In conclusion, exploring the uncertainty space of ensemble classifiers yields optimal results and appears to be the appropriate strategy for face recognition and other classification problems.
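The decision rule is the part of SCAV1 that is easiest to illustrate. Below is a minimal sketch of an ensemble of p-norm nearest-neighbor classifiers combined by a weighted Borda count; the feature dimensions, weights, and candidate count are placeholders, not the optimized SCAV1 parameters, and the random features stand in for the six image attributes named above.

```python
# Hedged sketch: one NN classifier per attribute space + weighted Borda count.
import numpy as np

def nn_ranking(train_feats, train_labels, query_feat, n_candidates, p=2):
    """Rank the n_candidates nearest training images under a p-norm."""
    d = np.linalg.norm(train_feats - query_feat, ord=p, axis=1)
    return [train_labels[i] for i in np.argsort(d)[:n_candidates]]

def borda_vote(rankings, weights):
    """Weighted Borda count: rank r in a list of length m scores (m - r)."""
    scores = {}
    for ranking, w in zip(rankings, weights):
        m = len(ranking)
        for r, label in enumerate(ranking):
            scores[label] = scores.get(label, 0.0) + w * (m - r)
    return max(scores, key=scores.get)

# Hypothetical setup: three attribute spaces (stand-ins for histogram,
# texture, wavelets, ...), 40 subjects with 5 images each, one query.
rng = np.random.default_rng(1)
labels = list(np.repeat(np.arange(40), 5))
feats = [rng.normal(size=(200, d)) for d in (32, 64, 16)]
query = [f[7] + 0.05 * rng.normal(size=f.shape[1]) for f in feats]

rankings = [nn_ranking(f, labels, q, n_candidates=10)
            for f, q in zip(feats, query)]
weights = [1.0, 0.8, 0.6]  # per-classifier weights (tunable, e.g. by PSO)
print("Predicted identity:", borda_vote(rankings, weights))
```

In the paper, the per-classifier weights and the candidate count are exactly the parameters tuned by RR-PSO; here they are fixed by hand for brevity.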


1970 ◽ Vol 2 (02) ◽ pp. 179-228
Author(s): Harry Kesten

In this last part the F_n(i) and M_n(i) are considered as random variables whose distributions are described by the model and various mating rules of Section 2. Several convergence results will be proved for those specific mating rules, but we begin with the more general convergence Theorem 6.1. The proof of this theorem brings out the basic idea of this section, namely that when F_n and M_n are large, F_{n+1}(i) and M_{n+1}(i) will, with high probability, be close to a certain function of F_n(·) and M_n(·) (roughly the conditional expectation of F_{n+1}(i) and M_{n+1}(i) given F_n(·) and M_n(·)).
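In symbols, the approximation the excerpt describes can be written as follows (notation follows the excerpt; the precise function of F_n(·) and M_n(·) is fixed by the mating rules of the paper's Section 2):

\[
F_{n+1}(i) \approx \mathbb{E}\left[ F_{n+1}(i) \mid F_n(\cdot),\, M_n(\cdot) \right],
\qquad
M_{n+1}(i) \approx \mathbb{E}\left[ M_{n+1}(i) \mid F_n(\cdot),\, M_n(\cdot) \right],
\]

with the approximation holding with high probability when F_n and M_n are large.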

