Gaussian process latent variable model based on immune clonal selection for SAR target feature extraction and recognition

2013 ◽ Vol 32 (3) ◽ pp. 231
Author(s): Xiang-Rong ZHANG, Li-Min GOU, Yang-Yang LI, Jie FENG, Li-Cheng JIAO


Energies ◽ 2021 ◽ Vol 14 (11) ◽ pp. 3137
Author(s): Amine Tadjer, Reider B. Bratvold, Remus G. Hanea

Production forecasting is the basis for decision making in the oil and gas industry and can be quite challenging, especially when it involves complex geological modeling of the subsurface. To help solve this problem, assisted history matching built on ensemble-based analysis, such as the ensemble smoother and the ensemble Kalman filter, is useful for estimating models that preserve geological realism and have predictive capabilities. These methods tend, however, to be computationally demanding, as they require a large ensemble size for stable convergence. In this paper, we propose a novel method of uncertainty quantification and reservoir model calibration with much-reduced computation time. The approach sequentially combines nonlinear dimensionality reduction (t-distributed stochastic neighbor embedding or the Gaussian process latent variable model) and K-means clustering with the data assimilation method ensemble smoother with multiple data assimilation. The cluster analysis with t-distributed stochastic neighbor embedding or the Gaussian process latent variable model is used to reduce the number of initial geostatistical realizations and to select a set of optimal reservoir models whose production performance is similar to that of the reference model. We then apply ensemble smoother with multiple data assimilation to provide reliable assimilation results. Experimental results based on the Brugge field case data verify the efficiency of the proposed approach.
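The model-selection step in this abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes each prior realization is summarized by a vector of simulated production responses, embeds the ensemble with t-SNE, clusters the embedding with K-means, and keeps the realization closest to each cluster centroid as a representative model for the subsequent ES-MDA step. The function name `select_representative_models`, the ensemble size, and the number of clusters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): reduce an ensemble of reservoir
# realizations with t-SNE, cluster the embedding with K-means, and keep one
# representative realization per cluster for the later ES-MDA update.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans


def select_representative_models(responses, n_clusters=10, random_state=0):
    """responses: (n_realizations, n_features) array of simulated production data."""
    # Nonlinear dimensionality reduction of the production responses.
    embedding = TSNE(n_components=2, random_state=random_state).fit_transform(responses)

    # Group realizations with similar production behaviour.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(embedding)

    # Keep the realization closest to each cluster centroid.
    selected = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(embedding[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.sort(np.array(selected))


# Example with synthetic data: 500 prior realizations, 60 response values each.
rng = np.random.default_rng(0)
responses = rng.normal(size=(500, 60))
idx = select_representative_models(responses, n_clusters=20)
print("Selected realizations:", idx)
```

Swapping `TSNE` for a GPLVM implementation would follow the same pattern: only the embedding step changes, while the clustering and representative-model selection remain identical.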


2019 ◽ Vol 5 (1)
Author(s): Victoria Savalei, Steven P. Reise

McNeish (2018) advocates that researchers abandon coefficient alpha in favor of alternative reliability measures, such as the 1-factor reliability (coefficient omega), a total reliability coefficient based on an exploratory bifactor solution (“Revelle’s omega total”), and the glb (“greatest lower bound”). McNeish supports this argument by demonstrating that these coefficients produce higher sample values in several examples. We express three main disagreements with this article. First, we show that McNeish exaggerates the extent to which alpha is different from omega when unidimensionality holds. Second, we argue that, when unidimensionality is violated, most alternative reliability coefficients are model-based, and it is critical to carefully select the underlying latent variable model rather than relying on software defaults. Third, we point out that higher sample reliability values do not necessarily capture population reliability better: many alternative reliability coefficients are upwardly biased except in very large samples. We conclude with a set of alternative recommendations for researchers.
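To make the contrast concrete, the short Python sketch below (simulated data and an illustrative estimator, not the authors' analysis) computes sample coefficient alpha directly from the item covariance matrix and 1-factor omega from an estimated factor model. It shows that omega is model-based, requiring estimated loadings and unique variances (here via scikit-learn's `FactorAnalysis` as a stand-in estimator), whereas alpha needs only the observed covariances.

```python
# Minimal sketch (assumed data, illustrative estimator): coefficient alpha vs.
# 1-factor omega. Alpha uses only the sample covariance matrix; omega requires
# an explicit 1-factor model whose loadings and unique variances are estimated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n, k = 1000, 6
eta = rng.normal(size=(n, 1))                 # common factor scores
loadings_true = np.full((1, k), 0.7)          # assumed equal loadings
X = eta @ loadings_true + rng.normal(scale=0.6, size=(n, k))

# Coefficient alpha from the sample covariance matrix.
S = np.cov(X, rowvar=False)
alpha = (k / (k - 1)) * (1 - np.trace(S) / S.sum())

# 1-factor omega from estimated loadings (lam) and unique variances (theta).
fa = FactorAnalysis(n_components=1).fit(X)
lam = fa.components_.ravel()
theta = fa.noise_variance_
omega = lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

print(f"alpha = {alpha:.3f}, omega = {omega:.3f}")
```

When the 1-factor model holds and loadings are roughly equal, as in this simulated example, the two coefficients are close; the disagreements discussed above arise when unidimensionality is violated and the choice of latent variable model matters.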


2010 ◽ Vol 73 (10-12) ◽ pp. 2186-2195
Author(s): Xiumei Wang, Xinbo Gao, Yuan Yuan, Dacheng Tao, Jie Li
