Factor Analysis of Well Logs for Total Organic Carbon Estimation in Unconventional Reservoirs

Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5978
Author(s):  
Norbert P. Szabó ◽  
Rafael Valadez-Vergara ◽  
Sabuhi Tapdigli ◽  
Aja Ugochukwu ◽  
István Szabó ◽  
...  

Several approaches have been applied for the evaluation of formation organic content. For further developments in the interpretation of organic richness, this research proposes a multivariate statistical method for exploring the interdependencies between the well logs and model parameters. A factor analysis-based approach is presented for the quantitative determination of total organic content of shale formations. Uncorrelated factors are extracted from well logging data using Jöreskog’s algorithm, and then the factor logs are correlated with estimated petrophysical properties. Whereas the first factor holds information on the amount of shaliness, the second is identified as an organic factor. The estimation method is applied both to synthetic and real datasets from different reservoir types and geologic basins, i.e., Derecske Trough in East Hungary (tight gas); Kingak formation in North Slope Alaska, United States of America (shale gas); and shale source rock formations in the Norwegian continental shelf. The estimated total organic content logs are verified by core data and/or results from other indirect estimation methods such as interval inversion, artificial neural networks and cluster analysis. The presented statistical method used for the interpretation of wireline logs offers an effective tool for the evaluation of organic matter content in unconventional reservoirs.
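The core idea (extract uncorrelated factors from standardized logs, then identify the organic factor by correlation) can be sketched on synthetic data. This is a hedged illustration using scikit-learn's FactorAnalysis rather than the authors' Jöreskog algorithm; the log names and mixing coefficients below are invented for the example:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Two hypothetical latent drivers: shaliness and organic content
shale = rng.normal(size=n)
organic = rng.normal(size=n)

# Four synthetic "well logs" mixing the latent drivers (coefficients invented)
logs = np.column_stack([
    2.0 * shale + 0.3 * rng.normal(size=n),     # e.g., gamma ray
    -1.5 * shale + 0.3 * rng.normal(size=n),    # e.g., deep resistivity
    1.8 * organic + 0.3 * rng.normal(size=n),   # e.g., sonic transit time
    -1.2 * organic + 0.3 * rng.normal(size=n),  # e.g., bulk density
])
logs = (logs - logs.mean(axis=0)) / logs.std(axis=0)  # standardize

# Extract two uncorrelated factors and find the one tracking organic content
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
factors = fa.fit_transform(logs)
corr = [abs(np.corrcoef(factors[:, k], organic)[0, 1]) for k in range(2)]
organic_factor = int(np.argmax(corr))
print(f"factor {organic_factor} tracks the organic signal, |r| = {max(corr):.2f}")
```

In practice the factor scores would be regressed against core-measured TOC rather than a known latent signal.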

2021 ◽  
Vol 11 (15) ◽  
pp. 6701
Author(s):  
Yuta Sueki ◽  
Yoshiyuki Noda

This paper discusses a real-time flow-rate estimation method for a tilting-ladle-type automatic pouring machine used in the casting industry. In most pouring machines, molten metal is poured into a mold by tilting the ladle. Precise pouring is required to improve productivity and ensure a safe pouring process. To achieve precise pouring, it is important to control the flow rate of the liquid outflow from the ladle. However, due to the high temperature of molten metal, directly measuring the flow rate to devise flow-rate feedback control is difficult. To solve this problem, specific flow-rate estimation methods have been developed. In a previous study by the present authors, a simplified flow-rate estimation method was proposed in which Kalman filters were decentralized to the motor systems and the pouring process, so that the method could be implemented in the industrial controller of an automatic pouring machine using a ladle with a complicated shape. The effectiveness of this flow-rate estimation was verified in experiments under ideal conditions. In the present study, the appropriateness of real-time flow-rate estimation by decentralized Kalman filters is verified by comparing it with two other existing real-time flow-rate estimation approaches, i.e., the time derivative of the weight of the outflow liquid measured by a load cell and the time derivative of the liquid volume in the ladle measured by a visible camera. In particular, we confirmed the estimation errors of the candidate real-time flow-rate estimation methods in experiments with uncertain model parameters. These flow-rate estimation methods were applied to a laboratory-type automatic pouring machine to verify their performance.
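The decentralized-filter details are beyond an abstract, but the basic building block, a Kalman filter recovering a flow rate as the time derivative of a noisy load-cell weight signal, can be sketched as follows. This is a minimal illustration with invented noise levels, not the authors' pouring-process model:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                    # sampling period [s] (invented)
true_q = 0.5                 # true outflow rate [kg/s], constant here
t = np.arange(0.0, 10.0, dt)
true_w = true_q * t          # accumulated outflow weight
z = true_w + rng.normal(scale=0.05, size=t.size)  # noisy load-cell reading

# State x = [weight, flow rate]; constant-flow-rate process model
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-4])    # process noise (tuning parameters, invented)
R = np.array([[0.05 ** 2]])  # measurement noise

x = np.zeros(2)
P = np.eye(2)
for zk in z:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the load-cell measurement
    y = zk - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated flow rate: {x[1]:.3f} kg/s (true {true_q})")
```

The filter avoids the noise amplification of directly differentiating the weight signal, which is the weakness of the load-cell comparison method.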


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1578 ◽  
Author(s):  
Hazem Al-Mofleh ◽  
Ahmed Z. Afify ◽  
Noor Akma Ibrahim

In this paper, a new two-parameter generalized Ramos–Louzada distribution is proposed. The proposed model provides more flexibility in modeling data with increasing, decreasing, J-shaped, and reversed-J shaped hazard rate functions. Several statistical properties of the model were derived. The unknown parameters of the new distribution were estimated using eight frequentist estimation approaches. These approaches are important for developing guidelines on choosing the best method of estimation for the model parameters, which would be of great interest to practitioners and applied statisticians. Detailed numerical simulations are presented to examine the bias and the mean square error of the proposed estimators. The best estimation method and the ordering performance of the estimators were determined using the partial and overall ranks of all estimation methods for various parameter combinations. The performance of the proposed distribution is illustrated using two real datasets from the fields of medicine and geology; both datasets show that the new model is more appropriate than the Marshall–Olkin exponential, exponentiated exponential, beta exponential, gamma, Poisson–Lomax, Lindley geometric, generalized Lindley, and Lindley distributions, among others.
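The simulation protocol (simulate, estimate with several frequentist methods, compare bias and MSE) can be illustrated generically. Since the generalized Ramos–Louzada distribution is not available in common libraries, this hedged sketch uses a gamma distribution as a stand-in and compares just two of the eight kinds of approaches, maximum likelihood and the method of moments:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_shape, n, reps = 2.0, 100, 200   # invented simulation settings

mle_est, mom_est = [], []
for _ in range(reps):
    x = rng.gamma(true_shape, 1.0, size=n)
    a_mle, _, _ = stats.gamma.fit(x, floc=0)   # maximum likelihood (loc fixed)
    mle_est.append(a_mle)
    m, v = x.mean(), x.var()
    mom_est.append(m * m / v)                  # method of moments: a = mean^2/var

results = {}
for name, est in (("MLE", np.array(mle_est)), ("moments", np.array(mom_est))):
    results[name] = (est.mean() - true_shape, np.mean((est - true_shape) ** 2))
    print(f"{name}: bias={results[name][0]:+.3f}, MSE={results[name][1]:.4f}")
```

Ranking estimators by such bias/MSE tables across parameter combinations is exactly the partial/overall ranking exercise the abstract describes.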


Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. V59-V68 ◽  
Author(s):  
Jonathan A. Edgar ◽  
Mirko van der Baan

Well logs are often used for the estimation of seismic wavelets. The phase is obtained by forcing a well-derived synthetic seismogram to match the seismic data, thus assuming the well log provides ground truth. However, well logs are not always available and can predict different phase corrections at nearby locations. Thus, a wavelet-estimation method that can reliably predict phase from the seismic data alone is required. Three statistical wavelet-estimation techniques were tested against the deterministic method of seismic-to-well ties. How the choice of method influences the estimated wavelet phase was explored, with the aim of finding a statistical method that consistently predicts a phase in agreement with well logs. It was shown that the statistical method of kurtosis maximization by constant phase rotation is consistently able to extract a phase in agreement with seismic-to-well ties. A statistical method based on a modified mutual-information-rate criterion was demonstrated to provide frequency-dependent phase wavelets where the deterministic method could not. Time-varying statistical wavelets were also estimated with good results, which is a challenge for deterministic approaches because of the short logging sequence. It was concluded that statistical techniques can be used as quality-control tools for the deterministic methods, as a way of extrapolating phase away from wells, or as standalone tools in the absence of wells.
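Kurtosis maximization by constant phase rotation is straightforward to sketch: rotate the trace through a range of constant phase angles using the analytic signal, and keep the angle that maximizes kurtosis. A hedged toy example with a synthetic Ricker-wavelet trace (all parameters invented):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

rng = np.random.default_rng(3)

def ricker(n=101, f=25.0, dt=0.002):
    """Zero-phase Ricker wavelet (invented parameters)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Sparse, non-Gaussian reflectivity convolved with the wavelet
refl = np.zeros(4000)
spikes = rng.choice(4000, size=40, replace=False)
refl[spikes] = rng.laplace(size=40)
trace0 = np.convolve(refl, ricker(), mode="same")

# Apply a known 60-degree constant phase rotation via the analytic signal
true_phase = np.deg2rad(60.0)
trace = np.real(hilbert(trace0) * np.exp(1j * true_phase))

# Scan constant phase rotations; the rotation that undoes the wavelet phase
# restores the spiky character of the trace and hence maximizes kurtosis
phis = np.deg2rad(np.arange(-90, 91))
analytic = hilbert(trace)
kurt = [kurtosis(np.real(analytic * np.exp(1j * p))) for p in phis]
best = float(np.rad2deg(phis[int(np.argmax(kurt))]))
print(f"kurtosis-maximizing rotation: {best:.0f} deg (expected about -60)")
```

Note the inherent 180-degree ambiguity: rotating by an extra 180 degrees only flips polarity, which kurtosis cannot distinguish, so the scan here is restricted to (-90, 90].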


2018 ◽  
Vol 10 (8) ◽  
pp. 2837 ◽  
Author(s):  
Dereje Birhanu ◽  
Hyeonjun Kim ◽  
Cheolhee Jang ◽  
Sanghyun Park

In this study, five hydrological models of increasing complexity and 12 Potential Evapotranspiration (PET) estimation methods with different data requirements were applied in order to assess their effect on model performance, optimized parameters, and robustness. The models were applied over a set of 10 catchments located in South Korea. The Shuffled Complex Evolution-University of Arizona (SCE-UA) algorithm was implemented to calibrate the hydrological models for each PET input while considering similar objective functions. The hydrological models' performance was satisfactory for each PET input in the calibration and validation periods for all of the tested catchments. The five hydrological models' performance was found to be insensitive to the 12 PET inputs because of the SCE-UA algorithm's efficiency in optimizing model parameters. However, the parameters responsible for transforming PET to actual evapotranspiration were sensitive and significantly affected by the PET complexity. The values of the three statistical indicators also agreed with the computed model evaluation index values. Similar behavior and Dimensionless Bias were observed in all of the tested catchments. For the five hydrological models, a lack of robustness and higher Dimensionless Bias were seen for high and low flow, as well as for the Hamon PET input. The results indicated that the complexity of the hydrological models' structure and of the PET estimation methods did not necessarily enhance model performance and robustness. Model performance and robustness were found to depend mainly on extreme hydrological conditions, including high and low flow, rather than on complexity; the simplest hydrological model and PET estimation method could perform better if reliable hydro-meteorological datasets are applied.


2019 ◽  
Vol 80 (3) ◽  
pp. 421-445 ◽  
Author(s):  
Dexin Shi ◽  
Alberto Maydeu-Olivares

We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). We considered different types and levels of misspecification in factor analysis models: misspecified dimensionality, omitting cross-loadings, and ignoring residual correlations. Estimation methods had substantial impacts on the RMSEA and CFI so that different cutoff values need to be employed for different estimators. In contrast, SRMR is robust to the method used to estimate the model parameters. The same criterion can be applied at the population level when using the SRMR to evaluate model fit, regardless of the choice of estimation method.
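The robustness of the SRMR is easier to appreciate with its formula in hand: it is the root mean square of the residuals between observed and model-implied correlations, which does not depend on how the parameters were estimated once the implied matrix is fixed. A hedged sketch of one common definition (conventions for which elements to average vary):

```python
import numpy as np

def srmr(S, Sigma):
    """Root mean square of residual correlations between an observed
    matrix S and a model-implied matrix Sigma (both in covariance metric)."""
    d_obs = np.sqrt(np.diag(S))
    d_mod = np.sqrt(np.diag(Sigma))
    R_obs = S / np.outer(d_obs, d_obs)
    R_mod = Sigma / np.outer(d_mod, d_mod)
    resid = R_obs - R_mod
    idx = np.tril_indices_from(resid)   # unique elements incl. diagonal
    return float(np.sqrt(np.mean(resid[idx] ** 2)))

# Toy one-factor model: implied correlations vs a perturbed "observed" matrix
lam = np.array([0.8, 0.7, 0.6])                      # hypothetical loadings
Sigma = np.outer(lam, lam) + np.diag(1.0 - lam**2)   # implied correlation matrix
S = Sigma.copy()
S[0, 1] = S[1, 0] = 0.75                             # one misfit correlation
v = srmr(S, Sigma)
print(f"SRMR = {v:.4f}")
```

Because the residuals are in a standardized metric, the same cutoff can be applied regardless of whether Sigma came from ML, ULS, or DWLS estimation.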


2019 ◽  
Vol 26 (10) ◽  
pp. 1046-1055
Author(s):  
Erich Kummerfeld ◽  
Alexander Rix ◽  
Justin J Anker ◽  
Matt G Kushner

Objective: The objective of this study was to assess the potential of combining graph learning methods with latent variable estimation methods for mining clinically useful information from observational clinical data sets.

Materials and Methods: The data set contained self-reported measures of psychopathology symptoms from a clinical sample receiving treatment for alcohol use disorder. We used the traditional graph learning methods Graphical Least Absolute Shrinkage and Selection Operator and Friedman's hill climbing algorithm; the traditional latent variable estimation method factor analysis; the recently developed graph learning method Greedy Fast Causal Inference; and the recently developed latent variable estimation method Find One Factor Clusters. Methods were assessed qualitatively by the content of their findings.

Results: Recently developed graphical methods identified potential latent variables (ie, not represented in the model) influencing particular scores. Recently developed latent effect estimation methods identified plausible cross-score loadings that were not found with factor analysis. A graphical analysis of individual items identified a mistake in wording on 1 questionnaire and provided further evidence that certain scores are not reflective of indirectly measured common causes.

Discussion and Conclusion: Our findings suggest that a combination of Greedy Fast Causal Inference and Find One Factor Clusters can enhance the evidence-based information yield from psychopathological constructs and questionnaires. Traditional methods provided some of the same information but missed other important findings. These conclusions point the way toward more informative interrogations of existing and future data sets than are commonly employed at present.
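As a hedged illustration of the traditional graphical side of this toolbox (not the study's actual data, and not the GFCI/FOFC methods), scikit-learn's GraphicalLasso recovers a sparse conditional-independence graph from synthetic "symptom scores" with a known chain structure:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)

# Synthetic scores with a chain dependence x0 -> x1 -> x2; x3 is independent
n = 400
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2, x3])

model = GraphicalLasso(alpha=0.1).fit(X)   # alpha is an invented penalty level
prec = model.precision_

# Off-diagonal zeros in the precision matrix indicate conditional independence;
# surviving entries are the edges of the estimated undirected graph
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(prec[i, j]) > 1e-3]
print("estimated edges:", edges)
```

The chain structure means x0 and x2 are correlated marginally but conditionally independent given x1, which is exactly the distinction a precision-matrix (graphical) method captures and a plain correlation matrix does not.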


Author(s):  
Hebert Azevedo-Sa ◽  
Suresh Kumaar Jayaraman ◽  
Connor T. Esterwood ◽  
X. Jessie Yang ◽  
Lionel P. Robert ◽  
...  

Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n = 80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.


Energies ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 1349 ◽  
Author(s):  
Qiaohua Fang ◽  
Xuezhe Wei ◽  
Tianyi Lu ◽  
Haifeng Dai ◽  
Jiangong Zhu

The state of health estimation for lithium-ion batteries is a key function of the battery management system. Unlike traditional state of health estimation methods under dynamic conditions, in this research the relaxation process is studied and utilized to estimate the state of health. A reasonable and accurate voltage relaxation model is established based on the linear relationship between the time coefficient and open-circuit time for a Li1(NiCoAl)1O2-Li1(NiCoMn)1O2/graphite battery. The accuracy and effectiveness of the model are verified under different states of charge and states of health. Through systematic experiments under different states of charge and states of health, it is found that the model parameters monotonically increase with the aging of the battery. Three different capacity estimation methods are proposed based on the relationship between model parameters and residual capacity, namely the α-based, β-based, and parameter-fusion methods. All three methods are validated and achieve high accuracy. The results indicate that the capacity estimation error under most of the aging states is less than 1%. The largest error drops from 3% under the α-based method to 1.8% under the parameter-fusion method.
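The parameter-to-capacity idea can be sketched generically: calibrate a monotone (here assumed linear) relationship between a relaxation-model parameter and capacity, then invert it for a new measurement. All numbers below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration data: a relaxation-model parameter ("alpha")
# observed to grow monotonically as the cell ages and capacity fades
capacity = np.linspace(2.0, 1.6, 12)                       # Ah, fresh to aged
alpha = 0.5 + 1.5 * (2.0 - capacity) + rng.normal(scale=0.01, size=12)

# Calibrate alpha -> capacity with a linear fit (the "alpha-based" idea)
slope, intercept = np.polyfit(alpha, capacity, 1)

# Estimate the capacity of a new cell from its measured alpha
alpha_new = 0.5 + 1.5 * (2.0 - 1.75)   # reading that corresponds to 1.75 Ah
cap_est = slope * alpha_new + intercept
print(f"estimated capacity: {cap_est:.3f} Ah")
```

A parameter-fusion variant would combine estimates from two such calibrated parameters, e.g., by inverse-variance weighting.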


CAUCHY ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 95
Author(s):  
Ovi Delviyanti Saputri ◽  
Ferra Yanuar ◽  
Dodi Devianto

<span lang="DE">Quantile regression is a regression method with the approach of separating or dividing data into certain quantiles by minimizing the number of absolute values from asymmetrical errors to overcome unfulfilled assumptions, including the presence of autocorrelation. The resulting model parameters are tested for accuracy using the bootstrap method. The bootstrap method is a parameter estimation method by re-sampling from the original sample as much as R replication. The bootstrap trust interval was then used as a test consistency test algorithm constructed on the estimator by the quantile regression method. And test the uncommon quantile regression method with bootstrap method. The data obtained in this test is data replication 10 times. The biasness is calculated from the difference between the quantile estimate and bootstrap estimation. Quantile estimation methods are said to be unbiased if the standard deviation bias is less than the standard bootstrap deviation. This study proves that the estimated value with quantile regression is within the bootstrap percentile confidence interval and proves that 10 times replication produces a better estimation value compared to other replication measures. Quantile regression method in this study is also able to produce unbiased parameter estimation values.</span>


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Mohammed M. A. Almazah ◽  
Muhammad Ismail

Several studies have considered various scheduling methods and reliability functions to determine the optimum maintenance time. These methods and functions correspond to the lowest cost when the maximum likelihood estimator is used to evaluate the model parameters. This paper aims to estimate the parameters of the two-parameter Weibull distribution (α, β). The maximum likelihood estimation method, the modified linear exponential loss function, and the Wyatt-based regression method are used for the estimation of the parameters. The minimum mean square error (MSE) criterion is used to evaluate the relative efficiency of the estimators. The different parameter estimation methods are compared, and their efficiency is assessed both mathematically and experimentally. A simulation study is conducted to compare sample sizes (10, 50, 100, 150) based on the MSE. It is concluded that the maximum likelihood method is the most efficient for all sample sizes used in the research, because it achieved the lowest MSE compared with the other methods.
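The maximum likelihood part of this comparison can be sketched with SciPy's built-in Weibull fit, estimating the shape parameter over the same sample sizes and scoring by MSE. The true parameter values below are invented, and the loss-function and regression estimators are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha_true, beta_true = 2.0, 1.5   # invented shape and scale of Weibull(α, β)

# Monte Carlo: estimate the shape by maximum likelihood, score by MSE
mses = {}
for n in (10, 50, 100, 150):
    est = []
    for _ in range(200):
        x = stats.weibull_min.rvs(alpha_true, scale=beta_true,
                                  size=n, random_state=rng)
        a_hat, _, _ = stats.weibull_min.fit(x, floc=0)  # MLE with loc fixed at 0
        est.append(a_hat)
    mses[n] = float(np.mean((np.array(est) - alpha_true) ** 2))
    print(f"n={n:3d}: MSE of shape MLE = {mses[n]:.4f}")
```

The MSE shrinks as n grows, which is the pattern the paper's ranking across sample sizes relies on; competing estimators would be added to the inner loop and ranked by the same criterion.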

