A new framework for selection of representative samples for special core analysis

2020 ◽  
Vol 5 (3) ◽  
pp. 210-226 ◽  
Author(s):  
Abouzar Mirzaei-Paiaman ◽  
Seyed Reza Asadolahpour ◽  
Hadi Saboorian-Jooybari ◽  
Zhangxin Chen ◽  
Mehdi Ostadhassan
2006 ◽  
Vol 9 (06) ◽  
pp. 647-653 ◽  
Author(s):  
Shameem Siddiqui ◽  
Taha M. Okasha ◽  
James J. Funk ◽  
Ahmad M. Al-Harbi

Summary
The data generated from special-core-analysis (SCAL) tests have a significant impact on the development of reservoir engineering models. This paper describes some of the criteria and tests required for the selection of representative samples for use in SCAL tests. The proposed technique ensures that high-quality core plugs are chosen to represent appropriate flow compartments or facies within the reservoir. Visual inspection and, sometimes, computerized tomography (CT) images are the main tools used for assessing and selecting the core plugs for SCAL studies. Although it is possible to measure the brine permeability (kb), there is no direct method for determining the porosity (ϕ) of SCAL plugs without compromising their wettability. Other selection methods involve using the conventional-core-analysis data (k and ϕ) on "sister plugs" as a general indicator of the properties of the SCAL samples. A selective technique ideally suited for preserved or "native-state" samples has been developed to identify reservoir intervals with similar porosity/permeability relationships. It uses a combination of wireline log, gamma scan, quantitative CT, and preserved-state brine-permeability data. The technique uses these data to calculate appropriate depth-shifted reservoir-quality index (RQI) and flow-zone indicator (FZI) data, which are then used to select representative plug samples from each reservoir compartment. As an example application, approximately 400 SCAL plugs from an Upper Jurassic carbonate reservoir in the Middle East were tested using the selection criteria. This paper describes the step-by-step procedure to select representative plugs and the criteria for combining the plugs for meaningful SCAL tests.

Introduction
The main goal of coring is to retrieve core samples from a well to get the maximum amount of information about the reservoir.
Core samples collected provide important petrophysical, petrographic, paleontological, sedimentological, and diagenetic information. From a petrophysical point of view, the whole-core and plug samples typically undergo the following tests: CT scan, gamma scan, conventional tests, SCAL tests, rock mechanics, and other special tests. The data are combined to get information on heterogeneity, depth shift between core and log data, whole-core and plug porosity and permeability, porosity/permeability relationship, fluid content (Dean-Stark), RQI, FZI, wettability, relative permeability, capillary pressure, stress/strain relationship, and compressibility. The petrophysical data generated in this way play important roles in reservoir characterization and modeling, log calibration, reservoir simulation, and overall field production and development planning. Among all the petrophysical tests, the SCAL tests (which include wettability, capillary-pressure, and relative-permeability determination) are critical and time-consuming. A reservoir-condition relative permeability test can sometimes run for several months when mimicking the actual flow mechanisms taking place in the field. Therefore, it is very important to design these tests properly and, in particular, to select samples that ensure meaningful results. In short, the samples must be "representative samples" that capture the overall variability within the reservoir in a scientifically defensible way. Unfortunately, sample selection, the most important aspect of all SCAL procedures, is among the least discussed. According to Corbett et al. (2001), API's RP40 (Recommended Practices for Core Analysis) makes very little reference to sampling; similarly, textbooks on petrophysics do not have sections on sampling. The Corbett et al. paper reviewed the statistical, petrophysical, and geological issues in sampling and proposed a series of considerations.
This has led to the development of a method (Mohammed and Corbett 2002) using hydraulic units in a relatively simple clastic reservoir. In this paper, some issues related to sample-selection criteria (with special focus on carbonate reservoirs) will be discussed. A large data set of conventional, whole-core, and special-core analyses on a well in an Upper Jurassic carbonate reservoir was used to characterize representative samples for SCAL tests.
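The RQI/FZI screening step described above can be sketched numerically. The formulas below are the standard hydraulic-flow-unit definitions from the flow-zone-indicator literature (RQI = 0.0314 * sqrt(k/phi), phi_z = phi/(1 - phi), FZI = RQI/phi_z, with k in millidarcies and porosity as a fraction); the plug values are made-up illustrations, not data from the paper:

```python
import math

def rqi(k_md, phi):
    """Reservoir quality index (micrometres): RQI = 0.0314 * sqrt(k / phi),
    with permeability k in millidarcies and porosity phi as a fraction."""
    return 0.0314 * math.sqrt(k_md / phi)

def fzi(k_md, phi):
    """Flow-zone indicator: FZI = RQI / phi_z, where phi_z = phi / (1 - phi)."""
    return rqi(k_md, phi) / (phi / (1.0 - phi))

# Plugs with similar FZI fall on the same porosity/permeability trend and can
# be grouped into one flow unit before picking SCAL candidates from each group.
plugs = [("A", 120.0, 0.18), ("B", 95.0, 0.17), ("C", 4.0, 0.12)]  # (id, k in mD, phi)
for name, k, phi in plugs:
    print(name, round(fzi(k, phi), 3))
```

Grouping plugs by FZI (rather than by raw k or ϕ alone) is what lets each SCAL candidate stand in for a whole flow compartment.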


2015 ◽  
Vol 4 (2) ◽  
pp. 44-52
Author(s):  
Novia Rita ◽  
Tomi Erfando

Before a reservoir model is used, history matching is performed to reconcile the model with actual reservoir conditions. One of the parameters that must be adjusted is relative permeability. Reconstructing the relative permeability values requires SCAL (Special Core Analysis) data from core samples. The first step of the reconstruction is to normalize the relative permeability (kr) and water saturation (Sw) data from the SCAL data of three core samples. After normalization, the relative permeability data are denormalized and grouped by rock type. After history matching with a black-oil simulator, these denormalized data did not yet match actual conditions, so the Corey equation was used to reconstruct the relative permeability curves. From that equation, the kro and krw values for rock type 1 were 0.25 and 0.09, and the kro and krw values for rock type 2 were 0.4 and 0.2, respectively. The relative permeability values from the Corey equation were then used for history matching, and the result matched the actual conditions. Based on the simulation, actual oil production was 1,465,650 bbl while simulated production was 1,499,000 bbl, a difference of 1.14%, which can be considered a match because the difference is below 5%.
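As a rough illustration of the Corey-based curve reconstruction described above, the sketch below uses the rock-type-1 endpoints reported in the abstract (kro = 0.25, krw = 0.09); the connate-water saturation, residual-oil saturation, and Corey exponents are illustrative assumptions, since the abstract does not report them:

```python
def corey_krw(sw, swc=0.2, sor=0.25, krw_end=0.09, nw=2.0):
    """Water relative permeability from the Corey model:
    krw = krw_end * Swn**nw, with Swn = (Sw - Swc) / (1 - Swc - Sor).
    Endpoint krw_end is the rock-type-1 value from the abstract; Swc, Sor,
    and the exponent nw are assumed for illustration."""
    swn = (sw - swc) / (1.0 - swc - sor)
    swn = min(max(swn, 0.0), 1.0)  # clamp normalized saturation to [0, 1]
    return krw_end * swn ** nw

def corey_kro(sw, swc=0.2, sor=0.25, kro_end=0.25, no=2.0):
    """Oil relative permeability: kro = kro_end * (1 - Swn)**no."""
    swn = (sw - swc) / (1.0 - swc - sor)
    swn = min(max(swn, 0.0), 1.0)
    return kro_end * (1.0 - swn) ** no

# Tabulate a curve between the endpoints (Sw from Swc to 1 - Sor)
for sw in (0.2, 0.35, 0.5, 0.65, 0.75):
    print(sw, round(corey_krw(sw), 4), round(corey_kro(sw), 4))
```

In the workflow above, curves like these, one set per rock type, replace the denormalized SCAL data that failed to history-match.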


1981 ◽  
Vol 29 ◽  
pp. 1-9
Author(s):  
George J. Graham

The purpose of this course is to introduce a new framework linking the humanities to public policy analysis as pursued in the government and the academy. Current efforts to link particular contributions from the humanities to problems of public policy choice are often narrow, either in their perspective on the humanities or in their selection of the possible means of influencing policy choice. Sometimes a single text from one of the humanities disciplines is selected and applied to a particular issue. At other times, arguments about the ethical dimensions of a single policy issue are pursued with a single, or sometimes no, point of access to the policy process in mind.


Author(s):  
Sayan Surya Shaw ◽  
Shameem Ahmed ◽  
Samir Malakar ◽  
Laura Garcia-Hernandez ◽  
Ajith Abraham ◽  
...  

Abstract
Many real-life datasets are imbalanced in nature: the number of samples in one class (the minority class) is exceptionally small compared with the number in the other class (the majority class). Hence, if we fit such a dataset directly to a standard classifier, the classifier often overlooks the minority-class samples while estimating the class-separating hyperplane(s) and, as a result, misclassifies them. Over the years, many researchers have approached this problem in different ways; however, the selection of truly representative samples from the majority class is still considered an open research problem. A better solution would be helpful in many applications, such as fraud detection, disease prediction, and text classification. Recent studies also show that it is not enough to analyze the disproportion between classes; other difficulties rooted in the nature of the data must be addressed as well, which calls for a more flexible, self-adaptable, computationally efficient, real-time method for selecting majority-class samples without losing much important data. Keeping this in mind, we propose a hybrid model combining Particle Swarm Optimization (PSO), a popular swarm-intelligence-based meta-heuristic algorithm, with the Ring Theory (RT)-based Evolutionary Algorithm (RTEA), a recently proposed physics-based meta-heuristic algorithm. We name the algorithm RT-based PSO, or RTPSO for short. RTPSO can select the most representative samples from the majority class, as it takes advantage of the efficient exploration and exploitation phases of its parent algorithms to strengthen the search process. We use an AdaBoost classifier to observe the final classification results of our model. The effectiveness of the proposed method has been evaluated on 15 standard real-life datasets with low to extreme imbalance ratios.
The performance of RTPSO has been compared with PSO, RTEA, and other standard undersampling methods. The obtained results demonstrate the superiority of RTPSO over the state-of-the-art class-imbalance solvers considered here for comparison. The source code of this work is available at https://github.com/Sayansurya/RTPSO_Class_imbalance.
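The hybrid RTPSO algorithm itself is not reproduced here. The sketch below is only a generic binary-PSO undersampler, an assumed stand-in that shows the general idea of searching for a majority-class subset; the fitness used here (keep the subset's centroid close to the full majority centroid while approaching the minority-class size) is an illustrative proxy, not the paper's objective:

```python
import math
import random

def binary_pso_undersample(majority, target_size, n_particles=20, iters=50, seed=0):
    """Generic binary PSO: search for a subset of majority-class points of
    roughly `target_size` whose centroid stays close to the full majority
    centroid (a crude proxy for representativeness). Returns selected indices."""
    rng = random.Random(seed)
    n, dim = len(majority), len(majority[0])
    full_centroid = [sum(p[d] for p in majority) / n for d in range(dim)]

    def fitness(mask):
        chosen = [p for p, keep in zip(majority, mask) if keep]
        if not chosen:
            return float("inf")
        centroid = [sum(p[d] for p in chosen) / len(chosen) for d in range(dim)]
        # distance to the full centroid plus a penalty for missing the target size
        return math.dist(centroid, full_centroid) + abs(len(chosen) - target_size)

    # particles are bit masks over the majority class; velocities are real-valued
    particles = [[rng.random() < target_size / n for _ in range(n)]
                 for _ in range(n_particles)]
    velocity = [[0.0] * n for _ in range(n_particles)]
    pbest = [list(p) for p in particles]
    pbest_fit = [fitness(p) for p in particles]
    best = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = list(pbest[best]), pbest_fit[best]

    for _ in range(iters):
        for i, p in enumerate(particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                velocity[i][d] = (0.7 * velocity[i][d]
                                  + 1.4 * r1 * (pbest[i][d] - p[d])
                                  + 1.4 * r2 * (gbest[d] - p[d]))
                # sigmoid transfer turns the real-valued velocity into a bit
                p[d] = rng.random() < 1.0 / (1.0 + math.exp(-velocity[i][d]))
            fit = fitness(p)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = list(p), fit
                if fit < gbest_fit:
                    gbest, gbest_fit = list(p), fit
    return [idx for idx, keep in enumerate(gbest) if keep]
```

In the paper's actual pipeline, the search is instead guided by RTEA operators and the selected subset is evaluated through AdaBoost classification.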


Author(s):  
Christophe Bastien ◽  
Alexander Diederich ◽  
Jesper Christensen ◽  
Shahab Ghaleb

With the increasing use of Computer Aided Engineering, it has become vital to be able to evaluate the accuracy of numerical models. This research addresses the problem of selecting the most accurate and relevant correlation solution against a set of corridor variations. Specific methods such as CORA, widely accepted in industry, were developed to objectively evaluate the correlation between monotonic functions, while the Minimum Area Discrepancy Method (MADM) is the only method that addresses the correlation of non-injective mathematical variations, usually related to force- or acceleration-versus-displacement problems. Often it is not possible to objectively differentiate between the various solutions proposed by CORA, a gap this paper proposes to fill. This research is original in that it proposes a new correlation-optimisation framework that can select the best CORA solution by including MADM as a subsequent process. The methods are rigorous: an industry-standard driver-airbag computer model was used, virtual test corridors were built, and the relationship between different CORA and MADM ratings was compared across 100 Latin hypercube samples. For the same CORA value of '1' (perfect correlation), MADM was able to objectively differentiate between 13 of them and identify the best correlation possible. The paper recommends the MADM settings n = 1, m = 2 or n = 3, m = 2 for a congruent relationship with CORA. Because MADM is performed subsequently, this new framework can be implemented in existing industrial processes and provide automotive manufacturers and Original Equipment Manufacturers (OEMs) with a new tool for generating more accurate computer models.
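MADM's exact rating (and the role of its n and m settings) follows the paper; the sketch below only illustrates its core ingredient as described here, the area of discrepancy between two curves in, say, the force-versus-displacement plane, using simple trapezoidal integration (the function name and interface are assumptions):

```python
def area_discrepancy(curve_a, curve_b):
    """Approximate the area between two curves sampled at the same x locations,
    by trapezoidal integration of |y_a - y_b|. Each curve is a list of (x, y)
    points in increasing x; a smaller area means a closer match."""
    area = 0.0
    for (x0, ya0), (x1, ya1), (_, yb0), (_, yb1) in zip(
            curve_a, curve_a[1:], curve_b, curve_b[1:]):
        d0, d1 = abs(ya0 - yb0), abs(ya1 - yb1)  # gap at each end of the segment
        area += 0.5 * (d0 + d1) * (x1 - x0)
    return area

# Two CORA-equivalent simulation curves can still differ in this area measure,
# which is how a subsequent MADM pass breaks ties between them.
test_curve = [(0.0, 0.0), (0.5, 0.8), (1.0, 1.0)]
sim_curve = [(0.0, 0.0), (0.5, 0.6), (1.0, 1.0)]
print(area_discrepancy(test_curve, sim_curve))
```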


2018 ◽  
Vol 26 (2) ◽  
pp. 87-94 ◽  
Author(s):  
Zhonghai He ◽  
Zhenhe Ma ◽  
Mengchao Li ◽  
Yang Zhou

For spectroscopic measurements, representative samples are needed when building a calibration model to guarantee accurate predictions. The most widely used selection method is the Kennard-Stone method, which can be applied before any reference measurement is made. In this paper, a method termed semi-supervised selection is presented to determine whether a sample should be added to the calibration set. The selection procedure has two steps. First, part of the population of samples is selected using the Kennard-Stone method, and their concentrations are measured. Second, another part of the population is selected based on the scalar-value distribution of the net analyte signal: if the net analyte signal of a sample is distinctive compared with the existing net-analyte-signal values, the sample is added to the calibration set, and the analyte of interest in that sample is then measured so that it can serve as a calibration sample. A validation test shows that the presented method is more efficient than random selection and Kennard-Stone selection, saving both the time and the money spent on reference measurements.
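The first selection step can be illustrated with a minimal Kennard-Stone implementation (the helper name and the toy 2-D "spectra scores" are assumptions for illustration):

```python
import math

def kennard_stone(samples, k):
    """Kennard-Stone selection: seed with the two mutually most distant
    samples, then repeatedly add the candidate whose nearest already-selected
    neighbour is farthest away (the max-min criterion)."""
    n = len(samples)
    assert 2 <= k <= n
    # seed: the most distant pair in the whole population
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: math.dist(samples[ij[0]], samples[ij[1]]))
    selected = [i0, j0]
    remaining = set(range(n)) - set(selected)
    while len(selected) < k:
        nxt = max(remaining,
                  key=lambda r: min(math.dist(samples[r], samples[s])
                                    for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

# pick 3 calibration samples out of five 2-D score vectors
pts = [(0.0, 0.0), (1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (1.0, 1.0)]
print(kennard_stone(pts, 3))
```

Because the criterion needs only inter-sample distances (e.g. between spectra), the selection can indeed run before any reference concentrations are measured, which is the property the paper builds on.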

