The Choke as a Brainbox for Smart Wellhead Control

2021 ◽  
Vol 6 (1) ◽  
pp. 114-118
Author(s):  
Stanley I. Okafor ◽  
Azubuike H. Amadi ◽  
Mobolaji A. Abegunde

This project uses production data to generate well-specific correlations for gas-liquid ratio (GLR), basic sediment and water (BSW) and sand concentration, which are then used for predictions. Software has been developed to implement a smart control algorithm that triggers a bean-up or bean-down operation depending on the current flowing conditions and constraints. The Excel programming environment was used to write code that continuously takes in measured data points, models the behavior of the individual data sets against bean size, and adjusts the choke if any parameter of interest rises above a predetermined cut-off. The software is also equipped with an inverse-matrix solving algorithm that determines the choke performance constants for any set of initialization data. For a set of data supplied from field X, the choke performance constants A, B, C, D and E were found to be 10, 0.546, 0.0, 1.89 and 1.0, respectively. In addition, data from subsequent production operations were entered, and the software controlled the choke size so that production stayed below the set constraints of 500, 80 and 10 (in field units) for GLR, BSW and sand concentration, respectively. It can therefore be concluded that the software can effectively maintain the production of unwanted well effluents below their cut-offs, thereby improving oil production and the overall Net Present Value (NPV) of a project.
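As an illustration of the workflow just described, the hedged sketch below shows how five choke-performance constants could be recovered from five initialization points by an inverse-matrix solve, and how a simple cut-off check could drive a bean-up/bean-down decision. The linear-in-constants correlation form, the function names and all numbers are assumptions made for illustration; the project's actual well-specific correlations are not given in the abstract.

```python
# Hypothetical sketch: recover five choke-performance constants (A..E) from
# five initialization points by solving a linear system M @ k = y.
# The correlation form (linear in the constants) is an assumption made for
# illustration; the project's actual well-specific correlations may differ.
import numpy as np

def solve_choke_constants(bean_sizes, glr, bsw, sand, rates):
    """Each row pairs one measured operating point with its observed rate."""
    M = np.column_stack([
        np.ones_like(bean_sizes),  # constant term        -> A
        bean_sizes,                # choke (bean) size     -> B
        glr,                       # gas-liquid ratio      -> C
        bsw,                       # basic sediment/water  -> D
        sand,                      # sand concentration    -> E
    ])
    # Inverse-matrix style solve; np.linalg.solve is numerically preferable
    # to forming an explicit inverse.
    return np.linalg.solve(M, rates)

def bean_adjustment(glr, bsw, sand, cutoffs=(500.0, 80.0, 10.0)):
    """Bean down if any effluent parameter exceeds its cut-off, else bean up."""
    exceeded = glr > cutoffs[0] or bsw > cutoffs[1] or sand > cutoffs[2]
    return "bean down" if exceeded else "bean up"

# Made-up initialization data for five operating points:
constants = solve_choke_constants(
    bean_sizes=np.array([16.0, 20.0, 24.0, 28.0, 32.0]),
    glr=np.array([410.0, 455.0, 468.0, 492.0, 515.0]),
    bsw=np.array([52.0, 61.0, 58.0, 72.0, 69.0]),
    sand=np.array([3.5, 5.2, 4.8, 7.1, 6.4]),
    rates=np.array([880.0, 1120.0, 1240.0, 1360.0, 1490.0]),
)
print(constants)                          # -> estimates of A, B, C, D, E
print(bean_adjustment(520.0, 75.0, 8.0))  # GLR above 500 -> "bean down"
```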

2014 ◽  
Vol 70 (a1) ◽  
pp. C954-C954 ◽  
Author(s):  
Uwe König ◽  
Thomas Degen ◽  
Detlef Beckers

In XRPD we usually pay a great deal of attention to describing profile shapes accurately. We do so in order to extract or predict information from the full pattern using physical models and fitting techniques. Sometimes this approach is stretched to its limits, typically when no realistic physical model is available, or when the model is either too complex or does not fit reality. In such cases there is one very elegant way out: multivariate statistics and Partial Least-Squares Regression (PLSR). The technique is popular in spectroscopy as well as in a number of other fields such as the biosciences, proteomics and the social sciences. PLSR, as developed by Herman Wold [1] in the 1960s, can predict any defined property Y directly from the variability in a data matrix X. In XRPD the rows of the calibration data matrix are formed by the individual scans and the columns by all measured data points. PLSR is particularly well suited when the matrix of predictors has more variables than observations and when there is multi-collinearity among the X values. In effect, PLSR is a full-pattern approach that dismisses profile shapes entirely but still uses the complete information present in our XRPD data sets. We will show a number of cases where PLSR was used to easily and precisely predict properties such as crystallinity, and more, from XRPD data.
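Since the abstract describes PLSR as a full-pattern calibration (rows = scans, columns = measured points, with more variables than observations and collinear predictors), a minimal sketch with scikit-learn is given below. The data are random placeholders and the choice of five components is arbitrary; in practice it would be chosen by cross-validation.

```python
# Minimal sketch of the full-pattern PLSR idea, using scikit-learn.
# Scan intensities and crystallinity values are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_points = 40, 3000          # rows = scans, columns = measured data points
X = rng.random((n_scans, n_points))   # calibration patterns (placeholder data)
y = rng.random(n_scans) * 100.0       # property to predict, e.g. % crystallinity

pls = PLSRegression(n_components=5)   # handles p >> n and collinear columns
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))

pls.fit(X, y)
new_scan = rng.random((1, n_points))  # an unknown sample's full pattern
print(pls.predict(new_scan))          # predicted crystallinity
```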


2019 ◽  
Author(s):  
Ulrike Niemeier ◽  
Claudia Timmreck ◽  
Kirstin Krüger

Abstract. In 1963 a series of eruptions of Mt. Agung, Indonesia, produced the third-largest volcanic eruption of the 20th century and claimed about 1,900 lives. Two eruptions of this series injected SO2 into the stratosphere, a requirement for a long-lasting stratospheric sulfate layer. The first eruption, on 17 March, injected 4.7 Tg of SO2 into the stratosphere; the second, on 16 May, injected 2.3 Tg. In recent volcanic emission data sets these eruption phases are merged into a single large eruption phase for Mt. Agung in March 1963 with an injection of 7 Tg of SO2. The injected sulfur forms a sulfate layer in the stratosphere, and its evolution is non-linear, depending on the injection rate and the aerosol background conditions. We performed ensembles of two model experiments, one with a single eruption and one with two eruptions. The two smaller eruptions result in a lower burden, smaller particles and a 0.1 to 0.3 W m−2 (10–20 %) lower radiative forcing in the monthly mean global average compared with the single-eruption experiment. The differences are a consequence of slightly stronger meridional transport due to the different seasons of the eruptions, the lower injection height of the second eruption, and the resulting differences in aerosol evolution. The differences between the two experiments are significant but smaller than the variance of the individual ensemble means. Overall, the evolution of the volcanic clouds differs between the two-eruption and single-eruption cases. We conclude that there is no justification for using a single eruption only, and that both climatically relevant eruptions should be taken into account in future emission data sets.


2012 ◽  
Vol 38 (2) ◽  
pp. 57-69 ◽  
Author(s):  
Abdulghani Hasan ◽  
Petter Pilesjö ◽  
Andreas Persson

Global change and greenhouse gas (GHG) emission modelling depend on accurate wetness estimates for predicting, for example, methane emissions. This study quantifies how slope, drainage area and the topographic wetness index (TWI) vary with the resolution of digital elevation models (DEMs) for a flat peatland area. Six DEMs with spatial resolutions from 0.5 to 90 m were interpolated with four different search radii. The relationship between DEM accuracy and slope was tested. The LiDAR elevation data were divided into two data sets; the point density allowed an evaluation data set whose points lie no more than 10 mm from the cell centre points of the interpolation data set. The DEMs were evaluated using a quantile-quantile test and the normalized median absolute deviation, which showed that accuracy was independent of resolution when the same search radius was used. The accuracy of the estimated elevation for different slopes was tested using the 0.5 m DEM and showed a higher deviation from the evaluation data in steep areas. Slope estimates differed between resolutions by values exceeding 50%. Drainage areas were tested for three resolutions, with coinciding evaluation points. The model's ability to generate drainage area at each resolution was tested by pairwise comparison of three data subsets, which showed differences of more than 50% in 25% of the evaluated points. The results show that DEM resolution must be considered when slope, drainage area and TWI data are used in large-scale modelling.
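For readers less familiar with the index, the short sketch below computes the topographic wetness index TWI = ln(a / tan β) from a drainage-area grid and a slope grid, where a is the specific catchment area (drainage area per unit contour width). The flow-routing algorithm, the grid values and the minimum-slope guard are assumptions for illustration, not the study's actual implementation.

```python
# Illustrative TWI computation on placeholder grids.
import numpy as np

def twi(drainage_area, cell_size, slope_deg, min_slope_deg=0.01):
    """drainage_area in m^2 per cell, slope in degrees."""
    specific_catchment = drainage_area / cell_size                # a = A / contour width
    slope_rad = np.deg2rad(np.maximum(slope_deg, min_slope_deg))  # avoid tan(0)
    return np.log(specific_catchment / np.tan(slope_rad))

cell = 0.5                                    # e.g. the 0.5 m DEM resolution
area = np.array([[1.0, 4.0], [16.0, 64.0]])   # accumulated drainage area (m^2)
slope = np.array([[0.5, 1.0], [2.0, 5.0]])    # slope (degrees)
print(twi(area, cell, slope))
```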


2014 ◽  
Vol 21 (11) ◽  
pp. 1581-1588 ◽  
Author(s):  
Piotr Kardas ◽  
Mohammadreza Sadeghi ◽  
Fabian H. Weissbach ◽  
Tingting Chen ◽  
Lea Hedman ◽  
...  

Abstract. JC polyomavirus (JCPyV) can cause progressive multifocal leukoencephalopathy (PML), a debilitating, often fatal brain disease in immunocompromised patients. JCPyV-seropositive multiple sclerosis (MS) patients treated with natalizumab have a 2- to 10-fold increased risk of developing PML. Therefore, JCPyV serology has been recommended for PML risk stratification. However, different antibody tests may not be equivalent. To study intra- and interlaboratory variability, sera from 398 healthy blood donors were compared in 4 independent enzyme-linked immunoassay (ELISA) measurements generating >1,592 data points. Three data sets (Basel1, Basel2, and Basel3) used the same basic protocol but different JCPyV virus-like particle (VLP) preparations and introduced normalization to a reference serum. The data sets were also compared with an independent method using biotinylated VLPs (Helsinki1). VLP preadsorption reducing activity by ≥35% was used to identify seropositive sera. The results indicated that Basel1, Basel2, Basel3, and Helsinki1 were similar regarding overall data distribution (P = 0.79) and seroprevalence (58.0, 54.5, 54.8, and 53.5%, respectively; P = 0.95). However, intra-assay intralaboratory comparison yielded 3.7% to 12% discordant results, most of which were close to the cutoff (0.080 < optical density [OD] < 0.250) according to Bland-Altman analysis. Introduction of normalization improved overall performance and reduced discordance. The interlaboratory interassay comparison between Basel3 and Helsinki1 revealed only 15 discordant results, 14 (93%) of which were close to the cutoff. Preadsorption identified specificities of 99.44% and 97.78% and sensitivities of 99.54% and 95.87% for Basel3 and Helsinki1, respectively. Thus, normalization to a reference serum (preferably WHO-approved), duplicate testing, and preadsorption for samples around the cutoff may be necessary for reliable JCPyV serology and PML risk stratification.
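To make the decision rules concrete, the illustrative sketch below encodes the two criteria quoted above: a sample counts as seropositive when VLP preadsorption reduces its activity by ≥35%, and samples whose OD falls inside the 0.080–0.250 window are flagged for duplicate testing. Function names and the example OD values are hypothetical.

```python
# Illustrative decision rules based on the thresholds quoted in the abstract.
def normalize(od_sample, od_reference):
    """Normalize a sample OD to the reference serum run in the same assay."""
    return od_sample / od_reference

def is_seropositive(od_untreated, od_preadsorbed):
    """Seropositive if VLP preadsorption removes >= 35% of the activity."""
    reduction = (od_untreated - od_preadsorbed) / od_untreated
    return reduction >= 0.35

def needs_retest(od_untreated, low=0.080, high=0.250):
    """Samples near the cutoff benefit from duplicate testing / preadsorption."""
    return low < od_untreated < high

print(is_seropositive(0.40, 0.18))  # True: 55% reduction
print(needs_retest(0.12))           # True: within the borderline window
```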


2021 ◽  
pp. M56-2021-22
Author(s):  
Mirko Scheinert ◽  
Olga Engels ◽  
Ernst J. O. Schrama ◽  
Wouter van der Wal ◽  
Martin Horwath

Abstract. Geodynamic processes in Antarctica such as glacial isostatic adjustment (GIA) and post-seismic deformation are measured by geodetic observations such as GNSS and satellite gravimetry. GNSS measurements have comprised both continuous and episodic measurements since the mid-1990s. The estimated velocities typically reach an accuracy of 1 mm/a for horizontal and 2 mm/a for vertical velocities; however, the elastic deformation due to present-day ice-load change needs to be accounted for. Space gravimetry derives mass changes from small variations in the inter-satellite distance of a pair of satellites, starting with the GRACE satellite mission in 2002 and continuing with the GRACE-FO mission launched in 2018. The spatial resolution of the measurements is low (about 300 km), but the measurement error is homogeneous across Antarctica. The estimated trends contain signals from ice-mass change as well as local and global GIA. To combine the strengths of the individual data sets, statistical combinations of GNSS, GRACE and satellite altimetry data have been developed. These combinations rely on realistic error estimates and assumptions about snow density. Nevertheless, they capture signal that is missing from geodynamic forward models, such as the large uplift in the Amundsen Sea sector caused by the low-viscosity response to century-scale ice-mass changes.
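As a toy illustration of the combination principle mentioned above (weighting data sets by realistic error estimates), the sketch below merges two independent estimates of a bedrock uplift rate by inverse-variance weighting. The published combination approaches are considerably more elaborate; the numbers and function name are placeholders.

```python
# Toy inverse-variance combination of two uplift-rate estimates at one site.
import numpy as np

def inverse_variance_combine(estimates, sigmas):
    w = 1.0 / np.square(sigmas)
    combined = np.sum(w * estimates) / np.sum(w)
    combined_sigma = np.sqrt(1.0 / np.sum(w))
    return combined, combined_sigma

uplift_mm_per_yr = np.array([4.1, 5.0])   # e.g. GNSS and gravimetry/altimetry based
sigma_mm_per_yr = np.array([2.0, 1.0])    # realistic error estimates (placeholders)
print(inverse_variance_combine(uplift_mm_per_yr, sigma_mm_per_yr))
```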


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but because the values of categorical data are unordered, these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then aggregates these weights along the rows to obtain the support of every row. The data object having the largest support is chosen as the initial center, and further centers are chosen to be at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
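A compact sketch of the selection idea described in this abstract (not the authors' exact code) is given below: each row is scored by the summed frequency (support) of its attribute values, the highest-scoring row becomes the first center, and subsequent centers are the rows farthest (here by Hamming distance) from those already chosen.

```python
# Support-based initial center selection for categorical data (sketch).
from collections import Counter

def support_based_centers(data, k):
    n_attrs = len(data[0])
    # Per-attribute frequency ("support") of each unique value.
    freq = [Counter(row[j] for row in data) for j in range(n_attrs)]
    # Row support = sum of the supports of its attribute values.
    support = [sum(freq[j][row[j]] for j in range(n_attrs)) for row in data]
    centers = [data[max(range(len(data)), key=lambda i: support[i])]]
    while len(centers) < k:
        def min_dist(row):  # Hamming distance to the nearest chosen center
            return min(sum(a != b for a, b in zip(row, c)) for c in centers)
        centers.append(max(data, key=min_dist))
    return centers

data = [("a", "x"), ("a", "y"), ("b", "x"), ("c", "z"), ("a", "x")]
print(support_based_centers(data, k=2))  # [('a', 'x'), ('c', 'z')]
```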


2018 ◽  
Vol 8 (2) ◽  
pp. 377-406
Author(s):  
Almog Lahav ◽  
Ronen Talmon ◽  
Yuval Kluger

Abstract. A fundamental question in data analysis, machine learning and signal processing is how to compare data points. The choice of distance metric is particularly challenging for high-dimensional data sets, where the problem of meaningfulness is more prominent (e.g. the Euclidean distance between images). In this paper, we propose to exploit a property of high-dimensional data that is usually ignored: the structure stemming from the relationships between the coordinates. Specifically, we show that organizing similar coordinates in clusters can be exploited for the construction of the Mahalanobis distance between samples. When the observable samples are generated by a nonlinear transformation of hidden variables, the Mahalanobis distance allows the recovery of the Euclidean distances in the hidden space. We illustrate the advantage of our approach on a synthetic example where the discovery of clusters of correlated coordinates improves the estimation of the principal directions of the samples. Our method was applied to real gene expression data for lung adenocarcinomas (lung cancer). Using the proposed metric, we found a partition of subjects into risk groups with good separation between their Kaplan–Meier survival plots.
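A much-simplified sketch of the ingredients is shown below: coordinates are clustered by their correlation structure, cross-cluster covariances are discarded, and the pseudo-inverse of the resulting block covariance is used in a Mahalanobis distance between samples. This only illustrates the idea; the paper's construction of the metric is more involved.

```python
# Cluster coordinates, build a block covariance, use it in a Mahalanobis metric.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import mahalanobis, squareform

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))      # samples x coordinates (synthetic)
X[:, 6:] += X[:, :6]                    # make some coordinates correlated

# Cluster coordinates by their correlation structure.
corr_dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(corr_dist, 0.0)
labels = fcluster(linkage(squareform(corr_dist, checks=False), method="average"),
                  t=2, criterion="maxclust")
print("coordinate clusters:", labels)

# Keep only within-cluster covariances, then invert for the sample metric.
cov = np.cov(X, rowvar=False)
block_cov = np.where(labels[:, None] == labels[None, :], cov, 0.0)
VI = np.linalg.pinv(block_cov)
print(mahalanobis(X[0], X[1], VI))      # distance between two samples
```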


2019 ◽  
Author(s):  
Benedikt Ley ◽  
Komal Raj Rijal ◽  
Jutta Marfurt ◽  
Nabaraj Adhikari ◽  
Megha Banjara ◽  
...  

Abstract. Objective: Electronic data collection (EDC) has become a suitable alternative to paper-based data collection (PBDC) in biomedical research, even in resource-poor settings. During a survey in Nepal, data were collected using both systems and data entry errors were compared between the two methods. Collected data were checked for completeness, values outside realistic ranges, internal logic, and reasonable time frames for date variables. Variables were grouped into five categories and the number of discordant entries was compared between the two systems, overall and per variable category. Results: Data from 52 variables collected from 358 participants were available. Discrepancies between the two data sets were found in 12.6% of all entries (2,352/18,616). Differences between data points were identified in 18.0% (643/3,580) of continuous variables, 15.8% (113/716) of time variables, 13.0% (140/1,074) of date variables, 12.0% (86/716) of text variables, and 10.9% (1,370/12,530) of categorical variables. Overall, 64% (1,499/2,352) of all discrepancies were due to data omissions, and 76.6% (1,148/1,499) of the missing entries were among categorical data. Omissions in PBDC (n=1,002) were twice as frequent as in EDC (n=497, p<0.001). Data omissions, specifically among categorical variables, were identified as the greatest source of error. If designed accordingly, EDC can address this shortfall effectively.
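The sketch below illustrates the kind of cell-by-cell discordance check described here: the two data sets are aligned on participant ID, a missing value on one side counts as an omission, and discrepancies are tallied per variable. Column names, categories and values are purely illustrative.

```python
# Illustrative discordance check between EDC and PBDC records.
import pandas as pd

categories = {"age": "continuous", "visit_date": "date", "sex": "categorical"}

edc = pd.DataFrame({"id": [1, 2, 3], "age": [34, 51, 29],
                    "visit_date": ["2017-05-01", "2017-05-02", "2017-05-02"],
                    "sex": ["f", None, "m"]}).set_index("id")
pbdc = pd.DataFrame({"id": [1, 2, 3], "age": [34, 15, 29],
                     "visit_date": ["2017-05-01", "2017-05-02", "2017-05-03"],
                     "sex": ["f", "m", "m"]}).set_index("id")

# A cell is discordant if the values differ; a value missing on one side only
# counts as an omission and therefore as a discrepancy.
discordant = (edc != pbdc) & ~(edc.isna() & pbdc.isna())

print("overall discordance:", discordant.to_numpy().mean())
print(discordant.sum().rename(categories))  # discrepancies per variable category
```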


Author(s):  
Sean Moran ◽  
Bruce MacFadden ◽  
Michelle Barboza

Over the past several decades, thousands of stable isotope analyses (δ13C, δ18O) published in the peer-reviewed literature have advanced our understanding of the ecology and evolution of fossil mammals in Deep Time. These analyses have typically come from sampling vouchered museum specimens. However, the individual stable isotope data are typically disconnected from the vouchered specimens, and there is likewise no central repository for this information. This paper describes the status, potential, and value of integrating stable isotope data into museum fossil collections. A pilot study in the Vertebrate Paleontology collection at the Florida Museum of Natural History has repatriated within Specify more than 1,000 legacy stable isotope data points (mined from the literature) with the vouchered specimens by using ancillary non-Darwin Core (DwC) data fields. As this database grows, we hope both to validate previous studies that were done using smaller data sets and to ask new questions of the data that can only be addressed with larger, aggregated data sets. Additionally, we envision that as the community gains a better understanding of the importance of these kinds of ancillary data for adding value to vouchered museum specimens, workflows, data fields, and protocols can be standardized.


Author(s):  
B. Piltz ◽  
S. Bayer ◽  
A. M. Poznanska

In this paper we propose a new algorithm for digital terrain model (DTM) reconstruction from very high spatial resolution digital surface models (DSMs). It combines multi-directional filtering with a new metric, which we call normalized volume above ground, to create an above-ground mask containing buildings and elevated vegetation. This mask can be used to interpolate a ground-only DTM. The presented algorithm works fully automatically, requiring only the processing parameters minimum height and maximum width in metric units. Since slope and breaklines are not decisive criteria, low, smooth, and even very extensive flat objects are recognized and masked. The algorithm was developed with the goal of generating the normalized DSM for automatic 3D building reconstruction, and it also works reliably in environments with distinct hillsides or terrace-shaped terrain where conventional methods would fail. A quantitative comparison with the ISPRS data sets Potsdam and Vaihingen shows that 98-99% of all building data points are identified and can be removed, while enough ground data points (~66%) are kept to reconstruct the ground surface. Additionally, we discuss the concept of size-dependent height thresholds and present an efficient scheme for pyramidal processing of data sets, reducing the time complexity to linear in the number of pixels, O(WH).
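A simplified illustration of above-ground masking (not the paper's multi-directional filtering) is sketched below: a grey-scale morphological opening of the DSM with a window wider than maximum width estimates the ground, and cells rising more than minimum height above that estimate are masked as buildings or vegetation. Parameter values and the toy DSM are assumptions.

```python
# Simplified above-ground masking of a DSM via morphological opening.
import numpy as np
from scipy import ndimage

def above_ground_mask(dsm, cell_size, minimum_height=2.0, maximum_width=30.0):
    window = int(np.ceil(maximum_width / cell_size)) | 1   # odd window in cells
    ground_estimate = ndimage.grey_opening(dsm, size=(window, window))
    return (dsm - ground_estimate) > minimum_height

dsm = np.zeros((100, 100)) + 50.0      # flat terrain at 50 m (placeholder)
dsm[40:60, 40:60] += 10.0              # a 10 m wide, 10 m tall "building"
mask = above_ground_mask(dsm, cell_size=0.5)
print(mask.sum(), "cells masked as above ground")
```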

