Real-Time Prediction of Choke Health Using Production Data Integrated with AI

2021 ◽  
Author(s):  
Ahmed Alghamdi ◽  
Olakunle Ayoola ◽  
Khalid Mulhem ◽  
Mutlaq Otaibi ◽  
Abdulazeez Abdulraheem

Abstract Chokes are an integral part of production systems and are crucial surface equipment that face harsh conditions such as high pressure drops and erosion due to solids. Choke health is usually assessed by analyzing the relationship between choke size, pressure, and flow rate; in large-scale fields, this process requires extensive time and effort using conventional techniques. This paper presents a real-time, proactive approach to detecting choke wear using production data integrated with AI analytics. Flowing-parameter data were collected for more than 30 gas wells producing gas with slight solids production from a high-pressure, high-temperature field; these wells are equipped with a multi-stage choke system. The approach relies on training the AI model on a dataset constructed by comparing the choke valve's rate of change against a smoothed slope of the production rate: if the rate of change falls outside a tolerated range of divergence, abnormal choke behavior is flagged. The data set was divided into 70% for training and 30% for testing. An Artificial Neural Network (ANN) was trained on the following inputs: gas specific gravity, upstream and downstream pressure and temperature, and choke size. The ANN model achieved a correlation coefficient above 0.9, with excellent prediction on data points exhibiting normal or abnormal choke behavior. Piloting this application on large fields, where manual analysis is often impractical, saves substantial man-hours and generates significant cost avoidance. Areas for improvement include equipping the ANN with long-term production profile prediction abilities, such as water production forecasting; the analysis also relies on accurate readings from the venturi meters, which is generally the case in single-phase flow. This AI-driven analytics provides a tremendous improvement for surveillance of remote offshore production operations. The novel approach presented in this paper capitalizes on AI analytics to proactively detect choke health conditions. The advantages of such a model are that it harnesses AI analytics to help operators improve asset integrity and production-monitoring compliance. In addition, the approach can be extended to estimate sand production, since choke wear is a strong function of sand production.
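A minimal sketch of the two steps the abstract describes — labeling choke behavior by comparing the choke-size rate of change against a smoothed production-rate slope, then training an ANN on the listed inputs. The window, tolerance, and file names are illustrative assumptions; the paper does not publish its code.

```python
# Hypothetical sketch of the labeling and training steps described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def label_choke_behavior(choke_size, gas_rate, window=24, tol=0.15):
    """Flag abnormal behavior when the choke-size rate of change diverges
    from the smoothed production-rate slope by more than a tolerated
    fraction (window and tol are assumed values, not the paper's)."""
    choke_slope = np.gradient(choke_size)
    smooth_rate = np.convolve(gas_rate, np.ones(window) / window, mode="same")
    rate_slope = np.gradient(smooth_rate)
    divergence = np.abs(choke_slope - rate_slope)
    return (divergence > tol * (np.abs(rate_slope) + 1e-9)).astype(int)

# X columns: gas specific gravity, upstream/downstream pressure and
# temperature, and choke size -- the inputs listed in the abstract.
X = np.load("well_features.npy")   # placeholder file names
y = np.load("choke_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30)  # 70/30 split
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```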

Author(s):  
Paul Oehlmann ◽  
Paul Osswald ◽  
Juan Camilo Blanco ◽  
Martin Friedrich ◽  
Dominik Rietzel ◽  
...  

Abstract With industries pushing toward digitalized production and adapting to the expectations and increasing requirements of modern applications, additive manufacturing (AM) has moved to the forefront of Industry 4.0. In fact, AM is a main accelerator for digital production through its possibilities in structural design, such as topology optimization, production flexibility, customization, and product development, to name a few. Fused Filament Fabrication (FFF) is a widespread and practical tool for rapid prototyping that also demonstrates the importance of AM technologies through its accessibility to the general public via cost-effective desktop solutions. The increasing integration of systems in an intelligent production environment also enables the generation of large-scale data that can be used for process monitoring and process control. Deep learning, a form of artificial intelligence (AI) and, more specifically, a method of machine learning (ML), is ideal for handling such big data. This study uses a trained artificial neural network (ANN) model as a digital shadow to predict the force within the nozzle of an FFF printer, using filament speed and nozzle temperature as input data. After the ANN model was tested on data from a theoretical model, it was implemented to predict the behavior from real-time printer data; for this purpose, an FFF printer was equipped with sensors that collect printer data during the printing process. The ANN model reflected the kinematics of melting and flow predicted by currently available models for various printing speeds. The model allows a deeper understanding of the influencing process parameters, which ultimately enables determination of the optimum combination of process speed and print quality.
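A sketch of the digital-shadow idea: a small regression ANN mapping filament speed and nozzle temperature to nozzle force. The force values, hidden-layer sizes, and units are illustrative assumptions, not the authors' data or architecture.

```python
# Minimal digital-shadow sketch with synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# inputs: filament speed [mm/s], nozzle temperature [deg C] (assumed units)
X = np.array([[1.0, 210.0], [2.0, 220.0], [4.0, 230.0], [8.0, 240.0]])
F = np.array([0.5, 1.1, 2.4, 5.3])  # nozzle force [N], synthetic values

shadow = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(32, 32),
                                    max_iter=5000, random_state=0))
shadow.fit(X, F)

# The trained "shadow" is then fed real-time sensor readings:
print(shadow.predict([[3.0, 225.0]]))
```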


2020 ◽  
Vol 10 (7) ◽  
pp. 2491
Author(s):  
Shengkai Chen ◽  
Shuliang Fang ◽  
Renzhong Tang

The cloud manufacturing platform needs to allocate endlessly emerging tasks to resources scattered in different places for processing. However, this real-time scheduling problem in the cloud environment is more complicated than in a traditional workshop: constraints such as type matching, task precedence, resource occupation, and logistics duration must be met, and the internal manufacturing plans of providers must also be considered. Since the platform aggregates massive manufacturing resources to serve large-scale manufacturing tasks, the space of feasible solutions is huge, rendering many conventional search algorithms inapplicable. In this paper, we treat resource allocation as the key procedure in real-time scheduling and establish an ANN (Artificial Neural Network) based model to predict task completion status for resource allocation among candidates. The trained ANN model has high prediction accuracy, and the ANN-based scheduling approach outperforms the preferred method on the optimization objectives, including total cost, service satisfaction, and make-span. In addition, the proposed approach has potential for application in smart manufacturing and Industry 4.0 because of its high response performance and good scalability.
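A hedged illustration of prediction-driven allocation: an ANN trained on historical (task, resource) outcomes scores each candidate, and the best-scoring candidate is selected. The feature encoding and the argmin selection rule are assumptions, not the authors' exact formulation.

```python
# Illustrative only: features and scoring rule are assumed, not the paper's.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row encodes a (task, candidate resource) pair: type-match flag,
# provider queue length, estimated logistics duration, unit cost.
rng = np.random.default_rng(0)
X_hist = rng.random((500, 4))
# Target: observed completion time (make-span contribution) for the pair.
y_hist = X_hist @ np.array([0.5, 2.0, 1.5, 0.2]) + rng.random(500) * 0.1

predictor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                         random_state=0).fit(X_hist, y_hist)

def allocate(candidates):
    """Pick the candidate resource with the best predicted completion."""
    return int(np.argmin(predictor.predict(candidates)))

print("chosen candidate:", allocate(rng.random((8, 4))))
```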


2020 ◽  
Author(s):  
Vera Thiemig ◽  
Peter Salamon ◽  
Goncalo N. Gomes ◽  
Jon O. Skøien ◽  
Markus Ziese ◽  
...  

We present EMO-5, a pan-European high-resolution (5 km), (sub-)daily, multi-variable meteorological data set developed especially to meet the needs of an operational, pan-European hydrological service (EFAS; European Flood Awareness System). The data set is built on historic and real-time observations from 18,964 meteorological in-situ stations, collected from 24 data providers, and 10,632 virtual stations from four high-resolution regional observational grids (CombiPrecip, ZAMG - INCA, EURO4M-APGD and CarpatClim) as well as one global reanalysis product (ERA-Interim/Land). This multi-variable data set covers precipitation, temperature (average, minimum, and maximum), wind speed, solar radiation, and vapor pressure, all at daily resolution, with 6-hourly resolution in addition for precipitation and average temperature. The original observations were thoroughly quality controlled before we used the Spheremap interpolation method to estimate the variable values for each of the 5 x 5 km grid cells, together with their affiliated uncertainty. EMO-5 v1 grids covering the period from 1990 to 2019 will be released as a free and open Copernicus product in mid-2020 (with near-real-time release of the latest gridded observations to follow). We would like to present the great potential EMO-5 holds for the hydrological modelling community.

Footnote: EMO = European Meteorological Observations
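Spheremap is a Shepard-type (inverse-distance) interpolation scheme adapted to the sphere; the sketch below shows only the basic great-circle distance weighting, omitting the method's directional and gradient corrections.

```python
# Simplified stand-in for Spheremap: plain inverse-distance weighting over
# great-circle distances. The real method adds corrections not shown here.
import numpy as np

R_EARTH = 6371.0  # km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def idw_on_sphere(cell_lat, cell_lon, stn_lat, stn_lon, stn_val, power=2.0):
    """Estimate one 5 km cell value from surrounding station observations."""
    d = great_circle_km(cell_lat, cell_lon, stn_lat, stn_lon)
    d = np.maximum(d, 1e-6)          # avoid division by zero at a station
    w = 1.0 / d ** power
    return np.sum(w * stn_val) / np.sum(w)

# e.g. daily precipitation at one grid cell from three nearby stations:
print(idw_on_sphere(48.2, 16.4,
                    np.array([48.0, 48.5, 47.9]),
                    np.array([16.0, 16.8, 16.5]),
                    np.array([3.2, 4.1, 2.7])))
```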


2020 ◽  
Author(s):  
Markus Wiedemann ◽  
Bernhard S.A. Schuberth ◽  
Lorenzo Colli ◽  
Hans-Peter Bunge ◽  
Dieter Kranzlmüller

Precise knowledge of the forces acting at the base of tectonic plates is of fundamental importance, yet models of mantle dynamics are still often qualitative in nature. One particular problem is that we cannot access the deep interior of our planet and therefore cannot make direct in-situ measurements of the relevant physical parameters. Fortunately, modern software and powerful high-performance computing infrastructures allow us to generate complex three-dimensional models of the time evolution of mantle flow through large-scale numerical simulations.

In this project, we aim to visualize the resulting convective patterns that occur thousands of kilometres below our feet and to make them "accessible" using high-end virtual reality (VR) techniques.

Models with several hundred million grid cells are possible nowadays on modern supercomputing facilities, such as those available at the Leibniz Supercomputing Centre. These models provide quantitative estimates of the inaccessible parameters, such as buoyancy and temperature, as well as predictions of the associated gravity field and seismic wavefield that can be tested against Earth observations.

3-D visualizations of the computed physical parameters allow us to inspect the models as if one were actually travelling down into the Earth. In this way, convective processes that occur thousands of kilometres below our feet become virtually accessible by combining the simulations with high-end VR techniques.

The large data set used here poses severe challenges for real-time visualization: it cannot fit into graphics memory, yet rendering must meet strict deadlines. This raises the necessity of balancing the amount of displayed data against the time needed to render it.

As a solution, we introduce a rendering framework and describe the workflow that allows us to visualize this geoscientific dataset. Our example exceeds 16 TByte in size, which is beyond the capabilities of most visualization tools. To display this dataset in real time, we reduce and declutter it through isosurfacing and mesh optimization techniques.

Our rendering framework relies on multithreading and data-decoupling mechanisms that allow us to upload data to graphics memory while maintaining high frame rates. The final visualization application can be executed in a CAVE installation as well as on head-mounted displays such as the HTC Vive or Oculus Rift; the latter devices will allow our example to be viewed on-site at the EGU conference.
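A minimal sketch of the data-reduction step described above (isosurfacing plus mesh decimation), using scikit-image and Open3D as stand-ins. The authors' framework is custom, so the library choice, file names, and threshold here are assumptions.

```python
# Extract an isosurface from a temperature volume with marching cubes,
# then decimate the mesh so it fits in graphics memory.
import numpy as np
import open3d as o3d
from skimage.measure import marching_cubes

temperature = np.load("mantle_temperature_snapshot.npy")  # placeholder 3-D array

# Keep only the surface where temperature crosses a chosen anomaly level.
verts, faces, _, _ = marching_cubes(temperature, level=0.5)

mesh = o3d.geometry.TriangleMesh(
    o3d.utility.Vector3dVector(verts),
    o3d.utility.Vector3iVector(faces.astype(np.int32)))
# Quadric decimation trades triangle count for frame rate.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
o3d.io.write_triangle_mesh("isosurface_lod0.ply", mesh)
```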


2018 ◽  
Vol 246 ◽  
pp. 03009
Author(s):  
Jia-Ke Lv ◽  
Yang Li ◽  
Xuan Wang

We built a real-time log-data processing platform on Storm-on-YARN that integrates MapReduce and Storm: MapReduce extracts global knowledge from large-scale offline data, Storm extracts burst knowledge from the small-scale data in Kafka buffers, and streaming data are computed continuously in real time in combination with the global knowledge. We tested our technique with the well-known KDD CUP 99 data set. The experimental results show the system to be effective and efficient.
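An illustrative consumer for the streaming half of such a pipeline, where events buffered in Kafka are scored against global knowledge exported by the offline MapReduce stage. Topic, field, and file names are assumptions.

```python
# Score streaming log events against an offline baseline (kafka-python).
import json
from kafka import KafkaConsumer

# Global knowledge (e.g. per-feature statistics) exported by the batch job.
with open("global_knowledge.json") as fh:
    baseline = json.load(fh)

consumer = KafkaConsumer("log-events",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=lambda b: json.loads(b))

for record in consumer:
    event = record.value
    # Flag events that deviate sharply from the offline baseline.
    score = abs(event["bytes"] - baseline["mean_bytes"]) / baseline["std_bytes"]
    if score > 3.0:
        print("suspicious event:", event)
```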


2002 ◽  
Vol 469 ◽  
pp. 1-12 ◽  
Author(s):  
A. S. FLEISCHER ◽  
R. J. GOLDSTEIN

High-pressure gases are used to study high-Rayleigh-number Rayleigh–Bénard convection in cylindrical horizontal enclosures. The Nusselt–Rayleigh heat transfer relationship is investigated for 1×10^9 < Ra < 1.7×10^12. Schlieren video images of the flow field are recorded through optical viewports in the pressure vessel. The data set is well correlated by Nu = 0.071 Ra^0.328. The schlieren results confirm the existence of a large-scale flow that periodically interrupts the ascending and descending plumes. The intensity of both the plumes and the large-scale flow increases with Rayleigh number.
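The reported correlation can be evaluated directly at the ends of the measured range:

```python
# Nu = 0.071 * Ra^0.328, valid for 1e9 < Ra < 1.7e12 per the abstract.
def nusselt(ra):
    return 0.071 * ra ** 0.328

for ra in (1e9, 1.7e12):
    print(f"Ra = {ra:.1e}  ->  Nu = {nusselt(ra):.0f}")
```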


2007 ◽  
Vol 30 (3) ◽  
pp. 363-370 ◽  
Author(s):  
Mark Kidd ◽  
Boaz Nadler ◽  
Shrikant Mane ◽  
Geeta Eick ◽  
Maximillian Malfertheiner ◽  
...  

Accurate quantitation of target genes depends on correct normalization. Use of genes with variable tissue transcription (such as GAPDH) is problematic, particularly in clinical samples derived from different tissue sources. Using a large-scale gene database (Affymetrix U133A) data set of 36 gastrointestinal (GI) tumors and normal tissues, we identified 8 candidate reference genes and established expression levels by real-time RT-PCR in an independent data set (n = 42). A geometric averaging method (geNorm) identified ALG9, TFCP2, and ZNF410 as the most robustly expressed control genes. Examination of raw C_T values demonstrated that these genes were tightly correlated among themselves (R^2 > 0.86, P < 0.0001), with low variability [coefficient of variation (CV) < 12.7%] and high interassay reproducibility (r = 0.93, P = 0.001). In comparison, the alternative control gene, GAPDH, exhibited the highest variability (CV = 18.1%), was significantly differently expressed between tissue types (P = 0.05), was poorly correlated with the three reference genes (R^2 < 0.4), and was considered the least stable gene. To illustrate the importance of correct normalization, the target gene MTA1 was significantly overexpressed (P = 0.0006) in primary GI neuroendocrine tumor (NET) samples (vs. normal GI samples) when normalized by geNormATZ but not when normalized using GAPDH. The geNormATZ approach was, in addition, applicable to adenocarcinomas; MTA1 was overexpressed (P < 0.04) in malignant colon, pancreas, and breast tumors compared with normal tissues. We provide a robust basis for the establishment of a reference gene set using GeneChip data and provide evidence for the utility of normalizing a malignancy-associated gene (MTA1) with novel reference genes and the geNorm approach in GI NETs as well as in adenocarcinomas and breast tumors.
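The geNorm stability measure M that ranks the candidates is the average standard deviation of a gene's pairwise log-ratios with all other candidates (lower M = more stable); a compact sketch on synthetic data:

```python
# geNorm gene-stability measure M.
import numpy as np

def genorm_m(expr):
    """expr: samples x genes matrix of relative expression quantities."""
    log_expr = np.log2(expr)
    n = log_expr.shape[1]
    m = np.empty(n)
    for j in range(n):
        # log-ratios of gene j against every other candidate gene
        ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)
        m[j] = ratios.std(axis=0, ddof=1).mean()
    return m

expr = np.random.lognormal(mean=2.0, sigma=0.2, size=(42, 8))  # 8 candidates
stability = genorm_m(expr)
print("most stable gene index:", int(np.argmin(stability)))
```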


2020 ◽  
Author(s):  
Peter Berg ◽  
Fredrik Almén ◽  
Denica Bozhinova

Abstract. HydroGFD (Hydrological Global Forcing Data) is a data set of bias-adjusted reanalysis data for daily precipitation and minimum, mean, and maximum temperature. It is mainly intended for large-scale hydrological modeling but is also suitable for other impact modeling. The data set has almost global land-area coverage, excluding the Antarctic continent, at a horizontal resolution of 0.25°, i.e. about 25 km. It is available for the complete ERA5 reanalysis period, currently 1979 until five days before present; this period will be extended back to 1950 once the ERA5 back catalogue is available. The historical period is adjusted using global gridded observational data sets, and a collection of several reference data sets is used to acquire real-time data. Consistency in time is pursued by relying on a background climatology and making use only of anomalies from the different data sets. Precipitation is adjusted for mean bias as well as for the number of wet days in a month; the latter relies on a calibrated statistical method whose only input is the monthly precipitation anomaly, so no additional data on the number of wet days are necessary. The daily mean temperature is adjusted toward the monthly mean of the observations and applied to 1 h timesteps of the ERA5 reanalysis; daily mean, minimum, and maximum temperature are then calculated. The performance of the HydroGFD3 data set is on par with other similar products, although there are significant differences in different parts of the globe, especially where observations are uncertain. Further, HydroGFD3 tends to have higher precipitation extremes, partly due to its higher spatial resolution. In this paper, we present the methodology, the evaluation results, and how to access the data set at https://doi.org/10.5281/zenodo.3871707.
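A stripped-down illustration of the adjustment idea: scale daily reanalysis precipitation so its monthly mean matches the observations, and shift temperature additively. The wet-day correction and anomaly blending of the actual HydroGFD method are omitted.

```python
# Monthly mean-bias adjustment, simplified.
import numpy as np

def adjust_month(precip_daily, temp_daily, obs_precip_mean, obs_temp_mean):
    """Adjust one month of daily reanalysis values toward monthly obs."""
    scale = obs_precip_mean / max(precip_daily.mean(), 1e-9)
    shift = obs_temp_mean - temp_daily.mean()
    return precip_daily * scale, temp_daily + shift

p = np.random.gamma(0.8, 3.0, size=30)   # synthetic daily precipitation [mm]
t = 10 + 3 * np.random.randn(30)         # synthetic daily mean temperature [C]
p_adj, t_adj = adjust_month(p, t, obs_precip_mean=2.5, obs_temp_mean=11.2)
print(p_adj.mean(), t_adj.mean())        # now matches the observed means
```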


Author(s):  
David Gamero ◽  
Andrew Dugenske ◽  
Thomas Kurfess ◽  
Christopher Saldana ◽  
Katherine Fu

Abstract In this paper, the design and performance differences between Relational Database Management Systems (RDBMS) and NoSQL database systems are examined, with attention to their applicability to real-world Internet of Things for manufacturing (IoTfM) data. While previous work has extensively compared SQL and NoSQL for both generalized and IoT uses, this work specifically examines the tradeoffs and performance differences for manufacturing applications using a high-fidelity data set collected from a large US manufacturing firm. Growing an IoT system beyond the pilot stage requires scalable data storage; this work seeks to determine the impact of the selected database systems on data-write performance at scale. Payload size and message frequency were used as the primary characteristics to maintain model fidelity in simulated clients. As the number of simulated asset clients grows, the data-write latency was calculated to determine how each database system's performance was affected. To isolate the RDBMS and NoSQL differences, a cloud environment was created using Amazon Web Services (AWS) with two identical data-ingestion pipelines: one writing data to an RDBMS (AWS Aurora MySQL) and one to NoSQL (AWS DynamoDB). The findings may provide guidance for further experimentation in large-scale manufacturing IoT implementations.
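A minimal latency probe in the spirit of the benchmark, timing a single write against each backend. Table names, credentials, and the payload are placeholders, not the authors' setup.

```python
# Time one write each to DynamoDB (boto3) and Aurora MySQL (pymysql).
import json
import time
import uuid

import boto3
import pymysql

payload = {"asset_id": str(uuid.uuid4()), "ts": time.time(), "temp": 71.3}

# --- DynamoDB (NoSQL) write ---
table = boto3.resource("dynamodb").Table("iot_writes_test")
t0 = time.perf_counter()
table.put_item(Item={"pk": payload["asset_id"], "body": json.dumps(payload)})
print("dynamodb write latency:", time.perf_counter() - t0, "s")

# --- Aurora MySQL (RDBMS) write ---
conn = pymysql.connect(host="aurora-endpoint", user="iot",
                       password="...", database="iot")
with conn.cursor() as cur:
    t0 = time.perf_counter()
    cur.execute("INSERT INTO iot_writes_test (pk, body) VALUES (%s, %s)",
                (payload["asset_id"], json.dumps(payload)))
    conn.commit()
print("mysql write latency:", time.perf_counter() - t0, "s")
```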


2019 ◽  
Vol 71 (5) ◽  
pp. 537-550
Author(s):  
Shuangshuang Liu ◽  
Geoffrey Phelps

Teacher professional development (PD) is seen as a promising intervention to improve teacher knowledge, instructional practice, and ultimately student learning. While research finds instances of significant program effects on teacher knowledge, little is known about how long these effects last; if teachers forget what they learned, the contribution of the intervention is diminished. Using a large-scale data set, this study examines the sustainability of gains in teachers' content knowledge for teaching mathematics (CKT-M). Results show a negative rate of change in CKT after teachers complete the training, suggesting that the average score gain from the program is lost in just 37 days. There is, however, variation in how quickly knowledge is lost, with teachers participating in summer programs losing it more rapidly than those attending programs that occur during the school year. The implications of these findings for designing and evaluating PD programs are discussed.
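The 37-day figure follows from a linear decay model, days-to-loss = initial gain / |daily rate of change|; the numbers below are illustrative, not the study's estimates.

```python
# Linear-decay arithmetic behind a "gain lost after N days" claim.
gain = 0.15            # immediate post-training score gain (assumed units)
daily_change = -0.004  # estimated change in score per day (assumed)

days_to_loss = gain / abs(daily_change)
print(f"gain erased after ~{days_to_loss:.0f} days")  # ~38 with these inputs
```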

