Better reservoir visualisation

2012 ◽  
Vol 52 (1) ◽  
pp. 475
Author(s):  
Andrew Moore ◽  
David J. Storey ◽  
Darren Stanton

Santos, a major Australian energy company, sponsors open source software to improve 3D reservoir visualisation. The software, TurboVNC, allows users of standard laptops to connect from any network location to servers running Paradigm exploration and production software. Performance, collaboration and data management benefits are coupled with capital and operational savings of A$2.5 million. In 2011, Santos's TurboVNC project won the global Innovator of the Year award from Red Hat, supplier of Linux, the server operating system. Beyond these immediate benefits, the real value of thin-client application delivery is the ability to centralise data in one large database, which allows consistent data standards and quality procedures to be applied. New insights and value can then be derived from the consolidated big data gathered across the full exploration and production spectrum. The hypothesis is that access to larger, integrated data sets results in better reservoir models, reduced uncertainty and optimised production.
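As a rough, hypothetical sketch of the thin-client workflow described here (not Santos's actual tooling), a laptop-side launcher might wrap the TurboVNC viewer as follows; the install path is TurboVNC's Linux default, while the host name and display number are placeholders:

```python
# Hypothetical laptop-side launcher for a TurboVNC session against a
# central application server; host and display values are illustrative only.
import subprocess

VIEWER = "/opt/TurboVNC/bin/vncviewer"   # default TurboVNC install path on Linux
SERVER = "epserver01.example.com"        # hypothetical application server
DISPLAY = 1                              # VNC display number of the user's session

def open_session(server: str = SERVER, display: int = DISPLAY) -> None:
    """Open a TurboVNC viewer window attached to a remote desktop session."""
    subprocess.run([VIEWER, f"{server}:{display}"], check=True)

if __name__ == "__main__":
    open_session()
```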

Author(s):  
Ricardo Oliveira ◽  
Rafael Moreno

Federal, State and Local government agencies in the USA are investing heavily in the dissemination of the Open Data sets each of them produces. The main driver behind this push is to increase agencies' transparency and accountability, as well as to improve citizens' awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets, even those available from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets, one of which is the city parcels layer, containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open source software was used first to collect data from diverse City of Denver Open Data sets, then to upload them to a repository in the Cloud, where they were processed using a cloud-based PostgreSQL installation and Python scripts. Our method was able to extract non-spatial information from a ‘not-ready-to-download’ source that could then be combined with the initial data set to enhance its potential use.
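A minimal sketch of the collect-and-load pipeline the authors describe, using requests and psycopg2; the dataset URL, connection string, table and column names are all hypothetical placeholders, not the authors' actual sources:

```python
# Minimal sketch of the collect-then-load workflow described above.
# The dataset URL, connection string, and column names are hypothetical.
import csv
import io

import requests
import psycopg2

DATA_URL = "https://www.denvergov.org/media/gis/parcels.csv"              # hypothetical
DSN = "host=cloud-db.example.com dbname=denver user=etl password=secret"  # hypothetical

def load_parcels() -> None:
    resp = requests.get(DATA_URL, timeout=60)
    resp.raise_for_status()
    rows = csv.DictReader(io.StringIO(resp.text))

    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            """CREATE TABLE IF NOT EXISTS parcels (
                   schednum TEXT PRIMARY KEY,   -- parcel identifier
                   owner_name TEXT,
                   land_use TEXT
               )"""
        )
        for row in rows:
            cur.execute(
                "INSERT INTO parcels (schednum, owner_name, land_use) "
                "VALUES (%s, %s, %s) ON CONFLICT (schednum) DO NOTHING",
                (row["SCHEDNUM"], row["OWNER_NAME"], row["LAND_USE"]),
            )

if __name__ == "__main__":
    load_parcels()
```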


2018 ◽  
Author(s):  
Luís Moreira de Sousa

The volume and coverage of spatial data have increased dramatically in recent years, with Earth observation programmes producing dozens of gigabytes of data on a daily basis. The term Big Spatial Data is now applied to data sets that pose real challenges to researchers and practitioners alike. The difficulties are partly related to a lack of tools supporting appropriate Coordinate Reference Systems (CRS). As a rule, these data are provided in highly irregular geodesic grids, defined along equal intervals of latitude and longitude. Compounding the problem, users of such data end up taking the geodesic coordinates in these grids as a Cartesian system, implicitly applying Marinus of Tyre's projection. A first approach towards compact storage of global geospatial data is to work in a Cartesian system produced by an equal-area projection. There are a good number to choose from, but those commonly supported by GIS software invariably belong to the sinusoidal or pseudo-cylindrical families, which impose important distortions of shape and distance. The land masses of Antarctica, Alaska, Canada, Greenland and Russia are particularly distorted by such projections. A more effective approach is to store and work with data in modern cartographic projections, in particular those defined on the Platonic and Archimedean solids. In spite of various attempts at open source software supporting these projections, in practice they remain largely out of reach for GIS practitioners. This communication reviews persisting difficulties in working with worldwide big spatial data, current strategies to address them, the compromises those strategies impose, and the remaining gaps in open source software.
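To make the first approach concrete, here is a minimal sketch of reprojecting geodesic coordinates into an equal-area Cartesian system with pyproj, using Mollweide as a stand-in for any equal-area projection (the polyhedral projections advocated above are not covered by this sketch, and the sample point is illustrative):

```python
# Reprojecting geodesic (lat/lon) coordinates into an equal-area Cartesian
# system, as a first step towards compact global geospatial data. Mollweide
# stands in for any equal-area projection.
from pyproj import Transformer

# always_xy=True makes the input order (longitude, latitude)
to_equal_area = Transformer.from_crs("EPSG:4326", "+proj=moll", always_xy=True)

lon, lat = 149.13, -35.28          # sample point (Canberra), illustrative
x, y = to_equal_area.transform(lon, lat)
print(f"Mollweide coordinates: x={x:.0f} m, y={y:.0f} m")
```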


Author(s):  
Richard S. Segall

This chapter discusses what open source software is, how it differs from other types of software, its software development cycle, and its relationship to Big Data. Open source software (OSS) is a type of computer software whose source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Big Data are data sets so voluminous and complex that traditional data processing application software is inadequate to deal with them. Big Data can be discrete or continuous streaming data, and is accessible using many types of computing devices ranging from supercomputers and personal workstations to mobile devices and tablets. The chapter discusses how fog computing can be combined with cloud computing for the visualization of Big Data, and concludes with a summary of additional web-based Big Data visualization software.


SPE Journal ◽  
2021 ◽  
Vol 26 (02) ◽  
pp. 1011-1031
Author(s):  
Gilson Moura Silva Neto ◽  
Ricardo Vasconcellos Soares ◽  
Geir Evensen ◽  
Alessandra Davolio ◽  
Denis José Schiozer

Time-lapse-seismic-data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One advantage of including this kind of data when improving reservoir-flow models is that it provides information complementary to the wells' production data. Ensemble-based methods are among the standard tools used to calibrate reservoir models with time-lapse seismic data. One drawback of assimilating time-lapse seismic data is the size of the data sets, especially for large reservoir models. This leads to high-dimensional problems that demand significant computational resources to process and store the matrices when conventional, straightforward methods are used. Another known issue with ensemble-based methods is the limited ensemble size, which causes spurious correlations between the data and the parameters and limits the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble size. We test two local analysis approaches: one distance-based and one correlation-based. We apply these implementations to two synthetic time-lapse-seismic-data-assimilation cases: a 2D example and a field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the well-known ensemble smoother with multiple data assimilation (ES-MDA) using Kalman-gain distance-based localization. The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models comparable with those from other straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
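For orientation, the following toy NumPy sketch shows a single ensemble-smoother update with a crude distance-based localization of the Kalman gain, in the spirit of the ES-MDA benchmark mentioned above; it is not the authors' SEnRML implementation, and the dimensions, taper, and error covariance are all illustrative assumptions:

```python
# Toy ensemble-smoother update with distance-based localization of the
# Kalman gain (NOT the authors' SEnRML method). Dimensions are tiny and
# illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_m, n_d, n_e = 50, 10, 20          # parameters, observations, ensemble members

M = rng.normal(size=(n_m, n_e))     # prior parameter ensemble (columns = members)
D = rng.normal(size=(n_d, n_e))     # predicted data ensemble, g(M)
d_obs = rng.normal(size=n_d)        # observed data
C_e = 0.1 * np.eye(n_d)             # observation-error covariance

# Ensemble (cross-)covariances from mean-removed anomalies.
A_m = M - M.mean(axis=1, keepdims=True)
A_d = D - D.mean(axis=1, keepdims=True)
C_md = A_m @ A_d.T / (n_e - 1)
C_dd = A_d @ A_d.T / (n_e - 1)

# Distance-based localization: taper parameter-data covariances to zero
# beyond a critical range (a crude stand-in for a Gaspari-Cohn taper).
dist = np.abs(np.arange(n_m)[:, None] - np.linspace(0, n_m, n_d)[None, :])
rho = np.clip(1.0 - dist / 25.0, 0.0, 1.0)    # hypothetical taper, range 25
K = (rho * C_md) @ np.linalg.inv(C_dd + C_e)  # localized Kalman gain

# Update each member against perturbed observations.
D_pert = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_d), C_e, n_e).T
M_post = M + K @ (D_pert - D)
print(M_post.shape)   # (50, 20) -> updated parameter ensemble
```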


Author(s):  
Brent A. Jones

Many smaller pipeline operating companies see the benefits of implementing a Geographic Information System (GIS) to organize pipeline data and meet the requirements of 49 CFR 195, but cannot justify the cost of a large-scale AM/FM/GIS system. PPL Interstate Energy Company (PPL IE) is a pipeline company with 84 miles of main that implemented a GIS solution leveraging both existing technology and facility data investments. This paper discusses the process used to acquire landbase data, to organize existing pipeline data from a variety of paper-based and digital sources, and to integrate these data sets. It also discusses the functionality and benefits of the resulting GIS.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Xinyang Li ◽  
Caroline H. Roney ◽  
Balvinder S. Handa ◽  
Rasheda A. Chowdhury ◽  
Steven A. Niederer ◽  
...  

The analysis of the complex mechanisms underlying ventricular fibrillation (VF) and atrial fibrillation (AF) requires sophisticated tools for studying spatio-temporal action potential (AP) propagation dynamics. However, fibrillation analysis tools are often custom-made or proprietary and vary between research groups; with no standardised framework for analysis, results from different studies have led to disparate findings. Given this technical gap, here we present a comprehensive framework and set of principles for quantifying properties of wavefront dynamics in phase-processed data recorded during myocardial fibrillation with potentiometric dyes. Phase transformation of the fibrillatory data is particularly useful for identifying self-perpetuating spiral waves or rotational drivers (RDs) rotating around a phase singularity (PS). RDs have been implicated in sustaining fibrillation, so accurate localisation and quantification of RDs is crucial for understanding specific fibrillatory mechanisms. In this work, we assess how variation of analysis parameters and thresholds in the tracking of PSs and the quantification of RDs can result in different interpretations of the underlying fibrillation mechanism. These techniques are described and applied to experimental AF and VF data and to AF simulations, and examples are provided from each of these data sets to demonstrate the range of fibrillatory behaviours and the adaptability of the tools. The presented methodologies are available as open source software and offer an off-the-shelf research toolkit for quantifying and analysing fibrillatory mechanisms.
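As an illustration of the phase-transformation step on which PS detection rests, here is a minimal sketch using scipy.signal.hilbert on a synthetic, zero-mean trace; the signal parameters are illustrative, and this is not the toolkit's own code:

```python
# Sketch of the phase transformation step used in fibrillation mapping:
# the instantaneous phase of a (mean-subtracted) optical action-potential
# trace is obtained from its analytic signal. The trace is synthetic.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)
v = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

analytic = hilbert(v - v.mean())     # analytic signal of the zero-mean trace
phase = np.angle(analytic)           # instantaneous phase in (-pi, pi]

# Phase singularities are points where the phase winds by 2*pi around a
# closed path; tracking them over time identifies rotational drivers (RDs).
print(phase.min(), phase.max())
```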


Author(s):  
E. Salinas ◽  
A. Muñoz ◽  
A. Wilde ◽  
J. Healy ◽  
M. Bakayeva

Empresa Nacional del Petróleo (ENAP) is an energy company wholly owned by the Chilean Government. The company comprises two business divisions, Exploration and Production (upstream) and Refining and Logistics (downstream), complemented by corporate managerial structures. The objective of ENAP's Exploration and Production business line is the exploration and exploitation of hydrocarbons (oil and natural gas) in the south of Chile (Magallanes) and abroad, as well as geothermal energy, in the latter case in association with private entities in areas of northern Chile. Within the Magallanes region, ENAP operates approximately 2,200 km of natural gas, crude oil and refined product pipelines. These pipelines range in diameter from 4 to 20 inches, and the majority are over 30 years old. For operational reliability reasons, since 1998 ENAP has regularly inspected its pipelines using intelligent in-line inspection tools. Furthermore, since 2006, as part of an overall pipeline integrity management plan, ENAP has been conducting fitness-for-service assessments on selected pipelines, including a risk-based assessment considering pipeline condition and the impact on continuity of operation. The integrity management plan implemented by ENAP in the Magallanes region has been applied to all pipelines transporting gas, crude oil and refined products, including those built after 1990. The plan covers the construction phase, from which invaluable information is gathered for later use. The primary aims of ENAP's integrity management plan are: to protect the public; to protect the surrounding environment by preventing pipeline failures; to ensure efficient use of the available maintenance budget; to prevent damage to the pipelines, e.g. due to corrosion activity; and to provide clarity about the activities being performed by ENAP in order to ensure an efficient, safe and reliable pipeline system. This paper describes the integrity management strategy adopted by ENAP and reviews a number of the challenges encountered during its implementation.



