Leveraging Large Subsurface Data and Associated Uncertainties to Build High Quality 3D Structural Model

2021 ◽  
Author(s):  
Sanat Aidarbayev ◽  
Mohamed Kamel Ouldamer ◽  
Guillaume Masson ◽  
Jean-Michel Codo

Objectives/Scope: At the brownfield development stage, the sheer volume and diversity of data make it challenging to integrate everything in a consistent manner to build a prime structural model. Like many others, the studied field consists of several stacked reservoirs featuring many faults and close to a thousand drilled wells with vertical, slanted and horizontal trajectories. On top of that, many horizontal wells target thin carbonate layers, for which tightly spaced data points often yield conflicting observations. Consequently, horizontal and deviated wells are commonly discarded from structural modelling, leaving substantial and valuable information unused. Some of these wells may be accounted for indirectly through the introduction of pseudo-wells, making the modelling workflow tedious, user-dependent and therefore difficult to repeat.

Methods, Procedures, Process: "It's better to be approximately right than exactly wrong", as the 19th-century philosopher Carveth Read put it. Indeed, every physical measurement, even from the most modern and sophisticated tools, is subject to some uncertainty, so assessing the uncertainty attached to each input dataset is paramount in this method. Integrated teamwork between geologists, geophysicists and drilling specialists led to a thorough analysis of every dataset feeding the structural model building process, together with best estimates of its uncertainty. Ranges were specified for ∼1000 well trajectories, ∼16,000 geological markers, 3 seismic travel-time maps, 3 interval velocities and 59 thickness maps. All available data are then used in a consistent manner to minimize the depth uncertainty, and accuracy is further improved by linking all surfaces together in a multi-layered model. In addition, the methodology considers both the large scale, capturing spatially continuous structural trends, and the more local scale, incorporating inter-well thickness variations due to sedimentological controls.

Results, Observations, Conclusions: Following this approach, all subsurface data came into agreement and produced more geological architectures. As an example, Figure 1 shows a cross-section along a well drilled in the B4 target layer, which has an average thickness of 6 ft. As illustrated on the left, the classical workflow using vertical wells and some pseudo-wells resulted in an anomalous pull-up structure and an overall wavy, non-geological geometry; moreover, the model places the well in a non-reservoir dense layer even though the zone-log interpretation shows it within the reservoir. On the right, accounting for horizontal wells and their uncertainties integrates all subsurface data with improved consistency: the structural model is smoother and more geological, and the well is correctly placed in the targeted reservoir.

Novel/Additive Information: This approach makes the studied field one of the first brownfields to incorporate all data in a consistent manner, without pseudo-wells, into a 3D structural model. It will bring considerable value by reducing uncertainties during the subsequent property and dynamic modelling stages.
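The abstract does not spell out how observations with different uncertainties are combined, but the underlying idea of letting each depth observation contribute in proportion to its reliability can be illustrated with a minimal inverse-variance sketch (all names and numbers below are hypothetical, not taken from the paper):

```python
import numpy as np

def fuse_depths(depths, sigmas):
    """Inverse-variance weighted estimate of a horizon depth at one
    location, given conflicting observations and their 1-sigma
    uncertainties. Returns the fused depth and its uncertainty."""
    depths = np.asarray(depths, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weights ~ 1/sigma^2
    fused = np.sum(w * depths) / np.sum(w)
    sigma_fused = np.sqrt(1.0 / np.sum(w))           # always <= min(sigmas)
    return fused, sigma_fused

# Hypothetical example: a marker picked in a vertical well (small
# positional uncertainty), one picked in a horizontal well (larger
# along-hole uncertainty), and a seismic depth estimate.
depth, sigma = fuse_depths(depths=[6020.0, 6032.0, 6025.0],
                           sigmas=[2.0, 8.0, 15.0])
print(f"fused depth: {depth:.1f} ft +/- {sigma:.1f} ft")
```

Note that the fused uncertainty is smaller than that of any single input, which is the sense in which using all available data "minimizes the depth uncertainty": even a noisy horizontal-well marker tightens the estimate instead of being discarded.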

1994 ◽  
Vol 2 (1) ◽  
pp. 79-99 ◽  
Author(s):  
Hiroaki Kitano

This article reports on a simple neurogenesis model combined with evolutionary computation. Because the integration of an evolutionary process with neural networks is such an exciting field of study, promising new computational models and, possibly, novel biological insights, much research has been conducted in this area. However, only a few studies have incorporated a development stage, and none have modeled metabolism and other chemical reactions in a consistent manner. In this article, we present a simple model of neurogenesis and cell differentiation that combines evolutionary computing, metabolism, development, and neural networks. The model rests on an evolutionary large-scale chaos as its mathematical foundation: a large-scale chaos whose map functions change through evolutionary computing. Experiments indicate that the model is capable of evolving and growing large neural networks, and that it exhibits phenomena analogous to cell differentiation.
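The abstract defines an evolutionary large-scale chaos only verbally. As a rough sketch of the flavor of such a system (not Kitano's actual model), a globally coupled logistic map is known to split into clusters of synchronized elements, an effect often read as an analogue of cell differentiation, and its map parameter can be driven by a toy evolutionary loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcm_step(x, a, eps):
    """One step of a globally coupled logistic map:
    x_i <- (1 - eps) * f(x_i) + eps * mean_j f(x_j), with f(x) = a x (1 - x)."""
    fx = a * x * (1.0 - x)
    return (1.0 - eps) * fx + eps * fx.mean()

def run(a, x0, eps, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x = gcm_step(x, a, eps)
    return x

def n_clusters(x, tol=1e-3):
    """Count groups of elements whose states coincide within tol."""
    return len(np.unique(np.round(x / tol)))

# Toy 'evolution' of the map parameter a: mutate it and keep the variant
# whose settled state shows more clusters (a stand-in fitness, chosen
# only to make the differentiation analogy visible).
a, eps = 3.7, 0.3
x0 = rng.random(100)
for _ in range(20):
    a_mut = float(np.clip(a + rng.normal(0.0, 0.05), 3.5, 4.0))
    if n_clusters(run(a_mut, x0, eps)) > n_clusters(run(a, x0, eps)):
        a = a_mut
print("evolved map parameter a =", a)
```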


2018 ◽  
Author(s):  
Rune Thomas Kidmose ◽  
Jonathan Juhl ◽  
Poul Nissen ◽  
Thomas Boesen ◽  
Jesper Lykkegaard Karlsen ◽  
...  

Model building into experimental maps is a key element of structural biology, but can be both time-consuming and error-prone. Here we present Namdinator, an easy-to-use tool that enables the user to run a Molecular Dynamics Flexible Fitting (MDFF) simulation in an automated manner through a pipeline system. Namdinator will modify an atomic model to fit within cryo-EM or crystallography density maps, and can be used advantageously both for the initial fitting of models and for a geometrical optimization step to correct outliers, clashes and other model problems. We have benchmarked Namdinator against 39 deposited models and maps from cryo-EM and observe model improvements in 34 of these cases (87%). Clashes between atoms were reduced, and model-to-map fit and overall model geometry were improved, in several cases substantially. We show that Namdinator is able to model large-scale conformational changes compared to the starting model. Namdinator is a fast and easy way to create suitable initial models for both cryo-EM and crystallography. It can fix model errors in the final steps of model building, and is usable for structural model builders at all skill levels. Namdinator is available as a web service (https://namdinator.au.dk), or can be run locally as a command-line tool.

Synopsis: A pipeline tool called Namdinator is presented that enables the user to run a Molecular Dynamics Flexible Fitting (MDFF) simulation in a fully automated manner, both online and locally. This provides a fast and easy way to create suitable initial models for both cryo-EM and crystallography and helps fix errors in the final steps of model building.


IUCrJ ◽  
2019 ◽  
Vol 6 (4) ◽  
pp. 526-531 ◽  
Author(s):  
Rune Thomas Kidmose ◽  
Jonathan Juhl ◽  
Poul Nissen ◽  
Thomas Boesen ◽  
Jesper Lykkegaard Karlsen ◽  
...  

Model building into experimental maps is a key element of structural biology, but can be both time-consuming and error-prone for low-resolution maps. Here we present Namdinator, an easy-to-use tool that enables the user to run a molecular dynamics flexible fitting simulation followed by real-space refinement in an automated manner through a pipeline system. Namdinator will modify an atomic model to fit within cryo-EM or crystallography density maps, and can be used advantageously both for the initial fitting of models and for a geometrical optimization step to correct outliers, clashes and other model problems. We have benchmarked Namdinator against 39 deposited cryo-EM models and maps, and observe model improvements in 34 of these cases (87%). Clashes between atoms were reduced, and the model-to-map fit and overall model geometry were improved, in several cases substantially. We show that Namdinator is able to model large-scale conformational changes compared to the starting model. Namdinator is a fast and easy tool for structural model builders at all skill levels. Namdinator is available as a web service (https://namdinator.au.dk), or it can be run locally as a command-line tool.
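Namdinator's own validation metrics are not reproduced in the abstract, but the notion of a clash count, one of the quantities it reports as improved, can be illustrated with a bare distance-based sketch (the 2.2 Å cutoff and the random coordinates are arbitrary assumptions, not Namdinator's actual criterion):

```python
import numpy as np
from scipy.spatial import cKDTree

def count_clashes(coords, cutoff=2.2):
    """Count pairs of atoms closer than `cutoff` angstroms. A real
    validator would use per-element van der Waals radii and exclude
    covalently bonded neighbours; this is a bare sketch."""
    tree = cKDTree(coords)
    return len(tree.query_pairs(r=cutoff))

# Hypothetical coordinates: an (N x 3) array in angstroms.
coords = np.random.default_rng(1).uniform(0.0, 30.0, size=(500, 3))
print("clashing pairs:", count_clashes(coords))
```

Running the same count on the model before and after flexible fitting gives a simple before/after comparison of the kind summarized in the benchmark above.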


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep-learning-based methods have emerged as one of the most effective and practical solutions for a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step towards early diagnosis of many heart dysfunctions is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep-learning-based predictive model design for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, since the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty using a Bayesian approximation method, without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature-engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, achieving multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We further demonstrate the method's performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of recurrent neural networks.
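The abstract does not name the approximation, but a Bayesian approximation that adds no parameters and leaves the architecture unchanged is commonly realized as Monte Carlo dropout; a minimal PyTorch sketch under that assumption (the toy network below is illustrative, not the authors' model):

```python
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Toy 1-D CNN with dropout; a stand-in for the paper's network,
    whose architecture is not reproduced here."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Dropout(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Keep dropout active at inference and average the softmax over
    repeated stochastic forward passes; the spread across passes is
    an uncertainty estimate."""
    model.train()  # enables dropout; safe here (no_grad, no optimizer)
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

model = ECGClassifier()
x = torch.randn(8, 1, 360)          # a batch of 8 single-lead beat windows
mean_p, std_p = mc_dropout_predict(model, x)
```

The per-class standard deviation across passes flags beats the model is unsure about, which is the kind of signal the abstract argues is vital for clinical decision-making.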


2021 ◽  
Vol 13 (14) ◽  
pp. 2848
Author(s):  
Hao Sun ◽  
Qian Xu

Obtaining large-scale, long-term, spatially continuous soil moisture (SM) data is crucial for climate change, hydrology, and water resource management. ESA CCI SM is such a large-scale, long-term SM dataset (now spanning more than 40 years). However, it contains data gaps, especially over China, due to limitations in the remote sensing of SM such as complex topography, human-induced radio frequency interference (RFI), and vegetation disturbances. These gaps prevent the CCI SM data from being spatially continuous, which motivates the study of gap-filling methods. To develop suitable methods for filling the gaps of CCI SM over the whole of China, we compared typical Machine Learning (ML) methods, namely the Random Forest (RF), Feedforward Neural Network (FNN), and Generalized Linear Model (GLM) methods, with a geostatistical method, Ordinary Kriging (OK). More than 30 years of passive–active combined CCI SM from 1982 to 2018 and other biophysical variables, such as the Normalized Difference Vegetation Index (NDVI), precipitation, air temperature, a Digital Elevation Model (DEM), soil type, and in situ SM from the International Soil Moisture Network (ISMN), were utilized. Results indicated that: (1) gaps in CCI SM are frequent in China, found not only in cold seasons and areas but also in warm seasons and areas; the ratio of gap pixels to all pixels can exceed 80%, with an average of around 40%. (2) ML methods can fill all the gaps of CCI SM; among them, RF performed best in fitting the relationship between CCI SM and the biophysical variables. (3) Over simulated gap areas, RF performed comparably to OK, and both greatly outperformed the FNN and GLM methods. (4) Over in situ SM networks, RF achieved better performance than OK. (5) We also explored various gap-filling strategies; the strategy of constructing a monthly model with one RF simulating the monthly average SM and another RF simulating the monthly SM disturbance achieved the best performance. This strategy, combined with an ML method such as RF, is therefore suggested for filling the gaps of CCI SM in China.
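The best-performing strategy (one RF for the monthly average SM, another for the monthly disturbance) can be sketched with scikit-learn as follows; the column names and covariate set are illustrative assumptions, not the paper's exact feature list:

```python
from sklearn.ensemble import RandomForestRegressor

# Assumed covariate columns; soil_type is taken as numerically coded.
COVARIATES = ["ndvi", "precip", "air_temp", "dem", "soil_type"]

def fit_monthly_gap_filler(df):
    """df: one month's pixels with COVARIATES, 'sm' (NaN in gaps) and
    'sm_avg' (that pixel's long-term average SM for this calendar month).
    Fits RF #1 for the monthly average SM and RF #2 for the disturbance
    sm - sm_avg, mirroring the paper's best strategy."""
    obs = df.dropna(subset=["sm"])
    rf_avg = RandomForestRegressor(n_estimators=200, n_jobs=-1)
    rf_avg.fit(obs[COVARIATES], obs["sm_avg"])
    rf_dist = RandomForestRegressor(n_estimators=200, n_jobs=-1)
    rf_dist.fit(obs[COVARIATES], obs["sm"] - obs["sm_avg"])
    return rf_avg, rf_dist

def fill_gaps(df, rf_avg, rf_dist):
    """Predict average + disturbance at gap pixels and sum them."""
    gaps = df["sm"].isna()
    X = df.loc[gaps, COVARIATES]
    df.loc[gaps, "sm"] = rf_avg.predict(X) + rf_dist.predict(X)
    return df
```

Splitting the signal into a slowly varying monthly average and a residual disturbance lets each forest fit a simpler target, which is a plausible reading of why this strategy performed best.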


MRS Bulletin ◽  
2008 ◽  
Vol 33 (4) ◽  
pp. 389-395 ◽  
Author(s):  
Ralph E.H. Sims

Some forms of renewable energy have long contributed to electricity generation, whereas others are just emerging. For example, large-scale hydropower is a mature technology generating about 16% of global electricity, and many smaller scale systems are also being installed worldwide. Future opportunities to improve the technology are limited but include upgrading of existing plants to gain greater performance efficiencies and reduced maintenance. Geothermal energy, widely used for power generation and direct heat applications, is also mature, but new technologies could improve plant designs, extend their lifetimes, and improve reliability. By contrast, ocean energy is an emerging renewable energy technology. Design, development, and testing of a myriad of devices remain mainly in the research and development stage, with many opportunities for materials science to improve design and performance, reduce costly maintenance procedures, and extend plant operating lifetimes under the harsh marine environment.


2021 ◽  
pp. 004728752110247
Author(s):  
Vinh Bui ◽  
Ali Reza Alaei ◽  
Huy Quan Vu ◽  
Gang Li ◽  
Rob Law

Understanding and being able to measure, analyze, compare, and contrast the image of a tourism destination, also known as tourism destination image (TDI), is critical in tourism management and destination marketing. Although various methodologies have been developed, a consistent, reliable, and scalable method for measuring TDI is still unavailable. This study aims to address the challenge by proposing a framework for a holistic measure of TDI in four dimensions: popularity, sentiment, time, and location. A structural model for TDI measurement that covers various aspects of a tourism destination is developed. TDI is then measured by a comprehensive computational framework that can analyze complex textual and visual data on a large scale. A case study using more than 30,000 images and 10,000 comments in relation to three tourism destinations in Australia demonstrates the effectiveness of the proposed framework.
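Once each post has been scored, the four measurement dimensions reduce to a grouped aggregation over place and time; a toy pandas sketch with hypothetical columns (not the authors' framework):

```python
import pandas as pd

# Hypothetical per-post records after the text/image analysis stage.
posts = pd.DataFrame({
    "destination": ["Gold Coast", "Gold Coast", "Cairns"],
    "month":       ["2020-01", "2020-02", "2020-01"],
    "latitude":    [-28.0, -28.0, -16.9],   # location dimension
    "longitude":   [153.4, 153.4, 145.8],
    "sentiment":   [0.8, -0.2, 0.5],        # e.g. from a sentiment model
})

# Popularity = post volume; sentiment = mean score, per place and month.
# A spatial variant could group on binned latitude/longitude instead.
tdi = (posts.groupby(["destination", "month"])
            .agg(popularity=("sentiment", "size"),
                 sentiment=("sentiment", "mean"))
            .reset_index())
print(tdi)
```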


2021 ◽  
Author(s):  
Kor de Jong ◽  
Marc van Kreveld ◽  
Debabrata Panja ◽  
Oliver Schmitz ◽  
Derek Karssenberg

Data availability at the global scale is increasing exponentially. Although considerable challenges remain regarding the identification of model structure and parameters of continental-scale hydrological models, we will soon reach the situation in which global-scale models can be defined at very high resolutions, close to 100 m or less. One of the key challenges is how to make simulations of these ultra-high-resolution models tractable [1].

Our research contributes a model building framework that is specifically designed to distribute calculations over multiple cluster nodes. This framework enables domain experts like hydrologists to develop their own large-scale models, using a scripting language like Python, without the need to acquire the skills to develop low-level computer code for parallel and distributed computing.

We present the design and implementation of this software framework and illustrate its use with a prototype 100 m, 1 h continental-scale hydrological model. Our modelling framework ensures that any model built with it is parallelized. This is made possible by providing the model builder with a set of model building blocks, coded in such a manner that parallelization of calculations occurs within and across these building blocks, for any combination of building blocks. The modeller thus retains full flexibility without losing performance.

This breakthrough is made possible by applying a novel approach to the implementation of the model building framework, called asynchronous many-tasks, provided by the HPX C++ software library [3]. The code in the model building framework expresses spatial operations as large collections of interdependent tasks that can be executed efficiently on individual laptops as well as computer clusters [2]. Our framework currently includes the most essential operations for building large-scale hydrological models, including those for simulating transport of material through a flow direction network. By combining these operations, we rebuilt an existing 100 m, 1 h resolution model, thus far used for simulations of small catchments; this required limited coding, as we only had to replace the computational back end of the existing model. Runs at continental scale on a computer cluster show acceptable strong and weak scaling, a strong indication that global simulations at this resolution will, technically speaking, soon be possible.

Future work will focus on extending the set of modelling operations and adding scalable I/O, after which existing models that are currently limited in their ability to use the available computational resources can be ported to this new environment.

More information about our modelling framework is at https://lue.computationalgeography.org.

References

[1] M. Bierkens. Global hydrology 2015: State, trends, and directions. Water Resources Research, 51(7):4923–4947, 2015.
[2] K. de Jong, et al. An environmental modelling framework based on asynchronous many-tasks: scalability and usability. Submitted.
[3] H. Kaiser, et al. HPX - The C++ standard library for parallelism and concurrency. Journal of Open Source Software, 5(53):2352, 2020.
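Neither HPX nor the framework's building blocks are reproduced here, but the asynchronous many-task idea the abstract describes (per-partition tasks that start as soon as their input partitions resolve, for any chain of operations) can be mimicked in a few lines of Python with futures (an illustration only; the real framework is implemented in C++ on HPX):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

pool = ThreadPoolExecutor()

def make_partitioned(array, n_parts):
    """Represent an array as a list of futures, one per partition."""
    return [pool.submit(lambda p=p: p) for p in np.array_split(array, n_parts)]

def elementwise(op, *operands):
    """Apply `op` partition-by-partition. Each task waits only for the
    matching partitions of its inputs, so successive operations overlap
    across partitions instead of running in lock-step."""
    def task(parts):
        return op(*[p.result() for p in parts])
    return [pool.submit(task, parts) for parts in zip(*operands)]

# A tiny 'model': two chained operations on a partitioned field.
precip = make_partitioned(np.random.rand(1_000_000), 8)
et     = make_partitioned(np.random.rand(1_000_000), 8)
runoff  = elementwise(np.subtract, precip, et)        # starts per partition
clipped = elementwise(lambda a: np.maximum(a, 0.0), runoff)
result = np.concatenate([f.result() for f in clipped])
```

The point of the sketch is that the model script composes operations as if they were ordinary array calls, while the scheduling of per-partition tasks happens underneath, which is the flexibility-without-performance-loss property claimed above.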


2021 ◽  
Author(s):  
Natalia Vazaeva ◽  
Otto Chkhetiani ◽  
Michael Kurgansky

Polar lows (PLs) are important mesoscale (horizontal diameter up to 1000 km) maritime weather systems at high latitudes, forming poleward of the polar front. We consider possible prognostic criteria for PLs, in particular the kinematic helicity, a quadratic characteristic related to integral vortex formations, and the kinematic vorticity number (KVN). To calculate these characteristics we use reanalysis data and the results of numerical simulation with the WRF-ARW model (version 4.1) for PLs over the Nordic (Norwegian and Barents) seas. Experimental data are used for comparison.

Our estimate of helicity is based on the connection between the integral helicity (IH) in the Ekman layer and the geostrophic wind velocity, owing to the good correlation between IH and half the sum of the wind velocity squared. We chose IH averaged over a preselected area covering the locality of PL genesis; this area moved along with the centre of the PL during the numerical simulation.

The genesis of PLs can be divided into three stages: (i) an initial development stage, in which a number of small vortices appear in a shear zone; (ii) a late development stage, characterized by the merger of vortices; and (iii) a mature stage, in which only a single PL is present. Approximately one day before PL formation, a significant increase in helicity was observed. The average helicity bulk density of large-scale motions has values of 0.3–0.4 m s⁻². The local changes in helicity are adjacent to the front side of the PLs. The IH criterion described facilitates identification of the PL genesis area. For a more detailed analysis of PL genesis, it is recommended to apply the KVN, an additional indicator of PL size and intensity. At the moment of maximum PL intensity, the KVN can reach values of 12–14. A further advantage of the KVN is its clear change directly at the centre of the emerging PL, which allows the limits of the most intense part of the PL to be indicated precisely.

The main challenge is to make an operational forecast of PLs possible through the selection of prognostic integral characteristics sufficient for PL identification and for analysis of their size and intensity in a convenient, usable and understandable way. The criteria associated with vorticity and helicity reflect PL genesis and development quite clearly. At this time, this claim is only a hypothesis, which must be tested on a larger set of cases. Future work will need to extend these analyses to other active PL basins. It would also be interesting to compare the representation of PLs using other criteria. We intend to use our combined criteria as a precursor to a machine-learning-based PL identification procedure, where satellite image analysis and the capture of particular cloud patterns are currently applied in most cases; this would eliminate the time-consuming first stage of collecting data sets.

This work was supported by the Russian Science Foundation (project No. 19-17-00248).
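Both diagnostics are standard quantities: the helicity density is V · (∇ × V), and the KVN compares the rotation-rate and strain-rate parts of the velocity gradient tensor. A numpy sketch for a regular (z, y, x) grid is given below (an illustration of the definitions, not the authors' WRF post-processing):

```python
import numpy as np

def helicity_and_kvn(u, v, w, dx, dy, dz):
    """Pointwise helicity density h = V . curl(V) and kinematic
    vorticity number ||W|| / ||S||, where W and S are the antisymmetric
    and symmetric parts of the velocity gradient tensor. u, v, w are
    3-D arrays indexed (z, y, x)."""
    # np.gradient returns derivatives along axes (z, y, x) in order.
    grads = [np.gradient(f, dz, dy, dx) for f in (u, v, w)]
    # Velocity gradient tensor G[i, j] = d(V_i)/d(x_j), reordered to (x, y, z).
    G = np.array([[g[2], g[1], g[0]] for g in grads])
    S = 0.5 * (G + G.transpose(1, 0, 2, 3, 4))   # strain-rate tensor
    W = 0.5 * (G - G.transpose(1, 0, 2, 3, 4))   # rotation-rate tensor
    curl = np.array([G[2, 1] - G[1, 2],          # dw/dy - dv/dz
                     G[0, 2] - G[2, 0],          # du/dz - dw/dx
                     G[1, 0] - G[0, 1]])         # dv/dx - du/dy
    h = u * curl[0] + v * curl[1] + w * curl[2]
    # Frobenius norm ratio; tiny epsilon guards strain-free points.
    kvn = np.sqrt((W ** 2).sum(axis=(0, 1))
                  / ((S ** 2).sum(axis=(0, 1)) + 1e-30))
    return h, kvn
```

Integrating h vertically over the Ekman layer and averaging over the tracking area would then give an IH time series of the kind used above as a genesis precursor.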

