Geological Insights Gained from a Seismic Data Lake

Author(s):  
Karyna Rodriguez ◽  
Neil Hodgson

<p>Seismic data has been, and continues to be, the main tool for hydrocarbon exploration. Storing very large quantities of seismic data, making it easily accessible, and equipping it with machine learning functionality is the way forward for gaining regional and local understanding of petroleum systems. Seismic data has been made available as a streamed service through a web-based platform, allowing on-the-spot access to large datasets stored in the cloud. A data lake can be defined as a repository of transformed data used for tasks such as reporting, visualization, advanced analytics and machine learning. The global library of data has been deconstructed from the rigid flat-file format traditionally associated with seismic data and transformed into a distributed, scalable, big data store. This allows for rapid access, complex queries and efficient use of computing power, fundamental criteria for enabling Big Data technologies such as deep learning.</p><p>This data lake concept is already changing the way we access seismic data, enhancing the efficiency of gaining insights into any hydrocarbon basin. Examples include the identification of potentially prolific mixed turbidite/contourite systems in the Trujillo Basin offshore Peru, together with the important implications of BSR-derived geothermal gradients, which are much higher than expected in a fore-arc setting, opening new exploration opportunities. Another example is the de-risking and ranking of offshore Malvinas Basin blocks by gaining new insights into areas until very recently considered non-prospective. Further de-risking was achieved by carrying out an in-depth source rock analysis in the Malvinas and conjugate southern South African basins. Additionally, the data lake enabled the development of machine learning algorithms for channel recognition, which were successfully applied to data offshore Australia and Norway.</p><p>“On demand” regional seismic dataset access is proving invaluable in our efforts to make hydrocarbon exploration more efficient and successful. Machine learning algorithms are helping to automate the more mechanical tasks, leaving time for the more valuable task of analysing the results. The geological insights gained by combining these two aspects confirm the value of seismic data lakes.</p>
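The abstract's core claim is that deconstructing a flat seismic file into a distributed, chunked store is what enables rapid access and complex queries. A minimal sketch of that idea, with all names and sizes purely illustrative (a real cloud store would hold chunks as compressed objects, not an in-memory dict):

```python
import numpy as np

# Illustrative sketch: a seismic volume split into fixed-size chunks, so an
# arbitrary subvolume can be read by touching only the overlapping chunks
# rather than streaming the whole flat file. CHUNK and the dict store are
# stand-ins for the object keys and blob storage of a real data lake.
CHUNK = 64

def chunk_volume(volume, chunk=CHUNK):
    """Split a 3-D numpy volume into a dict keyed by chunk-grid indices."""
    store = {}
    nz, ny, nx = volume.shape
    for i in range(0, nz, chunk):
        for j in range(0, ny, chunk):
            for k in range(0, nx, chunk):
                store[(i // chunk, j // chunk, k // chunk)] = \
                    volume[i:i + chunk, j:j + chunk, k:k + chunk]
    return store

def read_subvolume(store, z0, z1, y0, y1, x0, x1, chunk=CHUNK):
    """Assemble a subvolume sample by sample from only the chunks it spans."""
    out = np.empty((z1 - z0, y1 - y0, x1 - x0), dtype=np.float32)
    for z in range(z0, z1):
        for y in range(y0, y1):
            for x in range(x0, x1):
                block = store[(z // chunk, y // chunk, x // chunk)]
                out[z - z0, y - y0, x - x0] = block[z % chunk, y % chunk, x % chunk]
    return out

vol = np.arange(128 ** 3, dtype=np.float32).reshape(128, 128, 128)
store = chunk_volume(vol)                      # 2x2x2 grid of 64^3 chunks
sub = read_subvolume(store, 60, 70, 60, 70, 60, 70)
```

The point of the layout is that the query above reads at most 8 of the 8 chunks' worth of data rather than the full 128³ volume; in a cloud store each chunk fetch is an independent, parallelizable object read.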

2017 ◽  
Vol 47 (10) ◽  
pp. 2625-2626 ◽  
Author(s):  
Fuchun Sun ◽  
Guang-Bin Huang ◽  
Q. M. Jonathan Wu ◽  
Shiji Song ◽  
Donald C. Wunsch II

Author(s):  
C.S.R. Prabhu ◽  
Aneesh Sreevallabh Chivukula ◽  
Aditya Mogadala ◽  
Rohit Ghosh ◽  
L.M. Jenila Livingston

Author(s):  
Manjunath Thimmasandra Narayanapppa ◽  
T. P. Puneeth Kumar ◽  
Ravindra S. Hegadi

Recent technological advancements have led to the generation of huge volumes of data from distinct domains (scientific sensors, health care, user-generated content, financial companies, and internet and supply-chain systems) over the past decade. To capture the meaning of this emerging trend, the term big data was coined. In addition to its huge volume, big data also exhibits several unique characteristics compared with traditional data. For instance, big data is generally unstructured and requires more real-time analysis. This development calls for new system platforms for data acquisition, storage, transmission, and large-scale data processing. In recent years, the analytics industry's interest has been expanding towards big data analytics to uncover the potential concealed in big data, such as hidden patterns or unknown correlations. The main goal of this chapter is to explore the importance of machine learning algorithms and the computational environment, including the hardware and software required to perform analytics on big data.
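The "large-scale data processing mechanisms" the chapter refers to are typically built on the map-reduce pattern: independent workers process partitions locally, and their partial results are merged. A toy sketch of that pattern, with the partitions and tokens purely illustrative:

```python
from collections import Counter
from functools import reduce

# Sketch of the map-reduce pattern behind big-data processing platforms.
# Each "worker" counts items in its own partition independently (map),
# then partial counts are merged pairwise into a global result (reduce).
def map_phase(partition):
    return Counter(partition)

def reduce_phase(a, b):
    return a + b

# Illustrative partitions of an unstructured event stream.
partitions = [
    ["sensor", "clinic", "sensor"],
    ["finance", "sensor"],
    ["clinic", "finance", "finance"],
]
total = reduce(reduce_phase, (map_phase(p) for p in partitions))
```

Because the map phase never looks outside its partition, it scales out to as many machines as there are partitions; only the much smaller partial counts travel over the network.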


Author(s):  
Qifang Bi ◽  
Katherine E Goodman ◽  
Joshua Kaminsky ◽  
Justin Lessler

Abstract: Machine learning is a branch of computer science that has the potential to transform epidemiologic sciences. Amid a growing focus on “Big Data,” it offers epidemiologists new tools to tackle problems for which classical methods are not well-suited. In order to critically evaluate the value of integrating machine learning algorithms and existing methods, however, it is essential to address language and technical barriers between the two fields that can make it difficult for epidemiologists to read and assess machine learning studies. Here, we provide an overview of the concepts and terminology used in machine learning literature, which encompasses a diverse set of tools with goals ranging from prediction to classification to clustering. We provide a brief introduction to 5 common machine learning algorithms and 4 ensemble-based approaches. We then summarize epidemiologic applications of machine learning techniques in the published literature. We recommend approaches to incorporate machine learning in epidemiologic research and discuss opportunities and challenges for integrating machine learning and existing epidemiologic research methods.
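One of the ensemble-based approaches such reviews typically cover is majority voting: combining several weak classifiers whose errors partially cancel. A toy sketch of the idea, where the "classifiers" are hypothetical threshold rules rather than any model from the review:

```python
# Illustrative majority-voting ensemble. Each base learner is a decision
# stump that votes 1 when a single feature exceeds its threshold; the
# ensemble returns the majority vote. Thresholds are arbitrary examples.
def make_stump(threshold):
    return lambda x: 1 if x > threshold else 0

stumps = [make_stump(t) for t in (0.3, 0.5, 0.7)]

def ensemble_predict(x):
    votes = sum(stump(x) for stump in stumps)
    return 1 if votes > len(stumps) / 2 else 0
```

For an input of 0.6, two of the three stumps vote 1, so the ensemble predicts 1 even though the 0.7-threshold stump disagrees; this vote-pooling is what makes ensembles more robust than any single base learner.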


2019 ◽  
Vol 24 (34) ◽  
pp. 3998-4006
Author(s):  
Shijie Fan ◽  
Yu Chen ◽  
Cheng Luo ◽  
Fanwang Meng

Background: On a tide of big data, machine learning is coming into its own. Given the huge amounts of epigenetic data coming from biological experiments and the clinic, machine learning can help in detecting epigenetic features in the genome, finding correlations between phenotypes and modifications in histones or genes, accelerating the screening of lead compounds targeting epigenetic diseases, and many other aspects of the study of epigenetics, consequently realizing the hope of precision medicine.
Methods: In this minireview, we focus on the fundamentals and applications of the machine learning methods regularly used in the epigenetics field and explain their features. Their advantages and disadvantages are also discussed.
Results: Machine learning algorithms have accelerated studies in precision medicine targeting epigenetic diseases.
Conclusion: In order to make full use of machine learning algorithms, one should become familiar with their pros and cons, which will allow big data to be exploited by choosing the most suitable method(s).


2018 ◽  
Vol 37 (6) ◽  
pp. 451-461 ◽  
Author(s):  
Zhen Wang ◽  
Haibin Di ◽  
Muhammad Amir Shafiq ◽  
Yazeed Alaudah ◽  
Ghassan AlRegib

As a process that identifies geologic structures of interest, such as faults, salt domes, or elements of petroleum systems in general, seismic structural interpretation depends heavily on the domain knowledge and experience of interpreters, as well as on visual cues of geologic structures such as texture and geometry. With the dramatic increase in the size of seismic data acquired for hydrocarbon exploration, structural interpretation has become more time consuming and labor intensive. By treating seismic data as images rather than signal traces, researchers have been able to utilize advanced image-processing and machine-learning techniques to assist interpretation directly. In this paper, we focus mainly on the interpretation of two important geologic structures, faults and salt domes, and summarize interpretation workflows based on typical or advanced image-processing and machine-learning algorithms. In recent years, increasing computational power and the massive amount of available data have led to the rise of deep learning. Deep-learning models that simulate the human brain's biological neural networks can achieve state-of-the-art accuracy and even exceed human-level performance in numerous applications. The convolutional neural network, a form of deep-learning model that is effective in analyzing visual imagery, has been applied in fault and salt dome interpretation. At the end of this review, we provide insight and discussion on the future of structural interpretation.
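The review's central idea, treating seismic sections as images so that convolution filters can pick out structural discontinuities, can be sketched with a single hand-set filter. The section and kernel below are synthetic stand-ins; a real interpretation CNN learns many such filters from labeled data rather than using a fixed gradient kernel:

```python
import numpy as np

# Toy illustration of the convolution building block of a CNN applied to a
# seismic "image": a horizontal-gradient filter responds strongly at a
# sharp lateral amplitude break, the kind of visual cue a fault produces.
section = np.zeros((8, 8))
section[:, 4:] = 1.0                       # amplitude break at column 4

edge_filter = np.array([[-1.0, 0.0, 1.0]])  # 1x3 horizontal-gradient kernel

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation, no padding."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

response = convolve2d_valid(section, edge_filter)
# The filter output is zero over flat regions and peaks at the break,
# which is exactly the localization behavior a fault-detection CNN learns.
```

Stacking many learned filters of this kind, interleaved with nonlinearities and pooling, is what lets the networks described in the paper localize faults and salt-dome boundaries across whole volumes.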

