Big data analytics—lessons learnt from global E&P operators

2015 ◽  
Vol 55 (2) ◽  
pp. 409
Author(s):  
Kevin Kalish

Exploration and production operators are striving to unlock the hidden knowledge in their key asset: data. Real-time data from intelligent wells supplements historical interpretations and previously generated datasets. Gaining insight from these multiple datasets is paramount, enabling engineers and stakeholders to make faster and more accurate decisions under uncertainty. By combining traditional deterministic, interpretive workflows with a data-driven, probabilistic set of analyses, it is possible to predict events that result in poor reservoir or well performance, or in facility failures. By building predictive models on cleansed historical data and applying them to real-time data streams, it is now feasible to optimise production. Controlling costs and ensuring efficient processes that positively affect health, safety, environment and resource usage are key benefits that fall out of these analytical methodologies. This extended abstract provides recent examples of global exploration and production operators using an analytics oilfield framework to: improve data quality by integrating relevant sources from multiple monitoring and surveillance systems across the geology, geophysics and reservoir engineering (GGRE) disciplines into a unified view; predict unplanned events so that mitigation can be planned in advance; use predictive models to avoid frequent, unnecessary preventive maintenance that interferes with production schedules, strains maintenance staff and increases costs; and increase decision support across disparate upstream disciplines by using data mining to create accurate predictive and descriptive models.
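The core idea in this abstract — fit a model on cleansed historical data, then score a real-time stream against it to flag events early — can be sketched minimally. This is an illustrative example only: the thresholding rule, the function names, and the sample readings are invented, not taken from the operators' actual framework.

```python
# Hypothetical sketch: fit a simple statistical baseline on cleansed
# historical sensor readings, then flag real-time readings that
# deviate enough to predict poor performance or a failure event.
from statistics import mean, stdev

def train_threshold(historical, k=3.0):
    """Return (baseline mean, deviation limit) from historical data."""
    mu = mean(historical)
    limit = k * stdev(historical)
    return mu, limit

def score_stream(stream, mu, limit):
    """Flag (timestamp, reading) pairs that exceed the deviation limit."""
    return [(t, x) for t, x in stream if abs(x - mu) > limit]

history = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0]
mu, limit = train_threshold(history)
live = [(1, 100.2), (2, 100.4), (3, 87.0), (4, 99.9)]
alerts = score_stream(live, mu, limit)
# alerts contains the reading at t=3, an early warning worth investigating
```

A production system would replace this z-score rule with the probabilistic models the abstract describes, but the train-on-history, score-in-stream loop is the same.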

2013 ◽  
Author(s):  
Orvel Lynn Rowlan ◽  
James N. McCoy ◽  
Dieter Joseph Becker ◽  
Kay Stefan Capps ◽  
A. L. Podio

2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Woochul Kang ◽  
Jaeyong Chung

With the ubiquitous deployment of sensors and network connectivity, the amount of real-time data handled by embedded systems is increasing rapidly, and many embedded systems now require database capability for systematic management of real-time data. In such systems, supporting the timeliness of tasks that access databases is an important problem. However, recent multicore-based embedded architectures pose a significant challenge for such data-intensive real-time tasks, since the response time of data accesses can be significantly affected by potential intercore interference. In this paper, we propose a novel feedback control scheme that supports the timeliness of data-intensive tasks against unpredictable intercore interference. In particular, we use a multiple-input/multiple-output (MIMO) control method that exploits multiple control knobs, such as CPU frequency and Quality-of-Data (QoD), to handle highly unpredictable workloads in multicore systems. Experimental results from an actual implementation show that the proposed approach achieves its target Quality-of-Service (QoS) goals, such as task timeliness and QoD, while consuming less energy than baseline approaches.
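The multi-knob feedback idea can be sketched as a single control step. The gains, ranges, and the proportional rule below are invented for demonstration; the paper's actual MIMO controller is model-based, but the direction of actuation is the same: more misses push frequency up and data quality down, and slack lets both relax to save energy.

```python
# Illustrative sketch of a MIMO feedback step: two control knobs
# (normalized CPU frequency and Quality-of-Data) track a target
# deadline-miss ratio under unpredictable intercore interference.

def mimo_step(miss_ratio, target, freq, qod,
              k_freq=2.0, k_qod=1.0,
              freq_range=(0.4, 1.0), qod_range=(0.2, 1.0)):
    """One control period: raise frequency and lower QoD when tasks
    miss deadlines; relax both (saving energy) when they do not."""
    error = miss_ratio - target          # positive => too many misses
    freq = min(max(freq + k_freq * error, freq_range[0]), freq_range[1])
    qod = min(max(qod - k_qod * error, qod_range[0]), qod_range[1])
    return freq, qod

freq, qod = 0.6, 1.0
freq, qod = mimo_step(miss_ratio=0.15, target=0.05, freq=freq, qod=qod)
# with 15% misses against a 5% target, the controller raises frequency
# and trades away some data freshness
```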


Author(s):  
Muhammad Febrian Rachmadhan Amri ◽  
I Made Sukarsa ◽  
I Ketut Adi Purnawan

In the online business era, transactions occur so quickly that the information stored in a data warehouse rapidly becomes invalid. Companies therefore need a robust, real-time system capable of loading data into a repository residing on a different host in near-real time. A data warehouse serves as a repository of data that is subject-oriented, integrated, time-variant, and non-volatile. With these properties, a data warehouse can be managed in real time by employing Change Data Capture. Change Data Capture (CDC) is a technique that can be used to build real-time data warehousing (RTDW). The binary-log approach to change data capture records every data-manipulation activity that occurs at the OLTP level and reprocesses it before it is stored in the data warehouse (the loading process). This improves the quality of data management and yields correct information, because the available information is always up to date. Testing shows that the binary-log approach to Change Data Capture (BinlogCDC) is able to deliver real-time data management, valid current information, dynamic communication between systems, and data management without losing any information from data manipulation.
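The essence of binlog-based CDC is replaying captured row-change events into the warehouse copy instead of re-extracting whole tables. The event format below is invented for illustration; a real BinlogCDC implementation would parse the actual MySQL binary log rather than a list of dictionaries.

```python
# Minimal sketch of the BinlogCDC idea: row-change events captured at
# the OLTP side are replayed into the warehouse so it stays current.

binlog = [
    {"op": "INSERT", "key": 1, "row": {"item": "ore", "qty": 10}},
    {"op": "UPDATE", "key": 1, "row": {"item": "ore", "qty": 12}},
    {"op": "INSERT", "key": 2, "row": {"item": "gas", "qty": 5}},
    {"op": "DELETE", "key": 2, "row": None},
]

def apply_changes(warehouse, events):
    """Replay captured change events against the warehouse copy,
    preserving every manipulation in order."""
    for ev in events:
        if ev["op"] == "DELETE":
            warehouse.pop(ev["key"], None)
        else:                      # INSERT and UPDATE are both upserts
            warehouse[ev["key"]] = ev["row"]
    return warehouse

warehouse = apply_changes({}, binlog)
# warehouse now holds only key 1, with the updated qty of 12
```

Because events are applied in log order, the warehouse converges to the same state as the OLTP source without a bulk reload.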


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2994 ◽  
Author(s):  
Bhagya Silva ◽  
Murad Khan ◽  
Changsu Jung ◽  
Jihun Seo ◽  
Diyan Muhammad ◽  
...  

The Internet of Things (IoT), inspired by the tremendous growth of connected heterogeneous devices, has pioneered the notion of the smart city. The various components integrated within a smart city architecture, i.e., smart transportation, smart community, smart healthcare, smart grid, etc., aim to enrich the quality of life (QoL) of urban citizens. However, real-time processing requirements and exponential data growth hinder smart city realization. Therefore, we propose a Big Data analytics (BDA)-embedded experimental architecture for smart cities. The BDA-embedded smart city serves two major purposes. Firstly, it facilitates the exploitation of urban Big Data (UBD) in planning, designing, and maintaining smart cities. Secondly, it employs BDA to manage and process voluminous UBD to enhance the quality of urban services. The three tiers of the proposed architecture are responsible for data aggregation, real-time data management, and service provisioning. Moreover, offline and online data processing tasks are further expedited by integrating data normalizing and data filtering techniques into the proposed work. By analyzing authenticated datasets, we obtained the threshold values required for urban planning and city operation management. Performance metrics for online and offline data processing on the proposed dual-node Hadoop cluster were obtained using the aforementioned authentic datasets. Throughput and processing-time analyses against existing works demonstrate the performance superiority of the proposed work. Hence, we can claim the applicability and reliability of implementing the proposed BDA-embedded smart city architecture in the real world.
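The normalizing and filtering stage mentioned for the processing tiers can be sketched simply: drop out-of-range records, then min-max scale what remains before batch or stream processing. The field name and the sample values are illustrative assumptions, not drawn from the paper's datasets.

```python
# Hedged sketch of a data filtering + normalizing stage for urban
# sensor readings ahead of offline (batch) or online (stream) BDA.

def filter_valid(records, field, lo, hi):
    """Data filtering: keep records whose field is in the valid range."""
    return [r for r in records if lo <= r[field] <= hi]

def normalize(values):
    """Data normalizing: min-max scale readings to [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1            # avoid division by zero
    return [(v - lo) / span for v in values]

readings = [{"pm25": 40}, {"pm25": 900}, {"pm25": 60}]  # 900: sensor glitch
clean = filter_valid(readings, "pm25", 0, 500)
scaled = normalize([r["pm25"] for r in clean])
# two records survive filtering; their scaled values span [0, 1]
```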


2021 ◽  
Vol 13 (0203) ◽  
pp. 78-81
Author(s):  
Ashish P. Joshi ◽  
Biraj V. Patel

Models and patterns for real-time data mining play an important role in decision making. Meaningful real-time data mining depends fundamentally on the quality of the data, whereas the data available in a warehouse is often raw: it may arrive in any format, it may be huge, and it may be unstructured. Such data requires processing to improve the efficiency of data analysis, and the process of making it ready for use is called data preprocessing. Data preprocessing comprises many activities, such as data transformation, data cleaning, data integration, data optimization and data conversion, which together convert raw data into quality data. Data preprocessing techniques are a vital step in data mining: the analysis results are only as good as the data quality. This paper surveys the different data preprocessing techniques that can be used to prepare quality data for analysis from the available raw data.
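Three of the preprocessing activities named above (cleaning, transformation, integration) can be shown as a tiny pipeline. The column names, sample rows, and helper functions are invented for the example; real preprocessing would use a proper ETL tool or a library such as pandas.

```python
# Illustrative preprocessing pipeline: clean -> transform -> integrate.

def clean(rows, column, default):
    """Data cleaning: fill missing values in a column with a default."""
    return [{**r, column: r.get(column) or default} for r in rows]

def transform(rows, column, fn):
    """Data transformation: apply a type/format conversion to a column."""
    return [{**r, column: fn(r[column])} for r in rows]

def integrate(left, right, key):
    """Data integration: merge two sources on a shared key."""
    index = {r[key]: r for r in right}
    return [{**l, **index.get(l[key], {})} for l in left]

sales = [{"id": 1, "amount": "10"}, {"id": 2, "amount": None}]
regions = [{"id": 1, "region": "west"}, {"id": 2, "region": "east"}]

rows = clean(sales, "amount", "0")      # missing amount becomes "0"
rows = transform(rows, "amount", int)   # strings become integers
rows = integrate(rows, regions, "id")   # region data joined in
```

Each step leaves the rows closer to the "quality data" the analysis requires; the order matters, since transformation assumes cleaning has removed missing values.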


Author(s):  
Mpoki Mwabukusi ◽  
Esron D. Karimuribo ◽  
Mark M. Rweyemamu ◽  
Eric Beda

A paper-based disease reporting system is associated with a number of challenges. These include difficulties in submitting hard copies of disease surveillance forms because of poor road infrastructure, weather conditions or challenging terrain, particularly in developing countries. The system also demands re-entry of the data at data processing and analysis points, making it prone to the introduction of errors. All these challenges contribute to delayed acquisition of, processing of and response to disease events occurring in remote, hard-to-reach areas. Our study piloted the use of mobile phones to transmit near-real-time data from remote districts in Tanzania (Ngorongoro and Ngara), Burundi (Muyinga) and Zambia (Kazungula and Sesheke). Two technologies, namely digital and short messaging services, were used to capture and transmit disease-event data in the animal and human health sectors in the study areas, based on a server–client model. Smartphones running the Android operating system (minimum required version: Android 1.6) and supporting the open-source application Epicollect, as well as the Open Data Kit application, were used in the study. These phones allowed the collection of geo-tagged data, with the opportunity to include static and moving images related to disease events. The project supported routine disease surveillance systems in the ministries responsible for animal and human health in Burundi, Tanzania and Zambia, as well as data collection for researchers at the Sokoine University of Agriculture, Tanzania. During the project implementation period between 2011 and 2013, a total of 1651 disease event-related forms were submitted, allowing reporters to include GPS coordinates and photographs related to the events captured.
It was concluded that the new technology-based surveillance system is useful in providing near-real-time data, with the potential to enhance timely response in remote rural areas of Africa. We recommend the adoption of these proven technologies to improve disease surveillance, particularly in developing countries.
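In the server–client model described above, each phone submits a structured, geo-tagged form. A hypothetical sketch of such a payload is below; the field names follow common mobile data-collection conventions (Epicollect/ODK style) and are not the study's exact form fields.

```python
# Hypothetical geo-tagged disease-event record as a client phone
# might submit it to the central server.
import json

def build_report(district, disease, cases, lat, lon, photo=None):
    """Assemble one surveillance form as a JSON payload."""
    record = {
        "district": district,
        "disease": disease,
        "cases": cases,
        "gps": {"lat": lat, "lon": lon},
        "photo": photo,            # optional image reference
    }
    return json.dumps(record)

payload = build_report("Ngorongoro", "RVF", 3, -3.24, 35.49)
# the server side parses the payload and appends it to the event database
report = json.loads(payload)
```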


2011 ◽  
Author(s):  
Ahmed Saleh Al-nuaim ◽  
Gary M. Williamson ◽  
Marwan M. Labban ◽  
Keith Richard Holdaway ◽  
Steffen Krug

IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 24510-24520 ◽  
Author(s):  
Sohail Jabbar ◽  
Kaleem R. Malik ◽  
Mudassar Ahmad ◽  
Omar Aldabbas ◽  
Muhammad Asif ◽  
...  
