A qualitative study of the impact of random shale barriers on SAGD performance using data analytics and machine learning

Author(s):  
Ashish Kumar ◽  
Hassan Hassanzadeh
2021 ◽  
Vol 9 (2) ◽  
pp. 1-19


Author(s):  
Lawrence A. Gordon

The objective of this paper is to assess the impact of data analytics (DA) and machine learning (ML) on accounting research.[1] As discussed in the paper, the inherent inductive nature of DA and ML is creating an important trend in the way accounting research is being conducted. That trend is the increasing utilization of inductive-based research among accounting researchers. Indeed, as a result of the recent developments with DA and ML, a rebalancing is taking place between inductive-based and deductive-based research in accounting.[2] In essence, we are witnessing the resurrection of inductive-based accounting research. A brief review of some empirical evidence to support the above argument is also provided in the paper.   


Author(s):  
Ali Al-Ramini ◽  
Mohammad A Takallou ◽  
Daniel P Piatkowski ◽  
Fadi Alsaleem

Most cities in the United States lack comprehensive or connected bicycle infrastructure; therefore, inexpensive and easy-to-implement solutions for connecting existing bicycle infrastructure are increasingly being employed. Signage is one promising solution. However, the data needed to evaluate its effect on cycling ridership are lacking. To overcome this challenge, this study tests the potential of using readily available crowdsourced data in concert with machine-learning methods to provide insight into the effectiveness of signage interventions. We do this by assessing a natural experiment to identify the potential effects of adding or replacing signage within existing bicycle infrastructure in 2019 in the city of Omaha, Nebraska. Specifically, we first visually compare cycling traffic changes in 2019 to those from the previous two years (2017–2018) using data extracted from the Strava fitness app. Then, we use a new three-step machine-learning approach to quantify the impact of signage while controlling for weather, demographics, and street characteristics. The steps are as follows: Step 1 (modeling and validation): build and train a model from the available 2017 crowdsourced data (i.e., Strava, Census, and weather) that accurately predicts the cycling traffic data for any street within the study area in 2018; Step 2 (prediction): use the model from Step 1 to predict bicycle traffic in 2019 while assuming no new signage was added; Step 3 (impact evaluation): use the difference between the predicted and actual traffic in 2019 as evidence of the likely impact of signage. While our work does not demonstrate causality, it does demonstrate an inexpensive method, using readily available data, to identify changing trends in bicycling over the same period that new infrastructure investments are being made.
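The three-step logic can be sketched in a few lines. This is a minimal illustration with invented street-level counts and a single-predictor least-squares model; the study itself used Strava, Census, and weather features, and all numbers below are hypothetical.

```python
# Minimal sketch of the three-step approach with invented street-level counts.
from statistics import mean

def fit_linear(x, y):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Step 1 (modeling and validation): train on 2017 counts to predict 2018.
counts_2017 = [120, 80, 200, 150, 60]
counts_2018 = [130, 85, 210, 160, 70]
a, b = fit_linear(counts_2017, counts_2018)

# Step 2 (prediction): predict 2019 traffic assuming no new signage was added.
counts_2019_actual = [150, 90, 215, 190, 72]   # suppose streets 0 and 3 got signage
predicted_2019 = [a + b * c for c in counts_2018]

# Step 3 (impact evaluation): the gap between actual and predicted counts
# is taken as evidence of the likely signage effect.
impact = [actual - pred for actual, pred in zip(counts_2019_actual, predicted_2019)]
print([round(v, 1) for v in impact])
```

With these toy numbers, the two "signage" streets show a positive residual and the others do not, which is exactly the pattern the paper reads as a likely intervention effect.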


2020 ◽  
Author(s):  
Ahmed Tageldin ◽  
Dalia Adly ◽  
Hassan Mostafa ◽  
Haitham S Mohammed

The use of technology in agriculture has grown in recent years, with the era of data analytics affecting every industry. The main challenge in applying technology to agriculture is identifying effective big data analytics algorithms and how to apply them. Pest management is one of the most important problems facing farmers. The cotton leafworm, Spodoptera littoralis (Boisd.) (CLW), is a major polyphagous key pest attacking plants, with 73 host species recorded in Egypt. In the present study, several machine learning algorithms were implemented to predict plant infestation with CLW. CLW moth data were collected weekly for two years in a commercial hydroponic greenhouse. Furthermore, among other features, temperature and relative humidity were recorded over the total period of the study. XGBoost proved to be the most effective algorithm applied in this study, achieving a prediction accuracy of 84%. The impact of the environmental features on prediction accuracy was compared across features to ensure a complete dataset for future results. In conclusion, the present study provides a framework for applying machine learning to the prediction of plant infestation with CLW in greenhouses. Based on this framework, further studies with continuous measurements are warranted to achieve greater accuracy.
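The framework itself (weekly environmental features in, an infestation label out, accuracy as the metric for comparing features) can be illustrated with a toy stand-in classifier. The paper used XGBoost; here a single decision stump stands in so the sketch stays dependency-free, and all data are invented.

```python
# Toy illustration of the framework: weekly (temperature, humidity) features
# predicting CLW infestation (1) or not (0). A single decision stump stands
# in for XGBoost, and every number below is invented.
def best_stump(X, y, feature):
    """Exhaustive search for the threshold on one feature maximizing accuracy."""
    best = (0.0, None)
    for t in sorted({row[feature] for row in X}):
        preds = [1 if row[feature] >= t else 0 for row in X]
        acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
        if acc > best[0]:
            best = (acc, t)
    return best

# Hypothetical weekly records: (temperature in deg C, relative humidity in %).
X = [(22, 60), (25, 65), (30, 70), (31, 72), (28, 68), (20, 55), (33, 75), (26, 69)]
y = [0, 0, 1, 1, 1, 0, 1, 0]

acc_temp, t_temp = best_stump(X, y, 0)   # temperature-only model
acc_hum, t_hum = best_stump(X, y, 1)     # humidity-only model
print(f"temperature threshold {t_temp}: accuracy {acc_temp:.0%}")
print(f"humidity threshold {t_hum}: accuracy {acc_hum:.0%}")
```

Comparing the two per-feature accuracies mirrors the paper's comparison of environmental features by their contribution to prediction accuracy.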


Author(s):  
Namratha Birudaraju ◽  
Adiraju Prasanth Rao ◽  
Sathiyamoorthi V.

The main steps in agricultural practice include soil preparation, sowing, adding manure, irrigation, harvesting, and storage. Supporting these steps calls for modern tools and technologies that can improve production efficiency and product quality, schedule and monitor crops, and guide fertilizer spraying and planting, helping farmers choose suitable crops. Efficient techniques are used to analyze huge amounts of data and provide real-time information about emerging trends. Facilities like fertilizer requirement notifications, predictions of wind direction, and satellite-based monitoring are sources of data. Analytics can enable farmers to make decisions based on data. This chapter reviews existing work to study the impact of big data on the analysis of agriculture. Analytics creates many opportunities for smart farming through hardware and software. The emerging ability to use analytic methods for development promises to transform the farming sector and facilitate poverty reduction, helping to deal with humanitarian crises and conflicts.


Author(s):  
Danielle D. Monteiro ◽  
Maria Machado Duque ◽  
Gabriela S. Chaves ◽  
Virgílio M. Ferreira Filho ◽  
Juliana S. Baioco

In general, flow measurement systems in production units report only the daily total production rates. As there is no precise control of the individual production of each well, the current well flow rates and their parameters are determined when production tests are conducted. Because production tests are performed periodically (e.g., once a month), information about the wells is limited and operational decisions are made using data that are not up to date. Meanwhile, well properties and parameters from the production tests are typically used in multiphase flow models to forecast the expected production. However, this is done deterministically, without considering the different sources of uncertainty in the production tests. This study aims to introduce uncertainties into oil flow rate forecasting. To do this, it is necessary to identify and quantify uncertainties from the data obtained in the production tests, consider them in production modeling, and propagate them by using multiphase flow simulation. This study comprises two main areas: data analytics and multiphase flow simulation. In data analytics, an algorithm is developed in R to analyze and treat the data from production tests. The most significant stochastic variables are identified, and data deviations are fitted to probability distributions with their respective parameters. Random values of the selected variables are then generated using Monte Carlo and Latin Hypercube Sampling (LHS) methods. In multiphase flow simulation, these possible values are used as input. By nodal analysis, the simulator output is a set of oil flow rate values with their intervals of occurrence probabilities. The methodology is applied using a representative Brazilian offshore field as a case study. The results show the significance of including uncertainties to achieve greater accuracy in the multiphase flow analysis of oil production.
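The sample-then-propagate idea can be sketched compactly. This is an illustrative stand-in, not the authors' R workflow: the uncertain variables, their distributions, and the simplified inflow relation below are all invented, with a single algebraic formula substituting for the multiphase nodal-analysis simulator.

```python
# Sketch of uncertainty propagation: Latin Hypercube Sampling of two
# invented production-test variables, pushed through a simplified inflow
# relation standing in for the nodal-analysis simulator.
import random
from statistics import NormalDist, mean

def lhs_normal(n, mu, sigma, rng):
    """n LHS samples from N(mu, sigma): one uniform draw per stratum, shuffled."""
    probs = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(probs)
    return [NormalDist(mu, sigma).inv_cdf(p) for p in probs]

rng = random.Random(42)
n = 1000
# Hypothetical uncertain inputs: productivity index and reservoir pressure.
pi = lhs_normal(n, mu=5.0, sigma=0.5, rng=rng)        # m3/d/bar
p_res = lhs_normal(n, mu=250.0, sigma=10.0, rng=rng)  # bar
p_wf = 180.0                                          # fixed bottomhole pressure, bar

# Simplified inflow relation q = PI * (p_res - p_wf) as a stand-in for the
# multiphase flow simulation; each sample yields one possible oil rate.
q = sorted(j * (p - p_wf) for j, p in zip(pi, p_res))
print(f"mean oil rate: {mean(q):.0f} m3/d")
print(f"P10-P90 interval: {q[int(0.1 * n)]:.0f} - {q[int(0.9 * n)]:.0f} m3/d")
```

The sorted sample is exactly the "set of oil flow rate values with their intervals of occurrence probabilities" the abstract describes: percentiles of the propagated distribution rather than a single deterministic forecast.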


2016 ◽  
Vol 11 (3) ◽  
pp. 162-171 ◽  
Author(s):  
Elisabeth Zabel ◽  
Grace Donegan ◽  
Kate Lawrence ◽  
Paul French

Purpose – Recovery Colleges strive to assist individuals in their journey of recovery and help organisations to become more recovery focused. The evidence base surrounding Recovery Colleges is still in its infancy and further research is required to investigate their effectiveness. The purpose of this paper is to explore the subjective experience of people involved with a Recovery College: "The Recovery Academy" based in Greater Manchester. Design/methodology/approach – A qualitative study using data collected from four focus groups of Recovery Academy students, who have lived experience of mental health problems, are health professionals, or are family members or carers. The data were analysed using thematic analysis. Findings – Four main themes emerged from discussing experiences of the Recovery Academy and its courses: ethos of the Recovery Academy; personal and organisational impact; value of co-production; and barriers to engagement and impact. The Recovery Academy can have a positive impact on the lives of students who attend the courses and offer benefits to the organisation in which it is run. Originality/value – Recovery Colleges are gaining considerable interest nationally. However, to date there is a paucity of research on Recovery Colleges. This is the first paper to be presented for publication specifically on the Recovery Academy. The findings of this study suggest Recovery Colleges have the potential to positively impact students and facilitate recovery-oriented organisational change. The findings can add valuable data to the emerging Recovery College evidence base.


2021 ◽  
Author(s):  
Harmanjot Singh Sandhu

Various machine learning-based methods and techniques are developed for forecasting day-ahead electricity prices and spikes in deregulated electricity markets. The wholesale electricity market in the Province of Ontario, Canada, one of the most volatile electricity markets in the world, is used as the case market to test and apply the methods developed. Factors affecting electricity prices and spikes are identified using a literature review, correlation tests, and data mining techniques. Forecasted prices can be utilized by market participants in deregulated electricity markets, including generators, consumers, and market operators. A novel methodology is developed to forecast day-ahead electricity prices and spikes. Prices are first predicted by a neural network called the base model, and the forecasted prices are classified into normal and spike prices using a threshold calculated from the previous year's prices. The base model is trained using information from similar days and similar price days for a selected number of training days. The spike prices are then re-forecasted by another neural network. Three spike-forecasting neural networks are created to test the impact of input features. The overall forecast is obtained by combining the results from the base model and a spike forecaster. Extensive numerical experiments are carried out using data from the Ontario electricity market, showing significant improvements in forecasting accuracy in terms of various error measures. The performance of the methodology is further enhanced by improving the base model and one of the spike forecasters. The base model is improved by using multi-set canonical correlation analysis (MCCA), a popular technique in data fusion, to select the optimal numbers of training days, similar days, and similar price days, and by numerical experiments to determine the optimal number of neurons in the hidden layer.
The spike forecaster is enhanced with additional inputs, including the predicted supply cushion mined from information publicly available in the Ontario electricity market's day-ahead System Status Report. The enhanced models are employed in numerical experiments using data from the Ontario electricity market, which demonstrate significant improvements in forecasting accuracy.
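The normal/spike split is the pivot of the methodology and is easy to sketch. The abstract does not state the exact threshold formula, so mean plus two standard deviations of the previous year's prices is used here as one plausible assumption, and all prices are invented.

```python
# Sketch of the normal/spike classification: a threshold derived from the
# previous year's prices (here mean + 2*std, an assumed formula) flags each
# forecasted price as normal or spike. All prices below are invented.
from statistics import mean, stdev

prices_last_year = [30, 32, 28, 35, 31, 29, 120, 33, 30, 34, 36, 27]  # $/MWh
threshold = mean(prices_last_year) + 2 * stdev(prices_last_year)

forecasted = [31, 29, 95, 33, 150, 30]   # hypothetical base-model outputs
labels = ["spike" if p > threshold else "normal" for p in forecasted]
print(f"threshold = {threshold:.1f} $/MWh")
print(list(zip(forecasted, labels)))
```

In the full methodology, the prices flagged as spikes would then be re-forecasted by the dedicated spike network before the two streams are recombined.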



