Presentation of a new hybrid approach for forecasting economic growth using artificial intelligence approaches

2019 ◽  
Vol 31 (12) ◽  
pp. 8661-8680 ◽  
Author(s):  
Mohsen Ahmadi ◽  
Saeid Jafarzadeh-Ghoushchi ◽  
Rahim Taghizadeh ◽  
Abbas Sharifi
Fuels ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 286-303
Author(s):  
Vuong Van Pham ◽  
Ebrahim Fathi ◽  
Fatemeh Belyadi

The success of machine learning (ML) techniques implemented in different industries relies heavily on operator expertise and domain knowledge, which are used to manually choose an algorithm and set its parameters for a given problem. Because model selection and parameter tuning are manual, the quality of this process cannot be quantified or evaluated, which in turn limits the ability to perform comparison studies between different algorithms. In this study, we propose a new hybrid approach for developing machine learning workflows that supports automated algorithm selection and hyperparameter optimization. The proposed approach provides a robust, reproducible, and unbiased workflow that can be quantified and validated using different scoring metrics. We have used the most common workflows implemented in the application of artificial intelligence (AI) and ML to engineering problems, including grid/random search, Bayesian search and optimization, and genetic programming, and compared them with our new hybrid approach, which integrates the Tree-based Pipeline Optimization Tool (TPOT) with Bayesian optimization. The performance of each workflow is quantified using different scoring metrics such as the Pearson correlation (R2) and the mean squared error (MSE). For this purpose, actual field data from 1567 gas wells in the Marcellus Shale, with 121 features covering reservoir, drilling, completion, stimulation, and operations, are tested using the different proposed workflows. The new hybrid workflow is then used to evaluate the type well used to assess Marcellus Shale gas production. In conclusion, our automated hybrid approach showed significant improvement over the other workflows on both scoring metrics. The new hybrid approach provides a practical tool that supports automated model and hyperparameter selection; it is tested using real field data and can be applied to solving different engineering problems with artificial intelligence and machine learning. The new hybrid model is tested on a real field and compared with conventional type wells developed by field engineers. The type well of the field is found to be very close to the P50 predictions for the field, which indicates the success of the completion designs performed by the field engineers. It also shows that the field's average production could have been improved by 8% if shorter cluster spacing and higher proppant loading per cluster had been used during the frac jobs.
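As a minimal illustration (not the authors' code), the sketch below shows an automated pipeline search of the kind described, using TPOT for algorithm selection and scoring the result with R2 and MSE. Synthetic data stands in for the 1567-well, 121-feature field dataset, and the Bayesian refinement step of the hybrid approach (e.g., applying skopt.BayesSearchCV to the found pipeline) is only noted in a comment.

```python
# Minimal sketch (not the authors' code): automated pipeline search with TPOT,
# scored with R2 and MSE. A Bayesian refinement of tpot.fitted_pipeline_
# (e.g., via skopt.BayesSearchCV) would follow in the hybrid workflow; omitted here.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from tpot import TPOTRegressor

# Placeholder data standing in for the 1567-well, 121-feature field dataset.
X, y = make_regression(n_samples=500, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

tpot = TPOTRegressor(generations=5, population_size=20, scoring="r2",
                     random_state=0, verbosity=0)
tpot.fit(X_train, y_train)

y_pred = tpot.predict(X_test)
print("R2 :", r2_score(y_test, y_pred))
print("MSE:", mean_squared_error(y_test, y_pred))
```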


2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics model by dynamically building and updating an artificial intelligence (AI) based model. The AI model can be used to quickly mimic the full-physics (FP) model. The proposed methodology starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on at the end of the exercise either to confirm the AI model's decision and stop the study, or to reject this decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model, where the objective is to match the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) is necessary to improve the accuracy of the reservoir simulation results. Reservoir simulation using the FP model on 1024 CPUs requires approximately 14 hours. During this history matching exercise, six parameters were selected to be part of the optimization loop. Therefore, a Latin Hypercube Sampling (LHS) design of seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed either to confirm convergence for the FP model or to re-iterate the same approach starting from an LHS around the converged solution. The subsequent AI model is updated using all the FP simulations performed in the study. This approach achieves a history match of very acceptable quality with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly utilized in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such a situation. The development of new concepts and techniques is a real need for successfully completing a reservoir study. The hybrid approach that we propose shows very promising results in handling such a challenge.
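A hedged sketch of the general idea (not the authors' workflow) follows: a Latin Hypercube design seeds a cheap AI proxy of an expensive full-physics simulator, the proxy drives the search, and the FP model is invoked only to confirm or update the proxy. The fp_model function and mismatch cutoff below are hypothetical placeholders.

```python
# Minimal sketch (not the authors' workflow) of a surrogate-assisted history match:
# an LHS design seeds a cheap AI proxy of an expensive full-physics (FP) simulator;
# the proxy drives the search and the FP model only verifies candidate solutions.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def fp_model(params):
    """Hypothetical stand-in for an expensive FP run; returns a pressure mismatch."""
    return float(np.sum((params - 0.3) ** 2))

n_params, n_init = 6, 7                       # six matching parameters, seven LHS runs
sampler = qmc.LatinHypercube(d=n_params, seed=0)
X = sampler.random(n=n_init)                  # initial design in [0, 1]^6
y = np.array([fp_model(x) for x in X])        # expensive FP evaluations

for _ in range(20):                           # optimization loop driven by the proxy
    proxy = GaussianProcessRegressor().fit(X, y)
    cand = sampler.random(n=200)              # cheap candidate pool
    best = cand[np.argmin(proxy.predict(cand))]
    y_fp = fp_model(best)                     # occasional FP run to confirm/update
    X, y = np.vstack([X, best]), np.append(y, y_fp)
    if y_fp < 1e-3:                           # predefined mismatch cutoff
        break

print("Best mismatch:", y.min())
```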


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Naurin Farooq Khan ◽  
Naveed Ikram ◽  
Hajra Murtaza ◽  
Muhammad Aslam Asadi

Purpose: This study aims to investigate cybersecurity awareness, manifested as protective behavior, to explain self-disclosure in social networking sites. The disclosure of information about oneself is associated with benefits as well as privacy risks. Individuals self-disclose to gain social capital and display protective behaviors to evade privacy risks, based on a careful cost-benefit calculation of disclosing information.
Design/methodology/approach: This study explores the role of cyber protection behavior in predicting self-disclosure, along with demographic (age and gender) and digital divide (frequency of Internet access) variables, through a face-to-face survey. Data were collected from 284 participants. The model is validated using multiple hierarchical regression together with an artificial intelligence approach.
Findings: The results revealed that cyber protection behavior significantly explains the variance in self-disclosure behavior. The complementary use of five machine learning (ML) algorithms further validated the model. The ML algorithms predicted self-disclosure with an area under the curve of 0.74 and an F1 measure of 0.70.
Practical implications: The findings suggest that the costs associated with self-disclosure can be mitigated by educating individuals to heighten their cybersecurity awareness through cybersecurity training programs.
Originality/value: This study uses a hybrid approach to assess the influence of cyber protection behavior on self-disclosure using expectant valence theory (EVT).
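As a rough illustration of the complementary ML validation step (not the study's exact pipeline), the sketch below trains five common classifiers on synthetic data standing in for the survey responses and reports the AUC and F1 measures cited in the findings.

```python
# Minimal sketch (not the study's exact pipeline): complementary validation with
# five ML classifiers, reporting AUC and F1. Synthetic data stands in for the
# 284-participant survey responses.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=284, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.2f}  F1={f1_score(y_te, pred):.2f}")
```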


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Mohammed Al-Maitah ◽  
Olena O. Semenova ◽  
Andriy O. Semenov ◽  
Pavel I. Kulakov ◽  
Volodymyr Yu. Kucheruk

Artificial intelligence is employed for solving complex scientific, technical, and practical problems. Artificial intelligence techniques such as neural networks, fuzzy systems, and genetic and evolutionary algorithms are widely used for communication systems management, optimization, and prediction. The artificial intelligence approach provides optimized results for the challenging tasks of call admission control, handover, routing, and traffic prediction in cellular networks. 5G mobile communications are designed as heterogeneous networks, whose key requirements include accommodating large numbers of users and satisfying quality-of-service demands. Call admission control plays a significant role in providing the desired quality of service, and an effective call admission control algorithm is needed to optimize the cellular network system. Many call admission control schemes have been proposed. The paper proposes a methodology for developing a genetic neuro-fuzzy controller for call admission in 5G networks. The performance of the proposed admission controller is evaluated through computer simulation.
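The controller itself is not reproduced here, but the sketch below illustrates the genetic step in isolation: a toy genetic algorithm tunes two thresholds of a hypothetical fuzzy-style admission rule against a synthetic traffic objective. The rule and fitness function are placeholders, not the paper's neuro-fuzzy design.

```python
# Minimal sketch (not the paper's controller): a simple GA tunes two thresholds of a
# toy fuzzy-style admission rule. The rule, traffic model, and fitness function are
# hypothetical placeholders for the genetic neuro-fuzzy controller described above.
import numpy as np

rng = np.random.default_rng(0)

def admission_rate(load, bw_thr, load_thr):
    """Toy rule: admit a call when its load and a bandwidth check stay under thresholds."""
    return float(np.mean((load < load_thr) & (rng.random(load.size) < bw_thr)))

def fitness(genes, load):
    bw_thr, load_thr = genes
    rate = admission_rate(load, bw_thr, load_thr)
    # Reward high admission while penalizing average load beyond the load threshold.
    return rate - 0.5 * max(0.0, load.mean() - load_thr)

load = rng.uniform(0.0, 1.0, size=1000)          # synthetic per-call load samples
pop = rng.uniform(0.0, 1.0, size=(30, 2))        # initial population of (bw, load) genes

for _ in range(50):                               # generations
    scores = np.array([fitness(p, load) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]       # selection: keep the 10 fittest
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, 2))  # mutation
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])

best = pop[np.argmax([fitness(p, load) for p in pop])]
print("Tuned thresholds (bandwidth, load):", best)
```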


2016 ◽  
Vol 60 (1) ◽  
pp. 68-81
Author(s):  
E. Arapova

During the 2014 APEC summit, the participating countries agreed to move towards region-wide economic integration and approved the China-backed roadmap to promote the Free Trade Area of the Asia-Pacific (FTAAP). The paper examines prospects for economic integration in the Asia-Pacific within the framework of the 21 APEC participating members. It aims to measure the “integration potential” of the FTAAP on the basis of quantitative and qualitative analysis of actual statistical data and to explore the key obstacles hampering economic integration in the region. The research draws on the theory of convergence and the concept of proximity, which suppose that the higher the degree of homogeneity in economic development and regulatory regimes of the integrating countries, the higher their “integration potential”. The objective of the author’s analysis is to measure the “integration potential” of APEC countries in four directions: trade liberalization, free movement of investments, monetary and banking integration, and free division of labor. Initial estimates of the FTAAP’s prospects are based on merchandise trade complementarity indices and coefficient-of-variation analysis. In addition, the research uses hierarchical cluster analysis, which helps to classify countries into groups according to the similarity of their economic typologies. This methodology makes it possible to reveal a favorable algorithm of regional economic integration within the framework of the “hybrid approach” (or “open regionalism”, adopted by APEC countries in 1989), which encourages the countries to enter into free trade agreements on a bilateral basis or to make offers to the APEC membership as a whole. Final conclusions are based on the results of the author’s calculations, taking into account contemporary trends in the member countries’ economic development and their long-term strategies of economic growth. Acknowledgements. The research was supported by the Russian Fund for Humanities, project no. 15-07-00026 “East Asian regionalism in the context of diversification of economic growth model”.
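For readers unfamiliar with the quantitative tools mentioned, the sketch below (using hypothetical data, not the author's calculations) shows how coefficients of variation and hierarchical cluster analysis can be computed for a handful of economies.

```python
# Minimal sketch (hypothetical data, not the author's calculations) of two of the
# quantitative tools mentioned: coefficients of variation across economies and
# hierarchical clustering of countries by economic indicators.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

countries = ["A", "B", "C", "D", "E"]
# Hypothetical indicators: GDP per capita (kUSD), average tariff rate (%), trade/GDP (%).
data = np.array([
    [55.0, 2.1, 60.0],
    [12.0, 8.5, 45.0],
    [48.0, 2.8, 120.0],
    [9.0, 10.2, 38.0],
    [30.0, 4.0, 90.0],
])

# Coefficient of variation per indicator: higher values signal less homogeneity,
# hence a lower "integration potential" under the convergence argument.
cv = data.std(axis=0, ddof=1) / data.mean(axis=0)
print("Coefficients of variation:", np.round(cv, 2))

# Hierarchical (Ward) clustering groups economies with similar typologies.
z = linkage(pdist(data), method="ward")
labels = fcluster(z, t=2, criterion="maxclust")
print(dict(zip(countries, labels)))
```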


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 518
Author(s):  
Carlos Dafonte ◽  
Alejandra Rodríguez ◽  
Minia Manteiga ◽  
Ángel Gómez ◽  
Bernardino Arcay

This paper analyzes and compares the sensitivity and suitability of several artificial intelligence techniques applied to the Morgan–Keenan (MK) system for the classification of stars. The MK system is based on a sequence of spectral prototypes that allows classifying stars according to their effective temperature and luminosity through the study of their optical stellar spectra. Here, we include the method description and the results achieved by the different intelligent models developed thus far in our ongoing stellar classification project: fuzzy knowledge-based systems, backpropagation, radial basis function (RBF), and Kohonen artificial neural networks. Since one of today’s major challenges in this area of astrophysics is the exploitation of large terrestrial and space databases, we propose a final hybrid system that integrates the best intelligent techniques, automatically collects the most important spectral features, and determines the spectral type and luminosity level of the stars according to the MK standard system. This hybrid approach truly emulates the behavior of human experts in this area, resulting in higher success rates than any of the individually implemented techniques. In the final classification system, the most suitable methods are selected for each individual spectrum, which makes a remarkable contribution to the automatic classification process.
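As a simplified sketch of the hybrid idea (not the project's actual system), the code below trains two classifiers on synthetic spectral features and, for each spectrum, keeps the prediction of the most confident model, mimicking the per-spectrum method selection described above.

```python
# Minimal sketch (not the project's system) of the hybrid idea: several classifiers
# are trained on spectral features and, per spectrum, the prediction of the most
# confident model is kept. Synthetic data stands in for MK spectral features/classes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier          # backpropagation network
from sklearn.svm import SVC                                # RBF-kernel stand-in
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=15, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [MLPClassifier(max_iter=2000, random_state=0),
          SVC(kernel="rbf", probability=True, random_state=0)]
probas = []
for m in models:
    m.fit(X_tr, y_tr)
    probas.append(m.predict_proba(X_te))

# Hybrid rule: per spectrum, take the class from whichever model is most confident.
probas = np.stack(probas)                                  # (model, sample, class)
best_model = probas.max(axis=2).argmax(axis=0)             # most confident model per sample
hybrid_pred = probas.argmax(axis=2)[best_model, np.arange(len(y_te))]
print("Hybrid accuracy:", accuracy_score(y_te, hybrid_pred))
```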

