A novel hybrid approach to Baltic Dry Index forecasting based on a combined dynamic fluctuation network and artificial intelligence method

2019 ◽  
Vol 361 ◽  
pp. 499-516 ◽  
Author(s):  
X. Zhang ◽  
M.Y. Chen ◽  
M.G. Wang ◽  
Y.E. Ge ◽  
H.E. Stanley
Fuels ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 286-303
Author(s):  
Vuong Van Pham ◽  
Ebrahim Fathi ◽  
Fatemeh Belyadi

The success of machine learning (ML) techniques implemented in different industries heavily relies on operator expertise and domain knowledge, which are used to manually choose an algorithm and set up its parameters for a given problem. Because model selection and parameter tuning are manual, the quality of this process cannot be quantified or evaluated, which in turn limits the ability to perform comparison studies between different algorithms. In this study, we propose a new hybrid approach for developing machine learning workflows that automates algorithm selection and hyperparameter optimization. The proposed approach provides a robust, reproducible, and unbiased workflow that can be quantified and validated using different scoring metrics. We have used the most common workflows implemented in applications of artificial intelligence (AI) and ML to engineering problems, including grid/random search, Bayesian search and optimization, and genetic programming, and compared them with our new hybrid approach, which integrates the Tree-based Pipeline Optimization Tool (TPOT) with Bayesian optimization. The performance of each workflow is quantified using different scoring metrics such as the Pearson correlation (R2) and the mean squared error (MSE). For this purpose, actual field data obtained from 1567 gas wells in the Marcellus Shale, covering 121 reservoir, drilling, completion, stimulation, and operation features, are tested using the different proposed workflows. The new hybrid workflow is then used to evaluate the type well used to characterize Marcellus shale gas production. In conclusion, our automated hybrid approach showed significant improvement over the other proposed workflows on both scoring metrics.
The new hybrid approach provides a practical tool that supports automated model and hyperparameter selection; it is tested using real field data and can be applied to solving different engineering problems with artificial intelligence and machine learning. The new hybrid model is tested on a real field and compared with conventional type wells developed by field engineers. It is found that the field's type well is very close to the P50 prediction for the field, which reflects the success of the completion designs performed by the field engineers. It also shows that average field production could have been improved by 8% if shorter cluster spacing and higher proppant loading per cluster had been used during the frac jobs.
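The coarse-search-then-refine pattern behind the hybrid workflow can be sketched in a few lines. This is a deliberately minimal toy, not the paper's method: plain random search stands in for TPOT's evolutionary pipeline search, a local Gaussian perturbation stands in for Bayesian optimization's refinement phase, and the loss surface, parameter range, and budgets are all invented for illustration.

```python
import random

random.seed(42)

def loss(x):
    """Hypothetical validation error of a model with one hyperparameter x."""
    return (x - 3.0) ** 2 + 1.0  # minimum of 1.0 at x = 3.0

# Stage 1: coarse global search over a wide hyperparameter range
# (stand-in for TPOT's evolutionary pipeline search).
candidates = [random.uniform(-10.0, 10.0) for _ in range(30)]
best = min(candidates, key=loss)
stage1_loss = loss(best)

# Stage 2: local refinement around the incumbent
# (a crude stand-in for Bayesian optimization's exploitation phase).
for _ in range(50):
    trial = random.gauss(best, 0.5)
    if loss(trial) < loss(best):
        best = trial

print(f"stage-1 loss: {stage1_loss:.4f}, refined loss: {loss(best):.4f}")
```

The refinement stage can only keep or improve the incumbent, which is the essential property the two-stage workflow relies on: the global stage supplies a good starting region, and the local stage polishes it cheaply.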


2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full physics model by dynamically building and updating an artificial intelligence (AI) based model, which can quickly mimic the full physics (FP) model. The proposed methodology starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched back on at the end of the exercise, either to confirm the AI model's decision and stop the study, or to reject that decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model, where the objective is to match the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) is necessary to improve the accuracy of the reservoir simulation results. A reservoir simulation using the FP model and 1024 CPUs requires approximately 14 hours. During this history matching exercise, six parameters were selected for the optimization loop. Therefore, a Latin hypercube sampling (LHS) of seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP model run is performed, either to confirm the convergence for the FP model or to re-iterate the same approach starting from an LHS around the converged solution. The AI model is then updated using all the FP simulations performed in the study.
This approach achieves a history match of very acceptable quality with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations. New concepts and techniques are genuinely needed to complete such studies successfully, and the hybrid approach we propose shows very promising results in meeting this challenge.
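The FP-model/AI-surrogate switching loop described above can be sketched in one dimension. Everything here is invented for illustration: the "full physics" function, the parameter range, the seven-point Latin hypercube, and the mismatch cutoff are toy stand-ins (the real study fits its AI model to multimillion-cell simulator runs), and piecewise-linear interpolation stands in for the actual AI model.

```python
import math

def fp_model(x):
    """Stand-in for one expensive full-physics simulation run."""
    return math.sin(x) + 0.1 * x

def lhs_1d(n, lo, hi):
    """1-D Latin hypercube: one deterministic point per equal-width stratum."""
    width = (hi - lo) / n
    return [lo + (i + 0.5) * width for i in range(n)]

# Initialization: run the FP model at the LHS points and tabulate the results.
xs = lhs_1d(7, 0.0, 3.0)
ys = [fp_model(x) for x in xs]

def surrogate(x):
    """Cheap AI stand-in: piecewise-linear interpolation of the FP runs."""
    pts = sorted(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    # outside the sampled range: fall back to the nearest FP run
    return pts[0][1] if x < pts[0][0] else pts[-1][1]

CUTOFF = 1e-3
query = 1.7  # point requested by the optimization loop

# Confirmation step: if the surrogate disagrees with a fresh FP run by more
# than the cutoff, add that run to the table ("upgrade the AI model").
if abs(surrogate(query) - fp_model(query)) > CUTOFF:
    xs.append(query)
    ys.append(fp_model(query))
```

After the upgrade, the surrogate reproduces the FP model exactly at the queried point, which is the mechanism that lets the workflow run almost entirely on the cheap model while occasional FP runs keep it honest.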


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Naurin Farooq Khan ◽  
Naveed Ikram ◽  
Hajra Murtaza ◽  
Muhammad Aslam Asadi

Purpose: This study aims to investigate cybersecurity awareness, manifested as protective behavior, in order to explain self-disclosure on social networking sites. The disclosure of information about oneself is associated with benefits as well as privacy risks. Individuals self-disclose to gain social capital and display protective behaviors to evade privacy risks, via a careful cost-benefit calculation of disclosing information.
Design/methodology/approach: This study explores the role of cyber protection behavior in predicting self-disclosure, along with demographic (age and gender) and digital divide (frequency of Internet access) variables, by conducting a face-to-face survey. Data were collected from 284 participants. The model is validated using multiple hierarchical regression together with an artificial intelligence approach.
Findings: The results revealed that cyber protection behavior significantly explains the variance in self-disclosure behavior. The complementary use of five machine learning (ML) algorithms further validated the model. The ML algorithms predicted self-disclosure with an area under the curve of 0.74 and an F1 measure of 0.70.
Practical implications: The findings suggest that the costs associated with self-disclosure can be mitigated by educating individuals to heighten their cybersecurity awareness through cybersecurity training programs.
Originality/value: This study uses a hybrid approach to assess the influence of cyber protection behavior on self-disclosure using expectant valence theory (EVT).
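The two evaluation metrics reported in the findings (area under the ROC curve and the F1 measure) have compact pure-Python definitions. The labels and scores below are invented for a tiny worked example; the study itself reports AUC = 0.74 and F1 = 0.70 for its models.

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    """Harmonic mean of precision and recall for binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

y_true = [0, 0, 1, 1]             # invented ground-truth disclosure labels
y_score = [0.1, 0.4, 0.35, 0.8]   # invented classifier scores
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

print(auc(y_true, y_score))  # 0.75
print(f1(y_true, y_pred))    # 0.666...
```

Note that AUC is threshold-free (it ranks the raw scores), while F1 depends on the chosen decision threshold, which is why studies typically report both.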


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Mohammed Al-Maitah ◽  
Olena O. Semenova ◽  
Andriy O. Semenov ◽  
Pavel I. Kulakov ◽  
Volodymyr Yu. Kucheruk

Artificial intelligence is employed to solve complex scientific, technical, and practical problems. AI techniques such as neural networks, fuzzy systems, and genetic and evolutionary algorithms are widely used for communication-system management, optimization, and prediction. The artificial intelligence approach provides optimized results for the challenging tasks of call admission control, handover, routing, and traffic prediction in cellular networks. 5G mobile communications are designed as heterogeneous networks, whose important requirements include accommodating large numbers of users while satisfying quality-of-service demands. Call admission control plays a significant role in providing the desired quality of service, and an effective call admission control algorithm is needed to optimize the cellular network system. Many call admission control schemes have been proposed. This paper proposes a methodology for developing a genetic neurofuzzy controller for call admission in 5G networks. The performance of the proposed admission control is evaluated through computer simulation.


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 518
Author(s):  
Carlos Dafonte ◽  
Alejandra Rodríguez ◽  
Minia Manteiga ◽  
Ángel Gómez ◽  
Bernardino Arcay

This paper analyzes and compares the sensitivity and suitability of several artificial intelligence techniques applied to the Morgan–Keenan (MK) system for the classification of stars. The MK system is based on a sequence of spectral prototypes that allows classifying stars according to their effective temperature and luminosity through the study of their optical stellar spectra. Here, we include the method description and the results achieved by the different intelligent models developed thus far in our ongoing stellar classification project: fuzzy knowledge-based systems, backpropagation, radial basis function (RBF) and Kohonen artificial neural networks. Since one of today’s major challenges in this area of astrophysics is the exploitation of large terrestrial and space databases, we propose a final hybrid system that integrates the best intelligent techniques, automatically collects the most important spectral features, and determines the spectral type and luminosity level of the stars according to the MK standard system. This hybrid approach truly emulates the behavior of human experts in this area, resulting in higher success rates than any of the individual implemented techniques. In the final classification system, the most suitable methods are selected for each individual spectrum, which implies a remarkable contribution to the automatic classification process.
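The per-spectrum method selection that makes the system hybrid can be sketched as follows. Each component classifier returns a label plus a confidence, and the ensemble keeps the most confident answer. The two "classifiers" below are invented threshold rules on made-up features, standing in for the paper's fuzzy knowledge-based systems and neural networks; the labels and boundary values are illustrative only.

```python
def clf_temperature(spectrum):
    """Toy rule on a pseudo temperature-sensitive feature."""
    t = spectrum[0]
    label = "hot" if t > 0.5 else "cool"
    return label, abs(t - 0.5) * 2  # confidence = distance from the boundary

def clf_line_depth(spectrum):
    """Toy rule on a pseudo absorption-line-depth feature."""
    d = spectrum[1]
    label = "cool" if d > 0.5 else "hot"
    return label, abs(d - 0.5) * 2

def hybrid_classify(spectrum):
    """Keep the answer of whichever component is most confident here."""
    return max((clf(spectrum) for clf in (clf_temperature, clf_line_depth)),
               key=lambda pair: pair[1])[0]

print(hybrid_classify((0.9, 0.6)))    # "hot": temperature rule dominates
print(hybrid_classify((0.55, 0.95)))  # "cool": line-depth rule dominates
```

The point of the selection step is that different methods are trusted on different inputs, which is how the hybrid can outperform every individual technique it contains.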


Author(s):  
Roman Dushkin

This article describes the author's proposed cognitive architecture for the development of an artificial intelligence agent of the general level ("strong" artificial intelligence). New principles for the development of such an architecture are offered: a hybrid approach to artificial intelligence and psychophysiological foundations. The architecture scheme of the proposed solution, as well as descriptions of possible areas of implementation, are given. Strong artificial intelligence represents a technical solution that can solve arbitrary cognitive tasks accessible to humans (human-level intelligence), and even tasks beyond the capabilities of human intelligence (artificial superintelligence). The areas of application of strong artificial intelligence are limitless, from solving the current problems faced by humans to completely new tasks that remain inaccessible to human civilization or await their pioneer. This study will be of interest to scholars, engineers, and researchers dealing with artificial intelligence, as well as to readers who want to keep pace with modern technologies. The novelty consists in the original approach toward building a cognitive architecture that absorbs the results of previous research in the area of artificial intelligence. The relevance of this work rests on the fact that research in the area of weak artificial intelligence has begun to slow down due to its inability to solve general problems, and that the national artificial intelligence strategies of most advanced countries declare the need to develop new artificial intelligence technologies, including artificial intelligence of the general level.


2021 ◽  
Vol 70 ◽  
pp. 871-890
Author(s):  
Tae Wan Kim ◽  
John Hooker ◽  
Thomas Donaldson

An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," the attempt to derive "ought" from "is," and provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles. This article is part of the special track on AI and Society.


2020 ◽  
Vol 3 (1) ◽  
pp. 43-56 ◽  
Author(s):  
Adetokunbo MacGregor John-Otumu ◽  
Godswill U. Ogba ◽  
Obi C. Nwokonkwo

Hepatitis is a dreaded disease that has claimed many lives in recent years. The research literature shows that hepatitis viral disease has five major variants, referred to as Hepatitis A, B, C, D, and E. Scholars have tried over the years to find alternative diagnostic means for hepatitis using artificial intelligence (AI) techniques in order to save lives. This study extensively reviewed 37 papers on AI-based techniques for diagnosing the core hepatitis viral diseases. The results showed that Hepatitis B (30%) and Hepatitis C (3%) were the only two of the five major types that the AI-based techniques were used to diagnose and properly classify, while 67% of the papers reviewed diagnosed hepatitis disease using various AI-based approaches without classifying it into any of the five major types. The results also revealed that 18 of the 37 papers reviewed used a hybrid approach, while the remaining 19 used a single AI-based approach, showing no significant preference between the two when modeling intelligence into applications. The study further reveals a serious knowledge gap regarding the prediction or diagnosis of individual hepatitis types across all the papers considered, and recommends that the future road map should integrate the major hepatitis variants into a single predictive model using effective machine learning techniques, in order to reduce the cost of diagnosis and speed up the treatment of patients.


Author(s):  
Alejandra Rodriguez ◽  
Carlos Dafonte ◽  
Bernardino Arcay ◽  
Iciar Carricajo ◽  
Minia Manteiga

This chapter describes a hybrid approach to the unattended classification of low-resolution optical spectra of stars. The classification of stars in the standard MK system constitutes an important problem in astrophysics, since it supports proper stellar evolution studies. Manual methods, based on the visual study of stellar spectra, have been used frequently and successfully by researchers for many years, but they are no longer viable because of the spectacular advances in data collection technologies, which gather huge amounts of spectral data in a relatively short time. We therefore propose a cooperative system that is capable of classifying stars automatically and efficiently, by applying to each spectrum the most appropriate method or combination of methods, which guarantees a reliable, consistent, and adapted classification. Our final objective is the integration of several artificial intelligence techniques into a unique hybrid system.

