Technology Focus: Field Development (September 2021)

2021 ◽  
Vol 73 (09) ◽  
pp. 43-43
Author(s):  
Reza Garmeh

The digital transformation that began several years ago continues to grow and evolve. With new advancements in data analytics and machine-learning algorithms, field developers today see more benefit in upgrading their traditional development workflows to automated, artificial-intelligence-driven workflows. The transformation has enabled more-efficient and truly integrated development approaches: many development scenarios can be generated, examined, and updated automatically and quickly. These approaches become even more valuable when coupled with physics-based integrated asset models that are kept close to actual field performance, reducing uncertainty in reactive decision making. In unconventional basins with enormous completion and production databases, data-driven decisions powered by machine-learning techniques are increasingly popular for solving field-development challenges and optimizing cube development. Finding a trend within massive amounts of data requires augmented artificial intelligence, in which machine learning and human expertise are coupled. With slowed activity and uncertainty in the oil and gas industry caused by the COVID-19 pandemic, and with growing pressure for cleaner energy and tighter environmental regulation, operators have had to adjust their economic models to account for environmental considerations, predict operational hazards, and plan mitigations. This has highlighted the value of field-development optimization, shifting from traditional workflow iterations built on data assimilation and sequential decision making toward deep-reinforcement-learning algorithms that find the best placement and type for the next producer or injector well. Operators are trying to adapt to the new environment and enhance their capabilities to efficiently plan, execute, and operate field-development plans. Collaboration between disciplines and integrated analyses are key to the success of optimized development strategies.
These selected papers and the suggested additional reading provide a good view of how field-development workflows are evolving with data analytics and machine learning in the era of digital transformation.

Recommended additional reading at OnePetro: www.onepetro.org.

SPE 203073 - Data-Driven and AI Methods To Enhance Collaborative Well Planning and Drilling-Risk Prediction by Richard Mohan, ADNOC, et al.

SPE 200895 - Novel Approach To Enhance the Field Development Planning Process and Reservoir Management To Maximize the Recovery Factor of Gas Condensate Reservoirs Through Integrated Asset Modeling by Oswaldo Espinola Gonzalez, Schlumberger, et al.

SPE 202373 - Efficient Optimization and Uncertainty Analysis of Field Development Strategies by Incorporating Economic Decisions in Reservoir Simulation Models by James Browning, Texas Tech University, et al.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose
While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision-process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influence of these emerging technologies on human analytical and intuitive decision-making processes.

Design/methodology/approach
The authors first provide a holistic understanding of important drivers of digital transformation. They then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.

Findings
The authors discuss similarities and differences between machine learning and two human decision processes, namely analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.

Research limitations/implications
The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group, and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.

Practical implications
Decisions are the building blocks of organizational success. A better understanding of the way human decision processes can be affected by advanced technologies will therefore prepare managers to use these technologies well and make better decisions. By clarifying the boundaries and overlaps among concepts such as AI, machine learning, and big data, the authors contribute to their successful adoption by business practitioners.

Social implications
The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis, and creative problem-solving.

Originality/value
The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images, or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep-learning algorithms and methods that can be applied to artificial-intelligence analysis, as well as the opportunities provided by their application in various decision-making domains.
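The idea of learning from patterns through layered processing, rather than predefined equations, can be sketched in a few lines. The toy example below (invented here for illustration, not taken from the paper) trains a small two-layer network on the XOR function: each layer transforms the data, and backpropagation adjusts the weights from examples alone.

```python
import numpy as np

# A minimal two-layer neural network learning XOR, illustrating the
# "many layers of processing" idea: no predefined equation is given,
# only input/output examples and a gradient-based weight update.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))   # input -> hidden layer weights
W2 = rng.normal(size=(4, 1))   # hidden -> output layer weights

losses = []
for _ in range(10000):
    h = sigmoid(X @ W1)                        # hidden-layer activations
    out = sigmoid(h @ W2)                      # network prediction
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: apply the chain rule through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The network is deliberately tiny; real deep-learning systems stack many more layers and use frameworks that automate the gradient computation, but the training loop has the same shape.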


Author(s):  
Rahul Badwaik

The healthcare industry is currently undergoing a digital transformation, and artificial intelligence (AI) is the latest buzzword in the healthcare domain. The accuracy and efficiency of AI-based decisions are already being reported across countries. Moreover, the increasing availability of electronic clinical data can be combined with big-data analytics to harness the power of AI applications in healthcare. Like other countries, the Indian healthcare industry has also witnessed the growth of AI-based applications. A review of the literature for data on AI and machine learning was conducted. In this article, we discuss AI, the need for AI in healthcare, and its current status. An overview of AI in the Indian healthcare setting is also provided.


2021 ◽  
Vol 29 (Supplement_1) ◽  
pp. i18-i18
Author(s):  
N Hassan ◽  
R Slight ◽  
D Weiand ◽  
A Vellinga ◽  
G Morgan ◽  
...  

Abstract

Introduction
Sepsis is a life-threatening condition that is associated with increased mortality. Artificial-intelligence tools can inform clinical decision making by flagging patients who may be at risk of developing infection and subsequent sepsis, and can assist clinicians with their care management.

Aim
To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of an infection and subsequent sepsis and inform clinical decision making.

Methods
This systematic review was registered in the PROSPERO database (CRD42020158685). We searched three large databases, Medline, Cumulative Index of Nursing and Allied Health Literature, and Embase, using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in adult populations (>18 years) in all care settings and that included data on predictors used to develop machine learning algorithms. The timeframe of the search was 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken. Narrative analysis was used to arrange the data into key areas and to compare and contrast the content of the included studies. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment scale, which evaluates the quality of non-randomized studies. Bias was not assessed because of the non-randomised nature of the included studies.

Results
Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across the included studies. The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR <60 ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit <30%. These predictors were used in the development of all the algorithms in the fifteen articles. All included studies used artificial-intelligence techniques to predict the likelihood of sepsis, with an average sensitivity of 77.5±19.27 and an average specificity of 69.45±21.25.

Conclusion
The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithms. Two strengths of our review were that we included studies published since the first definition of sepsis was published in 2001 and that we identified factors that can improve the predictive ability of algorithms. However, the included studies had some limitations: three studies did not validate the models they developed, and many tools were limited by reduced specificity, reduced sensitivity, or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window to allow for preventive clinical measures.
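The review's headline metrics, sensitivity and specificity, come directly from a model's confusion matrix. The counts below are invented for illustration (chosen so the results land near the review's reported averages); they are not taken from any included study.

```python
# Hypothetical confusion-matrix counts for an illustrative
# sepsis-prediction model (invented numbers).
tp, fn = 62, 18    # septic patients correctly flagged / missed
tn, fp = 139, 61   # non-septic patients correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)   # fraction of true sepsis cases detected
specificity = tn / (tn + fp)   # fraction of non-sepsis cases correctly cleared

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

The trade-off the review notes, tools limited by reduced sensitivity or specificity, follows from these definitions: lowering a model's alerting threshold raises tp at the cost of fp, trading specificity for sensitivity.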


2021 ◽  
Author(s):  
Jorge Crespo Alvarez ◽  
Bryan Ferreira Hernández ◽  
Sandra Sumalla Cano

This work, developed under the NUTRIX Project, aims to develop artificial-intelligence algorithms, based on the open-source platform Knime, that characterize and predict an individual's adherence to a diet before treatment begins. The machine learning algorithms developed under this project significantly increased the confidence (a priori probability) of predicting whether a patient will abandon the treatment (diet) before starting it: from 17.6% up to 96.5%. This can serve as valuable guidance during the decision-making process of professionals in the area of dietetics and nutrition.


Intelligent technology has touched and improved upon almost every aspect of the employee life cycle, and human resources is one of the areas that has benefited greatly. The transformation of work questions the way we work, where we work, and how we work, and attends to the environment and surroundings in which we work. The main goal is to support organizations in breaking out of their traditional ways of working and moving towards an environment that is more pleasant, flexible, empowering, and communicative. Machine learning, algorithms, and artificial intelligence are the latest technologies on HR professionals' minds. Artificial intelligence is designed to make decisions based on the data fed into its programs. The key difference between rhythm and balance is that of choice versus adjustment: choice is made easier with the help of prioritization, quick decision-making, time, and communication. Digitalization plays a vital role in sustaining this. In this paper, we suggest that artificial assistants focus on improving the rhythm of the individual.


Author(s):  
Deeksha Kaul ◽  
Harika Raju ◽  
B. K. Tripathy

In this chapter, the authors discuss the use of quantum computing concepts to optimize the decision-making capability of classical machine learning algorithms. Machine learning, a subfield of artificial intelligence, implements various techniques to train a computer to learn and adapt to various real-time tasks. With the volume of data increasing exponentially, solving the same problems with classical algorithms becomes more tedious and time consuming. Quantum computing has varied applications in many areas of computer science; one area that has been transformed considerably by its introduction is machine learning. Quantum computing, with its ability to perform certain tasks in logarithmic time, can help overcome the limitations of classical machine learning algorithms.


Author(s):  
Thomas Boraud

This chapter assesses alternative approaches to reinforcement learning developed in machine learning. The initial goal of this branch of artificial intelligence, which appeared in the middle of the twentieth century, was to develop and implement algorithms that allow a machine to learn. Originally, these machines were computers or more or less autonomous robotic automata. As artificial intelligence has developed and cross-fertilized with neuroscience, it has begun to be used to model the learning and decision-making processes of biological agents, broadening the meaning of the word ‘machine’. Theoreticians of this discipline define several categories of learning, but this chapter deals only with those related to reinforcement learning. To understand how these algorithms work, it is first necessary to explain the Markov chain and the Markov decision process. The chapter then goes on to examine model-free reinforcement-learning algorithms, the actor-critic model, and finally model-based reinforcement-learning algorithms.
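The model-free case the chapter describes can be sketched with tabular Q-learning on a toy Markov decision process, here a five-state corridor invented for illustration (it is not an example from the book). Because Q-learning is off-policy, it learns the optimal greedy policy even while the agent explores entirely at random, without any model of the environment's dynamics.

```python
import numpy as np

# Tabular Q-learning on a toy 5-state corridor MDP.
# States 0..4; action 0 = left, 1 = right; reward +1 on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)
alpha, gamma = 0.5, 0.9          # learning rate and discount factor

def step(s, a):
    """Environment dynamics: deterministic move left or right."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(500):             # episodes
    s = 0
    for _ in range(50):          # cap episode length
        a = int(rng.integers(n_actions))   # fully random exploration
        s2, r, done = step(s, a)
        # The Q-learning update: learn from the observed transition
        # alone, bootstrapping on the best next-state value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)        # greedy policy derived from learned values
print(policy)
```

After training, the greedy policy moves right in every non-terminal state, matching the obvious optimum; the actor-critic and model-based methods the chapter covers next replace either the value table or the missing environment model with learned components.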


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 520-520 ◽  
Author(s):  
André Pfob ◽  
Babak Mehrara ◽  
Jonas Nelson ◽  
Edwin G. Wilkins ◽  
Andrea Pusic ◽  
...  

Background:
Post-surgical satisfaction with breasts is a key outcome for women undergoing cancer-related mastectomy and reconstruction. Current decision making relies on group-level evidence, which may not offer the optimal choice of treatment for individuals. We developed and validated machine learning algorithms to predict individual post-surgical breast satisfaction, aiming to facilitate individualized, data-driven decision making in breast cancer.

Methods:
We collected clinical, perioperative, and patient-reported data from 3058 women who underwent breast reconstruction due to breast cancer across 11 sites in North America. We trained and evaluated four algorithms (regularized regression, support vector machine, neural network, regression tree) to predict significant changes in satisfaction with breasts at 2-year follow-up using the validated BREAST-Q measure. Accuracy and area under the receiver operating characteristic curve (AUC) were used to determine algorithm performance in the test sample.

Results:
Machine learning algorithms were able to accurately predict changes in women’s satisfaction with breasts (see table). Baseline satisfaction with breasts was the most informative predictor of outcome, followed by radiation during or after reconstruction, nipple-sparing and mixed mastectomy, implant-based reconstruction, chemotherapy, unilateral mastectomy, lower psychological well-being, and obesity.

Conclusions:
We reveal the crucial role of patient-reported outcomes in determining post-operative outcomes and show that machine learning algorithms are suitable for identifying individuals who might benefit from treatment decisions different from those suggested by group-level evidence. We provide a web-based tool for individuals considering mastectomy and reconstruction: importdemo.com. Clinical trial information: NCT01723423. [Table: see text]
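The AUC metric used to compare the four algorithms has a direct rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The sketch below computes it from that definition; the outcomes and scores are invented for illustration and are unrelated to the BREAST-Q cohort.

```python
import numpy as np

# Hypothetical binary outcomes (1 = significant satisfaction change)
# and classifier scores, invented for illustration only.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

def auc(y, s):
    """AUC as the probability that a random positive outranks a
    random negative (ties count half)."""
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc(y_true, scores))
```

Unlike plain accuracy, AUC is insensitive to the classification threshold, which is why studies like this one typically report both.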

