In-Memory Analytics

Author(s):  
Jorge Manjarrez Sánchez

Analytics is the processing of data for information discovery. In-memory implementations of machine learning and statistical algorithms enable the fast processing of data for descriptive, diagnostic, predictive, and prescriptive analytics. In this chapter, the authors first present some concepts and challenges for fast analytics, then discuss some of the most relevant proposals and data management structures for in-memory data analytics in centralized, parallel, and distributed settings. Finally, the authors offer further research directions and some concluding remarks.
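To make the idea concrete, the following is a minimal sketch (not from the chapter, assuming Python with NumPy) of descriptive analytics over an in-memory columnar layout; the column names and synthetic data are illustrative only.

```python
# Minimal sketch (not from the chapter): descriptive analytics over an
# in-memory columnar structure. Column names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Columnar, in-memory layout: one contiguous array per attribute,
# which keeps scans and aggregations cache-friendly.
sales = {
    "region_id": rng.integers(0, 8, n),
    "amount": rng.gamma(shape=2.0, scale=50.0, size=n),
}

# Descriptive analytics: per-region aggregates computed entirely in memory.
totals = np.zeros(8)
np.add.at(totals, sales["region_id"], sales["amount"])
counts = np.bincount(sales["region_id"], minlength=8)
print("mean amount per region:", totals / counts)
```

Keeping each attribute in a contiguous in-memory array is one common layout choice because full-column scans and aggregations avoid disk I/O and stay cache-friendly.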


2021 ◽  
Vol 73 (03) ◽  
pp. 25-30
Author(s):  
Srikanta Mishra ◽  
Jared Schuetter ◽  
Akhil Datta-Gupta ◽  
Grant Bromhal

Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy.

It is useful to start with some definitions to establish a common vocabulary.
Data analytics (DA): Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.
Machine learning (ML): Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.
Artificial intelligence (AI): Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating).

Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader decision-making construct that is AI.

In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources. Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted to practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019).

Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they lack formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems.
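As a concrete illustration of the ML definition above (building a model between predictors and response), here is a minimal sketch assuming Python with scikit-learn; it is not from the article, and the predictor names and synthetic response are purely illustrative.

```python
# Minimal sketch (not from the article): the ML-for-predictive-analytics
# workflow described above: learn an input/output relationship from data.
# The predictor names (porosity, thickness, pressure) are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0.05, 0.30, n),   # "porosity"
    rng.uniform(5, 50, n),        # "thickness"
    rng.uniform(2000, 6000, n),   # "pressure"
])
# Synthetic response with noise, standing in for a production metric.
y = 100 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predictive analytics: apply the fitted (black-box) model to held-out data.
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```

Held-out evaluation of this kind is exactly where the foundational understanding the authors call for matters: a good fit on training data alone says little about predictive value.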


2020 ◽  
Vol 18 (160) ◽  
pp. 731-751
Author(s):  
Lavinia Mihaela CRISTEA

The impact of IT can be seen in every field of activity, and audit is no exception to this technological trend. Motivation: As professionals increasingly experiment with new technologies, the development of Artificial Intelligence (AI), Blockchain, RPA, and Machine Learning through its Deep Learning subset is a particularly interesting case, which the researcher argues merits debate. The objective of the article is to present the latest developments in the impact of new technologies on the auditor profession and on the methods and tools used. The quantitative, applied, and technical research method allows an analysis of the impact of emerging technologies, complementing a previous specialized paper by the same author. The results of this paper propose the integration of AI, Blockchain, RPA, Deep Learning, and predictive analytics into financial audit missions. The projections resulting from discussions with auditing and IT specialists at Big Four companies show how the technologies presented in this paper could be applied to concrete cases, facilitating current tasks. Machine Learning and Deep Learning would enable a move toward prescriptive analytics, revolutionizing the data analytics process. Both the literature analysis and the interviews conducted identify AI as a business solution that contributes to data analytics in an intelligent way, providing a foundation for the development of RPA.
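As one hedged illustration of the kind of intelligent data analytics the paper envisions for audit missions, the sketch below (not from the paper, assuming Python with scikit-learn) flags unusual journal entries with an unsupervised anomaly detector; the field names and values are invented for the example.

```python
# Minimal sketch (not from the paper): one way ML-driven data analytics can
# support an audit task: flagging unusual journal entries for review.
# Field names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Synthetic ledger features: amount, hour posted, days to approval.
normal = np.column_stack([
    rng.normal(1_000, 200, 2_000),
    rng.normal(14, 2, 2_000),
    rng.normal(3, 1, 2_000),
])
suspicious = np.array([[25_000, 2, 0.1], [18_000, 23, 0.2]])
entries = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(entries)
flags = detector.predict(entries)          # -1 marks likely anomalies
print("entries flagged for auditor review:", int((flags == -1).sum()))
```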


Author(s):  
Sadaf Qazi ◽  
Muhammad Usman

Background: Immunization is a significant public health intervention to reduce child mortality and morbidity. However, its coverage, in spite of free accessibility, is still very low in developing countries. One of the primary reasons for this low coverage is the lack of analysis and proper utilization of immunization data at various healthcare facilities. Purpose: In this paper, the existing machine learning-based data analytics techniques are reviewed critically to highlight the gaps where this high-potential data could be exploited in a meaningful manner. Results: Our review reveals that the existing approaches use data analytics techniques without considering the full complexity of the Expanded Program on Immunization, which includes the maintenance of cold chain systems, the proper distribution of vaccines, and the quality of data captured at various healthcare facilities. Moreover, in developing countries there is no centralized data repository where all immunization-related data are gathered to perform analytics at various levels of granularity. Conclusion: We believe that combining the existing non-centralized immunization data with the right set of machine learning and Artificial Intelligence-based techniques will not only improve vaccination coverage but will also help in predicting future trends and patterns of coverage at different geographical locations.
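As a hedged illustration of the predictive analytics the authors call for, the sketch below (not from the review, assuming Python with scikit-learn) fits a simple trend model to synthetic annual coverage figures; all values are invented.

```python
# Minimal sketch (not from the review): forecasting immunization coverage
# from historical records, as a simple form of the predictive analytics
# discussed above. Years and coverage percentages are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.arange(2012, 2022).reshape(-1, 1)
coverage = np.array([61, 63, 62, 65, 66, 68, 67, 70, 71, 73], dtype=float)

model = LinearRegression().fit(years, coverage)
future = np.array([[2022], [2023], [2024]])
print("projected coverage (%):", model.predict(future).round(1))
```

In practice such models would be built per district or facility, which is exactly why the centralized repository the authors describe matters.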


Author(s):  
William B. Rouse

This book discusses the use of models and interactive visualizations to explore designs of systems and policies in determining whether such designs would be effective. Executives and senior managers are very interested in what “data analytics” can do for them and, quite recently, what the prospects are for artificial intelligence and machine learning. They want to understand and then invest wisely. They are reasonably skeptical, having experienced overselling and under-delivery. They ask about reasonable and realistic expectations. Their concern is with the futurity of decisions they are currently entertaining. They cannot fully address this concern empirically. Thus, they need some way to make predictions. The problem is that one rarely can predict exactly what will happen, only what might happen. To overcome this limitation, executives can be provided predictions of possible futures and the conditions under which each scenario is likely to emerge. Models can help them to understand these possible futures. Most executives find such candor refreshing, perhaps even liberating. Their job becomes one of imagining and designing a portfolio of possible futures, assisted by interactive computational models. Understanding and managing uncertainty is central to their job. Indeed, doing this better than competitors is a hallmark of success. This book is intended to help them understand what fundamentally needs to be done, why it needs to be done, and how to do it. The hope is that readers will discuss this book and develop a “shared mental model” of computational modeling in the process, which will greatly enhance their chances of success.
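As a hedged illustration of computational models that produce portfolios of possible futures rather than point predictions, the sketch below (not from the book, assuming Python with NumPy) runs a simple Monte Carlo revenue model; the growth-rate and volatility assumptions are invented.

```python
# Minimal sketch (not from the book): a Monte Carlo model that produces a
# distribution of possible futures rather than a single prediction.
# Growth-rate and volatility assumptions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_futures, n_years = 10_000, 5
initial_revenue = 100.0

# Each sampled path is one "possible future" under uncertain annual growth.
growth = rng.normal(loc=0.05, scale=0.10, size=(n_futures, n_years))
revenue = initial_revenue * np.prod(1 + growth, axis=1)

# Summarize the portfolio of futures for decision makers.
p10, p50, p90 = np.percentile(revenue, [10, 50, 90])
print(f"year-5 revenue: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")
print("probability of decline:", (revenue < initial_revenue).mean())
```

Reporting a spread of outcomes and the chance of decline, rather than a single forecast, is the kind of candor about uncertainty the book argues executives find useful.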


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 831
Author(s):  
Vaneet Aggarwal

Due to the proliferation of applications and services that run over communication networks, ranging from video streaming and data analytics to robotics and augmented reality, tomorrow’s networks will be faced with increasing challenges resulting from the explosive growth of data traffic demand with significantly varying performance requirements [...]


Author(s):  
G. Arunakranthi ◽  
B. Rajkumar ◽  
V. Chandra Shekhar Rao ◽  
A. Harshavardhan
