THE PLATFORM FOR CREATION OF EVENT-DRIVEN APPLICATIONS BASED ON WOLFRAM MATHEMATICA AND APACHE KAFKA

Author(s):  
Denis Zolotariov

The article is devoted to the study and development of a mechanism of interaction between Wolfram Mathematica programs and the Apache Kafka queue, to provide the ability to build event-driven applications on top of it. The subject of the research is the practical principles of building a mechanism for interaction between Wolfram Mathematica and Apache Kafka through a proxy-server. The purpose of the article is to develop and substantiate practical recommendations regarding the construction of the proxy-server and the mechanism of its operation for publishing messages to the Apache Kafka queue and reading messages from it on behalf of programs of the Wolfram Mathematica mathematical processor, which makes it possible to build event-driven applications. The tasks are: to determine the mechanism of such interaction, to justify the choice of tools for its implementation, and to create and test the resulting toolkit. The research used the following tools: Apache Kafka, Kafkacat, Ubuntu 20 LTS servers, and the method of developing Wolfram Mathematica packages. The results of the research: the mechanism of interaction between Wolfram Mathematica and Apache Kafka through a proxy-server was determined, and on its basis a corresponding toolkit was created in the form of two Mathematica packages built on bash scripts, Apache Kafka, and the third-party Kafkacat utility. The first package is intended for use on the end user's computer, the second for a compute server with a remote Mathematica kernel. It is confirmed that the Mathematica processor in its pure form is currently not suitable for real-time data analysis. Conclusions. Practical recommendations have been developed and substantiated regarding the construction of the mechanism of interaction between the Wolfram Mathematica mathematical processor and the Apache Kafka queue manager through a proxy-server, enabling two-way work with the queue: publishing messages and reading them.
A toolkit for such interaction has been created in the form of Mathematica packages, and its capabilities have been demonstrated. The economic benefit of using the described tools is shown. Future directions for its improvement are given.
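The proxy-server described above drives the queue through the Kafkacat utility. A minimal sketch of how such a proxy might assemble Kafkacat invocations (the broker address, topic names, and helper names are illustrative assumptions, not the package's actual API):

```python
# Hypothetical sketch: command lines a proxy-server could invoke on
# behalf of a Wolfram Mathematica session. Flags are standard kafkacat:
# -P produce, -C consume, -b broker, -t topic, -c message count, -e exit at end.

def kafkacat_publish_cmd(broker, topic, message_file):
    """argv for publishing the contents of message_file to a topic."""
    return ["kafkacat", "-P", "-b", broker, "-t", topic, message_file]

def kafkacat_consume_cmd(broker, topic, count):
    """argv for reading `count` messages from a topic, then exiting."""
    return ["kafkacat", "-C", "-b", broker, "-t", topic, "-c", str(count), "-e"]

pub = kafkacat_publish_cmd("localhost:9092", "wm-results", "result.json")
con = kafkacat_consume_cmd("localhost:9092", "wm-events", 10)
```

In practice the proxy would pass such argv lists to the operating system (e.g. via `subprocess.run`) and relay the output back to the Mathematica kernel.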

Author(s):  
Denis Zolotariov

The article is devoted to the research and development of a mechanism of interaction between Wolfram Mathematica programs and the Apache Kafka queue, to provide the ability to build event-driven applications on top of it. The subject of the research is the practical principles of building a mechanism for interaction between Wolfram Mathematica and Apache Kafka. The purpose of the article is to develop and substantiate practical recommendations regarding the formation of a mechanism for publishing messages to the Apache Kafka queue and reading messages from it for programs of the Wolfram Mathematica mathematical processor, which makes it possible to build event-driven applications. Tasks: to determine the mechanism of such interaction, to justify the choice of tools for its implementation, and to create and test the resulting toolkit. The research used the following tools: Apache Kafka, Kafkacat, and the method of developing Wolfram Mathematica packages. The results of the research: the mechanism of interaction between Wolfram Mathematica and Apache Kafka was determined, and on its basis a corresponding toolkit was created in the form of two Mathematica packages, built on Apache Kafka as a queue client and on the third-party Kafkacat utility, respectively. It is shown that the first option is less reliable and consumes considerably more computer resources during operation. It has been demonstrated that the Mathematica processor in its pure form is currently not suitable for real-time data analysis. Recommendations are given on using the built-in compilation functions to increase the speed of such processing. Conclusions. Practical recommendations have been developed and substantiated regarding the formation of the mechanism of interaction between the Wolfram Mathematica mathematical processor and the Apache Kafka queue manager, enabling two-way work with the queue: publishing messages and reading them.
A toolkit for such interaction has been created in the form of Mathematica packages; their capabilities have been demonstrated and compared with each other. The economic benefit of using the described tools is shown. Future directions for its improvement are given.
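An event-driven application on top of such a queue ultimately reduces to a dispatch loop over incoming messages. A minimal illustrative sketch (the message schema, the handler registry, and all names are assumptions; a plain list stands in for the queue output):

```python
import json

def dispatch(raw_lines, handlers):
    """Parse JSON messages and route each to the handler registered
    for its "event" field; unknown events are silently skipped."""
    results = []
    for line in raw_lines:
        msg = json.loads(line)
        handler = handlers.get(msg.get("event"))
        if handler is not None:
            results.append(handler(msg))
    return results

# One hypothetical handler: double the payload of "compute" events.
handlers = {"compute": lambda m: m["x"] * 2}
out = dispatch(['{"event": "compute", "x": 21}'], handlers)  # → [42]
```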


Author(s):  
Denis Zolotariov

The article is devoted to the development and substantiation of practical recommendations regarding the formation of a mechanism for deploying a software environment for creating and executing microservices in a rapidly changing technological stack. The subject of the research is the basics of building a system for automated deployment of a software environment for the development and execution of microservices. The purpose of the article is to develop and substantiate practical recommendations for the formation of a mechanism for deploying a software environment for creating and executing microservices in a rapidly changing technological stack. The tasks of the work: to determine the necessary elements of the deployment mechanism of the software environment and analyze the functional load of each of them, to set the specific problems that must be solved when building each of them, and to propose and justify the choice of tools for their solution. In the course of the study, methods of system analysis were used to decompose a complex system into elements and each element into functional components. As a result of the study, it was established that such a mechanism should consist of the following elements: a universal server initialization subsystem for any technological stack, and a software environment deployment subsystem for developing or executing an application of a certain type on a certain technological stack. Each element is described in detail, its functional load is shown, and its role in the overall system is substantiated. It is shown that such a standardized approach to the deployment of the development and runtime environment makes it possible, among other things, to solve the problem of operating microservices in a tested environment. Conclusions. Practical recommendations for the formation of a mechanism for deploying a software environment for creating and executing microservices in a rapidly changing technological stack have been developed and substantiated.
This mechanism is automated. It shows its flexibility and versatility with respect to programming languages and other features of the software environment. It is pointed out that, being implemented in the bash shell language, the mechanism does not need any third-party applications for its work. The economic benefit of using the proposed mechanism is shown. The ways of its improvement are outlined.
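The two-element mechanism can be sketched schematically as a pipeline: a universal initialization step followed by a stack-specific deployment step. All step names below are hypothetical placeholders, not the article's bash implementation:

```python
# Illustrative decomposition of the deployment mechanism into its two
# subsystems. Step strings are placeholders for real provisioning actions.

def init_server(log):
    # Universal subsystem: stack-agnostic preparation
    # (users, firewall, base packages, and so on).
    log.append("init: base server prepared")

def deploy_env(stack, role, log):
    # Deployment subsystem: environment specific to the technological
    # stack and to the role (development vs. execution of microservices).
    log.append(f"deploy: {stack} environment for {role}")

def provision(stack, role):
    log = []
    init_server(log)               # same for every stack
    deploy_env(stack, role, log)   # varies with stack and role
    return log

steps = provision("php", "execution")
```

The point of the split is that the first step never changes when the technological stack changes; only the second step has to be rewritten.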


2021
Author(s):
John McIntosh
Renata Martin
Pedro Alcala
Stian Skjævesland
John Rigg

Abstract The paper describes a project known internally as "InWell" to address multiple requirements in Repsol Drilling & Completions. InWell is defined by a new Operating Model comprising Governance, People, Process, Functions and Technology. This paper addresses changes to the Technology element, often referred to as "Digitalization". The paper includes a discussion of the business transformation strategy and case studies addressing three of the 18 functionalities identified in the first round of development. The InWell development strategy followed four steps: identification of performance issues, envisioning of a future operating model, identification of the functionalities required to support this operating model, and matching them to digital solutions. Our case studies focus on three functionalities provided by three separate companies: unification of planning and compliance, real-time data aggregation, and key performance indicators. Each functionality was addressed with an existing commercial application customized to meet specific requirements. A corporate web-based Well Construction Process (WCP) was initially piloted and then extended to include all well projects. The WCP identifies the key tasks that must be completed per project, and these are all tracked. Data from this application is used by a third-party business analytics application via an API. Real-time data from many sites and a wide range of sources was aggregated, standardized, quality-controlled, and stored within a private secure cloud. The data collation service is an essential building block for current third-party applications such as the operating centre and is a prerequisite for the goal of increased automation. A suite of operator-specific key performance indicators (KPIs) and data analytics services was developed for drilling and completions. Homogenized KPIs for all business units provide data for objective performance management and apples-to-apples comparison.
Results are presented via custom dashboards, reports, and integrations with third-party applications to meet a wide range of requirements. During a four-month pilot phase the InWell project delivered €2.5 million in tangible savings through improvements in operational performance. In the first 12 months, €16 million in savings were attributed to InWell. By 2022, forecast savings are expected to exceed €60 million (Figures 1 and 2). The value of intangible benefits is thought to exceed these objective savings.
Figure 1: The business case for InWell – actual and projected savings and costs.
Figure 2: InWell services addressing value levers and quantified potential impact.
A multi-sourced digital strategy can produce quick gains, is easily adapted, and provides high value at low risk. The full benefit of digital transformation can only be realised when supported by an effective business operating model.
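As an illustration of what a homogenized KPI makes possible, the following sketch computes one drilling KPI the same way for every business unit (the field names and the metres-per-day metric are assumptions for illustration, not Repsol's actual KPI definitions):

```python
# Hypothetical homogenized KPI: metres drilled per drilling day,
# computed identically for each business unit so the resulting
# numbers support an apples-to-apples comparison.

def rate_of_penetration(records):
    """Aggregate metres drilled per drilling day for one business unit."""
    metres = sum(r["metres"] for r in records)
    days = sum(r["drilling_days"] for r in records)
    return metres / days if days else 0.0

units = {
    "unit_a": [{"metres": 1200, "drilling_days": 10}],
    "unit_b": [{"metres": 900, "drilling_days": 6}],
}
kpis = {name: rate_of_penetration(recs) for name, recs in units.items()}
```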


Blockchain technology uses cryptographic techniques to create an expanding list of data records called blocks. Along with transaction and timestamp data, each block holds a hash value obtained using a cryptographic technique. Blockchain gains importance for its decentralized data transaction and authorization without the need for third-party intervention. Although it is mostly used in the finance sector these days, due to its inherent ability to protect data it can be applied to every field of computation, especially fields where data transactions are voluminous. The Internet of Things (IoT) is one such area: it involves the collection, transfer, and processing of real-time data from objects, humans, and sensors to automate various tasks. Hence, this paper reviews blockchain technology and how it can be coupled with IoT to overcome privacy and security issues. The paper first systematically introduces the concept of blockchain technology and its applications, along with the need for IoT devices and their implementation. Finally, it discusses blockchain-based IoT (BIoT): its architecture, advantages, and challenges in implementation.
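The hash-linking of blocks described above can be sketched in a few lines (a toy illustration of the structure, not a production blockchain):

```python
import hashlib
import json

# Each block stores its data, a timestamp, and the cryptographic hash of
# the previous block; tampering with any block breaks every later link.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash, timestamp):
    return {"data": data, "timestamp": timestamp, "prev_hash": prev_hash}

def chain_valid(chain):
    """True iff every block's prev_hash matches its predecessor's hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64, 0)
second = make_block({"sensor": "temp", "value": 21.5}, block_hash(genesis), 1)
```

In a BIoT setting the `data` field would carry IoT sensor readings, so any later modification of a reading is detectable by revalidating the chain.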


2021
pp. 204388692110572
Author(s):
Barbara A. Manko

Big data analytics takes raw, real-time data and uses it to predict trends. Successful use of this data can have a powerful impact on a business's effectiveness and, ultimately, its bottom line. As the amount of data increases, the need for analytics grows. This teaching study discusses the role of social media in data analytics, how to approach the subject, and the desired outcomes. Students will explore the expansion of this field of study, familiarize themselves with the concept and where they may have encountered it in their lives so far, and discuss what analytics can contribute to running a successful business.


Author(s):  
F. Xiao ◽  
C. Li ◽  
Z. Wu ◽  
Y. Wu

Abstract. ETL (Extract-Transform-Load) tools, traditionally developed to operate offline on historical data for feeding data warehouses, need to be enhanced to deal with continuously growing streaming data and to be executed at the network level during data-stream acquisition. In this paper, a scalable and web-based ETL system called NMStream is presented. NMStream is based on an event-driven architecture and designed for integrating distributed and heterogeneous streaming data by combining Apache Flume and the Cassandra DB system; the ETL processes are conducted through the Flume agent object. NMStream can be used for feeding traditional/real-time data warehouses or data analytic tools in a stable and effective manner.
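The extract-transform-load flow over a stream can be sketched as a chain of generators, processing each record as it arrives rather than in an offline batch (Flume and Cassandra are replaced here by plain Python stand-ins; all names are illustrative, not the NMStream API):

```python
# Schematic streaming ETL: each stage consumes the previous one lazily,
# so records flow through one at a time during acquisition.

def extract(stream):
    for raw in stream:                 # acquisition from the raw stream
        yield raw.strip()

def transform(records):
    for rec in records:                # normalize each record in flight
        ts, value = rec.split(",")
        yield {"timestamp": int(ts), "value": float(value)}

def load(records, sink):
    for rec in records:                # feed the (real-time) data store
        sink.append(rec)
    return sink

sink = load(transform(extract(["1620000000, 3.5\n", "1620000060, 4.0\n"])), [])
```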


2019
Vol 14 (6)
pp. 769-796
Author(s):
Audrey Fertier
Aurélie Montarnal
Anne-Marie Barthe-Delanoë
Sébastien Truptil
Frédérick Bénaben

Diabetes
2020
Vol 69 (Supplement 1)
pp. 399-P
Author(s):
ANN MARIE HASSE
RIFKA SCHULMAN
TORI CALDER
