A Review Paper on Big Data Analytics in Process Industry

Author(s):  
Lakshmi S ◽  
Manonmani A

Today, in modern large-scale industrial processes, each step in manufacturing produces a bulk of highly precise process variables. However, basic data-driven methods face great challenges under varying real-time operating conditions. The application of big data analytics has therefore become one of the hottest research topics in complex process control. The aim of big data analytics is to take full advantage of the large volumes of collected process data and to mine the useful information they contain. Compared with well-developed model-based approaches, big data analytics provides effective alternative solutions to many industrial problems under different operating conditions. Most modelling of process control in a closed-loop system is based on varying the command input to obtain the desired controlled output. Modelling closed-loop process control based on the disturbance with conventional methods, however, is time consuming, since disturbance data are too large and too complex. Applying advanced big data analytical methods to mine the disturbance data can lead to more informed decisions when modelling process control in the system. Thus, big data analytics can yield relevant solutions to some of the challenges in modelling process control.
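The abstract names no specific mining method; as a minimal sketch of the idea, assuming synthetic stand-in data, one could compress high-dimensional disturbance logs into a few dominant modes and fit a simple disturbance-to-output model from them. PCA and ridge regression here are illustrative choices only, not the paper's method.

```python
# Minimal sketch of mining disturbance data to inform a closed-loop
# process model. PCA + ridge regression are illustrative choices; all
# data below are synthetic stand-ins for logged plant disturbances.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# 10,000 samples of 50 correlated disturbance channels (synthetic).
latent = rng.normal(size=(10_000, 3))
mixing = rng.normal(size=(3, 50))
disturbances = latent @ mixing + 0.1 * rng.normal(size=(10_000, 50))

# Hypothetical controlled-variable deviation driven by the disturbances.
y = latent[:, 0] - 0.5 * latent[:, 1] + 0.05 * rng.normal(size=10_000)

# Step 1: compress the high-dimensional disturbance data into a few
# dominant modes, which stays tractable at large data volumes.
pca = PCA(n_components=3).fit(disturbances)
scores = pca.transform(disturbances)

# Step 2: fit a simple disturbance-to-output model on the modes.
model = Ridge(alpha=1.0).fit(scores, y)
print(f"explained variance: {pca.explained_variance_ratio_.sum():.2f}")
print(f"model R^2: {model.score(scores, y):.2f}")
```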

2021 ◽  
Author(s):  
R. Salter ◽  
Quyen Dong ◽  
Cody Coleman ◽  
Maria Seale ◽  
Alicia Ruvinsky ◽  
...  

The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore hinges on a small group of individuals efficiently executing a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
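The report's six phases are not detailed in this abstract, so the sketch below is only an illustration of one integrity-protection step such a transfer workflow might include: hashing every file before transfer and re-verifying on arrival. The file paths are hypothetical, and this is not the actual ERDC-ITL process.

```python
# Illustrative integrity check for a large-dataset transfer: hash each
# file before sending, then recompute and compare digests on arrival.
# Paths are hypothetical; this is a sketch, not the ERDC-ITL workflow.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files never load whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    """Map each file's path (relative to root) to its digest."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

# Sender side: write the manifest alongside the outgoing dataset.
source = Path("/data/outgoing/dataset")          # hypothetical path
Path("manifest.json").write_text(json.dumps(build_manifest(source)))

# Receiver side: recompute digests and compare to detect corruption.
received = Path("/data/incoming/dataset")        # hypothetical path
expected = json.loads(Path("manifest.json").read_text())
bad = {p for p, d in build_manifest(received).items() if expected.get(p) != d}
print("transfer verified" if not bad else f"corrupted files: {bad}")
```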


Author(s):  
Marcus Tanque ◽  
Harry J Foxwell

Big data and cloud computing are transforming information technology. These complementary technologies are the result of dramatic advances in computational power, virtualization, network bandwidth, availability, storage capability, and cyber-physical systems. The intersection of the two areas involves using cloud computing services and infrastructure to support large-scale data analytics research, providing relevant solutions and future possibilities for supply chain management. This chapter surveys the current state of cloud computing and big data as they relate to supply chain solutions, focusing on areas of significant technological and scientific advancement that are likely to enhance supply chain systems. The evaluation emphasizes the security challenges and mega-trends affecting cloud computing and big data analytics as they pertain to supply chain management.


Big Data ◽  
2016 ◽  
pp. 1555-1581
Author(s):  
Gueyoung Jung ◽  
Tridib Mukherjee

In the modern information era, the amount of data has exploded, and current trends indicate exponential growth will continue. This enormous volume of data, referred to as big data, has given rise to the problem of finding the "needle in the haystack" (i.e., extracting meaningful information from big data). Many researchers and practitioners are focusing on big data analytics to address the problem. One of the major issues in this regard is the computational requirement of big data analytics. In recent years, the proliferation of loosely coupled distributed computing infrastructures (e.g., modern public, private, and hybrid clouds, high-performance computing clusters, and grids) has made high computing capability available for large-scale computation, allowing big data analytics to gather pace across organizations and enterprises. However, even with this computing capability, efficiently extracting valuable information from astronomically large datasets remains a major challenge, and unprecedented performance scalability is required to execute big data analytics. A big question in this regard is how to maximally leverage the high computing capability of these loosely coupled distributed infrastructures to ensure fast and accurate execution of big data analytics. To that end, this chapter focuses on synchronous parallelization of big data analytics over a distributed system environment to optimize performance.
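The chapter's own optimization scheme is not reproduced in this abstract; as a minimal sketch of the general pattern it names, the code below runs one bulk-synchronous analytics step on synthetic data, with a local process pool standing in for a distributed cluster: every worker processes its partition, all workers synchronize, and only then is the global result formed.

```python
# Minimal sketch of synchronous (bulk-synchronous) parallelization of an
# analytics job: per-partition work in parallel, a synchronization point
# once all workers finish, then a global reduce. Synthetic data; a
# process pool stands in for a distributed system.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def partial_stats(chunk: np.ndarray) -> tuple:
    """Per-worker step: local sum and count over one data partition."""
    return float(chunk.sum()), chunk.size

def main() -> None:
    rng = np.random.default_rng(1)
    data = rng.normal(loc=3.0, size=1_000_000)   # synthetic "big" dataset
    partitions = np.array_split(data, 8)         # one chunk per worker

    with ProcessPoolExecutor(max_workers=8) as pool:
        # map() completes only once every worker has finished -- this is
        # the synchronization point of the step.
        results = list(pool.map(partial_stats, partitions))

    total, count = map(sum, zip(*results))       # global reduce
    print(f"global mean: {total / count:.4f}")

if __name__ == "__main__":
    main()
```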


2017 ◽  
pp. 83-99
Author(s):  
Sivamathi Chokkalingam ◽  
Vijayarani S.

The term Big Data refers to large-scale information management and analysis technologies that exceed the capability of traditional data processing technologies. Big Data is differentiated from traditional technologies in three ways: the volume, velocity, and variety of the data. Big data analytics is the process of analyzing large data sets that contain a variety of data types to uncover hidden patterns, unknown correlations, market trends, customer preferences, and other useful business information. Since Big Data is a newly emerging field, new technologies and algorithms need to be developed to handle it. The main objective of this paper is to provide an overview of the various research challenges of Big Data analytics. The paper briefly surveys the various types of Big Data analytics and, for each type, describes the process steps and tools and gives a banking application. Some research challenges of big data analytics, and possible solutions to those challenges, are also discussed.
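The paper's own banking examples are not reproduced in this abstract; the sketch below, on synthetic transactions with hypothetical column names, shows the flavor of such an application: a descriptive pass over spending by category, followed by a simple per-customer outlier rule as a stand-in for diagnostic analytics.

```python
# Sketch in the spirit of a banking analytics example: descriptive
# statistics over synthetic transactions, plus a simple z-score rule to
# flag unusually large spends. Column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
tx = pd.DataFrame({
    "customer": rng.integers(1, 100, size=5_000),
    "category": rng.choice(["grocery", "travel", "utilities"], size=5_000),
    "amount": rng.gamma(shape=2.0, scale=40.0, size=5_000),
})

# Descriptive analytics: spending profile per category.
print(tx.groupby("category")["amount"].agg(["count", "mean", "sum"]))

# Simple anomaly flag: transactions far above a customer's typical spend.
grp = tx.groupby("customer")["amount"]
z = (tx["amount"] - grp.transform("mean")) / grp.transform("std")
flagged = tx[z > 3]
print(f"{len(flagged)} transactions flagged as unusually large")
```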


Author(s):  
Luca Oneto ◽  
Emanuele Fumeo ◽  
Giorgio Clerico ◽  
Renzo Canepa ◽  
Federico Papa ◽  
...  

2017 ◽  
Vol 23 (3) ◽  
pp. 703-720 ◽  
Author(s):  
Daniel Bumblauskas ◽  
Herb Nold ◽  
Paul Bumblauskas ◽  
Amy Igou

Purpose

The purpose of this paper is to provide a conceptual model for the transformation of big data sets into actionable knowledge. The model introduces a framework for converting data to actionable knowledge and mitigating potential risk to the organization. A case utilizing a dashboard provides a practical application for analysis of big data.

Design/methodology/approach

The model can be used both by scholars and practitioners in business process management. This paper builds and extends theories in the discipline, specifically related to taking action using big data analytics with tools such as dashboards.

Findings

The authors’ model made use of industry experience and network resources to gain valuable insights into effective business process management related to big data analytics. Cases have been provided to highlight the use of dashboards as a visual tool within the conceptual framework.

Practical implications

The literature review cites articles that have used big data analytics in practice. The transitions required to reach the actionable knowledge state and dashboard visualization tools can all be deployed by practitioners. A specific case example from ESP International is provided to illustrate the applicability of the model.

Social implications

Information assurance, security, and the risk of large-scale data breaches are contemporary problems in society. These topics have been considered and addressed within the model framework.

Originality/value

The paper presents a unique and novel approach for parsing data into actionable knowledge items, identification of viruses, an application of visual dashboards for identification of problems, and a formal discussion of risk inherent with big data.
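The paper's model and ESP International case are not reproduced in this abstract; as a minimal sketch of the data-to-actionable-knowledge idea it describes, the code below reduces readings to a few KPIs and emits explicit action items when thresholds are crossed, as a dashboard might surface them. The metrics, thresholds, and actions are hypothetical illustrations.

```python
# Sketch of turning aggregated data into actionable knowledge: KPI
# readings are checked against rules, and breaches become explicit
# action items for a dashboard. All names and thresholds are
# hypothetical, not the paper's ESP International case.
from dataclasses import dataclass

@dataclass
class KpiRule:
    name: str
    threshold: float
    action: str  # what a human should do when the KPI breaches

RULES = [
    KpiRule("defect_rate", 0.02, "halt line and inspect tooling"),
    KpiRule("late_shipments", 0.05, "escalate to logistics manager"),
]

def actionable_items(kpis: dict) -> list:
    """Turn KPI readings into the action list a dashboard would show."""
    return [f"{r.name}={kpis[r.name]:.3f} exceeds {r.threshold}: {r.action}"
            for r in RULES if kpis.get(r.name, 0.0) > r.threshold]

# Example reading aggregated from (hypothetical) raw operational data.
print(actionable_items({"defect_rate": 0.031, "late_shipments": 0.012}))
```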


2017 ◽  
Vol 10 (11) ◽  
pp. 165-174 ◽  
Author(s):  
Mechelle Grace Zaragoza ◽  
Haeng-Kon Kim ◽  
Younky Chung
