Big Data in Business

Author(s):  
Farid Huseynov

The term “big data” refers to the very large and diverse sets of structured, semi-structured, and unstructured digital data from different sources that accumulate and grow very rapidly on a continuous basis. Big data enables enhanced decision-making in various types of businesses. Through big data technologies, businesses are able to cut operational costs, digitally transform their operations to be more efficient and effective, and make more informed business decisions. Big data technologies also enable businesses to better understand their markets by uncovering the hidden patterns behind consumer behaviors, and to introduce new products and services accordingly. This chapter shows the critical role that big data plays in business. First, big data and its underlying technologies are explained. The chapter then discusses how big data digitally transforms critical business operations for enhanced decision-making and a superior customer experience. Finally, it closes with the possible challenges of big data for businesses and possible solutions to those challenges.

Author(s):  
Longzhi Yang, Jie Li, Noe Elisa, Tom Prickett, Fei Chao

Big data refers to large, complex, structured or unstructured data sets. Big data technologies enable organisations to generate, collect, manage, analyse, and visualise big data sets, and provide insights to inform diagnosis, prediction, or other decision-making tasks. One of the critical concerns in handling big data is the adoption of appropriate big data governance frameworks to (1) curate big data in a required manner to support quality data access for effective machine learning and (2) ensure the framework regulates the storage and processing of the data from providers and users in a trustworthy way within the related regulatory frameworks (both legally and ethically). This paper proposes a framework of big data governance that guides organisations to make better data-informed business decisions within the related regulatory framework, with close attention paid to data security, privacy, and accessibility. To demonstrate this process, the work also presents an example implementation of the framework based on a case study of big data governance in cybersecurity. This framework has the potential to guide the management of big data in different organisations for information sharing and cooperative decision-making.


Web Services, 2019, pp. 1430-1443
Author(s):  
Louise Leenen, Thomas Meyer

Governments, military forces, and other organisations responsible for cybersecurity deal with vast amounts of data that have to be understood in order to enable intelligent decision making. Because of the sheer volume of information pertinent to cybersecurity, automation is required for processing and decision making, specifically to present advance warning of possible threats. The ability to detect patterns in vast data sets, and to understand the significance of detected patterns, is essential in the cyber defence domain. Big data technologies supported by semantic technologies can improve cybersecurity, and thus cyber defence, by supporting the processing and understanding of the huge amounts of information in the cyber environment. The term big data analytics refers to advanced analytic techniques such as machine learning, predictive analysis, and other intelligent processing techniques applied to large data sets that contain different data types. The purpose is to detect patterns, correlations, trends, and other useful information. Semantic technology is a knowledge representation paradigm in which the meaning of data is encoded separately from the data itself. The use of semantic technologies such as logic-based systems to support decision making is becoming increasingly popular. However, most automated systems are currently based on syntactic rules, which are generally not sophisticated enough to deal with the complexity of the decisions that must be made. Incorporating semantic information allows for increased understanding and sophistication in cyber defence systems. This paper argues that both big data analytics and semantic technologies are necessary to provide countermeasures against cyber threats. An overview of the use of semantic technologies and big data technologies in cyber defence is provided, and important areas for future research in the combined domains are discussed.
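To make the syntactic-versus-semantic contrast concrete, here is a minimal, self-contained sketch (the event types and the toy taxonomy are hypothetical illustrations, not part of the paper's system): a syntactic rule fires only on the literal token it names, while a semantic rule consults a small class hierarchy and so covers event types its author never enumerated.

```python
# Minimal sketch contrasting a syntactic detection rule with a semantic one.
# The taxonomy and event names below are invented for illustration.

# A tiny taxonomy: each event type maps to its parent concept.
ONTOLOGY = {
    "port_scan": "reconnaissance",
    "dns_enumeration": "reconnaissance",
    "reconnaissance": "cyber_threat",
    "sql_injection": "intrusion_attempt",
    "intrusion_attempt": "cyber_threat",
}

def is_a(event_type: str, concept: str) -> bool:
    """Walk the taxonomy upwards to test class membership."""
    while event_type is not None:
        if event_type == concept:
            return True
        event_type = ONTOLOGY.get(event_type)
    return False

# Syntactic rule: matches only the literal token it was written for.
def syntactic_rule(event_type: str) -> bool:
    return event_type == "port_scan"

# Semantic rule: matches anything the ontology classifies as a threat.
def semantic_rule(event_type: str) -> bool:
    return is_a(event_type, "cyber_threat")

for event in ["port_scan", "dns_enumeration", "sql_injection"]:
    print(event, syntactic_rule(event), semantic_rule(event))
```

Here a newly observed event type needs only a taxonomy entry, not a new rule, which is one sense in which semantic information adds sophistication to automated defence.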


Author(s):  
Dharmpal Singh, Madhusmita Mishra, Sudipta Sahana

Big data analytics finds patterns, derives meaning, and supports decisions on data in order to produce intelligent responses. It is an emerging area used in business intelligence (BI) for competitive advantage, analyzing the structured, semi-structured, and unstructured data stored in different formats. As big data technology continues to evolve, businesses are turning to predictive intelligence to deepen customer engagement and optimize processes to reduce operational costs. Predictive intelligence uses a set of advanced technologies that enable organizations to use data in real time, moving from a historical and descriptive view to a forward-looking perspective on the data. A comparison of these technologies and their security issues is covered in this book chapter. The combination of big data technology and predictive analytics is sometimes referred to as a never-ending process and has the potential to deliver significant competitive advantage. This chapter provides an extensive review of the literature on big data technologies and their usage in predictive intelligence.


2013, Vol 1 (1), pp. 19-25
Author(s):  
Abdelkader Baaziz, Luc Quoniam

“Big Data is the oil of the new economy” has been among the most quoted sayings of the last three years; it was even adopted by the World Economic Forum in 2011. In fact, Big Data is like crude: it is valuable, but unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about the Big Data generated by the petroleum industry, and particularly its upstream segment?

Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development. Oil & Gas companies conduct advanced geophysical modeling and simulation to support operations, where 2D, 3D, and 4D seismic surveys generate significant data during exploration phases. They closely monitor the performance of their operational assets, using tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous, real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in various and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron’s internal IT traffic alone exceeds 1.5 terabytes a day.

Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development, and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations; another invaluable effect is shared learning.

The aim of this paper is to explain how to use Big Data technologies to optimize operations, and how Big Data can help experts make decisions that lead to the desired outcomes.

Keywords: Big Data; Analytics; Upstream Petroleum Industry; Knowledge Management; KM; Business Intelligence; BI; Innovation; Decision-making under Uncertainty
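As a concrete illustration of the real-time sensor monitoring the abstract describes, here is a minimal sketch that flags readings deviating sharply from a rolling baseline. The column name, readings, and alert threshold are hypothetical, not taken from the paper; real deployments would stream data rather than hold it in memory.

```python
# A minimal sketch of sensor-stream monitoring: flag readings that
# deviate sharply from a rolling baseline. All values are invented.
import pandas as pd

# Simulated wellhead pressure readings (one per minute).
readings = pd.DataFrame({
    "pressure_psi": [2010, 2008, 2012, 2011, 2490, 2009, 2013, 2010],
})

window = 4
# Rolling median as a robust baseline for "normal" behaviour.
baseline = readings["pressure_psi"].rolling(window, min_periods=1).median()
deviation = (readings["pressure_psi"] - baseline).abs()

# Flag any reading more than 10% away from its rolling baseline.
readings["alert"] = deviation > 0.10 * baseline
print(readings)
```

The same pattern scales out: the per-sensor computation is embarrassingly parallel, which is why commodity big data platforms handle it well.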


2021, Vol 2, pp. 75-80
Author(s):  
Martin Misut, Pavol Jurik

The digital transformation of business, seen in the light of the opportunities and challenges posed by the introduction of Big Data in enterprises, allows a more accurate reflection of internal and external environmental stimuli. Intuition ceases to be present in the decision-making process, and decision-making becomes strictly data-based. The precondition for data-based decision-making is thus relevant data in digital form, resulting from data processing. Datafication is the process by which subjects, objects, and procedures are transformed into digital data. In the context of how an industry functions, only after data collection can the further steps of acquiring knowledge to improve the company's results take place. The task of finding a set of attributes (selecting attributes from the set of available attributes) so that a suitable alternative can be determined in decision-making is analogous to the task of classification, and decision trees are suitable for solving such a task. We verified the proposed method on logistics tasks. The subject of the analysis was tasks from logistics and 80 well-described quantitative methods used in logistics to solve them. The result of the analysis is a matrix (table) in which the rows contain the values of the individual attributes defining a specific logistics task, and the columns contain the values of the given attribute for different tasks. We used Incremental Wrapper Subset Selection (IWSS) in the Weka 3.8.4 package to select attributes. The resulting classification model is suitable for use in a DSS. The analysis of logistics tasks and the subsequent design of a classification model made it possible to reveal the contours of the relationship between the characteristics of a logistics problem, explicitly expressed through a set of attributes, and the classes of methods used to solve them.
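A rough sketch of that attribute-selection-plus-decision-tree pipeline follows, under stated assumptions: scikit-learn's SequentialFeatureSelector (a wrapper-style selector) stands in for Weka's IWSS, and the 80-task attribute matrix is randomly generated for illustration rather than taken from the paper.

```python
# Sketch: select task attributes with a wrapper method, then train a
# decision tree mapping a task's attributes to a method class.
# SequentialFeatureSelector substitutes here for Weka's IWSS.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(80, 6))   # 80 tasks x 6 attribute values (toy data)
y = X[:, 0] * 2 + X[:, 3]              # method class depends on attributes 0 and 3

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
selector = SequentialFeatureSelector(tree, n_features_to_select=2, cv=3)
selector.fit(X, y)                     # wrapper search: evaluates the tree itself

X_sel = selector.transform(X)
tree.fit(X_sel, y)
print("selected attributes:", selector.get_support(indices=True))
print("training accuracy:", tree.score(X_sel, y))
```

The wrapper family (IWSS included) scores candidate attribute subsets by the performance of the target classifier itself, which is why the tree appears both inside the selector and as the final model.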


Author(s):  
Loubna Rabhi, Noureddine Falih, Lekbir Afraites, Belaid Bouikhalene

Big data in agriculture is defined as massive volumes of data, with a wide variety of sources and types, which can be captured using internet of things (IoT) sensors (soil and crop sensors, drones, and meteorological stations), analyzed, and used for decision-making. In the era of IoT tools, connected agriculture has appeared. Big data outputs can be exploited by the future connected agriculture in order to reduce production cost and time, improve yield, develop new products, and offer optimization and smart decision-making. In this article, we propose a functional framework to model the decision-making process in digital and connected agriculture.


2020, Vol. ahead-of-print (ahead-of-print)
Author(s):  
Himanshu Gupta, Sarangdhar Kumar, Simonov Kusi-Sarpong, Charbel Jose Chiappetta Jabbour, Martin Agyemang

Purpose: The aim of this study is to identify and prioritize a list of key digitization enablers that can improve supply chain management (SCM). SCM is an important driver of an organization's competitive advantage. Fierce competition in the market has forced companies to look past the conventional decision-making process, which is based on intuition and previous experience. The swift evolution of information technologies (ITs) and digitization tools has changed the scenario for many industries, including those involved in SCM.

Design/methodology/approach: The Best Worst Method (BWM) has been applied to evaluate, rank, and prioritize the key digitization and IT enablers beneficial for the improvement of SC performance. The study also used an additive value function to rank the organizations on their SC performance with respect to the digitization enablers.

Findings: A total of 25 key enablers have been identified and ranked. The results revealed that “big data/data science skills”, “tracking and localization of products”, and “appropriateness and feasibility study for aiding the selection and adoption of big data technologies and techniques” are the top three digitization and IT enablers that organizations need to focus on most in order to improve their SC performance. The study also ranked the SC performance of the organizations based on the digitization enablers.

Practical implications: The findings of this study will help organizations to focus on certain digitization technologies in order to improve their SC performance. This study also provides an original framework for organizations to rank the key digitization enablers according to the enablers relevant in their context and to compare their performance with that of their counterparts.

Originality/value: This study seems to be the first of its kind in which 25 digitization enablers, categorized into four main categories, are ranked using a multi-criteria decision-making (MCDM) tool. It is also the first of its kind in ranking organizations on their SC performance based on the weights/ranks of the digitization enablers.
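The additive value function step the abstract mentions is simple to illustrate. In the sketch below, the enabler weights (which the paper derives with BWM) and the organisations' normalised enabler scores are invented for illustration; only the scoring and ranking mechanics are shown.

```python
# Sketch of additive-value scoring: weight each enabler, sum per
# organisation, rank. Weights and scores below are hypothetical.
import numpy as np

weights = np.array([0.35, 0.25, 0.25, 0.15])   # e.g. from BWM; they sum to 1
scores = np.array([                            # rows: organisations, cols: enablers
    [0.8, 0.6, 0.7, 0.5],
    [0.6, 0.9, 0.5, 0.7],
    [0.7, 0.7, 0.8, 0.6],
])

value = scores @ weights                       # additive value per organisation
ranking = np.argsort(-value)                   # best first
for rank, org in enumerate(ranking, start=1):
    print(f"rank {rank}: organisation {org} (value {value[org]:.3f})")
```

The design choice is the usual MCDM split: the weighting method (BWM) captures expert judgement about enabler importance, while the additive value function keeps the aggregation transparent and easy to audit.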


2019, Vol 18 (01), 1950009
Author(s):  
T. Venkatesan, K. Saravanan, T. Ramkumar

Organisations that perform business operations in a multi-sourced big data environment have an imperative need to discover meaningful patterns of interest from their diversified data sources. With the advent of big data technologies such as Hadoop and Spark, commodity hardware plays a vital role in data analytics, processing multi-sourced and multi-formatted big data at reasonable cost and in reasonable time. Though various data analytic techniques exist in the context of big data, recommendation systems are especially popular in web-based business applications for suggesting suitable products, services, and items to potential customers. In this paper, we put forth a big data recommendation engine framework based on a local pattern analytics strategy to explore user preferences and tastes for both branch-level and central-level decisions. The framework encourages the practice of moving the computing environment towards the data source location and avoids forceful integration of data. Further, it assists decision makers in reaping the hidden preferences and tastes of users from branch data sources for an effective customer campaign. The novelty of the framework has been evaluated on the benchmark dataset MovieLens100k, and the results clearly confirm the advantages of the proposal.
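For a sense of the kind of computation such an engine might run on one branch's local ratings data, here is a toy collaborative-filtering sketch. The ratings matrix is invented, and this plain user-based predictor is only a generic stand-in; the paper's local pattern analytics strategy is more elaborate.

```python
# Toy user-based collaborative filtering on one branch's local ratings.
# The matrix is invented; 0 means "unrated".
import numpy as np

R = np.array([          # rows: users, cols: items
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

# Cosine similarity between users.
norms = np.linalg.norm(R, axis=1, keepdims=True)
sim = (R @ R.T) / (norms * norms.T)

# Predict user 1's rating for item 1 from similarity-weighted neighbours
# who actually rated that item.
user, item = 1, 1
rated = R[:, item] > 0
pred = sim[user, rated] @ R[rated, item] / sim[user, rated].sum()
print(f"predicted rating for user {user}, item {item}: {pred:.2f}")
```

Running this per branch, then forwarding only the learned preference patterns to the centre, matches the abstract's idea of moving computation to the data rather than forcing integration.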

