How strategists use “big data” to support internal business decisions, discovery and production

2014 ◽  
Vol 42 (4) ◽  
pp. 45-50 ◽  
Author(s):  
Thomas H. Davenport

Purpose – The author, an internationally known IT expert, aims to explain how big data is being used by leading corporations to promote better decision making, especially about innovation. Design/methodology/approach – Big data is the collection and interpretation of massive data sets, made possible by vast computing power that monitors a variety of digital streams – such as sensors, marketplace interactions and social information exchanges – and analyses them using “smart” algorithms. It offers a promising new way to discover new opportunities to offer customers high-value products and services. Findings – Big data […] resembles not so much a pool of statistics as an ongoing, fast-flowing stream of information about customer choices. Therefore, a more continuous approach to sampling, analyzing and acting on data is necessary. Practical implications – A number of major financial services firms are using “customer journeys” through the tangle of websites, call centers, tellers and other branch personnel to better understand the paths that customers follow through the organization, and how those paths affect attrition or the purchase of particular financial services. Originality/value – The desired outcome of data discovery is an idea – a notion of a new product, service or feature, or a hypothesis that an existing model can be improved – together with supporting evidence. Increasingly, corporate strategists are recognizing that big data architecture and management should be designed so that discovery and analysis are the first order of business.

Bank marketers still find it difficult to implement above-the-line credit card promotions effectively, particularly when targeting customer preferences at point-of-interest (POI) locations such as malls and shopping centers. Customers at those POIs, in turn, are keen to receive recommendations on what the bank is offering. In this paper we propose the design, architecture and implementation of a big data platform that supports a bank's credit card campaign by generating data and extracting topics from Twitter. We built a data pipeline consisting of a Twitter streamer, a text preprocessor, a topic extractor using Latent Dirichlet Allocation (LDA), and a dashboard that visualizes the recommendations. As a result, we successfully generated topics related to specific locations in Jakarta during given time windows, which bank marketers can use as recommendations when creating promotion programs for their customers. We also present an analysis of computing power usage indicating that the strategy is well implemented on the big data platform.
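The preprocess-then-extract stages of such a pipeline are straightforward to prototype. Below is a minimal sketch using scikit-learn's LatentDirichletAllocation; the sample tweets, cleaning rules and topic count are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of the preprocess -> LDA topic-extraction stages of the
# pipeline described above. Tweet texts, topic count and cleaning rules
# are illustrative assumptions, not the paper's actual configuration.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "Great discounts at the mall this weekend!",
    "Long queue at the food court, but the cashback promo is worth it",
    "New credit card promo spotted near the shopping center",
]

def preprocess(text: str) -> str:
    """Lowercase and strip URLs, mentions and non-alphabetic characters."""
    text = text.lower()
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    return re.sub(r"[^a-z\s]", " ", text)

# Bag-of-words representation of the cleaned tweets.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform([preprocess(t) for t in tweets])

# Fit LDA; n_components (number of topics) is a tunable assumption.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per extracted topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")
```

In a deployment like the one described, the tweet list would be replaced by the streaming source and the per-topic word lists fed to the recommendation dashboard.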


2014 ◽  
Vol 35 (4/5) ◽  
pp. 284-292 ◽  
Author(s):  
Lucas Mak ◽  
Devin Higgins ◽  
Aaron Collie ◽  
Shawn Nicholson

Purpose – The purpose of this paper is to illustrate that Electronic Thesis and Dissertation (ETD) metadata can be used as data for institutional assessment and to map an extended research landscape when connected to other data sets through linked data models. Design/methodology/approach – This paper presents conceptual consideration of the ideas behind linked data architecture, leveraging ETDs and attendant metadata to build a case for institutional assessment. Analysis of graph data supports the considerations. Findings – The study reveals first and foremost that ETD metadata is in itself data. Concerns arise around creating URIs for data elements and the general applicability of linked data model formation. The analysis positively points to a rich environment of institutional relationships not readily found in traditional flat metadata records. Originality/value – This paper provides a new perspective on examining the research landscape through ETDs produced by graduate students in the higher education sector.
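As an illustration of the linked-data idea, the sketch below converts one flat ETD metadata record into RDF triples with rdflib; the base URI, record fields and vocabulary choice (Dublin Core terms) are assumptions for demonstration, not the authors' actual model.

```python
# Sketch: expressing one flat ETD metadata record as linked-data triples.
# The base URI and field names are illustrative assumptions; the paper's
# actual URI scheme and vocabulary may differ.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ETD = Namespace("http://example.edu/etd/")  # hypothetical base URI

record = {
    "id": "etd-2014-001",
    "title": "Graph Models for Institutional Assessment",
    "creator": "Jane Doe",
    "department": "Information Science",
}

g = Graph()
thesis = URIRef(ETD[record["id"]])

# Each flat field becomes a triple, making relationships queryable
# across data sets rather than locked inside one record.
g.add((thesis, RDF.type, DCTERMS.BibliographicResource))
g.add((thesis, DCTERMS.title, Literal(record["title"])))
g.add((thesis, DCTERMS.creator, Literal(record["creator"])))
g.add((thesis, DCTERMS.isPartOf, URIRef(ETD[record["department"]])))

print(g.serialize(format="turtle"))
```

Minting stable URIs for elements such as the department is exactly the design concern the Findings section raises: once two records share a URI, the graph connects them automatically.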


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Abeeku Sam Edu

Purpose – Enterprises are increasingly taking actionable steps to transform existing business models through digital technologies for service transformation, such as big data analytics (BDA). BDA capabilities allow financial institutions to source financial data, analyse them for insight, and store the resulting data and information on collaborative platforms for quick decision making. Accordingly, this study identifies how BDA capabilities can be deployed to deliver significant improvements in financial services agility. Design/methodology/approach – The study relied on survey data from 485 banking professionals covering BDA usage, IT capability development and financial service agility. The PLS-SEM technique was used to evaluate the underlying relationships and the applicability of the proposed research framework. Findings – The empirical tests show that distinctive BDA usage, grounded in the IT capability viewpoint, can enhance financial service agility provided enterprises develop technical capabilities alongside other relevant resources. Practical implications – The study further highlights the need for financial service managers to identify BDA technologies such as data mining, query and reporting, data visualisation, predictive modelling, streaming analytics, video analytics and voice analytics in order to focus on financial knowledge gathering and market observation. Financial managers can also deploy BDA tools to develop a strategic road map for data management, data transferability and knowledge discovery for customised financial products. Originality/value – This study is a useful contribution to the burgeoning discussion of how emerging technologies such as BDA can improve enterprise operations.
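PLS-SEM proper (latent constructs, path model, bootstrapped significance) is normally run in dedicated tools such as SmartPLS. Purely as a rough illustration of the partial-least-squares idea underlying the analysis, the sketch below relates synthetic survey-style "BDA usage" indicators to a "service agility" outcome with scikit-learn's PLSRegression; every variable name and data point is an invented stand-in.

```python
# Rough illustration only: this is plain PLS regression on invented data,
# not the paper's PLS-SEM analysis, which requires dedicated tooling.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 485  # matches the paper's sample size; the data itself is synthetic

# Hypothetical 5-point Likert indicators for BDA usage / IT capability.
X = rng.integers(1, 6, size=(n, 6)).astype(float)
# Hypothetical service-agility outcome, loosely driven by the indicators.
y = X @ np.array([0.4, 0.3, 0.2, 0.1, 0.05, 0.05]) + rng.normal(0, 1, n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)

print("R^2 on the synthetic data:", round(pls.score(X, y), 3))
print("indicator weights (first component):", pls.x_weights_[:, 0].round(2))
```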


2021 ◽  
Author(s):  
Hamdan AlSaadi ◽  
Faisal Rashid ◽  
Paulinus Bimastianto ◽  
Shreepad Khambete ◽  
Lucian Toader ◽  
...  

Abstract Big data analytics is the often complex process of examining large and varied data sets to uncover information. The aim of this paper is to describe how a Real Time Operation Center (RTOC) structures drilling data in an informative and systematic manner through a digital solution that can help organizations make informed business decisions and leverage business value to deliver wells efficiently and effectively. The Real Time Operation Center runs a process of collecting large chunks of structured and unstructured data, segregating and analyzing them, and discovering patterns and other useful business insights. The methods were based on structuring a detailed workflow, RACI and a quality checklist for every single process in the provision of real-time drilling data, which is digitally transformed into valuable information through a robust auditable process, quality standards and sophisticated software. The paper explains the RTOC Data Management System and how it helped the organization determine which data are relevant and can be analyzed to drive better business decisions in the future. The big data platform, in-house software and automated dashboards have helped the company build links between different assets, analyze technical gaps, create opportunities and move away from manual data entry (e.g. Excel), which was causing data errors, disconnection between information and wasted worker hours due to inefficiency. These solutions leverage analytics and unlock the value of data to enhance operational efficiency, drive performance and maximize profitability. As a result, the company successfully delivered 160 wells in 2019 (6% above the 2019 Business Plan and 10% above the number of wells delivered in 2018) more efficiently, at 28.2 days per 10kft for new wells (10% better than 2018), without compromising well objectives or quality. Moreover, despite increasing complexity, the high level of confidence in data analytics allowed the company to go beyond its normal operating envelope and set a major record in 2019 by drilling the world's fifth longest well.
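As a loose illustration of the kind of per-record quality gate such a workflow implies, the sketch below validates incoming real-time drilling records before they enter a curated store. Field names, validity ranges and rules are hypothetical, not the paper's actual RTOC quality standards.

```python
# Hypothetical sketch of a per-record quality gate for real-time drilling
# data; field names and validity ranges are invented, not the paper's
# actual RTOC quality standards.
from dataclasses import dataclass

@dataclass
class DrillingRecord:
    well_id: str
    depth_ft: float        # measured depth
    rop_ft_per_hr: float   # rate of penetration
    mud_weight_ppg: float  # mud density

def quality_issues(rec: DrillingRecord) -> list[str]:
    """Return a list of rule violations; empty means the record passes."""
    issues = []
    if not rec.well_id:
        issues.append("missing well_id")
    if not 0 < rec.depth_ft < 60_000:
        issues.append(f"depth out of range: {rec.depth_ft}")
    if rec.rop_ft_per_hr < 0:
        issues.append("negative ROP")
    if not 6 <= rec.mud_weight_ppg <= 22:
        issues.append(f"implausible mud weight: {rec.mud_weight_ppg}")
    return issues

incoming = [
    DrillingRecord("WELL-001", 12_500.0, 85.0, 10.2),
    DrillingRecord("", -40.0, 90.0, 3.0),  # bad record caught by the gate
]

curated = []
for rec in incoming:
    problems = quality_issues(rec)
    if problems:
        print(f"rejected {rec.well_id or '<no id>'}: {problems}")
    else:
        curated.append(rec)

print(f"{len(curated)} record(s) passed the quality gate")
```

An auditable process of this kind replaces the manual Excel entry the paper describes: rejected records are logged with their violations rather than silently propagating errors downstream.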


2019 ◽  
Vol 57 (8) ◽  
pp. 1734-1755 ◽  
Author(s):  
Deepa Mishra ◽  
Zongwei Luo ◽  
Benjamin Hazen ◽  
Elkafi Hassini ◽  
Cyril Foropon

Purpose Big data and predictive analytics (BDPA) has received great attention in terms of its role in making business decisions. However, current knowledge of how BDPA might link organizational capabilities and organizational performance (OP) remains unclear. Drawing from the resource-based view, the purpose of this paper is to propose a model to examine how information technology (IT) deployment (i.e. strategic IT flexibility, business–BDPA partnership and business–BDPA alignment) and HR capabilities affect OP through BDPA. Design/methodology/approach To test the proposed hypotheses, structural equation modeling is applied to survey data collected from 159 Indian firms. Findings The results show that BDPA diffusion mediates the influence of IT deployment and HR capabilities on OP. In addition, there is a direct effect of IT deployment and HR capabilities on BDPA diffusion, which also has a direct relationship with OP. Originality/value Through this study, the authors demonstrate that IT deployment and HR capabilities have an indirect impact on OP through BDPA diffusion.
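The mediation structure being tested (IT deployment → BDPA diffusion → performance) can be conveyed with a simple two-regression sketch. The paper itself uses full structural equation modeling; the statsmodels illustration below on synthetic data only shows the logic, and all variable names and coefficients are invented.

```python
# Illustration of the mediation structure tested in the paper
# (IT deployment -> BDPA diffusion -> organizational performance).
# The study uses structural equation modeling; this two-step regression
# on synthetic data only sketches the logic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 159  # matches the paper's sample size; the data is synthetic

it_deploy = rng.normal(size=n)                             # exogenous capability
bdpa = 0.6 * it_deploy + rng.normal(scale=0.8, size=n)     # mediator
perf = 0.5 * bdpa + 0.1 * it_deploy + rng.normal(scale=0.8, size=n)

# Step 1: does IT deployment predict the mediator (BDPA diffusion)?
m1 = sm.OLS(bdpa, sm.add_constant(it_deploy)).fit()

# Step 2: does the mediator carry the effect on performance once
# IT deployment is controlled for?
X = sm.add_constant(np.column_stack([it_deploy, bdpa]))
m2 = sm.OLS(perf, X).fit()

print("IT -> BDPA path:", round(m1.params[1], 2))
print("BDPA -> performance (IT controlled):", round(m2.params[2], 2))
print("residual direct IT -> performance:", round(m2.params[1], 2))
```

A large first path plus a small residual direct effect is the pattern the paper reports as mediation through BDPA diffusion.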


Author(s):  
Longzhi Yang ◽  
Jie Li ◽  
Noe Elisa ◽  
Tom Prickett ◽  
Fei Chao

Abstract Big data refers to large, complex, structured or unstructured data sets. Big data technologies enable organisations to generate, collect, manage, analyse and visualise big data sets, and provide insights to inform diagnosis, prediction or other decision-making tasks. One of the critical concerns in handling big data is the adoption of an appropriate big data governance framework to (1) curate big data in a required manner to support quality data access for effective machine learning and (2) ensure the framework regulates the storage and processing of the data from providers and users in a trustworthy way within the related regulatory frameworks (both legally and ethically). This paper proposes a framework of big data governance that guides organisations to make better data-informed business decisions within the related regulatory framework, with close attention paid to data security, privacy and accessibility. To demonstrate this process, the work also presents an example implementation of the framework based on a case study of big data governance in cybersecurity. The framework has the potential to guide the management of big data in different organisations for information sharing and cooperative decision-making.
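To make the governance idea concrete, here is a small hypothetical sketch of a policy gate that checks who may access a data set, and for what purpose, before processing proceeds. The roles, purposes, data-set names and rules are invented examples in the spirit of the cybersecurity case study, not the paper's framework.

```python
# Hypothetical sketch of a data-governance policy gate: access to a data
# set is granted only if the requester's role and stated purpose satisfy
# the data set's policy. Roles, purposes and rules are invented examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    allowed_roles: frozenset[str]
    allowed_purposes: frozenset[str]
    contains_personal_data: bool

POLICIES = {
    "netflow_logs": Policy(
        allowed_roles=frozenset({"security_analyst"}),
        allowed_purposes=frozenset({"threat_detection"}),
        contains_personal_data=True,
    ),
    "malware_signatures": Policy(
        allowed_roles=frozenset({"security_analyst", "researcher"}),
        allowed_purposes=frozenset({"threat_detection", "model_training"}),
        contains_personal_data=False,
    ),
}

def may_access(dataset: str, role: str, purpose: str) -> bool:
    """Grant access only when role and purpose both satisfy the policy."""
    policy = POLICIES.get(dataset)
    if policy is None:
        return False  # ungoverned data is not released
    return role in policy.allowed_roles and purpose in policy.allowed_purposes

# A researcher may train models on signatures, but not read netflow logs.
print(may_access("malware_signatures", "researcher", "model_training"))  # True
print(may_access("netflow_logs", "researcher", "model_training"))        # False
```

Encoding purpose limitation alongside role checks is one way a framework can honour both the legal and the ethical sides of the regulatory constraints the paper emphasises.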


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sathyaraj R ◽  
Ramanathan L ◽  
Lavanya K ◽  
Balasubramanian V ◽  
Saira Banu J

Purpose – The innovation in big data is increasing day by day, to the point that conventional software tools face several problems in managing it. Moreover, the occurrence of imbalanced data in massive data sets is a major constraint for the research industry. Design/methodology/approach – The purpose of the paper is to introduce a big data classification technique using the MapReduce framework based on an optimization algorithm. The big data classification is enabled using the MapReduce framework, which utilizes the proposed optimization algorithm, named the chicken-based bacterial foraging (CBF) algorithm. The proposed algorithm is generated by integrating the bacterial foraging optimization (BFO) algorithm with the cat swarm optimization (CSO) algorithm. The model executes in two stages, namely training and testing phases. In the training phase, the big data produced from different distributed sources is subjected to parallel processing by the mappers, which perform preprocessing and feature selection based on the proposed CBF algorithm. The preprocessing step eliminates redundant and inconsistent data, whereas the feature selection step extracts the significant features from the preprocessed data to improve classification accuracy. The selected features are fed into the reducer for data classification using the deep belief network (DBN) classifier, which is trained using the proposed CBF algorithm so that the data are classified into various classes; at the end of the training process, the individual reducers present the trained models. Thus, incremental data are handled effectively based on the model from the training phase. In the testing phase, the incremental data are split into different subsets and fed into the different mappers for classification. Each mapper contains a trained model obtained from the training phase, which is utilized for classifying the incremental data. After classification, the outputs from each mapper are fused and fed into the reducer for the final classification. Findings – The maximum accuracy and Jaccard coefficient are obtained using the epileptic seizure recognition database. The proposed CBF-DBN produces a maximal accuracy value of 91.129%, whereas the accuracy values of the existing neural network (NN), DBN and naive Bayes classifier-term frequency–inverse document frequency (NBC-TFIDF) are 82.894%, 86.184% and 86.512%, respectively. The proposed CBF-DBN produces a maximal Jaccard coefficient value of 88.928%, whereas the Jaccard coefficient values of the existing NN, DBN and NBC-TFIDF are 75.891%, 79.850% and 81.103%, respectively. Originality/value – In this paper, a big data classification method is proposed for categorizing massive data sets within the constraints of huge data. Classification is performed on the MapReduce framework, across training and testing phases, so that the data are handled in parallel. In the training phase, the big data is partitioned into subsets and fed to the mappers, where feature extraction selects the significant features; the reducers then classify the data using those features. The DBN classifier is used for the classification, with the DBN trained by the proposed CBF algorithm, and the trained model is produced as output. In the testing phase, incremental data are split into subsets and fed to the mappers for classification using the trained models from the training phase; the classified results from each mapper are fused and fed into the reducer for the final classification of the big data.
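The testing-phase flow maps naturally onto any MapReduce-style runtime. The plain-Python sketch below imitates it: data is split across "mappers" that each apply a trained model, and a "reducer" fuses the per-mapper outputs. The stand-in model is a trivial threshold classifier; the paper's CBF-trained DBN is not reproduced here.

```python
# Plain-Python imitation of the paper's testing phase: split incremental
# data across mappers, classify each split with a trained model, then fuse
# the per-mapper outputs in the reducer. The model here is a trivial
# threshold classifier standing in for the paper's CBF-trained DBN.
from collections import Counter

def trained_model(features: list[float]) -> int:
    """Stand-in 'trained model': classify by mean feature value."""
    return 1 if sum(features) / len(features) > 0.5 else 0

def mapper(split: list[list[float]]) -> list[int]:
    """Each mapper applies its copy of the trained model to its split."""
    return [trained_model(sample) for sample in split]

def reducer(per_mapper_outputs: list[list[int]]) -> list[int]:
    """Fuse mapper outputs; here by simple concatenation and a tally."""
    fused = [label for output in per_mapper_outputs for label in output]
    print("class distribution:", Counter(fused))
    return fused

# Incremental data split into subsets, one per mapper.
incremental = [
    [[0.9, 0.8], [0.2, 0.1]],  # split for mapper 0
    [[0.7, 0.6], [0.3, 0.4]],  # split for mapper 1
]

results = reducer([mapper(split) for split in incremental])
print("fused predictions:", results)
```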


2019 ◽  
Vol 120 (2) ◽  
pp. 265-279 ◽  
Author(s):  
Tingyu Weng ◽  
Wenyang Liu ◽  
Jun Xiao

Purpose The purpose of this paper is to design a model that can accurately forecast supply chain sales. Design/methodology/approach This paper proposes a new model based on LightGBM and LSTM to forecast supply chain sales. In order to verify the accuracy and efficiency of this model, three representative supply chain sales data sets are selected for experiments. Findings The experimental results show that the combined model can forecast supply chain sales with high accuracy, efficiency and interpretability. Practical implications With the rapid development of big data and AI, using big data analysis and algorithmic technology to accurately forecast the long-term sales of goods will provide the data foundation for the supply chain and key technical support for enterprises building supply chain solutions. This paper provides an effective method for supply chain sales forecasting, which can help enterprises to scientifically and reasonably forecast long-term commodity sales. Originality/value The proposed model not only inherits the ability of the LSTM model to automatically mine high-level temporal features, but also retains the advantages of the LightGBM model, such as high efficiency and strong interpretability, making it suitable for industrial production environments.
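One simple way to combine the two learners, sketched below on synthetic data, is to train a Keras LSTM on sliding windows of the sales series, train a LightGBM regressor on the same windows as flat feature vectors, and average their predictions. The window length, architecture and 50/50 blend are assumptions; the paper's actual combination scheme may differ.

```python
# Sketch of one plausible LightGBM + LSTM combination on synthetic sales
# data. Window size, architecture and the 50/50 blend are assumptions;
# the paper's actual combination scheme may differ.
import numpy as np
import lightgbm as lgb
import tensorflow as tf

rng = np.random.default_rng(1)
sales = 100 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 2, 300)

WINDOW = 14  # days of history per training example (assumption)
X = np.stack([sales[i:i + WINDOW] for i in range(len(sales) - WINDOW)])
y = sales[WINDOW:]

# LightGBM on flat window features.
gbm = lgb.LGBMRegressor(n_estimators=200)
gbm.fit(X, y)

# LSTM on the same windows, shaped (samples, timesteps, features).
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X[..., None], y, epochs=5, verbose=0)

# Average the two forecasts for the most recent window.
last = sales[-WINDOW:]
pred_gbm = gbm.predict(last[None, :])[0]
pred_lstm = float(lstm.predict(last[None, :, None], verbose=0)[0, 0])
print("blended next-day forecast:", round(0.5 * pred_gbm + 0.5 * pred_lstm, 2))
```

The division of labour reflects the abstract's claim: the LSTM mines temporal structure while the gradient-boosted trees stay fast and inspectable.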


2017 ◽  
Vol 34 (5) ◽  
pp. 10-13 ◽  
Author(s):  
Stuti Saxena

Purpose The purpose of this paper is to appreciate the futuristic trends of Big and Open Linked Data (BOLD). While designating the ongoing progress of BOLD as BOLD 0.0, the paper also identifies the trajectory of BOLD 0.0 as BOLD 1.0, BOLD 2.0 and BOLD 3.0 in terms of the complexity and management of data sets from different sources. Design/methodology/approach This is a viewpoint and the ideas presented here are personal. Findings The trajectory of BOLD will face ever-growing challenges as the nature and scope of data sets grow more complicated. The paper posits that by the time BOLD attains maturity, there will be a need for newer technologies and data architecture platforms that are relatively affordable and available as “Open Source”, if possible. Research limitations/implications Being exploratory in approach, this viewpoint presents a futuristic trend, which may or may not prove valid. Nevertheless, there are significant practical implications for academicians and practitioners in appreciating the likely challenges ahead for ensuring the sustainability of BOLD. Originality/value While there are a number of studies on BOLD, none seek to propose the possible trends in BOLD’s progress. This paper seeks to plug this gap.


Author(s):  
Harshmit Kaur Saluja ◽  
Vinod Kumar Yadav ◽  
K.M. Mohapatra

On the one hand, big-data analytics has revolutionized predictive modeling by enabling complex data sets to be structured. On the other hand, interactive advertisement has changed the whole scenario of the advertising sector by structuring advertisement content so that it is customer-centric. The paper widens the view on the growing urge for customization techniques in the advertising sector with interactive enablers. It further examines how interactive advertisement and big data have helped to represent products and services from the customer's point of view and improved product and service performance. Building on an exhaustive literature review, three hypotheses are developed to address the above concerns.

