Big data breaches and customer compensation strategies

2017 ◽  
Vol 37 (1) ◽  
pp. 56-74 ◽  
Author(s):  
Thomas Kude ◽  
Hartmut Hoehle ◽  
Tracy Ann Sykes

Purpose Big Data Analytics provides a multitude of opportunities for organizations to improve service operations, but it also increases the threat of external parties gaining unauthorized access to sensitive customer data. With data breaches now a common occurrence, it is increasingly clear that organizations must not only put measures in place to try to prevent breaches but also establish processes for dealing with a breach once it occurs. Prior research on information technology security and service failures suggests that customer compensation can potentially restore customer sentiment after such data breaches. The paper aims to discuss these issues. Design/methodology/approach In this study, the authors draw on the literature on personality traits and social influence to better understand the antecedents of perceived compensation and the effectiveness of compensation strategies. The authors studied the propositions using data collected in the context of Target’s large-scale data breach that occurred in December 2013 and affected the personal data of more than 70 million customers. In total, the authors collected data from 212 breached customers. Findings The results show that customers’ personality traits and their social environment significantly influence their perceptions of compensation. The authors also found that perceived compensation positively influences service recovery and customer experience. Originality/value The results add to the emerging literature on Big Data Analytics and will help organizations to more effectively manage compensation strategies in large-scale data breaches.

2021 ◽  
Author(s):  
R. Salter ◽  
Quyen Dong ◽  
Cody Coleman ◽  
Maria Seale ◽  
Alicia Ruvinsky ◽  
...  

The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets, with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers and highlighted that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals who can efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. The researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the resulting Data Lake Ecosystem Workflow, focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
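
The report itself defines the six workflow phases; as one hedged illustration of the data-integrity goal, the Python sketch below (an illustrative assumption, not part of the published workflow) builds a SHA-256 manifest of a dataset directory before transfer so the receiving side can verify that every file arrived unaltered.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in fixed-size chunks so very large files never fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(dataset_dir: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(dataset_dir)
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify_manifest(dataset_dir: str, manifest: dict) -> list:
    """Return the relative paths whose digests no longer match."""
    current = build_manifest(dataset_dir)
    return [p for p, d in manifest.items() if current.get(p) != d]

# Sender side, before transfer:
#   json.dump(build_manifest("dataset/"), open("manifest.json", "w"))
# Receiver side, after transfer:
#   bad = verify_manifest("dataset/", json.load(open("manifest.json")))
```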


Author(s):  
Sadaf Afrashteh ◽  
Ida Someh ◽  
Michael Davern

Big data analytics uses algorithms for decision-making and the targeting of customers. These algorithms process large-scale data sets and create efficiencies in the decision-making process for organizations, but they are often incomprehensible to customers and inherently opaque. Recent European Union regulations require that organizations communicate meaningful information to customers on the use of algorithms and the reasons behind decisions made about them. In this paper, we explore the use of explanations in big data analytics services. We rely on discourse ethics to argue that explanations can facilitate a balanced communication between organizations and customers, leading to transparency and trust for customers as well as customer engagement and reduced reputation risks for organizations. We conclude the paper by proposing future empirical research directions.


2021 ◽  
pp. 1-7
Author(s):  
Emmanuel Jesse Amadosi

With rapid development in technology, the built industry’s capacity to generate large-scale data is not in doubt. This trend of data upsurge, labelled “Big Data”, is currently being used to seek intelligent solutions in many industries, including construction. As a result, the appeal to embrace Big Data Analytics has gained wide advocacy globally. However, the general knowledge of Nigeria’s built environment professionals on Big Data Analytics is still limited, and this gap continues to account for the slow adoption of digital technologies like Big Data Analytics and the value they project. This study set out to assess the level of awareness and knowledge of professionals within the Nigerian built environment with a view to promoting the adoption of Big Data Analytics for improved productivity. To achieve this aim, a structured questionnaire survey was carried out among a total of 283 professionals drawn from 9 disciplines within the built environment in the Federal Capital Territory, Abuja. The findings revealed that: a) a low knowledge level of Big Data exists among professionals; b) knowledge among professionals and the level of Big Data Analytics application are strongly related; and c) professionals are interested in knowing more about the Big Data concept and how Big Data Analytics can be leveraged. The study therefore recommends an urgent paradigm shift towards digitisation to fully embrace and adopt Big Data Analytics, and enjoins stakeholders to promote collaborative schemes between practice-based professionals and academia in seeking intelligent and smart solutions to construction-related problems.


2017 ◽  
Vol 23 (3) ◽  
pp. 703-720 ◽  
Author(s):  
Daniel Bumblauskas ◽  
Herb Nold ◽  
Paul Bumblauskas ◽  
Amy Igou

Purpose The purpose of this paper is to provide a conceptual model for the transformation of big data sets into actionable knowledge. The model introduces a framework for converting data to actionable knowledge and mitigating potential risk to the organization. A case utilizing a dashboard provides a practical application for the analysis of big data. Design/methodology/approach The model can be used by both scholars and practitioners in business process management. This paper builds and extends theories in the discipline, specifically related to taking action using big data analytics with tools such as dashboards. Findings The authors’ model made use of industry experience and network resources to gain valuable insights into effective business process management related to big data analytics. Cases have been provided to highlight the use of dashboards as a visual tool within the conceptual framework. Practical implications The literature review cites articles that have used big data analytics in practice. The transitions required to reach the actionable knowledge state and the dashboard visualization tools can all be deployed by practitioners. A specific case example from ESP International is provided to illustrate the applicability of the model. Social implications Information assurance, security, and the risk of large-scale data breaches are pressing problems in contemporary society. These topics have been considered and addressed within the model framework. Originality/value The paper presents a unique and novel approach for parsing data into actionable knowledge items, the identification of viruses, an application of visual dashboards for the identification of problems, and a formal discussion of the risk inherent in big data.
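
As a small illustration of the data-to-actionable-knowledge transition described above (a hypothetical sketch, not ESP International’s dashboard or the authors’ model), the following Python snippet reduces a set of raw security metrics to threshold-based alert items of the kind a visual dashboard might surface; the metric names and thresholds are invented for illustration.

```python
# Illustrative thresholds only -- not the model's or any company's actual rules.
THRESHOLDS = {"failed_logins": 50, "outbound_gb": 500, "scan_alerts": 10}

def actionable_items(metrics: dict) -> list:
    """Keep only the readings that exceed their risk threshold,
    turning raw data points into dashboard-ready action items."""
    return [
        {"metric": name, "value": value, "limit": THRESHOLDS[name],
         "action": f"investigate elevated {name}"}
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

print(actionable_items({"failed_logins": 73, "outbound_gb": 120, "scan_alerts": 14}))
```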


Web Services ◽  
2019 ◽  
pp. 1706-1716
Author(s):  
S. ZerAfshan Goher ◽  
Barkha Javed ◽  
Peter Bloodsworth

Due to the growing interest in harnessing the hidden significance of data, more and more enterprises are moving to data analytics. Data analytics requires the analysis and management of large-scale data to find hidden patterns among various data components and gain useful insights. The derived information is then used to predict future trends that can help a business flourish, such as customers' likes and dislikes, the reasons behind customer churn, and more. In this paper, several techniques for big data analysis are investigated along with their advantages and disadvantages. The significance of cloud computing for big data storage is also discussed. Finally, techniques for the robust and efficient use of big data are discussed.


2016 ◽  
Vol 116 (4) ◽  
pp. 646-666 ◽  
Author(s):  
Shi Cheng ◽  
Qingyu Zhang ◽  
Quande Qin

Purpose – The quality and quantity of data are vital for the effectiveness of problem solving. Nowadays, big data analytics, which requires managing an immense amount of data rapidly, has attracted more and more attention. It is a new research area in the field of information processing techniques, and it faces the big challenges of large data volumes, high dimensionality, and dynamically changing data. However, such issues might be addressed with help from other research fields, e.g., swarm intelligence (SI), a collection of nature-inspired search techniques. The paper aims to discuss these issues. Design/methodology/approach – In this paper, the potential application of SI in big data analytics is analyzed. The correspondence and association between big data analytics and SI techniques are discussed. As an example of the application of SI algorithms to big data processing, a commodity routing system in a port in China is introduced. Another example is the economic load dispatch problem in the planning of a modern power system. Findings – The characteristics of big data include volume, variety, velocity, veracity, and value. In SI algorithms, these features can be represented, respectively, as large-scale, high-dimensional, dynamical, noisy/surrogate, and fitness/objective problems, which have been solved effectively. Research limitations/implications – In the current research, the port example problem is formulated but not yet solved, given the ongoing nature of the project. The example could be understood as advanced IT or data processing technology; however, its underlying mechanism could be the SI algorithms. This paper is the first step in research to apply SI algorithms to big data analytics problems. Future research will compare the performance of the method and fit it into a dynamic real system. Originality/value – Based on the combination of SI and data mining techniques, the authors can better understand big data analytics problems and design more effective algorithms to solve real-world big data analytical problems.
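
To make the SI side of this mapping concrete, the sketch below shows a bare-bones particle swarm optimization loop in Python. It is a generic minimal sketch, not the paper’s commodity-routing or load-dispatch system: the swarm size, dimensionality, and repeated fitness evaluation correspond to the large-scale, high-dimensional, and dynamical properties listed in the findings, and the sphere function stands in for a real objective.

```python
import random

def pso(fitness, dim=10, swarm_size=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Bare-bones particle swarm optimization (minimization)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity blends inertia, personal memory, and social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: the sphere function stands in for a real dispatch objective.
best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)
```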


2021 ◽  
Vol 11 (1) ◽  
pp. 6650-6655
Author(s):  
A. Alghamdi ◽  
T. Alsubait ◽  
A. Baz ◽  
H. Alhakami

Big data have attracted significant attention in recent years owing to their hidden potential to improve human life, especially when applied in healthcare. Big data can be described as a large collection of useful information that enables new breakthroughs or understandings. This paper reviews the use and effectiveness of data analytics in healthcare, examining secondary data sources such as books, journals, and other reputable publications between 2000 and 2020, using a strict keyword search strategy. Large-scale data have proven to be of great importance in healthcare, and there is therefore a need for advanced forms of data analytics, such as diagnostic and descriptive analytics, for improving healthcare outcomes. The utilization of large-scale data can form the backbone of predictive analytics, which is the baseline for predicting future individual outcomes.
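
As a hedged illustration of the descriptive-to-predictive progression the review describes (not drawn from any of the reviewed studies), the Python sketch below first summarizes a synthetic patient dataset and then fits a simple model to predict an individual outcome; the features, outcome, and use of scikit-learn are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(20, 90, n)
bmi = rng.normal(27, 5, n)
# Synthetic outcome: readmission risk rises with age and BMI (assumed).
risk = 1 / (1 + np.exp(-(0.04 * (age - 55) + 0.08 * (bmi - 27))))
readmitted = rng.random(n) < risk

X = np.column_stack([age, bmi])
X_train, X_test, y_train, y_test = train_test_split(X, readmitted, random_state=0)

# Descriptive analytics: summarize what the data already show.
print("readmission rate:", y_train.mean().round(3))

# Predictive analytics: estimate an individual's future outcome.
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test).round(3))
```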


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Phongphun Kijsanayothin ◽  
Gantaphon Chalumporn ◽  
Rattikorn Hewett

Abstract Introduction Many data analytics algorithms are originally designed for in-memory data. Parallel and distributed computing is a natural first remedy for scaling these algorithms into “Big algorithms” for large-scale data. Advances in many Big Data analytics algorithms have been contributed by MapReduce, a programming paradigm that enables parallel and distributed execution of massive data processing on large clusters of machines. Much research has focused on building efficient naive MapReduce-based algorithms or extending MapReduce mechanisms to enhance performance. However, we argue that these should not be the only research directions to pursue. We conjecture that when naive MapReduce-based solutions do not perform well, it could be because certain classes of algorithms are not amenable to the MapReduce model, and one should find a fundamentally different approach to a new MapReduce-based solution. Case description This paper investigates a case study of the scaling problem for “Big algorithms” applied to a popular association rule-mining algorithm, specifically the development of the Apriori algorithm in the MapReduce model. Discussion and evaluation Formal and empirical illustrations are explored to compare our proposed MapReduce-based Apriori algorithm with previous solutions. The findings support our conjecture, and our study shows promising results compared with the state-of-the-art performer, with a 7% average increase in performance over transaction sets ranging from 10,000 to 120,000 transactions. Conclusions The results confirm that an effective MapReduce implementation should avoid dependent iterations, such as those of the original sequential Apriori algorithm. These findings could lead to many more alternative non-naive MapReduce-based “Big algorithms”.
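
To illustrate the kind of iteration-free design the conclusion points to, the Python sketch below simulates a single-pass map/reduce counting of itemsets. It is a toy sketch, not the paper’s algorithm: it brute-force enumerates itemsets up to a small fixed size instead of pruning level-wise as the sequential Apriori does, precisely so that no map round depends on a previous round’s global output.

```python
from collections import Counter
from itertools import combinations

def map_phase(partition, max_len=3):
    """Mapper: count every itemset (up to max_len) in one data shard.

    Each shard is processed independently -- no round depends on another
    round's global output, the property named in the conclusions above.
    """
    counts = Counter()
    for transaction in partition:
        items = sorted(set(transaction))
        for k in range(1, max_len + 1):
            counts.update(combinations(items, k))
    return counts

def reduce_phase(shard_counts, min_support, total_transactions):
    """Reducer: sum per-shard counts and keep globally frequent itemsets."""
    totals = Counter()
    for counts in shard_counts:
        totals.update(counts)
    threshold = min_support * total_transactions
    return {itemset: n for itemset, n in totals.items() if n >= threshold}

# Toy usage: two shards stand in for mapper inputs on a cluster.
shards = [
    [("bread", "milk"), ("bread", "butter"), ("bread", "milk", "butter")],
    [("bread", "milk"), ("milk", "butter")],
]
frequent = reduce_phase(
    (map_phase(s) for s in shards), min_support=0.6, total_transactions=5)
print(frequent)
```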

