Cognitive big data: survey and review on big data research and its implications. What is really “new” in big data?

2017 ◽  
Vol 21 (1) ◽  
pp. 197-212 ◽  
Author(s):  
Artur Lugmayr ◽  
Björn Stockleben ◽  
Christoph Scheib ◽  
Mathew A. Mailaparampil

Purpose The purpose of this paper is to introduce and define Cognitive Big Data as a concept. Furthermore, it investigates what is really “new” in Big Data, as it seems to be a hyped-up concept that has emerged during recent years. The purpose is also to broaden the discussion around Big Data far beyond the common 4V (velocity, volume, veracity and variety) model. Design/methodology/approach The authors established an expert think tank to discuss the notion of Big Data, identify new characteristics and re-think what really is new in the idea of Big Data, by analyzing over 60 literature resources. They identified typical baseline scenarios (traffic, business processes, retail, health and social media) as a starting point from which they explored the notion of Big Data from different perspectives. Findings They concluded that the idea of Big Data is simply not new and recognized the need to develop a new approach toward Big Data. The authors also introduced a five-trait framework for “Cognitive Big Data”, comprising socio-technical system, data space, data richness, knowledge management (KM)/decision-making and visualization/sensory presentation. Research limitations/implications The viewpoint is centered on cognitive processes as a KM process. Practical implications Data need to be made available in an understandable form for the right application context and in the right portion size so that they can be turned into knowledge and eventually wisdom. One needs to know which data can be ignored, which data one is not aware of (dark data) and which data can be fully utilized for analysis (light data). In the foreground is the extension of human mental capabilities and data understandability. Social implications Cognitive Big Data implies a socio-technological knowledge system. Originality/value Introduction of Cognitive Big Data as a concept and framework.
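The five traits and the dark/light data distinction lend themselves to a small data model. The following Python sketch is purely illustrative and not code from the paper; the trait names come from the abstract, while the asset record and the example feed are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Trait(Enum):
    """The five traits of the Cognitive Big Data framework."""
    SOCIO_TECHNICAL_SYSTEM = "socio-technical system"
    DATA_SPACE = "data space"
    DATA_RICHNESS = "data richness"
    KM_DECISION_MAKING = "knowledge management / decision-making"
    SENSORY_PRESENTATION = "visualization / sensory presentation"


class Visibility(Enum):
    """Dark data: data one is not aware of; light data: fully usable for analysis."""
    DARK = "dark"
    LIGHT = "light"


@dataclass
class DataAsset:
    """A data source tagged with its visibility and the framework traits it touches."""
    name: str
    visibility: Visibility
    traits: set


# Hypothetical example: a traffic-sensor feed that is fully analysable.
feed = DataAsset("traffic_sensors", Visibility.LIGHT,
                 {Trait.DATA_SPACE, Trait.DATA_RICHNESS})
print(feed.name, feed.visibility.value, sorted(t.value for t in feed.traits))
```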

2015 ◽  
Vol 22 (4) ◽  
pp. 215-228 ◽  
Author(s):  
Alejandro Vera-Baquero ◽  
Ricardo Colomo Palacios ◽  
Vladimir Stantchev ◽  
Owen Molloy

Purpose – This paper aims to present a solution that enables organizations to monitor and analyse the performance of their business processes by means of Big Data technology. Business process improvement can drastically influence the profit of corporations and help them remain viable. However, traditional Business Intelligence systems are not sufficient to meet today's business needs. They are normally business-domain-specific and insufficiently process-aware to support process-improvement activities, especially in large and complex supply chains, which entail integrating, monitoring and analysing a vast amount of dispersed, unstructured event logs produced in a variety of heterogeneous environments. This paper tackles this variability by devising different Big-Data-based approaches that aim to gain visibility into process performance. Design/methodology/approach – The authors present a cloud-based solution that leverages Big Data (BD) technology to provide essential insights into business process improvement. The proposed solution is aimed at measuring and improving overall business performance, especially in very large and complex cross-organisational business processes, where this type of visibility is hard to achieve across heterogeneous systems. Findings – Three different BD approaches have been undertaken, based on Hadoop and HBase. First, a map-reduce approach is introduced that is suitable for batch processing and offers very high scalability. Second, an alternative solution integrates the proposed system with Impala; this approach improves significantly on map-reduce, as it performs real-time queries over HBase. Finally, the use of secondary indexes is proposed to enable immediate access to event instances for correlation, at the cost of high storage duplication and synchronization issues. This approach has produced remarkable results in two real functional environments presented in the paper. Originality/value – The value of the contribution lies in the comparison and integration of software packages towards an integrated solution intended for adoption by industry. In addition, the authors illustrate the deployment of the architecture in two different settings.
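The batch approach described above rests on the classic map-reduce pattern. The following pure-Python sketch illustrates that pattern on hypothetical event-log records; it is a minimal stand-in for, not a reproduction of, the authors' Hadoop/HBase implementation, and the field names are assumptions.

```python
from collections import defaultdict

# Hypothetical event-log records, as they might arrive from heterogeneous systems.
events = [
    {"process": "order-to-cash", "activity": "ship", "duration_ms": 420},
    {"process": "order-to-cash", "activity": "invoice", "duration_ms": 180},
    {"process": "procure-to-pay", "activity": "approve", "duration_ms": 960},
]


def map_phase(event):
    """Emit (key, value) pairs, keyed by process name."""
    yield event["process"], event["duration_ms"]


def reduce_phase(key, values):
    """Aggregate all durations observed for one process."""
    return key, {"events": len(values), "total_ms": sum(values)}


# Shuffle/sort: group mapped values by key, as the framework would between phases.
groups = defaultdict(list)
for event in events:
    for key, value in map_phase(event):
        groups[key].append(value)

for key, values in groups.items():
    print(reduce_phase(key, values))
```

Scalability comes from the map phase being embarrassingly parallel across log partitions; the Impala variant described in the paper instead trades this batch model for low-latency SQL-style queries over HBase.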


2018 ◽  
Vol 38 (7) ◽  
pp. 1589-1614 ◽  
Author(s):  
Morten Brinch

Purpose The value of big data in supply chain management (SCM) is typically motivated by the improvement of business processes and decision-making practices. However, the value associated with big data in SCM is not well understood. The purpose of this paper is to address this weakly understood aspect, the value of big data in SCM, from a business process perspective. Design/methodology/approach A content-analysis-based literature review has been completed, in which an inductive, three-level coding procedure was applied to 72 articles. Findings By identifying and defining constructs, a big data SCM framework is offered using business process theory and value theory as lenses. Value discovery, value creation and value capture represent different value dimensions and bring a multifaceted view on how to understand and realize the value of big data. Research limitations/implications This study further elucidates the big data and SCM literature by adding insights into how the value of big data in SCM can be conceptualized. As a limitation, the constructs and assimilated measures need further empirical evidence. Practical implications Practitioners could adopt the findings for the conceptualization of strategies and for educational purposes. Furthermore, the findings give guidance on how to discover, create and capture the value of big data. Originality/value Extant SCM theory has provided various views on big data. This study synthesizes big data and brings a multifaceted view on its value from a business process perspective. Construct definitions, measures and research propositions are introduced as an important step to guide future studies and research designs.


2017 ◽  
Vol 21 (4) ◽  
pp. 351-376 ◽  
Author(s):  
Marcin Czajkowski

Purpose The purpose of this paper is to critically examine existing models for the cost of quality. Having identified issues and limitations of historic models, the paper develops and implements a novel, structured hybrid cost-of-quality model to identify and effectively manage the cost of a company's product. Design/methodology/approach A theoretical framework is proposed based on an integration of three existing, historical cost-of-quality models into a structured hybrid model. Subsequently, an exploratory pilot case study in a manufacturing environment is described that illustrates the value of the model. Findings The paper shows how a hybrid model can identify cost of quality more accurately than traditional models. Using the new model, the author shows how gaps between a product's theoretical and actual costs can be highlighted. This allows management to drive down the cost of quality and improve business performance. Research limitations/implications The model would benefit from a company-wide implementation. The present study provides a starting point for further research in the international manufacturing sector. Practical implications The framework improves the knowledge of cost of quality by providing a new case study with full results and analysis from a UK-based manufacturing company. It provides a critical re-evaluation of the available literature, including the most recent publications as far as practically possible within the timescale available. The study shows the importance of comprehensive cost collection if companies are to have the right data needed to manage business excellence. Originality/value The paper presents the development of the first structured hybrid model for measuring cost of quality, using the strongest points of the three main approaches and addressing their limitations. It gives new arguments against the allocation of some cost elements within BS 6143-2:1990, resulting in recommendations for further discussion of the pros and cons of this suggestion.
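As an illustration of the kind of calculation such a model supports, the sketch below totals quality costs using the classic prevention-appraisal-failure categories that BS 6143-2:1990 builds on, and surfaces the gap between a product's theoretical and actual cost. All figures are hypothetical, and this is not the paper's hybrid model.

```python
# Hypothetical quality-cost ledger, grouped by the PAF categories.
costs = {
    "prevention": 12_000,        # e.g. training, process design
    "appraisal": 8_500,          # e.g. inspection, testing
    "internal_failure": 21_000,  # e.g. scrap, rework
    "external_failure": 15_500,  # e.g. warranty claims, returns
}

cost_of_quality = sum(costs.values())

# The hybrid model highlights the gap between a product's theoretical
# (quality-cost-free) cost and what it actually costs to produce.
theoretical_cost = 180_000
actual_cost = theoretical_cost + cost_of_quality
gap = actual_cost - theoretical_cost

print(f"Cost of quality: {cost_of_quality:,} "
      f"({gap / actual_cost:.1%} of actual cost)")
```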


2017 ◽  
Vol 23 (3) ◽  
pp. 645-670 ◽  
Author(s):  
Sune Dueholm Müller ◽  
Preben Jensen

Purpose The development within storage and processing technologies, combined with the growing collection of data, has created opportunities for companies to create value through the application of big data. The purpose of this paper is to examine how small and medium-sized companies in Denmark are using big data to create value. Design/methodology/approach The research is based on a literature review and on data collected from 457 Danish companies through an online survey. The paper looks at big data from the perspective of SMEs in order to answer the following research question: to what extent does the application of big data create value for small and medium-sized companies? Findings The findings show clear links between the application of big data and value creation. The analysis also shows that the value created through big data does not arise from data or technology alone but depends on the organizational context and managerial action. A holistic perspective on big data is advocated, focusing not only on the capture, storage, and analysis of data, but also on leadership through goal setting and the alignment of business strategies and goals, IT capabilities, and analytical skills. Managers are advised to communicate the business value of big data, adapt business processes to data-driven business opportunities, and in general act on the basis of data. Originality/value The paper provides researchers and practitioners with empirically based insights into how the application of big data creates value for SMEs.
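The "clear links" finding invites a simple check one could run on such survey data. The sketch below correlates hypothetical adoption and value-creation scores; the variables and 1-5 scales are assumptions, not the authors' instrument.

```python
import statistics

# Hypothetical survey rows: self-reported big-data adoption and perceived
# value creation, each on an assumed 1-5 scale.
responses = [
    {"adoption": 1, "value": 2}, {"adoption": 2, "value": 2},
    {"adoption": 3, "value": 3}, {"adoption": 4, "value": 4},
    {"adoption": 5, "value": 4}, {"adoption": 4, "value": 5},
]

adoption = [r["adoption"] for r in responses]
value = [r["value"] for r in responses]

# Pearson correlation as a crude proxy for a "link" between the application
# of big data and value creation (statistics.correlation needs Python 3.10+).
r = statistics.correlation(adoption, value)
print(f"adoption-value correlation: r = {r:.2f}")
```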


2019 ◽  
Vol 26 (5) ◽  
pp. 1141-1155 ◽  
Author(s):  
Enrico Battisti ◽  
S.M. Riad Shams ◽  
Georgia Sakka ◽  
Nicola Miglietta

Purpose The purpose of this paper is to improve understanding of the integration between big data (BD) and risk management (RM) in business processes (BPs), with special reference to corporate real estate (CRE). Design/methodology/approach This conceptual study follows, methodologically, the structuring inter-textual coherence process – specifically, the synthesised coherence tactical approach. It draws heavily on theoretical evidence published, mainly, in the corporate finance and business management literature. Findings A new conceptual framework is presented for CRE to proactively develop insights into the potential benefits of using BD as a business strategy/instrument. The approach was found to strengthen decision-making processes and encourage better RM – with significant consequences, in particular, for business process management (BPM). Specifically, by recognising the potential uses of BD, it is also possible to redefine processes with advantages in terms of RM. Originality/value This study contributes to the literature in the fields of real estate, RM, BPM and digital transformation. To the best of the authors' knowledge, although the literature has examined the concepts of BD, RM and BP, no prior studies have comprehensively examined these three elements and their conjoint contribution to CRE. In particular, the study highlights how the automation of data-intensive activities and the analysis of such data (in both structured and unstructured forms), as a means of supporting decision making, can lead to better efficiency in RM and the optimisation of processes.


2019 ◽  
Vol 25 (7) ◽  
pp. 1783-1801 ◽  
Author(s):  
Shu-hsien Liao ◽  
Yi-Shan Tasi

Purpose In the retailing industry, the database records the time and place at which a retail transaction is completed. E-business processes increasingly adopt databases from which in-depth customer and sales knowledge can be obtained through big data analysis. Specific big data analysis on a database system allows a retailer to design and implement business process management (BPM) that maximizes profits, minimizes costs and satisfies customers within a business model. Research on big data analysis for BPM in retailing is therefore a critical issue, and this paper aims to discuss it. Design/methodology/approach This paper develops a database and ER model, and uses cluster analysis, a C&R tree and the Apriori algorithm to illustrate big data analysis/data mining results for generating business intelligence and process management, thereby obtaining customer knowledge from the case firm's database system. Findings Big data analysis/data mining results such as customer profiles, product/brand display classifications and product/brand sales associations can be used to propose alternatives to the case firm for store layout and for developing bundling-sales business processes and management. Originality/value This paper is an example of developing the BPM of a database model and big data/data mining, based on insights from big data analysis applications for store layout and bundling sales in the retailing industry.
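To give a flavour of the mining techniques named above, the self-contained sketch below implements the first two levels of the Apriori algorithm and derives association rules of the kind that inform bundling and store-layout decisions. The baskets and thresholds are hypothetical; this is not the case firm's data or the paper's implementation.

```python
from itertools import combinations

# Hypothetical point-of-sale baskets from a retail database.
baskets = [
    {"beer", "chips"},
    {"beer", "chips", "soda"},
    {"soda", "wine"},
    {"beer", "soda"},
    {"chips", "wine"},
]

min_support = 0.4  # fraction of baskets an itemset must appear in


def support(itemset):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)


# Apriori levels 1 and 2: frequent single items, then candidate pairs
# built only from frequent items (the Apriori pruning step).
items = {i for b in baskets for i in b}
frequent_items = {i for i in items if support({i}) >= min_support}
frequent_pairs = [
    set(p) for p in combinations(sorted(frequent_items), 2)
    if support(set(p)) >= min_support
]

# Association rules from frequent pairs: confidence(A -> B) = s(A,B) / s(A).
for pair in frequent_pairs:
    for a in sorted(pair):
        b = (pair - {a}).pop()
        conf = support(pair) / support({a})
        if conf >= 0.6:
            print(f"{{{a}}} -> {{{b}}}  "
                  f"support={support(pair):.2f}  confidence={conf:.2f}")
```

A rule such as {beer} -> {chips} with high confidence is exactly the kind of sales association the paper turns into bundling and shelf-layout proposals.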


2018 ◽  
Vol 26 (3) ◽  
pp. 361-380 ◽  
Author(s):  
Federica De Santis ◽  
Claudia Presti

Purpose This paper aims to give an integrated framework for analysing the main opportunities and threats related to the exploitation of Big Data (BD) technologies within intellectual capital (IC) management. Design/methodology/approach By means of a structured literature review (SLR) of the extant literature on BD and IC, the study identified distinctive opportunities and challenges of BD technologies and related them to the traditional dimensions of IC. Findings The advent of BD has not radically changed the risks and opportunities of IC management already highlighted in previous literature. However, it has significantly amplified their magnitude and the speed with which they manifest themselves. Thus, a revision of the traditional managerial solutions needed to face them is required. Research limitations/implications The developed framework can contribute to the academic discourse on BD and IC as a starting point to understanding how BD can be turned into intangible assets from a value creation perspective. Practical implications The framework can also represent a useful decision-making tool for practitioners in identifying and evaluating the main opportunities and threats of an investment in BD technologies for IC management. Originality/value The paper responds to the call for more research on the integration of BD discourse in the fourth stage of IC research. It intends to improve the understanding of how BD technologies can be exploited to create value from an IC perspective, focussing not only on the potential of BD for creating value but also on the challenges that it poses to organizations.


Facilities ◽  
2019 ◽  
Vol 37 (1/2) ◽  
pp. 103-118 ◽  
Author(s):  
Simon Ashworth ◽  
Matthew Tucker ◽  
Carsten K. Druhmann

Purpose This paper aims to describe the development and testing of an employer's information requirements (EIR) template and guidance document designed to meet client and facility management (FM) needs in the building information modelling (BIM) process. Design/methodology/approach A qualitative design approach was used, with triangulation of methods: a focus group with the British Institute of Facilities Management (BIFM), semi-structured interviews with the case-study Glasgow Life Burrell Renaissance Project, which trialled the EIR, and peer reviews and interviews with BIM/CAFM experts from the BIM Academy and FM180. Findings Specific guidance to help clients and facility managers prepare key BIM documents like the EIR is needed. They are aware of industry BIM standards and guidance, but often not in detail. The Glasgow Life case study illustrated the EIR as a useful collaboration tool to bring stakeholders together in the early planning stages to understand client information needs. Social implications Assets and buildings account for most of the energy and material use in society. A well-structured EIR will help ensure the right information is available to enable optimisation of running costs and utility use over their whole life, thus contributing to long-term sustainability. Originality/value This paper provides a new EIR template and guidance document, ideal for practitioners in industry as a practical starting point to plan the client information requirements for BIM projects. It can be downloaded at www.bifm.org.uk/bifm/knowledge.
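To make the idea of an EIR concrete, the sketch below outlines one as a configuration structure, following the technical/management/commercial split used in PAS 1192-2-era guidance. The headings and placeholder entries are hypothetical and do not reproduce the BIFM template itself.

```python
# A hypothetical, minimal EIR skeleton; real templates are far more detailed.
eir = {
    "technical": {
        "software_platforms": "<BIM authoring tool, CAFM system>",
        "data_exchange_format": "COBie",  # a common FM handover format
        "level_of_definition": "<per project stage>",
    },
    "management": {
        "roles_and_responsibilities": "<named information manager>",
        "standards": "<e.g. BS/PAS 1192 series>",
        "security": "<access and ownership rules>",
    },
    "commercial": {
        "deliverables": "<asset register, O&M information>",
        "data_drops": "<aligned to project stages>",
    },
}

for section, requirements in eir.items():
    print(section.upper())
    for key, value in requirements.items():
        print(f"  {key}: {value}")
```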


Author(s):  
Robert Glenn Richey ◽  
Tyler R. Morgan ◽  
Kristina Lindsey-Hall ◽  
Frank G. Adams

Purpose Journals in business logistics, operations management, supply chain management, and business strategy have initiated ongoing calls for research on Big Data and its impact on research and practice. Currently, no extant research has defined the concept fully. The purpose of this paper is to develop an industry-grounded definition of Big Data by canvassing supply chain managers across six nations. The supply chain setting defines Big Data as inclusive of four dimensions: volume, velocity, variety, and veracity. The study further extracts multiple concepts that are important to the future of supply chain relationship strategy and performance. These outcomes provide a starting point and extend a call for theoretically grounded and paradigm-breaking research on managing business-to-business relationships in the age of Big Data. Design/methodology/approach A native-categories qualitative method commonly employed in sociology allows each executive respondent to provide rich, specific data. This approach reduces interviewer bias while examining 27 companies across six industrialized and industrializing nations. This is the first study in supply chain management and logistics (SCMLs) to use the native-category approach. Findings This study defines Big Data by developing four supporting dimensions that inform and ground future SCMLs research; details ten key success factors/issues; and discusses extensive opportunities for future research. Research limitations/implications This study provides a central grounding of the term, dimensions, and issues related to Big Data in supply chain research. Practical implications Supply chain managers are provided with a peer-specific definition and unified dimensions of Big Data. The authors detail key success factors for strategic consideration. Finally, this study notes differences in relational priorities concerning these success factors across different markets, and points to future complexity in managing supply chain and logistics relationships. Originality/value There is currently no central grounding of the term, dimensions, and issues related to Big Data in supply chain research. For the first time, the authors address how supply chain partners employ Big Data across the supply chain, uncover Big Data's potential to influence supply chain performance, and detail the obstacles to developing Big Data's potential. In addition, the study introduces the native-category qualitative interview approach to SCMLs researchers.


2017 ◽  
Vol 9 (3/4) ◽  
pp. 498-518 ◽  
Author(s):  
Peter Cronemyr ◽  
Ingela Bäckström ◽  
Åsa Rönnbäck

Purpose Today's organisations face the challenge of measuring the right things and then using those measurements as a starting point for quality improvement work. The failure to generate a shared value base is pointed out as one main cause of the inability to effectively apply quality management and lean within organisations; thus, measuring these values appears central. However, the measurement of values and behaviours seems to be missing within both concepts. There is therefore a need for a tool that measures not only quality values but also the behaviours that support or obstruct a quality culture. The purpose of this paper is to describe how a measurement tool for quality culture can be designed and structured. Design/methodology/approach A project with the aim of measuring and developing quality culture was started in 2015 by three Swedish universities/institutes and seven organisations. Over several workshops, quality values and supportive and obstructive behaviours were developed and described. This resulted in a survey in which employees of the participating organisations ranked the performance and importance of the described behaviours. The results were presented and discussed in a fourth workshop. Findings A framework of behaviours and a measurement tool for a quality culture are presented in this paper. Originality/value The framework of behaviours supporting or obstructing a quality culture is original and may be very useful for diagnosing and developing a quality culture.
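The performance-importance ranking described above lends itself to a simple gap analysis. The sketch below is a minimal illustration with hypothetical behaviours and scores; it is not the tool developed in the project.

```python
# Hypothetical workshop output: each behaviour rated (1-5) for how important
# it is to a quality culture and how well it is performed today.
ratings = {
    "leaders act on customer feedback": {"importance": 4.6, "performance": 3.1},
    "decisions are based on facts": {"importance": 4.4, "performance": 3.8},
    "improvement ideas are followed up": {"importance": 4.2, "performance": 2.9},
}

# Rank behaviours by the importance-performance gap: the larger the gap,
# the more a behaviour deserves attention in culture development work.
gaps = sorted(
    ratings.items(),
    key=lambda kv: kv[1]["importance"] - kv[1]["performance"],
    reverse=True,
)

for behaviour, r in gaps:
    print(f"{behaviour}: gap = {r['importance'] - r['performance']:+.1f}")
```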

