Big Data, Tacit Knowledge and Organizational Competitiveness

Author(s):  
Nowshade Kabir ◽  
Elias Carayannis

In the process of conducting everyday business, organizations generate and gather a large amount of information about their customers, suppliers, competitors, processes, operations, routines, and procedures. They also capture communication data from mobile devices, instruments, tools, machines, and transmissions. Much of this data holds an enormous amount of valuable knowledge, the exploitation of which could yield economic benefit. Many organizations are taking advantage of business analytics and intelligence solutions to help them find new insights in their business processes and performance. For companies, however, this is still a nascent area, and many of them understand that more knowledge and insights can be extracted from available big data through creativity, recombination, and innovative methods, applied to new knowledge creation, and turned into substantial value. This has created a need for a suitable approach to the firm's big-data-related strategy. In this paper, the authors concur that big data is indeed a source of a firm's competitive advantage and consider it essential to have the right combination of people, tools, and data, along with management support and a data-oriented culture, to gain competitiveness from big data. However, the authors also argue that organizations should treat the knowledge hidden in big data as tacit knowledge and should take advantage of the cumulative experience garnered by companies and the studies done so far by scholars in this sphere from a knowledge management perspective. Based on this idea, a big-data-oriented framework for an organizational knowledge-based strategy is proposed here.

Author(s):  
Javier Conejero ◽  
Sandra Corella ◽  
Rosa M Badia ◽  
Jesus Labarta

Task-based programming has proven to be a suitable model for high-performance computing (HPC) applications. Different implementations have been good demonstrators of this fact and have promoted the acceptance of task-based programming in the OpenMP standard. Furthermore, in recent years, Apache Spark has gained wide popularity in business and research environments as a programming model for addressing emerging big data problems. COMP Superscalar (COMPSs) is a task-based environment that tackles distributed computing (including clouds) and is a good alternative task-based programming model for big data applications. This article describes why we consider task-based programming models a good approach for big data applications. The article includes a comparison of Spark and COMPSs in terms of architecture, programming model, and performance. It focuses on the differences between the two frameworks in structural terms, in their programmability interface, and in their efficiency, by means of three widely known benchmarking kernels: Wordcount, Kmeans, and Terasort. These kernels enable the evaluation of the most important functionalities of both programming models and the analysis of different workflows and conditions. The main results of this comparison are (1) COMPSs is able to extract the inherent parallelism from the user code with minimal coding effort, as opposed to Spark, which requires existing algorithms to be adapted and rewritten to explicitly use its predefined functions; (2) COMPSs achieves better performance than Spark; and (3) COMPSs has been shown to scale better than Spark in most cases. Finally, we discuss the advantages and disadvantages of both frameworks, highlighting the differences that make them unique, thereby helping to choose the right framework for each particular objective.
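The Wordcount kernel used in the comparison can be sketched in plain Python as a set of independent per-chunk counting tasks followed by a merge. This is an illustrative sketch of the task-based style only, using the standard library's thread pool; the chunking and merging logic are assumptions, not the actual COMPSs or Spark APIs.

```python
# Illustrative task-based Wordcount: independent chunk counts run as
# parallel tasks, and the partial results are reduced at the end.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk: str) -> Counter:
    """Task: count word occurrences in one text chunk."""
    return Counter(chunk.split())

def wordcount(chunks: list[str]) -> Counter:
    """Submit one counting task per chunk, then merge the partial counts."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(count_words, chunks)
    total = Counter()
    for partial in partials:
        total += partial
    return total

chunks = ["big data big tasks", "data flows as tasks"]
print(wordcount(chunks)["data"])  # each chunk contributes one "data" -> 2
```

The point of the task-based style is that the parallelism lives in the scheduling of `count_words` calls, not in the algorithm itself, which stays ordinary sequential code.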


Big Data ◽  
2016 ◽  
pp. 711-733 ◽  
Author(s):  
Jafreezal Jaafar ◽  
Kamaluddeen Usman Danyaro ◽  
M. S. Liew

This chapter discusses the veracity of data. The veracity issue is the challenge of imprecision in big data due to the influx of data from diverse sources. To overcome this problem, this chapter proposes a fuzzy knowledge-based framework that enhances the accessibility of Web data and resolves inconsistencies in the data model. D2RQ, Protégé, and fuzzy Web Ontology Language applications were used for configuration and performance evaluation. The chapter also provides the completeness fuzzy knowledge-based algorithm, which was used to determine the robustness and adaptability of the knowledge base. The results show that D2RQ is the more scalable option in the performance comparison. Finally, conclusions and future lines of research are provided.
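The fuzzy-logic idea underlying such a veracity framework can be sketched with a single membership function that grades how far a data source belongs to the fuzzy set "trustworthy". The trapezoidal shape is standard fuzzy-set machinery, but the score scale and breakpoints below are invented for illustration, not values from the chapter.

```python
# Trapezoidal membership function: maps a source-reliability score in
# [0, 100] to a degree of membership in the fuzzy set "trustworthy".
# Breakpoints (40, 60, 80, 95) are illustrative assumptions.
def trustworthy(score: float, a=40.0, b=60.0, c=80.0, d=95.0) -> float:
    """0 below a, rising on [a, b], fully 1 on [b, c], falling to 0 at d."""
    if score <= a or score >= d:
        return 0.0
    if b <= score <= c:
        return 1.0
    if score < b:
        return (score - a) / (b - a)   # rising edge
    return (d - score) / (d - c)       # falling edge

print(trustworthy(70))  # inside the plateau -> 1.0
print(trustworthy(50))  # halfway up the rising edge -> 0.5
```

Degrees between 0 and 1, rather than a hard true/false cut-off, are what let such a framework tolerate the imprecision that comes with data from diverse sources.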


2010 ◽  
pp. 1956-1976
Author(s):  
Saad Ghaleb Yaseen ◽  
Khaled Saleh Al Omoush

This chapter aims to identify the Critical Success Factors (CSFs) and outcomes of Web-based Supply Chain Collaboration (SCC). A total of 230 questionnaires were initially distributed to sample respondents of seven manufacturing firms in Jordan that use Web systems to collaborate with supply chain members. The results showed that top management support, IT infrastructure, training and education, business process reengineering, trust among partners, open information sharing, and performance measurement are critical factors for Web-based SCC implementation success. In addition, this study revealed that Web-based SCC implementation is positively related to supply chain relationship quality, performance effectiveness, and performance efficiency.


Author(s):  
Dennis T. Kennedy ◽  
Dennis M. Crossen ◽  
Kathryn A. Szabat

Big Data Analytics has changed the way organizations make decisions, manage business processes, and create new products and services. Business analytics is the use of data, information technology, statistical analysis, and quantitative methods and models to support organizational decision making and problem solving. The main categories of business analytics are descriptive analytics, predictive analytics, and prescriptive analytics. Big Data is data that exceeds the processing capacity of conventional database systems and is typically defined by three dimensions known as the Three V's: Volume, Variety, and Velocity. Big Data brings big challenges. Big Data has influenced not only the analytics that are utilized but also the technologies and the people who use them. While Big Data brings challenges, it also presents opportunities. Those who embrace Big Data and effective Big Data Analytics as a business imperative can gain competitive advantage.


2018 ◽  
Vol 8 (4) ◽  
pp. 334-347 ◽  
Author(s):  
Rana Khallaf ◽  
Nader Naderpajouh ◽  
Makarand Hastak

Purpose: The purpose of this paper is to build upon the extensive application of risk registries in the construction literature and establish a systematic methodology to develop risk registries. Risk registries channel the judgment of experts as a basis for risk analysis and should be tailored to each project to be more effective. Given their prevalence, there is a need for systematic integration of tacit and explicit knowledge to develop practical risk registries.

Design/methodology/approach: A combined approach is proposed, using the systematic literature review (SLR) technique to integrate explicit knowledge and the Delphi technique to integrate tacit knowledge in the development of risk registries. This two-step approach further increases the robustness of the registries by validating them through integrating and contrasting multiple forms of knowledge for a tailored risk registry.

Findings: The application of the proposed approach indicates that the use of multiple forms of knowledge can increase the robustness and practicality of risk registries. It also showcased its potential in the development of risk registries for complex projects. Examples include the modification of risk factors obtained from explicit sources of knowledge based on contextual tacit knowledge.

Originality/value: The proposed approach is an imperative step toward standardizing the development of risk registries. With its inherent validation process of integrating and contrasting tacit and explicit knowledge, practitioners can use this approach to develop practical risk registries for different categories of projects. Integrating different forms of knowledge can increase the impact of registries beyond risk assessment, in contexts such as decision making and performance assessment.
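The Delphi step of such an approach can be sketched as an aggregation rule over expert ratings of SLR-derived risk factors. The risk factors, the 1-5 rating data, and the consensus rule (median of at least 3 with an interquartile range of at most 1) are all invented for illustration; they are not the paper's instrument.

```python
# Hypothetical Delphi-style validation of SLR-derived risk factors.
import statistics

def reaches_consensus(ratings: list[int], min_median=3, max_iqr=1) -> bool:
    """Keep a risk factor if experts rate it as material AND agree closely."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return statistics.median(ratings) >= min_median and (q3 - q1) <= max_iqr

expert_ratings = {
    "schedule delay": [4, 5, 4, 4, 5],  # high ratings, tight agreement -> keep
    "currency risk":  [1, 5, 2, 4, 3],  # wide disagreement -> next Delphi round
}
registry = [risk for risk, r in expert_ratings.items() if reaches_consensus(r)]
print(registry)  # ['schedule delay']
```

Factors that fail the spread test would normally be fed back to the panel with anonymized summary statistics for another rating round, which is the iterative core of the Delphi technique.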


Author(s):  
M. Adel Serhani ◽  
Elarbi Badidi ◽  
Mohamed Vall O. Mohamed-Salem

As Web services grow rapidly and their adoption by a large number of business organizations increases, scalability and performance management of Web services environments are of paramount importance. This chapter proposes a scalable QoS-aware architecture, called QoSMA, for the management of QoS-aware Web services. The aim of this architecture is to provide QoS management support for both Web service providers and consumers. The proposed architecture is based on the commonly used notion of a QoS brokerage service. The QoS broker mediates between service requestors and service providers. Its responsibilities include performance monitoring of Web services, supporting users in Web service selection based on their QoS requirements, and negotiating QoS issues between requestors and providers. The QoSMA architecture provides the following benefits: first, it allows the automation of QoS management and QoS monitoring for both providers and clients; second, its scalability allows for better handling of increasing demand while maintaining the pre-agreed QoS between service requestors and providers.
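The broker's selection role can be sketched as filtering candidate providers by the consumer's QoS requirements and then ranking the survivors. The provider names, metrics, and the tie-breaking rule below are assumptions for illustration, not part of the QoSMA specification.

```python
# Hypothetical broker-side registry of monitored provider QoS metrics.
providers = {
    "svc-a": {"response_ms": 120, "availability": 0.999},
    "svc-b": {"response_ms": 80,  "availability": 0.990},
    "svc-c": {"response_ms": 95,  "availability": 0.999},
}

def select_provider(max_response_ms: float, min_availability: float):
    """Broker-style selection: filter by requirements, then rank by speed."""
    eligible = {name: q for name, q in providers.items()
                if q["response_ms"] <= max_response_ms
                and q["availability"] >= min_availability}
    if not eligible:
        return None  # no provider can honor the requested QoS
    return min(eligible, key=lambda n: eligible[n]["response_ms"])

print(select_provider(max_response_ms=100, min_availability=0.995))  # svc-c
```

In a full broker the metrics would come from ongoing performance monitoring, and a `None` result would trigger the negotiation step between requestor and providers rather than an outright failure.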


Author(s):  
Yu-Che Chen ◽  
Tsui-Chuan Hsieh

“Big data” is one of the emerging and critical issues facing government in the digital age. This study first delineates the defining features of big data (volume, velocity, and variety) and proposes a big data typology that is suitable for the public sector. This study then examines the opportunities of big data in generating business analytics to promote better utilization of information and communication technology (ICT) resources and improved personalization of e-government services. Moreover, it discusses the big data management challenges in building appropriate governance structure, integrating diverse data sources, managing digital privacy and security risks, and acquiring big data talent and tools. An effective big data management strategy to address these challenges should develop a stakeholder-focused and performance-oriented governance structure and build capacity for data management and business analytics as well as leverage and prioritize big data assets for performance. In addition, this study illustrates the opportunities, challenges, and strategy for big service data in government with the E-housekeeper program in Taiwan. This brief case study offers insight into the implementation of big data for improving government information and services. This article concludes with the main findings and topics of future research in big data for public administration.


Author(s):  
Ezer Osei Yeboah-Boateng

Big data is characterized as huge datasets generated at a fast rate, in unstructured, semi-structured, and structured formats, with inconsistencies and disparate data types and sources. The challenge is having the right tools to process large datasets in an acceptable timeframe and within a reasonable cost range. So, how can social media big datasets be harnessed for best-value decision making? The approach adopted was site scraping to collect online data from social media and other websites. The datasets were harnessed to provide a better understanding of customers' needs and preferences, and were applied to design targeted campaigns, optimize business processes, and improve performance. Using the social media facts and rules, a multivariate value creation decision model was built to assist executives in creating value, based on improved "knowledge" in a hindsight-foresight-insight continuum about their operations and initiatives, and in making informed decisions. The authors also demonstrated use cases of insights computed as equations that could be leveraged to create sustainable value.
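A multivariate decision model of this kind can be sketched as a weighted sum of normalized social media indicators per candidate initiative. The metric names, weights, and figures below are invented for illustration; they are not the authors' model or data.

```python
# Hypothetical multivariate value-scoring model over social media metrics.
# All indicators are assumed pre-normalized to [0, 1].
weights = {"sentiment": 0.5, "engagement": 0.3, "reach": 0.2}

initiatives = {
    "campaign-a": {"sentiment": 0.80, "engagement": 0.60, "reach": 0.90},
    "campaign-b": {"sentiment": 0.60, "engagement": 0.90, "reach": 0.70},
}

def value_score(metrics: dict) -> float:
    """Weighted multivariate score in [0, 1]; higher means more expected value."""
    return sum(weights[m] * metrics[m] for m in weights)

best = max(initiatives, key=lambda name: value_score(initiatives[name]))
print(best)  # campaign-a
```

The hindsight-foresight-insight framing maps naturally onto such a model: hindsight supplies the historical metrics, foresight the weights reflecting expected impact, and insight the ranking that informs the decision.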

