The Link Between Innovation and Prosperity

Author(s):  
Sonia Chien-i Chen ◽  
Radwan Alyan Kharabsheh

The digital era accelerates the growth of knowledge to such an extent that individuals and society struggle to manage it with traditional approaches. Innovative tools have been introduced to analyze massive data sets and extract business value cost-effectively and efficiently. These tools help extract business intelligence from explicit information so that tacit knowledge can be converted into actionable insights. Big data has become fashionable because of its accuracy and its capability to predict future trends, and it has demonstrated its power to bring business prosperity, from supermarket giants to businesses and disciplines of all kinds. However, as data spreads widely, people are concerned about its potential to increase inequality and threaten democracy. Big data governance is needed if people are to retain their right to privacy. This chapter explores how big data can be governed to preserve the benefits for both the individual and society. It aims to humanize technology in the digital era, so that people can benefit from living in the present.

Author(s):  
George Leal Jamil ◽  
Ângela Do Carmo Carvalho Jamil

Organizations are still confused about tacit knowledge principles, conceptualization, and applications. In this chapter, the authors examine how tacit knowledge can be valuable for practical decisions and implementations, from both theoretical and practical points of view. From the theoretical perspective, the definition of tacit knowledge is discussed as the result of a conceptual development refined over several decades. From the practical perspective, current and future trends emerge as its applications and influences are consolidated, particularly the association of tacit knowledge with explicit knowledge for planning, designing, and implementing real business solutions. Modern competitive features and propositions, such as big data, information technology adoption, and startup entrepreneurship, are also discussed, serving as an orientation for new studies, as tacit knowledge plays a differentiating role in new forms of value aggregation.


2018 ◽  
pp. 1122-1133
Author(s):  
Michael A. Chilton ◽  
James M. Bloodgood

In this chapter, the authors investigate how raw data, obtained from a variety of sources, can be processed into knowledge using automated techniques that help organizations gain a competitive advantage. Firms have amassed so much data that only automated methods, such as data mining or converting existing knowledge into expert systems, make it possible to derive any sense from it or to protect it from competitors. Further, the data that is processed may be considered tacit knowledge because it remains hidden from people until it is processed. The authors discuss various sources of data that might help an organization achieve and sustain a competitive advantage. A firm can mine its own production database for previously ignored insight about its customers and markets. It might also mine social media (e.g., Facebook and Twitter), which has become a forum for individual preferences and activities that the savvy organization can turn into competitive advantage. The authors also discuss how this knowledge can be protected from intrusion by competitors, so as to sustain the competitive position gained from discovering knowledge in massive data sets.
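
The chapter itself does not prescribe a particular mining algorithm; purely as an illustration of mining a firm's own transaction records for customer insight, the short Python sketch below counts frequently co-purchased item pairs over hypothetical point-of-sale data.

```python
# Minimal sketch (not from the chapter): mining a firm's own transaction
# records for frequently co-purchased item pairs, one simple way raw sales
# data can be turned into actionable market knowledge.
from itertools import combinations
from collections import Counter

# Hypothetical point-of-sale baskets; in practice these would be pulled
# from the firm's production database.
transactions = [
    {"bread", "milk", "butter"},
    {"bread", "milk"},
    {"milk", "diapers", "beer"},
    {"bread", "butter"},
    {"bread", "milk", "diapers"},
]

min_support = 0.4  # a pair must appear in at least 40% of transactions
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

n = len(transactions)
frequent = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}
for pair, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: support {support:.0%}")
```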


2022 ◽  
pp. 41-67
Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Machine learning (ML), neural networks (NNs), evolutionary algorithms (EAs), fuzzy systems (FSs), and computer science more broadly have been prominent and significant for many years and have been applied to many different areas. They have contributed much to the development of many large-scale corporations and massive organizations, which in turn generate enormous amounts of information and massive data sets (MDSs). These big data sets (BDSs) pose challenges for many commercial applications and research efforts. Consequently, many ML, NN, EA, FS, and other computer-science algorithms have been developed to handle such massive data sets successfully. To support this process, the authors survey the possible NN algorithms for large-scale data sets (LSDSs) in this chapter. Finally, they present a novel NN model for BDSs in both a sequential environment (SE) and a distributed network environment (DNE).
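
The abstract does not disclose the authors' novel NN model; as a hedged stand-in for the sequential and distributed settings it mentions, the NumPy sketch below trains a one-layer logistic model first with streamed mini-batches and then with gradients averaged across simulated data shards.

```python
# Illustrative sketch only (not the authors' model): training a tiny neural
# network on data too large to process at once, first sequentially in
# mini-batches, then by averaging per-shard gradients as a stand-in for a
# distributed network environment.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 20))           # stand-in for a massive data set
y = (X @ rng.normal(size=20) > 0).astype(float)

w = np.zeros(20)                              # single-layer logistic model

def grad(w, Xb, yb):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))       # sigmoid predictions
    return Xb.T @ (p - yb) / len(yb)          # logistic-loss gradient

# Sequential environment: stream the data in mini-batches.
for start in range(0, len(X), 1024):
    Xb, yb = X[start:start + 1024], y[start:start + 1024]
    w -= 0.1 * grad(w, Xb, yb)

# Distributed environment (simulated): each "worker" holds one shard,
# computes a local gradient, and the coordinator averages them.
shards = np.array_split(np.arange(len(X)), 4)
for _ in range(100):
    local = [grad(w, X[idx], y[idx]) for idx in shards]
    w -= 0.1 * np.mean(local, axis=0)

accuracy = np.mean((1 / (1 + np.exp(-(X @ w))) > 0.5) == y)
print(f"training accuracy: {accuracy:.3f}")
```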


Author(s):  
Ulrik Schmidt

“Data Masses and Sensory Environments” explores a major trend in current digital culture to visualise massive data sets in the form of abstract, dynamic environments. This ‘performative’ staging of big data manifests what we could think of as big data aesthetics proper because it gives the ‘big’ and ‘massive’ properties of big data a direct and perceptible visual expression. Drawing on several recent examples of big data visualisation, the article examines the different manifestations and aesthetic potential of such performative big data aesthetics. It is concluded that the performative ‘massification’ of big data in abstract environments has important implications for our everyday communication with and through data because it potentially generates a conflict between the comprehension of information and a more abstract and defocused ‘ambient’ sensation of being surrounded by a ubiquitous and all-encompassing sensory environment.


Author(s):  
Longzhi Yang ◽  
Jie Li ◽  
Noe Elisa ◽  
Tom Prickett ◽  
Fei Chao

Big data refers to large, complex, structured or unstructured data sets. Big data technologies enable organisations to generate, collect, manage, analyse, and visualise big data sets, and provide insights to inform diagnosis, prediction, or other decision-making tasks. One of the critical concerns in handling big data is the adoption of an appropriate big data governance framework to (1) curate big data in the manner required to support quality data access for effective machine learning and (2) ensure that the storage and processing of data from providers and users is regulated in a trustworthy way within the related regulatory frameworks (both legally and ethically). This paper proposes a big data governance framework that guides organisations to make better data-informed business decisions within the related regulatory frameworks, with close attention paid to data security, privacy, and accessibility. To demonstrate this process, the work also presents an example implementation of the framework based on a case study of big data governance in cybersecurity. This framework has the potential to guide the management of big data in different organisations for information sharing and cooperative decision-making.
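
The paper's own governance framework is not reproduced here; the toy Python sketch below, with invented roles, purposes, and policy tables, only illustrates the kind of access-mediation step such a framework might regulate (security, privacy, and accessibility checks before data are processed).

```python
# Hedged sketch only, not the paper's framework: a toy policy gate showing
# how a governance layer might mediate access to data sets according to
# sensitivity, requester role, and declared purpose before any processing.
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    sensitivity: str        # "public" | "internal" | "personal"

@dataclass
class AccessRequest:
    requester_role: str     # e.g. "analyst", "soc_operator" (hypothetical roles)
    purpose: str            # e.g. "threat-detection", "marketing"
    dataset: DataSet

# Illustrative policy: which roles may use which sensitivity levels, and
# which purposes are permitted for personal data (privacy constraint).
ROLE_CLEARANCE = {"analyst": {"public", "internal"},
                  "soc_operator": {"public", "internal", "personal"}}
PERMITTED_PURPOSES_FOR_PERSONAL = {"threat-detection"}

def authorise(req: AccessRequest) -> bool:
    if req.dataset.sensitivity not in ROLE_CLEARANCE.get(req.requester_role, set()):
        return False                          # security / accessibility check
    if (req.dataset.sensitivity == "personal"
            and req.purpose not in PERMITTED_PURPOSES_FOR_PERSONAL):
        return False                          # purpose-limitation (privacy) check
    return True

logs = DataSet("firewall-logs", "personal")
print(authorise(AccessRequest("soc_operator", "threat-detection", logs)))  # True
print(authorise(AccessRequest("analyst", "marketing", logs)))              # False
```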


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sathyaraj R ◽  
Ramanathan L ◽  
Lavanya K ◽  
Balasubramanian V ◽  
Saira Banu J

Purpose: Innovation in big data is increasing day by day, to the point that conventional software tools face several problems in managing it. Moreover, the occurrence of imbalanced data in massive data sets is a major constraint for the research community.
Design/methodology/approach: The purpose of the paper is to introduce a big data classification technique using the MapReduce framework based on an optimization algorithm. The classification is enabled by the MapReduce framework, which utilizes the proposed optimization algorithm, named the chicken-based bacterial foraging (CBF) algorithm. The proposed algorithm is generated by integrating the bacterial foraging optimization (BFO) algorithm with the cat swarm optimization (CSO) algorithm. The proposed model executes the process in two stages, namely training and testing. In the training phase, the big data produced from different distributed sources is subjected to parallel processing by the mappers, which perform preprocessing and feature selection based on the proposed CBF algorithm. The preprocessing step eliminates redundant and inconsistent data, whereas the feature selection step extracts the significant features from the preprocessed data to provide improved classification accuracy. The selected features are fed into the reducer for data classification using the deep belief network (DBN) classifier, which is trained with the proposed CBF algorithm so that the data are classified into various classes; at the end of the training process, the individual reducers deliver the trained models. Incremental data are thus handled effectively based on the model obtained in the training phase. In the testing phase, the incremental data are split into different subsets and fed into the different mappers for classification. Each mapper contains a trained model obtained from the training phase, which is used to classify the incremental data. After classification, the outputs from the mappers are fused and fed into the reducer for the final classification.
Findings: The maximum accuracy and Jaccard coefficient are obtained using the epileptic seizure recognition database. The proposed CBF-DBN produces a maximal accuracy value of 91.129%, whereas the accuracy values of the existing neural network (NN), DBN, and naive Bayes classifier-term frequency–inverse document frequency (NBC-TFIDF) are 82.894%, 86.184%, and 86.512%, respectively. The proposed CBF-DBN produces a maximal Jaccard coefficient value of 88.928%, whereas the Jaccard coefficient values of the existing NN, DBN, and NBC-TFIDF are 75.891%, 79.850%, and 81.103%, respectively.
Originality/value: In this paper, a big data classification method is proposed for categorizing massive data sets under the constraints of huge data volumes. The classification is performed on the MapReduce framework, based on training and testing phases, so that the data are handled in parallel. In the training phase, the big data is partitioned into different subsets and fed into the mappers, where the feature extraction step extracts the significant features. The obtained features are passed to the reducers, which classify the data using the DBN classifier trained with the proposed CBF algorithm; the trained model is obtained as the output of this phase. In the testing phase, the incremental data are considered for classification: new data are first split into subsets and fed into the mappers, the trained models from the training phase are used for classification, and the classified results from each mapper are fused and fed into the reducer for the final classification of the big data.
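
The CBF algorithm and the trained DBN are the authors' contributions and are not reproduced here; the Python sketch below, with a trivial threshold classifier standing in for the trained model, only mirrors the mapper/reducer flow described above (per-subset preprocessing and feature selection in the mappers, fused classification in the reducer).

```python
# Hedged structural sketch only: it mirrors the mapper/reducer split described
# above but substitutes a trivial threshold classifier for the authors'
# CBF-trained deep belief network, which is not reproduced here.
from statistics import mean

def mapper(records):
    """Preprocess one data subset and select features (toy stand-in)."""
    cleaned = [r for r in records if r is not None]      # drop inconsistent rows
    return [(row[0], row[-1]) for row in cleaned]        # keep two "selected" features

def reducer(feature_rows, threshold=0.5):
    """Classify the fused mapper outputs (toy stand-in for the DBN)."""
    return [1 if mean(f) > threshold else 0 for f in feature_rows]

# Hypothetical incremental data split across mappers, as in the testing phase.
subsets = [
    [(0.9, 0.2, 0.8), None, (0.1, 0.3, 0.2)],
    [(0.7, 0.6, 0.9), (0.2, 0.1, 0.05)],
]
fused = [features for subset in subsets for features in mapper(subset)]
print(reducer(fused))   # prints [1, 0, 1, 0]
```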


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1563-1566

Data privacy is an area of concern when processing massive datasets in big data applications. Such collections of big data sets are difficult to handle with on-hand management tools or traditional processing techniques. Big data has three defining characteristics, the three V's: volume, variety, and velocity. Privacy for such big data is a major challenge, which can be addressed through anonymization techniques. Datasets such as financial data, health records, and other confidential information held by various organizations need privacy protection against intruders and malicious entities. The aim of big data anonymization is to shield the privacy of the individual and make it legal to share the information without obtaining permission from the people concerned. The research paper discusses the basics of big data, the technology behind it, and the various challenges involved.
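
The paper does not include code; as a hedged illustration of the anonymization technique it refers to, the Python sketch below applies k-anonymity-style generalization to invented health records, coarsening quasi-identifiers and dropping direct identifiers before sharing.

```python
# Hedged illustration only (not from the paper): a toy generalization step of
# the kind used in k-anonymity style anonymization, coarsening quasi-identifiers
# (age, ZIP code) and dropping direct identifiers before a health-record
# extract is shared.
records = [
    {"name": "A. Jones", "age": 34, "zip": "66045", "diagnosis": "asthma"},
    {"name": "B. Smith", "age": 37, "zip": "66049", "diagnosis": "diabetes"},
    {"name": "C. Wu",    "age": 52, "zip": "66213", "diagnosis": "asthma"},
]

def anonymise(record):
    low = (record["age"] // 10) * 10
    return {
        "age_range": f"{low}-{low + 9}",     # generalize exact age to a decade band
        "zip": record["zip"][:3] + "**",     # truncate ZIP code
        "diagnosis": record["diagnosis"],    # keep the attribute of analytic interest
    }                                        # direct identifier ("name") is dropped

for row in map(anonymise, records):
    print(row)
```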

