Big Data
Latest Publications

Total documents: 112 (five years: 0)
H-index: 4 (five years: 0)
Published by: IGI Global
ISBN: 9781466698406, 9781466698413

Big Data ◽  
2016 ◽  
pp. 2368-2387
Author(s):  
Hajime Eto

Because this book has a limited number of chapters and pages, many important issues remain unanalyzed. This chapter picks up some of them and discusses them briefly, as a starting point for future, more rigorous analyses. The focus is on how to apply data-scientific methods to the analysis of the public voice, claims, and behavior of tourists, customers, and the general public, using big data that have already been acquired and stored.


Big Data ◽  
2016 ◽  
pp. 2300-2315
Author(s):  
Dimitar Christozov ◽  
Stefka Toleva-Stoimenova

The chapter addresses the problems of the digital divide in the light of learning from data in an era of accumulated, available, and accessible Big Data. The Big Data phenomenon has arisen in recent years and adds new dimensions to the digital divide, a challenge that human society has faced since the appearance of computer technology. The objectives of this chapter are to highlight problems and barriers in learning from Big Data and to initiate discussion on ways to overcome these new challenges. The chapter attempts to define the “Big Data Phenomenon,” to identify the phases and activities in the process of learning from data, and to relate them to learning from Big Data. As a result, a paradigm of competences and barriers for acquiring Big Data literacy is proposed as a new dimension of literacy dividing human society.


Big Data ◽  
2016 ◽  
pp. 2249-2274
Author(s):  
Chinh Nguyen ◽  
Rosemary Stockdale ◽  
Helana Scheepers ◽  
Jason Sargent

The rapid development of technology and the interactive nature of Government 2.0 (Gov 2.0) are generating large data sets for governments, which struggle to control, manage, and extract the right information from them. Research into these large data sets (termed Big Data) has therefore become necessary. Governments now spend heavily on storing and processing vast amounts of information because of the proliferation and complexity of Big Data and a lack of effective records management. Electronic Records Management (ERM), on the other hand, is an established method for controlling and governing an organisation's important data. This paper investigates the challenges identified in the literature on Gov 2.0, Big Data, and ERM in order to develop a better understanding of how ERM can be applied to Big Data to extract useable information in the context of Gov 2.0. The paper suggests that ERM, with its well-established governance policies, could be a key building block in providing useable information to stakeholders. A framework is constructed to illustrate how ERM can play a role in the context of Gov 2.0. Future research is necessary to address the specific constraints and expectations placed on governments in terms of data retention and use.


Big Data ◽  
2016 ◽  
pp. 2165-2198
Author(s):  
José Carlos Cavalcanti

Analytics (the discovery and communication of significant patterns in data) of Big Data (characterized by large volumes of structured and unstructured data, from a variety of sources, at high velocity, i.e., real-time data capture, storage, and analysis), through the use of Cloud Computing (a model of network computing), is becoming the new “ABC” of information and communication technologies (ICTs), with important effects on the generation of new firms and the restructuring of those already established. However, as this chapter argues, successful application of these new ABC technologies and tools depends on two interrelated policy aspects: 1) the use of a proper model for approaching the structure and dynamics of the firm, and 2) how the complex trade-off between information technology (IT) and communication technology (CT) costs is handled within, between, and beyond firms, organizations, and institutions.


Big Data ◽  
2016 ◽  
pp. 2028-2046
Author(s):  
Ljiljana Kašćelan ◽  
Vladimir Kašćelan ◽  
Milijana Novović-Burić

This paper proposes a data mining approach to risk assessment in car insurance. Standard methods classify policies into a large number of tariff classes and assess risk on that basis. With data mining techniques, it is possible to obtain functional dependencies between the level of risk and the risk factors, as well as better predictive results. On the case study data it is shown that data mining techniques can predict claim sizes and the occurrence of claims with better accuracy than the standard methods, and this forms the basis for calculating the net risk premium and for risk classification. The paper also discusses the advantages of data mining methods over standard methods for risk assessment in car insurance, as well as the specificities of the results obtained in a small insurance market such as Montenegro's.
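To illustrate the kind of claim-occurrence prediction the abstract describes, here is a minimal sketch using a decision tree on synthetic policy data. The features (driver age, engine power), the synthetic claim rule, and the choice of model are assumptions for illustration only, not the chapter's actual dataset or method:

```python
# Illustrative sketch: predicting the probability of a claim from policy
# features with a decision tree, on synthetic (invented) data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
driver_age = rng.integers(18, 75, n)
engine_kw = rng.integers(40, 200, n)
# Synthetic rule: young drivers with powerful cars claim more often.
p_claim = 0.05 + 0.25 * ((driver_age < 25) & (engine_kw > 120))
claims = rng.random(n) < p_claim

X = np.column_stack([driver_age, engine_kw])
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, claims)

# Compare a 21-year-old with a 150 kW car to a 50-year-old with an 80 kW car.
risk = model.predict_proba([[21, 150], [50, 80]])[:, 1]
print(risk)
```

The predicted per-policy claim probabilities can then feed a net risk premium calculation or a risk classification, as the abstract outlines.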


Big Data ◽  
2016 ◽  
pp. 1917-1933
Author(s):  
Basant Agarwal ◽  
Namita Mittal

Opinion Mining, or Sentiment Analysis, is the study of people's opinions or sentiments expressed in text towards entities such as products and services. It has always been important to know what other people think. With the rapid growth in the availability and popularity of online review sites, blogs, forums, and social networking sites, the need to analyse and understand these reviews has arisen. The main approaches to sentiment analysis can be categorized into semantic orientation-based approaches, knowledge-based approaches, and machine-learning algorithms. This chapter surveys the machine learning approaches applied to sentiment analysis-based applications. Its main emphasis is the research involved in applying machine learning methods, mostly for sentiment classification at the document level. Machine learning-based approaches work in the following phases, discussed in detail in this chapter for sentiment classification: (1) feature extraction, (2) feature weighting schemes, (3) feature selection, and (4) machine-learning methods. The chapter also discusses the standard free benchmark datasets and evaluation methods for sentiment analysis. The authors conclude with a comparative study of some state-of-the-art methods for sentiment analysis and some possible future research directions in opinion mining and sentiment analysis.
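The four phases the abstract lists can be sketched as a scikit-learn pipeline; the toy documents, labels, and parameter choices below are invented for illustration and are not taken from the chapter:

```python
# Sketch of the four-phase pipeline for document-level sentiment
# classification, on a toy corpus.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer  # (1) extraction + (2) tf-idf weighting
from sklearn.feature_selection import SelectKBest, chi2       # (3) feature selection
from sklearn.naive_bayes import MultinomialNB                 # (4) machine-learning method

docs = [
    "great product, works perfectly and I love it",
    "terrible quality, broke after one day, awful",
    "excellent service and a wonderful experience",
    "horrible support, very disappointing purchase",
]
labels = ["pos", "neg", "pos", "neg"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=10)),
    ("nb", MultinomialNB()),
]).fit(docs, labels)

print(clf.predict(["wonderful product, great quality"]))
```

In practice each stage would be tuned (n-grams, weighting scheme, number of selected features, classifier) and evaluated on the benchmark datasets the chapter surveys.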


Big Data ◽  
2016 ◽  
pp. 1717-1735
Author(s):  
Paul Prinsloo ◽  
Sharon Slade

Learning analytics is an emerging but rapidly growing field, seen as offering unquestionable benefits to higher education institutions and students alike. Indeed, given its huge potential to transform the student experience, it could be argued that higher education has a duty to use learning analytics. In the flurry of excitement and eagerness to develop ever slicker predictive systems, few pause to consider whether the increasing use of student data also raises increasing concerns. This chapter argues that the issue is not whether higher education should use student data, but under which conditions, for what purpose, for whose benefit, and in what ways students may be actively involved. The authors explore issues including the constructs of general data and student data, and the scope for student responsibility in the collection, analysis, and use of their data. As an example of student engagement in practice, the chapter reviews the policy created by the Open University in 2014. The chapter concludes with an exploration of general principles for a new deal on student data in learning analytics.


Big Data ◽  
2016 ◽  
pp. 1668-1686
Author(s):  
Margee Hume ◽  
Craig Hume ◽  
Paul Johnston ◽  
Jeffrey Soar ◽  
Jon Whitty

Aged care is projected to be the fastest-growing sector within the health and community care industries (Reynolds, 2009). Strengthening the care-giving workforce, compliance, delivery, and technology is not only vital to our social infrastructure and improving the quality of care, but also has the potential to drive long-term economic growth and contribute to the Gross Domestic Product (GDP). This chapter examines the role of Knowledge Management (KM) in aged care organizations to assist in the delivery of aged care. With limited research related to KM in aged care, this chapter advances knowledge and offers a unique view of KM from the perspective of 22 aged care stakeholders. Using in-depth interviewing, this chapter explores the definition of knowledge in aged care facilities, the importance of knowledge planning, capture, and diffusion for accreditation purposes, and offers recommendations for the development of sustainable knowledge management practice and development.


Big Data ◽  
2016 ◽  
pp. 1582-1612
Author(s):  
Alessandro Marcengo ◽  
Amon Rapp

Although the Quantified Self (QS) application domain has grown in recent years, some palpable fundamental problems still relegate the QS movement to a phase of low maturity. The first is technological: a lack of maturity in technologies for data collection, processing, and visualization. This is accompanied by a perhaps more fundamental problem of deficits, biases, and a lack of integration of the aspects concerning the human side of the QS idea. The step the authors try to make in this chapter is to highlight aspects that could lead to a more robust approach in the QS area. This is done, primarily, through a new approach to data visualization and, secondly, through a necessary management of complexity, both in technological terms and, as concerns the human side of the whole issue, in theoretical terms. The authors go a little further, stressing how future research directions could have significant impacts at both the individual and the social level.


Big Data ◽  
2016 ◽  
pp. 1555-1581
Author(s):  
Gueyoung Jung ◽  
Tridib Mukherjee

In the modern information era, the amount of data has exploded, and current trends indicate that it will continue to grow exponentially. This humongous amount of data, referred to as big data, has given rise to the problem of finding the “needle in the haystack” (i.e., extracting meaningful information from big data). Many researchers and practitioners are focusing on big data analytics to address this problem. One of the major issues in this regard is the computational requirement of big data analytics. In recent years, the proliferation of many loosely coupled distributed computing infrastructures (e.g., modern public, private, and hybrid clouds, high-performance computing clusters, and grids) has made high computing capability available for large-scale computation. This has allowed the execution of big data analytics to gather pace across organizations and enterprises. However, even with high computing capability, efficiently extracting valuable information from vast, astronomical data remains a big challenge. Hence, unforeseen scalability of performance is required for the execution of big data analytics. A big question in this regard is how to maximally leverage the high computing capability of the aforementioned loosely coupled distributed infrastructures to ensure fast and accurate execution of big data analytics. To this end, this chapter focuses on synchronous parallelization of big data analytics over a distributed system environment to optimize performance.
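The core idea of synchronous parallelization can be sketched in a few lines: partition the data, process partitions in parallel, and block at a synchronization point until every partial result is back before reducing. The sketch below assumes a trivial map-reduce-style analytic (a sum of squares) and thread-based workers; it is an illustration of the concept, not the chapter's actual system:

```python
# Synchronous parallelization sketch: split, map in parallel,
# synchronize, then reduce.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Per-worker analytic step on one partition of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # executor.map is the synchronization point: consuming its
        # results blocks until each chunk has been processed.
        partials = list(pool.map(partial_sum_of_squares, chunks))
    return sum(partials)  # reduce phase

print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

The same split/map/synchronize/reduce shape scales from threads on one machine to workers spread across the loosely coupled clusters and clouds the abstract mentions; the hard part the chapter targets is doing the synchronization efficiently at that scale.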

