A Conceptual Framework for Big Data Analysis

Author(s):  
Fernando Almeida ◽  
Mário Santos

Big data is a term that has risen to prominence to describe data that exceeds the processing capacity of conventional database systems. Big data is a disruptive force that will affect organizations across industries, sectors, and economies. Hidden in the immense volume, variety, and velocity of data produced today are new information, facts, relationships, indicators, and pointers that either could not be practically discovered in the past or simply did not exist before. This new information, effectively captured, managed, and analyzed, has the power to profoundly enhance the effectiveness of government. This chapter looks at the main challenges and issues that must be addressed to capture the full potential of big data. Additionally, the authors present a conceptual framework for big data analysis structured in three layers: (a) data capture and preprocessing, (b) data processing and interaction, and (c) auxiliary tools. Each layer has a different role to play in capturing, processing, accessing, and analyzing big data.
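The three-layer structure described above can be illustrated with a minimal sketch. All function and field names here are hypothetical, invented for illustration; the chapter itself does not prescribe an implementation.

```python
# Hypothetical sketch of a three-layer big data pipeline:
# (a) capture and preprocessing, (b) processing and interaction,
# (c) auxiliary tools. Names and record shapes are assumptions.

def capture_and_preprocess(raw_records):
    """Layer (a): ingest raw records, drop malformed entries, normalize types."""
    cleaned = []
    for rec in raw_records:
        if isinstance(rec, dict) and "value" in rec:
            cleaned.append({"value": float(rec["value"]),
                            "source": rec.get("source", "unknown")})
    return cleaned

def process_and_interact(records):
    """Layer (b): aggregate cleaned records per source for querying."""
    totals = {}
    for rec in records:
        totals[rec["source"]] = totals.get(rec["source"], 0.0) + rec["value"]
    return totals

def audit(records, totals):
    """Layer (c): auxiliary tooling, here a simple consistency check."""
    return abs(sum(r["value"] for r in records) - sum(totals.values())) < 1e-9

raw = [{"value": "3.5", "source": "sensor-a"},
       {"value": "1.5", "source": "sensor-b"},
       "corrupt row",
       {"value": "2.0", "source": "sensor-a"}]

cleaned = capture_and_preprocess(raw)   # corrupt row is filtered out
totals = process_and_interact(cleaned)
print(totals)                  # {'sensor-a': 5.5, 'sensor-b': 1.5}
print(audit(cleaned, totals))  # True
```

Each layer consumes only the previous layer's output, mirroring the separation of concerns the framework proposes; in practice each layer would be backed by distributed tooling rather than in-memory Python.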

Big data marks a major turning point in the use of data and is a powerful vehicle for growth and profitability. A comprehensive understanding of a company's data and its potential can be a new vector for performance. It must be recognized that, without adequate analysis, data are just an unusable raw material. In this context, traditional data processing tools cannot support such an explosion of volume; they cannot respond to new needs in a timely manner and at a reasonable cost. Big data is a broad term generally referring to very large data collections that complicate the analytics tools used to harness and manage them. This chapter details what big data analysis is, presents the development of its applications, and examines the important changes that have touched the analytics context.


2015 ◽  
Vol 115 (9) ◽  
pp. 1577-1595 ◽  
Author(s):  
Wasim Ahmad Bhat ◽  
S.M.K. Quadri

Purpose – The purpose of this paper is to explore the challenges posed by Big Data to current trends in computation, networking and storage technology at various stages of Big Data analysis. The work aims to bridge the gap between theory and practice, and highlight the areas of potential research. Design/methodology/approach – The study employs a systematic and critical review of the relevant literature to explore the challenges posed by Big Data to hardware technology, and to assess the worthiness of hardware technology at various stages of Big Data analysis. Online computer databases were searched to identify the literature relevant to: Big Data requirements and challenges; and the evolution and current trends of hardware technology. Findings – The findings reveal that even though current hardware technology has not evolved with the motivation to support Big Data analysis, it significantly supports Big Data analysis at all stages. However, the findings also point toward some important shortcomings and challenges of current technology trends. These include: a lack of intelligent Big Data sources; the need for scalable real-time analysis capability; a lack of support (in networks) for latency-bound applications; the need for necessary augmentation (in network support) for peer-to-peer networks; and the need to rethink cost-effective high-performance storage subsystems. Research limitations/implications – The study suggests that much research is yet to be done in hardware technology if the full potential of Big Data is to be unlocked. Practical implications – The study suggests that practitioners need to choose the hardware infrastructure for Big Data meticulously, considering the limitations of the technology. Originality/value – This research arms industry, enterprises and organizations with concise and comprehensive technical knowledge about the capability of current hardware technology trends in solving Big Data problems. It also highlights the areas of potential research and immediate attention which researchers can exploit to explore new ideas and existing practices.


2021 ◽  
Author(s):  
Vasyl Melnyk ◽  
Olena Kuzmych ◽  
Nataliia Bahniuk ◽  
Nataliia Cherniashchuk ◽  
Liudmyla Hlynchuk ◽  
...  

Author(s):  
Rajanala Vijaya Prakash

The data management industry has matured over the last three decades, primarily based on Relational Database Management System (RDBMS) technology. As the amount of data collected and analyzed in enterprises has increased severalfold in the volume, variety and velocity of its generation and consumption, organizations have started struggling with the architectural limitations of traditional RDBMS technology. As a result, a new class of systems had to be designed and implemented, giving rise to the new phenomenon of “Big Data”. The data-driven world has the potential to improve the efficiency of enterprises and improve the quality of our lives. There are a number of challenges that must be addressed to allow us to exploit the full potential of Big Data. This article highlights the key technical challenges of Big Data.


2021 ◽  
Vol 8 (7) ◽  
pp. 112-116
Author(s):  
Yundong Hao

With the development of the social economy, information technology is constantly advancing, and the whole of society has entered the era of big data. The era of big data is mainly characterized by a large amount of data, various types, low value density, and high requirements for the speed and timeliness of data processing. In the era of big data, data should be extracted in a timely and effective manner so as to promote the development of various industries. As institutions that play an important role in the treatment of human disease, medical and health institutions have begun to gradually use big data in managing people's diagnosis and treatment. Taking medical and health institutions in Chengdu as an example, this paper explores how to use big data analysis in medical and health institutions so as to improve the efficiency of medical treatment and the quality of medical care.


2017 ◽  
Vol 5 (4) ◽  
pp. 169 ◽  
Author(s):  
Julia Rayz

While historically computational humor paid very little attention to sociology and mostly drew on subparts of linguistics and some psychology, Christie Davies wrote a number of papers that should affect the study of computational humor directly. This paper will look at one paper to illustrate this point, namely Christie’s chapter in the Primer of Humor Research. With the advancements in computational processing and big data analysis/analytics, it is becoming possible to look at a large collection of humorous texts that are available on the web. This includes older texts, among them joke materials, that are being scanned from previously published printed versions. Most of the approaches within computational humor have concentrated on comparing present/existing jokes, without taking into account classes of jokes that are absent in a given setting. While the absence of a class is unlikely to affect classification – something that researchers in computational humor seem to be interested in – it does come to light when the features of various classes are compared and conclusions are drawn. This paper will describe existing approaches and how they could be enhanced, thanks to Davies’s contributions and the advancements in data processing.

