First of All, Understand Data Analytics Context and Changes

Big data marks a major turning point in the use of data and is a powerful vehicle for growth and profitability. A comprehensive understanding of a company's data and of its potential can open a new vector for performance. It must be recognized that, without adequate analysis, data remain an unusable raw material. In this context, traditional data-processing tools cannot keep up with such an explosion of volume; they cannot meet new needs in a timely manner or at a reasonable cost. Big data is a broad term generally referring to very large data collections that strain the analytics tools used to harness and manage them. This chapter details what big data analysis is, presents the development of its applications, and examines the important changes that have reshaped the analytics context.

2019 ◽  
Author(s):  
Abhishek Singh

Background: Big data analysis requires being able to process large data sets that corporates hold and fine-tune for their own use. Only recently has the need for big data caught the attention of low-budget corporate groups and academia, who typically do not have the money and resources to buy expensive licenses for big data analysis platforms such as SAS. Corporates continue to work with the SAS data format, largely because of organizational history and because their prior code was built on it; data providers therefore continue to supply data in SAS formats. An acute need has arisen from this gap: the data are in SAS format, but many coders have no SAS expertise or training, since the economic and inertial forces that shaped these two groups have been different. Method: We analyze these differences and the resulting need for SasCsvToolkit, which generates a CSV file from SAS-format data so that data scientists can apply their skills in other tools that process CSVs, such as R, SPSS, or even Microsoft Excel. It also provides conversion of CSV files to SAS format. In addition, a SAS database programmer often struggles to find the right method for a database search, exact match, substring match, except condition, filters, unique values, table joins, and data mining; for these, the toolkit provides template scripts to modify and run from the command line. Results: The toolkit has been implemented on the SLURM scheduler platform as a `bag-of-tasks` algorithm for parallel and distributed workflows, and a serial version has also been incorporated. Conclusion: In the age of big data, where there are many file formats, software packages, and analytics environments, each with its own semantics for specific file types, a data engineer will find the functions of SasCsvToolkit very handy.
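The SAS-to-CSV direction described in the abstract can be approximated with off-the-shelf tools. The sketch below is a minimal illustration using pandas, not the SasCsvToolkit itself; the file names and chunk size are hypothetical placeholders.

```python
# A minimal sketch of SAS-to-CSV conversion using pandas (not SasCsvToolkit
# itself); file names and chunk size are hypothetical.
import pandas as pd

def sas_to_csv(sas_path: str, csv_path: str, chunk_rows: int = 100_000) -> None:
    """Stream a .sas7bdat file into a CSV in chunks to keep memory bounded."""
    reader = pd.read_sas(sas_path, format="sas7bdat", chunksize=chunk_rows)
    for i, chunk in enumerate(reader):
        # Write the header only with the first chunk, then append.
        chunk.to_csv(csv_path, mode="w" if i == 0 else "a",
                     header=(i == 0), index=False)

if __name__ == "__main__":
    sas_to_csv("clinical_visits.sas7bdat", "clinical_visits.csv")
```

Reading in chunks mirrors the toolkit's motivation of handling files too large for a single in-memory load; the reverse CSV-to-SAS direction is not shown here.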


2021 ◽  
Author(s):  
Vasyl Melnyk ◽  
Olena Kuzmych ◽  
Nataliia Bahniuk ◽  
Nataliia Cherniashchuk ◽  
Liudmyla Hlynchuk ◽  
...  

Author(s):  
Fernando Almeida ◽  
Mário Santos

Big data is a term that has risen to prominence to describe data that exceed the processing capacity of conventional database systems. Big data is a disruptive force that will affect organizations across industries, sectors, and economies. Hidden in the immense volume, variety, and velocity of data produced today are new information, facts, relationships, indicators, and pointers that either could not be practically discovered in the past or simply did not exist before. This new information, effectively captured, managed, and analyzed, has the power to profoundly enhance the effectiveness of government. This chapter looks at the main challenges and issues that must be addressed to capture the full potential of big data. Additionally, the authors present a conceptual framework for big data analysis structured in three layers: (a) data capture and preprocessing, (b) data processing and interaction, and (c) auxiliary tools. Each layer plays a different role in capturing, processing, accessing, and analyzing big data.
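The layering idea can be pictured with a small sketch. The chapter defines the layers conceptually rather than as a concrete API, so every class and method name below is invented for illustration only.

```python
# Illustrative outline of a three-layer big data framework.
# All names are hypothetical; the chapter describes the layers conceptually.
from typing import Any, Iterable

class CaptureLayer:
    """Layer (a): data capture and preprocessing."""
    def ingest(self, sources: Iterable[str]) -> list[dict[str, Any]]:
        # e.g. pull raw records from logs, sensors, or feeds and clean them
        return [{"source": s, "payload": None} for s in sources]

class ProcessingLayer:
    """Layer (b): data processing and interaction."""
    def analyze(self, records: list[dict[str, Any]]) -> dict[str, int]:
        # e.g. aggregate, query, or run analytics over the captured records
        return {"record_count": len(records)}

class AuxiliaryTools:
    """Layer (c): auxiliary tools such as monitoring, visualization, export."""
    def report(self, summary: dict[str, int]) -> None:
        print(f"summary: {summary}")

if __name__ == "__main__":
    raw = CaptureLayer().ingest(["sensor_feed", "transaction_log"])
    AuxiliaryTools().report(ProcessingLayer().analyze(raw))
```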


2020 ◽  
Vol 166 ◽  
pp. 13027
Author(s):  
Anzhela Ignatyuk ◽  
Olena Liubkina ◽  
Tetiana Murovana ◽  
Alina Magomedova

A driving force of human society's development is resolving the contradiction between the unlimited use of natural resources in enterprises' economic activity, the environmental pollution that results from that activity, and the limited stock of natural, energy, and other resources. Research on the economic and environmental issues of green business management shows that enterprises currently face several basic types of problems when collecting and processing data on the results of their activities. The authors analyze how the public sector and green business are catching up with the global trend toward broader use of big data analysis to serve public interests and increase the efficiency of business activities. To identify the current approaches to big data analysis in the public and private sectors, the authors conducted interviews with stakeholders. The paper concludes with an analysis of what changes in approaches to big data analysis in the public and private sectors must be made to comply with the global trend toward greening the economy. Applying FinTech, methods for processing large data sets, and tools for implementing the principles of a greener economy will increase the investment attractiveness of green business and simplify interaction between the state and enterprises.


2021 ◽  
Vol 8 (7) ◽  
pp. 112-116
Author(s):  
Yundong Hao ◽  

With the development of the social economy, information technology is constantly advancing, and society as a whole has entered the era of big data. This era is mainly characterized by large volumes of data, varied data types, low value density, and high requirements for processing speed and timeliness. In the era of big data, data must be extracted promptly and effectively in order to promote the development of various industries. Medical and health institutions, which play an important role in curing human disease, have begun to gradually use big data in managing people's diagnosis and treatment. Taking medical and health institutions in Chengdu as an example, this paper explores how big data analysis can be used in such institutions to improve the efficiency of medical treatment and the quality of care.


2017 ◽  
Vol 5 (4) ◽  
pp. 169 ◽  
Author(s):  
Julia Rayz

While computational humor historically paid very little attention to sociology and mostly drew on subparts of linguistics and some psychology, Christie Davies wrote a number of papers that should affect the study of computational humor directly. This paper looks at one work to illustrate this point, namely Christie's chapter in the Primer of Humor Research. With the advancements in computational processing and big data analysis/analytics, it is becoming possible to examine the large collections of humorous texts available on the web; in particular, older texts, including joke materials, are being scanned from previously published printed versions. Most approaches within computational humor have concentrated on comparing present/existing jokes, without taking into account classes of jokes that are absent in a given setting. While the absence of a class is unlikely to affect classification – something researchers in computational humor seem to be interested in – it does come to light when features of various classes are compared and conclusions are drawn. This paper describes existing approaches and how they could be enhanced, thanks to Davies's contributions and the advancements in data processing.

