The tensions between comparability and locally meaningful data

Author(s):  
Sara Randall

In the contemporary global context where the demand for data and the calculation of indicators mean that sources of such data themselves are a powerful basis for decision making on both local and global stages, the degree to which these data are comparable takes on great importance. This chapter unpicks a number of situations and types of data where such comparability can be challenged. These can be summarized as (i) the comparability of concepts and definitions—using the household and marriage as examples which are examined in detail; (ii) the comparability of comprehensibility and answerability of questions, focusing on the Demographic and Health Survey (DHS) ‘ideal family size’ and also behavioural methods of contraception; (iii) the comparability of cultural willingness to answer questions, which considers those who will and will not talk about the dead; and (iv) gendered differences in the interpretation of survey questions. The examples call into question the whole notion of comparability and what apparently comparable data sets might actually be examining. The chapter concludes that much social behaviour is not inherently comparable cross-culturally and cross-linguistically and calls for much more care and sensitivity in analysing large data sets such as censuses and DHS surveys, especially when the analytical conclusions can have major implications for policy development and resource allocation.

Author(s):  
Dariusz Prokopowicz ◽  
Jan Grzegorek

Rapid progress is being made in the application of information technology to the analysis of the economic and financial situation of enterprises and to the processes supporting the management of organizations. Among the fastest-growing areas of information and communication technology, which are prerequisites for the progress of online electronic banking, are the standardization of financial operations carried out in the cloud and the use of large data sets on so-called Big Data platforms. Current Big Data solutions are not just large databases; their data warehouses allow multifaceted analysis of huge volumes of quantitative data for periodic managerial reporting. Business decision-making processes should be based on the analysis of reliable and up-to-date market and business data. In corporations, the information necessary for decision-making processes is collected, stored, ordered and pre-aggregated in the form of Business Intelligence reports. Business Intelligence analyses give managers the ability to analyze large data sets in real time, which significantly contributes to improving the efficiency of business management. At present, business analysts use either the advanced analytical formulas of MS Excel or computerized platforms that include ready-made Business Intelligence reporting tools.


2020 ◽  
Vol 8 (6) ◽  
pp. 4453-4456

In today’s emerging era of data science, where data plays a huge role in accurate decision making, it is very important to work on cleaned, non-redundant data. Because data is gathered from multiple sources, it may contain anomalies, missing values and other defects that need to be removed; this process is called data pre-processing. In this paper we perform data pre-processing on a news popularity data set, where extraction, transformation and loading (ETL) are carried out. The outcome of the process is a cleaned and refined news data set that can be used for further analysis and knowledge discovery on the popularity of news. Refined data gives more accurate predictions and can be better utilized in the decision-making process.
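As a rough illustration of the ETL steps described above, the sketch below uses pandas to extract a raw news data set, transform it by removing duplicates, missing values and anomalous records, and load the refined result. The file and column names (news_popularity_raw.csv, title, shares) are assumptions for illustration only, not details taken from the paper.

```python
# Minimal ETL-style pre-processing sketch using pandas.
# File and column names are illustrative assumptions, not from the paper.
import pandas as pd

# Extract: load the raw news popularity data from a CSV source.
raw = pd.read_csv("news_popularity_raw.csv")

# Transform: remove duplicates, handle missing values, drop obvious anomalies.
clean = raw.drop_duplicates()
clean = clean.dropna(subset=["title"])        # rows without a title are unusable
clean["shares"] = clean["shares"].fillna(0)   # treat missing share counts as zero
clean = clean[clean["shares"] >= 0]           # negative share counts are anomalies

# Load: write the refined data set for downstream analysis.
clean.to_csv("news_popularity_clean.csv", index=False)
print(f"kept {len(clean)} of {len(raw)} records")
```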


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets which are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
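A minimal sketch of the two processing paths is given below, assuming the spectrum-image is held as a NumPy array of shape (80, 80, 2, 1024) and that the stated dispersion of 20 channels/eV makes the 1 eV offset equal to 20 channels. The file name, array layout, shift direction and edge handling are assumptions for illustration, not details from the paper.

```python
import numpy as np

CHANNELS_PER_EV = 20          # stated dispersion: 20 channels/eV
OFFSET = 1 * CHANNELS_PER_EV  # the two spectra are offset in energy by 1 eV

# Hypothetical spectrum-image of shape (80, 80, 2, 1024);
# si[..., 0, :] and si[..., 1, :] are the two offset spectra at each pixel.
si = np.load("spectrum_image.npy")
spec_a, spec_b = si[..., 0, :], si[..., 1, :]

# Path 1: subtract the offset spectra to form an artifact-corrected
# difference spectrum at every pixel.
difference = spec_a - spec_b

# Path 2: numerically remove the energy offset (a 20-channel shift) and add
# the spectra to form a normal spectrum; channels left undefined by the
# shift are simply discarded in this sketch.
normal = spec_a[..., :-OFFSET] + spec_b[..., OFFSET:]
```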


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
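The kind of multivariate workflow described above can be sketched with scikit-learn on a synthetic particle-by-element matrix, as below. The log transform, component count and cluster count are illustrative assumptions meant only to show how the skew, zeros and unequal cluster sizes mentioned above are commonly handled; they are not the authors' actual procedure.

```python
# Synthetic particle-by-element composition matrix; the preprocessing choices
# are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.gamma(shape=0.5, scale=2.0, size=(5000, 8))  # skewed, many near-zero values
X[X < 0.1] = 0.0                                     # mimic finite detection limits

# Log-transform with a small offset to soften skew and zeros, then standardize
# so elements with large ranges do not dominate the analysis.
X_t = StandardScaler().fit_transform(np.log1p(X))

# Principal components analysis followed by cluster analysis.
scores = PCA(n_components=3).fit_transform(X_t)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))  # cluster sizes are typically very unequal
```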


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency in distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analysed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is proposed to use distributed systems that apply singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
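As a rough sketch of the data-reduction step, a truncated singular value decomposition can be computed with NumPy as below. The matrix size and the number of retained singular values are illustrative assumptions, and the MPI/MapReduce distribution of the computation studied in the article is not shown here.

```python
import numpy as np

# Hypothetical large data matrix (rows = records, columns = features).
rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 300))

# Full SVD, then keep only the k largest singular values/vectors, which
# preserves most of the structure while eliminating redundant dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20
A_reduced = U[:, :k] * s[:k]        # compact (10000 x k) representation
A_approx = A_reduced @ Vt[:k, :]    # low-rank reconstruction if needed

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(f"relative reconstruction error with k={k}: {err:.3f}")
```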


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants, who are required to write computer programs capable of solving them. An online judge system is used to automate the judging of the programs submitted by the users. Online judges are systems designed for the reliable evaluation of the source codes submitted by the users. Traditional online judging platforms are not ideally suited to programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework that is capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of submissions. Our system detects plagiarism by computing fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, which were generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets and compared its run time with MOSS, a widely used plagiarism detection tool.
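A minimal sketch of the fingerprinting idea, winnowing over k-gram hashes, is shown below. The hash function (Python's built-in hash rather than a Rabin–Karp rolling hash), the values of k and the window size, and the simplified minimum-selection rule are assumptions for illustration, not the parameters or implementation used in the paper.

```python
# Winnowing over k-gram hashes: a simplified sketch of source-code fingerprinting.
def kgram_hashes(text: str, k: int) -> list[int]:
    """Hash every k-gram of the source text (real systems normalize the code first)."""
    return [hash(text[i:i + k]) for i in range(len(text) - k + 1)]

def winnow(hashes: list[int], window: int) -> set[tuple[int, int]]:
    """Select the minimum hash in each sliding window as a fingerprint.

    Returns (position, hash) pairs; matching hashes between two submissions
    indicate shared content."""
    fingerprints = set()
    for start in range(len(hashes) - window + 1):
        win = hashes[start:start + window]
        pos = start + min(range(window), key=lambda j: win[j])
        fingerprints.add((pos, hashes[pos]))
    return fingerprints

def similarity(a: str, b: str, k: int = 5, window: int = 4) -> float:
    """Jaccard overlap of the two fingerprint sets."""
    fa = {h for _, h in winnow(kgram_hashes(a, k), window)}
    fb = {h for _, h in winnow(kgram_hashes(b, k), window)}
    return len(fa & fb) / max(1, len(fa | fb))

print(similarity("int main(){return 0;}", "int main() { return 0; }"))
```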


2021 ◽  
Author(s):  
Věra Kůrková ◽  
Marcello Sanguineti
Keyword(s):  
