Some thoughts on the impact of large data sets on regional science

1999 ◽  
Vol 33 (2) ◽  
pp. 145-150 ◽  
Author(s):  
Arthur Getis


Author(s):  
David Japikse ◽  
Oleg Dubitsky ◽  
Kerry N. Oliphant ◽  
Robert J. Pelton ◽  
Daniel Maynes ◽  
...  

In the course of developing advanced data processing and advanced performance models, as presented in companion papers, a number of basic scientific and mathematical questions arose. This paper deals with questions such as uniqueness, convergence, statistical accuracy, training, and evaluation methodologies. The process of bringing together large data sets and utilizing them, with outside data supplementation, is considered in detail. Once these questions are brought into focus, emphasis is placed on how the new models, based on highly refined data processing, can best be used in the design world. The impact of this work on designs of the future is discussed. It is expected that this methodology will help designers move beyond contemporary design practices.


Leonardo ◽  
2012 ◽  
Vol 45 (2) ◽  
pp. 113-118 ◽  
Author(s):  
Rama C. Hoetzlein

This paper follows the development of visual communication through information visualization in the wake of the Fukushima nuclear accident in Japan. While information aesthetics are often applied to large data sets retrospectively, the author developed new works concurrently with an ongoing crisis to examine the impact and social aspects of visual communication while events continued to unfold. The resulting work, Fukushima Nuclear Accident—Radiation Comparison Map, is a reflection of rapidly acquired data, collaborative on-line analysis and reflective criticism of contemporary news media, resolved into a coherent picture through the participation of an on-line community.


2020 ◽  
pp. 81-93
Author(s):  
D. V. Shalyapin ◽  
D. L. Bakirov ◽  
M. M. Fattakhov ◽  
A. D. Shalyapina ◽  
A. V. Melekhov ◽  
...  

The article is devoted to the quality of well casing at the Pyakyakhinskoye oil and gas condensate field. Improving well-casing quality involves many difficulties: for example, the extensive work required to relate laboratory studies to actual field data, and the problem of finding logically determined relationships between individual parameters and the final quality of the casing. The article presents a new approach to assessing the impact of various parameters, based on a mathematical apparatus that excludes subjective expert assessments and that, in the future, will allow the method to be applied to deposits with different rock and geological conditions. We propose applying the principles of mathematical processing of large data sets using neural networks trained to predict characteristics of well-casing quality (continuity of the cement's contact with the rock and with the casing). Taking into account the previously identified factors, we developed solutions to improve the tightness of the well casing and the adhesion of cement to the bounding surfaces.
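The abstract does not give the network architecture, the input parameters, or the training data; purely as an illustration of the kind of model it describes (a network trained to predict a cement-contact continuity score from engineering parameters), the following sketch uses invented features, an invented target rule, and a minimal one-hidden-layer NumPy network. None of it is the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: rows are cementing jobs, columns are
# engineering parameters (all names and values invented for illustration).
X = rng.normal(size=(200, 3))
# Hypothetical target: a "continuity of cement contact" score generated
# from a nonlinear rule, standing in for field measurements.
y = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2])))

# Minimal one-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2            # linear output for a regression target

lr = 0.3
for _ in range(5000):
    h, p = forward(X)
    err = (p - y[:, None]) / len(X)  # gradient of mean-squared error (up to 2)
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = err @ W2.T * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1][:, 0] - y) ** 2))
```

After training, the network's mean-squared error falls well below the variance of the target, i.e. it has learned the nonlinear dependence rather than just the mean.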


2009 ◽  
Vol 42 (5) ◽  
pp. 783-792 ◽  
Author(s):  
A. Morawiec

Progress in experimental methods of serial sectioning and orientation determination opens new opportunities to study inter-crystalline boundaries in polycrystalline materials. In particular, macroscopic boundary parameters can now be measured automatically. With sufficiently large data sets, statistical analysis of interfaces between crystals is possible. The most basic and interesting issue is to find out the probability of occurrence of various boundaries in a given material. In order to define a boundary density function, a model of uniformity is needed. A number of such models can be conceived. It is proposed to use those derived from an assumed metric structure of the interface manifold. Some basic metrics on the manifold are explicitly given, and a number of notions and constructs needed for a strict definition of the boundary density function are considered. In particular, the crucial issue of the impact of symmetries is examined. The treatments of homo- and hetero-phase boundaries differ in some respects, and approaches applicable to each of these two cases are described. In order to make the abstract matter of the paper more accessible, a concrete boundary parameterization is used and some examples are given.


Psychology ◽  
2020 ◽  
Author(s):  
Jeffrey Stanton

The term “data science” refers to an emerging field of research and practice that focuses on obtaining, processing, visualizing, analyzing, preserving, and re-using large collections of information. A related term, “big data,” has been used to refer to one of the important challenges faced by data scientists in many applied environments: the need to analyze large data sources, in certain cases using high-speed, real-time data analysis techniques. Data science encompasses much more than big data, however, as a result of many advancements in cognate fields such as computer science and statistics. Data science has also benefited from the widespread availability of inexpensive computing hardware—a development that has enabled “cloud-based” services for the storage and analysis of large data sets. The techniques and tools of data science have broad applicability in the sciences. Within the field of psychology, data science offers new opportunities for data collection and data analysis that have begun to streamline and augment efforts to investigate the brain and behavior. The tools of data science also enable new areas of research, such as computational neuroscience. As an example of the impact of data science, psychologists frequently use predictive analysis as an investigative tool to probe the relationships between a set of independent variables and one or more dependent variables. While predictive analysis has traditionally been accomplished with techniques such as multiple regression, recent developments in the area of machine learning have put new predictive tools in the hands of psychologists. These machine learning tools relax distributional assumptions and facilitate exploration of non-linear relationships among variables. These tools also enable the analysis of large data sets by opening options for parallel processing. 
In this article, a range of relevant areas from data science is reviewed for applicability to key research problems in psychology including large-scale data collection, exploratory data analysis, confirmatory data analysis, and visualization. This bibliography covers data mining, machine learning, deep learning, natural language processing, Bayesian data analysis, visualization, crowdsourcing, web scraping, open source software, application programming interfaces, and research resources such as journals and textbooks.
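As a concrete illustration of the contrast drawn above between traditional multiple regression and machine-learning predictors, the toy sketch below (simulated data and parameter choices are assumptions, not taken from the bibliography) fits the same nonlinear relationship with ordinary least squares and with a k-nearest-neighbors regressor, a method that makes no assumption about functional form or error distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical psychology-style data: one predictor with a nonlinear
# (quadratic) effect on the outcome, plus noise.
x = rng.uniform(-2, 2, size=300)
y = x ** 2 + rng.normal(scale=0.2, size=300)

# Traditional tool: simple linear regression via least squares.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
lin_pred = X @ beta

# Machine-learning tool: k-nearest-neighbors regression, which simply
# averages the outcomes of the k most similar observations.
def knn_predict(train_x, train_y, query_x, k=15):
    dists = np.abs(train_x[None, :] - query_x[:, None])
    idx = np.argsort(dists, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)

knn_pred = knn_predict(x, y, x)

lin_mse = float(np.mean((lin_pred - y) ** 2))
knn_mse = float(np.mean((knn_pred - y) ** 2))
```

The linear model misses the quadratic relationship entirely (its best fit is nearly flat), while the nonparametric learner recovers it, which is the point made above about relaxed assumptions.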


2020 ◽  
Vol 65 (4) ◽  
pp. 608-627
Author(s):  
Dennis W. Carlton ◽  
Ken Heyer

In this essay, we evaluate the impact of the revolution that has occurred in antitrust and in particular the growing role played by economic analysis. Section II describes exactly what we think that revolution was. There were actually two revolutions. The first was the use by economists and other academics of existing economic insights together with the development of new economic insights to improve the understanding of the consequences of certain forms of market structure and firm behaviors. It also included the application of advanced empirical techniques to large data sets. The second was a revolution in legal jurisprudence, as both the federal competition agencies and the courts increasingly accepted and relied on the insights and evidence emanating from this economic research. Section III explains the impact of the revolution on economists, consulting firms, and research in the field of industrial organization. One question it addresses is why, if economics is being so widely employed and is so useful, one finds skilled economists so often in disagreement. Section IV asks whether the revolution has been successful or whether, as some critics claim, it has gone too far. Our view is that it has generally been beneficial though, as with most any policy, it can be improved. Section V discusses some of the hot issues in antitrust today and, in particular, what some of its critics say about the state of the revolution. The final section concludes with the hope that those wishing to turn back the clock to the antitrust and regulatory policies of fifty years ago study that experience more closely; otherwise, they risk repeating its demonstrated deficiencies and throwing out the revolution’s baby with the bathwater.


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
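The paper's own processing software is described in [1] and not reproduced here; a minimal sketch of the two per-pixel operations it mentions, with synthetic spectra and a synthetic fixed-pattern detector gain standing in for real data, might look as follows.

```python
import numpy as np

CHANNELS = 1024
CH_PER_EV = 20                 # 20 channels/eV, as in the acquisition above
OFFSET = 1 * CH_PER_EV         # the two spectra are offset in energy by 1 eV

rng = np.random.default_rng(2)
energy = 39.0 + np.arange(CHANNELS) / CH_PER_EV   # 39-90 eV energy axis

# Synthetic stand-ins for the two spectra recorded at one pixel: the same
# feature, shifted by 1 eV, seen through the same fixed-pattern detector gain.
gain = 1 + 0.02 * rng.normal(size=CHANNELS)
spec_a = 1000 * np.exp(-((energy - 55.0) / 4.0) ** 2) * gain
spec_b = 1000 * np.exp(-((energy + 1.0 - 55.0) / 4.0) ** 2) * gain

# Artifact-corrected difference spectrum: the gain pattern is common to
# both members of the pair, so subtraction suppresses it.
difference = spec_a - spec_b

# Normal spectrum: numerically remove the 1 eV offset, then add.
normal = spec_a + np.roll(spec_b, OFFSET)
```

Subtraction leaves a first-difference signal with positive and negative lobes, while the realigned sum roughly doubles the counts at the feature, which is the trade-off between the two modes described above.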


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets present a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
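As an illustration of the PCA step (a toy reconstruction with invented elemental intensities and class structure, not the FeLine data): principal components can be computed from the SVD of the mean-centered particle-by-element matrix, and in this simulated two-class aerosol the first component alone separates the classes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated EDS data: 500 particles x 5 elemental intensities. Two
# hypothetical particle classes (e.g. sea salt vs. mineral dust) differ
# along different elements - the kind of structure PCA should expose.
sea_salt = rng.normal([10, 9, 1, 1, 0.5], 1.0, size=(300, 5))
dust = rng.normal([1, 1, 8, 6, 3], 1.0, size=(200, 5))
X = np.vstack([sea_salt, dust])

# Principal components analysis via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                     # particle coordinates in PC space
explained = s ** 2 / np.sum(s ** 2)    # fraction of variance per component

# A single threshold on the first principal component separates the
# two simulated particle classes.
labels = scores[:, 0] > 0
```

In real aerosol data the difficulties listed above (skewed distributions, zeros at detection limits, overlapping clusters) make the separation far less clean than in this idealized example.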


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. A method of singular data decomposition is used to reduce the amount of data processed by eliminating redundancy. Dependences of computational efficiency were obtained for distributed systems using the MPI message-passing protocol and the MapReduce model of node interaction, and the efficiency of each technology was analyzed for different data sizes. Non-distributed systems are inefficient for large volumes of information because of their low computing performance, so we propose using distributed systems together with singular data decomposition to reduce the amount of information processed. The study of systems based on the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing for processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
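The article's MPI and MapReduce experiments cannot be reproduced from the abstract, but the singular-decomposition step it relies on can be sketched on a single machine. In the sketch below the matrix sizes, the synthetic low-rank data, and the 99% energy threshold are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data matrix with hidden low-rank structure plus small noise:
# the redundancy that the singular decomposition step is meant to remove.
rank = 5
data = rng.normal(size=(1000, rank)) @ rng.normal(size=(rank, 40))
data += 0.01 * rng.normal(size=(1000, 40))

U, s, Vt = np.linalg.svd(data, full_matrices=False)

# Keep the smallest number of components carrying 99% of the energy.
frac = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(frac, 0.99) + 1)
compressed = U[:, :k] * s[:k]          # reduced representation: 1000 x k
reconstruction = compressed @ Vt[:k]

rel_err = np.linalg.norm(data - reconstruction) / np.linalg.norm(data)
```

The 40-column matrix shrinks to at most `rank` columns with only a small reconstruction error; in a distributed setting each node would hold a block of rows and the reduced representation is what gets exchanged.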

