Visual Communication in Times of Crisis: The Fukushima Nuclear Accident

Leonardo ◽  
2012 ◽  
Vol 45 (2) ◽  
pp. 113-118 ◽  
Author(s):  
Rama C. Hoetzlein

This paper follows the development of visual communication through information visualization in the wake of the Fukushima nuclear accident in Japan. While information aesthetics are often applied to large data sets retrospectively, the author developed new works concurrently with an ongoing crisis to examine the impact and social aspects of visual communication while events continued to unfold. The resulting work, Fukushima Nuclear Accident—Radiation Comparison Map, is a reflection of rapidly acquired data, collaborative on-line analysis and reflective criticism of contemporary news media, resolved into a coherent picture through the participation of an on-line community.

Author(s):  
David Japikse ◽  
Oleg Dubitsky ◽  
Kerry N. Oliphant ◽  
Robert J. Pelton ◽  
Daniel Maynes ◽  
...  

In the course of developing advanced data processing and advanced performance models, as presented in companion papers, a number of basic scientific and mathematical questions arose. This paper deals with questions such as uniqueness, convergence, statistical accuracy, training, and evaluation methodologies. The process of bringing together large data sets and utilizing them, with outside data supplementation, is considered in detail. After these questions have been carefully framed, emphasis is placed on how the new models, based on highly refined data processing, can best be used in the design world. The impact of this work on designs of the future is discussed. It is expected that this methodology will assist designers in moving beyond contemporary design practices.


2020 ◽  
pp. 81-93
Author(s):  
D. V. Shalyapin ◽  
D. L. Bakirov ◽  
M. M. Fattakhov ◽  
A. D. Shalyapina ◽  
A. V. Melekhov ◽  
...  

The article is devoted to the quality of well casing at the Pyakyakhinskoye oil and gas condensate field. Improving the quality of well casing involves many difficulties, for example, the large amount of work required to relate laboratory studies to actual field data and the challenge of finding logically determined relationships between individual parameters and the final quality of the casing. The article presents a new approach to assessing the impact of various parameters, based on a mathematical apparatus that excludes subjective expert assessments and that, in the future, will allow the method to be applied to fields with different rock and geological conditions. We propose applying the principles of mathematical processing of large data sets with neural networks trained to predict well-casing quality characteristics (continuity of cement contact with the rock and with the casing). Taking into account the previously identified factors, we developed solutions to improve the tightness of the well casing and the adhesion of cement to the bounding surfaces.
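
A minimal sketch of the kind of model described above, assuming hypothetical feature names, data file, and network size rather than the authors' actual setup: a small feed-forward neural network is trained to predict the two casing-quality characteristics (continuity of cement contact with the rock and with the casing) from numeric well parameters.

```python
# Hypothetical sketch: predicting well-casing quality indicators with a small
# neural network. File name and column names are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

df = pd.read_csv("casing_runs.csv")           # one row per cemented interval (assumed file)
features = ["mud_density", "cement_density", "pump_rate",
            "annular_gap", "casing_depth", "wait_on_cement_hours"]
targets = ["contact_rock", "contact_casing"]  # contact continuity, 0..1

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[targets], test_size=0.2, random_state=0)

# Scale the inputs, then fit a small multi-output feed-forward network.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)

print("R^2 on held-out intervals:", r2_score(y_test, model.predict(X_test)))
```

A model of this form replaces expert scoring with a reproducible mapping from well parameters to predicted contact continuity, which is the role the article assigns to its mathematical apparatus.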


2009 ◽  
Vol 42 (5) ◽  
pp. 783-792 ◽  
Author(s):  
A. Morawiec

Progress in experimental methods of serial sectioning and orientation determination opens new opportunities to study inter-crystalline boundaries in polycrystalline materials. In particular, macroscopic boundary parameters can now be measured automatically. With sufficiently large data sets, statistical analysis of interfaces between crystals is possible. The most basic and interesting issue is to find out the probability of occurrence of various boundaries in a given material. In order to define a boundary density function, a model of uniformity is needed. A number of such models can be conceived. It is proposed to use those derived from an assumed metric structure of the interface manifold. Some basic metrics on the manifold are explicitly given, and a number of notions and constructs needed for a strict definition of the boundary density function are considered. In particular, the crucial issue of the impact of symmetries is examined. The treatments of homo- and hetero-phase boundaries differ in some respects, and approaches applicable to each of these two cases are described. In order to make the abstract matter of the paper more accessible, a concrete boundary parameterization is used and some examples are given.
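
Schematically, and without reproducing the paper's exact construction, a boundary density function can be understood as the limit ratio of the measured boundary measure to the uniform measure induced by the chosen metric on the boundary manifold; the notation below is assumed for illustration only.

```latex
% Schematic definition (notation assumed for illustration):
% B  - a point of the boundary manifold (macroscopic boundary parameters),
% B_\varepsilon(B) - the \varepsilon-ball around B under the chosen metric,
% \mu_{\mathrm{obs}}  - measured boundary area,
% \mu_{\mathrm{unif}} - uniform measure derived from the metric.
\[
  f(B) \;=\; \lim_{\varepsilon \to 0}
  \frac{\mu_{\mathrm{obs}}\!\left(B_\varepsilon(B)\right)}
       {\mu_{\mathrm{unif}}\!\left(B_\varepsilon(B)\right)},
  \qquad
  \int f(B)\, \mathrm{d}\mu_{\mathrm{unif}}(B) \;=\; 1 .
\]
```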


Psychology ◽  
2020 ◽  
Author(s):  
Jeffrey Stanton

The term “data science” refers to an emerging field of research and practice that focuses on obtaining, processing, visualizing, analyzing, preserving, and re-using large collections of information. A related term, “big data,” has been used to refer to one of the important challenges faced by data scientists in many applied environments: the need to analyze large data sources, in certain cases using high-speed, real-time data analysis techniques. Data science encompasses much more than big data, however, as a result of many advancements in cognate fields such as computer science and statistics. Data science has also benefited from the widespread availability of inexpensive computing hardware—a development that has enabled “cloud-based” services for the storage and analysis of large data sets. The techniques and tools of data science have broad applicability in the sciences. Within the field of psychology, data science offers new opportunities for data collection and data analysis that have begun to streamline and augment efforts to investigate the brain and behavior. The tools of data science also enable new areas of research, such as computational neuroscience. As an example of the impact of data science, psychologists frequently use predictive analysis as an investigative tool to probe the relationships between a set of independent variables and one or more dependent variables. While predictive analysis has traditionally been accomplished with techniques such as multiple regression, recent developments in the area of machine learning have put new predictive tools in the hands of psychologists. These machine learning tools relax distributional assumptions and facilitate exploration of non-linear relationships among variables. These tools also enable the analysis of large data sets by opening options for parallel processing. In this article, a range of relevant areas from data science is reviewed for applicability to key research problems in psychology including large-scale data collection, exploratory data analysis, confirmatory data analysis, and visualization. This bibliography covers data mining, machine learning, deep learning, natural language processing, Bayesian data analysis, visualization, crowdsourcing, web scraping, open source software, application programming interfaces, and research resources such as journals and textbooks.
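
As a minimal sketch of the predictive-analysis point above (the data are synthetic and the model choices are assumptions, not drawn from the article), the example below fits ordinary multiple regression and a random-forest learner to the same predictors; the forest needs no linearity or distributional assumptions and can spread its work across processor cores.

```python
# Minimal sketch: classical regression vs. a machine-learning predictor.
# Synthetic data stand in for a psychological data set (assumption).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                    # five "independent variables"
y = X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * rng.normal(size=1000)   # non-linear signal

linear = LinearRegression()
forest = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)

print("linear R^2:", cross_val_score(linear, X, y, cv=5).mean())
print("forest R^2:", cross_val_score(forest, X, y, cv=5).mean())
```

On data with a non-linear component like this, the regression captures only the linear term while the forest recovers most of the remaining structure, which is the practical advantage the article attributes to machine-learning tools.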


2020 ◽  
Vol 65 (4) ◽  
pp. 608-627
Author(s):  
Dennis W. Carlton ◽  
Ken Heyer

In this essay, we evaluate the impact of the revolution that has occurred in antitrust and in particular the growing role played by economic analysis. Section II describes exactly what we think that revolution was. There were actually two revolutions. The first was the use by economists and other academics of existing economic insights together with the development of new economic insights to improve the understanding of the consequences of certain forms of market structure and firm behaviors. It also included the application of advanced empirical techniques to large data sets. The second was a revolution in legal jurisprudence, as both the federal competition agencies and the courts increasingly accepted and relied on the insights and evidence emanating from this economic research. Section III explains the impact of the revolution on economists, consulting firms, and research in the field of industrial organization. One question it addresses is why, if economics is being so widely employed and is so useful, one finds skilled economists so often in disagreement. Section IV asks whether the revolution has been successful or whether, as some critics claim, it has gone too far. Our view is that it has generally been beneficial though, as with most any policy, it can be improved. Section V discusses some of the hot issues in antitrust today and, in particular, what some of its critics say about the state of the revolution. The final section concludes with the hope that those wishing to turn back the clock to the antitrust and regulatory policies of fifty years ago will first study that experience more closely; otherwise they risk repeating its demonstrated deficiencies and throwing out the revolution’s baby with the bathwater.


2005 ◽  
Vol 21 (2) ◽  
pp. 137-151 ◽  
Author(s):  
Léon Bottou ◽  
Yann Le Cun

2017 ◽  
Author(s):  
Anthony J. Greenberg

Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.
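
The specific tape-era algorithm the author implements is not reproduced here; as a rough stand-in that shares its single-pass, fixed-memory character, the sketch below draws a uniform sample of n loci from a variant file in one pass using standard reservoir sampling (the file name and VCF-style header handling are assumptions).

```python
# Hedged sketch: single-pass ("on-line") sampling of n loci from a variant file
# whose total size need not be known in advance. This is plain reservoir
# sampling, a stand-in for the tape-era algorithm the paper implements.
import random

def sample_loci(path, n, seed=0):
    rng = random.Random(seed)
    reservoir = []
    seen = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):      # skip VCF-style header lines (assumption)
                continue
            seen += 1
            if len(reservoir) < n:
                reservoir.append(line)
            else:
                j = rng.randrange(seen)   # keep this locus with probability n / seen
                if j < n:
                    reservoir[j] = line
    return reservoir

# Usage (hypothetical file name):
# subset = sample_loci("cohort.vcf", n=10_000)
```

Because the reservoir never grows beyond n records, memory use is bounded regardless of file size, which is the property that makes this family of methods attractive for very large genotype files.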


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets which are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
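
A rough sketch of the per-pixel arithmetic described above, with array names and shapes assumed for illustration (1 eV corresponds to 20 channels at 20 channels/eV): subtracting the offset pair gives the artifact-corrected difference spectrum, while numerically removing the offset and adding gives the normal spectrum. This is not the software of [1], only an illustration of the two operations.

```python
# Sketch of per-pixel processing for an 80x80 spectrum-image with two
# 1024-channel spectra per pixel offset by 1 eV. Arrays are illustrative stand-ins.
import numpy as np

CHANNELS_PER_EV = 20
OFFSET = 1 * CHANNELS_PER_EV           # 1 eV expressed in channels

spec_a = np.random.rand(80, 80, 1024)  # stand-in for the first recorded spectrum
spec_b = np.random.rand(80, 80, 1024)  # second spectrum, offset in energy by 1 eV

# Artifact-corrected difference spectrum: subtract the offset pair directly.
difference = spec_a - spec_b

# "Normal" spectrum: numerically remove the energy offset, then add.
aligned_b = np.roll(spec_b, OFFSET, axis=-1)   # crude shift; real code would interpolate
normal = (spec_a + aligned_b)[..., OFFSET:]    # drop the wrapped channels

# Reduce to a 2D floating-point image, e.g. by integrating over energy.
normal_2d = normal.sum(axis=-1).astype(np.float32)
```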

