Complications in Climate Data Classification: The Political and Cultural Production of Variable Names

2013 ◽  
Vol 23 (1) ◽  
pp. 57 ◽  
Author(s):  
Nicholas M Weber ◽  
Andrea K Thomer ◽  
Gary Strand

Model intercomparison projects are a unique and highly specialized form of data-intensive collaboration in the earth sciences. Typically, a set of pre-determined boundary conditions (scenarios) is agreed upon by a community of model developers, who then test and simulate each of those scenarios with individual ‘runs’ of a climate model. Because both the human expertise and the computational power needed to produce an intercomparison project are exceptionally expensive, the data they produce are often archived for the broader climate science community to use in future research. Outside of high energy physics and astronomy sky surveys, climate modeling intercomparisons are one of the largest and most rapid methods of producing data in the natural sciences (Overpeck et al., 2010).

But, like any collaborative eScience project, the discovery and broad accessibility of these data depend on classifications and categorizations in the form of structured metadata, namely the Climate and Forecast (CF) metadata standard, which provides a controlled vocabulary to normalize the naming of a dataset’s variables. Intriguingly, the CF standard’s original publication notes, “…conventions have been developed only for things we know we need. Instead of trying to foresee the future, we have added features as required and will continue to do this” (Gregory, 2003). Yet we have qualitatively observed that this is not the case: although the duration of intercomparison projects remains stable (2-3 years), the scale and complexity of models and their output continue to grow, and thus data creation and variable naming consistently outpace the ratification of CF.
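To make the ratification gap concrete, the sketch below audits a model run's variable metadata against the CF standard-name table. This is a minimal illustration, not code from the paper: the local table file name, its <entry id="..."> layout, and the example variables are assumptions about the published CF table format rather than details taken from the abstract.

```python
# Minimal sketch (assumptions: a local copy of the CF standard-name table,
# cf-standard-name-table.xml, whose terms appear as <entry id="..."> elements,
# and variable attributes already extracted from a dataset into a dict).
import xml.etree.ElementTree as ET

def load_cf_standard_names(xml_path):
    """Return the set of standard names defined in a CF standard-name table."""
    tree = ET.parse(xml_path)
    return {entry.get("id") for entry in tree.getroot().iter("entry")}

def audit_variables(variables, cf_names):
    """Split a model run's variables into CF-recognized and unratified names."""
    recognized, unratified = {}, {}
    for var, attrs in variables.items():
        name = attrs.get("standard_name")
        (recognized if name in cf_names else unratified)[var] = name
    return recognized, unratified

if __name__ == "__main__":
    cf_names = load_cf_standard_names("cf-standard-name-table.xml")
    # Hypothetical variables from a model run; 'tas'/'air_temperature' is a
    # real CMIP/CF pairing, the second entry is an invented unratified name.
    variables = {
        "tas": {"standard_name": "air_temperature"},
        "xq9": {"standard_name": "my_new_aerosol_quantity"},
    }
    recognized, unratified = audit_variables(variables, cf_names)
    print("awaiting CF ratification:", unratified)
```

In a real pipeline the attributes would come from the dataset itself (e.g., via a netCDF reader) rather than a hand-built dict; the point is simply that any name absent from the ratified table falls outside the controlled vocabulary until the standard catches up.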

2010 ◽  
Vol 23 (18) ◽  
pp. 4926-4943 ◽  
Author(s):  
Faez Bakalian ◽  
Harold Ritchie ◽  
Keith Thompson ◽  
William Merryfield

Abstract Principal component analysis (PCA), which is designed to look at internal modes of variability, has often been applied beyond its intended scope to study coupled modes of variability in combined datasets, also referred to as combined PCA. There are statistical techniques better suited for this purpose, such as singular value decomposition (SVD) and canonical correlation analysis (CCA). In this paper, a different technique that has seldom been applied in climate science is examined: redundancy analysis (RA). Similar to multivariate regression, RA seeks to maximize the variance accounted for in one random vector that is linearly regressed against another random vector. RA can be used for forecasting and prediction studies of the climate system. This technique has the added advantage that the time-lagged redundancy index offers a robust method of identifying lead–lag relations among climate variables. In this study, combined PCA and RA of global sea surface temperatures (SSTs) and sea level pressures (SLPs) are carried out for the National Centers for Environmental Prediction (NCEP) reanalysis data and a simulation of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model. A simplified state-space model is also constructed to aid in the diagnosis and interpretation of the results. The relative advantages and disadvantages of combined PCA and RA are discussed. Overall, RA tends to provide a clearer and more consistent picture of the underlying physical processes than combined PCA.
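For readers unfamiliar with RA, a minimal NumPy sketch of the core computation follows: regress Y on X, then eigen-decompose the covariance of the fitted values. The toy anomaly matrices and normalization choices are illustrative assumptions, not the authors' implementation; lagging X relative to Y before the regression would give the time-lagged redundancy index mentioned above.

```python
# Minimal redundancy-analysis (RA) sketch with NumPy; variable names and
# toy data are illustrative only.
import numpy as np

def redundancy_analysis(X, Y):
    """RA of Y on X: PCA of the fitted values of a multivariate regression.

    X, Y are (time, space) anomaly matrices with the time mean removed.
    Returns RA patterns, the fraction of Y's variance per mode, and the
    redundancy index (total fraction of Y's variance explained by X).
    """
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares coefficients
    Y_hat = X @ B                               # fitted (redundant) part of Y
    C = (Y_hat.T @ Y_hat) / (len(Y_hat) - 1)    # covariance of fitted values
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]           # strongest modes first
    total_var = np.trace(Y.T @ Y) / (len(Y) - 1)
    return (eigvecs[:, order],
            eigvals[order] / total_var,
            eigvals.sum() / total_var)

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 20))                    # e.g., 240 months of SLP anomalies
Y = X @ rng.standard_normal((20, 15)) * 0.3 \
    + rng.standard_normal((240, 15))                  # SST anomalies partly driven by X
patterns, frac_per_mode, redundancy_index = redundancy_analysis(X, Y)
print(f"redundancy index: {redundancy_index:.2f}")
```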


According to the argument from inductive risk, scientists have responsibilities to consider the consequences of error when they set evidential standards for making decisions such as accepting or rejecting hypotheses. This argument has received a great deal of scholarly attention in recent years. Exploring Inductive Risk brings together a set of concrete case studies with the goals of illustrating the pervasiveness of inductive risk, assisting scientists and policymakers in responding to it, and moving theoretical discussions of this phenomenon forward. The book contains eleven case studies spanning a wide range of scientific contexts and fields: the drug approval process, high energy particle physics, dual-use research, climate science, research on gender disparities, clinical trials, and toxicology. The chapters are divided into four parts: (1) weighing inductive risk; (2) evading inductive risk; (3) the breadth of inductive risk; and (4) exploring the limits of inductive risk. The book includes an introduction that provides a historical overview of the argument from inductive risk and a conclusion that highlights three major topic areas that merit future research: the nature of inductive risk and the argument from inductive risk (AIR), the extent to which the AIR can be evaded by defenders of the value-free ideal, and the strategies that the scientific community can employ to handle inductive risk in a responsible fashion.


Author(s):  
Marcos Esterman ◽  
Philip Gerst ◽  
Paul H. Stiebitz ◽  
Kosuke Ishii

This paper describes the challenges companies face in managing warranty performance during product development. Understanding and reducing warranty cost often focuses exclusively on the analysis of product failures. However, warranty costs can also be incurred by events that do not involve a product failure per se, such as misaligned customer expectations. Many experts agree that effective management of system reliability and reliability validation during product development is key to achieving superior time to market and life-cycle quality. The paper first surveys the challenges faced by organizations ranging from consumer electronics to aircraft engines to experimental high-energy physics accelerators. From the survey emerge key issues common to these companies: identification of failure events; reliability modeling and prediction; and prototyping and validation testing. The paper then reviews the current state of the art to identify areas for improvement, as well as needed integrations, in order to develop a comprehensive framework that will be useful to product developers in managing and predicting warranty performance during product development. This framework extends and integrates three areas: 1) extending scenario-based FMEA to include the diagnosis and repair of failure events as part of the scenario; 2) using Bayesian methods to integrate field data, product development data, and engineering judgments; and 3) generating cost models that allow trade-off studies between product design, service model design, and warranty policies. The paper concludes by presenting a future research agenda.
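As an illustration of item 2, a conjugate Beta-Binomial update is one standard way to blend an engineering-judgment prior on a per-unit failure probability with observed field data. The sketch below is a minimal example under that assumption; the prior strengths, failure counts, and repair cost are hypothetical, not values from the paper.

```python
# Minimal Beta-Binomial sketch (my illustration, not the authors' framework):
# blend an engineering-judgment prior on failure probability with field data.
from dataclasses import dataclass

@dataclass
class BetaPrior:
    alpha: float  # pseudo-count of failures implied by engineering judgment
    beta: float   # pseudo-count of non-failures

    def update(self, failures: int, trials: int) -> "BetaPrior":
        """Posterior after observing `failures` among `trials` fielded units."""
        return BetaPrior(self.alpha + failures,
                         self.beta + (trials - failures))

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# Engineering judgment: ~2% failure probability, worth 100 units of evidence.
prior = BetaPrior(alpha=2.0, beta=98.0)
posterior = prior.update(failures=7, trials=150)   # hypothetical field returns
print(f"prior mean {prior.mean:.3f} -> posterior mean {posterior.mean:.3f}")
# Expected warranty cost per unit = posterior failure probability x repair cost
# (an assumed $85 repair), the kind of quantity a cost model would trade off.
print(f"expected cost/unit: {posterior.mean * 85.0:.2f}")
```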


2019 ◽  
Vol 3 (1) ◽  
pp. 12 ◽  
Author(s):  
Hossein Hassani ◽  
Xu Huang ◽  
Emmanuel Silva

Climate science, as a data-intensive subject, has been profoundly affected by the era of big data and the associated technological revolutions. The successes of big data analytics in diverse areas over the past decade have also raised expectations about its efficacy on the big problem: climate change. As an emerging topic, climate change has been at the forefront of big climate data analytics implementations, and extensive research has been carried out covering a variety of topics. This paper aims to present an outlook on big data in climate change studies over recent years by investigating and summarising the current status of big data applications in climate change related studies. It is also intended to serve as a one-stop reference directory offering researchers and stakeholders an overview of this trending subject at a glance, which can be useful in guiding future research and improvements in the exploitation of big climate data.


2020 ◽  
Vol 35 (24) ◽  
pp. 2030009
Author(s):  
C. Z. Yuan

In this article, I review the early years of the study of charmonium physics at the BES experiment and its consequences for future research at the BESII and BESIII experiments, based on my own experience of working at BES since the summer of 1992 and on material found in the Institute of High Energy Physics (IHEP) archives.


2013 ◽  
Vol 94 (10) ◽  
pp. 1541-1552 ◽  
Author(s):  
R. Hollmann ◽  
C. J. Merchant ◽  
R. Saunders ◽  
C. Downy ◽  
M. Buchwitz ◽  
...  

Observations of Earth from space have been made for over 40 years and have contributed to advances in many aspects of climate science. However, attempts to exploit this wealth of data are often hampered by a lack of homogeneity and continuity and by insufficient understanding of the products and their uncertainties. There is, therefore, a need to reassess and reprocess satellite datasets to maximize their usefulness for climate science. The European Space Agency has responded to this need by establishing the Climate Change Initiative (CCI). The CCI will create new climate data records for (currently) 13 essential climate variables (ECVs) and make these open and easily accessible to all. Each ECV project works closely with users to produce time series from the available satellite observations relevant to users' needs. A climate modeling users' group provides a climate system perspective and a forum to bring the data and modeling communities together. This paper presents the CCI program. It outlines its benefits and presents the approaches and challenges for each ECV project, covering clouds, aerosols, ozone, greenhouse gases, sea surface temperature, ocean color, sea level, sea ice, land cover, fire, glaciers, soil moisture, and ice sheets. It also discusses how the CCI approach may contribute to defining and shaping future developments in Earth observation for climate science.

