From pole to pole: 33 years of physical oceanography onboard R/V <i>Polarstern</i>

2017 ◽  
Vol 9 (1) ◽  
pp. 211-220 ◽  
Author(s):  
Amelie Driemel ◽  
Eberhard Fahrbach ◽  
Gerd Rohardt ◽  
Agnieszka Beszczynska-Möller ◽  
Antje Boetius ◽  
...  

Abstract. Measuring temperature and salinity profiles in the world's oceans is crucial to understanding ocean dynamics and its influence on the heat budget, the water cycle, the marine environment and our climate. Since 1983 the German research vessel and icebreaker Polarstern has been the platform for numerous CTD (conductivity, temperature, depth instrument) deployments in the Arctic and the Antarctic. We report on a unique data collection spanning 33 years of polar CTD data. In total 131 data sets (1 data set per cruise leg) containing data from 10 063 CTD casts are now freely available at doi:10.1594/PANGAEA.860066. During this long period five CTD types with different characteristics and accuracies have been used. Therefore the instruments and processing procedures (sensor calibration, data validation, etc.) are described in detail. This compilation is special not only with regard to the quantity but also the quality of the data – the latter indicated for each data set using defined quality codes. The complete data collection includes a number of repeated sections for which the quality code can be used to investigate and evaluate long-term changes. Beginning in 2010, the salinity measurements presented here are of the highest quality possible in this field owing to the introduction of the OPTIMARE Precision Salinometer.
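For readers who want to work with such archived casts programmatically, per-record quality codes lend themselves to simple filtering. A minimal sketch, using an invented PANGAEA-style tab-separated excerpt; the actual column headers and quality-code scheme of the published data sets may differ:

```python
import csv
import io

# Hypothetical excerpt of a PANGAEA-style tab-separated CTD export.
# Column names and quality-code values are assumed for illustration.
RAW = """Depth water [m]\tTemp [degC]\tSal\tQuality
10\t-1.82\t34.21\t1
50\t-1.75\t34.48\t1
100\t-1.60\t34.66\t3
200\t-1.10\t34.70\t1
"""

def good_casts(raw, max_quality=1):
    """Keep only rows whose quality code is at or below a threshold."""
    reader = csv.DictReader(io.StringIO(raw), delimiter="\t")
    return [row for row in reader if int(row["Quality"]) <= max_quality]

rows = good_casts(RAW)
# Three of the four sample rows pass the quality threshold.
```

The same pattern extends to filtering whole data sets by their assigned data-set quality code before comparing repeated sections.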


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592092800
Author(s):  
Erin M. Buchanan ◽  
Sarah E. Crain ◽  
Ari L. Cunningham ◽  
Hannah R. Johnson ◽  
Hannah Stash ◽  
...  

As researchers embrace open and transparent data sharing, they will need to provide information about their data that effectively helps others understand their data sets’ contents. Without proper documentation, data stored in online repositories such as OSF will often be rendered unfindable and unreadable by other researchers and indexing search engines. Data dictionaries and codebooks provide a wealth of information about variables, data collection, and other important facets of a data set. This information, called metadata, provides key insights into how the data might be further used in research and facilitates search-engine indexing to reach a broader audience of interested parties. This Tutorial first explains terminology and standards relevant to data dictionaries and codebooks. Accompanying information on OSF presents a guided workflow of the entire process from source data (e.g., survey answers on Qualtrics) to an openly shared data set accompanied by a data dictionary or codebook that follows an agreed-upon standard. Finally, we discuss freely available Web applications to assist this process of ensuring that psychology data are findable, accessible, interoperable, and reusable.
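A data dictionary of the kind described is, at its simplest, machine-readable per-variable metadata. A minimal sketch with invented variable names and descriptions; real projects should follow an agreed-upon metadata standard rather than this ad hoc structure:

```python
import json

# Hypothetical survey variables; names and descriptions are illustrative only.
data = {"age": [25, 31, 47], "condition": ["control", "treat", "treat"]}

def build_dictionary(columns, descriptions):
    """Assemble a minimal codebook: one entry per variable."""
    entries = []
    for name, values in columns.items():
        entries.append({
            "variable": name,
            "type": type(values[0]).__name__,
            "n": len(values),
            "description": descriptions.get(name, ""),
        })
    return entries

codebook = build_dictionary(
    data,
    {"age": "Participant age in years",
     "condition": "Experimental condition assignment"},
)
print(json.dumps(codebook, indent=2))
```

Serializing the codebook as JSON keeps it both human-readable and indexable by search engines, which is the findability goal the Tutorial emphasizes.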


2020 ◽  
pp. 56-80
Author(s):  
Jonathan N. Markowitz

Chapter 4 employs data from three new data sets: the Arctic Military Activity Events Data Set, the Arctic Bases Data Set, and the Icebreaker and Ice-Hardened Warships Data Set. These new data enable a systematic comparison of each state's Arctic military forces and deployments before and after the 2007 climate shock. The data offer a corrective both to sensationalist media accounts that suggest all states are scrambling to fight over Arctic resources and to those who downplay real changes in states' Arctic military capabilities and presence. Confirming Rent-Addition Theory's predictions, the descriptive statistical comparisons reveal that the states most economically dependent on resource rents, Norway and Russia, were the most willing to back their claims by projecting military force to disputed areas and investing in Arctic bases, ice-hardened warships, and icebreakers.


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 879 ◽  
Author(s):  
Uwe Köckemann ◽  
Marjan Alirezaie ◽  
Jennifer Renoux ◽  
Nicolas Tsiftes ◽  
Mobyen Uddin Ahmed ◽  
...  

As research in smart homes and activity recognition increases, it is of ever-increasing importance to have benchmark systems and data upon which researchers can compare methods. While synthetic data can be useful for certain method developments, real data sets that are open and shared are equally important. This paper presents the E-care@home system, its installation in a real home setting, and a series of data sets that were collected using the E-care@home system. Our first contribution, the E-care@home system, is a collection of software modules for data collection, labeling, and various reasoning tasks such as activity recognition, person counting, and configuration planning. It supports a heterogeneous set of sensors that can be extended easily and connects collected sensor data to higher-level Artificial Intelligence (AI) reasoning modules. Our second contribution is a series of open data sets which can be used to recognize activities of daily living. In addition to these data sets, we describe the technical infrastructure that we have developed to collect the data and the physical environment. Each data set is annotated with ground-truth information, making it relevant for researchers interested in benchmarking different algorithms for activity recognition.
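Ground-truth annotation of this kind amounts to joining time-stamped sensor events with labeled activity intervals. A minimal sketch with invented timestamps, sensor identifiers, and activity labels; the actual E-care@home data formats are not reproduced here:

```python
# Invented sensor events (timestamp, sensor id) and labeled activity
# intervals (start, end, activity); formats are illustrative only.
events = [(3, "pir_kitchen"), (12, "stove"), (25, "pir_bedroom")]
intervals = [(0, 15, "cooking"), (20, 30, "sleeping")]

def annotate(events, intervals):
    """Attach the activity label whose interval contains each event."""
    labeled = []
    for t, sensor in events:
        label = next((a for s, e, a in intervals if s <= t <= e), None)
        labeled.append((t, sensor, label))
    return labeled

annotated = annotate(events, intervals)
```

Events falling outside every labeled interval receive `None`, which is a useful signal for gaps in the annotation rather than something to silently drop.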


Radiocarbon ◽  
2004 ◽  
Vol 46 (1) ◽  
pp. 325-344 ◽  
Author(s):  
Christopher Bronk Ramsey ◽  
Sturt W Manning ◽  
Mariagrazia Galimberti

The eruption of the volcano at Thera (Santorini) in the Aegean Sea undoubtedly had a profound influence on the civilizations of the surrounding region. The date of the eruption has been a subject of much controversy because it must be linked into the established and intricate archaeological phasings of both the prehistoric Aegean and the wider east Mediterranean. Radiocarbon dating of material from the volcanic destruction layer itself can provide some evidence for the date of the eruption, but because of the shape of the calibration curve for the relevant period, the value of such dates relies on there being no biases in the data sets. However, by dating the material from phases earlier and later than the eruption, some of the problems of the calibration data set can be circumvented and the chronology for the region can be resolved with more certainty.

In this paper, we draw together the evidence we have accumulated so far, including new data on the destruction layer itself and for the preceding cultural horizon at Thera, and from associated layers at Miletos in western Turkey. Using Bayesian models to synthesize the data and to identify outliers, we conclude from the most reliable 14C evidence (and using the INTCAL98 calibration data set) that the eruption of Thera occurred between 1663 and 1599 BC.
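The core of such calibration is mapping a measured 14C age (with Gaussian error) through the calibration curve onto the calendar axis. A toy sketch with an invented linear calibration-curve segment; real analyses use the INTCAL data sets and full Bayesian outlier models, not this simplification:

```python
import math

# Invented linear calibration-curve segment: calendar year BC -> 14C age BP.
# Real work uses the INTCAL data set; this function is illustrative only.
def curve(cal_bc):
    return 3100 + 0.8 * (cal_bc - 1600)

def calibrate(measured_bp, sigma, cal_range):
    """Unnormalized Gaussian likelihood of each calendar year given
    a measured 14C age and its 1-sigma uncertainty."""
    post = {}
    for year in cal_range:
        z = (measured_bp - curve(year)) / sigma
        post[year] = math.exp(-0.5 * z * z)
    return post

post = calibrate(3130, 30, range(1550, 1701))
best = max(post, key=post.get)
```

The flat and wiggly stretches of the real curve are what smear single-date likelihoods across wide calendar intervals, which is why dating phases before and after the eruption tightens the result.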


2014 ◽  
Vol 14 (3) ◽  
pp. 1635-1648 ◽  
Author(s):  
A. Redondas ◽  
R. Evans ◽  
R. Stuebi ◽  
U. Köhler ◽  
M. Weber

Abstract. The primary ground-based instruments used to report total column ozone (TOC) are Brewer and Dobson spectrophotometers in separate networks. These instruments make measurements of the UV irradiances, and through a well-defined process, a TOC value is produced. Inherent to the algorithm is the use of a laboratory-determined cross-section data set. We used five ozone cross-section data sets: three data sets that are based on measurements of Bass and Paur; one derived from Daumont, Brion and Malicet (DBM); and a new set determined by the Institute of Environmental Physics (IUP), University of Bremen. The three Bass and Paur (1985) sets are as follows: quadratic temperature coefficients from the IGACO (a glossary is provided in Appendix A) web page (IGQ4), the Brewer network operational calibration set (BOp), and the set used by Bernhard et al. (2005) in the reanalysis of the Dobson absorption coefficient values (B05). The ozone absorption coefficients for Brewer and Dobson instruments are then calculated using the normal Brewer operative method, which is essentially the same as that used for Dobson instruments. Considering the standard TOC algorithm for the Brewer instruments and comparing to the Brewer standard operational calibration data set, using the slit functions for the individual instruments, we find the IUP data set changes the calculated TOC by −0.5%, the DBM data set changes the calculated TOC by −3.2%, and the IGQ4 data set at −45 °C changes the calculated TOC by +1.3%. Considering the standard algorithm for the Dobson instruments, and comparing to results using the official 1992 ozone absorption coefficient values and the single set of slit functions defined for all Dobson instruments, the calculated TOC changes by +1%, with little variation depending on which data set is used. We applied the changes to the European Dobson and Brewer reference instruments during the Izaña 2012 Absolute Calibration Campaign.
With the application of a common Langley calibration and the IUP cross section, the differences between Brewer and Dobson data sets vanish, whereas using those of Bass and Paur and DBM produces differences of 1.5 and 2%, respectively. A study of the temperature dependence of these cross-section data sets is presented using the Arosa, Switzerland, total ozone record of 2003–2006, obtained from two Brewer-type instruments and one Dobson-type instrument, combined with the stratospheric ozone and temperature profiles from the Payerne soundings in the same period. The seasonal dependence of the differences between the results from the various instruments is greatly reduced with the application of temperature-dependent absorption coefficients, with the greatest reduction obtained using the IUP data set.
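Because the retrieved column scales inversely with the effective absorption coefficient, a change in the cross-section data set maps directly onto a TOC change. A toy sketch of that proportionality only; the numbers are illustrative and the full algorithm involves slit functions and temperature dependence not modeled here:

```python
def toc_change(alpha_ref, alpha_new):
    """Percent change in retrieved TOC when only the effective ozone
    absorption coefficient changes (TOC is proportional to 1/alpha)."""
    return (alpha_ref / alpha_new - 1.0) * 100.0

# An absorption coefficient about 1% higher than the reference lowers
# the retrieved column by roughly 1%; values here are invented.
change = toc_change(0.3400, 0.3434)
```

This inverse scaling is why percent-level differences between cross-section data sets translate almost one-to-one into the −0.5% to −3.2% TOC shifts quoted above.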


2011 ◽  
Vol 29 (7) ◽  
pp. 1317-1330 ◽  
Author(s):  
I. Fiorucci ◽  
G. Muscari ◽  
R. L. de Zafra

Abstract. The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20 % from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15 % or 0.3 ppbv, whichever is larger.
Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles, obtained employing the constrained matrix inversion method, show that v1 and v2 profiles are overall consistent. The main difference is at the HNO3 mixing ratio maximum in the 20–25 km altitude range, which is smaller in v2 than v1 profiles by up to 2 ppbv at mid-latitudes and during the Antarctic fall. This difference suggests a better agreement of GBMS HNO3 v2 profiles with both UARS/MLS and EOS Aura/MLS HNO3 data than previous v1 profiles.
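The per-level sensitivity quoted above can be computed directly from an averaging-kernel matrix. A minimal sketch with an invented 3×3 kernel, where rows index retrieval levels and the kernel elements for each level are summed along the profile dimension:

```python
# Invented 3x3 averaging-kernel matrix; rows index retrieval altitudes.
A = [
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.2],
    [0.0, 0.3, 0.6],
]

def sensitivity(kernel):
    """Per-level measurement response: sum of the kernel elements for each
    retrieval level (1.0 means the retrieval is fully driven by the data)."""
    return [sum(row) for row in kernel]

resp = sensitivity(A)
# Each level in this toy kernel sums to close to 1, i.e. ~100% sensitivity.
```

Values well below 1 at a given altitude mean the retrieval there leans on the a priori, which is why the abstract flags reduced sensitivity in the Antarctic winter lower stratosphere.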


2016 ◽  
Author(s):  
Dorothee C. E. Bakker ◽  
Benjamin Pfeil ◽  
Camilla S. Landa ◽  
Nicolas Metzl ◽  
Kevin M. O'Brien ◽  
...  

Abstract. The Surface Ocean CO2 Atlas (SOCAT) is a synthesis of quality-controlled fCO2 (fugacity of carbon dioxide) values for the global surface oceans and coastal seas with regular updates. Version 3 of SOCAT has 14.5 million fCO2 values from 3646 data sets covering the years 1957 to 2014. This latest version has an additional 4.4 million fCO2 values relative to version 2 and extends the record from 2011 to 2014. Version 3 also significantly increases the data availability for 2005 to 2013. SOCAT has an average of approximately 1.2 million surface water fCO2 values per year for the years 2006 to 2012. Quality and documentation of the data have improved. A new feature is the data set quality control (QC) flag of E for data from alternative sensors and platforms. The accuracy of surface water fCO2 has been defined for all data set QC flags. Automated range checking has been carried out for all data sets during their upload into SOCAT. The upgrade of the interactive Data Set Viewer (previously known as the Cruise Data Viewer) allows better interrogation of the SOCAT data collection and rapid creation of high-quality figures for scientific presentations. Automated data upload has been launched for version 4 and will enable more frequent SOCAT releases in the future. High-profile scientific applications of SOCAT include quantification of the ocean sink for atmospheric carbon dioxide and its long-term variation, detection of ocean acidification, as well as evaluation of coupled-climate and ocean-only biogeochemical models. Users of SOCAT data products are urged to acknowledge the contribution of data providers, as stated in the SOCAT Fair Data Use Statement. This ESSD (Earth System Science Data) "Living Data" publication documents the methods and data sets used for the assembly of this new version of the SOCAT data collection and compares these with those used for earlier versions of the data collection (Pfeil et al., 2013; Sabine et al., 2013; Bakker et al., 2014).
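Automated range checking of the kind described can be a simple gate at upload time. A sketch with invented plausibility bounds; SOCAT's operational limits and flagging rules are not quoted here:

```python
# Invented plausibility bounds for surface-water fCO2 in microatmospheres;
# SOCAT's actual operational limits may differ.
FCO2_MIN, FCO2_MAX = 50.0, 1000.0

def range_check(values):
    """Split measurements into in-range values and flagged outliers."""
    ok = [v for v in values if FCO2_MIN <= v <= FCO2_MAX]
    flagged = [v for v in values if not (FCO2_MIN <= v <= FCO2_MAX)]
    return ok, flagged

ok, flagged = range_check([380.2, 415.0, 12.0, 2500.0])
```

Flagging rather than deleting out-of-range values preserves the submitted record while keeping obviously implausible measurements out of downstream syntheses.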


2017 ◽  
Author(s):  
Peter Berg ◽  
Chantal Donnelly ◽  
David Gustafsson

Abstract. Updating climatological forcing data to near-current data is compelling for impact modelling, e.g. to update model simulations or to simulate recent extreme events. Hydrological simulations are generally sensitive to bias in the meteorological forcing data, especially relative to the data used for the calibration of the model. The lack of daily-resolution data at a global scale has previously been addressed by adjusting re-analysis data to global gridded observations. However, existing data sets of this type have been produced for a fixed past time period, determined by the main global observational data sets. Long delays between updates of these data sets leave a data gap between the present and the end of the data set. Further, hydrological forecasts require initialisations of the current state of the snow, soil, lake (and sometimes river) storage. This is normally achieved by forcing the model with observed meteorological conditions for an extended spin-up period, typically at a daily time step, to calculate the initial state. Here, we present a method named GFD (Global Forcing Data) to combine different data sets in order to produce near real-time updated hydrological forcing data that are compatible with the products covering the climatological period. GFD closely resembles the already established WFDEI method (Weedon et al., 2014), but uses updated climatological observations, and for the near real-time period it uses interim products that apply similar methods. This allows GFD to produce updated forcing data, including the previous calendar month, around the 10th of each month. We present the GFD method and the different produced data sets, which are evaluated against the WFDEI data set, as well as with hydrological simulations with the HYPE model over Europe and the Arctic region.
We show that GFD performs similarly to WFDEI and that the updating significantly reduces the bias of the reanalysis data over the updated period, although less so for the last two months of the updating cycle. When GFD is extended with operational meteorological forecasts to provide real-time updates up to the current day, a large drift appears in the hydrological simulations due to the bias of the meteorological forecasting model.
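The bias adjustment at the heart of WFDEI-style methods can be illustrated as rescaling reanalysis values so that their monthly mean matches an observed gridded climatology. A toy sketch with invented numbers, showing the multiplicative variant only (real products also use additive corrections and handle variables differently):

```python
# Invented daily reanalysis precipitation for one month and the
# observed monthly mean it should match; values are illustrative.
reanalysis = [2.0, 0.0, 5.0, 3.0]   # mm/day
observed_monthly_mean = 3.0          # mm/day

def scale_to_observation(daily, obs_mean):
    """Multiplicative adjustment: preserve day-to-day variability while
    forcing the monthly mean onto the observed value."""
    model_mean = sum(daily) / len(daily)
    factor = obs_mean / model_mean
    return [v * factor for v in daily]

adjusted = scale_to_observation(reanalysis, observed_monthly_mean)
# The adjusted series averages to the observed monthly mean;
# dry days stay dry because the correction is multiplicative.
```

The data gap discussed above arises precisely because this step needs the observed monthly means, which only become available with a delay.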


2017 ◽  
Author(s):  
Julia Boike ◽  
Inge Juszak ◽  
Stephan Lange ◽  
Sarah Chadburn ◽  
Eleanor Burke ◽  
...  

Abstract. Most permafrost is located in the Arctic, where frozen organic carbon makes it an important component of the global climate system. Despite the fact that the Arctic climate changes more rapidly than the rest of the globe, observational data density in the region is low. Permafrost thaw and carbon release to the atmosphere constitute a positive feedback mechanism that can exacerbate climate warming. This positive feedback functions via changing land-atmosphere energy and mass exchanges. There is thus a great need to understand links between the energy balance, which can vary rapidly over hourly to annual time scales, and permafrost, which changes slowly over long time periods. Such understanding requires long-term observational data sets. Such a data set is available from the Bayelva site at Ny-Ålesund, Svalbard, where meteorology, energy balance components and subsurface observations have been made for the last 20 years. Additional data include a high-resolution digital elevation model and a panchromatic image. This paper presents the data set produced so far, explains instrumentation, calibration, processing and data quality control, as well as the sources for the various resulting data sets. The resulting data set is unique in the Arctic and serves as a baseline for future studies. Since the data provide observations of temporally variable parameters that mediate energy fluxes between permafrost and atmosphere, such as snow depth and soil moisture content, they are suitable for use in integrating, calibrating and testing permafrost as a component in Earth System Models. The data set also includes a high-resolution digital elevation model that can be used together with the snow physical information for snowpack modeling. The presented data are available in the supplementary material for this paper and through the PANGAEA website (https://doi.pangaea.de/10.1594/PANGAEA.880120).

