OzFlux data: network integration from collection to curation

2017 ◽  
Vol 14 (12) ◽  
pp. 2903-2928 ◽  
Author(s):  
Peter Isaac ◽  
James Cleverly ◽  
Ian McHugh ◽  
Eva van Gorsel ◽  
Cacilia Ewenz ◽  
...  

Abstract. Measurement of the exchange of energy and mass between the surface and the atmospheric boundary layer by the eddy covariance technique has undergone great change in the last two decades. Early studies of these exchanges were confined to brief field campaigns in carefully controlled conditions followed by months of data analysis. Current practice is to run tower-based eddy covariance systems continuously over several years due to the need for continuous monitoring as part of a global effort to develop local-, regional-, continental- and global-scale budgets of carbon, water and energy. Efficient methods of processing the increased quantities of data are needed to maximise the time available for analysis and interpretation. Standardised methods are needed to remove differences in data processing as possible contributors to observed spatial variability. Furthermore, public availability of these data sets assists with undertaking global research efforts. The OzFlux data path has been developed (i) to provide a standard set of quality control and post-processing tools across the network, thereby facilitating inter-site integration and spatial comparisons; (ii) to increase the time available to researchers for analysis and interpretation by reducing the time spent collecting and processing data; (iii) to propagate both data and metadata to the final product; and (iv) to facilitate the use of the OzFlux data by adopting a standard file format and making the data available from web-based portals. Discovery of the OzFlux data set is facilitated through incorporation in FLUXNET data syntheses and the publication of collection metadata via the RIF-CS format. This paper serves two purposes. The first is to describe the data sets, along with their quality control and post-processing, for the other papers of this Special Issue. The second is to provide an example of one solution to the data collection and curation challenges that are encountered by similar flux tower networks worldwide.

2016 ◽  
Author(s):  
Peter Isaac ◽  
James Cleverly ◽  
Ian McHugh ◽  
Eva van Gorsel ◽  
Cacilia Ewenz ◽  
...  

Abstract. Measurement of the exchange of energy and mass between the surface and the atmospheric boundary layer by the eddy covariance technique has undergone great change in the last two decades. Early studies of these exchanges were confined to brief field campaigns in carefully controlled conditions followed by months of data analysis. Current practice is to run tower-based eddy covariance systems continuously over several years due to the need for continuous monitoring as part of a global effort to develop local-, regional-, continental- and global-scale budgets of carbon, water and energy. Efficient methods of processing the increased quantities of data are needed to maximise the time available for analysis and interpretation. Standardised methods are needed to remove differences in data processing as possible contributors to observed spatial variability. Furthermore, public availability of these data sets assists with undertaking global research efforts. The OzFlux data path has been developed (i) to provide a standard set of quality control and post-processing tools across the network, thereby facilitating inter-site integration and spatial comparisons; (ii) to increase the time available to researchers for analysis and interpretation by reducing the time spent collecting and processing data; (iii) to propagate both data and metadata to the final product; and (iv) to facilitate the use of the OzFlux data by adopting a standard file format and making the data available from web-based portals. The fundamentals of the OzFlux data path include the adoption of netCDF as the underlying file format to integrate data and metadata, a suite of Python scripts to provide a standard quality control, post-processing, gap filling and partitioning environment, a portal from which data can be downloaded, and an OPeNDAP server offering internet access to the latest version of the OzFlux data set. Discovery of the OzFlux data set is facilitated through incorporation in FLUXNET data syntheses and the publication of collection metadata via the RIF-CS format. This paper serves two purposes. The first is to describe the data sets, along with their quality control and post-processing, for the other papers of this Special Issue. The second is to provide an example of one solution to the data collection and curation challenges that are encountered by similar flux tower networks worldwide.
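
The core of this data path is the combination of netCDF files and Python scripts. As a minimal sketch of how a user might read such a file, the snippet below uses the netCDF4 library; the file name and variable names ("Fc" for the CO2 flux, "Fc_QCFlag" for its quality flag, 0 meaning good) are illustrative assumptions, not a definitive statement of the OzFlux conventions.

```python
# Minimal sketch: reading an OzFlux-style netCDF file and screening by QC flag.
# File and variable names ("Fc", "Fc_QCFlag") are assumptions for illustration.
import netCDF4
import numpy as np

with netCDF4.Dataset("Site_L3.nc") as nc:
    time_var = nc.variables["time"]
    times = netCDF4.num2date(time_var[:], time_var.units)
    fc = np.ma.filled(nc.variables["Fc"][:].astype(float), np.nan)  # CO2 flux
    flag = np.ma.filled(nc.variables["Fc_QCFlag"][:], -1)           # 0 = good (assumed)

good = flag == 0
print(f"{good.sum()} of {fc.size} records pass QC")
print("Mean CO2 flux over good records:", np.nanmean(fc[good]))
```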


Author(s):  
Sang Lim Choi ◽  
Sung Bin Park ◽  
Seungwook Yang ◽  
Eun Sun Lee ◽  
Hyun Jeong Park ◽  
...  

Purpose: Kidney, ureter, and bladder radiography (KUB) has frequently been used in suspected urolithiasis, but its performance is known to be lower than that of computed tomography (CT). This study aimed to investigate the diagnostic performance of digitally post-processed KUB in the detection of ureteral stones. Materials and Methods: Thirty patients who underwent digital KUB and CT were included in this retrospective study. The original digital KUB images underwent post-processing that involved noise estimation, reduction, and whitening to improve the visibility of ureteral stones. Thus, 60 original or post-processed digital KUB images were obtained and ordered randomly for blinded review. After an interval, a second review was performed with stone laterality unblinded. Detection rates were evaluated at both the initial and second reviews, using CT as the reference standard. Objective (size) and subjective (visibility) parameters of the ureteral stones were analyzed. Fisher's exact test was used to compare detection sensitivity between the original and post-processed KUB data sets. Visibility was assessed with a paired t-test. Correlation of stone size between the CT and digital KUB data sets was assessed with Pearson's correlation test. Results: The detection rate was higher for most reviewers once stone laterality was provided and was non-significantly better for the post-processed KUB images (p > 0.05). There was no significant difference in stone size among the CT and digital KUB data sets. In all reviews, the visibility grade was higher for the post-processed KUB images, irrespective of whether stone laterality was provided. Conclusion: Digital post-processing of KUB yielded higher visibility of ureteral stones and could improve stone detection, especially when stone laterality is available. Thus, digitally post-processed KUB can be an excellent modality for detecting ureteral stones and measuring their exact size.
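
As an illustration of the statistical workflow named above (Fisher's exact test, paired t-test, Pearson correlation), the following sketch uses SciPy; all counts and measurements are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Fisher's exact test on detection counts (detected vs. missed) for the
# original and post-processed image sets (hypothetical 2x2 table).
table = [[18, 12],   # original KUB: detected, missed
         [24, 6]]    # post-processed KUB: detected, missed
odds_ratio, p_detection = stats.fisher_exact(table)

# Paired t-test on per-stone visibility grades for the same stones.
vis_original = np.array([2, 3, 1, 2, 3, 2])
vis_processed = np.array([3, 4, 2, 2, 4, 3])
t_stat, p_visibility = stats.ttest_rel(vis_processed, vis_original)

# Pearson correlation of stone size (mm) between CT and KUB measurements.
size_ct = np.array([4.1, 6.0, 3.2, 5.5, 7.8, 4.9])
size_kub = np.array([4.3, 5.8, 3.5, 5.2, 8.0, 5.1])
r, p_size = stats.pearsonr(size_ct, size_kub)

print(p_detection, p_visibility, r, p_size)
```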


2009 ◽  
Vol 2 (1) ◽  
pp. 421-475 ◽  
Author(s):  
A. Velo ◽  
F. F. Pérez ◽  
X. Lin ◽  
R. M. Key ◽  
T. Tanhua ◽  
...  

Abstract. Data on carbon and carbon-relevant hydrographic and hydrochemical parameters from previously non-publicly available cruise data sets in the Arctic Mediterranean Seas (AMS), Atlantic and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic). These data have gone through rigorous quality control (QC) procedures to assure the highest possible quality and consistency. The data for most of the measured parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values, i.e. secondary quality control. Systematic biases found in the data have been corrected in the data products, i.e. three merged data files with measured, calculated and interpolated data for each of the three CARINA regions: AMS, Atlantic and Southern Ocean. Out of a total of 188 cruise entries in the CARINA database, 59 reported measured pH values. Here we present details of the secondary QC on pH for the CARINA database. Procedures of quality control, including crossover analysis between cruises and inversion analysis of all crossover data, are briefly described. Adjustments were applied to the pH values for 21 of the cruises in the CARINA data set. With these adjustments the CARINA database is consistent both internally and with the GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s. Based on our analysis we estimate the internal accuracy of the CARINA pH data to be 0.005 pH units. The CARINA data are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, and for model validation.
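
A crossover analysis of the kind mentioned here compares deep-water values from two cruises at nearby stations, where the water masses are assumed to be stable in time. The sketch below illustrates the idea only; the proximity threshold, depth cutoff and input layout are assumptions, not the actual CARINA procedure.

```python
import numpy as np

def crossover_offset(cruise_a, cruise_b, max_dist_deg=2.0, min_depth=1500.0):
    """Mean deep-water pH difference (cruise A minus cruise B) at crossovers.

    cruise_a, cruise_b: dicts of 1-D arrays with keys 'lat', 'lon', 'depth', 'ph'.
    """
    diffs = []
    for i in range(len(cruise_a["ph"])):
        if cruise_a["depth"][i] < min_depth:
            continue  # deep waters only, where temporal variability is small
        near = ((np.abs(cruise_b["lat"] - cruise_a["lat"][i]) < max_dist_deg)
                & (np.abs(cruise_b["lon"] - cruise_a["lon"][i]) < max_dist_deg)
                & (cruise_b["depth"] > min_depth))
        if near.any():
            diffs.append(cruise_a["ph"][i] - cruise_b["ph"][near].mean())
    return float(np.mean(diffs)) if diffs else float("nan")
```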


2016 ◽  
Author(s):  
Brecht Martens ◽  
Diego G. Miralles ◽  
Hans Lievens ◽  
Robin van der Schalie ◽  
Richard A. M. de Jeu ◽  
...  

Abstract. The Global Land Evaporation Amsterdam Model (GLEAM) is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data. Ever since its development in 2011, the model has been regularly revised, aiming at the optimal incorporation of new satellite-observed geophysical variables and at an improved representation of physical processes. In this study, the next version of this model (v3) is presented. Key changes relative to the previous version include: (1) a revised formulation of the evaporative stress, (2) an optimized drainage algorithm, and (3) a new soil moisture data assimilation system. GLEAM v3 is used to produce three new data sets of terrestrial evaporation and root-zone soil moisture, including a 35-year data set spanning the period 1980–2014 (v3.0a, based on satellite-observed soil moisture, vegetation optical depth and snow water equivalent, reanalysis air temperature and radiation, and a multi-source precipitation product), and two fully satellite-based data sets. The latter two share most of their forcing, except for the vegetation optical depth and soil moisture products, which are based on observations from different passive and active C- and L-band microwave sensors (European Space Agency Climate Change Initiative data sets) for the first data set (v3.0b, spanning the period 2003–2015) and observations from the Soil Moisture and Ocean Salinity satellite in the second data set (v3.0c, spanning the period 2011–2015). These three data sets are described in detail, compared against analogous data sets generated using the previous version of GLEAM (v2), and validated against measurements from 64 eddy-covariance towers and 2338 soil moisture sensors across a broad range of ecosystems. Results indicate that the quality of the v3 soil moisture is consistently better than that of v2: average correlations against in situ surface soil moisture measurements increase from 0.61 to 0.64 in the case of the v3.0a data set, and the representation of soil moisture in the second layer improves as well, with correlations increasing from 0.47 to 0.53. Similar improvements are observed for the two fully satellite-based data sets. Despite regional differences, the quality of the evaporation fluxes remains overall similar to that obtained using the previous version of GLEAM, with average correlations against eddy-covariance measurements between 0.78 and 0.80 for the three different data sets. These global data sets of terrestrial evaporation and root-zone soil moisture are now openly available at http://GLEAM.eu and may be used for large-scale hydrological applications, climate studies and research on land-atmosphere feedbacks.
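
The validation statistic reported here is an average of per-sensor correlations. A minimal sketch of that computation, assuming gap-flagged (NaN) daily series and a modest overlap requirement, follows; the shapes, names and threshold are illustrative assumptions.

```python
import numpy as np

def mean_station_correlation(model, insitu):
    """model, insitu: arrays of shape (n_sensors, n_days); NaN marks gaps."""
    rs = []
    for m, o in zip(model, insitu):
        ok = ~(np.isnan(m) | np.isnan(o))
        if ok.sum() > 30:  # require a minimal overlap before correlating
            rs.append(np.corrcoef(m[ok], o[ok])[0, 1])
    return float(np.mean(rs)) if rs else float("nan")

# Synthetic check: correlated noise for three "sensors"
rng = np.random.default_rng(1)
truth = rng.normal(size=(3, 365))
obs = truth + 0.5 * rng.normal(size=(3, 365))
print(round(mean_station_correlation(truth, obs), 2))
```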


2015 ◽  
Vol 8 (5) ◽  
pp. 4817-4858
Author(s):  
J. Jia ◽  
A. Rozanov ◽  
A. Ladstätter-Weißenmayer ◽  
J. P. Burrows

Abstract. In this manuscript, the latest SCIAMACHY limb ozone scientific vertical profiles, namely the current V2.9 and the upcoming V3.0, are extensively compared with ozonesonde data from the WOUDC database. The comparisons are made on a global scale from 2003 to 2011, involving 61 sonde stations. The retrieval processors used to generate the V2.9 and V3.0 data sets are briefly introduced. The comparisons are discussed in terms of vertical profiles and stratospheric partial columns. Our results indicate that the V2.9 ozone profile data between 20 and 30 km are in good agreement with ground-based measurements, with relative differences of less than 5% in the latitude range 90° S–40° N (with the exception of the tropical Pacific region, where an overestimation of more than 10% is observed), corresponding to partial column differences of less than 5 DU. In the tropics the differences are within 3%. However, this data set shows a significant underestimation northwards of 40° N (up to ~15%). The newly developed V3.0 data set reduces this bias to below 10% while maintaining good agreement southwards of 40° N, with slightly increased relative differences of up to 5% in the tropics.
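
Such profile comparisons rest on relative differences evaluated on a common vertical grid. The sketch below interpolates a satellite profile onto the sonde altitudes and returns the percent difference; the grids and profiles are synthetic stand-ins, not the actual retrievals.

```python
import numpy as np

def relative_difference(o3_sat, alt_sat, o3_sonde, alt_sonde):
    """Percent difference (satellite minus sonde) on the sonde altitude grid."""
    sat_on_sonde = np.interp(alt_sonde, alt_sat, o3_sat)  # assumes ascending alt_sat
    return 100.0 * (sat_on_sonde - o3_sonde) / o3_sonde

# Synthetic example profiles (altitude in km, ozone in arbitrary units):
alt_sonde = np.arange(15.0, 35.0, 1.0)
alt_sat = np.arange(14.0, 36.0, 2.0)
o3_sonde = np.exp(-((alt_sonde - 25.0) / 6.0) ** 2)
o3_sat = 1.05 * np.exp(-((alt_sat - 25.0) / 6.0) ** 2)
print(relative_difference(o3_sat, alt_sat, o3_sonde, alt_sonde).round(1))
```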


2014 ◽  
Vol 7 (7) ◽  
pp. 2273-2281 ◽  
Author(s):  
G. Fratini ◽  
M. Mauder

Abstract. A comparison of two popular eddy-covariance software packages is presented, namely EddyPro and TK3. Two approximately 1-month-long test data sets were processed, representing typical instrumental setups (i.e., CSAT3/LI-7500 above grassland and Solent R3/LI-6262 above a forest), and the resulting fluxes and quality flags were compared. Achieving satisfying agreement and understanding residual discrepancies required several iterations and interventions of different natures, spanning from simple software reconfiguration to actual code manipulation. In this paper, we document our comparison exercise and show that the two software packages can provide fully satisfying agreement when properly configured. Our main aim, however, is to stress the complexity of performing a rigorous comparison of eddy-covariance software. We show that discriminating actual discrepancies in the results from inconsistencies in the software configuration requires deep knowledge of both software packages and of the eddy-covariance method. In some instances, it may even be beyond the reach of an investigator who does not have access to, and full knowledge of, the source code. As the developers of EddyPro and TK3, we could discuss the comparison at all levels of detail, and this proved necessary to achieve a full understanding. As a result, we suggest that researchers are more likely to get comparable results when using EddyPro (v5.1.1) and TK3 (v3.11) – at least with the settings presented in this paper – than when using any other pair of EC software packages that have not undergone a similar cross-validation. As a further consequence, we also suggest that, with the aim of assuring the consistency and comparability of centralized flux databases, and for confident use of eddy fluxes in synthesis studies at the regional, continental and global scales, researchers rely only on software that has been extensively validated in documented intercomparisons.
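
A typical first-order check in such a software intercomparison is a regression and root-mean-square deviation over paired half-hourly fluxes. The sketch below shows only that idea, with synthetic series; it makes no claim to reproduce the actual EddyPro/TK3 comparison protocol.

```python
import numpy as np

def compare_fluxes(flux_a, flux_b):
    """Slope, intercept and RMSD of paired half-hourly flux estimates."""
    ok = ~(np.isnan(flux_a) | np.isnan(flux_b))
    slope, intercept = np.polyfit(flux_a[ok], flux_b[ok], 1)
    rmsd = float(np.sqrt(np.mean((flux_a[ok] - flux_b[ok]) ** 2)))
    return slope, intercept, rmsd

# Synthetic half-hourly CO2 fluxes from two hypothetical processing runs
rng = np.random.default_rng(2)
run_a = rng.normal(-5.0, 3.0, 1440)
run_b = run_a + rng.normal(0.0, 0.2, 1440)  # small processing differences
print(compare_fluxes(run_a, run_b))
```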


2017 ◽  
Author(s):  
Peter Berg ◽  
Chantal Donnelly ◽  
David Gustafsson

Abstract. Updating climatological forcing data to the near-present is compelling for impact modelling, e.g. to update model simulations or to simulate recent extreme events. Hydrological simulations are generally sensitive to bias in the meteorological forcing data, especially relative to the data used for the calibration of the model. The lack of daily-resolution data at a global scale has previously been addressed by adjusting re-analysis data with global gridded observations. However, existing data sets of this type have been produced for a fixed past time period, determined by the main global observational data sets. Long delays between updates of these data sets leave a data gap between the present and the end of the data set. Further, hydrological forecasts require initialisation of the current state of the snow, soil, lake (and sometimes river) storage. This is normally achieved by forcing the model with observed meteorological conditions for an extended spin-up period, typically at a daily time step, to calculate the initial state. Here, we present a method named GFD (Global Forcing Data) that combines different data sets in order to produce near-real-time updated hydrological forcing data compatible with the products covering the climatological period. GFD closely resembles the established WFDEI method (Weedon et al., 2014) but uses updated climatological observations and, for the near real time, interim products that apply similar methods. This allows GFD to produce updated forcing data, including the previous calendar month, around the 10th of each month. We present the GFD method and the different data sets produced, which are evaluated against the WFDEI data set as well as through hydrological simulations with the HYPE model over Europe and the Arctic region. We show that GFD performs similarly to WFDEI and that the updated period significantly reduces the bias of the reanalysis data, although less well for the last two months of the updating cycle. For real-time updates up to the current day, in which GFD is extended with operational meteorological forecasts, a large drift is present in the hydrological simulations due to the bias of the meteorological forecasting model.
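
The WFDEI-style adjustment that GFD builds on can be caricatured as matching the monthly statistics of the reanalysis to gridded observations: additive shifts for temperature, multiplicative scaling for precipitation. The sketch below shows only this core idea under those simplifying assumptions; it is not the GFD or WFDEI implementation.

```python
import numpy as np

def adjust_temperature(daily_tas, obs_monthly_mean):
    """Shift daily temperatures (K) so their monthly mean matches the observation."""
    return daily_tas + (obs_monthly_mean - daily_tas.mean())

def adjust_precipitation(daily_pr, obs_monthly_total):
    """Scale daily precipitation (mm) so the monthly total matches the observation."""
    total = daily_pr.sum()
    return daily_pr * (obs_monthly_total / total) if total > 0 else daily_pr

tas = np.array([271.2, 272.0, 270.5, 273.1])  # illustrative daily reanalysis values
pr = np.array([0.0, 2.5, 1.0, 0.0])
print(adjust_temperature(tas, 272.5))
print(adjust_precipitation(pr, 5.0))
```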


2018 ◽  
Vol 40 ◽  
pp. 162
Author(s):  
Agni Cristina de Carvalho Brito ◽  
Nara Luisa Reis de Andrade ◽  
Larissa Santos Fambri ◽  
Camila Bermond Ruezzene ◽  
Renata Gonçalves Aguiar

The processes of land use and occupation intervene in natural ecosystems, making them susceptible to reactions such as changes in the processes that govern water cycling, which emphasises the importance of monitoring evapotranspiration. The objective of this study was therefore to verify the applicability of the evapotranspiration product derived from the MODIS sensor to a pasture area at Fazenda Nossa Senhora, in the municipality of Ouro Preto do Oeste, Rondônia, from 2003 to 2010. Evapotranspiration data from the MODIS (Terra/Aqua) sensor, estimated by the MOD16 algorithm, were used together with data from a micrometeorological tower located in the pasture area and generated by an eddy covariance system. For the ET Eddy × ET MOD16 (quality control, QC 0/8) comparison, the ET MOD16 (QC 0/8) data showed evapotranspiration values above those of ET Eddy and with a greater amplitude. No linear correlation between the data sets was identified; however, the product captures the seasonal variations and shows good agreement with the ET Eddy data, especially in the transition periods.
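
MOD16 ET is distributed as 8-day composites, so a tower series would typically be aggregated to the same periods before comparison. A small sketch of that alignment, with synthetic numbers in place of the study data and no claim about the study's exact procedure:

```python
import numpy as np

def to_8day_totals(daily_et):
    """Aggregate a daily ET series (mm/day) into consecutive 8-day totals."""
    n = len(daily_et) // 8 * 8  # drop the trailing partial period
    return daily_et[:n].reshape(-1, 8).sum(axis=1)

rng = np.random.default_rng(0)
tower_daily = rng.uniform(2.0, 5.0, 64)                  # synthetic tower ET
tower_8day = to_8day_totals(tower_daily)
mod16_8day = tower_8day * 1.2 + rng.normal(0.0, 2.0, 8)  # synthetic MOD16-like series
r = np.corrcoef(tower_8day, mod16_8day)[0, 1]
print(f"8-day periods: {len(tower_8day)}, correlation: {r:.2f}")
```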


2021 ◽  
Author(s):  
Wouter Dorigo ◽  
Irene Himmelbauer ◽  
Daniel Aberer ◽  
Lukas Schremmer ◽  
Ivana Petrakovic ◽  
...  

Abstract. In 2009, the International Soil Moisture Network (ISMN) was initiated as a community effort, funded by the European Space Agency, to serve as a centralised data-hosting facility for globally available in situ soil moisture measurements (Dorigo et al., 2011a, b). The ISMN brings together in situ soil moisture measurements collected and freely shared by a multitude of organisations, harmonizes them in terms of units and sampling rates, applies advanced quality control, and stores them in a database. Users can freely retrieve the data from this database through an online web portal (https://ismn.earth). Meanwhile, the ISMN has evolved into the primary in situ soil moisture reference database worldwide, as evidenced by more than 3000 active users and over 1000 scientific publications referencing the data sets provided by the network. As of December 2020, the ISMN contains data from 65 networks and 2678 stations located all over the globe, with a time period spanning from 1952 to the present. The number of networks and stations covered by the ISMN is still growing, and many of the data sets contained in the database continue to be updated. The main scope of this paper is to inform readers about the evolution of the ISMN over the past decade, including a description of network and data set updates and quality control procedures. A comprehensive review of existing literature making use of ISMN data is also provided in order to identify current limitations in functionality and data usage, and to shape priorities for the next decade of operations of this unique community-based data repository.
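
One of the harmonisation steps mentioned above is bringing records with heterogeneous time steps onto a common grid. A minimal sketch with pandas, assuming hourly averaging as the target rule (the ISMN's actual pipeline is more involved):

```python
import pandas as pd

def to_hourly(series: pd.Series) -> pd.Series:
    """Average an irregular soil moisture series (m3/m3) into hourly means."""
    return series.resample("1h").mean()

idx = pd.to_datetime(["2020-01-01 00:10", "2020-01-01 00:40", "2020-01-01 01:20"])
raw = pd.Series([0.21, 0.23, 0.22], index=idx)
print(to_hourly(raw))
```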


2010 ◽  
Vol 2 (1) ◽  
pp. 133-155 ◽  
Author(s):  
A. Velo ◽  
F. F. Pérez ◽  
X. Lin ◽  
R. M. Key ◽  
T. Tanhua ◽  
...  

Abstract. Data on carbon and carbon-relevant hydrographic and hydrochemical parameters from 188 previously non-publicly available cruise data sets in the Arctic Mediterranean Seas (AMS), Atlantic Ocean and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic Ocean). These data have gone through rigorous quality control (QC) procedures to assure the highest possible quality and consistency. The data for most of the measured parameters in the CARINA database were objectively examined in order to quantify systematic differences in the reported values. Systematic biases found in the data have been corrected in the data products: three merged data files with measured, calculated and interpolated data for each of the three CARINA regions: AMS, Atlantic Ocean and Southern Ocean. Out of a total of 188 cruise entries in the CARINA database, 59 reported measured pH values. All reported pH data have been unified to the Sea-Water Scale (SWS) at 25 °C. Here we present details of the secondary QC of pH in the CARINA database and the scale unification to SWS at 25 °C. The pH scale has been converted for 36 cruises. Procedures of quality control, including crossover analysis between cruises and inversion analysis, are described. Adjustments were applied to the pH values for 21 of the cruises in the CARINA data set. With these adjustments the CARINA database is consistent both internally and with the GLODAP data, an oceanographic data set based on the World Hydrographic Program in the 1990s. Based on our analysis we estimate the internal consistency of the CARINA pH data to be 0.005 pH units. The CARINA data are now suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, for ocean acidification assessment and for model validation.
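
The scale unification follows from the definitions of the pH scales: [H+]_T = [H+]_F (1 + S_T/K_S) and [H+]_SWS = [H+]_F (1 + S_T/K_S + F_T/K_F). A sketch of the resulting total-to-SWS conversion is given below; the dissociation constants K_S and K_F must be evaluated at the sample salinity and temperature from standard formulations (e.g. Dickson, 1990; Perez and Fraga, 1987), which are not reproduced here, so they enter as assumed inputs.

```python
import numpy as np

def ph_total_to_sws(ph_total, st, ks, ft, kf):
    """Convert pH from the total scale to the seawater scale (SWS).

    st, ft: total sulfate and fluoride concentrations (mol/kg);
    ks, kf: HSO4- and HF dissociation constants at the sample conditions,
    taken from standard formulations (assumed inputs here).
    """
    # [H+]_SWS / [H+]_T = (1 + st/ks + ft/kf) / (1 + st/ks)
    factor = (1.0 + st / ks + ft / kf) / (1.0 + st / ks)
    return ph_total - np.log10(factor)
```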

