Response to reviewer comments on "Simplified SAGE II ozone data usage rules"

2020 ◽  
Vol 12 (2) ◽  
pp. 1419-1435
Author(s):  
Stefanie Kremser ◽  
Larry W. Thomason ◽  
Leroy J. Bird

Abstract. High-quality satellite-based measurements are crucial to the assessment of global stratospheric composition change. The Stratospheric Aerosol and Gas Experiment II (SAGE II) provides the longest, continuous data set of vertically resolved ozone and aerosol extinction coefficients to date and therefore remains a cornerstone of understanding and detecting long-term ozone variability and trends in the stratosphere. Despite its stability, SAGE II measurements must be screened for outliers that are a result of excessive aerosol emitted into the atmosphere and that degrade inferences of change. Current methods for SAGE II ozone measurement quality assurance consist of multiple ad hoc and sometimes conflicting rules, leading to too much valuable data being removed or outliers being missed. In this work, the SAGE II ozone data set version 7.00 is used to develop and present a new set of screening recommendations and to compare the output to the screening recommendations currently used. Applying current recommendations to SAGE II ozone leads to unexpected features, such as removing ozone values around zero if the relative error is used as a screening criterion, leading to biases in monthly mean zonal mean ozone concentrations. Most of these current recommendations were developed based on “visual inspection”, leading to inconsistent rules that might not be applicable at every altitude and latitude. Here, a set of new screening recommendations is presented that take into account the knowledge of how the measurements were made. The number of screening recommendations is reduced to three, which mainly remove ozone values that are affected by high aerosol loading and are therefore not reliable measurements. More data remain when applying these new recommendations compared to the rules that are currently being used, leading to more data being available for scientific studies. The SAGE II ozone data set used here is publicly available at https://doi.org/10.5281/zenodo.3710518 (Kremser et al., 2020). The complete SAGE II version 7.00 data set, which includes other variables in addition to ozone, is available at https://eosweb.larc.nasa.gov/project/sage2/sage2_v7_table (last access: December 2019), https://doi.org/10.5067/ERBS/SAGEII/SOLAR_BINARY_L2-V7.0 (SAGE II Science Team, 2012; Damadeo et al., 2013).
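
A minimal sketch of this kind of screening, assuming hypothetical variable names and threshold values rather than the recommendations actually derived in the paper: ozone values are rejected where the collocated aerosol extinction is high, and uncertainty is screened in absolute rather than relative terms, so that near-zero ozone values are not preferentially discarded.

```python
import numpy as np

def screen_ozone(ozone, ozone_error, aerosol_ext,
                 aer_threshold=1e-3, abs_err_factor=3.0):
    """Return a copy of an ozone profile with suspect levels set to NaN.

    Levels are removed where the collocated aerosol extinction exceeds a
    threshold (the retrieval is likely compromised by aerosol) or where the
    reported uncertainty is large in an absolute sense. Screening on absolute
    rather than relative error avoids preferentially discarding near-zero
    ozone values, which would bias monthly mean zonal means.

    NOTE: the thresholds here are illustrative placeholders, not the paper's
    published screening criteria.
    """
    ozone = np.asarray(ozone, dtype=float)
    keep = np.ones(ozone.shape, dtype=bool)

    # Rule 1: drop levels with high aerosol extinction.
    keep &= np.asarray(aerosol_ext) < aer_threshold

    # Rule 2: drop levels whose absolute uncertainty is very large compared
    # with the spread of the profile itself (a crude proxy for variability).
    keep &= np.asarray(ozone_error) < abs_err_factor * np.nanstd(ozone)

    screened = ozone.copy()
    screened[~keep] = np.nan
    return screened
```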


Author(s):  
Seyed Kourosh Mahjour ◽  
Antonio Alberto Souza Santos ◽  
Manuel Gomes Correia ◽  
Denis José Schiozer

Abstract. The simulation process under uncertainty requires numerous reservoir models, which can be very time-consuming to run. Hence, selecting representative models (RMs) that capture the uncertainty space of the full ensemble is required. In this work, we compare two scenario reduction techniques: (1) Distance-based Clustering with Simple Matching Coefficient (DCSMC), applied before the simulation process using reservoir static data, and (2) a metaheuristic algorithm (the RMFinder technique), applied after the simulation process using reservoir dynamic data. We use these two methods as examples to investigate the effect of static and dynamic data usage on the accuracy and speed of the scenario reduction process, with a focus on field development purposes. In this work, a synthetic benchmark case named UNISIM-II-D, which considers flow-unit modelling, is used. The results showed that both scenario reduction methods are reliable in selecting the RMs for a specific production strategy. However, the RMs obtained for a given strategy using the DCSMC method can be applied to other strategies while preserving the representativeness of the models, whereas the choice of strategy strongly affects RM selection with the metaheuristic method, so that each strategy has its own set of RMs. In the field development workflow that uses the metaheuristic algorithm, the number of required flow simulation models and the computational time are greater than in the workflow that applies the DCSMC method. Hence, it can be concluded that using static reservoir data in the scenario reduction process can be more reliable during the field development phase.
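
For illustration, a minimal sketch of the distance-based clustering step is given below, assuming a categorical static property (e.g., a flow-unit index per grid cell), a distance of one minus the simple matching coefficient, average-linkage hierarchical clustering, and medoid selection; the ensemble size, number of clusters, and linkage choice are assumptions, not the UNISIM-II-D configuration.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def smc_distance_matrix(models):
    """Pairwise (1 - simple matching coefficient) distances.

    `models` is an (n_models, n_cells) array of categorical static
    properties, e.g. a flow-unit index per grid cell.
    """
    models = np.asarray(models)
    n = models.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            smc = np.mean(models[i] == models[j])  # fraction of matching cells
            dist[i, j] = dist[j, i] = 1.0 - smc
    return dist

def select_representatives(models, n_rms=5):
    """Cluster the ensemble on SMC distance and return one medoid per cluster."""
    dist = smc_distance_matrix(models)
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=n_rms, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Medoid: the member with the smallest summed distance to its cluster.
        reps.append(int(idx[np.argmin(dist[np.ix_(idx, idx)].sum(axis=1))]))
    return sorted(reps)

# Illustrative ensemble: 100 realizations, 5000 cells, 4 flow-unit classes.
rng = np.random.default_rng(0)
ensemble = rng.integers(0, 4, size=(100, 5000))
print(select_representatives(ensemble, n_rms=5))
```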


2021 ◽  
pp. 1-8
Author(s):  
Janessa Mladucky ◽  
Bonnie Baty ◽  
Jeffrey Botkin ◽  
Rebecca Anderson

Introduction: Customer data from direct-to-consumer genetic testing (DTC GT) are often used for secondary purposes beyond providing the customer with test results. Objective: The goals of this study were to determine customer knowledge of secondary uses of data, to understand their perception of the risks associated with these uses, and to determine the extent of customer concerns about privacy. Methods: Twenty DTC GT customers were interviewed about their experiences. The semi-structured interviews were transcribed, coded, and analyzed for common themes. Results: Most participants were aware of some secondary uses of data. All participants felt that data usage for research was acceptable, but the acceptability of non-research purposes varied across participants. The majority of participants were aware of the existence of a privacy policy, but few had read most of the privacy statement. When previously unconsidered uses of data were discussed, some participants expressed concern over the privacy protections for their data. Conclusion: When exposed to new information on secondary uses of data, customers express concerns and a desire for improved consent, with greater transparency, more opt-out options, improved readability, and more information on future uses and potential risks from direct-to-consumer companies. Effective ways to improve customer awareness of the secondary use, risks of use, and protection of their data should be investigated, and the findings implemented by DTC companies, to maintain public trust in these practices.


2021 ◽  
Vol 10 (1) ◽  
pp. 30
Author(s):  
Alfonso Quarati ◽  
Monica De Martino ◽  
Sergio Rosim

Open Government Data (OGD) portals, thanks to the presence of thousands of geo-referenced datasets containing spatial information, are of great interest for any analysis or process relating to the territory. For this to happen, users must be able to access these datasets and reuse them. An element often considered to hinder the full dissemination of OGD is the quality of their metadata. Starting from an experimental investigation conducted on over 160,000 geospatial datasets belonging to six national and international OGD portals, the first objective of this work is to provide an overview of the usage of these portals, measured in terms of dataset views and downloads. Furthermore, to assess the possible influence of metadata quality on the use of geospatial datasets, the metadata of each dataset were assessed, and the correlation between these two variables was measured. The results show a significant underutilization of geospatial datasets and a generally poor quality of their metadata. In addition, only a weak correlation was found between usage and metadata quality, not strong enough to assert with certainty that the latter is a determining factor of the former.
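
As a rough sketch of the correlation analysis, the example below scores metadata completeness over a handful of assumed fields and relates it to usage (views plus downloads) with Spearman's rank correlation; the field list, the scoring, and the choice of Spearman's rho are illustrative assumptions rather than the quality model actually used in the study.

```python
from scipy.stats import spearmanr

KEY_FIELDS = ("title", "description", "license", "keywords", "spatial")

def completeness_score(metadata, fields=KEY_FIELDS):
    """Toy metadata-quality score: the fraction of key fields that are non-empty."""
    return sum(bool(metadata.get(f)) for f in fields) / len(fields)

def quality_usage_correlation(datasets):
    """Spearman correlation between metadata quality and usage (views + downloads).

    `datasets` is a list of dicts, each with a `metadata` dict and usage counters.
    """
    quality = [completeness_score(d["metadata"]) for d in datasets]
    usage = [d.get("views", 0) + d.get("downloads", 0) for d in datasets]
    rho, p_value = spearmanr(quality, usage)
    return rho, p_value

# Illustrative records, not real portal data.
sample = [
    {"metadata": {"title": "Rivers", "description": "River network", "license": "CC-BY"},
     "views": 120, "downloads": 30},
    {"metadata": {"title": "Basins"}, "views": 5, "downloads": 0},
    {"metadata": {"title": "Roads", "description": "Road network", "license": "CC0",
                  "keywords": ["transport"], "spatial": "BR"},
     "views": 300, "downloads": 80},
]
print(quality_usage_correlation(sample))
```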


Author(s):  
Sébastien Canard ◽  
Nicolas Desmoulins ◽  
Sébastien Hallay ◽  
Adel Hamdi ◽  
Dominique Le Hello

2008 ◽  
Vol 8 (11) ◽  
pp. 2847-2857 ◽  
Author(s):  
J. W. Krzyścin ◽  
J. L. Borkowski

Abstract. Total ozone data over Europe are available for only a few ground-based stations in the pre-satellite era, precluding examination of the spatial trend variability over the whole continent. The need for gridded ozone data for trend analysis and as input to radiative transfer models motivated a reconstruction of daily ozone values back to January 1950. The description of the reconstruction model and its validation were the subject of our previous paper. The database used was built within the objectives of COST action 726, "Long-term changes and climatology of UV radiation over Europe". Here we focus on trend analyses. The long-term variability of total ozone is discussed using the results of a flexible trend model applied to the reconstructed total ozone data for the period 1950–2004. The trend pattern, which comprises both anthropogenic and "natural" components, is not assumed a priori but comes from a smooth curve fit to the zonal monthly means and monthly grid values. The long-term ozone changes are calculated separately for the cold (October–April of the following year) and warm (May–September) seasons. Confidence intervals for the estimated ozone changes are derived by block bootstrapping. Statistically significant negative trends are found over almost the whole of Europe only in the period 1985–1994. Negative trends of up to −3% per decade appeared over small areas in earlier periods, when the anthropogenic forcing on the ozone layer was weak. Statistically significant positive trends are found only during the warm seasons of 1995–2004, over the Svalbard archipelago. The reduction of the ozone level in 2004 relative to that before the satellite era is not dramatic, i.e., up to ~−5% and ~−3.5% in the cold and warm subperiods, respectively. The present ozone level is still depleted over many popular resorts in southern Europe and northern Africa. For high-latitude regions a trend overturning can be inferred in the last decade (1995–2004), as ozone-depleted areas are no longer found there in 2004 despite the substantial ozone depletion of the period 1985–1994.
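
A minimal sketch of the block bootstrap used to derive the confidence intervals, applied here to synthetic monthly anomalies, with a simple least-squares slope standing in for the flexible trend model; the block length, number of resamples, and percentile interval are assumptions for illustration.

```python
import numpy as np

def linear_trend(series):
    """Least-squares slope per time step (a stand-in for the flexible trend model)."""
    t = np.arange(series.size)
    return np.polyfit(t, series, 1)[0]

def block_bootstrap_ci(series, block_len=12, n_boot=2000, alpha=0.05, seed=0):
    """Moving-block bootstrap confidence interval for a trend estimate.

    Resampling whole blocks (here 12-month blocks) preserves the
    autocorrelation of the ozone series that an ordinary bootstrap
    would destroy.
    """
    rng = np.random.default_rng(seed)
    n = series.size
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)

    estimates = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n_blocks, replace=True)
        resampled = np.concatenate([series[s:s + block_len] for s in picks])[:n]
        estimates[b] = linear_trend(resampled)

    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return linear_trend(series), (lo, hi)

# Illustrative synthetic anomalies: 660 months (1950-2004) with a weak negative trend.
rng = np.random.default_rng(1)
anomalies = -0.002 * np.arange(660) + rng.normal(0.0, 1.0, 660)
print(block_bootstrap_ci(anomalies))
```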

