High-resolution monthly precipitation climatologies over Norway (1981–2010): Joining numerical model data sets and in situ observations

2018 ◽  
Vol 39 (4) ◽  
pp. 2057-2070 ◽  
Author(s):  
Alice Crespi ◽  
Cristian Lussana ◽  
Michele Brunetti ◽  
Andreas Dobler ◽  
Maurizio Maugeri ◽  
...  
2021 ◽  
Author(s):  
Jouke de Baar ◽  
Gerard van der Schrier ◽  
Irene Garcia-Marti ◽  
Else van den Besselaar

Objective

The purpose of the European Copernicus Climate Change Service (C3S) is to support society by providing information about the past, present and future climate. For the service related to in situ observations, one of the objectives is to provide high-resolution (0.1° × 0.1° and 0.25° × 0.25°) gridded wind speed fields. The gridded wind fields are based on ECA&D daily average station observations for the period 1970-2020.

Research question

We address the following research questions: (1) How efficiently can we provide the gridded wind fields as a statistically reliable ensemble, in order to represent the uncertainty of the gridding? (2) How efficiently can we exploit high-resolution geographical auxiliary variables (e.g. digital elevation model, terrain roughness) to augment the station data from a sparse network, in order to provide gridded wind fields with high-resolution local features?

Approach

In our analysis, we apply greedy forward selection linear regression (FSLR) to include the high-resolution effects of the auxiliary variables on monthly-mean data. These data provide a 'background' for the daily estimates. We apply cross-validation to avoid FSLR over-fitting and use full-cycle bootstrapping to create FSLR ensemble members. Then, we apply Gaussian process regression (GPR) to the daily anomalies. We consider the effect of the spatial distribution of station locations on the GPR gridding uncertainty.

The goal of this work is to produce several decades of daily gridded wind fields; hence, computational efficiency is of utmost importance. We alleviate the computational cost of the FSLR and GPR analyses by incorporating greedy algorithms and sparse matrix algebra.

Novelty

The gridded wind fields are calculated as a statistical ensemble of realizations. In the present analysis, the ensemble spread is based on uncertainties arising from the auxiliary variables as well as from the spatial distribution of stations.

Cross-validation is used to tune the GPR hyperparameters. Whereas conventional GPR hyperparameter tuning aims at optimal prediction of the gridded mean, we tune the GPR hyperparameters for optimal prediction of the gridded ensemble spread.

Building on our experience with providing similar gridded climate data sets, this set of gridded wind fields is a novel addition to the E-OBS climate data sets.
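To make the two-step scheme concrete, the sketch below shows one way the approach could be set up in Python with scikit-learn: cross-validated greedy forward selection for the monthly background, then GPR on daily anomalies, with the predictive standard deviation reflecting the station distribution. This is a minimal illustration under assumed toy data, not the authors' production code; all variable names, predictor counts and kernel choices are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-ins: 200 stations, 6 auxiliary predictors (elevation,
# roughness, ...), a monthly-mean wind speed and one day of anomalies.
X_aux = rng.normal(size=(200, 6))
y_month = 2.0 + 1.2 * X_aux[:, 0] - 0.7 * X_aux[:, 3] + rng.normal(0.0, 0.1, 200)
coords = rng.uniform(0.0, 10.0, size=(200, 2))   # station lon/lat (toy units)
y_daily_anom = rng.normal(0.0, 0.3, 200)         # daily obs minus background

# Step 1: greedy forward selection with cross-validation guards against
# over-fitting the auxiliary predictors (the monthly 'background').
fslr = SequentialFeatureSelector(
    LinearRegression(), direction="forward",
    n_features_to_select="auto", tol=1e-3, cv=5,
)
fslr.fit(X_aux, y_month)
background = LinearRegression().fit(X_aux[:, fslr.get_support()], y_month)

# Step 2: GPR on the daily anomalies; the predictive standard deviation
# grows with distance from the stations, i.e. with network sparsity.
gpr = GaussianProcessRegressor(
    kernel=Matern(length_scale=1.0, nu=1.5) + WhiteKernel(noise_level=0.05),
    normalize_y=True,
)
gpr.fit(coords, y_daily_anom)

# Evaluate on a coarse target grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
anom_mean, anom_std = gpr.predict(grid, return_std=True)
```

The bootstrap ensemble described in the abstract would repeat step 1 on resampled stations; combining that FSLR spread with the GPR predictive standard deviation would yield the gridded ensemble spread.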


Ocean Science ◽  
2019 ◽  
Vol 15 (2) ◽  
pp. 249-268 ◽  
Author(s):  
Johannes Schulz-Stellenfleth ◽  
Joanna Staneva

Abstract. In many coastal areas there is an increasing number and variety of observational data available, often with very heterogeneous temporal and spatial sampling characteristics. With the advent of new systems, like the radar altimeter on board the Sentinel-3A satellite, many questions arise concerning the accuracy and added value of different instruments and numerical models. Quantification of errors is a key factor for applications such as data assimilation and forecast improvement. In the past, the triple collocation method for estimating the systematic and stochastic errors of measurements and numerical models was successfully applied to different data sets. This method relies on the assumption that three independent data sets provide estimates of the same quantity. In coastal areas with strong gradients, even small distances between measurements can lead to large differences, and this assumption can become critical. In this study the triple collocation method is extended in different ways with the specific problems of the coast in mind. In addition to the nearest-neighbour approximations considered so far, the presented method allows a large variety of interpolation approaches to be used to take spatial variations in the observed area into account. Observation and numerical model errors can therefore be estimated even if the distance between the different data sources is too large to assume that they measure the same quantity. If the number of observations is sufficient, the method can also be used to estimate error correlations between certain data source components. As a second novelty, an estimator for the uncertainty of the derived observation errors is obtained as a function of the covariance matrices of the input data and the number of available samples. In a first step, the method is assessed using synthetic observations and Monte Carlo simulations. The technique is then applied to a data set of Sentinel-3A altimeter measurements, in situ wave observations, and numerical wave model data with a focus on the North Sea. Stochastic observation errors for the significant wave height, as well as bias and calibration errors, are derived for the model and the altimeter. The analysis indicates a slight overestimation of altimeter wave heights, which becomes more pronounced at higher sea states. The smallest stochastic errors are found for the in situ measurements. Different observation geometries of in situ data and altimeter tracks are furthermore analysed, considering 1-D and 2-D interpolation approaches. For example, the geometry of an altimeter track passing between two in situ wave instruments is considered, with model data available at the in situ locations. It is shown that, for a sufficiently large sample, the errors of all data sources, as well as the error correlations of the model, can be estimated with the new method.
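For readers unfamiliar with the baseline that this paper extends, the standard covariance-notation triple collocation estimator is short enough to show in full. The Python sketch below implements that classical form (not the paper's coastal extensions) and checks it on synthetic wave heights, in the spirit of the Monte Carlo assessment mentioned above; the error magnitudes are illustrative assumptions.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance-based triple collocation error estimate.

    Given three collocated data sets observing the same geophysical
    quantity with mutually independent errors, returns the estimated
    stochastic error variance of each data set.
    """
    C = np.cov(np.vstack([x, y, z]))
    return np.array([
        C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],  # error variance of x
        C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],  # error variance of y
        C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],  # error variance of z
    ])

# Synthetic check: a common "truth" plus independent noise per source
rng = np.random.default_rng(0)
t = rng.gamma(4.0, 0.5, size=20000)            # toy significant wave height
altimeter = t + rng.normal(0.0, 0.20, t.size)
in_situ = t + rng.normal(0.0, 0.10, t.size)
model = t + rng.normal(0.0, 0.30, t.size)
rmse = np.sqrt(triple_collocation(altimeter, in_situ, model))
print(rmse)                                     # roughly [0.20, 0.10, 0.30]
```

The assumption of a common truth is exactly what breaks down near the coast; the paper's interpolation-based extension relaxes it, and the sample-size dependence of the estimates motivates the paper's new uncertainty estimator.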


2020 ◽  
Author(s):  
Sara Moutia

The main advantage of remote sensing products is that they offer reasonably good temporal and spatial coverage and are available in near real time. An understanding of the strengths and weaknesses of satellite data is therefore needed before adopting them as an alternative source of information with acceptable accuracy. On the one hand, this study presents an intercomparison between CMSAF Sunshine Duration (SD) data records and ground observations from 30 data sets covering 1983 to 2015; the correlation is highly significant and the satellite data fit the in situ observations very closely. On the other hand, trend analysis is applied to the SD and Surface Incoming Direct radiation (SID) data: a number of stations show a statistically significant decreasing trend in SD, and SID also shows a decreasing trend over Morocco in most regions, especially in summer. The results indicate a general tendency towards decreasing incoming solar radiation, mostly during summer, which could be of some concern for solar energy applications.
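The abstract does not name the significance test behind "statistically significant decreasing trend"; a common non-parametric choice for monotonic trends in sunshine-duration series is the Mann-Kendall test. The sketch below is a minimal implementation (no tie correction) applied to a toy series standing in for a summer-mean SD record; the data are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Two-sided Mann-Kendall test for a monotonic trend (no tie correction)."""
    x = np.asarray(series, dtype=float)
    n = x.size
    # S = number of concordant pairs minus number of discordant pairs
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1.0) / np.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1.0) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

# Toy declining series standing in for a 1983-2015 summer-mean SD record
years = np.arange(1983, 2016)
rng = np.random.default_rng(1)
sd = 9.0 - 0.02 * (years - 1983) + rng.normal(0.0, 0.15, years.size)
s, z, p = mann_kendall(sd)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")  # p < 0.05: significant trend
```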


2021 ◽  
Author(s):  
Tai-Long He ◽  
Dylan Jones ◽  
Kazuyuki Miyazaki ◽  
Kevin Bowman ◽  
Zhe Jiang ◽  
...  

The COVID-19 pandemic led to the lockdown of over one-third of Chinese cities in early 2020. Observations have shown significant reductions in atmospheric abundances of NO₂ over China during this period. This change in atmospheric NO₂ implies a dramatic change in emissions of NOₓ, which provides a unique opportunity to study the response of atmospheric chemistry to large reductions in anthropogenic emissions. We use a deep learning (DL) model to quantify the change in surface emissions of NOₓ in China associated with the observed changes in atmospheric NO₂ during the lockdown period. Compared to conventional data assimilation systems, deep neural networks are free of the potential errors associated with parameterized subgrid-scale processes. Furthermore, they are not susceptible to the chemical errors typically found in atmospheric chemical transport models. The neural-network-based approach also offers a more computationally efficient means of inverse modeling of NOₓ emissions at high spatial resolutions. Our DL model is trained using meteorological predictors and reanalysis data of surface NO₂ from 2005 to 2017. The evaluation is conducted using in situ measurements of NO₂ in 2019 and 2020. The Baidu 'Qianxi' migration data sets are used to evaluate the model's performance in capturing the typical variation in Chinese NOₓ emissions during the Chinese New Year holidays. The TROPOMI-derived TCR-2 chemical reanalysis is used to evaluate the DL analysis in 2020. We show that the DL-based approach better reproduces the variation in anthropogenic NOₓ emissions and captures the reduction in Chinese NOₓ emissions during the COVID-19 pandemic.
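The abstract does not specify the network architecture or feature set, so purely as an illustration of the inverse-modelling idea (regressing a NOₓ emission estimate from meteorological predictors plus surface NO₂), a minimal PyTorch sketch might look as follows. The layer sizes, predictor count and training targets are all assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

N_PREDICTORS = 8  # assumed: e.g. winds, temperature, boundary-layer height,
                  # plus surface NO2, all at one grid cell

class EmissionNet(nn.Module):
    """Toy stand-in mapping per-cell predictors to a NOx emission estimate."""
    def __init__(self, n_in=N_PREDICTORS, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),  # NOx emission at the grid cell
        )

    def forward(self, x):
        return self.net(x)

model = EmissionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a random toy batch; in the setting described above,
# the targets would come from emission reanalysis data (2005-2017).
x = torch.randn(32, N_PREDICTORS)
y = torch.randn(32, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```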


2019 ◽  
Vol 11 (8) ◽  
pp. 986 ◽  
Author(s):  
Joanne Nightingale ◽  
Jonathan P.D. Mittaz ◽  
Sarah Douglas ◽  
Dick Dee ◽  
James Ryder ◽  
...  

Decision makers need accessible, robust evidence to introduce new policies to mitigate and adapt to climate change. An increasing amount of environmental information concerning climate observations and trends is available to policy makers. However, these data are hosted across a multitude of websites, often with inconsistent metadata and sparse information about their quality, accuracy and validity. Consequently, the task of comparing datasets to decide which is the most appropriate for a given purpose is very complex and often infeasible. In support of the European Union's Copernicus Climate Change Service (C3S) mission to provide authoritative information about the past, present and future climate in Europe and the rest of the world, each dataset provided through the service must undergo an evaluation of its climate relevance and scientific quality to aid such comparisons. This paper presents the framework for Evaluation and Quality Control (EQC) of climate data products derived from satellite and in situ observations to be catalogued within the C3S Climate Data Store (CDS). The EQC framework will be implemented by C3S as part of its operational quality assurance programme. It builds on past and present international investment in Quality Assurance for Earth Observation initiatives and extensive user-requirements-gathering exercises, as well as a broad evaluation of over 250 data products and a more in-depth evaluation of 24 individual data products derived from satellite and in situ observations across the land, ocean and atmosphere Essential Climate Variable (ECV) domains. A prototype Content Management System (CMS) to facilitate the process of collating, evaluating and presenting the quality aspects and status of each data product to data users is also described. The development of the EQC framework has highlighted cross-domain as well as ECV-specific science knowledge gaps in relation to assessing the quality of climate data sets derived from satellite and in situ observations. We discuss 10 common priority science knowledge gaps that will require further research investment to ensure that all quality aspects of climate data sets can be ascertained and that users have the range of information necessary to confidently select relevant products for their specific application.


2016 ◽  
Vol 43 (18) ◽  
pp. 9662-9668 ◽  
Author(s):  
Ming Pan ◽  
Xitian Cai ◽  
Nathaniel W. Chaney ◽  
Dara Entekhabi ◽  
Eric F. Wood
