Uncertainty information in climate data records from Earth observation

2017 ◽  
Vol 9 (2) ◽  
pp. 511-527 ◽  
Author(s):  
Christopher J. Merchant ◽  
Frank Paul ◽  
Thomas Popp ◽  
Michael Ablain ◽  
Sophie Bontemps ◽  
...  

Abstract. The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. 
The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
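The scale dependence described above (a negligible per-datum effect dominating a large-scale average) can be illustrated with a minimal sketch. This is an illustration, not a calculation from the paper: for n averaged data, an uncorrelated error component shrinks as 1/sqrt(n), while a fully correlated (systematic) component does not shrink at all.

```python
import numpy as np

def uncertainty_of_mean(u_random, u_systematic, n):
    """Standard uncertainty of the mean of n data.

    u_random: per-datum uncertainty from uncorrelated error effects
    (averages down as 1/sqrt(n)); u_systematic: per-datum uncertainty
    from a fully correlated effect shared by all n data (does not
    average down). Both values are hypothetical, for illustration.
    """
    return float(np.sqrt(u_random ** 2 / n + u_systematic ** 2))

# A small systematic effect dominates once many data are averaged:
u_single = uncertainty_of_mean(u_random=0.5, u_systematic=0.02, n=1)
u_mean = uncertainty_of_mean(u_random=0.5, u_systematic=0.02, n=10_000)
```

For a single datum the 0.02 systematic term is negligible next to the 0.5 of random noise; averaged over 10,000 data it dominates the total, which is why error correlation properties must accompany per-datum uncertainties.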



2021 ◽  
Author(s):  
Christian Borger ◽  
Steffen Beirle ◽  
Thomas Wagner

Atmospheric water plays a key role in the Earth's energy budget and temperature distribution via radiative effects (clouds and vapour) and latent heat transport. The distribution and transport of water vapour are thus closely linked to atmospheric dynamics on different spatiotemporal scales. In this context, global monitoring of the water vapour distribution is essential for numerical weather prediction, climate modelling, and a better understanding of climate feedbacks.

Total column water vapour (TCWV), or integrated water vapour, can be retrieved from satellite spectra in the visible "blue" spectral range (430-450 nm) using Differential Optical Absorption Spectroscopy (DOAS). The UV-vis spectral range offers several advantages for monitoring the global water vapour distribution: for instance, it allows for accurate, straightforward retrievals over ocean and land, even under partly cloudy conditions.

To investigate changes in the TCWV distribution from space, the Ozone Monitoring Instrument (OMI) on board NASA's Aura satellite is particularly promising, as it provides long-term measurements (late 2004-ongoing) with daily global coverage.

Here, we present a global analysis of trends in total column water vapour retrieved from multiple years of OMI observations (2005-2020). Furthermore, we put our results into context with trends from other climate data records, and validate the OMI TCWV data by comparison to additional reference data sets.
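A generic way to estimate such a trend is a least-squares fit of a linear term plus an annual harmonic. The sketch below is a hedged illustration on synthetic data; the authors' actual trend methodology may differ, and all variable names and values here are hypothetical.

```python
import numpy as np

def fit_trend(t_years, series):
    """Least-squares fit of series = a + b*t + annual harmonic.

    Returns the linear trend b, in series-units per year. A generic
    trend model, not necessarily the one used in the OMI analysis.
    """
    X = np.column_stack([
        np.ones_like(t_years),          # offset
        t_years,                        # linear trend
        np.sin(2.0 * np.pi * t_years),  # annual cycle
        np.cos(2.0 * np.pi * t_years),
    ])
    coeffs, *_ = np.linalg.lstsq(X, series, rcond=None)
    return coeffs[1]

# Synthetic monthly TCWV-like series with a 0.1 unit/yr trend
# superposed on a seasonal cycle; the fit recovers the trend.
t = np.arange(0.0, 16.0, 1.0 / 12.0)
y = 20.0 + 0.1 * t + 1.5 * np.sin(2.0 * np.pi * t)
```

Including the harmonic columns prevents the seasonal cycle from aliasing into the trend estimate when the record does not span whole years.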


2020 ◽  
Vol 163 (3) ◽  
pp. 1379-1397 ◽  
Author(s):  
Rutger Dankers ◽  
Zbigniew W. Kundzewicz

Abstract. This paper reviews the sources of uncertainty in physical climate impact assessments. It draws on examples from related fields such as climate modelling and numerical weather prediction in discussing how to interpret the results of multi-model ensembles and the role of model evaluation. Using large-scale, multi-model simulations of hydrological extremes as an example, we demonstrate how large uncertainty at the local scale does not preclude more robust conclusions at the global scale. Finally, some recommendations are made: climate impact studies should be clear about the questions they want to address, transparent about the uncertainties involved, and honest about the assumptions being made.
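The point that local disagreement need not preclude global robustness can be illustrated with a toy ensemble (purely synthetic numbers, not the paper's simulations): give every member the same small global-mean change plus large member-specific local noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_cells = 10, 1000

# Common global-mean change of +0.2, large model-specific local noise:
changes = 0.2 + rng.normal(0.0, 1.0, size=(n_models, n_cells))

# Local robustness: cells where at least 80% of models agree on sign.
frac_positive = (changes > 0).mean(axis=0)
robust_cells = float(np.mean((frac_positive >= 0.8) | (frac_positive <= 0.2)))

# Global robustness: every model's global mean has the same sign.
all_agree_globally = bool(np.all(changes.mean(axis=1) > 0))
```

Only a minority of cells show a robust local signal, yet every member agrees on the sign of the global-mean change, mirroring the paper's argument about scale-dependent robustness.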


2019 ◽  
Vol 11 (20) ◽  
pp. 2387 ◽  
Author(s):  
Martina Lagasio ◽  
Antonio Parodi ◽  
Luca Pulvirenti ◽  
Agostino Meroni ◽  
Giorgio Boni ◽  
...  

The Mediterranean region is frequently struck by severe rainfall events causing numerous casualties and several million euros of damage every year. Improving forecast accuracy is therefore a fundamental goal for limiting social and economic damage. Numerical Weather Prediction (NWP) models are currently able to produce forecasts at km-scale grid spacing, but unreliable surface information and poor knowledge of the initial state of the atmosphere may produce inaccurate simulations of weather phenomena. The STEAM (SaTellite Earth observation for Atmospheric Modelling) project aims to investigate whether data from the Sentinel satellite constellation, in combination with Global Navigation Satellite System (GNSS) observations, can be used to better understand and predict, at higher spatio-temporal resolution, the atmospheric phenomena that result in severe weather events. Two heavy rainfall events that occurred in Italy in the autumn of 2017 are studied: a localized, short-lived event and a long-lived one. By assimilating a wide range of Sentinel and GNSS observations in a state-of-the-art NWP model, it is found that the forecasts benefit most when the model is provided with information on the wind field and/or the water vapor content.


2018 ◽  
Vol 18 (10) ◽  
pp. 2769-2783 ◽  
Author(s):  
Keith J. Beven ◽  
Willy P. Aspinall ◽  
Paul D. Bates ◽  
Edoardo Borgomeo ◽  
Katsuichiro Goda ◽  
...  

Abstract. Part 1 of this paper has discussed the uncertainties arising from gaps in knowledge or limited understanding of the processes involved in different natural hazard areas. Such deficits may include uncertainties about frequencies, process representations, parameters, present and future boundary conditions, consequences and impacts, and the meaning of observations in evaluating simulation models. These are the epistemic uncertainties that can be difficult to constrain, especially in terms of event or scenario probabilities, even as elicited probabilities rationalized on the basis of expert judgements. This paper reviews the issues raised by trying to quantify the effects of epistemic uncertainties. Such scientific uncertainties might have significant influence on decisions made, say, for risk management, so it is important to examine the sensitivity of such decisions to different feasible sets of assumptions, to communicate the meaning of associated uncertainty estimates, and to provide an audit trail for the analysis. A conceptual framework for good practice in dealing with epistemic uncertainties is outlined and the implications of applying the principles to natural hazard assessments are discussed. Six stages are recognized, with recommendations at each stage as follows: (1) framing the analysis, preferably with input from potential users; (2) evaluating the available data for epistemic uncertainties, especially when they might lead to inconsistencies; (3) eliciting information on sources of uncertainty from experts; (4) defining a workflow that will give reliable and accurate results; (5) assessing robustness to uncertainty, including the impact on any decisions that are dependent on the analysis; and (6) communicating the findings and meaning of the analysis to potential users, stakeholders, and decision makers. 
Visualizations are helpful in conveying the nature of the uncertainty outputs, while recognizing that the deeper epistemic uncertainties might not be readily amenable to visualizations.


Author(s):  
Pundra Chandra Shaker Reddy ◽  
Alladi Sureshbabu

Aims & Background: India has widely varying climate conditions, comprising distinct seasons and topographical settings with high temperatures, cold weather, drought, and seasonal heavy rainfall. This extreme climatic variety makes accurate weather prediction a challenging task. A majority of the country's population depends on agriculture, and farmers require climate information to plan planting: weather prediction guides the farming sector in deciding the start of the planting season, as well as the quality and quantity of the harvest. Rainfall is one of the variables influencing agriculture. Objectives & Methods: The main goal of this work is early and accurate rainfall forecasting, which helps people living in regions prone to natural calamities such as floods, and supports farmers' decision-making in crop and water management, using big data analytics to improve profit and production. We propose an automated framework called the Enhanced Multiple Linear Regression Model (EMLRM), built on the MapReduce algorithm and the Hadoop file system. We used climate data from the IMD (India Meteorological Department, Hyderabad) for the period 1901 to 2002. Results: Our experimental outcomes demonstrate that the proposed model forecasts rainfall with better accuracy than other existing models. Conclusion: The results of the analysis will help farmers adopt an effective modelling approach by anticipating long-term seasonal rainfall.
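The abstract does not give the EMLRM equations. As a hedged baseline, an ordinary multiple linear regression of rainfall on climate predictors can be sketched as below; the predictor names and data are hypothetical, and the paper's enhancements and Hadoop/MapReduce backend are not reproduced here.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares for y = b0 + X @ b.

    X: (n_samples, n_features) climate predictors (e.g. temperature,
    humidity); y: observed rainfall. Returns [b0, b1, ..., bk].
    """
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    """Apply a fitted model to new predictor rows."""
    return np.column_stack([np.ones(len(X)), X]) @ beta

# On exactly linear synthetic data the coefficients are recovered:
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1]
beta = fit_mlr(X, y)
```

In a MapReduce setting, the normal-equation terms (X^T X and X^T y) can be accumulated per data shard and summed in the reduce step, which is what makes linear regression attractive for Hadoop-scale records.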


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep learning-based methods have emerged as one of the most effective and practical solutions to a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward early diagnosis of many heart dysfunctions is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, because the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty, using a Bayesian-based approximation method without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, achieving multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We also demonstrate the method's high performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of recurrent neural networks.
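The abstract names a Bayesian-based approximation without extra parameters; Monte Carlo dropout is one widely used technique of that kind. A hedged sketch of its prediction-time aggregation follows; the paper's exact method may differ, and the stochastic predictors below are toy stand-ins, not a trained ECG network.

```python
import numpy as np

def mc_predict(stochastic_forward, x, n_passes=50):
    """Aggregate stochastic forward passes (e.g. dropout kept on).

    Each call to stochastic_forward(x) returns class probabilities
    from one posterior sample. Returns the mean probabilities and the
    predictive entropy, a scalar uncertainty score (higher = less
    certain prediction).
    """
    probs = np.stack([stochastic_forward(x) for _ in range(n_passes)])
    mean_p = probs.mean(axis=0)
    entropy = float(-np.sum(mean_p * np.log(mean_p + 1e-12)))
    return mean_p, entropy

rng = np.random.default_rng(1)
# Toy stand-ins for a network run with stochasticity enabled:
peaked = lambda x: rng.dirichlet([50.0, 1.0, 1.0])   # confident model
diffuse = lambda x: rng.dirichlet([1.0, 1.0, 1.0])   # uncertain model
_, h_peaked = mc_predict(peaked, None)
_, h_diffuse = mc_predict(diffuse, None)
```

The diffuse predictor yields a much higher predictive entropy than the peaked one, which is the signal a clinician-facing system could use to defer low-confidence classifications.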


2007 ◽  
Vol 135 (6) ◽  
pp. 2168-2184 ◽  
Author(s):  
Gregory L. West ◽  
W. James Steenburgh ◽  
William Y. Y. Cheng

Abstract Spurious grid-scale precipitation (SGSP) occurs in many mesoscale numerical weather prediction models when the simulated atmosphere becomes convectively unstable and the convective parameterization fails to relieve the instability. Case studies presented in this paper illustrate that SGSP events are also found in the North American Regional Reanalysis (NARR) and are accompanied by excessive maxima in grid-scale precipitation, vertical velocity, moisture variables (e.g., relative humidity and precipitable water), mid- and upper-level equivalent potential temperature, and mid- and upper-level absolute vorticity. SGSP events in environments favorable for high-based convection can also feature low-level cold pools and sea level pressure maxima. Prior to 2003, retrospectively generated NARR analyses feature an average of approximately 370 SGSP events annually. Beginning in 2003, however, NARR analyses are generated in near–real time by the Regional Climate Data Assimilation System (R-CDAS), which is identical to the retrospective NARR analysis system except for the input precipitation and ice cover datasets. Analyses produced by the R-CDAS feature a substantially larger number of SGSP events with more than 4000 occurring in the original 2003 analyses. An oceanic precipitation data processing error, which resulted in a reprocessing of NARR analyses from 2003 to 2005, only partially explains this increase since the reprocessed analyses still produce approximately 2000 SGSP events annually. These results suggest that many NARR SGSP events are not produced by shortcomings in the underlying Eta Model, but by the specification of anomalous latent heating when there is a strong mismatch between modeled and assimilated precipitation. NARR users should ensure that they are using the reprocessed NARR analyses from 2003 to 2005 and consider the possible influence of SGSP on their findings, particularly after the transition to the R-CDAS.
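For users wanting to screen reanalysis output for such events, a hypothetical flagging rule might compare grid-scale and convective precipitation. The criterion and thresholds below are illustrative inventions, not the detection algorithm used in the NARR study.

```python
import numpy as np

def flag_sgsp(grid_precip, conv_precip, abs_threshold=50.0, ratio=10.0):
    """Flag points where grid-scale precipitation is both extreme in
    absolute terms and far larger than the convective component,
    hinting that the convective scheme failed to relieve instability.
    Units and thresholds (e.g. mm per accumulation period) are
    illustrative only.
    """
    gp = np.asarray(grid_precip, dtype=float)
    cp = np.asarray(conv_precip, dtype=float)
    return (gp > abs_threshold) & (gp > ratio * (cp + 1e-6))

# Moderate rain, a grid-scale spike, and heavy but convective rain:
flags = flag_sgsp([2.0, 80.0, 60.0], [1.0, 1.0, 40.0])
```

Only the middle point is flagged: heavy precipitation alone is not suspicious when the convective scheme is doing comparable work.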


2021 ◽  
Author(s):  
James Harding

Earth Observation (EO) satellites are drawing considerable attention in water resource management, given their potential to provide unprecedented information on the condition of aquatic ecosystems. Despite ocean colour's long history, water quality parameter retrieval from shallow and inland waters remains a complex undertaking. Consistent, cross-mission retrievals of the primary optical parameters using state-of-the-art algorithms are limited by the added optical complexity of these waters. Less work has acknowledged their non-optical or weakly optical counterparts, which can be more informative than the optically vivid parameters, although their covariance is regionally specific. Here, we introduce a multi-input, multi-output Mixture Density Network (MDN) that largely outperforms existing algorithms when applied across different bio-optical regimes in shallow and inland water bodies. The model is trained and validated using a sizeable historical database in excess of 1,000,000 samples across 38 optical and non-optical parameters, spanning 20 years and 500 surface waters in Scotland. The single network learns to concurrently predict Chlorophyll-a, Colour, Turbidity, pH, Calcium, Total Phosphorus, Total Organic Carbon, Temperature, Dissolved Oxygen, and Suspended Solids from real Landsat 7, Landsat 8, and Sentinel 2 spectra. The MDN is found to fully preserve the covariances of the optical and non-optical parameters, while known one-to-many mappings within the non-optical parameters are retained. Initial performance evaluations suggest significant improvements in Chl-a retrieval over existing state-of-the-art algorithms. MDNs characteristically provide a means of quantifying the noise variance around a prediction for a given input, here pertaining to real data under a wide range of atmospheric conditions. We find this to be informative, for example, in detecting outlier pixels such as clouds, and it may similarly be used to guide or inform future work in academic or industrial contexts.
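The abstract does not spell out the MDN mathematics. As a hedged reminder of the core idea, the network emits mixture parameters per input and is trained by minimising the negative log-likelihood of the observed target under that mixture; the one-dimensional sketch below is generic, not the paper's multi-output implementation.

```python
import numpy as np

def mdn_nll(weights, means, sigmas, y):
    """Negative log-likelihood of target y under a 1-D Gaussian mixture.

    weights/means/sigmas are the per-component parameters an MDN head
    would emit for one input; the spread of the mixture around its
    mean is what supplies the per-prediction noise variance mentioned
    above.
    """
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    pdf = np.exp(-0.5 * ((y - mu) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)
    return float(-np.log(np.sum(w * pdf) + 1e-300))
```

A target near a heavily weighted component scores a much lower loss than one far from every component, which is how training shapes the mixture and how unusual inputs (e.g. cloud-contaminated pixels) end up with inflated predicted variance.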


2021 ◽  
Author(s):  
Richard Saltus ◽  
Arnaud Chulliat ◽  
Brian Meyer ◽  
Christopher Amante

Magnetic maps depict spatial variations in the Earth's magnetic field. These variations occur at a wide range of scales and are produced by a variety of physical processes, related to factors including the structure and evolution of the Earth's core field and the geologic distribution of magnetic minerals in the lithosphere. Humankind has produced magnetic maps for hundreds of years with increasing fidelity and accuracy, and there is a general understanding (particularly among the geophysicists who produce and use these maps) of their approximate level of resolution and accuracy. However, few magnetic maps, or the digital grids that typically underpin them, have been produced with accompanying uncertainty quantification. When uncertainty is addressed, it is typically a statistical representation at the grid or survey level (e.g., ±10 nT overall uncertainty based on line crossings for a modern airborne survey) rather than at the cell-by-cell local level.

As magnetic map data are increasingly used in complex inversions and in combination with other data or constraints (including in machine learning applications), it is increasingly important to understand the uncertainties in these data. An example of an application needing detailed uncertainty estimation is the use of magnetic map information for alternative navigation. In this application, data from an onboard magnetometer are compared with previously mapped (or modeled) magnetic variations; the uncertainty of this previously mapped information has immediate implications for the achievable navigation accuracy.

We are exploring the factors contributing to magnetic map uncertainty and producing uncertainty estimates for testing against new data collected in previously mapped (or modeled) areas. These factors include (but are likely not limited to) the vintage and type of measured data, the spatial distribution of measured data, the expected magnetic variability (e.g., geologic or geochemical environment), statistics of redundant measurements, and the spatial scale/resolution of the magnetic map or model. The purpose of this talk is to discuss the overall issue, present our initial results, and solicit feedback and ideas from the interpretation community.
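One simple way to turn such factor-by-factor assessments into a cell-level estimate is quadrature combination of the component uncertainties. The sketch assumes the components are independent, and the component names are hypothetical, not the authors' final scheme.

```python
import numpy as np

def cell_uncertainty(u_survey, u_interp, u_secular):
    """Combine independent per-cell uncertainty components (nT) in
    quadrature: survey/instrument error, interpolation error (growing
    with distance from the nearest measurement), and secular-variation
    error (growing with data vintage). Independence of the components
    is assumed; correlated components would need covariance terms.
    """
    comps = np.array([u_survey, u_interp, u_secular], dtype=float)
    return float(np.sqrt(np.sum(comps ** 2)))
```

Evaluating this per grid cell, with each component mapped from the factors listed above (vintage, data distribution, expected variability), yields exactly the cell-by-cell uncertainty layer that a navigation user would need.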

