Estimating the State of a Geophysical System with Sparse Observations: Time Delay Methods to Achieve Accurate Initial States for Prediction

2016
Author(s): Zhe An, Daniel Rey, Jing Xin Ye, Henry D. I. Abarbanel

Abstract. The data assimilation process, in which observational data are used to estimate the states and parameters of a dynamical model, becomes seriously impeded when the model expresses chaotic behavior and the number of measurements is below a critical threshold, Ls. Since this problem of insufficient measurements is typical across many fields, including numerical weather prediction, we analyze a method introduced in Rey et al. (2014a, b) to remedy this matter, in the context of the nonlinear shallow water equations on a β-plane. This approach generalizes standard nudging methods by utilizing time-delayed measurements to augment the transfer of information from the data to the model. We show that it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. For instance, in Whartenby et al. (2013) we found that to achieve this goal, standard nudging requires observing approximately 70 % of the full set of state variables. Using time delays, this number can be reduced to about 33 %, and even further if Lagrangian drifter information is also incorporated.
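
As a point of reference for the standard nudging that the paper generalizes, here is a minimal, self-contained sketch on the Lorenz-63 system rather than the shallow water equations; the gain, step size, and initial states are illustrative choices, not values from the paper.

```python
import math

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz-63 vector field.
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(state, dt, forcing=(0.0, 0.0, 0.0)):
    # One forward-Euler step with an optional nudging (forcing) term.
    f = lorenz63(state)
    return tuple(s + dt * (fi + gi) for s, fi, gi in zip(state, f, forcing))

dt, gain = 0.005, 20.0        # illustrative step size and nudging gain
truth = (1.0, 1.0, 20.0)      # trajectory generating the "observations"
model = (-3.0, 4.0, 15.0)     # model started from a wrong initial state

for _ in range(4000):
    obs = truth[0]                                 # only x is observed
    nudge = (gain * (obs - model[0]), 0.0, 0.0)    # relax model x toward the data
    truth = step(truth, dt)
    model = step(model, dt, nudge)

sync_error = math.sqrt(sum((a - b) ** 2 for a, b in zip(truth, model)))
```

Coupling through x alone is enough here because the driven (y, z) subsystem of Lorenz-63 is conditionally stable, which is the single-variable analogue of the "sufficient observations" question the abstract raises.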

2017 · Vol 24 (1) · pp. 9-22
Author(s): Zhe An, Daniel Rey, Jingxin Ye, Henry D. I. Abarbanel

Abstract. The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time-delayed measurements. We show that in certain circumstances it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
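
The sense in which time-delayed measurements carry extra information can be illustrated, loosely in the spirit of delay-coordinate (Takens) embedding, with a small helper that turns a single sparse scalar record into delay vectors; the function and toy series are illustrative, not part of the authors' method.

```python
def delay_embed(series, dim, tau):
    # Build delay-coordinate vectors [s(t), s(t - tau), ..., s(t - (dim-1)*tau)].
    # Each vector pools `dim` past observations, which is the sense in which
    # time delays let one sparse measurement stream constrain more of the state.
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

# A toy scalar observation record.
obs = [0.1 * t for t in range(10)]
vectors = delay_embed(obs, dim=3, tau=2)
# The first vector combines obs[4], obs[2], and obs[0].
```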


2019 · Vol 147 (4) · pp. 1107-1126
Author(s): Jonathan Poterjoy, Louis Wicker, Mark Buehner

Abstract A series of papers published recently by the first author introduces a nonlinear filter that operates effectively as a data assimilation method for large-scale geophysical applications. The method uses sequential Monte Carlo techniques adopted by particle filters, which make no parametric assumptions about the underlying prior and posterior error distributions. The filter also treats the underlying dynamical system as a set of loosely coupled systems to effectively localize the effect that observations have on posterior state estimates. This property greatly reduces the number of particles, or ensemble members, required for its implementation. For these reasons, the method is called the local particle filter. The current manuscript summarizes algorithmic advances made to the local particle filter following recent tests performed over a hierarchy of dynamical systems. The revised filter uses modified vector weight calculations and probability mapping techniques from earlier studies, along with new strategies for improving filter stability in situations where state variables are observed infrequently with very accurate measurements. Numerical experiments performed on low-dimensional data assimilation problems provide evidence that supports the theoretical benefits of the new improvements. As a proof of concept, the revised particle filter is also tested on a high-dimensional application from a real-time weather forecasting system at the NOAA/National Severe Storms Laboratory (NSSL). The proposed changes have large implications for researchers applying the local particle filter to real problems, such as data assimilation in numerical weather prediction models.
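
For orientation, the predict/weight/resample cycle that any bootstrap particle filter shares with the local particle filter can be sketched in a few lines; this is the textbook scalar version, without the localization, vector weights, or probability mapping that define the revised filter.

```python
import math
import random

random.seed(0)

def bootstrap_pf(observations, n_particles=500, proc_sd=1.0, obs_sd=1.0):
    # Minimal bootstrap particle filter for a scalar random walk observed in
    # Gaussian noise: predict, weight by likelihood, then resample.
    particles = [random.gauss(0.0, 5.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Predict: propagate each particle through the (random walk) model.
        particles = [p + random.gauss(0.0, proc_sd) for p in particles]
        # Weight: likelihood of the observation given each particle.
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: draw a new, equally weighted ensemble.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means

# Truth drifts upward; the filter mean should track it through the noise.
truth = [0.5 * t for t in range(20)]
obs = [x + random.gauss(0.0, 1.0) for x in truth]
est = bootstrap_pf(obs)
```

Because no parametric form is assumed for the posterior, this cycle degenerates quickly in high dimensions, which is precisely the problem the localization in the local particle filter is designed to mitigate.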


Author(s): Huug van den Dool

This is first and foremost a book about short-term climate prediction. The predictions we have in mind are for weather/climate elements, mainly temperature (T) and precipitation (P), at lead times longer than two weeks, beyond the realm of detailed Numerical Weather Prediction (NWP), i.e. predictions for the next month and the next seasons out to at most a few years. We call this short-term climate so as to distinguish it from long-term climate change, which is not the main subject of this book. A few decades ago “short-term climate prediction” was known as “long-range weather prediction”. In order to understand short-term climate predictions, their skill and what they reveal about the atmosphere, ocean and land, several chapters are devoted to constructing prediction methods. The approach taken is mainly empirical, which means literally that it is based on experience. We will use global data sets to represent the climate and weather humanity experienced (and measured!) in the past several decades. The idea is to use these existing data sets in order to construct prediction methods. In doing so we want to acknowledge that every measurement (with error bars) is a monument to the workings of Nature. We thought about using the word “statistical” instead of “empirical” in the title of the book. These two notions overlap, obviously, but we prefer the word “empirical” because we are driven more by intuition than by a desire to apply existing statistical theory or to develop new theory. While constructing prediction methods, we want to discover to the greatest extent possible how the physical system works from observations. While not mentioned in the title, diagnostics of the physical system will thus be an important part of the book as well. We use a variety of classical tools to diagnose the geophysical system. Some of these tools have been developed further, and/or old tools are applied in novel ways.
We do not intend to cover all diagnostic methods, only those that relate closely to prediction. There will be an emphasis on methods used in operational prediction. It is quite difficult to gain a comprehensive idea from the existing literature about methods used in operational short-term climate prediction.


2009 · Vol 48 (9) · pp. 1780-1789
Author(s): David P. Duda, Patrick Minnis

Abstract Straightforward application of the Schmidt–Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper-tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy: the percent correct (PC) and the Hanssen–Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher (i.e., the forecasts are more skillful) when the climatological frequency of contrail occurrence is used as the critical threshold, whereas the PC scores are higher (i.e., the forecasts are more accurate) when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85% for the prediction of both contrail occurrence and nonoccurrence, although, in practice, larger errors would be anticipated.
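
Both verification scores reduce to simple contingency-table arithmetic once the logistic probabilities have been thresholded into yes/no forecasts. A small sketch, using made-up forecast/observation pairs rather than the ARPS-derived data:

```python
def contingency_scores(forecast, observed):
    # Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) from
    # paired yes/no (1/0) forecasts and observations.
    a = sum(1 for f, o in zip(forecast, observed) if f and o)          # hits
    b = sum(1 for f, o in zip(forecast, observed) if f and not o)      # false alarms
    c = sum(1 for f, o in zip(forecast, observed) if not f and o)      # misses
    d = sum(1 for f, o in zip(forecast, observed) if not f and not o)  # correct negatives
    pc = (a + d) / (a + b + c + d)
    hkd = a / (a + c) - b / (b + d)   # hit rate minus false-alarm rate
    return pc, hkd

# Hypothetical contrail forecasts (probabilities already thresholded to 1/0)
# paired with observations of occurrence.
pc, hkd = contingency_scores([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

The choice of critical probability threshold only changes which cells of this table events fall into, which is why the 0.5 threshold favors PC while the climatological-frequency threshold favors HKD.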


2017 · Vol 27 (12) · pp. 1750182
Author(s): Alexander N. Churilov, Alexander Medvedev, Zhanybai T. Zhusubaliyev

A popular biomathematics model of the Goodwin oscillator has been previously generalized to a more biologically plausible construct by introducing three time delays to portray the transport phenomena arising due to the spatial distribution of the model states. The present paper addresses a similar conversion of an impulsive version of the Goodwin oscillator that has found application in mathematical modeling, e.g. in endocrine systems with pulsatile hormone secretion. While the cascade structure of the linear continuous part pertinent to the Goodwin oscillator is preserved in the impulsive Goodwin oscillator, the static nonlinear feedback of the former is substituted with a pulse modulation mechanism thus resulting in hybrid dynamics of the closed-loop system. To facilitate the analysis of the mathematical model under investigation, a discrete mapping propagating the continuous state variables through the firing times of the impulsive feedback is derived. Due to the presence of multiple time delays in the considered model, previously developed mapping derivation approaches are not applicable here and a novel technique is proposed and applied. The mapping captures the dynamics of the original hybrid system and is instrumental in studying complex nonlinear phenomena arising in the impulsive Goodwin oscillator. A simulation example is presented to demonstrate the utility of the proposed approach in bifurcation analysis.
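
For readers unfamiliar with the underlying model, the continuous (non-impulsive) Goodwin oscillator with a single transport delay can be integrated with a short Euler scheme; the parameters, Hill coefficient, and constant pre-history below are illustrative, and the pulse-modulated feedback that makes the paper's model hybrid is not included.

```python
def goodwin_delayed(tau=4.0, dt=0.01, steps=20000, n=10):
    # Forward-Euler integration of a classical Goodwin oscillator whose
    # repression term sees a transport-delayed copy of z.
    a, b = 1.0, 0.1               # production and (shared) degradation rates
    lag = int(round(tau / dt))
    x, y, z = 0.5, 0.5, 0.5
    z_hist = [z] * (lag + 1)      # constant history on [-tau, 0]
    traj = []
    for _ in range(steps):
        z_del = z_hist[0]                      # z(t - tau)
        dx = a / (1.0 + z_del ** n) - b * x    # delayed Hill-type repression
        dy = x - b * y
        dz = y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        z_hist = z_hist[1:] + [z]              # shift the delay buffer
        traj.append(x)
    return traj

x_traj = goodwin_delayed()
```

The cascade structure (x drives y drives z, with z fed back to x) is the part this sketch shares with the impulsive oscillator; in the paper the smooth feedback term is replaced by a pulse modulation mechanism, and the delays appear in the hybrid dynamics instead.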


2017 · Vol 56 (2) · pp. 317-337
Author(s): R. D. Sharman, J. M. Pearson

Abstract Current automated aviation turbulence forecast algorithms diagnose turbulence from numerical weather prediction (NWP) model output by identifying large values in computed horizontal or vertical spatial gradients of various atmospheric state variables (velocity, temperature) and thresholding these gradients empirically to indicate expected areas of “light,” “moderate,” and “severe” levels of aviation turbulence. This approach is obviously aircraft dependent and cannot accommodate the many different aircraft types that may be in the airspace. Therefore, it is proposed to provide forecasts of an atmospheric turbulence metric: the energy dissipation rate to the one-third power (EDR). A strategy is developed to statistically map automated turbulence forecast diagnostics or groups of diagnostics to EDR. The method assumes a lognormal distribution of EDR and uses climatological peak EDR data from in situ equipped aircraft in conjunction with the distribution of computed diagnostic values. These remapped values can then be combined to provide an ensemble mean EDR that is the final forecast. New mountain-wave-turbulence algorithms are presented, and the lognormal mapping is applied to them as well. The EDR forecasts are compared with aircraft in situ EDR observations and verbal pilot reports (converted to EDR) to obtain statistical performance metrics of the individual diagnostics and the ensemble mean. It is shown by one common performance metric, the area under the relative operating characteristics curve, that the ensemble mean provides better performance than forecasts from individual model diagnostics at all altitudes (low, mid-, and upper levels) and for two input NWP models.
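
The remapping step can be sketched as a rank-based quantile match onto an assumed lognormal EDR climatology; `statistics.NormalDist` supplies the inverse normal CDF, and the lognormal parameters below are placeholders rather than the fitted climatological values used in the paper.

```python
import math
from statistics import NormalDist

def map_to_edr(diagnostic_values, ln_mean=-2.6, ln_sd=0.6):
    # Quantile-match a turbulence diagnostic onto an assumed lognormal EDR
    # climatology: each value is replaced by the lognormal quantile sitting
    # at the same rank. ln_mean/ln_sd are illustrative placeholders.
    n = len(diagnostic_values)
    order = sorted(range(n), key=lambda i: diagnostic_values[i])
    edr = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n                        # plotting position in (0, 1)
        edr[i] = math.exp(ln_mean + ln_sd * NormalDist().inv_cdf(p))
    return edr

edr = map_to_edr([3.0, 1.0, 2.0])   # order-preserving remap to EDR units
```

Because the mapping is monotone, it preserves the ranking produced by each diagnostic while placing every diagnostic on the common EDR scale, which is what makes the subsequent ensemble averaging meaningful.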


2016 · Vol 2016 · pp. 1-10
Author(s): Mengji Shi, Kaiyu Qin

This paper solves control problems of agents achieving consensus motions in the presence of nonuniform time delays by obtaining the maximal tolerable delay value. Two types of consensus motions are considered: rectilinear motion and rotational motion. Unlike earlier results, this paper markedly reduces the conservativeness of the consensus conditions, which take the following form: for each system, if all the nonuniform time delays are bounded by the maximal tolerable delay value, referred to as the “delay margin,” the system will achieve consensus motion; otherwise, if all the delays exceed the delay margin, the system will be unstable. When discussing the system intended to achieve rotational consensus motion, an expanded system whose state variables are real numbers (those of the original system are complex numbers) is introduced, and the corresponding consensus condition is also given in the form of a delay margin. Numerical examples are provided to illustrate the results.
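
The delay-margin phenomenon can be illustrated with a discrete-time analogue: three agents on a line graph drive toward delayed neighbor states, and the group spread decays for small delays but blows up once the delay is large enough. The gain, topology, and delay values here are illustrative, not the paper's continuous-time conditions.

```python
def consensus_spread(delay_steps, gain=0.2, steps=400):
    # Discrete-time consensus on a 3-agent line graph with uniformly
    # delayed neighbor information; returns the final max-min spread.
    hist = [[0.0, 5.0, -3.0]]                             # state history
    for _ in range(steps):
        cur = hist[-1]
        old = hist[max(0, len(hist) - 1 - delay_steps)]   # delayed states
        hist.append([
            cur[0] + gain * (old[1] - old[0]),
            cur[1] + gain * (old[0] - old[1]) + gain * (old[2] - old[1]),
            cur[2] + gain * (old[1] - old[2]),
        ])
    return max(hist[-1]) - min(hist[-1])
```

With no delay the spread contracts toward zero, while a delay of 10 steps at this gain sits well past the stability boundary and the spread diverges, mirroring the two cases the delay margin separates.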

