Data assimilation for uncertainty reduction using different fidelity numerical models

Author(s):  
Célio Maschio ◽  
Guilherme Daniel Avansi ◽  
Felipe Bruno Mesquita da Silva ◽  
Denis José Schiozer


2021 ◽  
Author(s):  
Leonardo Mingari ◽  
Andrew Prata ◽  
Federica Pardini

Modelling atmospheric dispersion and deposition of volcanic ash is becoming increasingly valuable for understanding the potential impacts of explosive volcanic eruptions on infrastructure, air quality and aviation. The generation of high-resolution forecasts depends on the accuracy and reliability of the input data for models. Uncertainties in key parameters such as eruption column injection height, physical properties of particles or meteorological fields represent a major source of error in forecasting airborne volcanic ash. The availability of near-real-time geostationary satellite observations with high spatial and temporal resolutions provides the opportunity to improve forecasts in an operational context. Data assimilation (DA) is one of the most effective ways to reduce forecast error through the incorporation of available observations into numerical models. Here we present a new implementation of an ensemble-based data assimilation system built on the coupling between the FALL3D dispersal model and the Parallel Data Assimilation Framework (PDAF). The implementation is based on the latest release of FALL3D (version 8.x), which has been redesigned and rewritten from scratch to meet extreme-scale computing requirements in the framework of the EU Center of Excellence for Exascale in Solid Earth (ChEESE). The proposed methodology can be efficiently implemented in an operational environment by exploiting high-performance computing (HPC) resources. The FALL3D+PDAF system can be run in parallel and supports online-coupled DA, which allows efficient information transfer through parallel communication. Satellite-retrieved data from recent volcanic eruptions were used as input observations for the assimilation system.
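As a rough illustration of the kind of analysis step an ensemble-based DA system such as FALL3D+PDAF performs, the following is a minimal stochastic (perturbed-observations) EnKF update in Python. All dimensions, the observation operator and the error statistics are invented for the sketch; none of this is FALL3D or PDAF code.

```python
# Minimal stochastic EnKF analysis step on a toy 1D state.
import numpy as np

rng = np.random.default_rng(0)

n_state, n_obs, n_ens = 100, 10, 20          # state cells, observations, members
H = np.zeros((n_obs, n_state))               # observation operator: sample 10 cells
H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
R = 0.1 * np.eye(n_obs)                      # observation-error covariance

ensemble = rng.normal(1.0, 0.3, size=(n_state, n_ens))   # forecast ensemble
y = rng.normal(1.0, 0.1, size=n_obs)                     # e.g. satellite retrievals

x_mean = ensemble.mean(axis=1, keepdims=True)
A = ensemble - x_mean                        # ensemble anomalies

PHt = A @ (H @ A).T / (n_ens - 1)            # P H^T estimated from the ensemble
S = H @ PHt + R                              # innovation covariance
K = np.linalg.solve(S, PHt.T).T              # Kalman gain K = P H^T S^(-1)

for i in range(n_ens):                       # perturbed-observations update
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    ensemble[:, i] += K @ (y_pert - H @ ensemble[:, i])

print("analysis spread:", ensemble.std(axis=1).mean())
```

In an online-coupled system of the kind described above, this update would act on ensemble states held in memory and exchanged via parallel communication rather than on arrays written to disk.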


2019 ◽  
Vol 12 (2) ◽  
pp. 629-649 ◽  
Author(s):  
Ahmed Attia ◽  
Adrian Sandu

Abstract. A flexible and highly extensible data assimilation testing suite, named DATeS, is described in this paper. DATeS aims to offer a unified testing environment that allows researchers to compare different data assimilation methodologies and understand their performance in various settings. The core of DATeS is implemented in Python and takes advantage of its object-oriented capabilities. The main components of the package (the numerical models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines) are independent of each other, which offers great flexibility to configure data assimilation applications. DATeS can interface easily with large third-party numerical models written in Fortran or in C, and with a plethora of external solvers.
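To illustrate the component independence the abstract describes (models and assimilation algorithms as interchangeable objects), here is a hypothetical sketch of that pattern. The class and method names are ours and do not reproduce the actual DATeS API.

```python
# Hypothetical sketch: a model object and a DA-algorithm object built
# independently and composed into an experiment, mirroring the design
# the DATeS abstract describes. Names are illustrative only.
import numpy as np

class Lorenz96:
    """Toy forecast model exposing only step(); any model with this
    interface could be swapped in."""
    def __init__(self, n=40, forcing=8.0, dt=0.01):
        self.n, self.f, self.dt = n, forcing, dt
    def step(self, x):
        # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
        dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + self.f
        return x + self.dt * dx          # forward Euler for brevity

class DirectInsertion:
    """Trivial assimilation algorithm; knows nothing about the model class."""
    def analysis(self, forecast, obs, obs_idx):
        x = forecast.copy()
        x[obs_idx] = obs                 # overwrite observed components
        return x

model, da = Lorenz96(), DirectInsertion()
x = np.full(model.n, model.f); x[0] += 0.01  # perturbed initial state
for k in range(100):
    x = model.step(x)
    if k % 10 == 0:                          # assimilate every 10 steps
        obs_idx = np.arange(0, model.n, 2)
        x = da.analysis(x, x[obs_idx] + 0.1, obs_idx)  # synthetic observations
print(x[:5])
```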


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Paul Krause

Abstract For dealing with dynamical instability in predictions, numerical models should be provided with accurate initial values on the attractor of the dynamical system they generate. A discrete control scheme is presented to this end for trailing variables of an evolutive system of ordinary differential equations. The Influence Sampling (IS) scheme adapts sample values of the trailing variables to input values of the determining variables in the attractor. The optimal IS scheme has affordable cost for large systems. In discrete data assimilation runs conducted with the Lorenz 1963 equations and a nonautonomous perturbation of the Lorenz equations whose dynamics shows on-off intermittency, the optimal IS was compared to the straightforward insertion method and the Ensemble Kalman Filter (EnKF). With these unstable systems the optimal IS increases by one order of magnitude the maximum spacing between insertion times that the insertion method can handle, and performs comparably to the EnKF when the EnKF converges. While the EnKF converges for sample sizes greater than or equal to 10, the optimal IS scheme does so from sample size 1. This occurs because the optimal IS scheme stabilizes the individual paths of the Lorenz 1963 equations within data assimilation processes.
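As a concrete anchor for the comparison, here is a minimal sketch of the straightforward insertion baseline on the Lorenz 1963 equations: the observed (determining) variable x is overwritten at insertion times while the trailing variables y and z evolve freely. The IS scheme itself is not reproduced; the parameters and insertion spacing are illustrative.

```python
# Direct-insertion data assimilation on the Lorenz 1963 system.
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(state, dt, f=lorenz63):
    k1 = f(state); k2 = f(state + 0.5*dt*k1)
    k3 = f(state + 0.5*dt*k2); k4 = f(state + dt*k3)
    return state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

dt, n_steps, insert_every = 0.01, 4000, 25
truth = np.array([1.0, 1.0, 1.0])             # reference trajectory
model = np.array([-5.0, -5.0, 20.0])          # wrong initial condition

for k in range(n_steps):
    truth = rk4(truth, dt)
    model = rk4(model, dt)
    if k % insert_every == 0:
        model[0] = truth[0]                   # insert the observed x

print("final error:", np.abs(model - truth))
```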


2019 ◽  
Author(s):  
Lars Nerger ◽  
Qi Tang ◽  
Longjiang Mu

Abstract. Data assimilation integrates information from observational measurements with numerical models. When used with coupled models of Earth system compartments, e.g. the atmosphere and the ocean, consistent joint states can be estimated. A common approach to data assimilation is ensemble-based methods, which use an ensemble of state realizations to estimate the state and its uncertainty. These methods are far more costly to compute than a single coupled model because of the required integration of the ensemble. However, with uncoupled models, the methods have also been shown to exhibit particularly good scaling behavior. This study discusses an approach to augment a coupled model with data assimilation functionality provided by the Parallel Data Assimilation Framework (PDAF). Using only minimal changes in the codes of the different compartment models, a particularly efficient data assimilation system is generated that utilizes parallelization and in-memory data transfers between the models and the data assimilation functions, and hence avoids most of the file reading and writing, as well as model restarts, during the data assimilation process. The study explains the required modifications of the programs using the example of the coupled atmosphere-sea ice-ocean model AWI-CM. The case of assimilating oceanic observations shows that the data assimilation adds only a small computing-time overhead of about 15 % compared to the model without data assimilation, and exhibits very good parallel scalability. The model-agnostic structure of the assimilation software ensures a separation of concerns, in that the development of data assimilation methods can be separated from the model application.
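The modification pattern described above, minimal code changes plus in-memory transfers, can be sketched schematically: the model's time loop gains an initialization hook and an assimilation hook that operate on the ensemble in memory, with no intermediate files or restarts. This Python analogue only mirrors that structure; the real PDAF interface is a Fortran library and its routine names are not used here.

```python
# Schematic online-coupling pattern: two hooks added around an otherwise
# unchanged model time loop. All names and numbers are illustrative.
import numpy as np

def model_step(x):                  # stand-in for one compartment-model time step
    return 0.95 * x + 0.1

def init_da(n_state, n_ens, rng):   # hook 1: set up the ensemble once
    return rng.normal(0.0, 1.0, size=(n_state, n_ens))

def assimilate(ens, obs, obs_var):  # hook 2: in-memory analysis, no file I/O
    var = ens.var(axis=1, keepdims=True, ddof=1)
    gain = var / (var + obs_var)                 # diagonal-covariance Kalman gain
    return ens + gain * (obs[:, None] - ens)

rng = np.random.default_rng(1)
ens = init_da(n_state=50, n_ens=8, rng=rng)      # inserted before the time loop
for step in range(1, 101):
    ens = model_step(ens)                        # unchanged model physics
    if step % 10 == 0:                           # inserted inside the time loop
        obs = rng.normal(2.0, 0.1, size=50)
        ens = assimilate(ens, obs, obs_var=0.01)
print("ensemble mean after DA:", ens.mean())
```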


2020 ◽  
Author(s):  
Sarah Burnett ◽  
Nathanaël Schaeffer ◽  
Kayo Ide ◽  
Daniel Lathrop

The magnetohydrodynamics of Earth has been explored at the University of Maryland through experiments and numerical models. Experimentally, the interaction between Earth's magnetic fields and its outer core is replicated using a three-meter spherical Couette device filled with liquid sodium that is driven by two independently rotating concentric shells and a dipole magnetic field applied from external electromagnets. Currently, this experiment is being prepared for design modifications that aim to increase the helical flows in the poloidal direction in order to match the turbulence of convection-driven flows of Earth. The experiment currently has 33 Hall probes measuring the magnetic field, 4 pressure probes, and torque measurements on each sphere. We supplement the experiment with a numerical model, XSHELLS, that uses pseudospectral and finite-difference methods to give a full picture of the velocity and magnetic field in the liquid and stainless steel shells. However, it is impracticable to resolve all of the turbulence. Our ultimate goal is to implement data assimilation by synchronizing the experimental observations with the numerical model, in order to uncover the unmeasured velocity field in the experiment and the full magnetic field, as well as to predict the magnetic fields of the experiment. Through numerical simulations (XSHELLS) and data analysis we probe the behavior of the experiment in order to (i) suggest the best locations for new measurements and (ii) determine which parameters are most feasible for data assimilation. These computational studies provide insight into the dynamics of this experiment and the measurements required to predict Earth's magnetic field. We gratefully acknowledge the support of NSF Grants No. EAR1417148 and DGE1322106.
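Synchronization-based assimilation of the kind mentioned here can be illustrated with a toy nudging scheme: a coupling term proportional to the observation-model mismatch is added to the tendency of the observed variable. In this sketch the Lorenz 1963 system stands in for the experiment/simulation pair; the gain and time step are illustrative, and this is not XSHELLS code.

```python
# Toy nudging (synchronization) of a model to partial observations.
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

dt, gain, n_steps = 0.005, 20.0, 5000
truth = np.array([1.0, 1.0, 1.0])        # stands in for the experiment
model = np.array([8.0, -3.0, 30.0])      # stands in for the simulation

for _ in range(n_steps):
    truth = truth + dt * lorenz63(truth)
    coupling = np.array([gain * (truth[0] - model[0]), 0.0, 0.0])  # observe x only
    model = model + dt * (lorenz63(model) + coupling)

print("synchronization error:", np.abs(model - truth))
```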


2015 ◽  
Vol 143 (10) ◽  
pp. 3893-3911 ◽  
Author(s):  
Soyoung Ha ◽  
Judith Berner ◽  
Chris Snyder

Abstract Mesoscale forecasts are strongly influenced by physical processes that are either poorly resolved or must be parameterized in numerical models. In part because of errors in these parameterizations, mesoscale ensemble data assimilation systems generally suffer from underdispersiveness, which can limit the quality of analyses. Two explicit representations of model error for mesoscale ensemble data assimilation are explored: a multiphysics ensemble, in which each member's forecast is based on a distinct suite of physical parameterizations, and stochastic kinetic energy backscatter, in which small noise terms are included in the forecast model equations. These two model error techniques are compared with a baseline experiment that includes spatially and temporally adaptive covariance inflation, in a domain over the continental United States using the Weather Research and Forecasting (WRF) Model for mesoscale ensemble forecasts and the Data Assimilation Research Testbed (DART) for the ensemble Kalman filter. Verification against independent observations and Rapid Update Cycle (RUC) 13-km analyses for the month of June 2008 showed that including the model error representation improved not only the analysis ensemble but also short-range forecasts initialized from these analyses. Explicitly accounting for model uncertainty led to a better-tuned ensemble spread, a more skillful ensemble mean, and higher probabilistic scores, as well as significantly reducing the need for inflation. In particular, the stochastic backscatter scheme consistently outperformed both the multiphysics approach and the control run with adaptive inflation over almost all levels of the atmosphere, both deterministically and probabilistically.
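The two ways of countering underdispersiveness that this study weighs against inflation can be reduced to a minimal illustration: multiplicative covariance inflation widens the anomalies of an existing ensemble, while stochastic schemes add small noise terms during the forecast itself (the idea behind stochastic backscatter, here reduced to additive noise on a toy state). Nothing below is WRF or DART code; all numbers are illustrative.

```python
# Two ways to boost ensemble spread, on a toy ensemble (state x members).
import numpy as np

rng = np.random.default_rng(2)
ens = rng.normal(0.0, 0.5, size=(100, 20))

def inflate(ens, factor=1.1):
    """Multiplicative inflation: widen anomalies about the ensemble mean."""
    mean = ens.mean(axis=1, keepdims=True)
    return mean + factor * (ens - mean)

def stochastic_forecast(ens, noise_std=0.05):
    """Forecast step with a small stochastic term added per member."""
    return 0.98 * ens + rng.normal(0.0, noise_std, size=ens.shape)

print("spread before:    ", ens.std(axis=1).mean())
print("spread inflated:  ", inflate(ens).std(axis=1).mean())
print("spread stochastic:", stochastic_forecast(ens).std(axis=1).mean())
```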


2019 ◽  
Author(s):  
Ali Aydoğdu ◽  
Alberto Carrassi ◽  
Colin T. Guider ◽  
Chris K. R. T. Jones ◽  
Pierre Rampal

Abstract. Numerical models solved on adaptive moving meshes have become increasingly prevalent in recent years. Motivating problems include the study of fluids in a Lagrangian frame and the presence of highly localized structures such as shock waves or interfaces. In the former case, Lagrangian solvers move the nodes of the mesh with the dynamical flow; in the latter, mesh resolution is increased in the proximity of the localized structure. Mesh adaptation can include remeshing, a procedure that adds or removes mesh nodes according to specific rules reflecting constraints in the numerical solver. In this case, the number of mesh nodes changes during the integration and, as a result, the dimension of the model's state vector is not conserved. This work presents a novel approach to the formulation of ensemble data assimilation for models with this underlying computational structure. The challenge lies in the fact that remeshing entails a different state-space dimension across members of the ensemble, thus impeding the usual computation of consistent ensemble-based statistics. Our methodology adds one forward and one backward mapping step, before and after the EnKF analysis respectively. This mapping takes all the ensemble members onto a fixed, uniform reference mesh where the EnKF analysis can be performed. We consider a high-resolution (HR) and a low-resolution (LR) fixed uniform reference mesh, whose resolutions are determined by the remeshing tolerances. In this way the reference meshes embed the model's numerical constraints and also act as upper and lower uniform bounds on the resolutions of the individual ensemble meshes. Numerical experiments are carried out using 1D prototypical models, the Burgers and Kuramoto-Sivashinsky equations, with both Eulerian and Lagrangian synthetic observations. While the HR strategy generally outperforms the LR one, their skill difference can be reduced substantially by optimal tuning of the data assimilation parameters. The LR case is appealing in high dimensions because of its lower computational burden. Lagrangian observations are shown to be very effective, in that fewer of them suffice to keep the analysis error at a level comparable to that of the more numerous Eulerian observations. This study is motivated by the development of suitable EnKF strategies for 2D models of sea ice that are numerically solved on a Lagrangian mesh with remeshing.
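The forward/backward mapping idea can be sketched in a few lines: each member lives on its own 1D mesh with a different node count after remeshing, is interpolated onto a fixed uniform reference mesh where the analysis is computed, and the increment is then mapped back to each member's native mesh. The analysis itself is reduced to a relaxation toward the ensemble mean for brevity (a full EnKF update would go in its place); the meshes and fields are synthetic.

```python
# Forward/backward mapping between per-member meshes and a reference mesh.
import numpy as np

rng = np.random.default_rng(3)
n_ens = 5
# Each member: its own irregular mesh (varying node count) plus field values.
meshes = [np.sort(rng.uniform(0, 1, size=rng.integers(40, 60)))
          for _ in range(n_ens)]
fields = [np.sin(2*np.pi*m) + 0.1*rng.normal(size=m.size) for m in meshes]

ref = np.linspace(0, 1, 50)                   # fixed uniform reference mesh
on_ref = np.array([np.interp(ref, m, f) for m, f in zip(meshes, fields)])

# Analysis on the reference mesh (placeholder: relax toward the ensemble mean).
increment = 0.5 * (on_ref.mean(axis=0) - on_ref)

# Backward mapping: interpolate each member's increment to its native mesh.
fields = [f + np.interp(m, ref, d)
          for m, f, d in zip(meshes, fields, increment)]
print([f.size for f in fields])               # node counts unchanged per member
```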


2016 ◽  
Vol 97 (8) ◽  
pp. 1427-1440 ◽  
Author(s):  
Hui Shao ◽  
John Derber ◽  
Xiang-Yu Huang ◽  
Ming Hu ◽  
Kathryn Newman ◽  
...  

Abstract With a goal of improving operational numerical weather prediction (NWP), the Developmental Testbed Center (DTC) has been working with operational centers, including, among others, the National Centers for Environmental Prediction (NCEP), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the U.S. Air Force, to support numerical models/systems and related research, perform objective testing and evaluation of NWP methods, and facilitate research-to-operations transitions. This article introduces the DTC's first effort in the data assimilation area toward achieving this goal. Since 2009, the DTC, NCEP's Environmental Modeling Center (EMC), and other developers have made significant progress in transitioning the operational Gridpoint Statistical Interpolation (GSI) data assimilation system into a community-based code management framework. Currently, GSI is provided to the public with user support and is open to contributions from internal developers as well as the broader research community, following the same code transition procedures. This article introduces the measures and steps taken during this community GSI effort, followed by a discussion of the challenges and issues encountered. The purpose of this article is to promote contributions from the research community to operational data assimilation capabilities and, furthermore, to seek potential solutions that stimulate such transitions and, eventually, improve NWP capabilities in the United States.

