Influence sampling of trailing variables of dynamical systems

2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Paul Krause

Abstract. For dealing with dynamical instability in predictions, numerical models should be provided with accurate initial values on the attractor of the dynamical system they generate. A discrete control scheme is presented to this end for trailing variables of an evolutive system of ordinary differential equations. The Influence Sampling (IS) scheme adapts sample values of the trailing variables to input values of the determining variables in the attractor. The optimal IS scheme has affordable cost for large systems. In discrete data assimilation runs conducted with the Lorenz 1963 equations and a nonautonomous perturbation of the Lorenz equations whose dynamics shows on-off intermittency, the optimal IS was compared to the straightforward insertion method and the Ensemble Kalman Filter (EnKF). With these unstable systems the optimal IS increases by one order of magnitude the maximum spacing between insertion times that the insertion method can handle and performs comparably to the EnKF when the EnKF converges. While the EnKF converges for sample sizes greater than or equal to 10, the optimal IS scheme does so from sample size 1. This occurs because the optimal IS scheme stabilizes the individual paths of the Lorenz 1963 equations within data assimilation processes.
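A minimal sketch, in Python, of the straightforward insertion baseline mentioned in this abstract, applied to the Lorenz 1963 equations: at every insertion time the observed (determining) variable x is overwritten with its observed value while the trailing variables y and z evolve freely. This is not the optimal IS scheme itself; the integrator, time step, spacing of insertion times and variable names are illustrative assumptions.

import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classical Lorenz 1963 parameters

def lorenz63(state):
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(state, dt):
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def insertion_assimilation(x0, observe_x, n_steps, dt, insert_every):
    """Straightforward insertion: periodically overwrite the observed variable x
    with its observed value; the trailing variables y, z are never corrected."""
    state = x0.copy()
    trajectory = [state.copy()]
    for k in range(1, n_steps + 1):
        state = rk4_step(state, dt)
        if k % insert_every == 0:
            state[0] = observe_x(k * dt)
        trajectory.append(state.copy())
    return np.array(trajectory)

# Usage: assimilate the x component of a "truth" run into a run started with
# wrong trailing variables, and check whether the path is recovered.
dt, n_steps = 0.01, 5000
truth = [np.array([1.0, 1.0, 1.0])]
for _ in range(n_steps):
    truth.append(rk4_step(truth[-1], dt))
truth = np.array(truth)
analysis = insertion_assimilation(
    truth[0] + np.array([0.0, 2.0, -2.0]),      # trailing variables perturbed
    lambda t: truth[int(round(t / dt)), 0],     # observations of x only
    n_steps, dt, insert_every=25)
print("final state error:", np.linalg.norm(analysis[-1] - truth[-1]))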


2019 ◽  
Vol 26 (3) ◽  
pp. 175-193 ◽  
Author(s):  
Ali Aydoğdu ◽  
Alberto Carrassi ◽  
Colin T. Guider ◽  
Chris K. R. T. Jones ◽ 
Pierre Rampal

Abstract. Numerical models solved on adaptive moving meshes have become increasingly prevalent in recent years. Motivating problems include the study of fluids in a Lagrangian frame and the presence of highly localized structures such as shock waves or interfaces. In the former case, Lagrangian solvers move the nodes of the mesh with the dynamical flow; in the latter, mesh resolution is increased in the proximity of the localized structure. Mesh adaptation can include remeshing, a procedure that adds or removes mesh nodes according to specific rules reflecting constraints in the numerical solver. In this case, the number of mesh nodes will change during the integration and, as a result, the dimension of the model's state vector will not be conserved. This work presents a novel approach to the formulation of ensemble data assimilation (DA) for models with this underlying computational structure. The challenge lies in the fact that remeshing entails a different state space dimension across members of the ensemble, thus impeding the usual computation of consistent ensemble-based statistics. Our methodology adds one forward and one backward mapping step before and after the ensemble Kalman filter (EnKF) analysis, respectively. This mapping takes all the ensemble members onto a fixed, uniform reference mesh where the EnKF analysis can be performed. We consider a high-resolution (HR) and a low-resolution (LR) fixed uniform reference mesh, whose resolutions are determined by the remeshing tolerances. This way the reference meshes embed the model numerical constraints and are also upper and lower uniform meshes bounding the resolutions of the individual ensemble meshes. Numerical experiments are carried out using 1-D prototypical models, the Burgers and Kuramoto–Sivashinsky equations, with both Eulerian and Lagrangian synthetic observations. While the HR strategy generally outperforms that of LR, their skill difference can be reduced substantially by an optimal tuning of the data assimilation parameters. The LR case is appealing in high dimensions because of its lower computational burden. Lagrangian observations are shown to be very effective in that fewer of them are able to keep the analysis error at a level comparable to that obtained with the more numerous observations in the Eulerian case. This study is motivated by the development of suitable EnKF strategies for 2-D models of the sea ice that are numerically solved on a Lagrangian mesh with remeshing.
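A schematic Python sketch of the mapping strategy described in this abstract: each ensemble member, defined on its own adaptive 1-D mesh, is interpolated onto a fixed uniform reference mesh, a stochastic (perturbed-observation) EnKF analysis is performed there, and the analysis is mapped back to each member's native mesh. The mesh sizes, linear interpolation, observation operator and toy fields are illustrative assumptions, not the authors' implementation.

import numpy as np

def to_reference(native_x, native_u, ref_x):
    """Forward mapping: interpolate a member's field onto the reference mesh."""
    return np.interp(ref_x, native_x, native_u)

def from_reference(ref_x, ref_u, native_x):
    """Backward mapping: interpolate the analysed field back to the native mesh."""
    return np.interp(native_x, ref_x, ref_u)

def enkf_analysis(E, y, obs_idx, obs_err):
    """Stochastic EnKF on the reference mesh.
    E: (n_ref, n_ens) ensemble matrix; y: observations at reference nodes obs_idx."""
    n_ref, n_ens = E.shape
    H = np.zeros((len(obs_idx), n_ref))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0          # pointwise observation operator
    R = obs_err ** 2 * np.eye(len(obs_idx))
    X = E - E.mean(axis=1, keepdims=True)              # ensemble anomalies
    HX = H @ X
    K = (X @ HX.T) @ np.linalg.inv(HX @ HX.T + (n_ens - 1) * R)      # Kalman gain
    Y = y[:, None] + obs_err * np.random.randn(len(obs_idx), n_ens)  # perturbed obs
    return E + K @ (Y - H @ E)

# Usage with a toy ensemble whose members live on different adaptive meshes.
np.random.seed(0)
ref_x = np.linspace(0.0, 1.0, 101)                     # fixed uniform reference mesh
meshes = [np.sort(np.concatenate(([0.0, 1.0], np.random.rand(n))))
          for n in (60, 80, 100, 120)]                 # four members, four meshes
fields = [np.sin(2.0 * np.pi * m) + 0.1 * np.random.randn(m.size) for m in meshes]
E = np.column_stack([to_reference(m, u, ref_x) for m, u in zip(meshes, fields)])
obs_idx = np.arange(0, 101, 10)
y = np.sin(2.0 * np.pi * ref_x[obs_idx]) + 0.05 * np.random.randn(obs_idx.size)
Ea = enkf_analysis(E, y, obs_idx, obs_err=0.05)
analyses = [from_reference(ref_x, Ea[:, i], m) for i, m in enumerate(meshes)]
print("analysis RMS deviation from obs:", np.sqrt(((Ea[obs_idx].mean(axis=1) - y) ** 2).mean()))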


2018 ◽  
Vol 75 (7) ◽  
pp. 2187-2197 ◽  
Author(s):  
A. Guillaume ◽  
B. H. Kahn ◽  
Q. Yue ◽  
E. J. Fetzer ◽  
S. Wong ◽  
...  

Abstract. A method is described to characterize the scale dependence of cloud chord length using cloud-type classification reported with the 94-GHz CloudSat radar. The cloud length along the CloudSat track is quantified using horizontal and vertical structures of cloud classification separately for each cloud type and for all clouds independent of cloud type. While the individual cloud types do not follow a clear power-law behavior as a function of horizontal or vertical scale, a robust power-law scaling of cloud chord length is observed when cloud type is not considered. The exponent of horizontal length is approximated by β ≈ 1.66 ± 0.00 across two orders of magnitude (~10–1000 km). The exponent of vertical thickness is approximated by β ≈ 2.23 ± 0.03 in excess of one order of magnitude (~1–14 km). These exponents are in agreement with previous studies using numerical models, satellites, dropsondes, and in situ aircraft observations. These differences in horizontal and vertical cloud scaling are consistent with scaling of temperature and horizontal wind in the horizontal dimension and with scaling of buoyancy flux in the vertical dimension. The observed scale dependence should serve as a guide to test and evaluate scale-cognizant climate and weather numerical prediction models.
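A minimal Python sketch of how a power-law exponent β like those quoted above can be estimated from a sample of cloud chord lengths: the size distribution is binned in logarithmically spaced bins and fitted with a straight line in log-log space. The synthetic Pareto sample, the binning and the fitting choices are illustrative assumptions, not the paper's method.

import numpy as np

def powerlaw_exponent(lengths, n_bins=30):
    """Fit N(L) ~ L**(-beta) to the binned size distribution; return beta."""
    bins = np.logspace(np.log10(lengths.min()), np.log10(lengths.max()), n_bins)
    counts, edges = np.histogram(lengths, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centres
    mask = counts > 0
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
    return -slope

# Usage on synthetic chord lengths drawn from a Pareto law with beta = 1.66,
# restricted to roughly the 10-1000 km range of the horizontal-scale result.
rng = np.random.default_rng(0)
beta_true = 1.66
u = rng.random(200_000)
lengths = 10.0 * (1.0 - u) ** (-1.0 / (beta_true - 1.0))   # inverse-CDF sampling
lengths = lengths[lengths < 1000.0]
print("estimated beta:", round(powerlaw_exponent(lengths), 2))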


2021 ◽  
Vol 217 (3) ◽  
Author(s):  
E. M. Rossi ◽  
N. C. Stone ◽  
J. A. P. Law-Smith ◽  
M. Macleod ◽  
G. Lodato ◽  
...  

Abstract. Tidal disruption events (TDEs) are among the brightest transients in the optical, ultraviolet, and X-ray sky. These flares are set into motion when a star is torn apart by the tidal field of a massive black hole, triggering a chain of events that is, so far, incompletely understood. However, the disruption process has been studied extensively for almost half a century, and unlike the later stages of a TDE, our understanding of the disruption itself is reasonably well converged. In this Chapter, we review both analytical and numerical models for stellar tidal disruption. Starting with relatively simple, order-of-magnitude physics, we review models of increasing sophistication: the semi-analytic "affine formalism," hydrodynamic simulations of the disruption of polytropic stars, and the most recent hydrodynamic results concerning the disruption of realistic stellar models. Our review surveys the immediate aftermath of disruption in both typical and more unusual TDEs, exploring how the fate of the tidal debris changes if one considers non-main-sequence stars, deeply penetrating tidal encounters, binary star systems, and sub-parabolic orbits. The stellar tidal disruption process provides the initial conditions needed to model the formation of accretion flows around quiescent massive black holes, and in some cases may also lead to directly observable emission, for example via shock breakout, gravitational waves or runaway nuclear fusion in deeply plunging TDEs.
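As a reminder of the "relatively simple, order-of-magnitude physics" that opens this review, disruption occurs roughly when the stellar pericentre falls inside the tidal radius; for a Sun-like star and a million-solar-mass black hole this is about half an astronomical unit (the order-unity prefactor depends on stellar structure):

r_{\rm t} \simeq R_\star \left(\frac{M_{\rm BH}}{M_\star}\right)^{1/3}
        \approx 0.5\,\mathrm{AU}\,
        \left(\frac{M_{\rm BH}}{10^{6}\,M_\odot}\right)^{1/3}
        \left(\frac{M_\star}{M_\odot}\right)^{-1/3}
        \frac{R_\star}{R_\odot}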


2021 ◽  
Vol 11 (4) ◽  
pp. 1399
Author(s):  
Jure Oder ◽  
Cédric Flageul ◽  
Iztok Tiselj

In this paper, we present uncertainties of statistical quantities of direct numerical simulations (DNS) with small numerical errors. The uncertainties are analysed for channel flow and a flow separation case in a confined backward-facing step (BFS) geometry. The infinite channel flow case has two homogeneous directions, and this is usually exploited to speed up the convergence of the results. As we show, such a procedure reduces the statistical uncertainties of the results by up to an order of magnitude. This effect is strongest in the near-wall regions. In the case of flow over a confined BFS, there are no such directions and thus very long integration times are required. The individual statistical quantities converge with the square root of the integration time, so, in order to improve the uncertainty by a factor of two, the simulation has to be prolonged by a factor of four. We provide an estimator that can be used to evaluate a priori the DNS relative statistical uncertainties from results obtained with a Reynolds-averaged Navier–Stokes simulation. In the DNS, the estimator can be used to predict the averaging time, and with it the simulation time, required to achieve a certain relative statistical uncertainty of the results. For accurate evaluation of averages and their uncertainties, it is not necessary to use every time step of the DNS. We observe that the statistical uncertainty of the results is unaffected by reducing the number of samples as long as the period between two consecutive samples, measured in Courant–Friedrichs–Lewy (CFL) condition units, remains below one. Beyond this limit, however, the estimated uncertainties start to grow significantly.
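A minimal Python sketch of the kind of statistical-uncertainty estimate discussed above: the standard error of a time-averaged quantity computed from correlated samples, using the integrated autocorrelation time to define an effective number of independent samples. The windowing rule, the AR(1) test signal and all names are illustrative assumptions, not the paper's estimator.

import numpy as np

def statistical_uncertainty(samples):
    """Return (mean, standard error) of a time series, accounting for autocorrelation."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    xc = x - x.mean()
    f = np.fft.rfft(xc, 2 * n)                        # FFT-based autocorrelation
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]                                     # normalise so acf[0] == 1
    first_zero = int(np.argmax(acf < 0)) if np.any(acf < 0) else n
    tau_int = 0.5 + acf[1:first_zero].sum()           # integrated autocorrelation time
    n_eff = n / (2.0 * tau_int)                       # effective number of samples
    return x.mean(), x.std(ddof=1) / np.sqrt(max(n_eff, 1.0))

# Usage on a synthetic correlated signal (an AR(1) process). Halving the standard
# error requires roughly four times as many samples, i.e. four times the averaging
# time, consistent with the square-root convergence noted above.
rng = np.random.default_rng(1)
n, phi = 200_000, 0.95
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()
mean, err = statistical_uncertainty(x)
print(f"mean = {mean:.4f} +/- {err:.4f}")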


2021 ◽  
Author(s):  
Leonardo Mingari ◽  
Andrew Prata ◽  
Federica Pardini

Modelling atmospheric dispersion and deposition of volcanic ash is becoming increasingly valuable for understanding the potential impacts of explosive volcanic eruptions on infrastructure, air quality and aviation. The generation of high-resolution forecasts depends on the accuracy and reliability of the input data for the models. Uncertainties in key parameters such as the eruption column injection height, the physical properties of the particles or the meteorological fields represent a major source of error in forecasting airborne volcanic ash. The availability of near-real-time geostationary satellite observations with high spatial and temporal resolution provides the opportunity to improve forecasts in an operational context. Data assimilation (DA) is one of the most effective ways to reduce the error associated with the forecasts through the incorporation of available observations into numerical models. Here we present a new implementation of an ensemble-based data assimilation system based on the coupling between the FALL3D dispersal model and the Parallel Data Assimilation Framework (PDAF). The implementation is based on the latest release of FALL3D (version 8.x), which has been redesigned and rewritten from scratch for extreme-scale computing requirements in the framework of the EU Center of Excellence for Exascale in Solid Earth (ChEESE). The proposed methodology can be efficiently implemented in an operational environment by exploiting high-performance computing (HPC) resources. The FALL3D+PDAF system can be run in parallel and supports online-coupled DA, which allows an efficient information transfer through parallel communication. Satellite-retrieved data from recent volcanic eruptions were considered as input observations for the assimilation system.
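Geostationary retrievals of the kind mentioned above typically provide the ash column mass loading, so the observation operator in such an ensemble system reduces, at its simplest, to a vertical integration of the model's 3-D concentration field. The Python sketch below illustrates only that step; the grid sizes, toy plume profile and function name are illustrative assumptions and do not correspond to the actual FALL3D or PDAF interfaces.

import numpy as np

def column_mass_loading(concentration, z_levels):
    """Vertically integrate ash concentration (kg m^-3) over height to obtain
    the column mass loading (kg m^-2) compared against satellite retrievals.
    concentration: array of shape (nz, ny, nx); z_levels: heights (m), shape (nz,)."""
    dz = np.diff(z_levels)                                    # layer thicknesses
    layer_mean = 0.5 * (concentration[1:] + concentration[:-1])
    return np.sum(layer_mean * dz[:, None, None], axis=0)     # trapezoidal rule

# Usage with a toy Gaussian plume centred at 8 km altitude.
nz, ny, nx = 40, 60, 80
z = np.linspace(0.0, 12_000.0, nz)                            # model levels (m)
profile = 1e-6 * np.exp(-((z - 8_000.0) / 1_500.0) ** 2)      # kg m^-3
conc = np.broadcast_to(profile[:, None, None], (nz, ny, nx))
load = column_mass_loading(conc, z)
print("max column mass loading (g m^-2):", round(1e3 * load.max(), 2))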


Author(s):  
Sílvio Aparecido Verdério Júnior ◽  
Vicente Luiz Scalon ◽  
Santiago del Rio Oliveira ◽  
Elson Avallone ◽  
Paulo César Mioralli ◽  
...  

Due to their greater flexibility in heating and their high productivity, continuous tunnel-type ovens have become the best option for industrial processes. The geometric optimization of ovens to better exploit the heat transfer mechanisms of convection and thermal radiation is increasingly researched, driven by the search for designs that combine lower fuel consumption, greater efficiency and competitiveness, and lower costs. In this context, this work studied the influence of oven height on the radiative and convective heat exchanges and on other flow parameters, in order to define the best geometric height for the real oven under study. From the dimensions and real operating conditions of continuous tunnel-type ovens, five parametric-variation numerical models were built and simulated with the free, open-source software OpenFOAM®. A turbulent forced convection regime was characterized in all models. Greater oven heights increased and intensified the recirculation regions, reduced the rates of heat transfer by thermal radiation, and reduced the heat losses by convection. The heat exchanged by radiation proved to be of a much higher order of magnitude than that exchanged by convection, confirming the results of the main references in the technical-scientific literature. It was concluded that the use of ovens with a lower height provides significant increases in the thermal radiation heat transfer rates.
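A back-of-the-envelope Python sketch of why radiative exchange can dominate at high oven wall temperatures: the Stefan-Boltzmann flux grows with the fourth power of the absolute temperatures, whereas the convective flux grows only linearly with the temperature difference. The temperatures, emissivity and convection coefficient below are illustrative assumptions, not values from the study.

SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
T_wall, T_product = 1200.0, 500.0    # assumed wall and product temperatures, K
emissivity = 0.9                     # assumed effective emissivity
h_conv = 15.0                        # assumed convection coefficient, W m^-2 K^-1

q_rad = emissivity * SIGMA * (T_wall ** 4 - T_product ** 4)   # radiative flux, W m^-2
q_conv = h_conv * (T_wall - T_product)                        # convective flux, W m^-2
print(f"radiative flux  ~ {q_rad:9.0f} W/m^2")
print(f"convective flux ~ {q_conv:9.0f} W/m^2")
print(f"ratio rad/conv  ~ {q_rad / q_conv:.1f}")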


1980 ◽  
Vol 17 (1) ◽  
pp. 60-71 ◽  
Author(s):  
Jean-Claude Mareschal ◽  
Gordon F. West

A tectonic model that attempts to explain common features of Archean geology is investigated. The model supposes the accumulation, by volcanic eruptions, of a thick basaltic pile on a granitoid crust. The thermal blanketing effect of this lava raises the temperature of the granitic crust and eventually softens it enough that gravitational slumping and downfolding of the lava follows. Numerical models of the thermal and mechanical evolution of a granitoid crust covered with a thick lava sequence indicate that such an evolution is possible when reasonable assumptions are made about the temperature dependence of the viscosity in crustal rocks. These models show the lava sinking in relatively narrow regions while wider granite diapirs appear in between. The convection produces strong horizontal temperature gradients that may cause lateral changes in metamorphic facies. A one-order-of-magnitude drop in accumulated strain occurs between the granite–basalt interface and the center of the granite diapir at a depth of 10–15 km.


Author(s):  
Joshua Simmons ◽  
Kristen Splinter

Physics-based numerical models play an important role in the estimation of storm erosion, particularly at beaches for which there is little historical data. However, the increasing availability of pre- and post-storm data for multiple events and at a number of beaches around the world has opened the possibility of using data-driven approaches for erosion prediction. Both physics-based and purely data-driven approaches have inherent strengths and weaknesses in their ability to predict storm-induced erosion. It is vital that coastal managers and modelers are aware of these trade-offs, as well as methods to maximise the value from each modelling approach, in an increasingly data-rich environment. In this study, data from approximately 40 years of coastal monitoring at Narrabeen-Collaroy Beach (SE Australia) has been used to evaluate the individual performance of the numerical erosion models SBEACH and XBeach, and a data-driven modelling technique. The models are then combined using a simple weighting technique to provide a hybrid estimate of erosion.

Recorded Presentation from the vICCE (YouTube Link): https://youtu.be/v53dZiO8Y60
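A minimal Python sketch of a simple weighting of the kind referred to above, combining two model estimates of storm erosion with weights inversely proportional to each model's historical error variance. The weighting rule and all numbers are illustrative assumptions, not necessarily the scheme used in the study.

import numpy as np

def hybrid_estimate(predictions, error_variances):
    """Weighted combination with weights proportional to 1 / error variance."""
    w = 1.0 / np.asarray(error_variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, predictions)), w

# Usage: hypothetical per-storm volume-change predictions (m^3/m) from SBEACH and
# XBeach, with error variances estimated from past hindcasts at the same beach.
erosion_sbeach, erosion_xbeach = -42.0, -55.0      # assumed model predictions
var_sbeach, var_xbeach = 90.0, 60.0                # assumed historical error variances
combined, weights = hybrid_estimate([erosion_sbeach, erosion_xbeach],
                                    [var_sbeach, var_xbeach])
print("weights:", np.round(weights, 2), "| hybrid erosion estimate:", round(combined, 1))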

