ensemble size
Recently Published Documents

TOTAL DOCUMENTS: 190 (five years: 53)
H-INDEX: 31 (five years: 3)

Abstract. A Valid Time Shifting (VTS) method is explored for the GSI-based ensemble variational (EnVar) system modified to directly assimilate radar reflectivity at convective scales. VTS is a cost-efficient method to increase ensemble size by including subensembles valid before and after the central analysis time; additionally, VTS addresses common time- and phase-related model error uncertainties within the ensemble. VTS is examined here for assimilating radar reflectivity in a continuous hourly analysis system for a case study of 1–2 May 2019. The VTS implementation is compared against a 36-member control experiment (ENS-36), both as a way to increase ensemble size (3×36 VTS) and as a cost-savings method (3×12 VTS), with time-shifting intervals τ between 15 and 120 min. The 3×36 VTS experiments increased the ensemble spread, with the largest subjective benefits in early-cycle analyses during convective development. The 3×12 VTS experiments captured analyses with accuracy similar to ENS-36 by the third hourly analysis. Control forecasts launched from hourly EnVar analyses show significant skill increases in 1-h precipitation over ENS-36 out to hour 12 for the 3×36 VTS experiments, subjectively attributable to more accurate placement of the convective line. For 3×12 VTS, experiments with τ ≥ 60 min met or exceeded the skill of ENS-36 out to forecast hour 15, with VTS-3×12τ90 maximizing skill. Sensitivity results indicate a preference for τ = 30–60 min for 3×36 VTS and τ = 60–120 min for 3×12 VTS. The best 3×36 VTS experiments add a computational cost of 45–67%, compared to the near tripling of cost when directly increasing ensemble size, while the best 3×12 VTS experiments save about 24–41% in cost relative to ENS-36.
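To make the mechanics concrete, here is a minimal sketch, not the authors' GSI implementation, of how VTS pools each member's states at t − τ, t, and t + τ into one enlarged ensemble; the function name, array shapes, and hourly spacing are illustrative assumptions.

```python
import numpy as np

def valid_time_shift(trajectories, t_idx, shift):
    """Pool states valid at t - tau, t, and t + tau into one ensemble.

    trajectories : (N, T, state_dim) array of N members' hourly states
                   (shapes are hypothetical, for illustration only)
    t_idx        : index of the central analysis time
    shift        : time-shift interval tau, in trajectory steps
    """
    members = [trajectories[:, t_idx - shift],   # subensemble valid at t - tau
               trajectories[:, t_idx],           # subensemble valid at t
               trajectories[:, t_idx + shift]]   # subensemble valid at t + tau
    return np.concatenate(members, axis=0)       # shape (3*N, state_dim)

# Example: 12 base members yield a 36-member (3x12) VTS ensemble
traj = np.random.randn(12, 24, 1000)
ens = valid_time_shift(traj, t_idx=12, shift=1)
print(ens.shape)  # (36, 1000)
```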


2021 ◽  
Vol 12 (4) ◽  
pp. 1427-1501
Author(s):  
Claudia Tebaldi ◽  
Kalyn Dorheim ◽  
Michael Wehner ◽  
Ruby Leung

Abstract. We consider the problem of estimating the ensemble sizes required to characterize the forced component and the internal variability of a number of extreme metrics. While we exploit existing large ensembles, our perspective is that of a modeling center wanting to estimate such sizes a priori on the basis of an existing small ensemble (we assume the availability of only five members here). We therefore ask if such a small ensemble is sufficient to accurately estimate the population variance (i.e., the ensemble internal variability) and then apply a well-established formula that quantifies the expected error in the estimation of the population mean (i.e., the forced component) as a function of the sample size n, here taken to mean the ensemble size. We find that we can indeed anticipate errors in the estimation of the forced component for temperature and precipitation extremes as a function of n by plugging into the formula an estimate of the population variance derived from five members. For a range of spatial and temporal scales, forcing levels (we use simulations under Representative Concentration Pathway 8.5), and the two models considered here as our proof of concept, it appears that an ensemble size of 20 or 25 members can provide estimates of the forced component for the extreme metrics considered that remain within small absolute and percentage errors. Additional members beyond 20 or 25 add only marginal precision to the estimate, and this remains true when statistical inference through extreme value analysis is used. We then ask about the ensemble size required to estimate the ensemble variance (a measure of internal variability) along the length of the simulation and, importantly, about the ensemble size required to detect significant changes in such variance along the simulation with increased external forcings. Using the F test, we find that estimates on the basis of only 5 or 10 ensemble members accurately represent the full ensemble variance, even when the analysis is conducted at the grid-point scale. The detection of changes in the variance when comparing different times along the simulation, especially for the precipitation-based metrics, requires larger sizes, but not larger than 15 or 20 members. While we recognize that there will always exist applications and metric definitions requiring larger statistical power and therefore larger ensemble sizes, our results suggest that for a wide range of analysis targets and scales an effective estimate of both the forced component and internal variability can be achieved with sizes below 30 members. This invites consideration of the possibility of exploring additional sources of uncertainty, such as physics parameter settings, when designing ensemble simulations.
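The abstract does not spell out the "well-established formula", but for the mean of n independent members it is presumably the standard error of the mean, SE(n) = s/√n, with s the sample standard deviation. A minimal sketch of anticipating the forced-component error from a five-member sample (the metric values are hypothetical):

```python
import numpy as np

def expected_error(small_sample, n):
    """Standard error of the ensemble mean at ensemble size n, using the
    population standard deviation estimated from a small sample."""
    s = np.std(small_sample, ddof=1)   # estimate from, e.g., 5 members
    return s / np.sqrt(n)

five_members = np.array([3.1, 2.7, 3.4, 2.9, 3.2])  # hypothetical extreme metric
for n in (5, 10, 20, 25, 50):
    print(f"n = {n:2d}: expected error {expected_error(five_members, n):.3f}")
```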


2021 ◽  
Vol 15 ◽  
Author(s):  
Trond A. Tjøstheim ◽  
Birger Johansson ◽  
Christian Balkenius

Organisms must cope with different risk/reward landscapes in their ecological niche. Hence, species have evolved behavior and cognitive processes to optimally balance approach and avoidance. Navigation through space, including taking detours, also appears to be an essential element of consciousness. Such processes allow organisms to negotiate predation risk and natural geometry that obstruct foraging. One aspect of this is the ability to inhibit a direct approach toward a reward. Using an adaptation of the well-known detour paradigm from comparative psychology, set in a virtual world, we simulate how different neural configurations of inhibitory processes can yield behavior that approximates the characteristics of different species. Results from these simulations may help elucidate how evolutionary adaptation can shape inhibitory processing in particular and behavioral selection in general. More specifically, the results indicate that both the level of inhibition an organism can exert and the size of the neural populations dedicated to inhibition contribute to successful detour navigation. According to our results, both factors help to facilitate detour behavior, but the latter (i.e., larger neural populations) appears to specifically reduce behavioral variation.


Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1980
Author(s):  
Lihua Shen ◽  
Hui Liu ◽  
Zhangxin Chen

In this paper, the deterministic ensemble Kalman filter (DEnKF) is implemented with a message passing interface (MPI) parallelization built on our in-house black-oil simulator. The implementation is separated into two cases: (1) the ensemble size is greater than the number of processors, and (2) the ensemble size is smaller than or equal to the number of processors. Numerical experiments are presented for the estimation of three-phase relative permeabilities represented by power-law models with both known and unknown endpoints. It is shown that with known endpoints, good estimates can be obtained. With unknown endpoints, good estimates can still be obtained by using more observations and a larger ensemble size. Reported computational times show that the run time is greatly reduced with more CPU cores. The MPI speedup is over 70% for a small ensemble size and 77% for a large ensemble size with up to 640 CPU cores.
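A minimal sketch of the two distribution cases using mpi4py; the member count and the rank-to-member mapping are illustrative assumptions, not the paper's actual scheme:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
N_ENS = 100  # hypothetical ensemble size

if N_ENS > nprocs:
    # Case 1: more members than processors -- each rank advances a
    # contiguous block of members, block sizes differing by at most one.
    base, extra = divmod(N_ENS, nprocs)
    start = rank * base + min(rank, extra)
    my_members = list(range(start, start + base + (1 if rank < extra else 0)))
else:
    # Case 2: at least as many processors as members -- ranks are grouped
    # into N_ENS sub-communicators, each group running one member's
    # simulation in (domain-decomposed) parallel.
    member = rank % N_ENS
    sub_comm = comm.Split(color=member, key=rank)
    my_members = [member]

print(f"rank {rank}: members {my_members}")
```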


2021 ◽  
Vol 9 (10) ◽  
pp. 1054
Author(s):  
Ang Su ◽  
Liang Zhang ◽  
Xuefeng Zhang ◽  
Shaoqing Zhang ◽  
Zhao Liu ◽  
...  

Due to model error and the sampling error of a finite ensemble, the background ensemble spread becomes small and the error covariance is underestimated during filtering for data assimilation. Because of computational resource constraints, it is difficult to use a large ensemble size to reduce sampling errors in high-dimensional, realistic atmospheric and ocean models. Here, based on Bayesian theory, we explore a new spatially and temporally varying adaptive covariance inflation algorithm. To improve the statistical representation of a finite background ensemble, the prior probability of inflation is taken to follow an inverse chi-square distribution and the likelihood function a t distribution, which are used to obtain prior or posterior covariance inflation schemes. Different ensemble sizes are used to compare the assimilation quality with other inflation schemes within both perfect and biased model frameworks. We examined the performance of the new scheme with two simple coupled models. The results show that the new inflation scheme performed better than existing schemes in some cases, with more stability and smaller assimilation errors, especially when a small ensemble size was used in the biased model. Owing to its better computational performance and relaxed demand for computational resources, the new scheme has more potential applications in comprehensive models for prediction initialization and reanalysis. In short, the new inflation scheme performs well for a small ensemble size, and it may be more suitable for large-scale models.
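For context, the basic operation any inflation scheme performs is a multiplicative rescaling of the ensemble anomalies; the sketch below shows only that generic step, not the paper's Bayesian update of the inflation factor (the inverse chi-square prior and t-distribution likelihood are what the new scheme adds on top of this):

```python
import numpy as np

def inflate(ensemble, lam):
    """Multiplicative covariance inflation: scaling anomalies about the
    ensemble mean by sqrt(lam) scales the sample covariance by lam."""
    mean = ensemble.mean(axis=0)
    return mean + np.sqrt(lam) * (ensemble - mean)

ens = np.random.randn(10, 4)   # 10 members, 4 state variables (toy sizes)
infl = inflate(ens, lam=1.2)
# The trace of the sample covariance grows by exactly the factor lam:
print(np.trace(np.cov(ens.T)), np.trace(np.cov(infl.T)))
```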


Water ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 2588
Author(s):  
Hao-Che Ho ◽  
Yen-Ming Chiang ◽  
Che-Chi Lin ◽  
Hong-Yuan Lee ◽  
Cheng-Chia Huang

Changes in movable beds are governed by the mechanisms of sediment transport and hydrodynamics. Numerical modelling with empirical equations and the simplified momentum equation is a common means of analyzing the complicated sediment transport processes in river channels. Optimization of parameters is essential to obtain proper results; inadequate parameters introduce errors during the simulation process, and these errors accumulate over long simulations. The optimized parameter combination for numerical modelling, however, is rarely discussed. This study adopted an ensemble method to simulate channel change, using a single model combined with multiple parameter sets, and investigated the optimized parameter combinations for a given river reach. Two river basins located in Taiwan were used as study cases to simulate river morphology with SRH-2D, developed by the U.S. Bureau of Reclamation. The input parameters related to the sediment transport module were randomly selected within reasonable ranges, and the parameter sets yielding proper results were selected as ensemble members. Sediment concentration and bathymetry elevation were used for calibration. Both study cases show that 20 ensemble members were sufficient to capture the results while saving simulation time; increasing the ensemble to 100 members brought no significant improvement, only a longer simulation time. The results showed that the peak concentration and its time of occurrence could be predicted with an ensemble size of 20. Moreover, with bed elevation as the target, the method could quantitatively simulate the change in bed elevation. With both cases, this study showed that the ensemble method is a suitable approach for river morphology modelling, and that an ensemble size of 20 can effectively obtain the result and reduce the uncertainty of sediment transport simulation.
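A minimal sketch of the random parameter sampling step, assuming uniform draws within plausible ranges; the parameter names and bounds are hypothetical, not SRH-2D's actual input keywords:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sediment-transport parameter ranges (illustrative only)
PARAM_RANGES = {
    "manning_n":        (0.020, 0.040),
    "critical_shields": (0.030, 0.060),
    "adaptation_len_m": (50.0, 500.0),
}

def sample_ensemble(n_members):
    """Draw n_members parameter sets uniformly within PARAM_RANGES."""
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
        for _ in range(n_members)
    ]

members = sample_ensemble(20)  # the study found 20 members sufficient
print(members[0])
```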


Author(s):  
Sungju Moon ◽  
Jong-Jin Baik

Abstract. The feasibility of using a (3N)-dimensional generalization of the Lorenz system to test a traditional implementation of the ensemble Kalman filter is explored through numerical experiments. The generalization extends the Lorenz system, known as the Lorenz '63 model, into a (3N)-dimensional nonlinear system for any positive integer N. Because the extension involves the inclusion of additional wavenumber modes, raising the dimension allows the system to resolve smaller-scale motions, a unique characteristic of the present generalization that can be relevant to real modeling scenarios. Model imperfections are simulated by assuming a high-dimensional generalized Lorenz system as the true system and a generalized system of dimension less than or equal to that of the true system as the model system. Different scenarios relevant to data assimilation practice are simulated by varying the dimensional difference between the model and true systems, the ensemble size, and the observation frequency and accuracy. It is suggested that the present generalization of the Lorenz system is an interesting and flexible tool for evaluating the effectiveness of data assimilation methods and a meaningful addition to the portfolio of testbed systems that includes the Lorenz '63 and '96 models, especially considering its relationship with the Lorenz '63 model. The results presented in this study can serve as useful benchmarks for testing other data assimilation methods besides the ensemble Kalman filter.
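The N = 1 case of the generalization is the classic Lorenz '63 system, whose equations are standard; here is a minimal sketch of integrating it with fourth-order Runge-Kutta (step size and parameter values are the conventional choices, not taken from the paper):

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz '63 tendencies, the N = 1 member of the (3N)-dim family."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):               # spin up onto the attractor
    state = rk4_step(lorenz63, state, dt=0.01)
print(state)
```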


2021 ◽  
Author(s):  
Claudia Tebaldi ◽  
Kalyn Dorheim ◽  
Michael Wehner ◽  
Ruby Leung

Abstract. We consider the problem of estimating the ensemble sizes required to characterize the forced component and the internal variability of a range of extreme metrics. While we exploit existing large ensembles contributed to the CLIVAR Large Ensemble Project, our perspective is that of a modeling center wanting to estimate such sizes a priori on the basis of an existing small ensemble (we use five members here). We therefore ask if such a small ensemble is sufficient to estimate the population variance accurately enough to apply a well-established formula that quantifies the expected error as a function of n (the ensemble size). We find that we can indeed anticipate errors in the estimation of the forced component for temperature and precipitation extreme metrics as a function of n by plugging the population variance estimated from five members into the formula. For a range of spatial and temporal scales, forcing levels (we use RCP8.5 simulations), and both models considered here as our proof of concept, CESM1-CAM5 and CanESM2, it appears that an ensemble size of 20 or 25 members can provide estimates of the forced component for the extreme metrics considered that remain within small absolute and percentage errors. Additional members beyond 20 or 25 add only marginal precision to the estimate, which remains true when extreme value analysis is used. We then ask about the ensemble size required to estimate the ensemble variance (a measure of internal variability) along the length of the simulation and, importantly, about the ensemble size required to detect significant changes in such variance along the simulation with increased external forcings. When an F-test is applied to the ratio of the variances in question, one estimated on the basis of only 5 or 10 ensemble members and the other estimated using the full ensemble (up to 50 members in our study), we do not obtain significant results even when the analysis is conducted at the grid-point scale. While we recognize that there will always exist applications and metric definitions requiring larger statistical power and therefore larger ensemble sizes, our results suggest that for a wide range of analysis targets and scales an effective estimate of both the forced component and internal variability can be achieved with sizes below 30 members. This invites consideration of the possibility of exploring additional sources of uncertainty, such as physics parameter settings, when designing ensemble simulations.
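A minimal sketch of the variance-ratio F-test described here, using scipy; the synthetic data and subset sizes are illustrative only:

```python
import numpy as np
from scipy import stats

def variance_ratio_test(sample_a, sample_b, alpha=0.05):
    """Two-sided F-test for equality of the variances of two samples."""
    va, vb = np.var(sample_a, ddof=1), np.var(sample_b, ddof=1)
    F = va / vb
    dfa, dfb = len(sample_a) - 1, len(sample_b) - 1
    p = 2 * min(stats.f.cdf(F, dfa, dfb), stats.f.sf(F, dfa, dfb))
    return F, p, p < alpha

rng = np.random.default_rng(0)
full = rng.normal(size=50)   # stand-in for a 50-member ensemble
small = full[:5]             # a 5-member subset
print(variance_ratio_test(small, full))  # typically not significant
```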


Author(s):  
Mohammad Nezhadali ◽  
Tuhin Bhakta ◽  
Kristian Fossum ◽  
Trond Mannseth

With large amounts of simultaneous data, such as inverted seismic data in reservoir modeling, the negative effects of Monte Carlo errors in straightforward ensemble-based data assimilation (DA) are enhanced, typically resulting in underestimation of parameter uncertainties. Utilizing lower-fidelity reservoir simulations reduces the computational cost per ensemble member and thereby makes it possible to increase the ensemble size without increasing the total computational cost. Increasing the ensemble size reduces Monte Carlo errors and therefore benefits DA results. The use of lower-fidelity reservoir models will, however, introduce modeling errors in addition to those already present in conventional-fidelity simulation results. Multilevel simulations utilize a selection of models for the same entity that constitute hierarchies in both fidelity and computational cost. In this work, we estimate and approximately account for the multilevel modeling error (MLME), that is, the part of the total modeling error that is caused by using a multilevel model hierarchy instead of a single conventional model to calculate model forecasts. To this end, four computationally inexpensive approximate MLME correction schemes are considered, and their abilities to correct multilevel model forecasts for reservoir models with different types of MLME are assessed. The numerical results show a consistent ranking of the MLME correction schemes. Additionally, we assess the performance of the different MLME-corrected model forecasts in the assimilation of inverted seismic data. The posterior parameter estimates from multilevel DA (MLDA) with and without MLME correction are compared to results obtained from conventional single-level DA with localization. It is found that MLDA with and without MLME correction outperforms conventional DA with localization. All four MLME correction schemes yield posterior parameter estimates of similar quality, and results obtained with MLDA without any MLME correction were also of similar quality, indicating some robustness of MLDA toward MLME.
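As one plausible illustration (not necessarily any of the paper's four schemes), the simplest MLME correction is an additive mean shift estimated from a few members run at both fidelities; all names, shapes, and values below are hypothetical:

```python
import numpy as np

def mean_shift_correction(lofi_forecasts, hifi_forecasts):
    """Additive MLME correction: the mean discrepancy between paired
    high- and low-fidelity forecasts of the same members."""
    return (hifi_forecasts - lofi_forecasts).mean(axis=0)

rng = np.random.default_rng(1)
hifi = rng.normal(0.0, 1.0, size=(5, 200))                # 5 paired members
lofi = hifi - 0.3 + rng.normal(0.0, 0.1, size=(5, 200))   # biased coarse model
bias = mean_shift_correction(lofi, hifi)

# Apply the correction to a large, cheap low-fidelity ensemble before DA
lofi_ensemble = rng.normal(-0.3, 1.0, size=(200, 200))
corrected = lofi_ensemble + bias   # approximately removes the fidelity bias
```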

