Development of an Inverse Plume Model for Mass Eruption Rate in Unsteady Conditions

Author(s):  
Stephen A Solovitz

Abstract. Following volcanic eruptions, forecasters need accurate estimates of mass eruption rate (MER) to predict the downstream effects appropriately. Most analyses use simple correlations or models based on large eruptions at steady conditions, even though many volcanoes feature significant unsteadiness. To address this, a superposition model is developed based on a technique used for spray injection applications, which predicts plume height as a function of the time-varying exit velocity. This model can be inverted, providing estimates of MER from field observations of a plume. The model parameters are optimized using laboratory data for plumes with physically relevant exit profiles and Reynolds numbers, resulting in predictions that agree to within 10% of measured exit velocities. The model performance is examined using a historic eruption from Stromboli with well-documented unsteadiness, again providing MER estimates of the correct order of magnitude. This method can provide a rapid alternative for real-time forecasting of small, unsteady eruptions.
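
The paper's unsteady superposition model is not reproduced here. As a minimal sketch of the inversion idea it builds on, the snippet below inverts the classical steady-state height correlation of Mastin et al. (2009), a different and much simpler relation; the dense-rock density is an assumed typical value.

```python
# Minimal sketch of inverting a plume-height relation for MER.
# Uses the steady-state empirical fit of Mastin et al. (2009),
# H = 2.00 * V**0.241 (H: plume height in km above the vent,
# V: dense-rock-equivalent volumetric flow rate in m^3/s),
# NOT the unsteady superposition model developed in the paper.

def mer_from_height(height_km, rho_dre=2500.0):
    """Estimate mass eruption rate (kg/s) from observed plume height (km)."""
    volume_rate = (height_km / 2.00) ** (1.0 / 0.241)  # invert the power law
    return volume_rate * rho_dre                        # m^3/s -> kg/s

# Example: an 8 km plume implies an MER on the order of 10^6 kg/s.
print(f"MER ~ {mer_from_height(8.0):.1e} kg/s")
```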

2018 ◽  
Vol 360 ◽  
pp. 61-83 ◽  
Author(s):  
Tobias Dürig ◽  
Magnús T. Gudmundsson ◽  
Fabio Dioguardi ◽  
Mark Woodhouse ◽  
Halldór Björnsson ◽  
...  

2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of weighted mean temperature (Tm) is critical for real-time conversion of zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more solutions for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) has been developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. Firstly, the data used for modeling cover the whole troposphere rather than just the region near the Earth's surface. Secondly, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale. Furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters: Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. Validation with globally distributed radiosonde data shows that the ENNTm performs better than competing models from different perspectives under the same application conditions; the proposed model expands the application scope of Tm estimation and provides global users with more choices in real-time GNSS-PWV retrieval.
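
For context on the ZWD-to-PWV step that Tm enables, below is a minimal sketch of the standard Bevis et al. (1992) conversion factor, assuming SI units throughout; the ENNTm itself (network weights, preprocessing) is not reproduced here.

```python
def pwv_from_zwd(zwd_m, tm_k):
    """Convert zenith wet delay (m) to precipitable water vapor (m of water)
    via PWV = Pi * ZWD, with Pi = 1e6 / (rho_w * R_v * (k3 / Tm + k2'))
    (Bevis et al., 1992); refractivity constants expressed in SI units."""
    rho_w = 1000.0    # density of liquid water, kg/m^3
    r_v = 461.5       # specific gas constant of water vapor, J/(kg K)
    k2_prime = 0.221  # K/Pa   (22.1 K/hPa)
    k3 = 3.739e3      # K^2/Pa (3.739e5 K^2/hPa)
    pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm_k + k2_prime))  # ~0.15
    return pi_factor * zwd_m

# Example: ZWD = 0.20 m with Tm = 273 K gives a PWV of roughly 31 mm.
print(f"PWV = {pwv_from_zwd(0.20, 273.0) * 1000:.1f} mm")
```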


1995 ◽  
Vol 377 ◽  
Author(s):  
Tilo P. Drüsedau ◽  
Andreas N. Panckow ◽  
Bernd Schröder

Abstract. Investigations of the gap state density were performed on a variety of samples of hydrogenated amorphous silicon-germanium alloys (Ge fraction around 40 at%) containing different amounts of hydrogen. From subgap absorption measurements, the values of the "integrated excess absorption" and the "defect absorption" were determined. Using a calibration constant that is well established for determining the defect density from the integrated excess absorption of a-Si:H and a-Ge:H, it was found that the defect density is underestimated by nearly one order of magnitude. The underlying mechanisms for this discrepancy are discussed. The calibration constants for the present alloys are determined to be 8.3×10¹⁶ eV⁻¹ cm⁻² and 1.7×10¹⁶ cm⁻² for the excess and defect absorption, respectively. The defect density of the films was found to depend on the Urbach energy according to the law derived from Stutzmann's dangling-bond-weak-bond conversion model for a-Si:H. However, the model parameters, namely the density of states at the onset of the exponential tails, N* = 27×10²⁰ eV⁻¹ cm⁻³, and the position of the demarcation energy, Edb − E* = 0.1 eV, are considerably smaller than in a-Si:H.
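
As a unit check on how such a calibration constant is applied, a one-line sketch, assuming the integrated excess absorption is given in eV·cm⁻¹ so that multiplication by the constant (eV⁻¹·cm⁻²) yields a defect density in cm⁻³:

```python
def defect_density(integrated_excess_absorption, calib=8.3e16):
    """Defect density (cm^-3) = calibration constant (eV^-1 cm^-2; the alloy
    value reported above) * integrated excess absorption (eV cm^-1)."""
    return calib * integrated_excess_absorption
```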


2016 ◽  
Vol 144 (2) ◽  
pp. 575-589 ◽  
Author(s):  
S. Lu ◽  
H. X. Lin ◽  
A. W. Heemink ◽  
G. Fu ◽  
A. J. Segers

Abstract. Volcanic ash forecasting is a crucial tool in hazard assessment and operational volcano monitoring. Emission parameters such as plume height, total emission mass, and the vertical distribution of the emission rate are essential inputs for volcanic ash models. Therefore, estimating emission parameters from available observations through data assimilation could increase forecast accuracy and provide reliable advisory information. This paper focuses on the use of satellite total-ash-column data in 4D-Var-based assimilation. Experiments show that it is very difficult to estimate the vertical distribution of effective volcanic ash injection rates from satellite-observed ash columns using a standard 4D-Var assimilation approach. This paper addresses the ill-posed nature of the assimilation problem from the perspective of spurious relationships. To reduce the influence of the spurious relationships created by the column-integrating observation operator, an adjoint-free trajectory-based 4D-Var assimilation method is proposed, which estimates the vertical profile of volcanic ash from volcanic eruptions more accurately. The method seeks the vertical distribution of emission rates that minimizes a reformulated cost function computing the total difference between simulated and observed ash columns. A 3D simplified aerosol transport model and synthetic satellite observations are used to compare the results of the standard method and the new method.
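
A minimal sketch of the kind of observation-space cost function described (total misfit between simulated and observed ash columns); background terms, error covariances, and the trajectory-based minimization itself are omitted, and `forward_model` is a hypothetical stand-in for the aerosol transport model:

```python
import numpy as np

def column_misfit_cost(emission_profile, forward_model, observed_columns):
    """Observation-space cost: half the total squared difference between
    simulated and observed total ash columns over all times and pixels.
    emission_profile: vertical distribution of emission rates (1-D array).
    forward_model: maps an emission profile to simulated column loadings."""
    simulated = forward_model(emission_profile)
    residual = simulated - observed_columns
    return 0.5 * np.sum(residual ** 2)
```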


2018 ◽  
Vol 15 (9) ◽  
pp. 2909-2930 ◽  
Author(s):  
Sebastian Lienert ◽  
Fortunat Joos

Abstract. A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters with observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated against a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated, and estimates of ELUC for the 10 countries contributing most to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is of the same order of magnitude as the parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
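
A minimal sketch of Latin hypercube sampling as used to build such a perturbed parameter ensemble; the parameter bounds shown are hypothetical, not LPX-Bern's:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Draw an (n_samples, n_params) Latin hypercube within per-parameter
    (low, high) bounds: each parameter range is split into n_samples strata,
    one draw per stratum, with strata independently permuted per parameter."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)          # shape (n_params, 2)
    n_params = bounds.shape[0]
    strata = np.tile(np.arange(n_samples), (n_params, 1))
    strata = rng.permuted(strata, axis=1).T           # (n_samples, n_params)
    u = (strata + rng.random((n_samples, n_params))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. a 1000-member ensemble over three hypothetical model parameters
ensemble = latin_hypercube(1000, [(0.1, 1.0), (0.0, 5.0), (10.0, 200.0)])
```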


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of constructing models. However, most recent studies focus either on comparing predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed important for the catchment. Next, we create 13 additional simplified models, in which some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (the logarithmic Nash–Sutcliffe efficiency; the percentage bias, PBIAS; and the ratio of the root mean square error to the standard deviation of the measured data, RSR). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows the construction of a more streamlined, subsequent 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 against HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
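
The three objective functions are standard and compact; a minimal sketch, assuming `sim` and `obs` are aligned NumPy arrays of discharge (note that sign conventions for PBIAS vary between papers):

```python
import numpy as np

def log_nse(sim, obs, eps=1e-6):
    """Nash-Sutcliffe efficiency on log-transformed flows (low-flow weighted)."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(sim, obs):
    """Percentage bias; positive values indicate overestimation here."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rsr(sim, obs):
    """Root mean square error divided by the standard deviation of obs."""
    return np.sqrt(np.mean((sim - obs) ** 2)) / np.std(obs)
```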


1980 ◽  
Vol 17 (1) ◽  
pp. 60-71 ◽  
Author(s):  
Jean-Claude Mareschal ◽  
Gordon F. West

A tectonic model that attempts to explain common features of Archean geology is investigated. The model supposes the accumulation, by volcanic eruptions, of a thick basaltic pile on a granitoid crust. The thermal blanketing effect of this lava raises the temperature of the granitic crust and eventually softens it enough that gravitational slumping and downfolding of the lava follow. Numerical models of the thermal and mechanical evolution of a granitoid crust covered with a thick lava sequence indicate that such an evolution is possible when reasonable assumptions are made about the temperature dependence of the viscosity of crustal rocks. These models show the lava sinking in relatively narrow regions while wider granite diapirs appear in between. The convection produces strong horizontal temperature gradients that may cause lateral changes in metamorphic facies. A one-order-of-magnitude drop in accumulated strain occurs between the granite–basalt interface and the center of the granite diapir at a depth of 10–15 km.
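
Where the abstract mentions "reasonable assumptions about the temperature dependence of the viscosity", an Arrhenius-type law of the following form is a common choice in crustal flow models; the reference viscosity and activation energy below are illustrative values, not the paper's:

```python
import numpy as np

def crustal_viscosity(temp_k, eta_ref=1e21, t_ref=1273.0, e_act=2.5e5):
    """Arrhenius-type viscosity (Pa s) relative to a reference state:
    eta = eta_ref * exp((E/R) * (1/T - 1/T_ref)). Warmer crust is weaker,
    which is what lets the softened granitoid basement flow. Illustrative
    parameter values only (eta_ref in Pa s, e_act in J/mol, T in K)."""
    r_gas = 8.314  # universal gas constant, J/(mol K)
    return eta_ref * np.exp((e_act / r_gas) * (1.0 / temp_k - 1.0 / t_ref))
```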


2014 ◽  
Vol 14 (23) ◽  
pp. 32233-32323 ◽  
Author(s):  
M. Bocquet ◽  
H. Elbern ◽  
H. Eskes ◽  
M. Hirtl ◽  
R. Žabkar ◽  
...  

Abstract. Data assimilation is used in atmospheric chemistry models to improve air quality forecasts, construct reanalyses of three-dimensional chemical (including aerosol) concentrations and perform inverse modeling of input variables or model parameters (e.g., emissions). Coupled chemistry meteorology models (CCMM) are atmospheric chemistry models that simulate meteorological processes and chemical transformations jointly. They offer the possibility to assimilate both meteorological and chemical data; however, because CCMM are fairly recent, data assimilation in CCMM has been limited to date. We review here the current status of data assimilation in atmospheric chemistry models with a particular focus on future prospects for data assimilation in CCMM. We first review the methods available for data assimilation in atmospheric models, including variational methods, ensemble Kalman filters, and hybrid methods. Next, we review past applications that have included chemical data assimilation in chemical transport models (CTM) and in CCMM. Observational data sets available for chemical data assimilation are described, including surface data, surface-based remote sensing, airborne data, and satellite data. Several case studies of chemical data assimilation in CCMM are presented to highlight the benefits obtained by assimilating chemical data in CCMM. A case study of data assimilation to constrain emissions is also presented. There are few examples to date of joint meteorological and chemical data assimilation in CCMM, and the potential difficulties associated with data assimilation in CCMM are discussed. As the number of variables being assimilated increases, it is essential to characterize the errors correctly; in particular, the specification of error cross-correlations may be problematic. In some cases, offline diagnostics are necessary to ensure that data assimilation can truly improve model performance. However, the main challenge is likely to be the paucity of chemical data available for assimilation in CCMM.
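
Of the methods surveyed, the ensemble Kalman filter is compact enough to sketch; below is a minimal stochastic (perturbed-observation) EnKF analysis step, assuming a linear observation operator given as a matrix. This is a generic textbook form, not any specific CCMM implementation:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_err_var, seed=0):
    """One stochastic EnKF analysis step.
    ensemble: (n_members, n_state) forecast ensemble.
    obs: (n_obs,) observation vector.
    obs_operator: (n_obs, n_state) linear observation operator H.
    obs_err_var: scalar observation-error variance."""
    rng = np.random.default_rng(seed)
    n = ensemble.shape[0]
    hx = ensemble @ obs_operator.T                 # members in observation space
    xp = ensemble - ensemble.mean(axis=0)          # state anomalies
    hxp = hx - hx.mean(axis=0)                     # obs-space anomalies
    pxy = xp.T @ hxp / (n - 1)                     # state-obs covariance
    pyy = hxp.T @ hxp / (n - 1) + obs_err_var * np.eye(len(obs))
    gain = pxy @ np.linalg.inv(pyy)                # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), (n, len(obs)))
    return ensemble + (perturbed - hx) @ gain.T    # updated (analysis) ensemble
```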


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of hydrological models depend upon the input data, and the quality of input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With these modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half-space depth was used. The depth of each of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash–Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can find a set of robust ones. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
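
Exact half-space (Tukey) depth is expensive to compute in higher dimensions; a common random-projection approximation can be sketched as follows. This is a generic approximation for illustration, not necessarily the computation used in the paper:

```python
import numpy as np

def halfspace_depth(point, cloud, n_dirs=1000, seed=0):
    """Approximate Tukey half-space depth of `point` (shape (n_params,))
    within `cloud` (shape (n_points, n_params)): the minimum, over random
    unit directions, of the fraction of cloud points whose projection lies
    at or beyond the point's projection. Deeper points are more central,
    i.e. more robust choices of parameter vector."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    proj_cloud = cloud @ dirs.T                           # (n_points, n_dirs)
    proj_point = point @ dirs.T                           # (n_dirs,)
    return (proj_cloud >= proj_point).mean(axis=0).min()
```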


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
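
The "fuzzification" can be pictured as crediting a map cell if a matching value occurs anywhere within a neighborhood set by the length-scale parameter; a crude sketch of that idea, not the paper's exact membership functions:

```python
import numpy as np

def fuzzify(grid, radius):
    """Neighborhood-maximum fuzzification of a 2-D map: each cell takes the
    maximum value found within `radius` cells (the length-scale parameter).
    Comparing fuzzified simulated and observed grids credits near-misses in
    location, at the cost of spatial resolution."""
    ny, nx = grid.shape
    out = np.zeros_like(grid)
    for j in range(ny):
        for i in range(nx):
            y0, y1 = max(0, j - radius), min(ny, j + radius + 1)
            x0, x1 = max(0, i - radius), min(nx, i + radius + 1)
            out[j, i] = grid[y0:y1, x0:x1].max()
    return out
```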

