Influence of the Meteorological Input Data on the Comparison between Calculated and Measured Aerosol Ground Level Concentrations and Depositions

Author(s):  
I. Mertens ◽  
J. Kretzschmar ◽  
B. Vanderborght

In the special case shown here, the Gaussian plume model does not predict the location of the maximum concentration in agreement with the experiment, but it is appropriate for determining the concentration decay in the downwind direction. What happens between the point source and the location of the maximum is of academic interest only. The practical question is how to obtain information about the location of the maximum, beyond which the model is realistic. From equation (3.13) we can deduce a rough approximation of the location where the maximum ground-level concentration occurs. One can argue that turbulent diffusion acts more and more on the emitted substances as the distance from the point source increases; the dependence of the diffusion coefficients on downwind distance is therefore introduced afterwards. If we drop this dependence, equation (3.13) yields x_max = 34.4 m for AK = I (curve a) and x_max = 87.7 m for AK = V (curve b), as demonstrated in Fig. 11, where the interpolated ranges of measured values are also drawn in. Curve a overestimates the nondimensional concentration maximum, but its location seems to be correct; for curve b the situation is reversed. Curve c is calculated with the data of AK = II; the decay of the nondimensional concentration beyond the maximum is predicted well. Curve d is produced with F = 12.1, f = 0.069, G = 0.04 and g = 1.088. Its rise in concentration is acceptable, but that is all, because there is no plausible explanation of how to alter the diffusivity parameters in this way. It must therefore be our aim to find a suitable correction in connection with the meteorological input data.
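The distance-independent approximation can be sketched numerically. In the common textbook formulation (not the paper's equation (3.13), whose exact form is not reproduced here), the vertical dispersion parameter follows a power law σ_z(x) = G·x^g and the ground-level maximum of a Gaussian plume from a source at effective height H lies where σ_z = H/√2; the parameter values below are hypothetical:

```python
import math

def x_max_ground_level(H, G, g):
    """Downwind distance of the maximum ground-level concentration for a
    Gaussian plume when the vertical dispersion parameter follows a
    distance-independent power law sigma_z(x) = G * x**g.
    The maximum occurs where sigma_z equals H / sqrt(2)."""
    return (H / (math.sqrt(2) * G)) ** (1.0 / g)

# Hypothetical example: H = 100 m, G = 0.2, g = 1.0
print(x_max_ground_level(100.0, 0.2, 1.0))
```

A smaller exponent g pushes the maximum farther downwind, which is consistent with the slower growth of σ_z with distance.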


2020 ◽  
Vol 24 (8) ◽  
pp. 4061-4090 ◽  
Author(s):  
Silvia Terzago ◽  
Valentina Andreoli ◽  
Gabriele Arduini ◽  
Gianpaolo Balsamo ◽  
Lorenzo Campo ◽  
...  

Abstract. Snow models are usually evaluated at sites providing high-quality meteorological data, so that the uncertainty in the meteorological input data can be neglected when assessing model performances. However, high-quality input data are rarely available in mountain areas and, in practical applications, the meteorological forcing used to drive snow models is typically derived from spatial interpolation of the available in situ data or from reanalyses, whose accuracy can be considerably lower. In order to fully characterize the performances of a snow model, the model sensitivity to errors in the input data should be quantified. In this study we test the ability of six snow models to reproduce snow water equivalent, snow density and snow depth when they are forced by meteorological input data with gradually lower accuracy. The SNOWPACK, GEOTOP, HTESSEL, UTOPIA, SMASH and S3M snow models are forced, first, with high-quality measurements performed at the experimental site of Torgnon, located at 2160 m a.s.l. in the Italian Alps (control run). Then, the models are forced by data at gradually lower temporal and/or spatial resolution, obtained by (i) sampling the original Torgnon 30 min time series at 3, 6, and 12 h, (ii) spatially interpolating neighbouring in situ station measurements and (iii) extracting information from GLDAS, ERA5 and ERA-Interim reanalyses at the grid point closest to the Torgnon site. Since the selected models are characterized by different degrees of complexity, from highly sophisticated multi-layer snow models to simple, empirical, single-layer snow schemes, we also discuss the results of these experiments in relation to the model complexity. 
The results show that, when forced by accurate 30 min resolution weather station data, the single-layer, intermediate-complexity snow models HTESSEL and UTOPIA provide similar skills to the more sophisticated multi-layer model SNOWPACK, and these three models show better agreement with observations and more robust performances over different seasons compared to the lower-complexity models SMASH and S3M. All models forced by 3-hourly data provide similar skills to the control run, while the use of 6- and 12-hourly temporal resolution forcings may lead to a reduction in model performances if the incoming shortwave radiation is not properly represented. The SMASH model generally shows low sensitivity to the temporal degradation of the input data. Spatially interpolated data from neighbouring stations and reanalyses are found to be adequate forcings, provided that temperature and precipitation variables are not affected by large biases over the considered period. However, a simple bias-adjustment technique applied to ERA-Interim temperatures allowed all models to achieve similar performances to the control run. Regardless of their complexity, all models show weaknesses in the representation of the snow density.
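The temporal-degradation experiment (i) amounts to thinning the 30 min station record to coarser time steps. A minimal sketch, assuming a plain Python sequence with two samples per hour (the study's actual preprocessing is not specified at this level of detail):

```python
def degrade(series_30min, step_hours):
    """Thin a 30-minute-resolution forcing series to a coarser time step
    (e.g. 3, 6 or 12 h) by keeping every (2 * step_hours)-th sample,
    since there are 2 samples per hour at 30 min resolution."""
    stride = 2 * step_hours
    return series_30min[::stride]

one_day = list(range(48))          # 48 half-hourly samples = 1 day
print(degrade(one_day, 3))         # 3-hourly subset of the day
```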


2015 ◽  
Vol 17 (3) ◽  
pp. 149
Author(s):  
Pande Made Udiyani ◽  
Sri Kuntjoro

ABSTRACT: THE INFLUENCE OF ATMOSPHERIC CONDITIONS ON THE PROBABILISTIC CALCULATION OF THE RADIOLOGICAL IMPACT OF PWR 1000-MWe ACCIDENTS.
The calculation of the radiological impact of fission-product releases due to potential accidents that may occur in a PWR (Pressurized Water Reactor) must be carried out probabilistically. Atmospheric conditions contribute greatly to the dispersion of radionuclides in the environment, so this study analyzes their influence on the probabilistic calculation of reactor accident consequences. The objective is to analyze the influence of atmospheric conditions, represented by different meteorological input data models, on the radiological consequences of PWR 1000-MWe accidents. Simulations use the PC-Cosyma code in probabilistic calculation mode, with the meteorological input data executed in cyclic and stratified modes, for the Muria Peninsula and Serang Coastal sites. Meteorological data were taken every hour over a one-year period. The results show that, for the same input model, the cumulative frequency is higher for the Serang Coastal site than for the Muria Peninsula. For the same site, the cumulative frequency of the cyclic input model is higher than that of the stratified model. The cyclic model provides flexibility in setting the level of calculation accuracy and, unlike the stratified model, does not require reference data. The use of cyclic and stratified models involves large amounts of data, and repeating the calculations improves the accuracy of the statistical values obtained. Keywords: accident impact, PWR 1000-MWe, probabilistic, atmospheric, PC-Cosyma
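The cyclic mode's key property, that it covers the whole year without reference data, can be illustrated with a minimal sketch. The interface and spacing rule below are illustrative assumptions, not PC-Cosyma's actual sampling scheme:

```python
def cyclic_start_hours(hours_per_year, n_sequences):
    """Cyclic selection of weather-sequence start times: starts are spaced
    at a fixed interval through the hourly record, so all seasons and
    times of day are sampled without the reference data that a
    stratified scheme requires."""
    interval = hours_per_year // n_sequences
    return [i * interval for i in range(n_sequences)]

# 4 sequences drawn cyclically from one year of hourly data
print(cyclic_start_hours(8760, 4))
```

Increasing `n_sequences` narrows the spacing, which is the "flexibility in setting the level of calculation accuracy" noted above: more sequences mean more repeated dispersion runs and tighter statistics.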


2019 ◽  
Vol 159 ◽  
pp. 90-95 ◽  
Author(s):  
Kosmas A. Kavadias ◽  
Emily Karamanou

2020 ◽  
Author(s):  
Stephanie Mayer ◽  
Alec van Herwijnen ◽  
Mathias Bavay ◽  
Bettina Richter ◽  
Jürg Schweizer

<p>Numerical snow cover models enable simulating present or future snow stratigraphy based on meteorological input data from automatic weather stations, numerical weather prediction or climate models. To assess avalanche danger for short-term forecasts or with respect to long-term trends induced by a warming climate, the modeled vertical layering of the snowpack has to be interpreted in terms of mechanical instability. In recent years, improvements in our understanding of dry-snow slab avalanche formation have led to the introduction of new metrics describing the fracture processes leading to avalanche release. Even though these instability metrics have been implemented into the detailed snow cover model SNOWPACK, validated threshold values that discriminate rather stable from rather unstable snow conditions are not readily available. To overcome this issue, we compared a comprehensive dataset of almost 600 manual snow profiles with simulations. The manual profiles were observed in the region of Davos over 17 different winters and include stability tests such as the Rutschblock test as well as observations of signs of instability. To simulate snow stratigraphy at the locations of the manual profiles, we obtained meteorological input data by interpolating measurements from a network of automatic weather stations. By matching simulated snow layers with the layers from traditional snow profiles, we established a method to detect potential weak layers in the simulated profiles and determine the degree of instability. To this end, thresholds for failure initiation (skier stability index) and crack propagation criteria (critical crack length) were calibrated using the observed stability test results and signs of instability incorporated in the manual observations. The resulting instability criteria are an important step towards exploiting numerical snow cover models for snow instability assessment.</p>
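The calibrated criteria combine a failure-initiation metric with a crack-propagation metric. A minimal sketch of such a two-criterion classifier, with hypothetical placeholder thresholds rather than the calibrated values from the study:

```python
def potentially_unstable(skier_stability_index, critical_crack_length_m,
                         si_threshold=1.0, crack_threshold_m=0.4):
    """Flag a simulated weak layer as potentially unstable when both the
    failure-initiation criterion (skier stability index below a threshold)
    and the crack-propagation criterion (critical crack length below a
    threshold) are met.  Threshold values here are hypothetical
    placeholders, not the thresholds calibrated against the Davos
    profile dataset."""
    return (skier_stability_index < si_threshold and
            critical_crack_length_m < crack_threshold_m)

print(potentially_unstable(0.8, 0.3))   # both criteria met
print(potentially_unstable(1.5, 0.3))   # initiation criterion not met
```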


2019 ◽  
Author(s):  
Silvia Terzago ◽  
Valentina Andreoli ◽  
Gabriele Arduini ◽  
Gianpaolo Balsamo ◽  
Lorenzo Campo ◽  
...  

Abstract. Snow models are usually evaluated at sites providing high-quality meteorological data, so that the uncertainty in the meteorological input data can be neglected when assessing model performances. However, high-quality input data are rarely available in mountain areas and, in practical applications, the meteorological forcing used to drive snow models is typically derived from spatial interpolation of the available in-situ data or from reanalyses, whose accuracy can be considerably lower. In order to fully characterize the performances of a snow model, the model sensitivity to errors in the input data should be quantified. In this study we test the ability of six snow models to reproduce snow water equivalent, snow density and snow depth when they are forced by meteorological input data with gradually lower accuracy. The SNOWPACK, GEOTOP, HTESSEL, UTOPIA, SMASH and S3M snow models are forced, first, with high-quality measurements performed at the experimental site of Torgnon, located at 2160 m a.s.l. in the Italian Alps (control run). Then, the models are forced by data at gradually lower temporal and/or spatial resolutions, obtained (i) by sampling the original Torgnon 30-minute time series at 3, 6, and 12 hours, (ii) by spatially interpolating neighbouring in-situ station measurements and (iii) by extracting information from the GLDAS, ERA5 and ERA-Interim reanalyses at the grid point closest to the Torgnon station. Since the selected models are characterized by different degrees of complexity, from highly sophisticated multi-layer snow models to simple, empirical, single-layer snow schemes, we also discuss the results of these experiments in relation to the model complexity.
Results show that, when forced by accurate 30-min resolution weather station data, the single-layer, intermediate-complexity snow models HTESSEL and UTOPIA provide skills similar to those of the more sophisticated multi-layer model SNOWPACK, and these three models show better agreement with observations and more robust performance across seasons than the lower-complexity models SMASH and S3M. All models forced by 3-hourly data provide skills similar to the control run, while with 6- and 12-hourly temporal resolution forcings we generally observe a reduction in model performance, except for the SMASH model, which shows low sensitivity to the temporal degradation of the input data. Spatially interpolated data from neighbouring stations and reanalyses prove to be adequate forcings, provided that the temperature and precipitation variables are not affected by large biases over the considered period. A simple bias-adjustment technique applied to ERA-Interim temperatures, however, allowed all models to achieve performances similar to the control run. All models, irrespective of their complexity, show weaknesses in the representation of snow density.
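The bias adjustment mentioned for the ERA-Interim temperatures can be sketched as a mean-bias removal against station observations over an overlap period. This is a generic sketch of the simplest such technique, not necessarily the authors' exact procedure:

```python
def bias_adjust(reanalysis_temps, station_temps):
    """Mean-bias adjustment: shift a reanalysis temperature series by the
    mean difference from station observations over the overlap period,
    so the adjusted series has the same mean as the station record."""
    bias = (sum(reanalysis_temps) / len(reanalysis_temps)
            - sum(station_temps) / len(station_temps))
    return [t - bias for t in reanalysis_temps]

# Reanalysis running 1 degree warm on average over the overlap period
print(bias_adjust([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]))
```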


2017 ◽  
Author(s):  
Martin Van Damme ◽  
Simon Whitburn ◽  
Lieven Clarisse ◽  
Cathy Clerbaux ◽  
Daniel Hurtmans ◽  
...  

Abstract. Recently, Whitburn et al. (2016) presented a neural network-based algorithm for retrieving atmospheric ammonia (NH3) columns from IASI satellite observations. In the past year, several improvements have been introduced and the resulting new baseline version, ANNI-NH3-v2, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e. enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2R-I) which relies on ERA-Interim ECMWF meteorological input data, along with built-in surface temperature, rather than the operationally provided EUMETSAT IASI L2 data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when the IASI meteorological L2 version 6 became operational (30 September 2014).
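The land/sea split described above is a routing decision made per observation. A minimal sketch of that design choice; the `is_land` flag, feature vector and network callables are illustrative assumptions, not the ANNI-NH3-v2 interface:

```python
def retrieve_nh3_column(observation, land_network, sea_network):
    """Route an IASI observation to the neural network trained for its
    surface type, mirroring the v2 design choice of separate networks
    for land and sea observations."""
    network = land_network if observation["is_land"] else sea_network
    return network(observation["features"])

# Stand-in "networks" for illustration only
land_net = lambda features: "column from land network"
sea_net = lambda features: "column from sea network"
print(retrieve_nh3_column({"is_land": True, "features": []}, land_net, sea_net))
```

Training each network only on its own surface type avoids forcing one model to fit two populations with different emissivity and thermal-contrast statistics.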

