meteorological input
Recently Published Documents


TOTAL DOCUMENTS: 85 (FIVE YEARS: 19)

H-INDEX: 17 (FIVE YEARS: 3)

Atmosphere ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1654
Author(s):  
Nand Lal Kushwaha ◽  
Jitendra Rajput ◽  
Ahmed Elbeltagi ◽  
Ashraf Y. Elnaggar ◽  
Dipaka Ranjan Sena ◽  
...  

Precise quantification of evaporation plays a vital role in effective crop modelling, irrigation scheduling, and agricultural water management. In recent years, data-driven models using meta-heuristic algorithms have attracted the attention of researchers worldwide. In this investigation, we examined the performance of models employing four such algorithms, namely support vector machine (SVM), random tree (RT), reduced error pruning tree (REPTree), and random subspace (RSS), for simulating daily pan evaporation (EPd) at two locations in north India representing a semi-arid climate (New Delhi) and a sub-humid climate (Ludhiana). The most suitable combinations of meteorological input variables as covariates for estimating EPd were ascertained through the subset regression technique followed by sensitivity analyses. Statistical indicators, namely the root mean square error (RMSE), mean absolute error (MAE), Nash–Sutcliffe efficiency (NSE), Willmott index (WI), and correlation coefficient (r), together with graphical interpretation, were used for model evaluation. During the testing phase, the SVM algorithm outperformed the other applied algorithms in reconstructing the EPd time series, with acceptable statistical criteria (NSE = 0.937, 0.795; WI = 0.984, 0.943; r = 0.968, 0.902; MAE = 0.055, 0.993 mm/day; RMSE = 0.092, 1.317 mm/day) at the New Delhi and Ludhiana stations, respectively. The study also demonstrates the potential of meta-heuristic algorithms to produce reasonable estimates of daily evaporation using minimal meteorological input variables, with the best candidate model vetted in two diverse agro-climatic settings.
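A minimal sketch of the evaluation metrics named above (RMSE, MAE, NSE, WI, r), computed from hypothetical observed and simulated EPd arrays; the formulas follow their standard definitions and are not taken from the paper's code.

```python
import numpy as np

def evaluation_metrics(obs, sim):
    """Standard goodness-of-fit metrics for simulated vs. observed pan evaporation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))                                 # root mean square error
    mae = np.mean(np.abs(err))                                        # mean absolute error
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)    # Nash-Sutcliffe efficiency
    wi = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)   # Willmott index of agreement
    r = np.corrcoef(obs, sim)[0, 1]                                   # Pearson correlation coefficient
    return {"RMSE": rmse, "MAE": mae, "NSE": nse, "WI": wi, "r": r}

# Hypothetical daily EPd values (mm/day), for illustration only.
obs = np.array([3.1, 4.0, 5.2, 4.8, 3.9, 6.1])
sim = np.array([3.3, 3.8, 5.0, 5.1, 4.2, 5.8])
print(evaluation_metrics(obs, sim))
```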


2021 ◽  
Author(s):  
Stephanie Mayer ◽  
Alec van Herwijnen ◽  
Jürg Schweizer

Numerical snow cover models enable simulating present or future snow stratigraphy based on meteorological input data from automatic weather stations, numerical weather prediction or climate models. To assess avalanche danger for short-term forecasts or with respect to long-term trends induced by a warming climate, modeled snow stratigraphy has to be interpreted in terms of mechanical instability. Several instability metrics describing the mechanical processes of avalanche release have been implemented into the detailed snow cover model SNOWPACK. However, there exists no readily available method that combines these metrics to predict snow instability.

To overcome this issue, we compared a comprehensive dataset of almost 600 manual snow profiles with SNOWPACK simulations. The manual profiles were observed in the region of Davos over 17 different winter seasons and include a Rutschblock stability test as well as a local assessment of avalanche danger. To simulate snow stratigraphy at the locations of the manual profiles, we interpolated meteorological input data from a network of automatic weather stations. For each simulated profile, we manually determined the layer corresponding to the weakest layer indicated by the Rutschblock test in the corresponding observed snow profile. We then used the subgroups of the most unstable and the most stable profiles to train a random forest (RF) classification model on the observed stability described by a binary target variable (unstable vs. stable).

As potential explanatory variables, we considered all implemented stability indices calculated for the manually picked weak layers in the simulated profiles as well as further weak layer and slab properties (e.g. weak layer grain size or slab density). After selecting the six most decisive features and tuning the hyper-parameters of the RF, the model was able to distinguish between unstable and stable profiles with a five-fold cross-validated accuracy of 88%.

Our RF model provides the probability of instability (POI) for any simulated snow layer given the features of this layer and the overlying slab. Applying the RF model to each layer of a complete snow profile thus enables the detection of the most unstable layers by considering the local maxima of the POI among all layers of the profile. To analyze the evolution of snow instability over a complete winter season, the RF model can provide the daily maximal POI values for a time series of snow profiles. By comparing this series of POI values with observed avalanche activity, the RF model can be validated.

The resulting statistical model is an important step towards exploiting numerical snow cover models for snow instability assessment.
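A minimal sketch of the modelling approach described above: a random forest classifier with five-fold cross-validation producing a probability of instability. The feature names are placeholders echoing the abstract (they are not the authors' final six features), and the data are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: one row per simulated profile / picked weak layer.
rng = np.random.default_rng(0)
n = 600
X = pd.DataFrame({
    "skier_stability_index": rng.uniform(0.5, 3.0, n),
    "critical_crack_length_m": rng.uniform(0.1, 1.0, n),
    "weak_layer_grain_size_mm": rng.uniform(0.5, 3.0, n),
    "slab_density_kg_m3": rng.uniform(100, 350, n),
})
# Binary target: 1 = unstable, 0 = stable (synthetic, for illustration only).
y = (X["skier_stability_index"] + rng.normal(0, 0.3, n) < 1.5).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")   # five-fold cross-validation
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Probability of instability (POI) per layer/profile after fitting on all data.
clf.fit(X, y)
poi = clf.predict_proba(X)[:, 1]
```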


2020 ◽  
Vol 20 (11) ◽  
pp. 2873-2888 ◽  
Author(s):  
Bettina Richter ◽  
Alec van Herwijnen ◽  
Mathias W. Rotach ◽  
Jürg Schweizer

Abstract. To perform spatial snow cover simulations for numerical avalanche forecasting, interpolation and downscaling of meteorological data are required, which introduce uncertainties. The repercussions of these uncertainties on modeled snow stability remain mostly unknown. We therefore assessed the contribution of meteorological input uncertainty to modeled snow stability by performing a global sensitivity analysis. We used the numerical snow cover model SNOWPACK to simulate two snow instability metrics, i.e., the skier stability index and the critical crack length, for a field site equipped with an automatic weather station providing the necessary input for the model. Simulations were performed for a winter season, which was marked by a prolonged dry period at the beginning of the season. During this period, the snow surface layers transformed into layers of faceted and depth hoar crystals, which were subsequently buried by snow. The early-season snow surface was likely the weak layer of many avalanches later in the season. Three different scenarios were investigated to better assess the influence of meteorological forcing on snow stability during (a) the weak layer formation period, (b) the slab formation period, and (c) the weak layer and slab formation period. For each scenario, 14 000 simulations were performed by introducing quasi-random uncertainties to the meteorological input. Uncertainty ranges for the meteorological forcing covered typical differences observed within a distance of 2 km or an elevation change of 200 m. Results showed that a weak layer formed in 99.7 % of the simulations, indicating that the weak layer formation was very robust due to the prolonged dry period. For scenario (a), the modeled grain size of the weak layer was mainly sensitive to precipitation, while the shear strength of the weak layer was sensitive to most input variables, especially air temperature. Once the weak layer existed (scenario (b)), precipitation was the most prominent driver of snow stability. The sensitivity analysis highlighted that, for all scenarios, the two stability metrics were mostly sensitive to precipitation. Precipitation determined the load of the slab, which in turn influenced weak layer properties. For scenarios (b) and (c), the two stability metrics showed contradicting behaviors: with increasing precipitation, i.e., deeper snowpacks, the skier stability index decreased (became less stable), whereas the critical crack length increased (became more stable). With regard to spatial simulations of snow stability, the high sensitivity to precipitation suggests that accurate precipitation patterns are necessary to obtain realistic snow stability patterns.
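A minimal sketch of how quasi-random perturbations of meteorological forcing can be generated for a global sensitivity analysis of this kind, here with a scrambled Sobol sequence from SciPy. The variable names and uncertainty ranges are illustrative assumptions, not the paper's settings; in the study, each perturbed forcing set would drive one SNOWPACK run.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative forcing perturbations (assumptions): air temperature offset (K),
# precipitation scaling factor (-), wind speed scaling (-), incoming longwave offset (W m-2).
lower = np.array([-3.0, 0.7, 0.7, -20.0])
upper = np.array([+3.0, 1.3, 1.3, +20.0])

sampler = qmc.Sobol(d=4, scramble=True, seed=0)
unit_samples = sampler.random_base2(m=10)            # 2**10 = 1024 quasi-random points in [0, 1)^4
perturbations = qmc.scale(unit_samples, lower, upper)

for dT, p_fac, w_fac, dLW in perturbations[:3]:
    # Each row defines one perturbed forcing set to be applied to the station input.
    print(f"dT={dT:+.2f} K, precip x{p_fac:.2f}, wind x{w_fac:.2f}, dLW={dLW:+.1f} W/m2")
```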


2020 ◽  
Vol 24 (8) ◽  
pp. 4061-4090 ◽  
Author(s):  
Silvia Terzago ◽  
Valentina Andreoli ◽  
Gabriele Arduini ◽  
Gianpaolo Balsamo ◽  
Lorenzo Campo ◽  
...  

Abstract. Snow models are usually evaluated at sites providing high-quality meteorological data, so that the uncertainty in the meteorological input data can be neglected when assessing model performances. However, high-quality input data are rarely available in mountain areas and, in practical applications, the meteorological forcing used to drive snow models is typically derived from spatial interpolation of the available in situ data or from reanalyses, whose accuracy can be considerably lower. In order to fully characterize the performances of a snow model, the model sensitivity to errors in the input data should be quantified. In this study we test the ability of six snow models to reproduce snow water equivalent, snow density and snow depth when they are forced by meteorological input data with gradually lower accuracy. The SNOWPACK, GEOTOP, HTESSEL, UTOPIA, SMASH and S3M snow models are forced, first, with high-quality measurements performed at the experimental site of Torgnon, located at 2160 m a.s.l. in the Italian Alps (control run). Then, the models are forced by data at gradually lower temporal and/or spatial resolution, obtained by (i) sampling the original Torgnon 30 min time series at 3, 6, and 12 h, (ii) spatially interpolating neighbouring in situ station measurements and (iii) extracting information from GLDAS, ERA5 and ERA-Interim reanalyses at the grid point closest to the Torgnon site. Since the selected models are characterized by different degrees of complexity, from highly sophisticated multi-layer snow models to simple, empirical, single-layer snow schemes, we also discuss the results of these experiments in relation to the model complexity. The results show that, when forced by accurate 30 min resolution weather station data, the single-layer, intermediate-complexity snow models HTESSEL and UTOPIA provide similar skills to the more sophisticated multi-layer model SNOWPACK, and these three models show better agreement with observations and more robust performances over different seasons compared to the lower-complexity models SMASH and S3M. All models forced by 3-hourly data provide similar skills to the control run, while the use of 6- and 12-hourly temporal resolution forcings may lead to a reduction in model performances if the incoming shortwave radiation is not properly represented. The SMASH model generally shows low sensitivity to the temporal degradation of the input data. Spatially interpolated data from neighbouring stations and reanalyses are found to be adequate forcings, provided that temperature and precipitation variables are not affected by large biases over the considered period. However, a simple bias-adjustment technique applied to ERA-Interim temperatures allowed all models to achieve similar performances to the control run. Regardless of their complexity, all models show weaknesses in the representation of the snow density.
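A minimal sketch of the temporal degradation described above: a 30 min forcing series aggregated to 3, 6 and 12 h resolution with pandas. The variable names, values and the aggregation choice (mean) are assumptions for illustration, not the study's exact preprocessing.

```python
import numpy as np
import pandas as pd

# Hypothetical 30 min meteorological forcing for a few days (illustrative values).
idx = pd.date_range("2019-01-01", periods=48 * 4, freq="30min")
forcing = pd.DataFrame({
    "air_temp_C": 5 * np.sin(np.linspace(0, 8 * np.pi, len(idx))) - 5,
    "sw_down_Wm2": np.clip(400 * np.sin(np.linspace(0, 8 * np.pi, len(idx))), 0, None),
}, index=idx)

# Degrade the temporal resolution, as in the 3, 6 and 12 h experiments.
for hours in (3, 6, 12):
    coarse = forcing.resample(pd.Timedelta(hours=hours)).mean()
    print(f"{hours:>2} h forcing: {len(coarse)} timesteps (from {len(forcing)} at 30 min)")
```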


Atmosphere ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 678
Author(s):  
Marlon Brancher ◽  
Martin Piringer ◽  
Werner Knauder ◽  
Chuandong Wu ◽  
K. David Griffiths ◽  
...  

Annoyance due to environmental odour exposure is, in many jurisdictions, evaluated by a yes/no decision. Such a binary decision is typically reached via odour impact criteria (OIC) and, when applicable, the resulting separation distances between emission sources and residential areas. If receptors lie inside the required separation distance, the odour exposure is characterised as having the potential to cause excessive annoyance. The state-of-the-art methodology to determine separation distances comprises two general steps: (i) calculation of the odour exposure (a time series of ambient odour concentrations) using dispersion models and (ii) determination of separation distances through the evaluation of this odour exposure against the OIC. Regarding meteorological input data, dispersion models need standard meteorological observations and/or atmospheric stability, typically on an hourly basis, which requires expertise in this field. In the planning phase, and as a screening tool, an educated guess of the separation distances necessary to avoid annoyance is in some cases sufficient. Therefore, empirical equations (EQs) are used as a substitute for the more time-consuming and costly application of dispersion models. Because the shape of the separation distance often resembles the wind distribution of a site, wind data should be included in such approaches; otherwise, the resulting separation distance shape is simply a circle around the emission source. Here, an outline of selected empirical equations is given, and it is shown that only a few of them properly reflect the meteorological situation of a site. Furthermore, for three case studies, separation distances calculated from empirical equations were compared against those from Gaussian plume and Lagrangian particle dispersion models. Overall, our results suggest that some empirical equations reach their limitations, in the sense that they cannot capture the inherent complexity of dispersion models. However, the empirical equations developed for Germany and Austria have the potential to deliver reasonable results, especially when used within the conditions for which they were designed. The main advantage of empirical equations lies in the simplified meteorological input data they require and their fast and straightforward application.
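A minimal sketch of the structure such empirical equations typically take: a hypothetical power law in the odour emission rate, weighted by the wind-direction frequency so that the separation distance follows the wind distribution. The functional form and coefficients are illustrative assumptions, not the German or Austrian guideline formulas discussed in the paper.

```python
import numpy as np

def separation_distance(odour_emission_ou_s, wind_dir_freq, a=5.0, b=0.5, c=0.3):
    """Hypothetical empirical separation distance per wind-direction sector (m).

    odour_emission_ou_s : odour emission rate in ou/s
    wind_dir_freq       : relative frequency of wind blowing towards each sector
    a, b, c             : illustrative coefficients (assumptions, not guideline values)
    """
    wind_dir_freq = np.asarray(wind_dir_freq, float)
    # Direction-dependent distance: larger where the wind blows more often.
    return a * odour_emission_ou_s ** b * wind_dir_freq ** c

# Twelve 30-degree sectors with a hypothetical wind climatology (frequencies sum to 1).
freq = np.array([0.04, 0.05, 0.08, 0.12, 0.15, 0.12,
                 0.08, 0.06, 0.05, 0.09, 0.10, 0.06])
dist = separation_distance(5000.0, freq)
print(np.round(dist))  # one distance per sector; a uniform freq would give a circle
```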


Atmosphere ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 574 ◽  
Author(s):  
Mario Adani ◽  
Antonio Piersanti ◽  
Luisella Ciancarella ◽  
Massimo D’Isidoro ◽  
Maria Gabriella Villani ◽  
...  

Since 2017, the operational high-resolution air quality forecasting system FORAIR_IT, developed and maintained by the Italian National Agency for New Technologies, Energy and Sustainable Economic Development, has been providing daily three-day forecasts of atmospheric pollutant concentrations over Europe and Italy at high spatial resolution (20 km over Europe, 4 km over Italy). The system is based on the Atmospheric Modelling System of the National Integrated Assessment Model for Italy (AMS-MINNI), a national modelling system evaluated in several studies across Italy and Europe. AMS-MINNI, in its forecasting setup, is presently a candidate model for the Copernicus Atmosphere Monitoring Service's regional production, dedicated to European-scale ensemble model forecasts of air quality. In order to improve the quality of the meteorological input into the chemical transport model component of FORAIR_IT, several tests were carried out on daily forecasts of NO2 and O3 concentrations for January and August 2019 (representative of the winter and summer meteorological seasons, respectively). The aim was to evaluate the sensitivity of NO2 and O3 concentration forecasts to the meteorological input. More specifically, the Weather Research and Forecasting model (WRF) was tested as a potential improvement of the meteorological driver with respect to the Regional Atmospheric Modelling System (RAMS), which is currently embedded in FORAIR_IT. In this work, the WRF chain is run in several setups, changing the parameterization of several micrometeorological variables (snow, mixing height, albedo, roughness length, soil heat flux + friction velocity, Monin–Obukhov length), with the main objective of taking advantage of WRF's consistent physics in the calculation of both mesoscale variables and micrometeorological parameters for air quality simulations. Daily forecast concentrations produced by the different meteorological model configurations are compared to the available measured concentrations, showing the generally good performance of WRF-driven results, although performance varies with the individual meteorological configuration and the pollutant type. WRF-driven forecasts clearly improve the reproduction of the temporal variability of concentrations, while the bias of O3 is higher than in the RAMS-driven configuration. The results suggest continuing to test WRF configurations, with the objective of obtaining a robust improvement in forecast concentrations with respect to RAMS-driven forecasts.
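A minimal sketch of the kind of forecast-versus-observation comparison described above, computing bias, RMSE and temporal correlation per model configuration; the configuration names, values and the single hypothetical station are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly observed and forecast concentrations (ug/m3) for one station.
idx = pd.date_range("2019-08-01", periods=240, freq=pd.Timedelta(hours=1))
rng = np.random.default_rng(1)
obs = pd.Series(60 + 25 * np.sin(np.linspace(0, 20 * np.pi, len(idx)))
                + rng.normal(0, 5, len(idx)), index=idx)
forecasts = {
    "RAMS-driven": obs + rng.normal(2, 8, len(idx)),   # illustrative only
    "WRF-driven": obs + rng.normal(5, 5, len(idx)),
}

for name, fc in forecasts.items():
    bias = (fc - obs).mean()
    rmse = np.sqrt(((fc - obs) ** 2).mean())
    r = fc.corr(obs)                                   # temporal correlation
    print(f"{name:12s}  bias={bias:+.1f}  RMSE={rmse:.1f}  r={r:.2f}")
```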


2020 ◽  
Vol 8 (4) ◽  
pp. 260 ◽  
Author(s):  
Luigi Cavaleri ◽  
Francesco Barbariol ◽  
Alvise Benetazzo

We perform a critical analysis of the present approach to wave modeling and of the related results. While acknowledging the good quality of the best present forecasts, we point out the limitations that appear when we focus on the corresponding spectra. Apart from the meteorological input, these limitations are traced back to the spectral approach at the base of the present operational models and the consequent approximations involved in properly modeling the various physical processes at work. Future alternatives are discussed. We then focus our attention on how, under these constraints, to approach today the estimate of maximum wave heights, both in the long term and for a specific situation. For this, and within the above limits, a more precise evaluation of the wave spectrum is shown to be a mandatory condition.
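A minimal sketch of why the wave spectrum is central to such estimates: the significant wave height follows from the zeroth spectral moment, and a standard Rayleigh-based argument gives an expected maximum over the waves in a sea state. The Pierson-Moskowitz-type spectrum and all parameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative Pierson-Moskowitz-type frequency spectrum (assumed peak frequency 0.1 Hz).
g, alpha, fp = 9.81, 0.0081, 0.1
f = np.linspace(0.03, 1.0, 2000)                      # frequency axis (Hz)
df = f[1] - f[0]
S = alpha * g**2 / ((2 * np.pi)**4 * f**5) * np.exp(-1.25 * (fp / f)**4)

m0 = np.sum(S) * df                                   # zeroth spectral moment
m2 = np.sum(f**2 * S) * df                            # second spectral moment
Hs = 4.0 * np.sqrt(m0)                                # significant wave height
Tz = np.sqrt(m0 / m2)                                 # mean zero-crossing period
N = 3 * 3600 / Tz                                     # number of waves in a 3-hour sea state
Hmax = Hs * np.sqrt(np.log(N) / 2.0)                  # Rayleigh-based expected maximum

print(f"Hs = {Hs:.2f} m, Tz = {Tz:.1f} s, expected Hmax over 3 h = {Hmax:.2f} m")
```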


2020 ◽  
Author(s):  
Stephanie Mayer ◽  
Alec van Herwijnen ◽  
Mathias Bavay ◽  
Bettina Richter ◽  
Jürg Schweizer

Numerical snow cover models enable simulating present or future snow stratigraphy based on meteorological input data from automatic weather stations, numerical weather prediction or climate models. To assess avalanche danger for short-term forecasts or with respect to long-term trends induced by a warming climate, the modeled vertical layering of the snowpack has to be interpreted in terms of mechanical instability. In recent years, improvements in our understanding of dry-snow slab avalanche formation have led to the introduction of new metrics describing the fracture processes leading to avalanche release. Even though these instability metrics have been implemented into the detailed snow cover model SNOWPACK, validated threshold values that discriminate rather stable from rather unstable snow conditions are not readily available. To overcome this issue, we compared a comprehensive dataset of almost 600 manual snow profiles with simulations. The manual profiles were observed in the region of Davos over 17 different winters and include stability tests such as the Rutschblock test as well as observations of signs of instability. To simulate snow stratigraphy at the locations of the manual profiles, we obtained meteorological input data by interpolating measurements from a network of automatic weather stations. By matching simulated snow layers with the layers from traditional snow profiles, we established a method to detect potential weak layers in the simulated profiles and determine the degree of instability. To this end, thresholds for failure initiation (skier stability index) and crack propagation criteria (critical crack length) were calibrated using the observed stability test results and signs of instability incorporated in the manual observations. The resulting instability criteria are an important step towards exploiting numerical snow cover models for snow instability assessment.
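A minimal sketch of one way such discrimination thresholds can be calibrated from observed stability labels: choosing the cut-off on a simulated stability index that maximizes Youden's J along a ROC curve. The index values and labels are synthetic placeholders, and the paper's actual calibration procedure may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic placeholders: simulated skier stability index per profile and the
# observed binary stability from Rutschblock tests (1 = unstable, 0 = stable).
rng = np.random.default_rng(2)
index = np.concatenate([rng.normal(1.0, 0.3, 150),    # unstable profiles
                        rng.normal(1.8, 0.4, 250)])   # stable profiles
unstable = np.concatenate([np.ones(150), np.zeros(250)])

# A lower index means less stable, so use the negated index as the instability score.
fpr, tpr, thresholds = roc_curve(unstable, -index)
best = np.argmax(tpr - fpr)                           # Youden's J statistic
threshold = -thresholds[best]                         # back-transform to the index scale
print(f"Calibrated threshold: classify as unstable if index < {threshold:.2f}")
```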


2020 ◽  
Author(s):  
Ignacio Pisso ◽  

Following its release and corresponding publication in GMD, we present the Lagrangian model FLEXPART 10.4, which simulates the transport, diffusion, dry and wet deposition, radioactive decay and first-order chemical reactions of atmospheric tracers. The model has recently been updated, both technically and in the representation of physico-chemical processes.

FLEXPART was in its original version in the mid-1990s designed for calculating the long-range and mesoscale dispersion of hazardous substances from point sources, such as those released after an accident in a nuclear power plant. Given suitable meteorological input data, it can be used for scales from dozens of metres to the global scale. In particular, inverse modelling based on source-receptor relationships from FLEXPART has become widely used. In this paper, we present FLEXPART version 10.4, which works with meteorological input data from the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecast System (IFS) and data from the United States' National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS). Since the last publication of a detailed FLEXPART description (version 6.2), the model has been improved in different aspects such as performance, physico-chemical parametrizations, input/output formats and available pre- and post-processing software. The model code has also been parallelized using the Message Passing Interface (MPI). We demonstrate that the model scales well up to 256 processors, with a parallel efficiency greater than 75% for up to 64 processes on multiple nodes in runs with very large numbers of particles. The deviation from 100% efficiency is almost entirely due to remaining non-parallelized parts of the code, suggesting large potential for further speed-up. A new turbulence scheme for the convective boundary layer has been developed that considers the skewness of the vertical velocity distribution (updrafts and downdrafts) and vertical gradients in air density. FLEXPART is the only model available considering both effects, making it highly accurate for small-scale applications, e.g. to quantify dispersion in the vicinity of a point source. The wet deposition scheme for aerosols has been completely rewritten, and a new, more detailed gravitational settling parameterization for aerosols has also been implemented. FLEXPART has had the option of running backward in time from atmospheric concentrations at receptor locations for many years, but this has now been extended to work also for deposition values. To our knowledge, FLEXPART is to date the only model with that capability. Furthermore, temporal variation and temperature dependence of chemical reactions with the OH radical have been included, allowing more accurate simulations for species with intermediate lifetimes against the reaction with OH, such as ethane. Finally, user settings can now be specified in a more flexible namelist format, and output files can be produced in NetCDF format instead of FLEXPART's customary binary format. In this paper, we describe these new developments. Moreover, we present some tools for the preparation of the meteorological input data and for processing of FLEXPART output data, and briefly report on alternative FLEXPART versions.
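A minimal sketch of the parallel-efficiency measure cited above, E(N) = T(1) / (N * T(N)); the wall-clock timings are hypothetical numbers chosen only to illustrate how a >75% figure at 64 processes would be computed.

```python
# Parallel efficiency E(N) = T(1) / (N * T(N)), where T(N) is the wall-clock
# time with N MPI processes. Timings below are hypothetical, for illustration only.
timings = {1: 10000.0, 8: 1380.0, 64: 205.0, 256: 65.0}   # seconds

t1 = timings[1]
for n, tn in timings.items():
    speedup = t1 / tn
    efficiency = speedup / n
    print(f"{n:>3} processes: speedup {speedup:6.1f}x, efficiency {efficiency:5.1%}")
```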

