Independent evaluation of the SNODAS snow depth product using regional-scale lidar-derived measurements

2015 ◽  
Vol 9 (1) ◽  
pp. 13-23 ◽  
Author(s):  
A. Hedrick ◽  
H.-P. Marshall ◽  
A. Winstral ◽  
K. Elder ◽  
S. Yueh ◽  
...  

Abstract. Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely with the model estimates than the in situ measurements alone did. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.
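As a rough illustration of the upscaling comparison described in this abstract, the sketch below block-averages a fine-resolution lidar snow depth grid to a coarser cell size and computes a root-mean-square difference against a model grid. It assumes NumPy arrays and a hypothetical integer aggregation factor; it is not the authors' processing chain.

```python
import numpy as np

def upscale_mean(depth, factor):
    """Block-average a fine-resolution depth grid by an integer factor
    (e.g. 1 m lidar pixels to 1 km cells with factor=1000), ignoring
    NaN voids within each block."""
    ny, nx = depth.shape
    ny_c, nx_c = ny // factor, nx // factor
    blocks = depth[:ny_c * factor, :nx_c * factor].reshape(
        ny_c, factor, nx_c, factor)
    return np.nanmean(blocks, axis=(1, 3))

def rmsd(a, b):
    """Root-mean-square difference over cells where both grids have data."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    return float(np.sqrt(np.mean((a[mask] - b[mask]) ** 2)))

# Hypothetical usage:
# lidar_1km = upscale_mean(lidar_depth, factor=1000)
# print(rmsd(lidar_1km, snodas_depth))
```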

2014 ◽  
Vol 8 (3) ◽  
pp. 3141-3170
Author(s):  
A. Hedrick ◽  
H.-P. Marshall ◽  
A. Winstral ◽  
K. Elder ◽  
S. Yueh ◽  
...  

Abstract. Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the LiDAR flights. The remotely sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely with the model estimates than the in situ measurements alone did. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.


2013 ◽  
Vol 54 (62) ◽  
pp. 273-281 ◽  
Author(s):  
Kjetil Melvold ◽  
Thomas Skaugen

Abstract. This study presents results from an Airborne Laser Scanning (ALS) mapping survey of snow depth on the mountain plateau Hardangervidda, Norway, in 2008 and 2009 at the approximate time of maximum snow accumulation during the winter. The spatial extent of the survey area is >240 km2. Large variability is found for snow depth at a local scale (2 m2), and similar spatial patterns in accumulation are found between 2008 and 2009. The local snow-depth measurements were aggregated by averaging to produce new datasets at 10, 50, 100, 250 and 500 m2 and 1 km2 resolution. The measured values at 1 km2 were compared with simulated snow depth from the seNorge snow model (www.senorge.no), which is run on a 1 km2 grid resolution. Results show that the spatial variability decreases as the scale increases. At a scale of about 500 m2 to 1 km2 the variability of snow depth is somewhat larger than that modeled by seNorge. This analysis shows that (1) the regional-scale spatial pattern of snow distribution is well captured by the seNorge model and (2) relatively large differences in snow depth between the measured and modeled values are present.
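A minimal sketch of the aggregation-by-averaging analysis this abstract describes, showing how the spatial variability of snow depth shrinks as the grid is coarsened. The aggregation factors and the `snow_depth` array are hypothetical.

```python
import numpy as np

def std_by_scale(depth, factors):
    """Spatial standard deviation of block-averaged snow depth at a series
    of aggregation factors relative to the base grid."""
    result = {}
    for f in factors:
        ny, nx = depth.shape
        trimmed = depth[:ny - ny % f, :nx - nx % f]   # crop to a multiple of f
        coarse = trimmed.reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))
        result[f] = float(coarse.std())
    return result

# Hypothetical 2 m base grid aggregated toward 1 km (factor 500):
# print(std_by_scale(snow_depth, factors=[1, 5, 25, 125, 500]))
```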


2006 ◽  
Vol 7 (5) ◽  
pp. 880-895 ◽  
Author(s):  
M. J. Tribbeck ◽  
R. J. Gurney ◽  
E. M. Morris

Abstract. Models of snow processes in areas of possible large-scale change need to be site independent and physically based. Here, the accumulation and ablation of the seasonal snow cover beneath a fir canopy has been simulated with a new physically based snow–soil–vegetation–atmosphere transfer scheme (Snow-SVAT) called SNOWCAN. The model was formulated by coupling a canopy optical and thermal radiation model to a physically based multilayer snow model. Simple representations of other forest effects were included. These include the reduction of wind speed and hence turbulent transfer beneath the canopy, sublimation of intercepted snow, and deposition of debris on the surface. This paper tests this new modeling approach fully at a fir site within Reynolds Creek Experimental Watershed, Idaho. Model parameters were determined at an open site and subsequently applied to the fir site. SNOWCAN was evaluated using measurements of snow depth, subcanopy solar and thermal radiation, and snowpack profiles of temperature, density, and grain size. Simulations showed good agreement with observations (e.g., fir site snow depth was estimated over the season with r2 = 0.96), generally to within measurement error. However, the simulated temperature profiles were less accurate after a melt–freeze event, when the temperature discrepancy resulted from underestimation of the rate of liquid water flow and/or the rate of refreeze. This indicates both that the general modeling approach is applicable and that a still more complete representation of liquid water in the snowpack will be important.
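One of the forest effects mentioned, the reduction of wind speed beneath the canopy, is often represented with an exponential within-canopy profile. The sketch below uses that common parameterization with an assumed attenuation coefficient; it is not necessarily the exact form used in SNOWCAN.

```python
import math

def subcanopy_wind(u_canopy_top, z, canopy_height, gamma=2.5):
    """Exponential within-canopy wind profile (a common parameterization,
    not necessarily SNOWCAN's form): u(z) = u(h) * exp(-gamma * (1 - z/h)).
    gamma is an assumed attenuation coefficient."""
    return u_canopy_top * math.exp(-gamma * (1.0 - z / canopy_height))

# 4 m/s at the top of a 15 m fir canopy, evaluated 2 m above the snow surface:
print(round(subcanopy_wind(4.0, z=2.0, canopy_height=15.0), 2))  # ~0.46 m/s
```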


2007 ◽  
Vol 4 (6) ◽  
pp. 27-40 ◽  
Author(s):  
Sahotra Sarkar ◽  
Michael Mayfield ◽  
Susan Cameron ◽  
Trevon Fuller ◽  
Justin Garson

We present a framework for systematic conservation planning for biodiversity with an emphasis on the Indian context. We illustrate the use of this framework by analyzing two data sets consisting of environmental and physical features that serve as surrogates for biodiversity. The aim was to select networks of potential conservation areas (such as reserves and national parks) which include representative fractions of these environmental features or surrogates. The first data set covers the entire subcontinent while the second is limited to the Eastern Himalayas. The environmental surrogates used for the two analyses result in the selection of conservation area networks with different properties. Tentative results indicate that these surrogates are successful in selecting most areas known from fieldwork to have high biodiversity content, such as the broadleaf and subalpine conifer forests of the Eastern Himalayas. However, the place-prioritization algorithm also selected areas not known to be high in biodiversity content, such as the coast of the Arabian Sea. Areas selected to satisfy a 10% target of representation for the complete surrogate set provide representation for 46.03% of the ecoregions in the entire study area. The algorithm selected a disproportionately small number of cells in the Western Ghats, a hotspot of vascular plant endemism. At the same target level, restricted surrogate sets represent 33.33% of the ecoregions in the entire study area and 46.67% of the ecoregions in the Eastern Himalayas. Finally, any more sophisticated use of such systematic methods will require the assembly of Geographical Information Systems (GIS)-based biogeographical data sets on a regional scale. Key words: Indian biodiversity, Eastern Himalayas, complementarity, area prioritization, reserve selection, surrogacy
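Place-prioritization of the kind described here is commonly built on greedy complementarity: each step selects the cell contributing most toward the still-unmet representation targets. The toy sketch below illustrates that general idea only; it is not the authors' software, and the cells and targets are invented.

```python
def greedy_complementarity(cells, targets):
    """Greedy place-prioritization sketch.

    cells:   {cell_id: {surrogate: amount contained}}
    targets: {surrogate: amount that must be represented}
    Repeatedly selects the cell adding the most unmet representation."""
    remaining = dict(targets)
    candidates = dict(cells)
    selected = []
    while candidates and any(v > 0 for v in remaining.values()):
        def gain(c):
            return sum(min(candidates[c].get(s, 0.0), need)
                       for s, need in remaining.items() if need > 0)
        best = max(candidates, key=gain)
        if gain(best) <= 0:
            break  # targets unsatisfiable with the remaining cells
        selected.append(best)
        for s in remaining:
            remaining[s] = max(0.0, remaining[s] - candidates[best].get(s, 0.0))
        del candidates[best]
    return selected

# Toy example with two surrogates and a small representation target:
cells = {"A": {"forest": 5}, "B": {"forest": 2, "alpine": 4}, "C": {"alpine": 1}}
print(greedy_complementarity(cells, {"forest": 4, "alpine": 3}))  # ['B', 'A']
```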


2014 ◽  
Vol 8 (2) ◽  
pp. 329-344 ◽  
Author(s):  
G. A. Sexstone ◽  
S. R. Fassnacht

Abstract. This study uses a combination of field measurements and Natural Resource Conservation Service (NRCS) operational snow data to understand the drivers of snow density and snow water equivalent (SWE) variability at the basin scale (100s to 1000s of km2). Historic snow course snowpack density observations were analyzed within a multiple linear regression snow density model to estimate SWE directly from snow depth measurements. Snow surveys were completed on or about 1 April 2011 and 2012 and combined with NRCS operational measurements to investigate the spatial variability of SWE near peak snow accumulation. Bivariate relations and multiple linear regression models were developed to understand the relation of snow density and SWE with terrain variables (derived using a geographic information system (GIS)). Snow density variability was best explained by day of year, snow depth, UTM Easting, and elevation. Calculation of SWE directly from snow depth measurements using the snow density model showed strong statistical performance, and model validation suggests the model is transferable to independent data within the bounds of the original data set. This pathway of estimating SWE directly from snow depth measurements is useful when evaluating snowpack properties at the basin scale, where many time-consuming measurements of SWE are often not feasible. A comparison with a previously developed snow density model shows that calibrating a snow density model to a specific basin can improve SWE estimation at this scale, and this calibration should be considered for future basin-scale analyses. During both water year (WY) 2011 and 2012, elevation and location (UTM Easting and/or UTM Northing) were the most important SWE model variables, suggesting that orographic precipitation and storm track patterns are likely driving basin-scale SWE variability. Terrain curvature was also shown to be an important variable, but to a lesser extent at the scale of interest.
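A minimal sketch of the regression pathway this abstract describes, using the predictors it names (day of year, snow depth, UTM Easting, elevation) to model density and then converting depth to SWE. The published coefficients are not reproduced; the ordinary-least-squares formulation is an assumption.

```python
import numpy as np

def fit_density_model(doy, depth_m, easting_m, elev_m, density_kg_m3):
    """Least-squares fit of snow density on the predictors named in the
    abstract; the study's fitted coefficients are not reproduced here."""
    X = np.column_stack([np.ones_like(doy), doy, depth_m, easting_m, elev_m])
    coeffs, *_ = np.linalg.lstsq(X, density_kg_m3, rcond=None)
    return coeffs

def swe_from_depth_mm(coeffs, doy, depth_m, easting_m, elev_m):
    """SWE (mm) = modeled density (kg m-3) x depth (m): 1 kg m-2 of
    water is 1 mm of SWE."""
    X = np.column_stack([np.ones_like(doy), doy, depth_m, easting_m, elev_m])
    return (X @ coeffs) * depth_m
```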


2014 ◽  
Vol 18 (7) ◽  
pp. 2695-2709 ◽  
Author(s):  
D. Freudiger ◽  
I. Kohn ◽  
K. Stahl ◽  
M. Weiler

Abstract. In January 2011 a rain-on-snow (RoS) event caused floods in the major river basins in central Europe, i.e. the Rhine, Danube, Weser, Elbe, Oder, and Ems. This event prompted the questions of how to define a RoS event and whether such events have become more frequent. Based on the flood of January 2011 and on other known events of the past, threshold values for potentially flood-generating RoS events were determined. Consequently, events with rainfall of at least 3 mm on a snowpack of at least 10 mm snow water equivalent (SWE), and for which the sum of rainfall and snowmelt contains a minimum of 20% snowmelt, were analysed. RoS events were estimated for the time period 1950–2011 and for the entire study area based on a temperature index snow model driven with a European-scale gridded data set of daily climate (E-OBS data). Frequencies and magnitudes of the modelled events differ depending on the elevation range. When distinguishing alpine, upland, and lowland basins, we found that upland basins are most influenced by RoS events. Overall, the frequency of rainfall increased during winter, while the frequency of snowfall decreased during spring. A decrease in the frequency of RoS events from April to May has been observed in all upland basins since 1990. In contrast, the results suggest an increasing trend in the magnitude and frequency of RoS days in January and February for most of the lowland and upland basins. These results suggest that the flood hazard from RoS events in the early winter season has increased in the medium-elevation mountain ranges of central Europe, especially in the Rhine, Weser, and Elbe river basins.
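The event definition quoted above translates directly into a daily classification rule. The sketch below encodes those thresholds; the variable names are assumptions.

```python
def is_flood_relevant_ros(rain_mm, swe_mm, melt_mm):
    """Rain-on-snow day per the thresholds quoted in the abstract:
    >= 3 mm rain on a snowpack of >= 10 mm SWE, with snowmelt
    contributing at least 20% of the rain + melt sum."""
    water_input = rain_mm + melt_mm
    return (rain_mm >= 3.0 and swe_mm >= 10.0
            and water_input > 0 and melt_mm / water_input >= 0.20)

print(is_flood_relevant_ros(rain_mm=12.0, swe_mm=35.0, melt_mm=6.0))  # True
```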


2019 ◽  
Author(s):  
David F. Hill ◽  
Elizabeth A. Burakowski ◽  
Ryan L. Crumley ◽  
Julia Keon ◽  
J. Michelle Hu ◽  
...  

Abstract. We present a simple method that allows snow depth measurements to be converted to snow water equivalent (SWE) estimates. These estimates are useful to individuals interested in water resources, ecological function, and avalanche forecasting. They can also be assimilated into models to help improve predictions of total water volumes over large regions. The conversion of depth to SWE is particularly valuable since snow depth measurements are far more numerous than costlier and more complex SWE measurements. Our model regresses SWE against snow depth and climatological (30-year normal) values for mean annual precipitation (MAP) and mean February temperature, producing a power-law relationship. Relying on climatological normals rather than weather data for a given year allows our model to be applied at measurement sites lacking a weather station. Separate equations are obtained for the accumulation and the ablation phases of the snowpack, which introduces day of water year (DOY) as an additional variable. The model is validated against a large database of snow pillow measurements and yields a bias in SWE of less than 0.5 mm and a root-mean-square error (RMSE) in SWE of approximately 65 mm. When the errors are investigated on a station-by-station basis, the average RMSE is about 5% of the MAP at each station. The model is additionally validated against a completely independent set of data from the northeast United States. Finally, the results are compared with other models for bulk density that have varying degrees of complexity and that were built in multiple geographic regions. The results show that the model described in this paper has the best performance for the validation data set.
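The abstract specifies the model's form (a power law in snow depth and climate normals, with separate accumulation and ablation fits and day of water year as a variable), so a sketch of that shape is possible, but every coefficient below is a placeholder for illustration only, not the published fit.

```python
import math

def swe_power_law_mm(depth_mm, map_mm, t_feb_c, doy, accumulation=True):
    """Illustrative power-law form only: separate accumulation/ablation
    coefficient sets, climate normals (MAP, mean February temperature),
    and day of water year. All numbers are placeholders, NOT the fit."""
    a, b = (0.90, 0.20) if accumulation else (1.05, 0.30)  # placeholder exponents
    climate = (map_mm ** b) * math.exp(0.02 * t_feb_c)     # placeholder T term
    season = 1.0 if accumulation else 1.0 + 0.001 * doy    # placeholder DOY term
    return 0.05 * (depth_mm ** a) * climate * season       # placeholder scale

# e.g. 1200 mm of snow, MAP 800 mm, mean Feb temp -5 C, day 120:
print(round(swe_power_law_mm(1200.0, 800.0, -5.0, 120), 1))
```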


BMJ Open ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. e040778
Author(s):  
Vineet Kumar Kamal ◽  
Ravindra Mohan Pandey ◽  
Deepak Agrawal

Objective: To develop and validate a simple risk score chart to estimate the probability of poor outcomes in patients with severe head injury (HI). Design: Retrospective. Setting: Level-1, government-funded trauma centre, India. Participants: Patients with severe HI admitted to the neurosurgery intensive care unit during 19 May 2010–31 December 2011 (n=946) for the model development and, further, data from the same centre with the same inclusion criteria from 1 January 2012 to 31 July 2012 (n=284) for the external validation of the model. Outcome(s): In-hospital mortality and unfavourable outcome at 6 months. Results: A total of 39.5% and 70.7% of patients had in-hospital mortality and unfavourable outcome, respectively, in the development data set. Multivariable logistic regression analysis of routinely collected admission characteristics revealed that the independent predictors of in-hospital mortality were age (51–60, >60 years), motor score (1, 2, 4), pupillary reactivity (none), presence of hypotension, effaced basal cisterns, and traumatic subarachnoid haemorrhage/intraventricular haematoma, and the independent predictors of unfavourable outcome were age (41–50, 51–60, >60 years), motor score (1–4), pupillary reactivity (none, one), unequal limb movement, and presence of hypotension, as the 95% confidence intervals (CIs) of their odds ratios (ORs) did not contain one. The discriminative ability (area under the receiver operating characteristic curve (95% CI)) of the score chart for in-hospital mortality and the 6-month outcome was excellent in the development data set (0.890 (0.867 to 0.912) and 0.894 (0.869 to 0.918), respectively), in the internal validation data set using the bootstrap resampling method (0.889 (0.867 to 0.909) and 0.893 (0.867 to 0.915), respectively), and in the external validation data set (0.871 (0.825 to 0.916) and 0.887 (0.842 to 0.932), respectively). Calibration showed good agreement between observed outcome rates and predicted risks in the development and external validation data sets (p>0.05). Conclusion: For clinical decision making, these score charts can be used to predict outcomes in new patients with severe HI in India and similar settings.
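Score charts of this kind are conventionally built by scaling and rounding the logistic-regression log odds ratios into integer points. The sketch below shows that standard construction with made-up odds ratios and intercept, not the values fitted in this study.

```python
import math

def chart_points(odds_ratios, scale=0.5):
    """Standard score-chart construction: points = round(ln(OR) / scale).
    The ORs passed in below are invented for illustration."""
    return {k: round(math.log(orr) / scale) for k, orr in odds_ratios.items()}

def predicted_risk(total_points, intercept=-3.0, scale=0.5):
    """Map a patient's summed chart points back to a probability with the
    logistic function (placeholder intercept)."""
    return 1.0 / (1.0 + math.exp(-(intercept + scale * total_points)))

# Hypothetical predictors mirroring those named in the abstract:
demo = chart_points({"age >60": 2.5, "motor score 1-2": 3.2,
                     "pupils non-reactive": 4.1, "hypotension": 2.0})
print(demo, round(predicted_risk(sum(demo.values())), 3))
```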


Author(s):  
David McCallen ◽  
Houjun Tang ◽  
Suiwen Wu ◽  
Eric Eckert ◽  
Junfei Huang ◽  
...  

Accurate understanding and quantification of the risk to critical infrastructure posed by future large earthquakes continues to be a very challenging problem. Earthquake phenomena are quite complex, and traditional approaches to predicting ground motions for future earthquake events have historically been empirically based, whereby measured ground motion data from historical earthquakes are homogenized into a common data set and the ground motions for future postulated earthquakes are probabilistically derived based on the historical observations. This procedure has significant, well-recognized limitations, principally due to the fact that earthquake ground motions tend to be dictated by the particular earthquake fault rupture and geologic conditions at a given site and are thus very site-specific. Historical earthquakes recorded at different locations are often only marginally representative. There has been strong and increasing interest in utilizing large-scale, physics-based regional simulations to advance the ability to accurately predict ground motions and associated infrastructure response. However, the computational requirements of simulations at frequencies of engineering interest have proven a major barrier to employing regional-scale simulations. In a U.S. Department of Energy Exascale Computing Initiative project, development of the EQSIM application is underway to create a framework for fault-to-structure simulations. This framework is being prepared to exploit emerging exascale platforms in order to overcome computational limitations. This article presents the essential methodology and computational workflow employed in EQSIM to couple regional-scale geophysics models with local soil-structure models to achieve a fully integrated, complete fault-to-structure simulation framework. The computational workflow, accuracy and performance of the coupling methodology are illustrated through example fault-to-structure simulations.
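One elementary step in any such regional-to-local coupling workflow is handing regional-model ground motions to the local soil-structure model on its own (usually finer) time step. The sketch below shows a generic one-way version of that handoff; it is not EQSIM's actual interface.

```python
import numpy as np

def resample_interface_motion(t_regional, accel, dt_local):
    """Generic one-way coupling step (a sketch, not EQSIM's interface):
    resample ground acceleration saved at a regional-model output station
    onto the finer time step of a local soil-structure model."""
    t_local = np.arange(t_regional[0], t_regional[-1], dt_local)
    return t_local, np.interp(t_local, t_regional, accel)

# e.g. regional output at 100 Hz driving a structural model at 2000 Hz:
# t2, a2 = resample_interface_motion(t, a, dt_local=0.0005)
```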


Geosciences ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 35
Author(s):  
Luca Schilirò ◽  
José Cepeda ◽  
Graziella Devoli ◽  
Luca Piciullo

In Norway, shallow landslides are generally triggered by intense rainfall and/or snowmelt events. However, the interactions of hydrometeorological processes (e.g., precipitation and snowmelt) acting at different time scales, and the local variations of the terrain conditions (e.g., thickness of the surficial cover), are complex and often unknown. With the aim of better defining the triggering conditions of shallow landslides at a regional scale, we used the physically based model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope stability) in an area located in the upper Gudbrandsdalen valley in south-eastern Norway. We performed numerical simulations to reconstruct two scenarios that triggered many landslides in the study area, on 10 June 2011 and 22 May 2013. A large part of the work was dedicated to the parameterization of the numerical model. The initial soil-hydraulic conditions and the spatial variation of the surficial cover thickness were evaluated using different methods. To fully evaluate the accuracy of the model, ROC (Receiver Operating Characteristic) curves were obtained by comparing the safety factor maps with the source areas in the two periods of analysis. The results of the numerical simulations show the high susceptibility of the study area to the occurrence of shallow landslides and emphasize the importance of proper model calibration for improving the reliability of the results.
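The ROC evaluation described here scores each grid cell by its simulated factor of safety (lower FS meaning higher susceptibility) against the mapped source areas. A minimal sketch, assuming NumPy arrays and scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def fs_map_roc(factor_of_safety, source_mask):
    """ROC of a factor-of-safety map against observed landslide sources
    (source_mask: 1 = mapped source cell, 0 = stable). -FS is used as the
    score so that lower FS ranks as more susceptible."""
    valid = ~np.isnan(factor_of_safety)
    fpr, tpr, _ = roc_curve(source_mask[valid].ravel(),
                            -factor_of_safety[valid].ravel())
    return fpr, tpr, auc(fpr, tpr)
```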

