Abstract
Snow is a fundamental component of global and regional water budgets, particularly in mountainous areas and in downstream regions that rely on snowmelt for water resources. Land surface models (LSMs) are commonly used to develop spatially distributed estimates of snow water equivalent (SWE) and runoff. However, LSMs are limited by uncertainties in model physics and parameters, among other factors. In this study, we describe the use of model calibration tools to improve snow simulations within the Noah-MP LSM as the first step in an Observing System Simulation Experiment (OSSE). Noah-MP is calibrated against the University of Arizona (UA) SWE product over a Western Colorado domain. With spatially varying calibrated parameters, we run calibrated and default Noah-MP simulations for water years 2010-2020. Evaluating both simulations against the UA dataset, we show that calibration reduces the domain-averaged temporal RMSE and bias for snow depth from 0.15 to 0.13 m and from -0.036 to -0.0023 m, respectively, and improves the timing of snow ablation. The improved snow simulation also improves estimates of model-simulated runoff in four of six study basins, though the improvement is statistically significant in only one. Spatially distributed Noah-MP snow parameters perform better than the spatially uniform default values. We demonstrate that calibrating parameters that control snow albedo and rain-snow partitioning, among other processes, is a necessary step toward creating a nature run that reasonably approximates true snow conditions for the OSSE. Additionally, the inclusion of a snowfall scaling term can address biases in precipitation from meteorological forcing datasets, further improving the utility of LSMs for generating reliable spatiotemporal estimates of snow.