Parameterization of the Spatial Variability of Rain for Large-Scale Models and Remote Sensing

2015 ◽  
Vol 54 (10) ◽  
pp. 2027-2046 ◽  
Author(s):  
Z. J. Lebo ◽  
C. R. Williams ◽  
G. Feingold ◽  
V. E. Larson

Abstract. The spatial variability of rain rate R is evaluated by using both radar observations and cloud-resolving model output, focusing on the Tropical Warm Pool–International Cloud Experiment (TWP-ICE) period. In general, the model-predicted rain-rate probability distributions agree well with those estimated from the radar data across a wide range of spatial scales. The spatial variability in R, which is defined according to the standard deviation of R (for R greater than a predefined threshold Rmin) σ(R), is found to vary according to both the average of R over a given footprint μ(R) and the footprint size or averaging scale Δ. There is good agreement between area-averaged model output and radar data at a height of 2.5 km. The model output at the surface is used to construct a scale-dependent parameterization of σ(R) as a function of μ(R) and Δ that can be readily implemented into large-scale numerical models. The variability in both the rainwater mixing ratio qr and R as a function of height is also explored. From the statistical analysis, a scale- and height-dependent formulation for the spatial variability of both qr and R is provided for the analyzed tropical scenario. Last, it is shown how this parameterization can be used to assist in constraining parameters that are often used to describe the surface rain-rate distribution.
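The abstract does not reproduce the fitted functional form. As a sketch of how such a scale-dependent relation is typically structured, consider a power law in μ(R) with a scale-dependent prefactor; all coefficients below are illustrative placeholders, not the paper's fitted values:

```python
def sigma_rain(mu_r, delta_km, a0=0.7, alpha=0.2, b=0.85):
    """Hypothetical scale-dependent parameterization of the rain-rate
    standard deviation, sigma(R) = a(Delta) * mu(R)**b with
    a(Delta) = a0 * Delta**alpha.

    mu_r     : footprint-mean rain rate (mm/h), conditioned on R > Rmin
    delta_km : footprint size / averaging scale Delta (km)

    The coefficients a0, alpha and b are placeholders, not fitted values.
    """
    return a0 * delta_km**alpha * mu_r**b
```

In this form the variability grows with both the footprint-mean rain rate and the averaging scale, which is the qualitative behaviour the abstract describes.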

2019 ◽  
Vol 862 ◽  
pp. 672-695 ◽  
Author(s):  
Timour Radko

A theoretical model is developed which illustrates the dynamics of layering instability, frequently realized in ocean regions with active fingering convection. Thermohaline layering is driven by the interplay between large-scale stratification and primary double-diffusive instabilities operating at the microscale – temporal and spatial scales set by molecular dissipation. This interaction is described by a combination of direct numerical simulations and an asymptotic multiscale model. The multiscale theory is used to formulate explicit and dynamically consistent flux laws, which can be readily implemented in large-scale analytical and numerical models. Most previous theoretical investigations of thermohaline layering were based on the flux-gradient model, which assumes that the vertical transport of density components is uniquely determined by their local background gradients. The key deficiency of this approach is that layering instabilities predicted by the flux-gradient model have unbounded growth rates at high wavenumbers. The resulting ultraviolet catastrophe precludes the analysis of such basic properties of layering instability as its preferred wavelength or the maximal growth rate. The multiscale model, on the other hand, incorporates hyperdiffusion terms that stabilize short layering modes. Overall, the presented theory carries the triple advantage of (i) offering an explicit description of the interaction between microstructure and layering modes, (ii) taking into account the influence of non-uniform stratification on microstructure-driven mixing, and (iii) avoiding unphysical behaviour of the flux-gradient laws at small scales. While the multiscale approach to the parametrization of time-dependent small-scale processes is illustrated here on the example of fingering convection, we expect the proposed technique to be readily adaptable to a wide range of applications.
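The contrast between the flux-gradient and multiscale pictures can be illustrated with a toy dispersion relation (the coefficients A and B below are placeholders, not values derived in the paper): a negative effective diffusivity gives unbounded growth λ(k) = A k², while a hyperdiffusive correction −B k⁴ damps short modes and selects a fastest-growing wavenumber near √(A/2B):

```python
import numpy as np

A, B = 1.0, 0.05  # placeholder coefficients, not values from the paper

def growth_flux_gradient(k):
    # flux-gradient model with negative effective diffusivity: growth
    # increases without bound at high wavenumber (ultraviolet catastrophe)
    return A * k**2

def growth_hyperdiffusive(k):
    # multiscale model: a hyperdiffusion term stabilizes short layering modes
    return A * k**2 - B * k**4

k = np.linspace(0.01, 6.0, 600)
k_star = k[np.argmax(growth_hyperdiffusive(k))]  # preferred wavenumber ~ sqrt(A/(2B))
```

Only the hyperdiffusive curve has a finite maximal growth rate and hence a preferred layer wavelength; the flux-gradient curve has neither, which is the deficiency the abstract describes.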


2021 ◽  
Author(s):  
Giulia Mazzotti ◽  
Clare Webster ◽  
Richard Essery ◽  
Johanna Malle ◽  
Tobias Jonas

Forest snow cover dynamics affect hydrological regimes, ecosystem processes, and climate feedbacks, and thus need to be captured by model applications that operate across a wide range of spatial scales. At large scales and coarse model resolutions, the high spatial variability of the processes shaping forest snow cover evolution creates a major modelling challenge. Variability of canopy-snow interactions is determined by heterogeneous canopy structure and can only be explicitly resolved with hyper-resolution models (<5 m).

Here, we address this challenge with model upscaling experiments with the forest snow model FSM2, using hyper-resolution simulations as an intermediary between experimental data and coarse-resolution simulations. When run at 2-m resolution, FSM2 is shown to capture the spatial variability of forest snow dynamics with a high level of detail. Its accurate performance is verified at the level of individual energy balance components based on extensive, spatially distributed sub-canopy measurements of micrometeorological and snow variables, obtained with mobile multi-sensor platforms. Results from hyper-resolution simulations over a 150,000 m² domain are then compared to spatially lumped, coarse-resolution runs, where 50 m × 50 m grid cells are represented by one model run only. For the spatially lumped simulations, we evaluate alternative upscaling strategies, aiming to explore the representation of forest snow processes at model resolutions coarser than the spatial scales at which these processes vary and interact.

Different upscaling strategies exhibited large discrepancies in the simulated (1) distribution of snow water equivalent at peak of winter and (2) timing of snow disappearance. Our results indicate that detailed canopy structure metrics, as included in hyper-resolution runs, are necessary to capture the spatial variability of forest snow processes even at coarser resolutions. They further demonstrate the relevance of accounting for unresolved sub-grid variability in snowmelt calculations even at relatively small spatial aggregation scales. By identifying important model features that allow coarse-resolution simulations to approximate spatially averaged results of corresponding hyper-resolution simulations, this work provides recommendations for modelling forest snow processes in medium- to large-scale applications.
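Why unresolved sub-grid variability matters even at 50-m aggregation can be sketched with a toy nonlinear canopy response (synthetic numbers, not FSM2 physics): applying a nonlinearity per 2-m pixel and then averaging differs systematically from applying it once to the cell-mean canopy.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 2-m canopy cover fractions within one 50 m x 50 m grid cell
cell = rng.uniform(0.0, 1.0, size=(25, 25))

def melt_factor(canopy):
    """Hypothetical nonlinear response of melt rate to canopy cover."""
    return (1.0 - canopy) ** 2

# hyper-resolution route: apply the nonlinearity per 2-m pixel, then average
fine_then_avg = melt_factor(cell).mean()
# spatially lumped route: apply it once to the cell-mean canopy
avg_then_coarse = melt_factor(cell.mean())
```

Because the toy response is convex, the lumped route underestimates the cell-mean value (Jensen's inequality); this is the kind of bias an upscaling strategy must account for.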


2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

ABSTRACT. FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved both in terms of a better understanding of THG processes and more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns in a wide range of THG conditions enhances the confidence in their prediction capabilities. Numerical THG models of heating and hydration experiments performed on small-scale lab cells provide excellent results for temperatures, water inflow and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of measured data. In general, the fit of concentrations of dissolved species is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model reproduce simultaneously the observed water inflows and bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded.
The TH model of the “in situ” test is based on the same bentonite TH parameters and assumptions as for the “mock-up” test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model had been calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS.


2018 ◽  
Vol 610 ◽  
pp. A84 ◽  
Author(s):  
Iker S. Requerey ◽  
Basilio Ruiz Cobo ◽  
Milan Gošić ◽  
Luis R. Bellot Rubio

Context. Photospheric vortex flows are thought to play a key role in the evolution of magnetic fields. Recent studies show that these swirling motions are ubiquitous in the solar surface convection and occur in a wide range of temporal and spatial scales. Their interplay with magnetic fields is poorly characterized, however. Aims. We study the relation between a persistent photospheric vortex flow and the evolution of a network magnetic element at a supergranular vertex. Methods. We used long-duration sequences of continuum intensity images acquired with Hinode and the local correlation-tracking method to derive the horizontal photospheric flows. Supergranular cells are detected as large-scale divergence structures in the flow maps. At their vertices, and cospatial with network magnetic elements, the velocity flows converge on a central point. Results. One of these converging flows is observed as a vortex during the whole 24 h time series. It consists of three consecutive vortices that appear nearly at the same location. At their core, a network magnetic element is also detected. Its evolution is strongly correlated to that of the vortices. The magnetic feature is concentrated and evacuated when it is caught by the vortices and is weakened and fragmented after the whirls disappear. Conclusions. This evolutionary behavior supports the picture presented previously, where a small flux tube becomes stable when it is surrounded by a vortex flow.
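The flow diagnostics described in the Methods (divergence maps for supergranule detection, vorticity for vortex identification) can be sketched on a synthetic velocity field; this illustrates the standard finite-difference diagnostics only, not the authors' local correlation-tracking pipeline:

```python
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n] - n / 2.0
# synthetic solid-body-like vortex centred on the map (not Hinode data)
vx = -y / n
vy = x / n

# supergranular cells appear as large-scale positive-divergence
# structures; vortices as extrema of the vertical vorticity
divergence = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
vorticity = np.gradient(vy, axis=1) - np.gradient(vx, axis=0)
```

For this purely rotational test field the divergence vanishes while the vorticity is uniform and positive, the signature a persistent photospheric vortex would leave in such maps.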


2020 ◽  
Author(s):  
Philipp Eichheimer ◽  
Marcel Thielmann ◽  
Wakana Fujita ◽  
Gregor J. Golabek ◽  
Michihiko Nakamura ◽  
...  

Abstract. Fluid flow on different scales is of interest for several Earth science disciplines like petrophysics, hydrogeology and volcanology. To parameterize fluid flow in large-scale numerical simulations (e.g. groundwater and volcanic systems), flow properties on the microscale need to be considered. For this purpose, experimental and numerical investigations of flow through porous media over a wide range of porosities are necessary. In the present study we sinter glass bead media with various porosities. The microstructure, namely effective porosity and effective specific surface, is investigated using image processing. We determine flow properties like hydraulic tortuosity and permeability using both experimental measurements and numerical simulations. By fitting microstructural and flow properties to porosity, we obtain a modified Kozeny-Carman equation for isotropic low-porosity media that can be used to simulate permeability in large-scale numerical models. To verify the modified Kozeny-Carman equation we compare it to the computed and measured permeability values.
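For reference, a minimal sketch of the classical Kozeny-Carman relation that the study modifies (the empirical constant below is the textbook packed-sphere value, not the paper's fitted coefficient):

```python
def kozeny_carman(phi, d, c=180.0):
    """Classical Kozeny-Carman permeability (m^2) for a granular medium:
        k = phi**3 * d**2 / (c * (1 - phi)**2)
    with porosity phi, grain diameter d (m), and empirical constant c
    (180 is the textbook value for packed spheres; the paper's modified
    fit uses coefficients not reproduced in the abstract).
    """
    return phi**3 * d**2 / (c * (1.0 - phi) ** 2)
```

Permeability increases steeply with porosity in this relation, which is why a low-porosity correction of the kind the authors fit is needed for sintered media.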


2019 ◽  
Vol 867 ◽  
pp. 146-194 ◽  
Author(s):  
G. L. Richard ◽  
A. Duran ◽  
B. Fabrèges

We derive a two-dimensional depth-averaged model for coastal waves with both dispersive and dissipative effects. A tensor quantity called enstrophy models the subdepth large-scale turbulence, including its anisotropic character, and is a source of vorticity of the average flow. The small-scale turbulence is modelled through a turbulent-viscosity hypothesis. This fully nonlinear model has equivalent dispersive properties to the Green–Naghdi equations and is treated, both for the optimization of these properties and for the numerical resolution, with the same techniques that are used for the Green–Naghdi system. The model equations are solved with a discontinuous Galerkin discretization based on a decoupling between the hyperbolic and non-hydrostatic parts of the system. The predictions of the model are compared to experimental data in a wide range of physical conditions. Simulations were run in one-dimensional and two-dimensional cases, including run-up and run-down on beaches, non-trivial topographies, and wave trains over a bar or propagation around an island or a reef. A very good agreement is reached in every case, validating the predictive empirical laws for the parameters of the model. These comparisons confirm the efficiency of the present strategy, highlighting the enstrophy as a robust and reliable tool to describe wave breaking even in a two-dimensional context. Compared with existing depth-averaged models, this approach is numerically robust and adds more physical effects without a significant increase in numerical complexity.


2006 ◽  
Vol 14 (02) ◽  
pp. 275-293 ◽  
Author(s):  
CHRISTOPHER S. OEHMEN ◽  
TJERK P. STRAATSMA ◽  
GORDON A. ANDERSON ◽  
GALYA ORR ◽  
BOBBIE-JO M. WEBB-ROBERTSON ◽  
...  

The future of biology will be increasingly driven by the fundamental paradigm shift from hypothesis-driven research to data-driven discovery research employing the growing volume of biological data coupled with experimental testing of new discoveries. But hardware and software limitations in the current workflow infrastructure make it impossible or intractable to use real data from disparate sources for large-scale biological research. We identify key technological developments needed to enable this paradigm shift involving (1) the ability to store and manage extremely large datasets that are dispersed over a wide geographical area, (2) development of novel analysis and visualization tools that are capable of operating on enormous data resources without overwhelming researchers with unusable information, and (3) formalisms for integrating mathematical models of biosystems from the molecular level to the organism population level. This will require the development of algorithms and tools that efficiently utilize high-performance compute power and large storage infrastructures. The end result will be the ability of a researcher to integrate complex data from many different sources with simulations to analyze a given system at a wide range of temporal and spatial scales in a single conceptual model.


2020 ◽  
Author(s):  
Shuyi Chen ◽  
Brandon Kerns

Precipitation is a highly complex, multiscale entity in the global weather and climate system. It is affected by both global and local circulations over a wide range of time scales from hours to weeks and beyond. It is also an important measure of the water and energy cycle in climate models. To better understand the physical processes controlling precipitation in climate models, we need to evaluate precipitation not only in terms of its global climatological distribution but also in terms of its multiscale variability in time and space.

This study presents a new set of metrics to quantify characteristics of global precipitation using 20 years of the TRMM-GPM Multisatellite Precipitation Analysis (TMPA) data from June 1998 to May 2018 over the global tropics-midlatitudes (50°S–50°N) at 3-hourly and 0.25-degree resolutions. We developed a method to identify large-scale precipitation objects (LPOs) using a temporal-spatial filter and then track the LPOs in time, namely the Large-scale Precipitation Tracking systems (LPTs), as described in Kerns and Chen (2016, 2020, JGR-Atmos). The most unique feature of this method is that it can distinguish large-scale precipitation organized by, for example, monsoons and the Madden-Julian Oscillation (MJO), from that of mesoscale and synoptic-scale weather systems, as well as from relatively stationary, topographically and diurnally forced local precipitation. The new precipitation metrics based on the satellite observations are used to evaluate climate models. Early results show that most models overproduce precipitation over land in non-LPTs and underestimate large-scale precipitation (LPTs) over the oceans compared with the observations. For example, the MJO contributes up to 40-50% of the observed annual precipitation over the Indo-Pacific warm pool region, a contribution that is usually much smaller in the models because of their inability to represent the MJO dynamics. Furthermore, the spatial variability of precipitation associated with ENSO is more pronounced in the observations than in the models.
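The LPO identification step can be sketched as a temporal-spatial low-pass filter followed by thresholding; the filter widths and the rain threshold below are placeholders, not the published settings of Kerns and Chen, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 3-hourly rain rates (mm/day) on a (time, lat, lon) grid
rain = rng.gamma(shape=0.3, scale=10.0, size=(24, 40, 60))

def running_mean(field, width, axis):
    """Centred running mean of the given width along one axis."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="same"), axis, field)

# temporal-spatial low-pass filter isolates the large-scale envelope
smooth = rain
for axis, width in [(0, 9), (1, 5), (2, 5)]:  # time, lat, lon widths
    smooth = running_mean(smooth, width, axis)

# contiguous regions above a threshold become LPO candidates, which
# would then be linked through time to build the LPTs
lpo_mask = smooth > 3.0
```

The filtering suppresses mesoscale and synoptic-scale noise, so only precipitation organized on large temporal-spatial scales survives the threshold; that separation is what lets the method isolate monsoon- and MJO-organized rainfall.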


2005 ◽  
Vol 18 (23) ◽  
pp. 5110-5124 ◽  
Author(s):  
Lazaros Oreopoulos ◽  
Robert F. Cahalan

Abstract Two full months (July 2003 and January 2004) of Moderate Resolution Imaging Spectroradiometer (MODIS) Atmosphere Level-3 data from the Terra and Aqua satellites are analyzed in order to characterize the horizontal variability of vertically integrated cloud optical thickness (“cloud inhomogeneity”) at global scales. The monthly climatology of cloud inhomogeneity is expressed in terms of standard parameters, initially calculated for each day of the month at spatial scales of 1° × 1° and subsequently averaged at monthly, zonal, and global scales. Geographical, diurnal, and seasonal changes of inhomogeneity parameters are examined separately for liquid and ice phases and separately over land and ocean. It is found that cloud inhomogeneity is overall weaker in summer than in winter. For liquid clouds, it is also consistently weaker for local morning than local afternoon and over land than ocean. Cloud inhomogeneity is comparable for liquid and ice clouds on a global scale, but with stronger spatial and temporal variations for the ice phase, and exhibits an average tendency to be weaker for near-overcast or overcast grid points of both phases. Depending on cloud phase, hemisphere, surface type, season, and time of day, hemispheric means of the inhomogeneity parameter ν (roughly the square of the ratio of optical thickness mean to standard deviation) have a wide range of ∼1.7 to 4, while for the inhomogeneity parameter χ (the ratio of the logarithmic to linear mean) the range is from ∼0.65 to 0.8. The results demonstrate that the MODIS Level-3 dataset is suitable for studying various aspects of cloud inhomogeneity and may prove invaluable for validating future cloud schemes in large-scale models capable of predicting subgrid variability.
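The two inhomogeneity parameters as defined in the abstract can be computed directly from a sample of optical thicknesses (toy values, not MODIS retrievals):

```python
import numpy as np

# toy sample of grid-point cloud optical thicknesses
tau = np.array([4.0, 7.0, 9.0, 12.0, 18.0])

# nu: roughly the square of the ratio of the optical-thickness mean
# to its standard deviation (larger nu = more homogeneous cloud)
nu = (tau.mean() / tau.std()) ** 2

# chi: ratio of the logarithmic (geometric) mean to the linear mean;
# chi = 1 for a homogeneous cloud and decreases with inhomogeneity
chi = np.exp(np.log(tau).mean()) / tau.mean()
```

For this sample nu is about 4.4 and chi about 0.89, i.e. within the ∼1.7-4 and ∼0.65-0.8 hemispheric ranges only roughly, as expected for an arbitrary toy sample.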


2012 ◽  
Vol 51 (4) ◽  
pp. 763-779 ◽  
Author(s):  
Terry J. Schuur ◽  
Hyang-Suk Park ◽  
Alexander V. Ryzhkov ◽  
Heather D. Reeves

Abstract. A new hydrometeor classification algorithm that combines thermodynamic output from the Rapid Update Cycle (RUC) model with polarimetric radar observations is introduced. The algorithm improves upon existing classification techniques that rely solely on polarimetric radar observations by using thermodynamic information to help diagnose microphysical processes (such as melting or refreezing) that might occur aloft. This added information is especially important for transitional weather events for which past studies have shown radar-only techniques to be deficient. The algorithm first uses vertical profiles of wet-bulb temperature derived from the RUC model output to provide a background precipitation classification type. According to a set of empirical rules, polarimetric radar data are then used to refine precipitation-type categories when the observations are found to be inconsistent with the background classification. Using data from the polarimetric KOUN Weather Surveillance Radar-1988 Doppler (WSR-88D) located in Norman, Oklahoma, the algorithm is tested on a transitional winter-storm event that produced a combination of rain, freezing rain, ice pellets, and snow as it passed over central Oklahoma on 30 November 2006. Examples are presented in which the presence of a radar bright band (suggesting an elevated warm layer) is observed immediately above a background classification of dry snow (suggesting the absence of an elevated warm layer in the model output). Overall, the results demonstrate the potential benefits of combining polarimetric radar data with thermodynamic information from numerical models, with model output providing widespread coverage and polarimetric radar data providing an observation-based modification of the derived precipitation type at closer ranges.
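The two-step logic of the algorithm (a background type from the wet-bulb profile, then radar-based refinement) can be sketched with simplified rules; the thresholds and decision tree below are illustrative only, not the operational RUC/polarimetric rules:

```python
def background_ptype(wet_bulb_profile_c):
    """Background precipitation type from a surface-upward profile of
    wet-bulb temperature (deg C).  Simplified illustration of the first
    step; the thresholds are placeholders, not the RUC-based rules."""
    surface = wet_bulb_profile_c[0]
    warm_layer_aloft = any(t > 0.0 for t in wet_bulb_profile_c[1:])
    if surface > 0.0:
        return "rain"
    if warm_layer_aloft:
        # melting aloft, then refreezing below the elevated warm layer
        return "freezing rain" if surface > -5.0 else "ice pellets"
    return "snow"

def refine_with_radar(background, bright_band_detected):
    """Second step: radar observations override the background type when
    inconsistent with it, e.g. a bright band implies an elevated warm
    layer that the model profile missed."""
    if background == "snow" and bright_band_detected:
        return "freezing rain"
    return background
```

The bright-band override corresponds to the example in the abstract, where radar evidence of melting aloft contradicts a background classification of dry snow.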

