Elucidating Model Inadequacies in a Cloud Parameterization by Use of an Ensemble-Based Calibration Framework

2007 ◽  
Vol 135 (12) ◽  
pp. 4077-4096 ◽  
Author(s):  
Jean-Christophe Golaz ◽  
Vincent E. Larson ◽  
James A. Hansen ◽  
David P. Schanen ◽  
Brian M. Griffin

Abstract Every cloud parameterization contains structural model errors. The source of these errors is difficult to pinpoint because cloud parameterizations contain nonlinearities and feedbacks. To elucidate these model inadequacies, this paper uses a general-purpose ensemble parameter estimation technique. In principle, the technique is applicable to any parameterization that contains a number of adjustable coefficients. It optimizes or calibrates parameter values by attempting to match predicted fields to reference datasets. Rather than striving to find the single best set of parameter values, the output is instead an ensemble of parameter sets. This ensemble provides a wealth of information. In particular, it can help uncover model deficiencies and structural errors that might not otherwise be easily revealed. The calibration technique is applied to an existing single-column model (SCM) that parameterizes boundary layer clouds. The SCM is a higher-order turbulence closure model. It is closed using a multivariate probability density function (PDF) that represents subgrid-scale variability. Reference datasets are provided by large-eddy simulations (LES) of a variety of cloudy boundary layers. The calibration technique locates some model errors in the SCM. As a result, empirical modifications are suggested. These modifications are evaluated with independent datasets and found to lead to an overall improvement in the SCM’s performance.
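The ensemble calibration idea described in this abstract can be sketched in a few lines. The toy model, priors, and selection rule below are purely illustrative stand-ins for the SCM and the paper's actual estimation technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "parameterization": predicts a field from two adjustable coefficients.
# (A stand-in for an SCM forecast; the real model is far more complex.)
def toy_model(c1, c2, x):
    return c1 * np.exp(-c2 * x)

x = np.linspace(0.0, 5.0, 50)
# Synthetic "reference dataset" (standing in for LES output), generated
# with known true coefficients plus noise.
reference = toy_model(1.5, 0.7, x) + rng.normal(0.0, 0.05, x.size)

# Draw an ensemble of parameter sets from broad priors.
n_ens = 5000
c1_ens = rng.uniform(0.5, 3.0, n_ens)
c2_ens = rng.uniform(0.1, 2.0, n_ens)

# Score each member by its misfit to the reference dataset.
misfit = np.array([np.mean((toy_model(a, b, x) - reference) ** 2)
                   for a, b in zip(c1_ens, c2_ens)])

# Keep the best-fitting subset as the calibrated ensemble, rather than a
# single "best" parameter set; its spread carries information about
# parameter uncertainty and possible structural error.
keep = np.argsort(misfit)[: n_ens // 100]
calibrated = np.column_stack([c1_ens[keep], c2_ens[keep]])
print(calibrated.mean(axis=0))  # should lie near the true values
```

Inspecting where the calibrated ensemble clusters, and where no parameter choice fits well, is what points to structural errors rather than mere mis-tuning.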

2019 ◽  
Author(s):  
Doug McNeall ◽  
Jonny Williams ◽  
Richard Betts ◽  
Ben Booth ◽  
Peter Challenor ◽  
...  

Abstract. A key challenge in developing flagship climate model configurations is the process of setting uncertain input parameters at values that lead to credible climate simulations. Setting these parameters traditionally relies heavily on insights from those involved in parameterisation of the underlying climate processes. Given the many degrees of freedom and computational expense involved in evaluating such a selection, this can be imperfect, leaving open questions about whether any subsequent simulated biases result from mis-set parameters or wider structural model errors (such as missing or partially parameterised processes). Here we present a complementary approach to identifying plausible climate model parameters, with a method of bias correcting subcomponents of a climate model using a Gaussian process emulator that allows credible values of model input parameters to be found even in the presence of a significant model bias. A previous study (McNeall et al., 2016) found that a climate model had to be run using land surface input parameter values from very different, almost non-overlapping parts of parameter space to satisfactorily simulate the Amazon and other forests respectively. As the forest fraction of modelled non-Amazon forests was broadly correct at the default parameter settings and the Amazon too low, that study suggested that the problem most likely lay in the model's treatment of non-plant processes in the Amazon region. This might be due to (1) modelling errors such as missing deep-rooting in the Amazon in the land surface component of the climate model, (2) a warm-dry bias in the Amazon climate of the model, or a combination of both. In this study, we bias correct the climate of the Amazon in a climate model using an augmented Gaussian process emulator, where temperature and precipitation, variables usually regarded as model outputs, are treated as model inputs alongside regular land surface input parameters.
A sensitivity analysis finds that the forest fraction is nearly as sensitive to climate variables as it is to changes in its land surface parameter values. Bias correcting the climate in the Amazon region using the emulator corrects the forest fraction to tolerable levels in the Amazon at many candidates for land surface input parameter values, including the default ones, and increases the valid input space shared with the other forests. We need not invoke a structural model error in the land surface model, beyond having too dry and hot a climate in the Amazon region. The augmented emulator allows bias correction of an ensemble of climate model runs and reduces the risk of choosing poor parameter values because of an error in a subcomponent of the model. We discuss the potential of the augmented emulator to act as a translational layer between model subcomponents, simplifying the process of model tuning when there are compensating errors, and helping model developers discover and prioritise model errors to target.


2020 ◽  
Vol 13 (5) ◽  
pp. 2487-2509
Author(s):  
Doug McNeall ◽  
Jonny Williams ◽  
Richard Betts ◽  
Ben Booth ◽  
Peter Challenor ◽  
...  

Abstract. A key challenge in developing flagship climate model configurations is the process of setting uncertain input parameters at values that lead to credible climate simulations. Setting these parameters traditionally relies heavily on insights from those involved in parameterisation of the underlying climate processes. Given the many degrees of freedom and computational expense involved in evaluating such a selection, this can be imperfect, leaving open questions about whether any subsequent simulated biases result from mis-set parameters or wider structural model errors (such as missing or partially parameterised processes). Here, we present a complementary approach to identifying plausible climate model parameters, with a method of bias correcting subcomponents of a climate model using a Gaussian process emulator that allows credible values of model input parameters to be found even in the presence of a significant model bias. A previous study (McNeall et al., 2016) found that a climate model had to be run using land surface input parameter values from very different, almost non-overlapping, parts of parameter space to satisfactorily simulate the Amazon and other forests respectively. As the forest fraction of modelled non-Amazon forests was broadly correct at the default parameter settings and the Amazon too low, that study suggested that the problem most likely lay in the model's treatment of non-plant processes in the Amazon region. This might be due to modelling errors such as missing deep rooting in the Amazon in the land surface component of the climate model, to a warm–dry bias in the Amazon climate of the model or a combination of both. In this study, we bias correct the climate of the Amazon in the climate model from McNeall et al. (2016) using an “augmented” Gaussian process emulator, where temperature and precipitation, variables usually regarded as model outputs, are treated as model inputs alongside land surface input parameters.
A sensitivity analysis finds that the forest fraction is nearly as sensitive to climate variables as it is to changes in its land surface parameter values. Bias correcting the climate in the Amazon region using the emulator corrects the forest fraction to tolerable levels in the Amazon at many candidates for land surface input parameter values, including the default ones, and increases the valid input space shared with the other forests. We need not invoke a structural model error in the land surface model, beyond having too dry and hot a climate in the Amazon region. The augmented emulator allows bias correction of an ensemble of climate model runs and reduces the risk of choosing poor parameter values because of an error in a subcomponent of the model. We discuss the potential of the augmented emulator to act as a translational layer between model subcomponents, simplifying the process of model tuning when there are compensating errors and helping model developers discover and prioritise model errors to target.
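The notion of an "augmented" emulator, in which climate variables join the land surface parameters as emulator inputs, can be illustrated with a minimal Gaussian process sketch. The forest-fraction function, input ranges, and values below are hypothetical, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for modelled "forest fraction" as a function of
# one land surface parameter plus two climate variables.
def forest_fraction(p, temp, precip):
    return 1.0 / (1.0 + np.exp(-(precip - 1.5 - 0.5 * temp + p)))

# Design points: the land surface parameter and the climate variables are
# treated alike, as inputs to the emulator.
n = 200
X = np.column_stack([
    rng.uniform(-1.0, 1.0, n),   # land surface parameter
    rng.uniform(-1.0, 1.0, n),   # temperature anomaly
    rng.uniform(0.0, 3.0, n),    # precipitation
])
y = forest_fraction(X[:, 0], X[:, 1], X[:, 2])

# Minimal Gaussian process emulator (RBF kernel, near-noise-free interpolation).
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

alpha = np.linalg.solve(rbf(X, X) + 1e-6 * np.eye(n), y)

def predict(x_new):
    return rbf(np.atleast_2d(x_new), X) @ alpha

# "Bias correction": evaluate the emulator at the observed climate instead
# of the model's biased warm-dry climate, at the default parameter value.
biased = predict([0.0, 0.8, 0.5])[0]     # warm-dry model climate
corrected = predict([0.0, 0.0, 1.8])[0]  # observed (cooler, wetter) climate
print(biased, corrected)
```

At the same default parameter value, swapping the biased climate inputs for observed ones raises the emulated forest fraction, which is the mechanism the study exploits when searching for valid parameter space.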


1989 ◽  
Author(s):  
CHAUR-MING CHOU ◽  
JOHN O'CALLAHAN ◽  
CHI-HSING WU

2015 ◽  
Vol 32 (6) ◽  
pp. 1144-1162 ◽  
Author(s):  
Adrian Sescu ◽  
Charles Meneveau

Abstract Effects of atmospheric thermal stratification on the asymptotic behavior of very large wind farms are studied using large-eddy simulations (LES) and a single-column model for vertical distributions of horizontally averaged field variables. To facilitate comparisons between LES and column modeling based on Monin–Obukhov similarity theory, the LES are performed under idealized conditions of statistical stationarity in time and fully developed conditions in space. A suite of simulations is performed for different thermal stratification levels, and the results are used to evaluate horizontally averaged vertical profiles of velocity, potential temperature, and vertical turbulent momentum and heat fluxes. Both LES and the model show that the stratification significantly affects the atmospheric boundary layer structure, its height, and the surface fluxes. However, the effects of the wind farm on surface heat fluxes are found to be relatively small in both LES and the single-column model. The surface fluxes are the result of two opposing trends: an increase of mixing in wakes and a decrease in mixing in the region below the turbines due to reduced momentum fluxes there for neutral and unstable cases, or relatively unchanged shear stresses below the turbines in the stable cases. For the considered cases, the balance of these trends yields a slight increase in surface flux magnitude for the stable and near-neutral unstable cases, and a very small decrease in flux magnitude for the strongly unstable cases. Moreover, thermal stratification is found to have a negligible effect on the roughness scale as deduced from the single-column model, consistent with the expected separation of scales.
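For reference, Monin–Obukhov similarity theory, on which the column model is based, gives a stability-corrected mean wind profile. A minimal sketch using the standard Businger-Dyer forms follows; the specific constants and heights are illustrative, not those of the study:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def wind_profile(z, u_star, z0, L, beta=5.0, gamma=16.0):
    """Mean wind speed from Monin-Obukhov similarity (Businger-Dyer forms).

    L is the Obukhov length: L > 0 stable, L < 0 unstable, L = inf neutral.
    """
    zeta = z / L
    if zeta > 0:                     # stable: log-linear profile
        psi_m = -beta * zeta
    elif zeta < 0:                   # unstable: Businger-Dyer correction
        x = (1.0 - gamma * zeta) ** 0.25
        psi_m = (2.0 * np.log((1.0 + x) / 2.0)
                 + np.log((1.0 + x**2) / 2.0)
                 - 2.0 * np.arctan(x) + np.pi / 2.0)
    else:                            # neutral log law
        psi_m = 0.0
    return (u_star / KAPPA) * (np.log(z / z0) - psi_m)

# 100 m wind for the same friction velocity under different stability:
for L in (np.inf, 200.0, -200.0):
    print(L, wind_profile(100.0, 0.4, 0.1, L))
```

Stable stratification (L > 0) increases shear relative to the neutral log law, while unstable conditions reduce it, which is the basic structure both the LES and the column model must reproduce.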


2018 ◽  
Vol 51 (4) ◽  
pp. 1059-1068 ◽  
Author(s):  
Pascal Parois ◽  
James Arnold ◽  
Richard Cooper

Crystallographic restraints are widely used during refinement of small-molecule and macromolecular crystal structures. They can be especially useful for introducing additional observations and information into structure refinements against low-quality or low-resolution data (e.g. data obtained at high pressure) or to retain physically meaningful parameter values in disordered or unstable refinements. However, despite the fact that the anisotropic displacement parameters (ADPs) often constitute more than half of the total model parameters determined in a structure analysis, there are relatively few useful restraints for them, examples being Hirshfeld rigid-bond restraints, direct equivalence of parameters and SHELXL RIGU-type restraints. Conversely, geometric parameters can be subject to a multitude of restraints (e.g. absolute or relative distance, angle, planarity, chiral volume, and geometric similarity). This article presents a series of new ADP restraints implemented in CRYSTALS [Parois, Cooper & Thompson (2015), Chem. Cent. J. 9, 30] to give more control over ADPs by restraining, in a variety of ways, the directions and magnitudes of the principal axes of the ellipsoids in locally defined coordinate systems. The use of these new ADP restraints results in more realistic models, as well as a better user experience, through restraints that are more efficient and faster to set up. The use of these restraints is recommended to preserve physically meaningful relationships between displacement parameters in a structural model for rigid bodies, rotationally disordered groups and low-completeness data.
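Although not specific to CRYSTALS, the general mechanism of a restraint, an extra weighted observation appended to the least-squares problem, can be sketched on a toy two-parameter refinement. All matrices, weights, and values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy refinement: recover two "displacement parameters" from weak data,
# with a similarity restraint acting as an extra weighted observation.
A = rng.normal(size=(6, 2))          # few, noisy observations
x_true = np.array([0.050, 0.052])    # nearly equal parameters
b = A @ x_true + rng.normal(0.0, 0.02, 6)

# Restraint row: x1 - x2 ~ 0, weighted by w = 1 / sigma_restraint.
w = 1.0 / 0.005
A_aug = np.vstack([A, w * np.array([1.0, -1.0])])
b_aug = np.append(b, 0.0)

x_free, *_ = np.linalg.lstsq(A, b, rcond=None)
x_restr, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(x_free, x_restr)  # restrained solution keeps x1 close to x2
```

The restraint weight plays the role of an assumed standard uncertainty: a tight sigma pulls the two parameters together while still letting the data determine their common level.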


2021 ◽  
Author(s):  
Gregory Wagner ◽  
Andre Souza ◽  
Adeline Hillier ◽  
Ali Ramadhan ◽  
Raffaele Ferrari

Parameterizations of turbulent mixing in the ocean surface boundary layer (OSBL) are key Earth System Model (ESM) components that modulate the communication of heat and carbon between the atmosphere and ocean interior. OSBL turbulence parameterizations are formulated in terms of unknown free parameters estimated from observational or synthetic data. In this work we describe the development and use of a synthetic dataset called the “LESbrary” generated by a large number of idealized, high-fidelity, limited-area large eddy simulations (LES) of OSBL turbulent mixing. We describe how the LESbrary design leverages a detailed understanding of OSBL conditions derived from observations and large scale models to span the range of realistically diverse physical scenarios. The result is a diverse library of well-characterized “synthetic observations” that can be readily assimilated for the calibration of realistic OSBL parameterizations in isolation from other ESM components. We apply LESbrary data to calibrate free parameters, develop prior estimates of parameter uncertainty, and evaluate model errors in two OSBL parameterizations for use in predictive ESMs.


2022 ◽  
Vol 22 (1) ◽  
pp. 319-333
Author(s):  
Ian Boutle ◽  
Wayne Angevine ◽  
Jian-Wen Bao ◽  
Thierry Bergot ◽  
Ritthik Bhattacharya ◽  
...  

Abstract. An intercomparison between 10 single-column models (SCMs) and 5 large-eddy simulation (LES) models is presented for a radiation fog case study inspired by the Local and Non-local Fog Experiment (LANFEX) field campaign. Seven of the SCMs represent single-column equivalents of operational numerical weather prediction (NWP) models, whilst three are research-grade SCMs designed for fog simulation, and the LESs are designed to reproduce in the best manner currently possible the underlying physical processes governing fog formation. The LES model results are of variable quality and do not provide a consistent baseline against which to compare the NWP models, particularly under high aerosol or cloud droplet number concentration (CDNC) conditions. The main SCM bias appears to be toward the overdevelopment of fog, i.e. fog which is too thick, although the inter-model variability is large. In reality there is a subtle balance between water lost to the surface and water condensed into fog, and the ability of a model to accurately simulate this process strongly determines the quality of its forecast. Some NWP SCMs do not represent fundamental components of this process (e.g. cloud droplet sedimentation) and therefore are naturally hampered in their ability to deliver accurate simulations. Finally, we show that modelled fog development is as sensitive to the shape of the cloud droplet size distribution, a rarely studied or modified part of the microphysical parameterisation, as it is to the underlying aerosol or CDNC.
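The sensitivity to the shape of the droplet size distribution noted above can be made concrete with a sketch: for a gamma distribution with fixed droplet number and liquid water content, the mass-weighted (Stokes) sedimentation speed still depends on the shape parameter. The constants below are textbook order-of-magnitude values, not those used in the intercomparison:

```python
import math

def mass_weighted_fall_speed(mu, n_tot, lwc, c=3.0e7, rho_w=1000.0):
    """Mass-weighted Stokes fall speed for a gamma DSD n(D) ~ D^mu exp(-lam D).

    Assumes Stokes drag V(D) = c * D**2 (c ~ 3e7 m^-1 s^-1 for droplets).
    n_tot in m^-3, lwc in kg m^-3.
    """
    # Slope lam fixed by droplet number and liquid water content (3rd moment).
    lam = ((math.pi / 6.0) * rho_w * n_tot
           * (mu + 3) * (mu + 2) * (mu + 1) / lwc) ** (1.0 / 3.0)
    # <V>_mass = c * M5 / M3 = c * (mu+5)(mu+4) / lam^2
    return c * (mu + 5) * (mu + 4) / lam**2

# Same droplet number and water content, different spectral shapes:
for mu in (0, 2, 8):
    print(mu, mass_weighted_fall_speed(mu, n_tot=1.0e8, lwc=2.0e-4))
```

Narrower spectra (larger shape parameter) sediment more slowly at fixed number and water content, which is one route by which the assumed distribution shape alters the water budget of simulated fog.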


Author(s):  
Awais Nazir ◽  
Muhammad Shahzad Younis ◽  
Muhammad Khurram Shahzad

Speckle noise is one of the most difficult noises to remove, especially in medical applications. It is a nuisance in ultrasound imaging, which is used in about half of all medical screening systems. Noise removal is therefore an important step in these systems, enabling reliable, automated, and potentially low-cost systems. Herein, a generalized approach, MFNR (Multi-Frame Noise Removal), is used, which is a complete noise removal system based on KDE (Kernel Density Estimation). Any given type of noise can be removed if its probability density function (PDF) is known; herein, we extracted the PDF parameters using KDE. Noise removal and detail preservation are not contrary to each other, as is the case in single-frame noise removal methods. Our results showed practically complete noise removal using the MFNR algorithm compared to standard noise removal tools. The Peak Signal-to-Noise Ratio (PSNR) was used as a comparison metric. This paper is an extension of our previous paper, where the MFNR algorithm was shown to be a general-purpose, complete noise removal tool for all types of noise.
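As an illustration of the ingredients named above (multi-frame estimation, KDE of a noise PDF, and PSNR as the metric), not of the MFNR algorithm itself, consider this sketch with synthetic multiplicative speckle; the frame-averaging estimator is a deliberate simplification:

```python
import numpy as np

rng = np.random.default_rng(2)

def psnr(clean, noisy, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def gaussian_kde_1d(samples, grid, bandwidth=0.05):
    """Minimal kernel density estimate of a noise PDF from samples."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

# Synthetic "scene" observed in many frames under multiplicative speckle.
clean = rng.uniform(0.2, 0.8, (64, 64))
frames = [clean * rng.gamma(shape=10.0, scale=0.1, size=clean.shape)
          for _ in range(50)]

# Multi-frame estimate: averaging across frames (a per-pixel KDE mode could
# be substituted here; the mean is the simplest multi-frame estimator).
restored = np.mean(frames, axis=0)
print(psnr(clean, frames[0]), psnr(clean, restored))

# Estimate the speckle PDF from the per-pixel noise ratios, as KDE would.
noise_samples = (frames[0] / clean).ravel()
grid = np.linspace(0.0, 2.5, 100)
pdf = gaussian_kde_1d(noise_samples, grid)
```

Because the frames carry independent noise realizations, combining them suppresses noise without the detail loss that single-frame smoothing incurs, which is the trade-off the abstract highlights.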


2021 ◽  
Author(s):  
Lucile Ricard ◽  
Athanasios Nenes ◽  
Jakob Runge ◽  
Paraskevi Georgakaki

Aerosol-cloud interactions remain the largest uncertainty in assessments of anthropogenic climate forcing, and the complexity of these interactions requires methods that enable abstractions and simplifications allowing their improved treatment in climate models. Marine boundary layer clouds are an important component of the climate system, as their large albedo and spatial coverage strongly affect the planetary radiative balance. High-resolution simulations of clouds provide an unprecedented understanding of the structure and behavior of these clouds in the marine atmosphere, but the amount of data is often too large and complex to be useful in climate simulations. Data reduction and inference methods provide a way to reduce the complexity and dimensionality of datasets generated from high-resolution Large Eddy Simulations.

In this study we use network analysis (the δ-Maps method) to study the complex interaction between liquid water, droplet number, and vertical velocity in Large Eddy Simulations of marine boundary layer clouds. δ-Maps identifies domains that are spatially contiguous and possibly overlapping and characterizes their connections and temporal interactions. The objective is to better understand the microphysical properties of marine boundary layer clouds and how they are impacted by variability in aerosols. Here we capture the dynamical structure of the cloud fields predicted by the MIMICA Large Eddy Simulation (LES) model. The networks inferred from the different simulation fields are compared with one another (intra-comparisons), using perturbations in initial conditions and aerosol, with a set of four metrics. The networks are then evaluated for their differences, quantifying how much variability is inherent in the LES simulations versus the robust changes induced by the aerosol fields.


2016 ◽  
Author(s):  
Brian M. Griffin ◽  
Vincent E. Larson

Abstract. The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate Probability Density Function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, each hydrometeor field was assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from Large-Eddy Simulations (LES) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. Finally, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.
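The practical consequence of the assumed hydrometeor PDF shape can be sketched as follows: for a nonlinear microphysical process rate, a delta-lognormal and a single lognormal with the same grid-box mean yield different grid-mean rates. The distribution parameters and power-law rate below are illustrative choices, not CLUBB's:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_delta_lognormal(n, f_precip, mu, sigma):
    """Draw hydrometeor values from a delta-lognormal PDF: a point mass
    at zero (precipitation-free air) plus a lognormal within precipitation."""
    r = np.zeros(n)
    wet = rng.random(n) < f_precip
    r[wet] = rng.lognormal(mu, sigma, wet.sum())
    return r

# Match the grid-box mean rain water between two subgrid assumptions.
n = 200_000
f, mu, sigma = 0.3, -1.0, 1.0
delta_ln = sample_delta_lognormal(n, f, mu, sigma)
mean_r = delta_ln.mean()

# Single lognormal with the same mean and (for illustration) same sigma.
single_ln = rng.lognormal(np.log(mean_r) - 0.5 * sigma**2, sigma, n)

# A toy nonlinear "process rate" (e.g. accretion ~ r^1.15): the assumed
# subgrid PDF shape changes the grid-mean rate even at fixed mean r.
rate = lambda r: r ** 1.15
print(rate(delta_ln).mean(), rate(single_ln).mean())
```

Because process rates are convex or concave in the hydrometeor amount, the choice among single lognormal, delta-lognormal, and delta-double-lognormal shapes feeds directly into the microphysical tendencies that the paper compares against LES.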

