Probabilistic description of ice-supersaturated layers in low resolution profiles of relative humidity

2010 ◽  
Vol 10 (14) ◽  
pp. 6749-6763 ◽  
Author(s):  
N. C. Dickson ◽  
K. M. Gierens ◽  
H. L. Rogers ◽  
R. L. Jones

Abstract. The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails (contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if the global occurrence of ISSR is to be adequately represented in climate models. This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to investigate the probabilistic occurrence of ISSR. Each radiosonde profile is divided into 50- and 100-hPa pressure layers to emulate the coarse vertical resolution of some atmospheric models. The high-resolution observations contained within each thick pressure layer are then used to calculate an average relative humidity and an ISS fraction for each individual layer. These two layer descriptions are then linked through a probability function to produce an s-shaped curve which empirically describes the ISS fraction in any pressure layer of given average relative humidity. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Two models were developed, for 50- and 100-hPa pressure layers, each reconstructing its respective s-shape to within 8–10% of the empirical curve. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available.
This will be useful in understanding and improving the global distribution, both observed and forecast, of ice super-saturation.

2010 ◽  
Vol 10 (2) ◽  
pp. 2357-2395 ◽  
Author(s):  
N. C. Dickson ◽  
K. M. Gierens ◽  
H. L. Rogers ◽  
R. L. Jones

Abstract. The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails (contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. A robust assessment of the global distribution of ISSR will further this debate, and ISS event occurrence, frequency and spatial scales have recently attracted significant attention. The mean horizontal path length through ISSR as observed by MOZAIC aircraft is 150 km (±250 km). The average vertical thickness of ISS layers is 600–800 m (±575 m), but layers ranging from 25 m to 3000 m have been observed, with up to one third of ISS layers thought to be less than 100 m deep. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if the global occurrence of ISSR is to be adequately represented in climate models. This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to investigate the probabilistic occurrence of ISSR. Specifically, each radiosonde profile is divided into 50- and 100-hPa pressure layers to emulate the coarse vertical resolution of some atmospheric models. The high-resolution observations contained within each thick pressure layer are then used to calculate an average relative humidity and an ISS fraction for each individual layer. These two layer descriptions are then linked through a probability function to produce an s-shaped curve describing the ISS fraction in any pressure layer of given average relative humidity.
An empirical investigation has shown that this one curve is statistically valid for mid-latitude locations, irrespective of season and altitude; pressure-layer depth, however, is an important variable. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Here the statistical distributions of actual high-resolution RHi observations in any thick pressure layer, along with an error function, are used to describe the s-shape mathematically. Two models were developed, for 50- and 100-hPa pressure layers, each reconstructing its respective s-shape to within 8–10% of the empirical curve. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available. This will be useful in understanding and improving the global distribution, both observed and forecast, of ice super-saturation.
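The error-function form of the s-shaped relationship can be sketched in a few lines. This is a minimal illustration only: the parameters mu and sigma below are hypothetical placeholders, whereas the paper derives the curve from the distribution of high-resolution RHi observations within each layer.

```python
import math

def iss_fraction(mean_rhi, mu=100.0, sigma=10.0):
    """Illustrative s-shaped model of the ice-supersaturated fraction of a
    thick pressure layer, given the layer-mean RHi (%). mu and sigma are
    hypothetical parameters, not values fitted in the paper."""
    # Gaussian-CDF (error-function) form of the s-shape: the fraction of the
    # layer exceeding RHi = 100% rises smoothly with the layer-mean RHi.
    return 0.5 * (1.0 + math.erf((mean_rhi - mu) / (sigma * math.sqrt(2.0))))

# Dry layers contain almost no ISS air; strongly humid layers are nearly all ISS.
for rhi in (70.0, 100.0, 130.0):
    print(rhi, round(iss_fraction(rhi), 3))
```

In the paper's formulation the spread of RHi within a layer (and hence the width of the s-shape) depends on the layer depth, which is why separate 50- and 100-hPa models are needed.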


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

<p>Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about the atmosphere of Earth. Many of the changes in today's models relate to seemingly unsuspicious modifications, associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet verifying them is challenged by the chaotic nature of our atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.</p><p>Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and can also be used with different hypothesis tests.</p><p>We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.</p>
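The ensemble-plus-hypothesis-test idea can be sketched with synthetic data. This is a toy stand-in, not the COSMO workflow: the two-sample Kolmogorov-Smirnov statistic below is one possible distance between ensembles, and a full verification would convert it to a p-value per output variable and time step.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    d, ia, ib = 0.0, 0, 0
    for v in sorted(set(a + b)):
        while ia < na and a[ia] <= v:
            ia += 1
        while ib < nb and b[ib] <= v:
            ib += 1
        d = max(d, abs(ia / na - ib / nb))
    return d

random.seed(0)
# Reference ensemble of some output variable (e.g. a temperature, in K) ...
ref = [random.gauss(280.0, 1.0) for _ in range(50)]
# ... an ensemble after a neutral change (same climate, different rounding) ...
neutral = [random.gauss(280.0, 1.0) for _ in range(50)]
# ... and one after a change that actually alters the model climate.
shifted = [random.gauss(281.0, 1.0) for _ in range(50)]

# The statistic stays small for the neutral change and grows for the real one.
print(ks_statistic(ref, neutral), ks_statistic(ref, shifted))
```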


2020 ◽  
Author(s):  
Jiayi Lai

<p><span>The next generation of weather and climate models will have an unprecedented level of resolution and model complexity, placing ever greater demands on computational and memory performance. Reducing the precision of certain variables and using mixed-precision methods in atmospheric models can greatly improve computational and memory efficiency. However, to guarantee accurate results, most models over-specify numerical precision, so the resources they occupy are much larger than those actually required. Previous studies have shown that the precision necessary for an accurate weather model has a clear scale dependence, with large spatial scales requiring higher precision than small scales; even at large scales the necessary precision is far below double precision. It has nevertheless been difficult to find a principled method for assigning different precisions to different variables and so avoiding this waste. This paper takes CESM1.2.1 as its research object, conducts a large number of reduced-precision tests, and proposes a new discrimination method similar to the CFL criterion. The method can verify the behaviour of a single variable, thereby determining which variables can use a lower level of precision without degrading the accuracy of the results.</span></p>
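One common way to probe whether a computation tolerates reduced precision is to emulate float32 storage and compare against a double-precision reference. A toy illustration (a trapezoidal integral stands in for a model computation; this is not the paper's CESM procedure):

```python
import struct

def to_single(x):
    """Round a Python float (double precision) to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def trapezoid(f, a, b, n, cast=lambda x: x):
    """Trapezoidal integration; `cast` emulates reduced-precision storage
    of the running sum after every addition."""
    h = (b - a) / n
    s = cast(0.5 * (f(a) + f(b)))
    for i in range(1, n):
        s = cast(s + f(a + i * h))
    return cast(s * h)

# Double-precision reference vs emulated single-precision variant.
ref = trapezoid(lambda x: x * x, 0.0, 1.0, 10_000)
low = trapezoid(lambda x: x * x, 0.0, 1.0, 10_000, cast=to_single)
rel_err = abs(low - ref) / ref
print(rel_err)  # if far below the required tolerance, float32 suffices here
```

A per-variable version of this comparison is the kind of screening the abstract describes: variables whose reduced-precision error stays below the accuracy requirement are candidates for demotion.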


2020 ◽  
Author(s):  
Apostolos Koumakis ◽  
Panayiotis Dimitriadis ◽  
Theano Iliopoulou ◽  
Demetris Koutsoyiannis

<p>Stochastic comparison of climate model outputs to observed relative humidity fields</p><p>We compare the stochastic behaviour of relative humidity outputs of climate models for the 20<sup>th</sup> century to the historical data (stations and reanalysis fields) at several temporal and spatial scales. In particular we examine the marginal distributions and the dependence structure with emphasis on the Hurst-Kolmogorov behaviour. The comparison aims to contribute to the quantification of reliability and predictive uncertainty of relative humidity climate model outputs over different scales in a framework of assessing their relevance for engineering planning and design.</p><p> </p><p>(Acknowledgement: This research is conducted within the frame of the course "Stochastic Methods" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.)</p>
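Hurst-Kolmogorov behaviour is typically diagnosed from how the variance of the time-averaged process decays with averaging scale (the climacogram). A minimal sketch on synthetic white noise, for which H should come out near 0.5; a persistent relative humidity series would give H closer to 1:

```python
import math
import random

def climacogram(series, scales):
    """Sample variance of the scale-k averages, for each aggregation scale k.
    For a Hurst-Kolmogorov process the log-log slope equals 2H - 2."""
    cg = {}
    for k in scales:
        n = len(series) // k
        means = [sum(series[i * k:(i + 1) * k]) / k for i in range(n)]
        m = sum(means) / n
        cg[k] = sum((x - m) ** 2 for x in means) / (n - 1)
    return cg

def hurst(cg):
    """Least-squares slope of log(variance) vs log(scale); H = 1 + slope / 2."""
    xs = [math.log(k) for k in cg]
    ys = [math.log(v) for v in cg.values()]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 + slope / 2.0

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
# White noise is uncorrelated, so the estimate should land near H = 0.5.
print(round(hurst(climacogram(white, [1, 2, 4, 8, 16, 32])), 2))
```

Comparing climacograms of model output and station data at matching scales is one concrete form the abstract's stochastic comparison can take.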


2011 ◽  
Vol 92 (9) ◽  
pp. 1181-1192 ◽  
Author(s):  
Frauke Feser ◽  
Burkhardt Rockel ◽  
Hans von Storch ◽  
Jörg Winterfeldt ◽  
Matthias Zahn

An important challenge in current climate modeling is to realistically describe small-scale weather statistics, such as topographic precipitation and coastal wind patterns, or regional phenomena like polar lows. Global climate models simulate atmospheric processes with increasingly higher resolutions, but regional climate models still offer several advantages. They consume less computation time because of their limited simulation area and thereby allow for higher resolution in both time and space as well as for longer integration times. Regional climate models can be used for dynamical downscaling because their output data can be processed to produce more highly resolved atmospheric fields, allowing the representation of small-scale processes and a more detailed description of physiographic details (such as mountain ranges, coastal zones, and details of soil properties). However, does higher resolution add value when compared to global model results? Most studies implicitly assume that dynamical downscaling leads to output fields that are superior to the driving global data, but little work has been carried out to substantiate these expectations. Here a series of articles is reviewed that evaluate the benefit of dynamical downscaling by explicitly comparing results of global and regional climate models with observations. These studies show that the regional climate model generally performs better at medium spatial scales, but not always at the larger spatial scales. Regional models can add value, but only for certain variables and locations, particularly those influenced by regional specifics, such as coasts, or by mesoscale dynamics, such as polar lows. Therefore, the decision of whether a regional climate model simulation is required depends crucially on the scientific question being addressed.


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Gerhard Krinner ◽  
Viatcheslav Kharin ◽  
Romain Roehrig ◽  
John Scinocca ◽  
Francis Codron

Abstract Climate models and/or their output are usually bias-corrected for climate impact studies. The underlying assumption of these corrections is that climate biases are essentially stationary between historical and future climate states. Under very strong climate change, the validity of this assumption is uncertain, so the practical benefit of bias corrections remains an open question. Here, this issue is addressed in the context of bias correcting the climate models themselves. Employing the ARPEGE, LMDZ and CanAM4 atmospheric models, we undertook experiments in which one centre’s atmospheric model takes another centre’s coupled model as observations during the historical period, to define the bias correction, and as the reference under future projections of strong climate change, to evaluate its impact. This allows testing of the stationarity assumption directly from the historical through future periods for three different models. These experiments provide evidence for the validity of the new bias-corrected model approach. In particular, temperature, wind and pressure biases are reduced by 40–60% and, with few exceptions, more than 50% of the improvement obtained over the historical period is on average preserved after 100 years of strong climate change. Below 3 °C global average surface temperature increase, these corrections globally retain 80% of their benefit.
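The stationarity assumption can be made concrete with a toy additive bias correction; all numbers below are invented for illustration and are not the paper's results:

```python
def mean_bias_correction(model_hist, ref_hist):
    """Additive correction learned over the historical period."""
    bias = sum(m - r for m, r in zip(model_hist, ref_hist)) / len(model_hist)
    return lambda x: x - bias

def rms(errors):
    """Root-mean-square of a list of errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# Hypothetical seasonal temperatures (K): the model runs about 2 K warm.
ref_hist = [10.0, 12.0, 15.0, 13.0]
mod_hist = [12.1, 13.9, 17.0, 15.0]
correct = mean_bias_correction(mod_hist, ref_hist)

# Future period: the climate warms by ~4 K while the bias stays roughly
# stationary, so the historically trained correction still pays off.
ref_fut = [14.0, 16.0, 19.0, 17.0]
mod_fut = [16.2, 17.8, 21.1, 19.1]

raw_err = rms([m - r for m, r in zip(mod_fut, ref_fut)])
cor_err = rms([correct(m) - r for m, r in zip(mod_fut, ref_fut)])
print(raw_err, cor_err)
```

The paper's experiments test exactly this retained benefit, but inside the models themselves and under much stronger forcing, by treating one centre's coupled model as the "observations" for another centre's atmospheric model.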


2006 ◽  
Vol 19 (21) ◽  
pp. 5554-5569 ◽  
Author(s):  
P. Good ◽  
J. Lowe

Abstract Aspects of model emergent behavior and uncertainty in regional- and small-scale effects of increasing CO2 on seasonal (June–August) precipitation are explored. Nineteen different climate models are studied. New methods of comparing multiple climate models reveal a clearer and more impact-relevant view of precipitation projections for the current century. First, the importance of small spatial scales in multimodel projections is demonstrated. Local trends can be much larger than or even have an opposing sign to the large-scale regional averages used in previous studies. Small-scale effects of increasing CO2 and natural internal variability both play important roles here. These small-scale features make multimodel comparisons difficult for precipitation. New methods that allow information from small spatial scales to be usefully compared across an ensemble of multiple models are presented. The analysis philosophy of this study works with statistical distributions of small-scale variations within climatological regions. A major result of this work is a set of emergent relationships coupling the small- and regional-scale effects of CO2 on precipitation trends. Within each region, a single relationship fits the ensemble of 19 different climate models. Using these relationships, a surprisingly large part of the intermodel variance in small-scale effects of CO2 is explainable simply by the intermodel variance in the regional mean (a form of pattern scaling). Different regions show distinctly different relationships. These relationships imply that regional mean results are still useful, as long as the interregional variation in their relationship with impact-relevant extreme trends is recognized. These relationships are used to present a clear but rich picture of an aspect of model uncertainty, characterized by the intermodel spread in seasonal precipitation trends, including information from small spatial scales.
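The emergent coupling between small-scale and regional-mean effects amounts to fitting, within each region, a single line across the ensemble. A sketch with hypothetical numbers (the paper fits 19 models per region; the values below are invented):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Per-model regional-mean precipitation trend (%/century) and the 90th
# percentile of local trends within the region, across a toy ensemble.
regional_mean = [-10.0, -5.0, 0.0, 5.0, 10.0]
local_p90 = [-2.0, 3.0, 8.0, 14.0, 18.0]

a, b = linear_fit(regional_mean, local_p90)
# Pattern scaling: a model's extreme local trend predicted from its
# regional mean alone.
print(a + b * 2.0)
```

The strength of such fits is what makes the abstract's claim useful: much of the intermodel variance in small-scale trends is recoverable from the regional mean, provided the region-specific slope and intercept are known.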


2013 ◽  
Vol 28 (3) ◽  
pp. 815-841 ◽  
Author(s):  
Kent H. Knopfmeier ◽  
David J. Stensrud

Abstract The expansion of surface mesoscale networks (mesonets) across the United States provides a high-resolution observational dataset for meteorological analysis and prediction. To clarify the impact of mesonet data on the accuracy of surface analyses, 2-m temperature, 2-m dewpoint, and 10-m wind analyses for 2-week periods during the warm and cold seasons produced through an ensemble Kalman filter (EnKF) approach are compared to surface analyses created by the Real-Time Mesoscale Analysis (RTMA). Results show in general a similarity between the EnKF analyses and the RTMA, with the EnKF exhibiting a smoother appearance with less small-scale variability. Root-mean-square (RMS) innovations are generally lower for temperature and dewpoint from the RTMA, implying a closer fit to the observations. Kinetic energy spectra computed from the two analyses reveal that the EnKF analysis spectra match more closely to the spectra computed from observations and numerical models in earlier studies. Data-denial experiments using the EnKF completed for the first week of the warm and cold seasons, as well as for two periods characterized by high mesoscale variability within the experimental domain, show that mesonet data removal imparts only minimal degradation to the analyses. This is because of the localized background covariances computed for the four surface variables having spatial scales much larger than the average spacing of mesonet stations. Results show that removing 75% of the mesonet observations has only minimal influence on the analysis.
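The EnKF analysis step can be sketched for a single observed variable using the perturbed-observation formulation; the ensemble and observation below are synthetic, not the paper's mesonet data:

```python
import random

def enkf_update_scalar(ensemble, obs, obs_err, rng):
    """Perturbed-observation EnKF update for one observed scalar variable."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err ** 2)  # Kalman gain from ensemble variance
    # Each member assimilates its own perturbed copy of the observation.
    return [x + gain * (obs + rng.gauss(0.0, obs_err) - x) for x in ensemble]

rng = random.Random(42)
# Background ensemble of 2-m temperature (K) and one mesonet observation.
background = [rng.gauss(285.0, 1.5) for _ in range(40)]
analysis = enkf_update_scalar(background, 287.0, 0.5, rng)

bg_innov = abs(sum(background) / 40 - 287.0)
an_innov = abs(sum(analysis) / 40 - 287.0)
print(bg_innov, an_innov)  # the analysis mean moves toward the observation
```

In the paper's setting the covariances are spatially localized; the data-denial result reflects the fact that those covariances extend over scales larger than the mesonet station spacing, so neighbouring observations carry much of the same information.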


2003 ◽  
Vol 3 (4) ◽  
pp. 1023-1035 ◽  
Author(s):  
P. Good ◽  
C. Giannakopoulos ◽  
F. M. O’Connor ◽  
S. R. Arnold ◽  
M. de Reus ◽  
...  

Abstract. A technique is demonstrated for estimating atmospheric mixing time-scales from in-situ data, using a Lagrangian model initialised from an Eulerian chemical transport model (CTM). This method is applied to airborne tropospheric CO observations taken during seven flights of the Mediterranean Intensive Oxidant Study (MINOS) campaign in August 2001. The time-scales derived correspond to mixing applied at the spatial scale of the CTM grid. They are relevant to the family of hybrid Lagrangian-Eulerian models, which impose Eulerian grid mixing on an underlying Lagrangian model. The method uses the fact that in Lagrangian tracer transport modelling, the mixing spatial and temporal scales are decoupled: the spatial scale is determined by the resolution of the initial tracer field, and the time scale by the trajectory length. The chaotic nature of lower-atmospheric advection results in the continuous generation of smaller spatial scales, a process terminated in the real atmosphere by mixing. Thus, a mix-down lifetime can be estimated by varying the trajectory length so that the model reproduces the observed amount of small-scale tracer structure; selecting a trajectory length is equivalent to choosing a mixing timescale. For the cases studied, the results are very insensitive to the CO photochemical change calculated along the trajectories: treating CO as a passive tracer did not affect the mix-down timescales derived, since the slow CO photochemistry has little influence at small spatial scales. The results presented correspond to full photochemical calculations. The method is most appropriate for relatively homogeneous regions, where it is not too important to account for changes in aircraft altitude or the positioning of stratospheric intrusions, so that small-scale structure is easily distinguished.
The chosen flights showed a range of mix-down time upper limits: a very short timescale of 1 day for 8 August, possibly due to recent convection or model error; 3 days for 3 August, probably due to recent convective and boundary-layer mixing; and 6–9 days for 16, 17, 22a, 22c and 24 August. These numbers refer to a mixing spatial scale of 2.8°, defined here by the resolution of the Eulerian grid from which tracer fields were interpolated to initialise the Lagrangian model. For the flight of 3 August, the observed concentrations result from a complex set of transport histories, and the models are used to interpret the observed structure, while illustrating where more caution is required with this method of estimating mix-down lifetimes.
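The core idea, that longer trajectories accumulate more grid mixing and hence smoother tracer fields, can be caricatured in one dimension. The diffusion operator, the one-smoothing-step-per-day convention, and the CO values below are all invented stand-ins, not the MINOS analysis itself:

```python
def small_scale_structure(series):
    """Mean absolute difference between adjacent samples: a crude measure
    of small-scale tracer structure along a flight track."""
    return sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)

def mix(series, steps):
    """Crude diffusive mixing: each step replaces interior points with a
    weighted average of themselves and their neighbours."""
    s = list(series)
    for _ in range(steps):
        s = ([s[0]]
             + [(s[i - 1] + 2 * s[i] + s[i + 1]) / 4
                for i in range(1, len(s) - 1)]
             + [s[-1]])
    return s

# Hypothetical modelled CO (ppb) with sharp filaments; more mixing steps
# (a longer trajectory) progressively erase the small-scale structure.
modelled = [100, 140, 90, 120, 80, 130, 95, 115, 85, 100]
for days in (1, 3, 6):
    print(days, round(small_scale_structure(mix(modelled, days)), 2))
# The mix-down time is the trajectory length whose mixed field best matches
# the small-scale structure seen in the aircraft observations.
```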


2010 ◽  
Vol 25 (3) ◽  
pp. 885-894 ◽  
Author(s):  
José Roberto Rozante ◽  
Demerval Soares Moreira ◽  
Luis Gustavo G. de Goncalves ◽  
Daniel A. Vila

Abstract The measure of atmospheric model performance is highly dependent on the quality of the observations used in the evaluation process. In the particular case of operational forecast centers, large-scale datasets must be made available in a timely manner for continuous assessment of model results. Numerical models and surface observations usually work at distinct spatial scales (i.e., areal average in a regular grid versus point measurements), making direct comparison difficult. Alternatively, interpolation methods are employed for mapping observational data to regular grids and vice versa. A new technique (hereafter called MERGE) to combine Tropical Rainfall Measuring Mission (TRMM) satellite precipitation estimates with surface observations over the South American continent is proposed and its performance is evaluated for the 2007 summer and winter seasons. Two different approaches for the evaluation of the performance of this product against observations were tested: a cross-validation subsampling of the entire continent and another subsampling of only areas with sparse observations. Results show that over areas with a high density of observations, the MERGE technique’s performance is equivalent to that of simply averaging the stations within the grid boxes. However, over areas with sparse observations, MERGE shows superior results.
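A gauge-satellite merge can be caricatured on a one-dimensional grid. This is not the MERGE algorithm itself, just an illustration of nudging satellite estimates toward nearby gauge-minus-satellite differences; the field, gauge positions, and radius are invented:

```python
def merge_precip(satellite, gauges, radius=1):
    """Toy gauge-satellite merge on a 1-D grid: where gauges lie within
    `radius` grid points, nudge the satellite estimate by the mean local
    gauge-minus-satellite difference; elsewhere keep the satellite value."""
    merged = []
    for i, sat in enumerate(satellite):
        diffs = [g - satellite[j]
                 for j, g in gauges.items() if abs(j - i) <= radius]
        merged.append(sat + sum(diffs) / len(diffs) if diffs else sat)
    return merged

# Hypothetical satellite precipitation field (mm) that runs ~2 mm high
# where the two gauges (grid index -> observed value) can verify it.
satellite = [10.0, 12.0, 11.0, 9.0, 13.0, 12.0]
gauges = {1: 10.0, 4: 11.0}
print(merge_precip(satellite, gauges))
```

Over dense networks a scheme like this collapses toward averaging the gauges within a grid box, while over sparse networks the satellite field fills the gaps, loosely mirroring where the abstract finds MERGE equivalent and where it finds it superior.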

