HEPPA III intercomparison experiment on electron precipitation impacts: Estimated ionization rates during a geomagnetic active period in April 2010

Author(s):  
Hilde Nesse Tyssøy ◽  
Miriam Sinnhuber ◽  
Timo Asikainen ◽  
Stefan Bender ◽  
Mark A. Clilverd ◽  
...  

Precipitating auroral and radiation belt electrons are considered an important part of the natural forcing of the climate system. Recent studies suggest that this forcing is underestimated in current chemistry-climate models. The HEPPA III intercomparison experiment is a collective effort to address this point. Here, eight different estimates of medium energy electron (MEE) (>30 keV) ionization rates are assessed during a geomagnetically active period in April 2010. The objective is to understand the potential uncertainty related to the MEE energy input. The ionization rates are all based on the Medium Energy Proton and Electron Detector (MEPED) on board the NOAA/POES and EUMETSAT/MetOp spacecraft series. However, different data handling, ionization rate calculations, and background atmospheres result in a wide range of mesospheric electron ionization rates. Although the eight data sets agree well in terms of the temporal variability, they differ by about an order of magnitude in ionization rate strength during both geomagnetically quiet and disturbed periods. The largest spread is found in the aftermath of the geomagnetic activity. Furthermore, governed by different energy limits, the atmospheric penetration depth varies, and some differences related to latitudinal coverage are also evident. The mesospheric NO densities simulated with the Whole Atmosphere Community Climate Model driven by the highest and lowest ionization rates differ by more than a factor of eight. In a follow-up study, the atmospheric responses are simulated in four chemistry-climate models and compared to satellite observations, considering both the model structure and the ionization forcing.


2007 ◽  
Vol 88 (3) ◽  
pp. 375-384 ◽  
Author(s):  
E. S. Takle ◽  
J. Roads ◽  
B. Rockel ◽  
W. J. Gutowski ◽  
R. W. Arritt ◽  
...  

A new approach, called transferability intercomparisons, is described for advancing both understanding and modeling of the global water cycle and energy budget. Under this approach, individual regional climate models perform simulations with all modeling parameters and parameterizations held constant over a specific period on several prescribed domains representing different climatic regions. The transferability framework goes beyond previous regional climate model intercomparisons to provide a global method for testing and improving model parameterizations by constraining the simulations within analyzed boundaries for several domains. Transferability intercomparisons expose the limits of our current regional modeling capacity by examining model accuracy on a wide range of climate conditions and realizations. Intercomparison of these individual model experiments provides a means for evaluating strengths and weaknesses of models outside their “home domains” (domain of development and testing). Reference sites that are conducting coordinated measurements within the continental-scale experiments of the Global Energy and Water Cycle Experiment (GEWEX) Hydrometeorology Panel provide data for evaluation of model abilities to simulate specific features of the water and energy cycles. A systematic intercomparison across models and domains more clearly exposes collective biases in the modeling process. By isolating particular regions and processes, regional model transferability intercomparisons can more effectively explore the spatial and temporal heterogeneity of predictability. A general improvement of model ability to simulate diverse climates will provide more confidence that models used for future climate scenarios might be able to simulate conditions on a particular domain that are beyond the range of previously observed climates.



2021 ◽  
Vol 14 (8) ◽  
pp. 4865-4890
Author(s):  
Peter Uhe ◽  
Daniel Mitchell ◽  
Paul D. Bates ◽  
Nans Addor ◽  
Jeff Neal ◽  
...  

Abstract. Riverine flood hazard is the consequence of meteorological drivers, primarily precipitation, hydrological processes and the interaction of floodwaters with the floodplain landscape. Modeling this can be particularly challenging because of the multiple steps and differing spatial scales involved in the varying processes. As the climate modeling community increases its focus on the risks associated with climate change, it is important to translate the meteorological drivers into relevant hazard estimates. This is especially important for the climate attribution and climate projection communities. Current climate change assessments of flood risk typically neglect key processes, and instead of explicitly modeling flood inundation, they commonly use precipitation or river flow as proxies for flood hazard. This is due to the complexity and uncertainties of model cascades and the computational cost of flood inundation modeling. Here, we lay out a clear methodology for taking meteorological drivers, e.g., from observations or climate models, through to high-resolution (∼90 m) river flooding (fluvial) hazards. Thus, this framework is designed to be an accessible, computationally efficient tool using freely available data to enable greater uptake of this type of modeling. The meteorological inputs (precipitation and air temperature) are transformed through a series of modeling steps to yield, in turn, surface runoff, river flow, and flood inundation. We explore uncertainties at different modeling steps. The flood inundation estimates can then be related to impacts felt at community and household levels to determine exposure and risks from flood events. The approach uses global data sets and thus can be applied anywhere in the world, but we use the Brahmaputra River in Bangladesh as a case study in order to demonstrate the necessary steps in our hazard framework.
This framework is designed to be driven by meteorology from observational data sets or climate model output. In this study, only observations are used to drive the models, so climate changes are not assessed. However, by comparing current and future simulated climates, this framework can also be used to assess impacts of climate change.
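One intermediate step of a cascade like the one described above can be sketched in code. The example below routes catchment runoff to river flow with a single linear reservoir, a textbook hydrological routing scheme used here purely for illustration; the runoff series, residence time, and time step are invented example values, not the framework's actual configuration.

```python
# Illustrative sketch (not the paper's actual model chain): routing
# catchment runoff to river flow with a single linear reservoir,
# dS/dt = R - Q with Q = S / k, discretized with an explicit Euler step.

def linear_reservoir(runoff_mm, k_hours=24.0, dt_hours=1.0):
    """Convert a runoff time series into a smoothed discharge series."""
    storage = 0.0
    discharge = []
    for r in runoff_mm:
        q = storage / k_hours          # outflow proportional to storage
        storage += (r - q) * dt_hours  # water-balance update
        discharge.append(q)
    return discharge

rain_pulse = [5.0] * 6 + [0.0] * 42   # 6 h of runoff, then recession
q = linear_reservoir(rain_pulse)
print(round(max(q), 3), q.index(max(q)))  # peak discharge lags the input pulse
```

The same pattern (input series in, transformed series out) applies to each link of the meteorology-to-inundation chain, which is what makes the uncertainty at each step separately explorable.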



Geosciences ◽  
2019 ◽  
Vol 9 (6) ◽  
pp. 255 ◽  
Author(s):  
Thomas J. Bracegirdle ◽  
Florence Colleoni ◽  
Nerilie J. Abram ◽  
Nancy A. N. Bertler ◽  
Daniel A. Dixon ◽  
...  

Quantitative estimates of future Antarctic climate change are derived from numerical global climate models. Evaluation of the reliability of climate model projections involves many lines of evidence on past performance combined with knowledge of the processes that need to be represented. Routine model evaluation is mainly based on the modern observational period, which started with the establishment of a network of Antarctic weather stations in 1957/58. This period is too short to evaluate many fundamental aspects of the Antarctic and Southern Ocean climate system, such as decadal-to-century time-scale climate variability and trends. To help address this gap, we present a new evaluation of potential ways in which long-term observational and paleo-proxy reconstructions may be used, with a particular focus on improving projections. A wide range of data sources and time periods is included, ranging from ship observations of the early 20th century to ice core records spanning hundreds to hundreds of thousands of years to sediment records dating back 34 million years. We conclude that paleo-proxy records and long-term observational datasets are an underused resource in terms of strategies for improving Antarctic climate projections for the 21st century and beyond. We identify priorities and suggest next steps for addressing this gap.



Climate ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 68 ◽  
Author(s):  
Flora Gofa ◽  
Anna Mamara ◽  
Manolis Anadranistakis ◽  
Helena Flocas

The creation of realistic gridded precipitation fields improves our understanding of the observed climate and is necessary for validating climate model output for a wide range of applications. The challenge in trying to represent the highly variable nature of precipitation is to overcome the lack of density of observations in both time and space. Data sets of mean monthly and annual precipitation were developed for Greece in gridded format at a resolution of 30 arcsec (∼800 m) based on data from 1971 to 2000. One hundred and fifty-seven surface stations from two different observation networks were used to cover a satisfactory range of elevations. Station data were homogenized and subjected to quality control to represent changes in meteorological conditions rather than changes in the conditions under which the observations were made. The Meteorological Interpolation based on Surface Homogenized Data Basis (MISH) interpolation method was used to develop data sets that reproduce, as closely as possible, the spatial climate patterns over the region of interest. The main geophysical factors considered for the interpolation of mean monthly precipitation fields were elevation, latitude, incoming solar irradiance, Euclidian distance from the coastline, and land-to-sea percentage. Low interpolation uncertainties for precipitation, estimated with the cross-validation method, provided confidence in the interpolation approach. The resulting high-resolution maps give an overall realistic representation of precipitation, especially in fall and winter, with a clear longitudinal dependence, precipitation decreasing from western to eastern continental Greece.
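The cross-validation step described above can be sketched as follows. This is a minimal illustration only: inverse-distance weighting stands in for the MISH method (whose actual formulation is more elaborate), and the station coordinates and precipitation values are invented.

```python
# Leave-one-out cross-validation of a spatial interpolator: each station
# is withheld in turn and predicted from the remaining stations; the
# withheld-station errors summarize the interpolation uncertainty.
import math

stations = [  # (x, y, mean annual precipitation in mm) -- made-up values
    (0.0, 0.0, 400.0), (1.0, 0.0, 500.0), (0.0, 1.0, 600.0),
    (1.0, 1.0, 700.0), (0.5, 0.5, 550.0),
]

def idw(x, y, data, power=2.0):
    """Inverse-distance-weighted estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, value in data:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value  # exactly at a station: return its value
        w = d ** -power
        num += w * value
        den += w
    return num / den

errors = []
for i, (x, y, obs) in enumerate(stations):
    rest = stations[:i] + stations[i + 1:]   # withhold station i
    errors.append(idw(x, y, rest) - obs)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(round(rmse, 1))  # root-mean-square cross-validation error, in mm
```

A low RMSE relative to the field's spatial variability is what justifies confidence in the gridded product; in practice the covariates listed above (elevation, distance from coastline, etc.) would enter the interpolator as well.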



2020 ◽  
Author(s):  
Maximilian Gelbrecht ◽  
Jürgen Kurths ◽  
Frank Hellmann

Many high-dimensional complex systems such as climate models exhibit an enormously complex landscape of possible asymptotic states. On most occasions these are challenging to analyse with traditional bifurcation analysis methods. Often, one is also more broadly interested in classes of asymptotic states. Here, we present a novel numerical approach for analysing such high-dimensional multistable complex systems: Monte Carlo Basin Bifurcation Analysis (MCBB). Based on random sampling and clustering methods, we identify the types of dynamic regimes with the largest basins of attraction and track how the volumes of these basins change with the system parameters. To do this, suitable, easy-to-compute statistics of trajectories with randomly generated initial conditions and parameters are clustered by an algorithm such as DBSCAN. Due to the modular and flexible nature of the method, it has a wide range of possible applications. While oscillator networks were initially one of the main applications of this method, here we present an analysis of a simple conceptual climate model set up by coupling an energy balance model to the Lorenz96 system. The method is available as a package for the Julia language.
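The sample-reduce-cluster loop described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' Julia package: a one-dimensional double well replaces the coupled climate model, the trajectory statistic is simply the final state, and a toy grouping rule stands in for DBSCAN.

```python
# Minimal sketch of the MCBB idea: sample random initial conditions of a
# bistable toy system (double well, dx/dt = x - x**3), reduce each
# trajectory to a statistic, cluster the statistics, and report the
# relative basin volumes of the resulting classes.
import random

def trajectory_statistic(x0, steps=3000, dt=0.01):
    """Euler-integrate dx/dt = x - x**3 and return the final state."""
    x = x0
    for _ in range(steps):
        x += (x - x**3) * dt
    return x

random.seed(0)
stats = [trajectory_statistic(random.uniform(-2.0, 2.0)) for _ in range(500)]

# Toy one-dimensional clustering (a stand-in for DBSCAN): a statistic
# joins the first cluster whose founding member lies within eps of it.
eps, clusters = 0.5, []
for s in stats:
    for c in clusters:
        if abs(s - c[0]) < eps:
            c.append(s)
            break
    else:
        clusters.append([s])

volumes = sorted(len(c) / len(stats) for c in clusters)
print(len(clusters), [round(v, 2) for v in volumes])  # two attractors, near x = -1 and x = +1
```

Sweeping a system parameter and repeating this loop is what turns the cluster sizes into the basin-volume-versus-parameter curves that MCBB tracks.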



2017 ◽  
Vol 98 (1) ◽  
pp. 79-93 ◽  
Author(s):  
Elizabeth J. Kendon ◽  
Nikolina Ban ◽  
Nigel M. Roberts ◽  
Hayley J. Fowler ◽  
Malcolm J. Roberts ◽  
...  

Abstract Regional climate projections are used in a wide range of impact studies, from assessing future flood risk to climate change impacts on food and energy production. These model projections are typically at 12–50-km resolution, providing valuable regional detail but with inherent limitations, in part because of the need to parameterize convection. The first climate change experiments at convection-permitting resolution (kilometer-scale grid spacing) are now available for the United Kingdom; the Alps; Germany; Sydney, Australia; and the western United States. These models give a more realistic representation of convection and are better able to simulate hourly precipitation characteristics that are poorly represented in coarser-resolution climate models. Here we examine these new experiments to determine whether future midlatitude precipitation projections are robust from coarse to higher resolutions, with implications also for the tropics. We find that the explicit representation of the convective storms themselves, only possible in convection-permitting models, is necessary for capturing changes in the intensity and duration of summertime rain on daily and shorter time scales. Other aspects of rainfall change, including changes in seasonal mean precipitation and event occurrence, appear robust across resolutions, and therefore coarse-resolution regional climate models are likely to provide reliable future projections, provided that large-scale changes from the global climate model are reliable. The improved representation of convective storms also has implications for projections of wind, hail, fog, and lightning. We identify a number of impact areas, especially flooding, but also transport and wind energy, for which very high-resolution models may be needed for reliable future assessments.



2015 ◽  
Vol 56 (70) ◽  
pp. 175-183 ◽  
Author(s):  
Andrew Zammit-Mangion ◽  
Jonathan L. Bamber ◽  
Nana W. Schoen ◽  
Jonathan C. Rougier

Abstract. Combinations of various numerical models and datasets with diverse observation characteristics have been used to assess the mass evolution of ice sheets. As a consequence, a wide range of estimates have been produced using markedly different methodologies, data, approximation methods and model assumptions. Current attempts to reconcile these estimates using simple combination methods are unsatisfactory, as common sources of errors across different methodologies may not be accurately quantified (e.g. systematic biases in models). Here we provide a general approach which deals with this issue by considering all data sources simultaneously, and, crucially, by reducing the dependence on numerical models. The methodology is based on exploiting the different space–time characteristics of the relevant ice-sheet processes, and using statistical smoothing methods to establish the causes of the observed change. In omitting direct dependence on numerical models, the methodology provides a novel means for assessing glacio-isostatic adjustment and climate models alike, using remote-sensing datasets. This is particularly advantageous in Antarctica, where in situ measurements are difficult to obtain. We illustrate the methodology by using it to infer Antarctica’s mass trend from 2003 to 2009 and produce surface mass-balance anomaly estimates to validate the RACMO2.1 regional climate model.



2001 ◽  
Vol 442 ◽  
pp. 267-291 ◽  
Author(s):  
MICHAEL E. BARRY ◽  
GREGORY N. IVEY ◽  
KRAIG B. WINTERS ◽  
JÖRG IMBERGER

Linearly stratified salt solutions of different Prandtl number were subjected to turbulent stirring by a horizontally oscillating vertical grid in a closed laboratory system. The experimental set-up allowed the independent direct measurement of a root mean square turbulent lengthscale Lt, turbulent diffusivity for mass Kρ, rate of dissipation of turbulent kinetic energy ε, buoyancy frequency N and viscosity ν, as time- and volume-averaged quantities. The behaviour of both Lt and Kρ was characterized over a wide range of the turbulence intensity measure ε/νN², and two regimes were identified. In the more energetic of these regimes (Regime E, where 300 < ε/νN² < 10⁵), Lt was found to be a function of ν, κ and N, whilst Kρ was a function of ν, κ and (ε/νN²)^(1/3). From these expressions for Lt and Kρ, a scaling relation for the root mean square turbulent velocity scale Ut was derived, and this relationship showed good agreement with direct measurements from other data sets. In the weaker turbulence regime (Regime W, where 10 < ε/νN² < 300), Kρ was a function of ν, κ and ε/νN². For 10 < ε/νN² < 1000, our directly measured diffusivities Kρ are approximately a factor of 2 different from the diffusivity predicted by the model of Osborn (1980). For ε/νN² > 1000, our measured diffusivities diverge from the model prediction. For example, at ε/νN² ≈ 10⁴ there is at least an order of magnitude difference between the measured and predicted diffusivities.
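The divergence from the Osborn (1980) model quoted above follows from the two scalings involved: the Osborn model predicts a diffusivity proportional to the turbulence intensity ε/νN² (with a mixing coefficient commonly taken as 0.2), while the Regime E fit has Kρ varying as the one-third power of that intensity. Whatever the prefactors, the ratio of the two therefore grows as the 2/3 power of the intensity, as this sketch (illustrative numbers only, not the experimental data) shows.

```python
# Ratio of a linear-in-intensity prediction (Osborn 1980, K ~ Gamma*I
# with I = eps/(nu*N^2)) to a one-third-power fit (Regime E, K ~ I**(1/3)).
# Independently of the prefactors, the ratio grows as I**(2/3).

def divergence_factor(i_low, i_high):
    """Growth of (linear scaling)/(one-third-power scaling) from i_low to i_high."""
    return (i_high / i_low) ** (2.0 / 3.0)

# Across one decade of turbulence intensity the two scalings separate by
# a further factor of 10**(2/3):
print(round(divergence_factor(1.0e3, 1.0e4), 2))  # ≈ 4.64
```

So even if the two expressions agree at moderate intensity, an order-of-magnitude gap opens within roughly a decade and a half of ε/νN², consistent with the divergence reported above at ε/νN² ≈ 10⁴.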



2020 ◽  
Vol 33 (23) ◽  
pp. 10383-10402
Author(s):  
Giuliana Pallotta ◽  
Benjamin D. Santer

Abstract. Studies seeking to identify a human-caused global warming signal generally rely on climate model estimates of the “noise” of intrinsic natural variability. Assessing the reliability of these noise estimates is of critical importance. We evaluate here the statistical significance of differences between climate model and observational natural variability spectra for global-mean mid- to upper-tropospheric temperature (TMT). We use TMT information from satellites and large multimodel ensembles of forced and unforced simulations. Our main goal is to explore the sensitivity of model-versus-data spectral comparisons to a wide range of subjective decisions. These include the choice of satellite and climate model TMT datasets, the method for separating signal and noise, the frequency range considered, and the statistical model used to represent observed natural variability. Of particular interest is the amplitude of the interdecadal noise against which an anthropogenic tropospheric warming signal must be detected. We find that on time scales of 5–20 years, observed TMT variability is (on average) overestimated by the last two generations of climate models participating in the Coupled Model Intercomparison Project. This result is relatively insensitive to different plausible analyst choices, enhancing confidence in previous claims of detectable anthropogenic warming of the troposphere and indicating that these claims may be conservative. A further key finding is that two commonly used statistical models of short-term and long-term memory have deficiencies in their ability to capture the complex shape of observed TMT spectra.



2020 ◽  
Author(s):  
Hilde Nesse Tyssøy ◽  
Miriam Sinnhuber ◽  
Timo Asikainen ◽  
Max van de Kamp ◽  
Joshua Pettit ◽  
...  

Quantifying the ionization rates due to medium energy electron (MEE) precipitation into the mesosphere has long been an outstanding question. It is key to understanding the total effect of particle precipitation on the atmosphere. The first MEE ionization rate estimate was provided by the Atmospheric Ionization Module Osnabrück (AIMOS) in 2009. It applies electron measurements by the 0° electron detector of the MEPED instrument on board the NOAA/POES satellites together with geomagnetic indices. Since then, several other efforts to estimate MEE precipitation and the associated ionization rates have been made, taking into account e.g. cross-contamination by low-energy protons: Full Range Energy Electron Spectra (FRES) and ISSI-19. Recently, a parameterization based on the same electron data, scaled by the geomagnetic index Ap, has been included in the solar-driven particle forcing recommended for the Coupled Model Intercomparison Project 6 (CMIP6). Another parameterization, AISstorm, aims to resolve substorm activity and applies the SML index. Further, three different methods to construct the total bounce loss cone fluxes based on both MEPED detectors have been suggested, by the University of Colorado, the University of Oulu, and the University of Bergen. In total, the space physics community offers a wide range of mesospheric ionization rates, all based on the MEPED electron measurements, to be used in studies of the subsequent chemical-dynamical impact on the atmosphere.
Here we present a review of eight different estimates of energetic electron fluxes and the ionization rates during an event in April 2010. The objective of this comparison is to understand the potential uncertainty related to the MEE energy input in order to assess its subsequent impact on the atmosphere. We find that although the different parameterizations agree well in terms of the temporal variability, they differ by orders of magnitude in ionization strength during both geomagnetically quiet and disturbed periods, and show some inconsistency in terms of latitudinal coverage.


